Documentation ¶
Overview ¶
Ristretto is a fast, fixed size, in-memory cache with a dual focus on throughput and hit ratio performance. You can easily add Ristretto to an existing system and keep the most valuable data where you need it.
Index ¶
- type Cache
- func (c *Cache[K, V]) Clear()
- func (c *Cache[K, V]) Close()
- func (c *Cache[K, V]) Del(key K)
- func (c *Cache[K, V]) Get(key K) (V, bool)
- func (c *Cache[K, V]) GetTTL(key K) (time.Duration, bool)
- func (c *Cache[K, V]) MaxCost() int64
- func (c *Cache[K, V]) Set(key K, value V, cost int64) bool
- func (c *Cache[K, V]) SetWithTTL(key K, value V, cost int64, ttl time.Duration) bool
- func (c *Cache[K, V]) UpdateMaxCost(maxCost int64)
- func (c *Cache[K, V]) Wait()
- type Config
- type Item
- type Key
- type Metrics
- func (p *Metrics) Clear()
- func (p *Metrics) CostAdded() uint64
- func (p *Metrics) CostEvicted() uint64
- func (p *Metrics) GetsDropped() uint64
- func (p *Metrics) GetsKept() uint64
- func (p *Metrics) Hits() uint64
- func (p *Metrics) KeysAdded() uint64
- func (p *Metrics) KeysEvicted() uint64
- func (p *Metrics) KeysUpdated() uint64
- func (p *Metrics) LifeExpectancySeconds() *z.HistogramData
- func (p *Metrics) Misses() uint64
- func (p *Metrics) Ratio() float64
- func (p *Metrics) SetsDropped() uint64
- func (p *Metrics) SetsRejected() uint64
- func (p *Metrics) String() string
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Cache ¶
type Cache[K Key, V any] struct {
	// Metrics contains a running log of important statistics like hits, misses,
	// and dropped items.
	Metrics *Metrics
	// contains filtered or unexported fields
}
Cache is a thread-safe implementation of a hashmap with a TinyLFU admission policy and a Sampled LFU eviction policy. You can use the same Cache instance from as many goroutines as you want.
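A minimal usage sketch, following the library's README pattern. The import path assumes the v2 module (`github.com/dgraph-io/ristretto/v2`); adjust it to the version you actually depend on:

```go
package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/v2"
)

func main() {
	cache, err := ristretto.NewCache(&ristretto.Config[string, string]{
		NumCounters: 1e7,     // number of keys to track frequency of (10x capacity)
		MaxCost:     1 << 30, // maximum cost of cache (1GB)
		BufferItems: 64,      // number of keys per Get buffer
	})
	if err != nil {
		panic(err)
	}
	defer cache.Close()

	// Set a value with a cost of 1.
	cache.Set("key", "value", 1)

	// Wait for the value to pass through internal buffers;
	// without this, an immediate Get may miss.
	cache.Wait()

	value, found := cache.Get("key")
	if !found {
		panic("missing value")
	}
	fmt.Println(value)
}
```

Note the Wait call: Set is buffered and asynchronous, so a Get issued immediately after Set is not guaranteed to see the value otherwise.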
func (*Cache[K, V]) Clear ¶
func (c *Cache[K, V]) Clear()
Clear empties the hashmap and zeroes all cachePolicy counters. Note that this is not an atomic operation; it assumes that no Set/Get calls are occurring concurrently while Clear runs.
func (*Cache[K, V]) Close ¶
func (c *Cache[K, V]) Close()
Close stops all goroutines and closes all channels.
func (*Cache[K, V]) Del ¶
func (c *Cache[K, V]) Del(key K)
Del deletes the key-value item from the cache if it exists.
func (*Cache[K, V]) Get ¶
Get returns the value (if any) and a boolean indicating whether the value was found. The value can be the zero value of V (for example, nil) while the boolean is true. Get will not return expired items.
func (*Cache[K, V]) GetTTL ¶
GetTTL returns the TTL for the specified key and a bool that is true if the item was found and is not expired.
func (*Cache[K, V]) Set ¶
Set attempts to add the key-value item to the cache. If it returns false, the Set was dropped and the key-value item isn't added to the cache. If it returns true, there's still a chance it could be dropped by the policy if it's determined that the key-value item isn't worth keeping; otherwise the item will be added and other items will be evicted in order to make room.

To evaluate an item's cost dynamically using the Config.Cost function, set the cost parameter to 0; Cost will then be run when needed to determine the item's true cost.

Set stores the value of type V as-is. If V is a pointer type, it is fine to update the memory the pointer refers to; replacing the pointer itself will not be reflected in the cache. Be careful when using slice types as the value type V: calling `append` may reallocate the underlying array, and that change will not be reflected in the cache.
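The slice caveat can be seen without the cache at all: after a reallocating append, a previously stored slice header still points at the old backing array (plain Go, no Ristretto involved):

```go
package main

import "fmt"

func main() {
	s := make([]int, 1, 1) // len 1, cap 1: the next append must reallocate
	s[0] = 1

	cached := s // stands in for the slice header stored inside the cache

	s = append(s, 2) // reallocates; s now points at a new backing array
	s[0] = 99        // updates the new array only

	fmt.Println(cached[0]) // still 1: the "cached" header never saw the change
}
```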
func (*Cache[K, V]) SetWithTTL ¶
SetWithTTL works like Set but adds a key-value pair to the cache that will expire after the specified TTL (time to live) has passed. A zero value means the value never expires, which is identical to calling Set. A negative value is a no-op and the value is discarded.
See Set for more information.
func (*Cache[K, V]) UpdateMaxCost ¶
UpdateMaxCost updates the maxCost of an existing cache.
type Config ¶
type Config[K Key, V any] struct {
	// NumCounters determines the number of counters (keys) to keep that hold
	// access frequency information. It's generally a good idea to have more
	// counters than the max cache capacity, as this will improve eviction
	// accuracy and subsequent hit ratios.
	//
	// For example, if you expect your cache to hold 1,000,000 items when full,
	// NumCounters should be 10,000,000 (10x). Each counter takes up roughly
	// 3 bytes (4 bits for each counter * 4 copies plus about a byte per
	// counter for the bloom filter). Note that the number of counters is
	// internally rounded up to the nearest power of 2, so the space usage
	// may be a little larger than 3 bytes * NumCounters.
	//
	// We've seen good performance in setting this to 10x the number of items
	// you expect to keep in the cache when full.
	NumCounters int64

	// MaxCost is the cost budget that drives eviction decisions. For example,
	// if MaxCost is 100 and a new item with a cost of 1 raises the total
	// cache cost to 101, one item will be evicted.
	//
	// MaxCost can be considered the cache capacity, in whatever units you
	// choose to use.
	//
	// For example, if you want the cache to have a max capacity of 100MB, you
	// would set MaxCost to 100,000,000 and pass an item's number of bytes as
	// the `cost` parameter for calls to Set. If new items are accepted, the
	// eviction process will make room for them without overflowing MaxCost.
	//
	// MaxCost can be anything, as long as it matches how you assign cost
	// values when calling Set.
	MaxCost int64

	// BufferItems determines the size of Get buffers.
	//
	// Unless you have a rare use case, using `64` as the BufferItems value
	// results in good performance.
	//
	// If you see Get performance degrading under heavy contention (you
	// shouldn't), try increasing this value in increments of 64. This is a
	// fine-tuning mechanism and you probably won't need to touch it.
	BufferItems int64

	// Metrics is true when you want a variety of stats about the cache.
	// There is some overhead to keeping statistics, so you should only set
	// this flag to true when testing or when throughput performance isn't a
	// major factor.
	Metrics bool

	// OnEvict is called for every eviction with the evicted item.
	OnEvict func(item *Item[V])

	// OnReject is called for every rejection done via the policy.
	OnReject func(item *Item[V])

	// OnExit is called whenever a value is removed from the cache. This can
	// be used for manual memory deallocation. It is also called on eviction
	// and on rejection of the value.
	OnExit func(val V)

	// KeyToHash is used to customize the key hashing algorithm. Each key is
	// hashed using the provided function. If KeyToHash is not set, the
	// default function is used.
	//
	// Ristretto has a variety of defaults depending on the underlying
	// interface type (https://github.com/dgraph-io/ristretto/blob/master/z/z.go#L19-L41).
	//
	// Note that if you want 128-bit hashes you should use both values in the
	// function's return. If you want 64-bit hashes, return the hash in the
	// first uint64 and 0 for the second.
	KeyToHash func(key K) (uint64, uint64)

	// Cost evaluates a value and outputs a corresponding cost. This function
	// is run after Set is called for a new item, or when an item is updated
	// with a cost param of 0.
	//
	// Cost is an optional function you can pass to the Config in order to
	// evaluate item cost at runtime, and only when the Set call isn't going
	// to be dropped. This is useful if calculating item cost is particularly
	// expensive and you don't want to waste time on items that will be
	// dropped anyway.
	//
	// To signal to Ristretto that you'd like to use this Cost function:
	// 1. Set the Cost field to a non-nil function.
	// 2. When calling Set for new items or item updates, use a `cost` of 0.
	Cost func(value V) int64

	// IgnoreInternalCost, when set to true, tells the cache to ignore the
	// cost of internally storing the value. This is useful when the cost
	// passed to Set is not in bytes. Keep in mind that setting this to true
	// will increase memory usage.
	IgnoreInternalCost bool

	// TtlTickerDurationInSec sets the interval, in seconds, of the ticker
	// used to clean up keys on TTL expiry.
	TtlTickerDurationInSec int64
}
Config is passed to NewCache for creating new Cache instances.
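The NumCounters sizing guidance can be checked with a little arithmetic. A sketch of the power-of-2 rounding and the ~3 bytes-per-counter estimate described above; the exact internal layout may differ:

```go
package main

import (
	"fmt"
	"math/bits"
)

// nextPowerOf2 rounds n up to the nearest power of 2, as the doc says
// Ristretto does internally with NumCounters.
func nextPowerOf2(n uint64) uint64 {
	if n == 0 {
		return 1
	}
	return 1 << bits.Len64(n-1)
}

func main() {
	const expectedItems = 1_000_000
	numCounters := uint64(expectedItems * 10) // the 10x rule of thumb

	rounded := nextPowerOf2(numCounters)
	approxBytes := rounded * 3 // ~3 bytes per counter, per the doc

	fmt.Println(rounded)          // 16777216
	fmt.Println(approxBytes >> 20) // ~48 (MiB)
}
```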
type Item ¶
type Item[V any] struct {
	Key        uint64
	Conflict   uint64
	Value      V
	Cost       int64
	Expiration time.Time
	// contains filtered or unexported fields
}
Item is a full representation of what's stored in the cache for each key-value pair.
type Metrics ¶
type Metrics struct {
// contains filtered or unexported fields
}
Metrics is a snapshot of performance statistics for the lifetime of a cache instance.
func (*Metrics) CostAdded ¶
CostAdded is the sum of costs that have been added (successful Set calls).
func (*Metrics) CostEvicted ¶
CostEvicted is the sum of all costs that have been evicted.
func (*Metrics) GetsDropped ¶
GetsDropped is the number of Get counter increments that are dropped internally.
func (*Metrics) Hits ¶
Hits is the number of Get calls where a value was found for the corresponding key.
func (*Metrics) KeysAdded ¶
KeysAdded is the total number of Set calls where a new key-value item was added.
func (*Metrics) KeysEvicted ¶
KeysEvicted is the total number of keys evicted.
func (*Metrics) KeysUpdated ¶
KeysUpdated is the total number of Set calls where the value was updated.
func (*Metrics) LifeExpectancySeconds ¶
func (p *Metrics) LifeExpectancySeconds() *z.HistogramData
func (*Metrics) Misses ¶
Misses is the number of Get calls where a value was not found for the corresponding key.
func (*Metrics) Ratio ¶
Ratio is the number of Hits over all accesses (Hits + Misses), i.e. the fraction of Get calls that found a value.
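The formula is simply hits / (hits + misses); a quick sketch (the real method reads the cache's internal counters):

```go
package main

import "fmt"

// ratio mirrors the documented formula: hits over all accesses.
// It returns 0 when there have been no accesses.
func ratio(hits, misses uint64) float64 {
	if hits+misses == 0 {
		return 0
	}
	return float64(hits) / float64(hits+misses)
}

func main() {
	fmt.Println(ratio(75, 25)) // 0.75
	fmt.Println(ratio(0, 0))   // 0
}
```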
func (*Metrics) SetsDropped ¶
SetsDropped is the number of Set calls that don't make it into internal buffers (due to contention or some other reason).
func (*Metrics) SetsRejected ¶
SetsRejected is the number of Set calls rejected by the policy (TinyLFU).