Documentation ¶
Overview ¶
Ristretto is a fast, fixed size, in-memory cache with a dual focus on throughput and hit ratio performance. You can easily add Ristretto to an existing system and keep the most valuable data where you need it.
This package includes multiple probabilistic data structures needed for admission/eviction metadata. Most are Counting Bloom Filter variations, but a caching-specific feature that is also required is a "freshness" mechanism, which essentially acts as a decaying "lifetime" for frequency counts. This freshness mechanism was described in the original TinyLFU paper [1], but other mechanisms may be better suited for certain data distributions.
Index ¶
- func IsEmpty(ch <-chan setEvent) bool
- type Cache
- func (c *Cache) Clear()
- func (c *Cache) Close()
- func (c *Cache) Del(key uint64) interface{}
- func (c *Cache) Get(key uint64) (interface{}, bool)
- func (c *Cache) GetOrCompute(key uint64, f func() (interface{}, int64, error)) (interface{}, error)
- func (c *Cache) Set(key uint64, value interface{}, cost int64)
- func (c *Cache) SetNewMaxCost(newMaxCost int64)
- type Config
- type Metrics
- func (p *Metrics) Clear()
- func (p *Metrics) CostAdded() uint64
- func (p *Metrics) CostEvicted() uint64
- func (p *Metrics) GetsDropped() uint64
- func (p *Metrics) GetsKept() uint64
- func (p *Metrics) Hits() uint64
- func (p *Metrics) KeysAdded() uint64
- func (p *Metrics) KeysEvicted() uint64
- func (p *Metrics) KeysUpdated() uint64
- func (p *Metrics) Misses() uint64
- func (p *Metrics) Ratio() float64
- func (p *Metrics) SetsDropped() uint64
- func (p *Metrics) SetsRejected() uint64
- func (p *Metrics) String() string
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
Types ¶
type Cache ¶
type Cache struct {
	// Metrics contains a running log of important statistics like hits, misses,
	// and dropped items
	Metrics *Metrics
	// contains filtered or unexported fields
}
Cache is a thread-safe implementation of a hashmap with a TinyLFU admission policy and a Sampled LFU eviction policy. You can use the same Cache instance from as many goroutines as you want.
func (*Cache) Clear ¶
func (c *Cache) Clear()
Clear empties the hashmap and zeroes all policy counters. Note that this is not an atomic operation (but that shouldn't be a problem as it's assumed that Set/Get calls won't be occurring until after this).
func (*Cache) Get ¶
Get returns the value (if any) and a boolean representing whether the value was found or not. The value can be nil and the boolean can be true at the same time.
func (*Cache) GetOrCompute ¶
GetOrCompute returns the value of key. If there is no such key, it computes the value using the factory function `f`. For concurrent calls on the same key, the factory function will be called only once.
func (*Cache) Set ¶
Set attempts to add the key-value item to the cache. There's still a chance it could be dropped by the policy if it's determined that the key-value item isn't worth keeping, but otherwise the item will be added and other items will be evicted in order to make room.
To dynamically evaluate an item's cost using the Config.Cost function, set the cost parameter to 0; Cost will be run when needed in order to find the item's true cost.
func (*Cache) SetNewMaxCost ¶
SetNewMaxCost sets the cache's maxCost to newMaxCost.
type Config ¶
type Config struct {
	// NumCounters determines the number of counters (keys) to keep that hold
	// access frequency information. It's generally a good idea to have more
	// counters than the max cache capacity, as this will improve eviction
	// accuracy and subsequent hit ratios.
	//
	// For example, if you expect your cache to hold 1,000,000 items when full,
	// NumCounters should be 10,000,000 (10x). Each counter takes up 4 bits, so
	// keeping 10,000,000 counters would require 5MB of memory.
	NumCounters int64
	// MaxCost can be considered as the cache capacity, in whatever units you
	// choose to use.
	//
	// For example, if you want the cache to have a max capacity of 100MB, you
	// would set MaxCost to 100,000,000 and pass an item's number of bytes as
	// the `cost` parameter for calls to Set. If new items are accepted, the
	// eviction process will take care of making room for the new item and not
	// overflowing the MaxCost value.
	MaxCost int64
	// BufferItems determines the size of Get buffers.
	//
	// Unless you have a rare use case, using `64` as the BufferItems value
	// results in good performance.
	BufferItems int64
	// Metrics determines whether cache statistics are kept during the cache's
	// lifetime. There *is* some overhead to keeping statistics, so you should
	// only set this flag to true when testing or throughput performance isn't a
	// major factor.
	Metrics bool
	// OnEvict is called for every eviction and passes the hashed key and
	// value to the function.
	OnEvict func(key uint64, value interface{})
	// Cost evaluates a value and outputs a corresponding cost. This function
	// is run after Set is called for a new item or an item update with a cost
	// param of 0.
	Cost func(value interface{}) int64
}
Config is passed to NewCache for creating new Cache instances.
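A sketch of constructing a cache with this Config, following the sizing advice in the field comments. The `NewCache(config *Config) (*Cache, error)` signature is an assumption inferred from this listing, not verified here:

```go
// Assumed: NewCache(config *Config) (*Cache, error), per the text above.
cache, err := NewCache(&Config{
	NumCounters: 1e7,     // 10x the expected 1,000,000 max items
	MaxCost:     1 << 30, // capacity of 1GB when cost is measured in bytes
	BufferItems: 64,      // recommended Get-buffer size
	Metrics:     true,    // enable hit/miss statistics
})
if err != nil {
	panic(err)
}
cache.Set(1, []byte("value"), 5) // cost in bytes
if v, ok := cache.Get(1); ok {
	_ = v
}
```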
type Metrics ¶
type Metrics struct {
// contains filtered or unexported fields
}
Metrics is a snapshot of performance statistics for the lifetime of a cache instance.
func (*Metrics) CostAdded ¶
CostAdded is the sum of costs that have been added (successful Set calls).
func (*Metrics) CostEvicted ¶
CostEvicted is the sum of all costs that have been evicted.
func (*Metrics) GetsDropped ¶
GetsDropped is the number of Get counter increments that are dropped internally.
func (*Metrics) Hits ¶
Hits is the number of Get calls where a value was found for the corresponding key.
func (*Metrics) KeysAdded ¶
KeysAdded is the total number of Set calls where a new key-value item was added.
func (*Metrics) KeysEvicted ¶
KeysEvicted is the total number of keys evicted.
func (*Metrics) KeysUpdated ¶
KeysUpdated is the total number of Set calls where the value was updated.
func (*Metrics) Misses ¶
Misses is the number of Get calls where a value was not found for the corresponding key.
func (*Metrics) Ratio ¶
Ratio is the number of Hits over all accesses (Hits + Misses). This is the fraction of successful Get calls.
func (*Metrics) SetsDropped ¶
SetsDropped is the number of Set calls that don't make it into internal buffers (due to contention or some other reason).
func (*Metrics) SetsRejected ¶
SetsRejected is the number of Set calls rejected by the policy (TinyLFU).