Documentation ¶
Index ¶
- type Age
- type Core
- type GenNo
- type Index
- type Key
- type Strategy
- type TrimFunc
- type UintCache
- func (p *UintCache) Allocated() int
- func (p *UintCache) Contains(key Key) bool
- func (p *UintCache) Delete(key Key) bool
- func (p *UintCache) Get(key Key) (Value, bool)
- func (p *UintCache) Occupied() int
- func (p *UintCache) Peek(key Key) (Value, bool)
- func (p *UintCache) Put(key Key, value Value) bool
- func (p *UintCache) Replace(key Key, value Value) bool
- type Value
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Core ¶
type Core struct {
// contains filtered or unexported fields
}
Core provides all data management functions for cache implementations. This implementation is focused on minimizing the number of links and the memory overhead per entry. Behavior is regulated by the provided Strategy. Core functions are thread-unsafe and must be protected by the cache implementation.
func NewCore ¶
NewCore creates a new core instance for the given Strategy. It provides behavior that combines LFU and LRU strategies:
* recent and the most frequently used entries are handled by the LFU strategy. Accuracy of the LFU logic depends on the number and size of generation pages.
* other entries are handled by the LRU strategy.
The cache implementation must provide a trim callback. The trim callback can be called during Add, Touch and Update operations. The provided instance can be copied, but only one copy can be used.
Core uses entry pages to track entries (all pages are of the same size) and generation pages (whose size can vary). Every cached entry gets a unique index that can be reused after deletion / expiration of entries.
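A minimal sketch of that contract is shown below. It assumes NewCore takes only a Strategy (the trim callback may in fact be part of the constructor's signature, which is not shown here) and that values are kept in a map keyed by entry Index; the names lockedCache and newLockedCache are illustrative, not part of this package.

import "sync"

// lockedCache sketches the locking discipline a cache built on Core needs.
type lockedCache struct {
    mu   sync.Mutex      // Core is thread-unsafe; every Core call must hold this mutex
    core Core            // the single usable copy of the Core
    data map[Index]Value // hypothetical storage addressed by entry Index
}

func newLockedCache(s Strategy) *lockedCache {
    c := &lockedCache{data: map[Index]Value{}}
    c.core = NewCore(s) // assumed argument list; the real constructor may also take the trim callback
    return c
}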
type Strategy ¶
type Strategy interface {
    // TrimOnEachAddition is called once on Core creation. When the result is true, CanTrimEntries will be invoked on every addition.
    // When it is false, cache capacity can be exceeded by up to an average size of a generation page.
    TrimOnEachAddition() bool

    // CurrentAge should provide time marks when this Strategy needs to use time-based retention.
    CurrentAge() Age

    // AllocationPageSize is called once on Core creation to provide the size of entry pages (number of items).
    // It is recommended for the cache implementation to use the same size for its paged storage.
    AllocationPageSize() int

    // NextGenerationCapacity is called on creation of every generation page.
    // Parameters are the length and capacity of the previous generation page, or (-1, -1) for the first page.
    // It returns a capacity for the new page and a flag indicating whether fencing must be applied. A new generation page will be created when capacity is exhausted.
    // A fence is a per-generation map used to detect and ignore multiple touches of the same entry. It reduces the cost of frequency tracking to once per generation
    // and is only relevant for heavy-load scenarios.
    NextGenerationCapacity(prevLen int, prevCap int) (pageSize int, useFence bool)

    // InitGenerationCapacity is an analogue of NextGenerationCapacity applied when the first page is created.
    InitGenerationCapacity() (pageSize int, useFence bool)

    // CanAdvanceGeneration is intended to provide custom logic for switching to a new generation page.
    // This logic can consider the number of hits and the age of the current generation.
    // This function is called when the first update is added to a new generation page, and will be called again when the provided limits are exhausted.
    // When (createGeneration) is (true), a new generation page will be created and the given limits (hitLimit) and (ageLimit) will be applied.
    // When (createGeneration) is (false), the given limits (hitLimit) and (ageLimit) will be applied to the current generation page.
    // NB! A new generation page will always be created when capacity is exhausted. See NextGenerationCapacity.
    CanAdvanceGeneration(curLen int, curCap int, hitRemains uint64, start, end Age) (createGeneration bool, hitLimit uint64, ageLimit Age)

    // InitialAdvanceLimits is an analogue of CanAdvanceGeneration applied when a generation is created.
    // The age given as (start) is the age of the first record to be added.
    InitialAdvanceLimits(curCap int, start Age) (hitLimit uint64, ageLimit Age)

    // CanTrimGenerations should return the number of LFU generation pages to be trimmed. This trim does NOT free cache entries, but compacts
    // generation pages by converting LFU entries into LRU entries.
    // It receives the total number of entries, the total number of LFU generation pages, and the ages of the most recent generation,
    // of the least-frequent generation (rarest) and of the oldest LRU generation.
    // A zero or negative result skips trimming.
    CanTrimGenerations(totalCount, freqGenCount int, recent, rarest, oldest Age) int

    // CanTrimEntries should return the number of entries to be cleaned up from the cache.
    // It receives the total number of entries, and the ages of the most recent and of the oldest LRU generation.
    // A zero or negative result skips trimming.
    // Trimming initiated by a positive result of CanTrimEntries may prevent CanTrimGenerations from being called.
    CanTrimEntries(totalCount int, recent, oldest Age) int
}
Strategy defines the behavior of a cache Core.
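Since the interface is fully specified above, a minimal Strategy can be sketched directly. The policy below is illustrative only, not a recommendation from the package: fixed page sizes, no fencing, no time-based retention, and eviction of everything above a hypothetical maxCount.

// simpleStrategy is an illustrative Strategy with fixed sizes and a plain capacity cap.
type simpleStrategy struct {
    maxCount int // hypothetical soft capacity of the cache
}

func (s simpleStrategy) TrimOnEachAddition() bool { return true }              // check capacity on every addition
func (s simpleStrategy) CurrentAge() Age          { var zero Age; return zero } // no time-based retention
func (s simpleStrategy) AllocationPageSize() int  { return 1024 }

func (s simpleStrategy) InitGenerationCapacity() (int, bool) { return 128, false }

func (s simpleStrategy) NextGenerationCapacity(prevLen, prevCap int) (int, bool) {
    return prevCap, false // keep generation pages the same size, no fencing
}

func (s simpleStrategy) InitialAdvanceLimits(curCap int, start Age) (uint64, Age) {
    var noLimit Age
    return uint64(curCap), noLimit // allow roughly one hit per slot before re-checking
}

func (s simpleStrategy) CanAdvanceGeneration(curLen, curCap int, hitRemains uint64, start, end Age) (bool, uint64, Age) {
    var noLimit Age
    // Rely on capacity exhaustion to create new generations; just renew the hit limit.
    return false, uint64(curCap), noLimit
}

func (s simpleStrategy) CanTrimGenerations(totalCount, freqGenCount int, recent, rarest, oldest Age) int {
    if freqGenCount > 8 {
        return freqGenCount - 8 // compact older LFU generations into LRU entries
    }
    return 0 // zero or negative skips trimming
}

func (s simpleStrategy) CanTrimEntries(totalCount int, recent, oldest Age) int {
    return totalCount - s.maxCount // a positive result asks the cache to evict that many entries
}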
type UintCache ¶
type UintCache struct {
// contains filtered or unexported fields
}
UintCache is an example/template implementation of a cache that uses the Core.
func NewUintCache ¶
func (*UintCache) Allocated ¶
Allocated returns the total number of cache entries allocated; some of them may be unused. NB! The cache can only grow.
func (*UintCache) Contains ¶
Contains returns (true) when the key is present. Access to the key is not updated.
func (*UintCache) Delete ¶
Delete removes the key and zeroes out the relevant value. Returns (false) when the key wasn't present. Access to the key is not updated. The cache entry becomes unavailable immediately, but will only be freed after the relevant expiry / eviction.
func (*UintCache) Get ¶
Get returns the value and a presence flag for the given key. Access to the key is updated when the key exists.
func (*UintCache) Peek ¶
Peek returns the value and a presence flag for the given key. Access to the key is not updated.
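For reference, a hedged usage sketch of UintCache follows. The NewUintCache argument list and the integer conversions for Key and Value are assumptions (their definitions are not shown in this documentation); the method calls match the signatures listed in the index above.

func exampleUse(s Strategy) {
    cache := NewUintCache(s) // assumed constructor signature; check the actual API

    cache.Put(Key(1), Value(42)) // Key/Value conversions assume integer underlying types

    if v, ok := cache.Get(Key(1)); ok { // Get updates access to the key
        _ = v
    }
    v, ok := cache.Peek(Key(1)) // Peek does not update access
    _ = cache.Contains(Key(1))  // Contains does not update access either
    _, _ = v, ok

    cache.Delete(Key(1))  // entry becomes unavailable; the slot is freed later by expiry / eviction
    _ = cache.Allocated() // total allocated entries; this number never decreases
}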