Documentation ¶
Overview ¶
Package cache implements the CLOCK-Pro caching algorithm.
CLOCK-Pro is a patent-free alternative to the Adaptive Replacement Cache (https://en.wikipedia.org/wiki/Adaptive_replacement_cache). It is an approximation of LIRS (https://en.wikipedia.org/wiki/LIRS_caching_algorithm), much like the CLOCK page replacement algorithm is an approximation of LRU.
This implementation is based on the Python code from https://bitbucket.org/SamiLehtinen/pyclockpro .
Slides describing the algorithm: http://fr.slideshare.net/huliang64/clockpro
The original paper: http://static.usenix.org/event/usenix05/tech/general/full_papers/jiang/jiang_html/html.html
It is MIT licensed, like the original.
Index ¶
- func Free(v *Value)
- type Cache
- func (c *Cache) Delete(id uint64, fileNum base.DiskFileNum, offset uint64)
- func (c *Cache) EvictFile(id uint64, fileNum base.DiskFileNum)
- func (c *Cache) Get(id uint64, fileNum base.DiskFileNum, offset uint64) Handle
- func (c *Cache) MaxSize() int64
- func (c *Cache) Metrics() Metrics
- func (c *Cache) NewID() uint64
- func (c *Cache) Ref()
- func (c *Cache) Reserve(n int) func()
- func (c *Cache) Set(id uint64, fileNum base.DiskFileNum, offset uint64, value *Value) Handle
- func (c *Cache) Size() int64
- func (c *Cache) Unref()
- type Handle
- type Metrics
- type Value
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
Types ¶
type Cache ¶
type Cache struct {
// contains filtered or unexported fields
}
Cache implements Pebble's sharded block cache. The CLOCK-Pro algorithm is used for page replacement (http://static.usenix.org/event/usenix05/tech/general/full_papers/jiang/jiang_html/html.html). In order to provide better concurrency, 4 x NumCPUs shards are created, with each shard being given 1/n of the target cache size. The CLOCK-Pro algorithm is run independently on each shard.
Blocks are keyed by an (id, fileNum, offset) triple. The ID is a namespace for file numbers and allows a single Cache to be shared between multiple Pebble instances. The fileNum and offset refer to an sstable file number and the offset of the block within the file. Because sstables are immutable and file numbers are never reused, (fileNum,offset) are unique for the lifetime of a Pebble instance.
In addition to maintaining a map from (fileNum,offset) to data, each shard maintains a map of the cached blocks for a particular fileNum. This allows efficient eviction of all of the blocks for a file which is used when an sstable is deleted from disk.
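As an illustration of the keying and sharding scheme described above, here is a minimal, self-contained sketch; the key struct, the FNV hash, and the shardIndex function are hypothetical stand-ins, not Pebble's actual code.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
	"runtime"
)

// key mirrors the (id, fileNum, offset) triple that identifies a cached
// block. Field types are simplified; Pebble uses base.DiskFileNum.
type key struct {
	id, fileNum, offset uint64
}

// shardIndex hashes a key to one of n independent shards, so that the
// replacement algorithm can run per shard with less lock contention.
func shardIndex(k key, n int) int {
	h := fnv.New64a()
	var buf [8]byte
	for _, v := range []uint64{k.id, k.fileNum, k.offset} {
		binary.LittleEndian.PutUint64(buf[:], v)
		h.Write(buf[:])
	}
	return int(h.Sum64() % uint64(n))
}

func main() {
	numShards := 4 * runtime.NumCPU() // shard count described in the docs
	k := key{id: 1, fileNum: 42, offset: 4096}
	fmt.Printf("shard %d of %d\n", shardIndex(k, numShards), numShards)
}
```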
Memory Management ¶
In order to reduce pressure on the Go GC, manual memory management is performed for the data stored in the cache. Manual memory management is performed by calling into C.{malloc,free} to allocate memory. Cache.Values are reference counted and the memory backing a manual value is freed when the reference count drops to 0.
Manual memory management brings the possibility of memory leaks. It is imperative that every Handle returned by Cache.{Get,Set} is eventually released. The "invariants" build tag enables a leak detection facility that places a GC finalizer on cache.Value. When the cache.Value finalizer is run, if the underlying buffer is still present a leak has occurred. The "tracing" build tag enables tracing of cache.Value reference count manipulation and eases finding where a leak has occurred. These two facilities are usually used in combination by specifying `-tags invariants,tracing`. Note that "tracing" produces a significant slowdown, while "invariants" does not.
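The reference-counting scheme can be sketched in a few lines. This is a toy stand-in for cache.Value, not the real implementation: the real cache frees C-allocated memory when the count reaches zero, while the toy merely clears the slice so a use-after-release is detectable.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// value is a toy stand-in for cache.Value: a buffer whose lifetime is
// controlled by an explicit reference count instead of the Go GC.
type value struct {
	buf  []byte
	refs atomic.Int32
}

// alloc returns a value with a reference count of 1, owned by the caller.
func alloc(n int) *value {
	v := &value{buf: make([]byte, n)}
	v.refs.Store(1)
	return v
}

func (v *value) acquire() { v.refs.Add(1) }

// release drops a reference; when the count hits zero the buffer is
// "freed" (here: cleared; in the real cache: returned via C.free).
func (v *value) release() {
	if v.refs.Add(-1) == 0 {
		v.buf = nil
	}
}

func main() {
	v := alloc(4096)
	v.acquire()               // e.g. a Handle takes a second reference
	v.release()               // the Handle is released
	fmt.Println(v.buf != nil) // true: one reference remains
	v.release()               // final reference dropped
	fmt.Println(v.buf == nil) // true: buffer freed
}
```

A leaked reference in this model is simply a count that never reaches zero, which is what the "invariants" finalizer check detects in the real cache.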
func New ¶
New creates a new cache of the specified size. Memory for the cache is allocated on demand, not during initialization. The cache is created with a reference count of 1. Each DB it is associated with adds a reference, so the creator of the cache should usually release their reference after the DB is created.
	c := cache.New(...)
	defer c.Unref()
	d, err := pebble.Open(pebble.Options{Cache: c})
func (*Cache) Delete ¶
func (c *Cache) Delete(id uint64, fileNum base.DiskFileNum, offset uint64)
Delete deletes the cached value for the specified file and offset.
func (*Cache) EvictFile ¶
func (c *Cache) EvictFile(id uint64, fileNum base.DiskFileNum)
EvictFile evicts all of the cache values for the specified file.
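The per-file index that makes EvictFile efficient (described in the Cache overview) can be sketched as follows; the shard struct, map layouts, and method names are illustrative, not Pebble's actual code.

```go
package main

import "fmt"

// blockKey is (fileNum, offset); the id namespace is omitted for brevity.
type blockKey struct {
	fileNum, offset uint64
}

// shard keeps the main block map plus a per-file index of cached
// offsets, so evicting a file touches only that file's blocks rather
// than scanning the whole shard.
type shard struct {
	blocks map[blockKey][]byte
	files  map[uint64]map[uint64]struct{} // fileNum -> set of offsets
}

func (s *shard) set(k blockKey, data []byte) {
	s.blocks[k] = data
	if s.files[k.fileNum] == nil {
		s.files[k.fileNum] = make(map[uint64]struct{})
	}
	s.files[k.fileNum][k.offset] = struct{}{}
}

// evictFile removes every cached block belonging to fileNum.
func (s *shard) evictFile(fileNum uint64) {
	for off := range s.files[fileNum] {
		delete(s.blocks, blockKey{fileNum, off})
	}
	delete(s.files, fileNum)
}

func main() {
	s := &shard{blocks: map[blockKey][]byte{}, files: map[uint64]map[uint64]struct{}{}}
	s.set(blockKey{7, 0}, []byte("a"))
	s.set(blockKey{7, 4096}, []byte("b"))
	s.set(blockKey{8, 0}, []byte("c"))
	s.evictFile(7)
	fmt.Println(len(s.blocks)) // 1: only file 8's block remains
}
```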
func (*Cache) Get ¶
func (c *Cache) Get(id uint64, fileNum base.DiskFileNum, offset uint64) Handle
Get retrieves the cache value for the specified file and offset, returning nil if no value is present.
func (*Cache) Ref ¶
func (c *Cache) Ref()
Ref adds a reference to the cache. The cache only remains valid as long as a reference is maintained to it.
func (*Cache) Reserve ¶
func (c *Cache) Reserve(n int) func()
Reserve N bytes in the cache. This effectively shrinks the size of the cache by N bytes, without actually consuming any memory. The returned closure should be invoked to release the reservation.
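The Reserve contract can be sketched with a toy cache; the toyCache type is hypothetical and the sketch assumes the closure may safely be called more than once.

```go
package main

import "fmt"

// toyCache illustrates Reserve: reserving n bytes shrinks the cache's
// target size without allocating memory, and the returned closure
// (idempotently) restores it.
type toyCache struct {
	targetSize int64
}

func (c *toyCache) Reserve(n int) func() {
	c.targetSize -= int64(n)
	released := false
	return func() {
		if !released {
			released = true
			c.targetSize += int64(n)
		}
	}
}

func main() {
	c := &toyCache{targetSize: 1 << 20}
	release := c.Reserve(4096)
	fmt.Println(c.targetSize) // 1044480: target shrunk by the reservation
	release()
	release() // second call is a no-op
	fmt.Println(c.targetSize) // 1048576: reservation released
}
```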
type Handle ¶
type Handle struct {
// contains filtered or unexported fields
}
Handle provides a strong reference to a value in the cache. The reference does not pin the value in the cache, but it does prevent the underlying byte slice from being reused.
type Metrics ¶
type Metrics struct {
	// The number of bytes in use by the cache.
	Size int64
	// The count of objects (blocks or tables) in the cache.
	Count int64
	// The number of cache hits.
	Hits int64
	// The number of cache misses.
	Misses int64
}
Metrics holds metrics for the cache.
type Value ¶
type Value struct {
// contains filtered or unexported fields
}
Value holds a reference counted immutable value.
func Alloc ¶ added in v1.1.0
Alloc allocates a byte slice of the specified size, possibly reusing previously allocated but unused memory. The memory backing the value is manually managed. The caller MUST either add the value to the cache (via Cache.Set), or release the value (via Free). Failure to do so will result in a memory leak.
func (*Value) Buf ¶
Buf returns the buffer associated with the value. The contents of the buffer should not be changed once the value has been added to the cache. Instead, a new Value should be created and added to the cache to replace the existing value.