cache

Published: Oct 9, 2023 License: BSD-3-Clause, MIT Imports: 14 Imported by: 0

Overview

Package cache implements the CLOCK-Pro caching algorithm.

CLOCK-Pro is a patent-free alternative to the Adaptive Replacement Cache (https://en.wikipedia.org/wiki/Adaptive_replacement_cache). It is an approximation of LIRS (https://en.wikipedia.org/wiki/LIRS_caching_algorithm), much like the CLOCK page replacement algorithm is an approximation of LRU.

This implementation is based on the Python code from https://bitbucket.org/SamiLehtinen/pyclockpro .

Slides describing the algorithm: http://fr.slideshare.net/huliang64/clockpro

The original paper: http://static.usenix.org/event/usenix05/tech/general/full_papers/jiang/jiang_html/html.html

It is MIT licensed, like the original.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Free

func Free(v *Value)

Free frees the specified value. The buffer associated with the value will possibly be reused, making it invalid to use the buffer after calling Free. Do not call Free on a value that has been added to the cache.

Types

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

Cache implements Pebble's sharded block cache. The CLOCK-Pro algorithm is used for page replacement (http://static.usenix.org/event/usenix05/tech/general/full_papers/jiang/jiang_html/html.html). To provide better concurrency, 4 x NumCPUs shards are created, with each shard receiving an equal fraction of the target cache size. The CLOCK-Pro algorithm is run independently on each shard.

Blocks are keyed by an (id, fileNum, offset) triple. The ID is a namespace for file numbers and allows a single Cache to be shared between multiple Pebble instances. The fileNum and offset refer to an sstable file number and the offset of the block within the file. Because sstables are immutable and file numbers are never reused, (fileNum,offset) are unique for the lifetime of a Pebble instance.

In addition to maintaining a map from (fileNum,offset) to data, each shard maintains a map of the cached blocks for a particular fileNum. This allows efficient eviction of all of the blocks for a file which is used when an sstable is deleted from disk.

Memory Management

In order to reduce pressure on the Go GC, manual memory management is performed for the data stored in the cache. Manual memory management is performed by calling into C.{malloc,free} to allocate memory. Cache.Values are reference counted and the memory backing a manual value is freed when the reference count drops to 0.

Manual memory management brings the possibility of memory leaks. It is imperative that every Handle returned by Cache.{Get,Set} is eventually released. The "invariants" build tag enables a leak detection facility that places a GC finalizer on cache.Value. When the cache.Value finalizer is run, if the underlying buffer is still present a leak has occurred. The "tracing" build tag enables tracing of cache.Value reference count manipulation and eases finding where a leak has occurred. These two facilities are usually used in combination by specifying `-tags invariants,tracing`. Note that "tracing" produces a significant slowdown, while "invariants" does not.
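As a hedged, non-compilable sketch (this package is internal to Pebble, so the surrounding setup and the `use` helper here are illustrative, not part of the API), the Get/Release discipline described above looks like:

	// Illustrative sketch: c is a *Cache; id, fileNum, and offset identify a block.
	h := c.Get(id, fileNum, offset)
	if b := h.Get(); b != nil {
		use(b) // cache hit; b remains valid until h.Release()
	}
	// Every Handle returned by Get or Set must eventually be released,
	// or the leak detector enabled by `-tags invariants` will fire.
	h.Release()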

func New

func New(size int64) *Cache

New creates a new cache of the specified size. Memory for the cache is allocated on demand, not during initialization. The cache is created with a reference count of 1. Each DB it is associated with adds a reference, so the creator of the cache should usually release their reference after the DB is created.

c := cache.New(...)
defer c.Unref()
d, err := pebble.Open(dirname, &pebble.Options{Cache: c})

func (*Cache) Delete

func (c *Cache) Delete(id uint64, fileNum base.DiskFileNum, offset uint64)

Delete deletes the cached value for the specified file and offset.

func (*Cache) EvictFile

func (c *Cache) EvictFile(id uint64, fileNum base.DiskFileNum)

EvictFile evicts all of the cache values for the specified file.

func (*Cache) Get

func (c *Cache) Get(id uint64, fileNum base.DiskFileNum, offset uint64) Handle

Get retrieves the cache value for the specified file and offset, returning nil if no value is present.

func (*Cache) MaxSize

func (c *Cache) MaxSize() int64

MaxSize returns the max size of the cache.

func (*Cache) Metrics

func (c *Cache) Metrics() Metrics

Metrics returns the metrics for the cache.

func (*Cache) NewID

func (c *Cache) NewID() uint64

NewID returns a new ID to be used as a namespace for cached file blocks.

func (*Cache) Ref

func (c *Cache) Ref()

Ref adds a reference to the cache. The cache only remains valid as long as a reference is maintained to it.

func (*Cache) Reserve

func (c *Cache) Reserve(n int) func()

Reserve reserves n bytes in the cache. This effectively shrinks the size of the cache by n bytes without actually consuming any memory. The returned closure should be invoked to release the reservation.
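The returned closure makes the reservation scoped to an operation. A hedged sketch (illustrative only, since the package is internal to Pebble):

	// Shrink the effective cache size by n bytes for the duration of an
	// operation that needs that memory outside the cache.
	release := c.Reserve(n)
	defer release() // give the n bytes back to the cache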

func (*Cache) Set

func (c *Cache) Set(id uint64, fileNum base.DiskFileNum, offset uint64, value *Value) Handle

Set sets the cache value for the specified file and offset, overwriting an existing value if present. A Handle is returned which provides faster retrieval of the cached value than Get (lock-free and avoidance of the map lookup). The value must have been allocated by Alloc.
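A sketch of the write path under this contract (illustrative rather than compilable, since the package is internal to Pebble; `block` is a hypothetical byte slice read from an sstable):

	// Allocate a manually managed buffer, fill it, and hand it to the cache.
	v := Alloc(len(block))
	copy(v.Buf(), block)
	h := c.Set(id, fileNum, offset, v) // the cache now owns v
	// ... h.Get() may be used for fast retrieval ...
	h.Release() // release the strong reference; do NOT call Free(v) here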

func (*Cache) Size

func (c *Cache) Size() int64

Size returns the current space used by the cache.

func (*Cache) Unref

func (c *Cache) Unref()

Unref releases a reference on the cache.

type Handle

type Handle struct {
	// contains filtered or unexported fields
}

Handle provides a strong reference to a value in the cache. The reference does not pin the value in the cache, but it does prevent the underlying byte slice from being reused.

func (Handle) Get

func (h Handle) Get() []byte

Get returns the value stored in the handle.

func (Handle) Release

func (h Handle) Release()

Release releases the reference to the cache entry.

type Metrics

type Metrics struct {
	// The number of bytes in use by the cache.
	Size int64
	// The count of objects (blocks or tables) in the cache.
	Count int64
	// The number of cache hits.
	Hits int64
	// The number of cache misses.
	Misses int64
}

Metrics holds metrics for the cache.

type Value

type Value struct {
	// contains filtered or unexported fields
}

Value holds a reference counted immutable value.

func Alloc

func Alloc(n int) *Value

Alloc allocates a byte slice of the specified size, possibly reusing previously allocated but unused memory. The memory backing the value is manually managed. The caller MUST either add the value to the cache (via Cache.Set), or release the value (via Free). Failure to do so will result in a memory leak.
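The second branch of that contract, for a value that never makes it into the cache, can be sketched as (illustrative; the package is internal to Pebble):

	v := Alloc(1024)
	// ... fill v.Buf() ...
	// If the value is NOT handed to Cache.Set, it must be freed explicitly,
	// or the C-allocated buffer backing it will leak.
	Free(v)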

func (*Value) Buf

func (v *Value) Buf() []byte

Buf returns the buffer associated with the value. The contents of the buffer should not be changed once the value has been added to the cache. Instead, a new Value should be created and added to the cache to replace the existing value.

func (*Value) Truncate

func (v *Value) Truncate(n int)

Truncate the buffer to the specified length. The buffer length should not be changed once the value has been added to the cache as there may be concurrent readers of the Value. Instead, a new Value should be created and added to the cache to replace the existing value.
