lru

Published: Jan 2, 2025 License: MIT Imports: 9 Imported by: 0

README

LRU | Least Recently Used Cache

Different implementations of the Cache interface, all providing different approaches to Least Recently Used caches.

BasicCache

Implements a concurrency-safe LRU cache, with either finite or unlimited capacity. When a finite capacity is specified, LRU processing evicts the oldest entry when a new entry is added to a full cache.
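The eviction behaviour can be pictured with a minimal, self-contained sketch (an illustration of the technique, not this package's implementation, and not concurrency-safe): a doubly linked list tracks recency order while a map gives O(1) lookup.

```go
package main

import (
	"container/list"
	"fmt"
)

// lruCache is an illustrative LRU: the list front holds the most
// recently used entry, the back the least recently used.
type lruCache struct {
	max   int
	order *list.List               // recency order
	items map[string]*list.Element // key -> element holding an entry
}

type entry struct {
	key string
	val int
}

func newLRUCache(max int) *lruCache {
	return &lruCache{max: max, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lruCache) put(key string, val int) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el) // overwrite refreshes recency
		el.Value.(*entry).val = val
		return
	}
	c.items[key] = c.order.PushFront(&entry{key, val})
	if c.max > 0 && c.order.Len() > c.max {
		oldest := c.order.Back() // least recently used
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

func (c *lruCache) get(key string) (int, bool) {
	el, ok := c.items[key]
	if !ok {
		return 0, false
	}
	c.order.MoveToFront(el) // a hit refreshes recency
	return el.Value.(*entry).val, true
}

func main() {
	c := newLRUCache(2)
	c.put("a", 1)
	c.put("b", 2)
	c.get("a")    // "a" is now the most recently used
	c.put("c", 3) // capacity exceeded: evicts "b", the least recently used
	_, ok := c.get("b")
	fmt.Println("b still cached?", ok)
}
```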

A new cache is created by calling NewBasicCache and all cache instances are independent of each other.

If specified, the timeout value limits the wait time whilst attempting to interact with the cache, and generates an error when the timeout is exceeded. Setting timeout to zero (infinite wait time on the cache action) avoids the error.

Note that the context passed to NewBasicCache() controls the lifetime of the cache as a whole. This can differ from the context passed to Get(), which can control behaviour for each session interacting with the cache.

func main() {
    ctx := context.Background()

    cache, _ := NewBasicCache(ctx, 100, 1*time.Millisecond)
    defer cache.Close()

    cache.Put("key", 123) 

    if v, _, _ := cache.Get(ctx, "key"); v != 123 {
        panic("should not happen!")
    }
}

If the context completes in some way (for example, it is cancelled or times out), then the cache is invalidated.

Always call Close() for the cache, to release internal resources (this is automatic if the context completes).

LoadingCache

This cache extends BasicCache to use a Loader function to attempt to retrieve and add entries if they are requested and don't already exist in the cache.

This simplifies data retrieval logic, as callers only need to request entries from the cache, rather than writing additional code to load the data (and add it to the cache) should the entry be missing from the cache.
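The load-on-miss pattern this describes can be sketched in a few lines (a self-contained illustration, not this package's code): a hit is returned directly, a miss invokes the loader and caches the result.

```go
package main

import "fmt"

// loadingGet returns a cached value if present; otherwise it invokes the
// loader, stores the result, and returns it.
func loadingGet(cache map[string]int, loader func(string) (int, error), key string) (int, error) {
	if v, ok := cache[key]; ok {
		return v, nil // cache hit: no loader call
	}
	v, err := loader(key)
	if err != nil {
		return 0, err
	}
	cache[key] = v // populate the cache for subsequent requests
	return v, nil
}

func main() {
	calls := 0
	loader := func(key string) (int, error) {
		calls++ // stands in for a round trip to storage
		return len(key), nil
	}

	cache := map[string]int{}
	loadingGet(cache, loader, "answer") // miss: loader invoked
	loadingGet(cache, loader, "answer") // hit: loader skipped
	fmt.Println("loader calls:", calls)
}
```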

If OpenTelemetry is being used, and the context passed to Get() contains a Span, then if the loader is called, events will be added to that Span to record how many keys are requested and retrieved, together with timestamps.

func main() {
    loader := func(ctx context.Context, keys []Key) ([]LoaderResult, error) {
        // Interact with storage to retrieve values for the requested keys
        return nil, nil
    }

    ctx := context.Background()

    cache, _ := NewLoadingCache(ctx, loader, 100, 1*time.Millisecond)
    defer cache.Close()

    cache.Put("key", 123) 

    if v, _, _ := cache.Get(ctx, "key"); v != 123 {
        panic("should not happen!")
    }
}

PartitionedCache

A partitioned cache is useful when some entries are considered to age more slowly than others; i.e. when it is beneficial to retain some of the data in the cache even though normal LRU rules would evict it.

The PartitionedCache is simply a facade over any number of Cache implementations, each associated with a specific partition Name.

The cache uses a Partitioner function that uses the Key to resolve which partition (cache) holds the entry, and then cache management of the entry is delegated to the Cache implementation of that partition.

This type of cache can, for example, prevent reference / static / configuration data from being evicted from a BasicCache due to other types of data being added to the cache.

func main() {
    ctx := context.Background()

    partitioner := func(key Key) (Partition, error) {
        // Rules specific to Key structure determine partition to return
        return "SomeThing", nil
    }

    someThingsCache, _ := NewBasicCache(ctx, 100, 0)
    otherThingsCache, _ := NewBasicCache(ctx, 500, 0)

    info := []PartitionInfo{
        {
            Name: "SomeThing", 
            Cache: someThingsCache,
        },
        {
            Name: "OtherThings", 
            Cache: otherThingsCache,
        },
    }

    cache, _ := NewPartitionedCache(ctx, partitioner, info)
    defer cache.Close()

    cache.Put("key", 123) 

    if v, _, _ := cache.Get(ctx, "key"); v != 123 {
        panic("should not happen!")
    }
}

Documentation


Constants

This section is empty.

Variables

var ErrAttemptToUseInvalidCache = errors.New("cache has been Closed() and is unusable")
var ErrInvalidContext = errors.New("context has already ended")
var ErrInvalidLoader = errors.New("loader must not be nil")
var ErrInvalidMaxEntries = errors.New("maxEntries must be zero or positive integer")
var ErrInvalidPartition = errors.New("partitioner returned unknown partition for key")
var ErrInvalidPartitionInfo = errors.New("caches must not be an empty slice")
var ErrInvalidPartitioner = errors.New("partitioner must not be nil")
var ErrPartitionInfoHasDuplicates = errors.New("partitions must have unique names")
var ErrPartitionWithNoCache = errors.New("all partitions must have a non-nil cache")
var ErrTimeout = errors.New("timeout exceeded")
var ErrUnknown = errors.New("unknown error")

Functions

This section is empty.

Types

type BasicCache

type BasicCache struct {
	// contains filtered or unexported fields
}

BasicCache provides a concurrency-safe implementation of a bounded least-recently-used cache

func NewBasicCache

func NewBasicCache(ctx context.Context, maxEntries int, timeout time.Duration) (*BasicCache, error)

NewBasicCache creates a new LRU cache instance with the specified capacity and timeout for request processing. If capacity > 0 then a new addition will trigger eviction of the least recently used item. If capacity = 0 then the cache will grow indefinitely. If timeout <= 0 then an infinite timeout is used (not recommended). Close() should be called when the cache is no longer needed, to release resources.

Example
ctx := context.Background()

c, _ := NewBasicCache(ctx, 10, 1*time.Millisecond)

// BasicCache implements the Cache interface
var cache Cache = c

key := "Key1"
val := 1234

cache.Put(key, val) // Add

v, _, _ := cache.Get(ctx, key) // Retrieve

size1, _ := cache.Len() // Len

cache.Put(key, val) // Overwrite

sizeUnchanged, _ := cache.Len() // Has entry

cache.Remove(key) // Removed

size0, _ := cache.Len() // Now empty

_, ok, _ := cache.Get(ctx, key) // Not found

fmt.Println(val == v, size1, sizeUnchanged, size0, ok)
Output:

true 1 1 0 false

func (*BasicCache) Close

func (c *BasicCache) Close()

Close releases all resources associated with the cache

func (*BasicCache) Get

func (c *BasicCache) Get(ctx context.Context, key Key) (v any, ok bool, err error)

Get will retrieve the item with the specified key from the cache, updating its LRU status. An error is raised if Close() has been called, or the timeout for the operation is exceeded.

func (*BasicCache) GetBatch

func (c *BasicCache) GetBatch(ctx context.Context, keys []Key) (cr []*CacheResult, err error)

GetBatch retrieves all the provided keys, returning a CacheResult for each one, which provides the details of the retrieval of the key

func (*BasicCache) Len

func (c *BasicCache) Len() (l int, err error)

Len returns the number of items in the cache. An error is raised if Close() has been called, or the timeout for the operation is exceeded.

func (*BasicCache) Put

func (c *BasicCache) Put(key Key, val any) (err error)

Put will insert the item with the specified key into the cache, replacing what was previously there (if anything). An error is raised if Close() has been called, or the timeout for the operation is exceeded.

func (*BasicCache) Remove

func (c *BasicCache) Remove(key Key) (err error)

Remove will remove the item with the specified key from the cache, ignoring it if it does not exist. An error is raised if Close() has been called, or the timeout for the operation is exceeded.

type Cache

type Cache interface {
	// Close empties the cache, releases all resources
	Close()
	// Get retrieves the value at the specified key
	Get(ctx context.Context, key Key) (v any, ok bool, err error)
	// GetBatch retrieves multiple keys at once
	GetBatch(ctx context.Context, keys []Key) ([]*CacheResult, error)
	// Len returns the current usage of the cache
	Len() (l int, err error)
	// Put inserts the value at the specified key, replacing any prior content
	Put(key Key, val any) (err error)
	// Remove evicts the key and its associated value
	Remove(key Key) (err error)
}

Cache defines the features of a cache

type CacheResult

type CacheResult struct {
	// Key requested to be retrieved
	Key Key
	// Value retrieved for the key, if found
	Value any
	// OK set to true indicates successful retrieval for the key
	OK bool
	// Err holds any errors encountered during retrieval of this key
	Err error
}

CacheResult describes the outcome of attempting to retrieve the value at the key

type Key

type Key interface{}

A Key may be any value that is comparable. See http://golang.org/ref/spec#Comparison_operators
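For illustration (this snippet is not part of the package), any comparable Go value satisfies this requirement, because Go maps impose the same constraint on their keys:

```go
package main

import "fmt"

// point is comparable (all its fields are), so it can serve as a Key.
type point struct{ x, y int }

func main() {
	cache := map[any]int{} // Key is interface{}; any comparable value works
	cache[point{1, 2}] = 10
	cache["name"] = 20

	fmt.Println(cache[point{1, 2}], cache["name"])
}
```

Slices, maps, and functions are not comparable, so they cannot be used as keys.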

type Loader

type Loader func(ctx context.Context, key []Key) ([]LoaderResult, error)

Loader is a func that returns the values for the specified keys

type LoaderResult

type LoaderResult struct {
	Key   Key
	Value any
	Err   error
}

LoaderResult provides the outcome of an attempt to load the specified key

type LoadingCache

type LoadingCache struct {
	// contains filtered or unexported fields
}

LoadingCache is an implementation of Cache that will attempt to populate itself for a missing Key, using a specified Loader function

func NewLoadingCache

func NewLoadingCache(ctx context.Context, loader Loader, maxEntries int, timeout time.Duration) (*LoadingCache, error)

NewLoadingCache creates a new LRU cache instance with the specified capacity and timeout for request processing; additionally, it will invoke the specified Loader function to populate the cache if a requested key is not already present. If capacity > 0 then a new addition will trigger eviction of the least recently used item. If capacity = 0 then the cache will grow indefinitely. If timeout <= 0 then an infinite timeout is used (not recommended). Close() should be called when the cache is no longer needed, to release resources.

func (*LoadingCache) Close

func (l *LoadingCache) Close()

Close empties the cache, releases all resources

func (*LoadingCache) Get

func (l *LoadingCache) Get(ctx context.Context, key Key) (any, bool, error)

Get retrieves the value at the specified key

func (*LoadingCache) GetBatch

func (l *LoadingCache) GetBatch(ctx context.Context, keys []Key) (res []*CacheResult, err error)

GetBatch retrieves the values at the specified keys

func (*LoadingCache) Len

func (l *LoadingCache) Len() (int, error)

Len returns the current usage of the cache

func (*LoadingCache) Put

func (l *LoadingCache) Put(key Key, val any) (err error)

Put inserts the value at the specified key, replacing any prior content

func (*LoadingCache) Remove

func (l *LoadingCache) Remove(key Key) (err error)

Remove evicts the key and its associated value

type Partition

type Partition string

Partitions mutually divide the cached data

type PartitionInfo

type PartitionInfo struct {
	Name  Partition
	Cache Cache
}

PartitionInfo specifies the Cache to be used for a given Named partition

type PartitionedCache

type PartitionedCache struct {
	// contains filtered or unexported fields
}

PartitionedCache is an implementation of a Cache that splits entries in partitions by their Keys using the specified Partitioner function. This allows commonly used but slowly changing data to avoid eviction and improve responsiveness.

func NewPartitionedCache

func NewPartitionedCache(ctx context.Context, partitioner Partitioner, caches []PartitionInfo) (*PartitionedCache, error)

NewPartitionedCache creates a new LRU cache instance consisting of named partitions, each of whose data is managed within the provided Cache instance. The provided Cache instances are assumed to be owned by the PartitionedCache instance once they are added. Close() should be called when the cache is no longer needed, to release resources.

Example
ctx := context.Background()

partitioner := func(key Key) (Partition, error) {
	// Rules specific to Key structure determine partition to return
	// Here, just always put into the same partition
	return "SomeThing", nil
}

someThingsCache, _ := NewBasicCache(ctx, 100, 0) // Could be LoadingCache or any implementation of Cache
otherThingsCache, _ := NewBasicCache(ctx, 500, 0)

info := []PartitionInfo{
	{
		Name:  "SomeThing",
		Cache: someThingsCache,
	},
	{
		Name:  "OtherThings",
		Cache: otherThingsCache,
	},
}

cache, _ := NewPartitionedCache(ctx, partitioner, info)
defer cache.Close()

err := cache.Put("MeaningOfLife", 42)
if err != nil {
	panic(err)
}

v, _, err := cache.Get(ctx, "MeaningOfLife")
if err != nil {
	panic(err)
}

fmt.Println("Meaning of life is still 42?", v.(int) == 42)
Output:

Meaning of life is still 42? true

func (*PartitionedCache) Close

func (p *PartitionedCache) Close()

Close empties the cache, releases all resources

func (*PartitionedCache) Get

func (p *PartitionedCache) Get(ctx context.Context, key Key) (v any, ok bool, err error)

Get retrieves the value at the specified key

func (*PartitionedCache) GetBatch

func (p *PartitionedCache) GetBatch(ctx context.Context, keys []Key) (res []*CacheResult, err error)

GetBatch retrieves the values at the specified keys

func (*PartitionedCache) Len

func (p *PartitionedCache) Len() (l int, err error)

Len returns the current usage of the cache

func (*PartitionedCache) Put

func (p *PartitionedCache) Put(key Key, val any) (err error)

Put inserts the value at the specified key, replacing any prior content

func (*PartitionedCache) Remove

func (p *PartitionedCache) Remove(key Key) (err error)

Remove evicts the key and its associated value

type Partitioner

type Partitioner func(key Key) (Partition, error)

Partitioner returns the Partition for a given Key, or an error
