cache

package
v1.0.2
Published: May 26, 2024 · License: MIT · Imports: 5 · Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type LRU

type LRU struct {
	// contains filtered or unexported fields
}

LRU implements a Least-Recently-Used pokeapi.Cache. An LRU cache is a queue-like structure with some extra semantics on Lookup, plus a map to allow for O(1) lookups:

  • On Lookup, if a cache hit occurs, the entry is moved to the top of the list (it is now the youngest entry).
  • On Put, the entry is moved to/inserted at the top of the list. If an insert causes the length of the list to exceed the cache capacity, the oldest entry is dropped. A Put occurs on the first call to pokeapi.CacheLookup.Hydrate on a pokeapi.CacheLookup returned by LRU.Lookup (as long as pokeapi.CacheLookup.Close has not yet been called).

If multiple cache lookups are opened for the same URL, the LRU cache ensures they are not executed in parallel; instead, these lookups occur one after another. Parallel cache lookups for different URLs are serviced as normal.

A TTL may be set, in which case the LRU cache will expire entries in accordance with it. As key expiry briefly locks the entire cache, it is only run once within a specified period. See LRUOpts.ExpiryDelay for details.

func NewLRU

func NewLRU(opts *LRUOpts) *LRU

NewLRU constructs a new LRU cache for use in accordance with the provided LRUOpts.
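
For example, a minimal sketch of wiring an LRU into a client might look like the following (pokeapi.NewClient is assumed here as the client constructor; the LRUOpts fields used are documented below):

// Construct an LRU that holds up to 1000 entries, each kept for at most an
// hour, and use it as the client's cache.
// pokeapi.NewClient is an assumed constructor name.
lru := cache.NewLRU(&cache.LRUOpts{
	Size: 1000,
	TTL:  time.Hour,
})

client := pokeapi.NewClient(&pokeapi.ClientOpts{Cache: lru})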

func (*LRU) Lookup

func (lru *LRU) Lookup(
	ctx context.Context,
	url string,
	loadOnMiss pokeapi.CacheLoader,
) (any, error)
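
Lookup implements pokeapi.Cache. Hit, insertion, and eviction semantics are described in the LRU documentation above.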

type LRUOpts added in v0.2.0

type LRUOpts struct {
	// The maximum capacity of the cache. Default 500.
	Size int

	// Provide a custom time function - useful for testing. Default time.Now().
	Clock func() time.Time

	// How long cached entries should be stored for. Default 0 (forever). Key
	// expiry briefly locks the cache, so ExpiryDelay can be used to limit how
	// often TTL is checked.
	TTL time.Duration

	// How long to wait between expiry runs. Default ~1 week. If set to 0, expired
	// keys are checked on every Lookup. Only applies if TTL != 0.
	ExpiryDelay *time.Duration

	// If SkipURL returns true, the LRU won't cache the response for that URL.
	// This can be useful to avoid filling the cache with parameterised List*
	// requests that will not be reused.
	SkipURL func(url string) bool
}
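
As a hedged sketch, an LRU configured to skip caching parameterised requests could be built like this (the query-string check is only an illustrative heuristic, not part of this package):

opts := &cache.LRUOpts{
	Size: 500,
	TTL:  24 * time.Hour,
	SkipURL: func(url string) bool {
		// Skip URLs carrying query parameters, e.g. paginated List* requests.
		return strings.Contains(url, "?")
	},
}
lru := cache.NewLRU(opts)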

type Wrapper

type Wrapper struct {
	// contains filtered or unexported fields
}

The Wrapper cache wraps a standard get/put cache and converts it to the transactional pokeapi.Cache interface. It's useful if you want to make use of the pokeapi.Client's concurrency guarantees.

func NewWrapper

func NewWrapper(
	getFn func(ctx context.Context, url string) (any, bool),
	putFn func(ctx context.Context, url string, value any),
) *Wrapper

NewWrapper accepts the Get and Put (or equivalent) method references of the cache it wraps and returns a pokeapi.Cache that loads and stores values from/to that cache.

Example usage:

r := NewRedisCache(redisConn, defaultTTL)
c := pokeapi.NewClient(
	&pokeapi.ClientOpts{
		Cache: cache.NewWrapper(r.Get, r.Put),
	},
)

Note that the URL passed will be the raw URL, query parameters included. Advanced cache implementations may choose to make use of this.
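
As a rough sketch, any pair of functions with matching signatures will do; the in-memory mapCache below is purely illustrative and not part of this package:

// mapCache is an illustrative, concurrency-safe in-memory cache whose Get and
// Put methods match the signatures NewWrapper expects.
type mapCache struct {
	mu sync.RWMutex
	m  map[string]any
}

func (c *mapCache) Get(_ context.Context, url string) (any, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[url]
	return v, ok
}

func (c *mapCache) Put(_ context.Context, url string, value any) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[url] = value
}

mc := &mapCache{m: make(map[string]any)}
pokeCache := cache.NewWrapper(mc.Get, mc.Put)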

func (*Wrapper) Lookup

func (w *Wrapper) Lookup(
	ctx context.Context,
	url string,
	loadOnMiss pokeapi.CacheLoader,
) (any, error)
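
Lookup implements pokeapi.Cache, loading values from and storing them to the wrapped cache via the functions passed to NewWrapper.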

Directories

Path        Synopsis
cachetest   Package cachetest is a test suite for [pokeapi.Cache] implementations.
