Documentation ¶
Overview ¶
Package sc provides a simple, idiomatic in-memory caching layer.
Example ¶
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/motoki317/sc"
)

type HeavyData struct {
	Data string // and all the gazillion fields you may have in your data
}

func retrieveHeavyData(_ context.Context, name string) (*HeavyData, error) {
	// Query to database or something...
	return &HeavyData{
		Data: "my-data-" + name,
	}, nil
}

func main() {
	// Wrap your 'retrieveHeavyData' function with sc - it will automatically cache the values.
	// (production code should not ignore errors)
	cache, _ := sc.New[string, *HeavyData](retrieveHeavyData, 1*time.Minute, 2*time.Minute, sc.WithLRUBackend(500))

	// Query the values - the cache will automatically trigger 'retrieveHeavyData' for each key.
	foo, _ := cache.Get(context.Background(), "foo")
	bar, _ := cache.Get(context.Background(), "bar")
	fmt.Println(foo.Data) // Use the values...
	fmt.Println(bar.Data)

	// Previous results are reused, so 'retrieveHeavyData' is called only once for each key in this test.
	foo, _ = cache.Get(context.Background(), "foo")
	bar, _ = cache.Get(context.Background(), "bar")
	fmt.Println(foo.Data) // Use the values...
	fmt.Println(bar.Data)
}
Output:

my-data-foo
my-data-bar
my-data-foo
my-data-bar
Index ¶
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Cache ¶
type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}
Cache represents a single cache instance. All methods are safe to be called from multiple goroutines.
Notice that Cache doesn't have Set(key K, value V) method - this is intentional. Users are expected to delegate the cache replacement logic to Cache by simply calling Get.
func New ¶
func New[K comparable, V any](replaceFn replaceFunc[K, V], freshFor, ttl time.Duration, options ...CacheOption) (*Cache[K, V], error)
New creates a new cache instance. You can specify ttl longer than freshFor to achieve 'graceful cache replacement': a stale item is served via Get while a single goroutine is launched in the background to retrieve a fresh one.
func NewMust ¶ added in v1.2.6
func NewMust[K comparable, V any](replaceFn replaceFunc[K, V], freshFor, ttl time.Duration, options ...CacheOption) *Cache[K, V]
NewMust is similar to New, but panics on error.
func (Cache) Forget ¶
func (c Cache) Forget(key K)
Forget instructs the cache to forget about the key. The corresponding item will be deleted, ongoing cache replacement results (if any) will not be added to the cache, and any future Get calls will immediately retrieve a new item.
func (Cache) ForgetIf ¶ added in v1.5.0
func (c Cache) ForgetIf(predicate func(key K) bool)
ForgetIf instructs the cache to Forget about all keys that match the predicate.
func (Cache) Get ¶
func (c Cache) Get(ctx context.Context, key K) (V, error)
Get retrieves an item. If the item is not in the cache, it automatically loads a new item into the cache. May return a stale item (older than freshFor, but younger than ttl) while a new item is being fetched in the background. Returns the error as-is if replaceFn returns an error.
The cache prevents the 'cache stampede' problem by coalescing multiple requests for the same key.
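The coalescing idea can be illustrated with a minimal singleflight-style sketch: the first caller for a key runs the load function, and concurrent callers for the same key block on that in-flight call and share its result. This is a hypothetical illustration of the technique, not sc's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// call is one in-flight load; later callers wait on wg and share val.
type call struct {
	wg  sync.WaitGroup
	val string
}

// coalescer deduplicates concurrent loads of the same key.
type coalescer struct {
	mu    sync.Mutex
	calls map[string]*call
}

func (c *coalescer) do(key string, fn func() string) string {
	c.mu.Lock()
	if cl, ok := c.calls[key]; ok {
		c.mu.Unlock()
		cl.wg.Wait() // join the in-flight call instead of loading again
		return cl.val
	}
	cl := &call{}
	cl.wg.Add(1)
	c.calls[key] = cl
	c.mu.Unlock()

	cl.val = fn() // only the first caller for the key runs fn
	cl.wg.Done()
	return cl.val
}

// concurrentLoads fires n concurrent requests for the same key and
// reports how many times the load function actually ran.
func concurrentLoads(n int) int64 {
	var loads atomic.Int64
	c := &coalescer{calls: map[string]*call{}}
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.do("foo", func() string {
				loads.Add(1)
				return "my-data-foo"
			})
		}()
	}
	wg.Wait()
	return loads.Load()
}

func main() {
	fmt.Println("10 concurrent requests, loads:", concurrentLoads(10))
}
```

In this simplified sketch the in-flight entry is never removed, so the load runs exactly once; a real cache additionally has to decide when a completed call expires, which is what freshFor/ttl (and EnableStrictCoalescing) govern in sc.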
func (Cache) GetIfExists ¶ added in v1.6.0
func (c Cache) GetIfExists(key K) (v V, ok bool)
GetIfExists retrieves an item without triggering value replacements.
This method doesn't wait for value replacement to finish, even if there is an ongoing one.
func (Cache) Notify ¶ added in v1.7.0
func (c Cache) Notify(ctx context.Context, key K)
Notify instructs the cache to retrieve the value for key, if the value does not exist or is stale, in a non-blocking manner.
type CacheOption ¶
type CacheOption func(c *cacheConfig)
CacheOption represents a single cache option. See other package-level functions which return CacheOption for more details.
func EnableStrictCoalescing ¶
func EnableStrictCoalescing() CacheOption
EnableStrictCoalescing enables a 'strict coalescing check' with a slight overhead. The check prevents Get() calls coming later in time from being coalesced with an already-stale response generated by an earlier Get() call.
Ordinary cache users should not need this behavior.
This is similar to 'automatically calling' (*Cache).Forget after a value expires, but differs in that it does not allow initiating a new request until the current one finishes or (*Cache).Forget is explicitly called.
Using this option, one may construct a 'throttler' / 'coalescer' not only of get requests but also of update requests.
This is a generalization of the so-called 'zero-time-cache', whose original behavior is achievable with zero freshFor/ttl values. See also: https://qiita.com/methane/items/27ccaee5b989fb5fca72 (ja)
Example with freshFor == 0 and ttl == 0 ¶
The 1st Get() call returns the value from the first replaceFn call.
The 2nd Get() call will NOT return the value from the first replaceFn call: by the time the 2nd call is made, that value is already considered expired. Instead, the 2nd Get() call initiates the second replaceFn call and returns that value. Without the EnableStrictCoalescing option, the 2nd Get() call would share the value from the first replaceFn call.
To immediately initiate the next replaceFn call without waiting for the previous one to finish, use (*Cache).Forget or (*Cache).Purge.
Similarly, the 3rd and 4th Get() calls will NOT return the value from the second replaceFn call, but instead initiate the third replaceFn call.
With EnableStrictCoalescing
Get() is called....: 1 2 3 4
returned value.....: 1 2 3 3
replaceFn is called: 1---->12---->23---->3
Without EnableStrictCoalescing
Get() is called....: 1 2 3 4
returned value.....: 1 1 2 2
replaceFn is called: 1---->1 2---->2
Example with freshFor == 1s and ttl == 1s ¶
The 1st, 2nd, and 3rd Get() calls all return the value from the first replaceFn call, since the value is still considered fresh.
The 4th and 5th calls do NOT return the value from the first replaceFn call: by the time these calls are made, that value is already considered expired. Instead, the 4th (and 5th) call initiates the second replaceFn call. Without the EnableStrictCoalescing option, the 4th call would share the value from the first replaceFn call, and the 5th Get() call would initiate the second replaceFn call.
With EnableStrictCoalescing:
Elapsed time (s)...: 0 1 2
Get() is called....: 1 2 3 4 5
returned value.....: 1 1 1 2 2
replaceFn is called: 1------------>12------------>2
Without EnableStrictCoalescing:
Elapsed time (s)...: 0 1 2
Get() is called....: 1 2 3 4 5
returned value.....: 1 1 1 1 2
replaceFn is called: 1------------>1 2------------>2
func With2QBackend ¶ added in v1.2.0
func With2QBackend(capacity int) CacheOption
With2QBackend specifies to use 2Q cache for storing cache items. Capacity needs to be greater than 0.
func WithCleanupInterval ¶ added in v1.4.0
func WithCleanupInterval(interval time.Duration) CacheOption
WithCleanupInterval specifies the cleanup interval for expired items.
Setting an interval of 0 (or negative) disables the cleaner. This means that if you use a non-evicting cache backend (that is, the default built-in map backend), the cache keeps holding key-value pairs indefinitely. If the key cardinality is very large, this leads to a memory leak.
By default, a cleaner runs once every 2x ttl (or every 60s if ttl == 0). Try tuning your cache size (and using a non-map backend) before tuning this option. Using a cleanup interval on a cache with many items may decrease throughput, since the cleaner has to acquire the lock to iterate through all items.
func WithLRUBackend ¶
func WithLRUBackend(capacity int) CacheOption
WithLRUBackend specifies to use LRU for storing cache items. Capacity needs to be greater than 0.
func WithMapBackend ¶
func WithMapBackend(initialCapacity int) CacheOption
WithMapBackend specifies to use the built-in map for storing cache items (the default).
Note that this default map backend cannot have a maximum number of items configured, so it holds all values in memory until expired values are cleaned regularly at the interval specified by WithCleanupInterval.
If your key cardinality is high and you would like to hard-limit the cache's memory usage, consider using another backend, such as the LRU backend.
Initial capacity needs to be non-negative.
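The difference between the unbounded map backend and an evicting backend is easiest to see in a minimal LRU sketch: once capacity is reached, inserting a new key evicts the least recently used one, so memory stays bounded regardless of key cardinality. This is a hypothetical illustration, not sc's LRU implementation:

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is a toy fixed-capacity cache of keys (values omitted for brevity).
type lru struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // element value holds the key
}

func newLRU(capacity int) *lru {
	return &lru{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

func (l *lru) put(key string) {
	if el, ok := l.items[key]; ok {
		l.order.MoveToFront(el) // refresh recency
		return
	}
	if len(l.items) >= l.cap {
		oldest := l.order.Back() // evict the least recently used key
		l.order.Remove(oldest)
		delete(l.items, oldest.Value.(string))
	}
	l.items[key] = l.order.PushFront(key)
}

func main() {
	l := newLRU(2)
	l.put("a")
	l.put("b")
	l.put("c") // evicts "a", the least recently used
	_, hasA := l.items["a"]
	fmt.Println("size:", len(l.items), "still has a:", hasA)
}
```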
type HitStats ¶ added in v1.8.0
type HitStats struct {
	// Hits is the number of fresh cache hits in (*Cache).Get.
	Hits uint64
	// GraceHits is the number of stale cache hits in (*Cache).Get.
	GraceHits uint64
	// Misses is the number of cache misses in (*Cache).Get.
	Misses uint64
	// Replacements is the number of times replaceFn is called.
	// Note that this field is incremented after replaceFn finishes to reduce lock time.
	Replacements uint64
}
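A common use of these counters is computing a hit ratio, counting both fresh and grace (stale) hits as hits. A small sketch (the struct is mirrored locally here; in real code you would use the HitStats value obtained from the cache):

```go
package main

import "fmt"

// HitStats mirrors the fields documented above, for a self-contained example.
type HitStats struct {
	Hits, GraceHits, Misses, Replacements uint64
}

// hitRatio treats fresh and grace hits alike; Replacements is intentionally
// not part of the denominator, since it counts loads, not lookups.
func hitRatio(s HitStats) float64 {
	total := s.Hits + s.GraceHits + s.Misses
	if total == 0 {
		return 0
	}
	return float64(s.Hits+s.GraceHits) / float64(total)
}

func main() {
	s := HitStats{Hits: 90, GraceHits: 5, Misses: 5, Replacements: 10}
	fmt.Printf("hit ratio: %.2f\n", hitRatio(s)) // prints 0.95
}
```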
type SizeStats ¶ added in v1.8.0
type SizeStats struct {
	// Size is the current number of items in the cache.
	Size int
	// Capacity is the maximum number of allowed items in the cache.
	//
	// Note that, for map backend, there is no upper bound in number of items in the cache.
	// Therefore, Capacity is always -1 for map backend.
	Capacity int
}