README
CCache
CCache is an LRU Cache, written in Go, focused on supporting high concurrency.
Lock contention on the list is reduced by:
- Introducing a window which limits the frequency that an item can get promoted
- Using a buffered channel to queue promotions for a single worker
- Garbage collecting within the same thread as the worker
Setup
First, download the project:
go get github.com/karlseguin/ccache
Configuration
Next, import and create a Cache instance:
import (
"github.com/karlseguin/ccache"
)
var cache = ccache.New(ccache.Configure())
Configure exposes a chainable API:
var cache = ccache.New(ccache.Configure().MaxSize(1000).ItemsToPrune(100))
The most likely configuration options to tweak are:
- MaxSize(int) - the maximum size to store in the cache (default: 5000)
- GetsPerPromote(int) - the number of times an item is fetched before we promote it. For large caches with long TTLs, it normally isn't necessary to promote an item after every fetch (default: 3)
- ItemsToPrune(int) - the number of items to prune when we hit MaxSize. Freeing up more than 1 slot at a time improves performance (default: 500)
Configurations that change the internals of the cache, which aren't as likely to need tweaking:
- Buckets(uint32) - ccache shards its internal map to provide greater concurrency. Must be a power of 2 (default: 16)
- PromoteBuffer(int) - the size of the buffer to use to queue promotions (default: 1024)
- DeleteBuffer(int) - the size of the buffer to use to queue deletions (default: 1024)
Usage
Once the cache is set up, you can Get, Set and Delete items from it. A Get returns an *Item:
Get
item := cache.Get("user:4")
if item == nil {
//handle
} else {
user := item.Value().(*User)
}
The returned *Item exposes a number of methods:
- Value() interface{} - the value cached
- Expired() bool - whether the item is expired or not
- TTL() time.Duration - the duration before the item expires (will be a negative value for expired items)
- Expires() time.Time - the time the item will expire
By returning expired items, CCache lets you decide if you want to serve stale content or not. For example, you might decide to serve up slightly stale content (< 30 seconds old) while re-fetching newer data in the background. You might also decide to serve up infinitely stale content if you're unable to get new data from your source.
Set
Set expects the key, value and ttl:
cache.Set("user:4", user, time.Minute * 10)
Fetch
There's also a Fetch which mixes a Get and a Set:
item, err := cache.Fetch("user:4", time.Minute * 10, func() (interface{}, error) {
// code to fetch the data in case of a miss
// should return the data to cache and the error, if any
})
Delete
Delete expects the key to delete. It's ok to call Delete on a non-existent key:
cache.Delete("user:4")
Extend
The life of an item can be changed via the Extend method. This will change the expiry of the item by the specified duration relative to the current time.
Replace
The value of an item can be updated to a new value without renewing the item's TTL or its position in the LRU:
cache.Replace("user:4", user)
Replace returns true if the item existed (and thus was replaced). In the case where the key was not in the cache, the value is not inserted and false is returned.
Stop
The cache's background worker can be stopped by calling Stop. Once Stop is called, the cache should not be used (calls are likely to panic). Stop must be called in order to allow the garbage collector to reap the cache.
Tracking
CCache supports a special tracking mode which is meant to be used in conjunction with other pieces of your code that maintain a long-lived reference to data.
When you configure your cache with Track():
cache = ccache.New(ccache.Configure().Track())
The items retrieved via TrackingGet will not be eligible for purge until Release is called on them:
item := cache.TrackingGet("user:4")
user := item.Value() //will be nil if "user:4" didn't exist in the cache
item.Release() //can be called even if item.Value() returned nil
In practice, Release wouldn't be called until later, at some other place in your code.
There are a couple of reasons to use the tracking mode if other parts of your code also hold references to objects. First, if you're already going to hold a reference to these objects, there's really no reason not to have them in the cache - the memory is used up anyway.
More importantly, it helps ensure that your code returns consistent data. Without tracking, "user:4" might be purged, and a subsequent Fetch would reload the data. This can result in different versions of "user:4" being returned by different parts of your system.
LayeredCache
CCache's LayeredCache stores and retrieves values by both a primary and secondary key. Deletion can happen against either the primary and secondary key together, or against the primary key only (removing all values that share the same primary key).
LayeredCache is useful for HTTP caching, when you want to purge all variations of a request.
LayeredCache takes the same configuration object as the main cache and offers the same optional tracking capabilities, but exposes a slightly different API:
cache := ccache.Layered(ccache.Configure())
cache.Set("/users/goku", "type:json", "{value_to_cache}", time.Minute * 5)
cache.Set("/users/goku", "type:xml", "<value_to_cache>", time.Minute * 5)
json := cache.Get("/users/goku", "type:json")
xml := cache.Get("/users/goku", "type:xml")
cache.Delete("/users/goku", "type:json")
cache.Delete("/users/goku", "type:xml")
// OR
cache.DeleteAll("/users/goku")
SecondaryCache
In some cases, when using a LayeredCache, it may be desirable to always act on the secondary portion of the cache entry. This could be the case where the primary key is used as a key elsewhere in your code. The SecondaryCache is retrieved with:
cache := ccache.Layered(ccache.Configure())
sCache := cache.GetOrCreateSecondaryCache("/users/goku")
sCache.Set("type:json", "{value_to_cache}", time.Minute * 5)
The semantics for interacting with the SecondaryCache are exactly the same as for a regular Cache. However, one difference is that Get will not return nil, but will return an empty 'cache' for a non-existent primary key.
Size
By default, items added to a cache have a size of 1. This means that if you configure MaxSize(10000), you'll be able to store 10000 items in the cache.
However, if the values you set into the cache have a method Size() int64, this size will be used. Note that ccache has an overhead of ~350 bytes per entry, which isn't taken into account. In other words, given a filled-up cache with MaxSize(4096000) and items that return a Size() int64 of 2048, we can expect to find 2000 items (4096000/2048) taking a total space of 4796000 bytes.
Want Something Simpler?
For a simpler cache, check out rcache
Documentation
Overview ¶
An LRU cache aimed at high concurrency
Index ¶
- Variables
- type Cache
- func (c *Cache) Clear()
- func (c *Cache) Delete(key string) bool
- func (c *Cache) Fetch(key string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)
- func (c *Cache) Get(key string) *Item
- func (c *Cache) ItemCount() int
- func (c *Cache) Replace(key string, value interface{}) bool
- func (c *Cache) Set(key string, value interface{}, duration time.Duration)
- func (c *Cache) Stop()
- func (c *Cache) TrackingGet(key string) TrackedItem
- type Configuration
- func (c *Configuration) Buckets(count uint32) *Configuration
- func (c *Configuration) DeleteBuffer(size uint32) *Configuration
- func (c *Configuration) GetsPerPromote(count int32) *Configuration
- func (c *Configuration) ItemsToPrune(count uint32) *Configuration
- func (c *Configuration) MaxSize(max int64) *Configuration
- func (c *Configuration) OnDelete(callback func(item *Item)) *Configuration
- func (c *Configuration) PromoteBuffer(size uint32) *Configuration
- func (c *Configuration) Track() *Configuration
- type Item
- type LayeredCache
- func (c *LayeredCache) Clear()
- func (c *LayeredCache) Delete(primary, secondary string) bool
- func (c *LayeredCache) DeleteAll(primary string) bool
- func (c *LayeredCache) Fetch(primary, secondary string, duration time.Duration, ...) (*Item, error)
- func (c *LayeredCache) Get(primary, secondary string) *Item
- func (c *LayeredCache) GetOrCreateSecondaryCache(primary string) *SecondaryCache
- func (c *LayeredCache) ItemCount() int
- func (c *LayeredCache) Replace(primary, secondary string, value interface{}) bool
- func (c *LayeredCache) Set(primary, secondary string, value interface{}, duration time.Duration)
- func (c *LayeredCache) Stop()
- func (c *LayeredCache) TrackingGet(primary, secondary string) TrackedItem
- type SecondaryCache
- func (s *SecondaryCache) Delete(secondary string) bool
- func (s *SecondaryCache) Fetch(secondary string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)
- func (s *SecondaryCache) Get(secondary string) *Item
- func (s *SecondaryCache) Replace(secondary string, value interface{}) bool
- func (s *SecondaryCache) Set(secondary string, value interface{}, duration time.Duration) *Item
- func (c *SecondaryCache) TrackingGet(secondary string) TrackedItem
- type Sized
- type TrackedItem
Constants ¶
This section is empty.
Variables ¶
var NilTracked = new(nilItem)
Functions ¶
This section is empty.
Types ¶
type Cache ¶
type Cache struct { *Configuration // contains filtered or unexported fields }
func New ¶
func New(config *Configuration) *Cache
Create a new cache with the specified configuration. See ccache.Configure() for creating a configuration.
func (*Cache) Clear ¶
func (c *Cache) Clear()
This isn't thread safe. It's meant to be called from non-concurrent tests.
func (*Cache) Delete ¶
Remove the item from the cache, return true if the item was present, false otherwise.
func (*Cache) Fetch ¶
func (c *Cache) Fetch(key string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)
Attempts to get the value from the cache and calls fetch on a miss (missing or stale item). If fetch returns an error, no value is cached and the error is returned back to the caller.
func (*Cache) Get ¶
Get an item from the cache. Returns nil if the item wasn't found. This can return an expired item. Use item.Expired() to see if the item is expired and item.TTL() to see how long until the item expires (which will be negative for an already expired item).
func (*Cache) Replace ¶
Replace the value if it exists; does not set if it doesn't. Returns true if the item existed and was replaced, false otherwise. Replace does not reset the item's TTL.
func (*Cache) Stop ¶
func (c *Cache) Stop()
Stops the background worker. Operations performed on the cache after Stop is called are likely to panic
func (*Cache) TrackingGet ¶
func (c *Cache) TrackingGet(key string) TrackedItem
Used when the cache was created with the Track() configuration option. Avoid otherwise
type Configuration ¶
type Configuration struct {
// contains filtered or unexported fields
}
func Configure ¶
func Configure() *Configuration
Creates a configuration object with sensible defaults. Use this as the start of the fluent configuration, e.g.: ccache.New(ccache.Configure().MaxSize(10000))
func (*Configuration) Buckets ¶
func (c *Configuration) Buckets(count uint32) *Configuration
Keys are hashed modulo the bucket count to provide greater concurrency (every set requires a write lock on the bucket). Must be a power of 2 (1, 2, 4, 8, 16, ...) [16]
func (*Configuration) DeleteBuffer ¶
func (c *Configuration) DeleteBuffer(size uint32) *Configuration
The size of the queue for items which should be deleted. If the queue fills up, calls to Delete() will block
func (*Configuration) GetsPerPromote ¶
func (c *Configuration) GetsPerPromote(count int32) *Configuration
Given a large cache with a high read / write ratio, it's usually unnecessary to promote an item on every Get. GetsPerPromote specifies the number of Gets a key must have before being promoted [3]
func (*Configuration) ItemsToPrune ¶
func (c *Configuration) ItemsToPrune(count uint32) *Configuration
The number of items to prune when memory is low [500]
func (*Configuration) MaxSize ¶
func (c *Configuration) MaxSize(max int64) *Configuration
The max size for the cache [5000]
func (*Configuration) OnDelete ¶
func (c *Configuration) OnDelete(callback func(item *Item)) *Configuration
OnDelete allows setting a callback function to react to item deletion. This typically allows cleanup of resources, such as calling Close() on cached objects that require some kind of tear-down.
func (*Configuration) PromoteBuffer ¶
func (c *Configuration) PromoteBuffer(size uint32) *Configuration
The size of the queue for items which should be promoted. If the queue fills up, promotions are skipped [1024]
func (*Configuration) Track ¶
func (c *Configuration) Track() *Configuration
By turning tracking on and using the cache's TrackingGet, the cache won't evict items which you haven't called Release() on. It's a simple reference counter.
type LayeredCache ¶
type LayeredCache struct { *Configuration // contains filtered or unexported fields }
func Layered ¶
func Layered(config *Configuration) *LayeredCache
See ccache.Configure() for creating a configuration
func (*LayeredCache) Clear ¶
func (c *LayeredCache) Clear()
This isn't thread safe. It's meant to be called from non-concurrent tests.
func (*LayeredCache) Delete ¶
func (c *LayeredCache) Delete(primary, secondary string) bool
Remove the item from the cache, return true if the item was present, false otherwise.
func (*LayeredCache) DeleteAll ¶
func (c *LayeredCache) DeleteAll(primary string) bool
Deletes all items that share the same primary key
func (*LayeredCache) Fetch ¶
func (c *LayeredCache) Fetch(primary, secondary string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)
Attempts to get the value from the cache and calls fetch on a miss. If fetch returns an error, no value is cached and the error is returned back to the caller.
func (*LayeredCache) Get ¶
func (c *LayeredCache) Get(primary, secondary string) *Item
Get an item from the cache. Returns nil if the item wasn't found. This can return an expired item. Use item.Expired() to see if the item is expired and item.TTL() to see how long until the item expires (which will be negative for an already expired item).
func (*LayeredCache) GetOrCreateSecondaryCache ¶
func (c *LayeredCache) GetOrCreateSecondaryCache(primary string) *SecondaryCache
Get the secondary cache for a given primary key. This operation will never return nil. In the case where the primary key does not exist, a new, underlying, empty bucket will be created and returned.
func (*LayeredCache) ItemCount ¶
func (c *LayeredCache) ItemCount() int
func (*LayeredCache) Replace ¶
func (c *LayeredCache) Replace(primary, secondary string, value interface{}) bool
Replace the value if it exists; does not set if it doesn't. Returns true if the item existed and was replaced, false otherwise. Replace does not reset the item's TTL nor does it alter its position in the LRU.
func (*LayeredCache) Set ¶
func (c *LayeredCache) Set(primary, secondary string, value interface{}, duration time.Duration)
Set the value in the cache for the specified duration
func (*LayeredCache) Stop ¶
func (c *LayeredCache) Stop()
func (*LayeredCache) TrackingGet ¶
func (c *LayeredCache) TrackingGet(primary, secondary string) TrackedItem
Used when the cache was created with the Track() configuration option. Avoid otherwise
type SecondaryCache ¶
type SecondaryCache struct {
// contains filtered or unexported fields
}
func (*SecondaryCache) Delete ¶
func (s *SecondaryCache) Delete(secondary string) bool
Delete a secondary key. The semantics are the same as for LayeredCache.Delete
func (*SecondaryCache) Fetch ¶
func (s *SecondaryCache) Fetch(secondary string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)
Fetch or set a secondary key. The semantics are the same as for LayeredCache.Fetch
func (*SecondaryCache) Get ¶
func (s *SecondaryCache) Get(secondary string) *Item
Get the secondary key. The semantics are the same as for LayeredCache.Get
func (*SecondaryCache) Replace ¶
func (s *SecondaryCache) Replace(secondary string, value interface{}) bool
Replace a secondary key. The semantics are the same as for LayeredCache.Replace
func (*SecondaryCache) Set ¶
func (s *SecondaryCache) Set(secondary string, value interface{}, duration time.Duration) *Item
Set the secondary key to a value. The semantics are the same as for LayeredCache.Set
func (*SecondaryCache) TrackingGet ¶
func (c *SecondaryCache) TrackingGet(secondary string) TrackedItem
Track a secondary key. The semantics are the same as for LayeredCache.TrackingGet