bigcache

package module
v1.0.0
Published: Sep 20, 2017 License: Apache-2.0 Imports: 8 Imported by: 0

README

BigCache

BigCache is a fast, concurrent, evicting in-memory cache designed to hold a large number of entries without degrading performance. It keeps entries on the heap but omits GC for them. To achieve that, all operations are performed on byte slices, so in most use cases entries need to be (de)serialized in front of the cache.

Usage

Simple initialization
import "github.com/allegro/bigcache"

cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))

cache.Set("my-unique-key", []byte("value"))

entry, _ := cache.Get("my-unique-key")
fmt.Println(string(entry))
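
Since BigCache stores only raw []byte values, structured data has to be serialized in front of the cache, as mentioned above. A minimal sketch using encoding/json (the User type and key are illustrative):

import (
	"encoding/json"
	"time"

	"github.com/allegro/bigcache"
)

type User struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))

// Serialize before writing...
encoded, _ := json.Marshal(User{ID: 1, Name: "alice"})
cache.Set("user-1", encoded)

// ...and deserialize after reading.
var user User
if entry, err := cache.Get("user-1"); err == nil {
	json.Unmarshal(entry, &user)
}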
Custom initialization

When the cache load can be predicted in advance, it is better to use custom initialization, because it avoids extra memory allocations.

import (
	"fmt"
	"log"
	"time"

	"github.com/allegro/bigcache"
)

config := bigcache.Config{
	// number of shards (must be a power of 2)
	Shards: 1024,
	// time after which an entry can be evicted
	LifeWindow: 10 * time.Minute,
	// rps * lifeWindow, used only for initial memory allocation
	MaxEntriesInWindow: 1000 * 10 * 60,
	// max entry size in bytes, used only for initial memory allocation
	MaxEntrySize: 500,
	// prints information about additional memory allocation
	Verbose: true,
	// cache will not allocate more memory than this limit, value in MB
	// if the limit is reached then the oldest entries can be overwritten by new ones
	// 0 value means no size limit
	HardMaxCacheSize: 8192,
	// callback fired when the oldest entry is removed because of its expiration
	// time or because there is no space left for a new entry. The default value
	// is nil, which means no callback; this also avoids unwrapping the oldest entry.
	OnRemove: nil,
}

cache, initErr := bigcache.NewBigCache(config)
if initErr != nil {
	log.Fatal(initErr)
}

cache.Set("my-unique-key", []byte("value"))

if entry, err := cache.Get("my-unique-key"); err == nil {
	fmt.Println(string(entry))
}

Benchmarks

Three caches were compared: bigcache, freecache and map. The benchmarks were run on a MacBook Pro (3 GHz Intel Core i7, 16 GB RAM).

Writes and reads
cd caches_bench; go test -bench=. -benchtime=10s ./... -timeout 30m

BenchmarkMapSet-4              	20000000	      1681 ns/op	     296 B/op	       3 allocs/op
BenchmarkFreeCacheSet-4        	20000000	      1132 ns/op	     349 B/op	       3 allocs/op
BenchmarkBigCacheSet-4         	20000000	       831 ns/op	     305 B/op	       2 allocs/op
BenchmarkMapGet-4              	30000000	       540 ns/op	      24 B/op	       2 allocs/op
BenchmarkFreeCacheGet-4        	20000000	       986 ns/op	     152 B/op	       4 allocs/op
BenchmarkBigCacheGet-4         	20000000	       726 ns/op	      40 B/op	       3 allocs/op
BenchmarkBigCacheSetParallel-4 	20000000	       532 ns/op	     313 B/op	       3 allocs/op
BenchmarkFreeCacheSetParallel-4	20000000	       564 ns/op	     357 B/op	       4 allocs/op
BenchmarkBigCacheGetParallel-4 	50000000	       338 ns/op	      40 B/op	       3 allocs/op
BenchmarkFreeCacheGetParallel-4	30000000	       581 ns/op	     152 B/op	       4 allocs/op

Writes and reads in bigcache are faster than in freecache. Writes to map are the slowest.
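
For reference, these results come from standard Go benchmarks; a parallel Set benchmark of similar shape can be written as follows (a sketch, not the exact caches_bench code):

package caches_bench

import (
	"fmt"
	"testing"
	"time"

	"github.com/allegro/bigcache"
)

// A parallel write benchmark similar in shape to BenchmarkBigCacheSetParallel above.
func BenchmarkBigCacheSetParallelSketch(b *testing.B) {
	cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			cache.Set(fmt.Sprintf("key-%d", i), []byte("value"))
			i++
		}
	})
}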

GC pause time
cd caches_bench; go run caches_gc_overhead_comparison.go

Number of entries:  20000000
GC pause for bigcache:  27.81671ms
GC pause for freecache:  30.218371ms
GC pause for map:  11.590772251s

The test shows how long the GC pauses are for caches filled with 20 million entries. Bigcache and freecache have very similar GC pause times. Both clearly reduce GC overhead compared to map, whose GC pause took more than 10 seconds.
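
A minimal sketch of how such a pause measurement can be taken with runtime.ReadMemStats (this is not the exact caches_gc_overhead_comparison.go code; the entry count is reduced here for illustration):

package main

import (
	"fmt"
	"runtime"
	"time"

	"github.com/allegro/bigcache"
)

func main() {
	cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))

	// Fill the cache with a large number of entries.
	for i := 0; i < 1000000; i++ {
		cache.Set(fmt.Sprintf("key-%d", i), []byte("value"))
	}

	// Force a collection and read the cumulative stop-the-world pause time.
	runtime.GC()
	var stats runtime.MemStats
	runtime.ReadMemStats(&stats)
	fmt.Println("GC pause total:", time.Duration(stats.PauseTotalNs))
}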

How it works

BigCache relies on an optimization introduced in Go 1.5 (issue-9477): if a map contains no pointers in its keys and values, the GC omits scanning its content. BigCache therefore uses a map[uint64]uint32, where keys are hashed and values are offsets of entries.

Entries are kept in a byte slice, again to avoid GC scanning. The byte slice can grow to gigabytes without impacting performance, because the GC sees only a single pointer to it.
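
The idea can be sketched as follows (an illustration of the layout, not bigcache's actual internals):

import "encoding/binary"

// shard illustrates the layout: a pointer-free index plus one growing byte
// slice, so the GC skips the map's content and sees only a single pointer.
type shard struct {
	index   map[uint64]uint32 // hashed key -> offset into entries
	entries []byte            // length-prefixed serialized entries
}

func (s *shard) set(hashedKey uint64, entry []byte) {
	offset := uint32(len(s.entries))
	var header [4]byte
	binary.LittleEndian.PutUint32(header[:], uint32(len(entry)))
	s.entries = append(s.entries, header[:]...)
	s.entries = append(s.entries, entry...)
	s.index[hashedKey] = offset
}

func (s *shard) get(hashedKey uint64) ([]byte, bool) {
	offset, ok := s.index[hashedKey]
	if !ok {
		return nil, false
	}
	length := binary.LittleEndian.Uint32(s.entries[offset : offset+4])
	return s.entries[offset+4 : offset+4+length], true
}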

Bigcache vs Freecache

Both caches provide the same core features, but they reduce GC overhead in different ways. Bigcache relies on map[uint64]uint32; freecache implements its own mapping built on slices to reduce the number of pointers.

Results from benchmark tests are presented above. One advantage of bigcache over freecache is that you don't need to know the size of the cache in advance, because when bigcache is full it can allocate additional memory for new entries instead of overwriting existing ones, as freecache currently does. However, a hard maximum size can also be set in bigcache; see HardMaxCacheSize.

More

The genesis of bigcache is described in the allegro.tech blog post: writing a very fast cache service in Go

License

BigCache is released under the Apache 2.0 license (see LICENSE)

Documentation

Index

Constants

const ErrCannotRetrieveEntry = iteratorError("Could not retrieve entry from cache")

ErrCannotRetrieveEntry is reported when an entry cannot be retrieved from the underlying cache

const ErrInvalidIteratorState = iteratorError("Iterator is in invalid state. Use SetNext() to move to next position")

ErrInvalidIteratorState is reported when the iterator is in an invalid state

Variables

This section is empty.

Functions

This section is empty.

Types

type BigCache

type BigCache struct {
	// contains filtered or unexported fields
}

BigCache is a fast, concurrent, evicting cache created to hold a large number of entries without degrading performance. It keeps entries on the heap but omits GC for them. To achieve that, all operations are performed on byte slices, so in most use cases entries need to be (de)serialized in front of the cache.

func NewBigCache

func NewBigCache(config Config) (*BigCache, error)

NewBigCache initializes a new instance of BigCache

func (*BigCache) Get

func (c *BigCache) Get(key string) ([]byte, error)

Get reads the entry for the key

func (*BigCache) Iterator

func (c *BigCache) Iterator() *EntryInfoIterator

Iterator returns an iterator over the EntryInfo entries of the whole cache.

func (*BigCache) Len

func (c *BigCache) Len() int

Len computes the number of entries in the cache

func (*BigCache) Reset

func (c *BigCache) Reset() error

Reset empties all cache shards

func (*BigCache) Set

func (c *BigCache) Set(key string, entry []byte) error

Set saves an entry under the key

type Config

type Config struct {
	// Number of cache shards, value must be a power of two
	Shards int
	// Time after which entry can be evicted
	LifeWindow time.Duration
	// Interval between removing expired entries (clean up).
	// If set to <= 0 then no action is performed. Setting to < 1 second is counterproductive — bigcache has a one second resolution.
	CleanWindow time.Duration
	// Max number of entries in life window. Used only to calculate initial size for cache shards.
	// When proper value is set then additional memory allocation does not occur.
	MaxEntriesInWindow int
	// Max size of entry in bytes. Used only to calculate initial size for cache shards.
	MaxEntrySize int
	// Verbose mode prints information about new memory allocation
	Verbose bool
	// Hasher used to map between string keys and unsigned 64-bit integers; by default fnv64 hashing is used.
	Hasher Hasher
	// HardMaxCacheSize is a limit for cache size in MB. Cache will not allocate more memory than this limit.
	// It can protect the application from consuming all available memory on the machine, and therefore from
	// being killed by the OOM killer. Default value is 0, which means unlimited size. When the limit is higher
	// than 0 and is reached, the oldest entries are overwritten by new ones.
	HardMaxCacheSize int
	// OnRemove is a callback fired when the oldest entry is removed because of its expiration time or because
	// there is no space left for a new entry. Default value is nil, which means no callback; this also avoids
	// unwrapping the oldest entry.
	OnRemove func(key string, entry []byte)
}

Config for BigCache
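
For example, an eviction-logging OnRemove callback can be attached to a default config like this (a sketch):

import (
	"log"
	"time"

	"github.com/allegro/bigcache"
)

config := bigcache.DefaultConfig(10 * time.Minute)
// Log every eviction; the removed key and its raw entry are passed to the callback.
config.OnRemove = func(key string, entry []byte) {
	log.Printf("evicted %q (%d bytes)", key, len(entry))
}

cache, err := bigcache.NewBigCache(config)
if err != nil {
	log.Fatal(err)
}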

func DefaultConfig

func DefaultConfig(eviction time.Duration) Config

DefaultConfig initializes a config with default values. When the load for BigCache can be predicted in advance, it is better to use a custom config.

type EntryInfo

type EntryInfo struct {
	// contains filtered or unexported fields
}

EntryInfo holds information about an entry in the cache

func (EntryInfo) Hash

func (e EntryInfo) Hash() uint64

Hash returns entry's hash value

func (EntryInfo) Key

func (e EntryInfo) Key() string

Key returns entry's underlying key

func (EntryInfo) Timestamp

func (e EntryInfo) Timestamp() uint64

Timestamp returns entry's timestamp (time of insertion)

func (EntryInfo) Value

func (e EntryInfo) Value() []byte

Value returns entry's underlying value

type EntryInfoIterator

type EntryInfoIterator struct {
	// contains filtered or unexported fields
}

EntryInfoIterator allows iterating over entries in the cache

func (*EntryInfoIterator) SetNext

func (it *EntryInfoIterator) SetNext() bool

SetNext moves to the next element and returns true if it exists.

func (*EntryInfoIterator) Value

func (it *EntryInfoIterator) Value() (EntryInfo, error)

Value returns the current value from the iterator
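
A typical loop over the iterator looks like this (a sketch, assuming a populated cache):

iterator := cache.Iterator()
for iterator.SetNext() {
	info, err := iterator.Value()
	if err != nil {
		break
	}
	fmt.Println(info.Key(), string(info.Value()))
}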

type EntryNotFoundError

type EntryNotFoundError struct {
	// contains filtered or unexported fields
}

EntryNotFoundError is an error type returned when no entry was found for the provided key

func (EntryNotFoundError) Error

func (e EntryNotFoundError) Error() string

Error returns the message reported when an entry does not exist.
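
A miss can be distinguished from other errors by checking for this type (a sketch; the type switch covers both the value and pointer forms since the docs above do not state which one Get returns):

if entry, err := cache.Get("missing-key"); err == nil {
	fmt.Println(string(entry))
} else {
	switch err.(type) {
	case bigcache.EntryNotFoundError, *bigcache.EntryNotFoundError:
		fmt.Println("cache miss")
	default:
		fmt.Println("unexpected error:", err)
	}
}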

type Hasher

type Hasher interface {
	Sum64(string) uint64
}

Hasher is responsible for generating an unsigned, 64-bit hash of the provided string. A Hasher should minimize collisions (the same hash generated for different strings), and because performance also matters, fast functions are preferable (e.g. the FarmHash family).
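
A custom Hasher can be plugged in via Config.Hasher; for instance, one backed by the standard library's hash/fnv (an illustrative sketch; bigcache already ships an fnv64 default):

import (
	"hash/fnv"
	"time"

	"github.com/allegro/bigcache"
)

// fnvHasher adapts hash/fnv to bigcache's Hasher interface.
type fnvHasher struct{}

func (fnvHasher) Sum64(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64()
}

config := bigcache.DefaultConfig(10 * time.Minute)
config.Hasher = fnvHasher{}
cache, err := bigcache.NewBigCache(config)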
