freecache

package module
v1.1.0
Published: Feb 26, 2019 License: MIT Imports: 10 Imported by: 0

README

FreeCache - A cache library for Go with zero GC overhead and high concurrent performance.

Long-lived objects in memory introduce expensive GC overhead. With FreeCache, you can cache an unlimited number of objects in memory without increased latency or degraded throughput.


Features

  • Store hundreds of millions of entries
  • Zero GC overhead
  • High concurrent thread-safe access
  • Pure Go implementation
  • Expiration support
  • Nearly LRU algorithm
  • Strictly limited memory usage
  • Comes with a toy server that supports a few basic Redis commands with pipelining
  • Iterator support

Performance

Here are the benchmark results compared to the built-in map: Set is about 2x faster than the built-in map, and Get is about half as fast. Since this is a single-threaded benchmark, in a multi-threaded environment FreeCache should be many times faster than a built-in map protected by a single lock.

BenchmarkCacheSet        3000000               446 ns/op
BenchmarkMapSet          2000000               861 ns/op
BenchmarkCacheGet        3000000               517 ns/op
BenchmarkMapGet         10000000               212 ns/op
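
For reference, here is a minimal sketch of how such a comparison could be written with Go's testing package; it is not the project's actual benchmark file, and the cache size, key scheme, and value size are assumptions.

package freecache_test

import (
    "strconv"
    "testing"

    "github.com/coocood/freecache"
)

// BenchmarkCacheSet measures freecache Set throughput with small keys and values.
func BenchmarkCacheSet(b *testing.B) {
    cache := freecache.NewCache(256 * 1024 * 1024)
    value := make([]byte, 10)
    for i := 0; i < b.N; i++ {
        cache.Set([]byte(strconv.Itoa(i)), value, 0)
    }
}

// BenchmarkMapSet runs the same workload against a plain built-in map.
func BenchmarkMapSet(b *testing.B) {
    m := make(map[string][]byte)
    value := make([]byte, 10)
    for i := 0; i < b.N; i++ {
        m[strconv.Itoa(i)] = value
    }
}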

Example Usage

package main

import (
    "fmt"
    "runtime/debug"

    "github.com/coocood/freecache"
)

func main() {
    cacheSize := 100 * 1024 * 1024
    cache := freecache.NewCache(cacheSize)
    debug.SetGCPercent(20)
    key := []byte("abc")
    val := []byte("def")
    expire := 60 // expire in 60 seconds
    if err := cache.Set(key, val, expire); err != nil {
        fmt.Println(err)
    }
    got, err := cache.Get(key)
    if err != nil {
        fmt.Println(err)
    } else {
        fmt.Println(string(got))
    }
    affected := cache.Del(key)
    fmt.Println("deleted key ", affected)
    fmt.Println("entry count ", cache.EntryCount())
}

Notice

  • Memory is preallocated.
  • If you allocate a large amount of memory, you may need to set debug.SetGCPercent() to a much lower percentage to get a normal GC frequency.

How it is done

FreeCache avoids GC overhead by reducing the number of pointers. No matter how many entries are stored in it, there are only 512 pointers. The data set is sharded into 256 segments by the hash value of the key. Each segment has only two pointers: one is the ring buffer that stores keys and values, the other is the index slice used to look up an entry. Each segment has its own lock, so it supports highly concurrent access.
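
To illustrate the sharding idea only (this is not the actual freecache code, which uses its own hash function and segment layout), the segment could be chosen from the key's hash roughly like this:

package main

import (
    "fmt"
    "hash/fnv"
)

const segCount = 256

// segmentIndex picks one of the 256 segments from the key's hash value.
func segmentIndex(key []byte) uint64 {
    h := fnv.New64a()
    h.Write(key)
    return h.Sum64() % segCount // low bits of the hash choose the segment
}

func main() {
    fmt.Println(segmentIndex([]byte("abc")))
    fmt.Println(segmentIndex([]byte("def")))
}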

TODO

  • Support dump to file and load from file.
  • Support resizing the cache at runtime.

License

The MIT License

Documentation

Index

Constants

const ENTRY_HDR_SIZE = 24
const HASH_ENTRY_SIZE = 16

Variables

var ErrLargeEntry = errors.New("The entry size is larger than 1/1024 of cache size")
var ErrLargeKey = errors.New("The key is larger than 65535")
var ErrNotFound = errors.New("Entry not found")
var ErrOutOfRange = errors.New("out of range")

Functions

This section is empty.

Types

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

Cache is a freecache instance.

func NewCache

func NewCache(size int) (cache *Cache)

NewCache returns a newly initialized cache of the given size. The cache size will be set to 512KB at minimum. If the size is set relatively large, you should call `debug.SetGCPercent()` with a much smaller value to limit memory consumption and GC pause time.

func (*Cache) AverageAccessTime

func (cache *Cache) AverageAccessTime() int64

AverageAccessTime returns the average unix timestamp at which entries were last accessed. Entries with a greater access time will be evacuated when they are about to be overwritten by a new value.

func (*Cache) Clear

func (cache *Cache) Clear()

Clear clears the cache.

func (*Cache) Del

func (cache *Cache) Del(key []byte) (affected bool)

Del deletes an item in the cache by key and reports whether a delete occurred.

func (*Cache) DelInt

func (cache *Cache) DelInt(key int64) (affected bool)

DelInt deletes an item in the cache by int key and reports whether a delete occurred.

func (*Cache) EntryCount

func (cache *Cache) EntryCount() (entryCount int64)

EntryCount returns the number of items currently in the cache.

func (*Cache) EvacuateCount

func (cache *Cache) EvacuateCount() (count int64)

EvacuateCount is a metric indicating the number of times an eviction occurred.

func (*Cache) ExpiredCount

func (cache *Cache) ExpiredCount() (count int64)

ExpiredCount is a metric indicating the number of times an entry expired.

func (*Cache) Get

func (cache *Cache) Get(key []byte) (value []byte, err error)

Get returns the value or a not-found error.
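
A brief usage sketch, assuming `cache` is an existing *Cache and the fmt and freecache packages are imported; misses are reported via the ErrNotFound variable listed above:

value, err := cache.Get([]byte("missing"))
if err == freecache.ErrNotFound {
    fmt.Println("key is absent or expired")
} else if err == nil {
    fmt.Println(string(value))
}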

func (*Cache) GetInt

func (cache *Cache) GetInt(key int64) (value []byte, err error)

GetInt returns the value for an int64 key or a not-found error.

func (*Cache) GetIntWithExpiration

func (cache *Cache) GetIntWithExpiration(key int64) (value []byte, expireAt uint32, err error)

GetIntWithExpiration returns the value and expiration or a not found error.

func (*Cache) GetWithBuf

func (cache *Cache) GetWithBuf(key, buf []byte) (value []byte, err error)

GetWithBuf copies the value into buf or returns a not-found error. This method doesn't allocate memory when the capacity of buf is greater than or equal to the value's length.
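
A brief sketch of buffer reuse, assuming `cache` is an existing *Cache and the stored values fit in 64 bytes:

buf := make([]byte, 64)
value, err := cache.GetWithBuf([]byte("abc"), buf)
if err == nil {
    // when the value fits, it is served from buf and no new allocation is made
    fmt.Println(string(value))
}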

func (*Cache) GetWithExpiration

func (cache *Cache) GetWithExpiration(key []byte) (value []byte, expireAt uint32, err error)

GetWithExpiration returns the value with its expiration or a not-found error.

func (*Cache) HitCount

func (cache *Cache) HitCount() (count int64)

HitCount is a metric that returns the number of times a key was found in the cache.

func (*Cache) HitRate

func (cache *Cache) HitRate() float64

HitRate is the ratio of hits over lookups.
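
A brief sketch of reading the statistics together, assuming `cache` is an existing *Cache:

cache.Set([]byte("k"), []byte("v"), 0)
cache.Get([]byte("k"))      // hit
cache.Get([]byte("absent")) // miss
fmt.Println(cache.HitCount(), cache.MissCount(), cache.LookupCount(), cache.HitRate())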

func (*Cache) LookupCount

func (cache *Cache) LookupCount() int64

LookupCount is a metric that returns the number of times a lookup for a given key occurred.

func (*Cache) MissCount

func (cache *Cache) MissCount() (count int64)

MissCount is a metric that returns the number of times a miss occurred in the cache.

func (*Cache) NewIterator

func (cache *Cache) NewIterator() *Iterator

NewIterator creates a new iterator for the cache.

func (*Cache) OverwriteCount

func (cache *Cache) OverwriteCount() (overwriteCount int64)

OverwriteCount indicates the number of times entries have been overwritten.

func (*Cache) ResetStatistics

func (cache *Cache) ResetStatistics()

ResetStatistics resets the cache's statistics counters.

func (*Cache) Set

func (cache *Cache) Set(key, value []byte, expireSeconds int) (err error)

Set sets a key, value and expiration for a cache entry and stores it in the cache. If the key is larger than 65535 bytes or the value is larger than 1/1024 of the cache size, the entry will not be written to the cache. expireSeconds <= 0 means no expiration, but the entry can still be evicted when the cache is full.
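
A brief sketch of checking the documented size limits, assuming `cache` is an existing *Cache:

err := cache.Set([]byte("abc"), []byte("def"), 0) // expireSeconds 0: no expiration
if err == freecache.ErrLargeKey || err == freecache.ErrLargeEntry {
    fmt.Println("entry rejected:", err)
}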

func (*Cache) SetInt

func (cache *Cache) SetInt(key int64, value []byte, expireSeconds int) (err error)

SetInt stores a value in the cache under an int64 key.
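
A brief sketch pairing SetInt with GetInt, assuming `cache` is an existing *Cache:

cache.SetInt(42, []byte("answer"), 0)
if v, err := cache.GetInt(42); err == nil {
    fmt.Println(string(v))
}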

func (*Cache) TTL

func (cache *Cache) TTL(key []byte) (timeLeft uint32, err error)

TTL returns the TTL time left for a given key or a not found error.
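
A brief sketch, assuming `cache` is an existing *Cache:

cache.Set([]byte("abc"), []byte("def"), 60)
if left, err := cache.TTL([]byte("abc")); err == nil {
    fmt.Println(left) // seconds remaining before the entry expires
}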

type Entry

type Entry struct {
	Key   []byte
	Value []byte
}

Entry represents a key/value pair.

type Iterator

type Iterator struct {
	// contains filtered or unexported fields
}

Iterator iterates the entries for the cache.

func (*Iterator) Next

func (it *Iterator) Next() *Entry

Next returns the next entry for the iterator. The order of the entries is not guaranteed. If there are no more entries to return, nil will be returned.
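
A brief iteration sketch, assuming `cache` is an existing *Cache:

it := cache.NewIterator()
for entry := it.Next(); entry != nil; entry = it.Next() {
    fmt.Printf("%s = %s\n", entry.Key, entry.Value)
}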

type RingBuf

type RingBuf struct {
	// contains filtered or unexported fields
}

RingBuf is a ring buffer with a fixed size; when data exceeds the size, old data is overwritten by new data. It only contains the data in the stream from begin to end.
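
A brief sketch of the RingBuf API listed below, assuming the fmt and freecache packages are imported; offsets refer to positions in the logical stream:

rb := freecache.NewRingBuf(64, 0)
rb.Write([]byte("hello ring"))
buf := make([]byte, 4)
if n, err := rb.ReadAt(buf, rb.Begin()); err == nil {
    fmt.Println(string(buf[:n])) // "hell"
}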

func NewRingBuf

func NewRingBuf(size int, begin int64) (rb RingBuf)

func (*RingBuf) Begin

func (rb *RingBuf) Begin() int64

func (*RingBuf) Dump

func (rb *RingBuf) Dump() []byte

Dump creates a copy of the buffer.

func (*RingBuf) End

func (rb *RingBuf) End() int64

func (*RingBuf) EqualAt

func (rb *RingBuf) EqualAt(p []byte, off int64) bool

func (*RingBuf) Evacuate

func (rb *RingBuf) Evacuate(off int64, length int) (newOff int64)

Evacuate reads the data at off, then writes it back to the data stream, keeping it from being overwritten by new data.

func (*RingBuf) ReadAt

func (rb *RingBuf) ReadAt(p []byte, off int64) (n int, err error)

ReadAt reads up to len(p) bytes at offset off of the data stream.

func (*RingBuf) Resize

func (rb *RingBuf) Resize(newSize int)

func (*RingBuf) Size

func (rb *RingBuf) Size() int64

func (*RingBuf) Skip

func (rb *RingBuf) Skip(length int64)

func (*RingBuf) String

func (rb *RingBuf) String() string

func (*RingBuf) Write

func (rb *RingBuf) Write(p []byte) (n int, err error)

func (*RingBuf) WriteAt

func (rb *RingBuf) WriteAt(p []byte, off int64) (n int, err error)

Directories

Path	Synopsis
server	A basic freecache server that supports the Redis protocol
