rockscache

package module v0.1.1
Published: May 22, 2023 License: BSD-3-Clause Imports: 12 Imported by: 16

README



RocksCache

The first Redis cache library to ensure eventual consistency and strong consistency with DB.

Features

  • Eventual Consistency: ensures eventual consistency of cache even in extreme cases
  • Strong consistency: provides strong consistent access to applications
  • Anti-breakdown: a better solution for cache breakdown
  • Anti-penetration: caches empty results so queries for missing keys do not reach the DB
  • Anti-avalanche: randomizes expiry times so keys do not all expire at once
  • Batch Query: fetches multiple keys in a single call

Usage

This cache library uses the most common cache-management policy: update the DB first, then delete the cache.
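The write path under this policy can be sketched as follows. This is an illustration, not library code: `updateThenDelete`, `fakeCache`, and the key name are hypothetical; only the `TagAsDeleted` method shape matches the library's API.

```go
package main

import "fmt"

// cacheDeleter models the single cache method the write path needs;
// rockscache's Client.TagAsDeleted has this shape.
type cacheDeleter interface {
	TagAsDeleted(key string) error
}

// updateThenDelete applies the "update DB, then delete cache" policy:
// persist the change first, and only then invalidate the cached copy.
func updateThenDelete(updateDB func() error, cache cacheDeleter, key string) error {
	if err := updateDB(); err != nil {
		return err // DB update failed: leave the cache untouched
	}
	return cache.TagAsDeleted(key)
}

// fakeCache records deletions so the flow can run without Redis.
type fakeCache struct{ deleted []string }

func (f *fakeCache) TagAsDeleted(key string) error {
	f.deleted = append(f.deleted, key)
	return nil
}

func main() {
	fc := &fakeCache{}
	if err := updateThenDelete(func() error { return nil }, fc, "user:1"); err != nil {
		panic(err)
	}
	fmt.Println(fc.deleted) // the key tagged for deletion after the DB write
}
```

Doing the DB update first matters: if the cache were deleted before the DB write failed, readers would repopulate the cache with the old value and no inconsistency would be visible, but a successful delete after a successful write guarantees the stale copy is gone.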

Read cache
import "github.com/dtm-labs/rockscache"

// create a rockscache client with the default options
rc := rockscache.NewClient(redisClient, rockscache.NewDefaultOptions())

// use Fetch to fetch data
// 1. the first parameter is the key of the data
// 2. the second parameter is the data expiration time
// 3. the third parameter is the data fetch function which is called when the cache does not exist
v, err := rc.Fetch("key1", 300*time.Second, func() (string, error) {
  // fetch data from database or other sources
  return "value1", nil
})
Delete the cache
rc.TagAsDeleted(key)

Batch usage

Batch read cache
import "github.com/dtm-labs/rockscache"

// create a rockscache client with the default options
rc := rockscache.NewClient(redisClient, rockscache.NewDefaultOptions())

// use FetchBatch to fetch data
// 1. the first parameter is the keys list of the data
// 2. the second parameter is the data expiration time
// 3. the third parameter is the batch data fetch function which is called when the cache does not exist
// the parameter of the batch data fetch function is the index list of those keys
// missing in cache, which can be used to form a batch query for missing data.
// the return value of the batch data fetch function is a map, with key of the
// index and value of the corresponding data in form of string
v, err := rc.FetchBatch([]string{"key1", "key2", "key3"}, 300*time.Second, func(idxs []int) (map[int]string, error) {
    // fetch data from database or other sources
    values := make(map[int]string)
    for _, i := range idxs {
        values[i] = fmt.Sprintf("value%d", i)
    }
    return values, nil
})
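Inside the batch fetch function, each index in `idxs` refers back to the original `keys` slice. A small helper (hypothetical, not part of the library) shows how to collect the actual missing keys, e.g. to build one `WHERE key IN (...)` query:

```go
package main

import "fmt"

// missingKeys maps the cache-miss indexes that FetchBatch passes to fn
// back to the corresponding entries of the original keys slice.
func missingKeys(keys []string, idxs []int) []string {
	out := make([]string, 0, len(idxs))
	for _, i := range idxs {
		out = append(out, keys[i])
	}
	return out
}

func main() {
	keys := []string{"key1", "key2", "key3"}
	// suppose key1 and key3 were missing from the cache
	fmt.Println(missingKeys(keys, []int{0, 2}))
}
```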
Batch delete cache
rc.TagAsDeletedBatch(keys)

Eventual consistency

With the introduction of caching, consistency problems appear in a distributed system, because the data is now stored in two places at once: the database and Redis. For background on this consistency problem and an introduction to popular Redis caching solutions, see the linked article.

But all the caching solutions we have seen so far fail to address the following data-inconsistency scenario unless they introduce versioning at the application level.

[Figure: cache-version-problem]

Even if you use locking during the update, there are still corner cases that can cause inconsistency.

[Figure: redis cache inconsistency]
Solution

This project brings you a brand-new solution that guarantees data consistency between the cache and the database without introducing versioning. The solution is the first of its kind, has been patented, and is now open-sourced for everyone to use.

If the developer calls Fetch when reading data and makes sure to call TagAsDeleted after updating the database, the cache guarantees eventual consistency. When step 5 in the diagram above writes v1, that write will eventually be ignored under this solution.

For a full runnable example, see dtm-cases/cache

Strongly consistent access

If your application needs caching with strong consistency rather than eventual consistency, this can be supported by turning on the option StrongConsistency, with the access method remaining the same:

rc.Options.StrongConsistency = true

Refer to cache consistency for detailed principles and dtm-cases/cache for examples

Downgrading and strong consistency

The library supports downgrading via two switches:

  • DisableCacheRead: turns off cache reads, default false; if on, then Fetch does not read from the cache, but calls fn directly to fetch the data
  • DisableCacheDelete: disables cache delete, default false; if on, then TagAsDeleted does nothing and returns directly

When Redis has a problem and needs to be downgraded, you can control this with these two switches. If you need to maintain strongly consistent access even during a downgrade, rockscache supports that as well.
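A sketch of how such a switch typically gates the read path (a hypothetical helper; the real logic lives inside Fetch):

```go
package main

import "fmt"

// fetchWithDowngrade mirrors the DisableCacheRead switch: when reads are
// disabled, skip the cache entirely and call fn directly.
func fetchWithDowngrade(disableCacheRead bool, cacheGet func() (string, bool), fn func() (string, error)) (string, error) {
	if !disableCacheRead {
		if v, ok := cacheGet(); ok {
			return v, nil
		}
	}
	return fn() // downgraded (or cache miss): go straight to the source
}

func main() {
	cached := func() (string, bool) { return "from-cache", true }
	db := func() (string, error) { return "from-db", nil }

	v, _ := fetchWithDowngrade(false, cached, db)
	fmt.Println(v) // normal mode reads the cache
	v, _ = fetchWithDowngrade(true, cached, db)
	fmt.Println(v) // downgraded mode bypasses it
}
```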

Refer to cache-consistency for detailed principles and dtm-cases/cache for examples

Anti-Breakdown

Caching through this library comes with an anti-breakdown feature. On the one hand, Fetch uses singleflight within the process, so multiple in-process requests for the same key do not all hit Redis; on the other hand, a distributed lock in the Redis layer prevents multiple processes from querying the DB at the same time, ensuring that only one data-query request reaches the DB.

This project's anti-breakdown also gives faster responses when hot cached data is deleted. If a piece of hot data takes 3s to compute, a typical anti-breakdown solution makes every request for it wait those 3s, whereas this project's solution returns immediately.

Anti-Penetration

Caching through this library also comes with anti-penetration. When fn in Fetch returns an empty string, this is treated as an empty result and cached with the expiry time EmptyExpire from the rockscache options.

EmptyExpire defaults to 60s; if set to 0, anti-penetration is turned off and empty results are not cached.

Anti-Avalanche

Caching through this library also comes with anti-avalanche protection. RandomExpireAdjustment defaults to 0.1; with an expiry time of 600s, the actual expiry is set to a random value between 540s and 600s, so that keys do not all expire at the same time.

Contact us

Chat Group

Join the chat via https://discord.gg/dV9jS5Rb33.

Give a star! ⭐

If you think this project is interesting, or helpful to you, please give a star!

Documentation

Overview

Package rockscache is the first Redis cache library to ensure eventual consistency and strong consistency with the DB.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func SetVerbose

func SetVerbose(v bool)

SetVerbose sets verbose mode.

Types

type Client

type Client struct {
	Options Options
	// contains filtered or unexported fields
}

Client is the rockscache delay-delete client.

func NewClient

func NewClient(rdb redis.UniversalClient, options Options) *Client

NewClient returns a new rockscache client. For each key, the client stores a hash set in Redis with the following fields: value (the value of the key), lockUntil (the time when the lock is released), and lockOwner (the owner of the lock). If a thread queries the cache and no cached value exists, it locks the key before querying the DB.

func (*Client) Fetch

func (c *Client) Fetch(key string, expire time.Duration, fn func() (string, error)) (string, error)

Fetch returns the value stored in the cache indexed by the key. If the key does not exist, it calls fn to get the result, stores it in the cache, and then returns it.

func (*Client) Fetch2 added in v0.0.12

func (c *Client) Fetch2(ctx context.Context, key string, expire time.Duration, fn func() (string, error)) (string, error)

Fetch2 returns the value stored in the cache indexed by the key. If the key does not exist, it calls fn to get the result, stores it in the cache, and then returns it.

func (*Client) FetchBatch added in v0.1.0

func (c *Client) FetchBatch(keys []string, expire time.Duration, fn func(idxs []int) (map[int]string, error)) (map[int]string, error)

FetchBatch returns a map of values indexed by the indexes of the keys list. 1. the first parameter is the keys list of the data; 2. the second parameter is the data expiration time; 3. the third parameter is the batch data fetch function, called when the cache does not exist. Its parameter is the list of indexes of the keys missing from the cache, which can be used to form a batch query for the missing data; its return value is a map from index to the corresponding data as a string.

func (*Client) FetchBatch2 added in v0.1.0

func (c *Client) FetchBatch2(ctx context.Context, keys []string, expire time.Duration, fn func(idxs []int) (map[int]string, error)) (map[int]string, error)

FetchBatch2 is the same as FetchBatch, except that a user-defined context.Context can be provided.

func (*Client) LockForUpdate added in v0.0.2

func (c *Client) LockForUpdate(ctx context.Context, key string, owner string) error

LockForUpdate locks the key, used in very strict strong consistency mode

func (*Client) RawGet added in v0.0.2

func (c *Client) RawGet(ctx context.Context, key string) (string, error)

RawGet returns the value stored in the cache indexed by the key, whether or not the key is locked.

func (*Client) RawSet added in v0.0.2

func (c *Client) RawSet(ctx context.Context, key string, value string, expire time.Duration) error

RawSet sets the value stored in the cache indexed by the key, whether or not the key is locked.

func (*Client) TagAsDeleted added in v0.0.7

func (c *Client) TagAsDeleted(key string) error

TagAsDeleted tags a key as deleted; the key will expire after the delay time.

func (*Client) TagAsDeleted2 added in v0.0.12

func (c *Client) TagAsDeleted2(ctx context.Context, key string) error

TagAsDeleted2 tags a key as deleted; the key will expire after the delay time.

func (*Client) TagAsDeletedBatch added in v0.1.0

func (c *Client) TagAsDeletedBatch(keys []string) error

TagAsDeletedBatch tags a list of keys as deleted; the keys in the list will expire after the delay time.

func (*Client) TagAsDeletedBatch2 added in v0.1.0

func (c *Client) TagAsDeletedBatch2(ctx context.Context, keys []string) error

TagAsDeletedBatch2 tags a list of keys as deleted; the keys in the list will expire after the delay time.

func (*Client) UnlockForUpdate added in v0.0.2

func (c *Client) UnlockForUpdate(ctx context.Context, key string, owner string) error

UnlockForUpdate unlocks the key, used in very strict strong consistency mode

type Options

type Options struct {
	// Delay is the delay delete time for keys that are tag deleted. default is 10s
	Delay time.Duration
	// EmptyExpire is the expire time for empty result. default is 60s
	EmptyExpire time.Duration
	// LockExpire is the expire time for the lock which is allocated when updating cache. default is 3s
	// should be set to the max of the underlying data calculation time.
	LockExpire time.Duration
	// LockSleep is the sleep interval time if try lock failed. default is 100ms
	LockSleep time.Duration
	// WaitReplicas is the number of replicas to wait for. default is 0
	// if WaitReplicas is > 0, it will use redis WAIT command to wait for TagAsDeleted synchronized.
	WaitReplicas int
	// WaitReplicasTimeout is the timeout for waiting for replicas. default is 3000ms
	// if WaitReplicas is > 0, WaitReplicasTimeout is the timeout for the WAIT command.
	WaitReplicasTimeout time.Duration
	// RandomExpireAdjustment is the random adjustment for the expire time. default 0.1
	// if the expire time is set to 600s, and this value is set to 0.1, then the actual expire time will be 540s - 600s
	// solve the problem of cache avalanche.
	RandomExpireAdjustment float64
	// DisableCacheRead is the flag to disable cache reads. default is false
	// when redis is down, set this flag to downgrade.
	DisableCacheRead bool
	// DisableCacheDelete is the flag to disable cache deletes. default is false
	// when redis is down, set this flag to downgrade.
	DisableCacheDelete bool
	// StrongConsistency is the flag to enable strong consistency. default is false
	// if enabled, the Fetch result will be consistent with the db result, but performance is bad.
	StrongConsistency bool
	// Context for redis command
	Context context.Context
}

Options represents the options for rockscache client

func NewDefaultOptions

func NewDefaultOptions() Options

NewDefaultOptions returns the default options.
