Published: Dec 24, 2024 License: MIT Imports: 13 Imported by: 1

README

Cove


cove is a caching library for Go that utilizes SQLite as the storage backend. It provides a TTL cache for key-value pairs with support for namespaces, batch operations, range scans and eviction callbacks.

TL;DR

A TTL cache for Go, backed by SQLite. (See the examples below for usage.)

go get github.com/admpub/cove

package main

import (
	"fmt"
	"github.com/admpub/cove"
)

func main() {
	cache, err := cove.New(
		cove.URITemp(),
	)
	if err != nil {
		panic(err)
	}
	defer cache.Close()

	ns, err := cache.NS("strings")
	if err != nil {
		panic(err)
	}
	// When using generics, use a separate namespace for each type
	stringCache := cove.Of[string](ns)

	stringCache.Set("key", "the string")

	str, err := stringCache.Get("key")
	hit, err := cove.Hit(err)
	if err != nil {
		panic(err)
	}
	if hit {
		fmt.Println(str) // Output: the string
	}

	str, err = stringCache.GetOr("async-key", func(key string) (string, error) {
		return "refreshed string", nil
	})
	hit, err = cove.Hit(err)
	if err != nil {
		panic(err)
	}
	if hit {
		fmt.Println(str) // Output: refreshed string
	}

}

Use case

cove is meant to be embedded into your application, not run as a standalone service. It is a simple key-value store intended for caching data that is expensive to compute or retrieve, and it can also serve as a general-purpose key-value store.

So why SQLite?

There are plenty of in-memory and other caches built in Go,
e.g. https://github.com/avelino/awesome-go#cache, with various takes on performance, concurrency, LFU, LRU, ARC and so on.
There are also a few key-value stores built in Go that can be embedded (just like cove),
e.g. https://github.com/dgraph-io/badger or https://github.com/boltdb/bolt, and probably quite a few more:
https://github.com/avelino/awesome-go#databases-implemented-in-go.

If these alternatives suit your use case, use them. The main benefit of a cache/kv backed by sqlite, from my perspective and the reason for building cove, is that it should be decent and fast enough in most cases. It is generally just a good solution, while probably being outperformed by others in niche cases.

  • you can have very large K/V pairs
  • you can tune it for your use case
  • it should perform decently
  • you can cache hundreds of GB; SSDs are fast these days
  • page caching and tuning will help you out.

While sqlite has come a long way since its inception, particularly when running in WAL mode, there are some limitations, e.g. only one writer is allowed at a time. So if you have a write-heavy cache, you might want to consider another solution. That said, it should be fine for most workloads with some tuning to increase write performance, e.g. synchronous = off.

Installation

To install cove, use go get:

go get github.com/admpub/cove
Considerations

Since cove uses SQLite as the storage backend, it is important to realize that your project will now depend on cgo and that the SQLite library will be compiled into your binary. This might not be a problem at all, but it can cause issues with some modern "magic" build tools used in CI/CD pipelines for Go.

Usage

Tuning

cove uses sqlite in WAL mode and writes the data to disk. While :memory: will probably work for the most part, it lacks the performance benefits that come with sqlite in WAL mode on disk and will probably result in some SQLITE_BUSY errors.

In general, the default tuning is the following:

PRAGMA journal_mode = WAL;
PRAGMA synchronous = normal;
PRAGMA temp_store = memory;
PRAGMA auto_vacuum = incremental;
PRAGMA incremental_vacuum;

Have a look at https://www.sqlite.org/pragma.html for tuning your cache to your needs.

If you are write heavy, you might want to consider synchronous = off and dabble with some other settings, eg wal_autocheckpoint, to increase write performance. The tradeoff is that you might lose some read performance instead.

cache, err := cove.New(
	cove.URITemp(),
	cove.DBSyncOff(),
	// Yes, yes, this can be used to inject sql, but I trust you
	// to not let your users arbitrarily configure pragma on your
	// sqlite instance.
	cove.DBPragma("wal_autocheckpoint = 1000"),
)
Creating a Cache

To create a cache, use the New function. You can specify various options such as TTL, vacuum interval, and eviction callbacks.

Config

cove.URITemp() creates a temporary directory in /tmp or similar for the database. If combined with cove.DBRemoveOnClose() and a graceful shutdown, the database will be removed on close.

If you want the cache to persist across restarts, use cove.URIFromPath("/path/to/db/cache.db") instead.

There are a few options that can be set when creating a cache, see cove.With* and cove.DB* functions for more information.

Example
package main

import (
    "github.com/admpub/cove"
    "time"
)

func main() {
    cache, err := cove.New(
        cove.URITemp(),
        cove.DBRemoveOnClose(),
        cove.WithTTL(time.Minute*10),
    )
    if err != nil {
        panic(err)
    }
    defer cache.Close()
}
Setting and Getting Values

You can set and get values from the cache using the Set and Get methods.

package main

import (
    "fmt"
    "github.com/admpub/cove"
    "time"
)

func main() {
    cache, err := cove.New(
        cove.URITemp(),
        cove.DBRemoveOnClose(),
        cove.WithTTL(time.Minute*10),
    )
    if err != nil {
        panic(err)
    }
    defer cache.Close()

    // Set a value
    err = cache.Set("key", []byte("value0"))
    if err != nil {
        panic(err)
    }

    // Get the value
    value, err := cache.Get("key")
    hit, err := cove.Hit(err)
    if err != nil {
        panic(err)
    }

    fmt.Println("[Hit]:", hit, "[Value]:", string(value))
    // Output: [Hit]: true [Value]: value0
}
Handling NotFound errors

If a key is not found in the cache, the Get method will return a NotFound error.

You can handle not found errors using the Hit and Miss helper functions.

package main

import (
    "errors"
    "fmt"
    "github.com/admpub/cove"
    "time"
)

func main() {
    cache, err := cove.New(
        cove.URITemp(),
        cove.DBRemoveOnClose(),
        cove.WithTTL(time.Minute*10),
    )
    if err != nil {
        panic(err)
    }
    defer cache.Close()

    _, err = cache.Get("key")
    fmt.Println("err == cove.NotFound:", err == cove.NotFound)
    fmt.Println("errors.Is(err, cove.NotFound):", errors.Is(err, cove.NotFound))

    _, err = cache.Get("key")
    hit, err := cove.Hit(err)
    if err != nil { // A "real" error has occurred
        panic(err)
    }
    if !hit {
        fmt.Println("key miss")
    }

    _, err = cache.Get("key")
    miss, err := cove.Miss(err)
    if err != nil { // A "real" error has occurred
        panic(err)
    }
    if miss {
        fmt.Println("key miss")
    }
}
Using Namespaces

Namespaces allow you to isolate different sets of keys within the same cache.

package main

import (
    "github.com/admpub/cove"
    "time"
	"fmt"
)

func main() {
    cache, err := cove.New(
        cove.URITemp(),
        cove.DBRemoveOnClose(),
        cove.WithTTL(time.Minute*10),
    )
    if err != nil {
        panic(err)
    }
    defer cache.Close()


    err = cache.Set("key", []byte("value0"))
    if err != nil {
        panic(err)
    }

    ns, err := cache.NS("namespace1")
    if err != nil {
        panic(err)
    }

    err = ns.Set("key", []byte("value1"))
    if err != nil {
        panic(err)
    }

    value, err := cache.Get("key")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(value)) // Output: value0

    value, err = ns.Get("key")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(value)) // Output: value1
}
Eviction Callbacks

You can set a callback function to be called when a key is evicted from the cache.

package main

import (
    "fmt"
    "github.com/admpub/cove"
)

func main() {
    cache, err := cove.New(
        cove.URITemp(),
        cove.DBRemoveOnClose(),
        cove.WithEvictCallback(func(key string, val []byte) {
            fmt.Printf("evicted %s: %s\n", key, string(val))
        }),
    )
    if err != nil {
        panic(err)
    }
    defer cache.Close()

    err = cache.Set("key", []byte("evict me"))
    if err != nil {
        panic(err)
    }

    _, err = cache.Evict("key")
    if err != nil {
        panic(err)
    }
    // Output: evicted key: evict me
}
Using Iterators

Iterators allow you to scan through keys and values without loading all rows into memory.

WARNING
Since iterators don't really have any way of communicating errors,
the con is that errors are dropped when using iterators;
the pro is that they are very easy to use and scan row by row (i.e. no need to load all rows into memory).

package main

import (
    "fmt"
    "github.com/admpub/cove"
    "time"
)

func main() {
    cache, err := cove.New(
        cove.URITemp(),
        cove.DBRemoveOnClose(),
        cove.WithTTL(time.Minute*10),
    )
    if err != nil {
        panic(err)
    }
    defer cache.Close()

    for i := 0; i < 100; i++ {
        err = cache.Set(fmt.Sprintf("key%d", i), []byte(fmt.Sprintf("value%d", i)))
        if err != nil {
            panic(err)
        }
    }

    // KV iterator
    for k, v := range cache.ItrRange("key97", cove.RANGE_MAX) {
        fmt.Println(k, string(v))
    }

    // Key iterator
    for key := range cache.ItrKeys(cove.RANGE_MIN, "key1") {
        fmt.Println(key)
    }

    // Value iterator
    for value := range cache.ItrValues(cove.RANGE_MIN, "key1") {
        fmt.Println(string(value))
    }
}
Batch Operations

cove provides batch operations to efficiently handle multiple keys in a single operation. This includes BatchSet, BatchGet, and BatchEvict.

BatchSet

The BatchSet method allows you to set multiple key-value pairs in a single operation. The writes happen in one transaction, split into sub-batches where necessary to stay within SQLite's parameter limit, so the batch remains atomic.

package main

import (
    "github.com/admpub/cove"
)

func assertNoErr(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    cache, err := cove.New(
        cove.URITemp(),
        cove.DBRemoveOnClose(),
    )
    assertNoErr(err)
    defer cache.Close()

    err = cache.BatchSet([]cove.KV[[]byte]{
        {K: "key1", V: []byte("val1")},
        {K: "key2", V: []byte("val2")},
    })
    assertNoErr(err)
}
BatchGet

The BatchGet method allows you to retrieve multiple keys from the cache in a single operation. The reads happen in one transaction, split into sub-batches where necessary to stay within SQLite's parameter limit, so the batch remains atomic.

package main

import (
    "fmt"
    "github.com/admpub/cove"
)

func assertNoErr(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    cache, err := cove.New(
        cove.URITemp(),
        cove.DBRemoveOnClose(),
    )
    assertNoErr(err)
    defer cache.Close()

    err = cache.BatchSet([]cove.KV[[]byte]{
        {K: "key1", V: []byte("val1")},
        {K: "key2", V: []byte("val2")},
    })
    assertNoErr(err)

    kvs, err := cache.BatchGet([]string{"key1", "key2", "key3"})
    assertNoErr(err)

    for _, kv := range kvs {
        fmt.Println(kv.K, "-", string(kv.V))
        // Output:
        // key1 - val1
        // key2 - val2
    }
}
BatchEvict

The BatchEvict method allows you to evict multiple keys from the cache in a single operation. The evictions happen in one transaction, split into sub-batches where necessary to stay within SQLite's parameter limit, so the batch remains atomic.

package main

import (
    "fmt"
    "github.com/admpub/cove"
)

func assertNoErr(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    cache, err := cove.New(
        cove.URITemp(),
        cove.DBRemoveOnClose(),
    )
    assertNoErr(err)
    defer cache.Close()

    err = cache.BatchSet([]cove.KV[[]byte]{
        {K: "key1", V: []byte("val1")},
        {K: "key2", V: []byte("val2")},
    })
    assertNoErr(err)

    evicted, err := cache.BatchEvict([]string{"key1", "key2", "key3"})
    assertNoErr(err)

    for _, kv := range evicted {
        fmt.Println("Evicted,", kv.K, "-", string(kv.V))
        // Output:
        // Evicted, key1 - val1
        // Evicted, key2 - val2
    }
}

These batch operations help in efficiently managing multiple keys in the cache, ensuring atomicity and reducing the number of individual operations.

Typed Cache

The TypedCache in cove provides a way to work with strongly-typed values in the cache, using Go generics. This allows you to avoid manual serialization and deserialization of values, making the code cleaner and less error-prone.

Creating a Typed Cache

A typed cache simply uses Go generics to wrap the cache, providing type safety and ease of use. It comes with the same fetchers and API as the untyped cache but adds a marshalling and unmarshalling layer on top; encoding/gob is used for serialization and deserialization of values.

To create a typed cache, use the Of function, passing the existing cache/namespace instance:

package main

import (
	"fmt"

	"github.com/admpub/cove"
)

func assertNoErr(err error) {
	if err != nil {
		panic(err)
	}
}

type Person struct {
	Name string
	Age  int
}

func main() {
	// Create a base cache
	cache, err := cove.New(
		cove.URITemp(),
		cove.DBRemoveOnClose(),
	)
	assertNoErr(err)
	defer cache.Close()

	personNamespace, err := cache.NS("person")
	assertNoErr(err)
	
	// Create a typed cache for Person struct
	typedCache := cove.Of[Person](personNamespace)

	// Set a value in the typed cache
	err = typedCache.Set("alice", Person{Name: "Alice", Age: 30})
	assertNoErr(err)

	// Get a value from the typed cache
	alice, err := typedCache.Get("alice")
	assertNoErr(err)
	fmt.Printf("%+v\n", alice) // Output: {Name:Alice Age:30}
}
Methods
  • Set: Sets a value in the cache with the default TTL.
  • Get: Retrieves a value from the cache.
  • SetTTL: Sets a value in the cache with a custom TTL.
  • GetOr: Retrieves a value from the cache or calls a fetch function if the key does not exist.
  • BatchSet: Sets a batch of key/value pairs in the cache.
  • BatchGet: Retrieves a batch of keys from the cache.
  • BatchEvict: Evicts a batch of keys from the cache.
  • Evict: Evicts a key from the cache.
  • EvictAll: Evicts all keys in the cache.
  • Range: Returns all key-value pairs in a specified range.
  • Keys: Returns all keys in a specified range.
  • Values: Returns all values in a specified range.
  • ItrRange: Returns an iterator for key-value pairs in a specified range.
  • ItrKeys: Returns an iterator for keys in a specified range.
  • ItrValues: Returns an iterator for values in a specified range.
  • Raw: Returns the underlying untyped cache.

The TypedCache uses encoding/gob for serialization and deserialization of values, ensuring type safety and ease of use.

Benchmarks

All models are wrong, but some are useful. Not sure what category this falls under, but here are some benchmarks: inserts/sec, reads/sec, write MB/sec and read MB/sec.

On Linux with 4 cores, an SSD and 32 GB of RAM, it does roughly:

  • 20-30k inserts/sec
  • 200k reads/sec
  • 100-200 MB/sec writes
  • 1000-2000 MB/sec reads

It seems fast enough...

 
BenchmarkSetParallel/default-4                         28_256 insert/sec
BenchmarkSetParallel/sync-off-4                        36_523 insert/sec
BenchmarkSetParallel/sync-off+autocheckpoint-4         25_480 insert/sec

BenchmarkGetParallel/default-4                        192_668 reads/sec
BenchmarkGetParallel/sync-off-4                       238_714 reads/sec
BenchmarkGetParallel/sync-off+autocheckpoint-4        193_778 reads/sec

BenchmarkSetMemParallel/default+0.1mb-4                   273 write-mb/sec
BenchmarkSetMemParallel/default+1mb-4                     261 write-mb/sec
BenchmarkSetMemParallel/sync-off+0.1mb-4                  238 write-mb/sec
BenchmarkSetMemParallel/sync-off+1mb-4                    212 write-mb/sec

BenchmarkSetMem/default+0.1mb-4                           104 write-mb/sec
BenchmarkSetMem/default+1mb-4                             122 write-mb/sec
BenchmarkSetMem/sync-off+0.1mb-4                          219 write-mb/sec
BenchmarkSetMem/sync-off+1mb-4                            249 write-mb/sec

BenchmarkGetMemParallel/default+0.1mb-4                2_189 read-mb/sec
BenchmarkGetMemParallel/default+1mb-4                  1_566 read-mb/sec
BenchmarkGetMemParallel/sync-off+0.1mb-4               2_194 read-mb/sec
BenchmarkGetMemParallel/sync-off+1mb-4                 1_501 read-mb/sec

BenchmarkGetMem/default+0.1mb-4                          764 read-mb/sec
BenchmarkGetMem/default+1mb-4                            520 read-mb/sec
BenchmarkGetMem/sync-off+0.1mb-4                         719 read-mb/sec
BenchmarkGetMem/sync-off+1mb-4                           530 read-mb/sec

TODO

  • Add hooks or middleware for logging, metrics, eviction strategy etc.
  • More testing

License

This project is licensed under the MIT License - see the LICENSE file for details.

Documentation

Index

Constants

const MAX_BLOB_SIZE = 1_000_000_000 - 100 // 1GB - 100 bytes
const MAX_PARAMS = 999
const NO_TTL = time.Hour * 24 * 365 * 100 // 100 years (effectively forever)

NO_TTL is a constant that represents no TTL; in practice it is just a very long duration (100 years) for a cache entry.

const NS_DEFAULT = "default"
const RANGE_MAX = string(byte(255))
const RANGE_MIN = string(byte(0))

Variables

var NotFound = errors.New("not found")

Functions

func Hit

func Hit(err error) (bool, error)

func Miss

func Miss(err error) (bool, error)

func URIFromPath

func URIFromPath(path string) string

func URITemp

func URITemp() string

func Unzip

func Unzip[V any](kv []KV[V]) (keys []string, vals []V)

func Vacuum

func Vacuum(interval time.Duration, max int) func(cache *Cache)

Types

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

func New

func New(uri string, op ...Op) (*Cache, error)

func (*Cache) BatchEvict

func (c *Cache) BatchEvict(keys []string) (evicted []KV[[]byte], err error)

BatchEvict evicts a batch of keys from the cache

If onEvict is set, it will be called for each key. The eviction takes place in one transaction, split into batches of MAX_PARAMS (999) so that the eviction remains atomic. If one key fails to evict, the whole batch fails. Prefer batches smaller than MAX_PARAMS.

func (*Cache) BatchGet

func (c *Cache) BatchGet(keys []string) ([]KV[[]byte], error)

BatchGet retrieves a batch of keys from the cache

The BatchGet takes place in one transaction, split into sub-batches of MAX_PARAMS (999) so that the BatchGet remains atomic. If one key fails to be fetched, the whole batch fails. Prefer batches smaller than MAX_PARAMS.

func (*Cache) BatchSet

func (c *Cache) BatchSet(rows []KV[[]byte]) error

BatchSet sets a batch of key/value pairs in the cache

The BatchSet takes place in one transaction, split into sub-batches of MAX_PARAMS/3 (999/3 = 333) so that the BatchSet remains atomic. If one key fails to set, the whole batch fails. Prefer batches smaller than MAX_PARAMS.

func (*Cache) Close

func (c *Cache) Close() error

Close closes the cache and all its namespaces

func (*Cache) Evict

func (c *Cache) Evict(key string) (kv KV[[]byte], err error)

Evict evicts a key from the cache; if onEvict is set, it will be called for the key.

func (*Cache) EvictAll

func (c *Cache) EvictAll() (len int, err error)

EvictAll evicts all keys in the cache; onEvict will not be called.

func (*Cache) Get

func (c *Cache) Get(key string) ([]byte, error)

Get retrieves a value from the cache

func (*Cache) GetOr

func (c *Cache) GetOr(key string, fetch func(k string) ([]byte, error)) ([]byte, error)

GetOr retrieves a value from the cache; if the key does not exist, it calls the fetch function and caches the result.

If multiple goroutines call GetOr with the same key, only one will call the fetch function; the others wait for the first to finish and receive the value cached by that call. This is a useful paradigm for lessening a thundering herd problem. The locking is done on the provided key in the application layer, not the database layer, meaning it might work poorly if multiple applications use the same sqlite cache files.

func (*Cache) ItrKeys

func (c *Cache) ItrKeys(from string, to string) iter.Seq[string]

ItrKeys returns an iterator for the range of keys [from, to]

WARNING
Since iterators don't really have any way of communicating errors,
the con is that errors are dropped when using iterators;
the pro is that they are very easy to use and scan row by row (i.e. no need to load all rows into memory).

func (*Cache) ItrRange

func (c *Cache) ItrRange(from string, to string) iter.Seq2[string, []byte]

ItrRange returns an iterator for the range of keys [from, to]

WARNING
Since iterators don't really have any way of communicating errors,
the con is that errors are dropped when using iterators;
the pro is that they are very easy to use and scan row by row (i.e. no need to load all rows into memory).

func (*Cache) ItrValues

func (c *Cache) ItrValues(from string, to string) iter.Seq[[]byte]

ItrValues returns an iterator for the range of values [from, to]

WARNING
Since iterators don't really have any way of communicating errors,
the con is that errors are dropped when using iterators;
the pro is that they are very easy to use and scan row by row (i.e. no need to load all rows into memory).

func (*Cache) Keys

func (c *Cache) Keys(from string, to string) (keys []string, err error)

Keys returns all keys in the range [from, to]

func (*Cache) NS

func (c *Cache) NS(ns string, ops ...Op) (*Cache, error)

NS creates a new namespace, if the namespace already exists it will return the existing namespace.

An eviction callback must be set with WithEvictCallback for each new namespace that should have one. NS creates a new table (and indexes) in the database for the namespace in order to isolate it.

func (*Cache) Range

func (c *Cache) Range(from string, to string) (kv []KV[[]byte], err error)

Range returns all key value pairs in the range [from, to]

func (*Cache) Set

func (c *Cache) Set(key string, value []byte) error

Set sets a value in the cache, with default ttl

func (*Cache) SetTTL

func (c *Cache) SetTTL(key string, value []byte, ttl time.Duration) error

SetTTL sets a value in the cache with a custom ttl

func (*Cache) Vacuum

func (c *Cache) Vacuum(max int) (n int, err error)

func (*Cache) Values

func (c *Cache) Values(from string, to string) (values [][]byte, err error)

Values returns all values in the range [from, to]

type KV

type KV[T any] struct {
	K string
	V T
}

func Zip

func Zip[V any](keys []string, values []V) []KV[V]

func (KV[T]) Key

func (kv KV[T]) Key() string

func (KV[T]) Unzip

func (kv KV[T]) Unzip() (string, T)

func (KV[T]) Value

func (kv KV[T]) Value() T

type Op

type Op func(*Cache) error

func DBPragma

func DBPragma(s string) Op

DBPragma is a helper function to set a pragma on the database

see https://www.sqlite.org/pragma.html

example:

DBPragma("journal_size_limit = 6144000")

func DBRemoveOnClose

func DBRemoveOnClose() Op

DBRemoveOnClose is a helper function to remove the database files on close

func DBSyncOff

func DBSyncOff() Op

DBSyncOff is a helper function to set

synchronous = off

this is useful for write performance but affects read performance and durability

func WithEvictCallback

func WithEvictCallback(cb func(key string, val []byte)) Op

WithEvictCallback sets the callback function to be called when a key is evicted

func WithLogger

func WithLogger(log *slog.Logger) Op

WithLogger sets the logger for the cache

func WithTTL

func WithTTL(defaultTTL time.Duration) Op

WithTTL sets the default TTL for the cache

func WithVacuum

func WithVacuum(vacuum func(cache *Cache)) Op

WithVacuum sets the vacuum function, which is run in a goroutine

type TypedCache

type TypedCache[V any] struct {
	// contains filtered or unexported fields
}

func Of

func Of[V any](cache *Cache) *TypedCache[V]

Of creates a typed cache facade using generics. encoding/gob is used for serialization and deserialization

 Example:

	cache, err := cove.New(cove.URITemp(), cove.DBRemoveOnClose())
	assert.NoError(err)

	// creates a namespace that is separate from the main cache,
	// this helps to avoid key collisions and allows for easier management of keys
	// when using multiple types
	separateNamespace, err := cache.NS("my-strings")
	assert.NoError(err)

	stringCache := cove.Of[string](separateNamespace)
	stringCache.Set("hello", "typed world")
	fmt.Println(stringCache.Get("hello"))
	// Output: typed world <nil>

func (*TypedCache[V]) BatchEvict

func (t *TypedCache[V]) BatchEvict(keys []string) ([]KV[V], error)

BatchEvict evicts a batch of keys from the cache

If onEvict is set, it will be called for each key. The eviction takes place in one transaction, split into batches of MAX_PARAMS (999) so that the eviction remains atomic. If one key fails to evict, the whole batch fails. Prefer batches smaller than MAX_PARAMS.

func (*TypedCache[V]) BatchGet

func (t *TypedCache[V]) BatchGet(keys []string) ([]KV[V], error)

BatchGet retrieves a batch of keys from the cache

The BatchGet takes place in one transaction, split into sub-batches of MAX_PARAMS (999) so that the BatchGet remains atomic. If one key fails to be fetched, the whole batch fails. Prefer batches smaller than MAX_PARAMS.

func (*TypedCache[V]) BatchSet

func (t *TypedCache[V]) BatchSet(ziped []KV[V]) error

BatchSet sets a batch of key/value pairs in the cache

The BatchSet takes place in one transaction, split into sub-batches of MAX_PARAMS/3 (999/3 = 333) so that the BatchSet remains atomic. If one key fails to set, the whole batch fails. Prefer batches smaller than MAX_PARAMS.

func (*TypedCache[V]) Evict

func (t *TypedCache[V]) Evict(key string) (KV[V], error)

Evict evicts a key from the cache

if onEvict is set, it will be called for key

func (*TypedCache[V]) EvictAll

func (t *TypedCache[V]) EvictAll() (int, error)

EvictAll evicts all keys in the cache

onEvict will not be called

func (*TypedCache[V]) Get

func (t *TypedCache[V]) Get(key string) (V, error)

Get returns the value for the key

func (*TypedCache[V]) GetOr

func (t *TypedCache[V]) GetOr(key string, getter func(key string) (V, error)) (V, error)

GetOr returns the value for the key; if the key does not exist, it calls the getter function to fetch the value.

func (*TypedCache[V]) ItrKeys

func (t *TypedCache[V]) ItrKeys(from string, to string) iter.Seq[string]

ItrKeys returns a key iterator for the range of keys [from, to]

WARNING
Since iterators don't really have any way of communicating errors,
the con is that errors are dropped when using iterators;
the pro is that they are very easy to use and scan row by row (i.e. no need to load all rows into memory).

func (*TypedCache[V]) ItrRange

func (t *TypedCache[V]) ItrRange(from string, to string) iter.Seq2[string, V]

ItrRange returns a k/v iterator for the range of keys [from, to]

WARNING
Since iterators don't really have any way of communicating errors,
the con is that errors are dropped when using iterators;
the pro is that they are very easy to use and scan row by row (i.e. no need to load all rows into memory).

func (*TypedCache[V]) ItrValues

func (t *TypedCache[V]) ItrValues(from string, to string) iter.Seq[V]

ItrValues returns a value iterator for the range of keys [from, to]

WARNING
Since iterators don't really have any way of communicating errors,
the con is that errors are dropped when using iterators;
the pro is that they are very easy to use and scan row by row (i.e. no need to load all rows into memory).

func (*TypedCache[V]) Keys

func (t *TypedCache[V]) Keys(from string, to string) (keys []string, err error)

Keys returns all keys in the range [from, to]

func (*TypedCache[V]) Range

func (t *TypedCache[V]) Range(from string, to string) ([]KV[V], error)

Range returns all key value pairs in the range [from, to]

func (*TypedCache[V]) Raw

func (t *TypedCache[V]) Raw() *Cache

Raw returns the underlying untyped cache

func (*TypedCache[V]) Set

func (t *TypedCache[V]) Set(key string, value V) error

Set sets the value for the key with the default TTL

func (*TypedCache[V]) SetTTL

func (t *TypedCache[V]) SetTTL(key string, value V, ttl time.Duration) error

SetTTL sets the value for the key with the specified TTL

func (*TypedCache[V]) Values

func (t *TypedCache[V]) Values(from string, to string) (values []V, err error)

Values returns all values in the range [from, to]
