bitcask

package module
v2.1.2
Published: Feb 28, 2025 License: MIT Imports: 27 Imported by: 13

README

bitcask


A high-performance key/value store written in Go with predictable read/write performance and high throughput. Uses a Bitcask on-disk layout (LSM+WAL) similar to Riak.

For a more feature-complete, Redis-compatible distributed key/value server, have a look at Bitraft, which uses this library as its backend. Use Bitcask as a starting point or if you want to embed a key/value store in your application; use Bitraft if you need a complete server/client solution with high availability and a Redis-compatible API.


Features

  • Embedded (import "go.mills.io/bitcask/v2")
  • Builtin CLI (bitcask)
  • Builtin Redis-compatible server (bitcaskd)
  • Predictable read/write performance
  • High throughput (See: Performance)
  • Full Transactions support (ACID)
  • Low latency

Migrating from Bitcask v1

If you are migrating from Bitcask v1 (git.mills.io/prologic/bitcask) to Bitcask v2 (go.mills.io/bitcask/v2), please update your code as follows:

import (
- "git.mills.io/prologic/bitcask"
+ "go.mills.io/bitcask/v2"
)
  • WithSync(true) was renamed to WithSyncWrites(true)

  • Iterators that take a bitcask.KeyFunc as input now use bitcask.Key as the type for keys rather than []byte.

- db.Scan(prefix, func(key []byte) error {
+ db.Scan(prefix, func(key bitcask.Key) error {
  //
})
  • Fold() was renamed to ForEach() (see other changes above and the sketch below)
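
For example, iterating over all keys with the v2 API looks roughly like this (a minimal sketch; the /tmp/db path is a placeholder):

package main

import (
	"fmt"
	"log"

	"go.mills.io/bitcask/v2"
)

func main() {
	db, err := bitcask.Open("/tmp/db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// ForEach replaces v1's Fold; the callback receives bitcask.Key.
	err = db.ForEach(func(key bitcask.Key) error {
		fmt.Printf("key: %s\n", key)
		return nil // returning an error stops iteration early
	})
	if err != nil {
		log.Fatal(err)
	}
}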

Is Bitcask right for my project?

[!NOTE] Please read this carefully to identify whether using Bitcask is suitable for your needs.

bitcask is a great fit for:

  • Storing hundreds of thousands to millions of key/value pairs with the default configuration. With the defaults of 64-byte keys and 64kB values, 1M keys would consume roughly ~600-700MB of memory and ~65-70GB of disk storage. These limits are all configurable when you create a new database with bitcask.Open(...) via functional options passed as WithXXX() (see the sketch after this list).

  • As the backing store to a distributed key/value store. See bitraft for an example of this.

  • For high performance, low latency read/write workloads where you cannot fit a typical hash-map into memory, but require the highest level of performance and predictable read latency. Bitcask ensures only 1 read/write IOPS is ever required for reading and writing key/value pairs.

  • As a general purpose embedded key/value store where you would have used BoltDB, LevelDB, BuntDB or similar...
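
A minimal sketch of tuning these limits at open time (the option names appear in the package documentation below; the sizes are illustrative):

package main

import (
	"log"

	"go.mills.io/bitcask/v2"
)

func main() {
	// Raise the default limits at open time; the values here are illustrative.
	db, err := bitcask.Open(
		"/tmp/db",
		bitcask.WithMaxKeySize(128),        // default: 64 bytes
		bitcask.WithMaxValueSize(1<<20),    // default: 1<<16 bytes
		bitcask.WithMaxDatafileSize(1<<26), // default: 1<<20 (1MB)
	)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}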

bitcask is not suited for:

  • Storing billions of records. The reason for this is that the key space is held in memory in a radix tree; the more keys you have, the more memory is consumed. Consider using a disk-backed B-Tree like BoltDB or LevelDB if you intend to store a large quantity of key/value pairs.

[!NOTE] Note however that storing large amounts of data in terms of value(s) is totally fine. In other words, thousands to millions of keys with large values will work just fine.

Transactions

Bitcask supports transactions with ACID semantics. A call to Transaction() returns a new transaction that is a snapshot of the current trie of keys. Keys written in a transaction are committed as a single batch operation, providing Atomicity.

As writes are performed in the transaction, we maintain an internal cache of new entries written within the transaction. Thus, any follow-up reads of the same key by this transaction will see this write, but other transactions won't, providing Isolation and Consistency.

Finally, Durability in Bitcask is guaranteed by a write-ahead log of the current datafile and further guaranteed by enabling synchronous writes with the WithSyncWrites(true) option.

[!WARNING] A transaction is not thread safe and should only be used by a single goroutine.
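
A minimal sketch of the transaction flow (assuming db is an open *bitcask.Bitcask; the key and value are placeholders):

txn := db.Transaction()

// Stage a write; it is visible to this transaction's own reads,
// but not to other transactions until Commit.
if err := txn.Put(bitcask.Key("Hello"), bitcask.Value("World")); err != nil {
	txn.Discard()
	log.Fatal(err)
}

if _, err := txn.Get(bitcask.Key("Hello")); err != nil {
	txn.Discard()
	log.Fatal(err)
}

// Commit writes the batched key/value pairs as a single operation.
if err := txn.Commit(); err != nil {
	log.Fatal(err)
}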

Development

git clone https://git.mills.io/prologic/bitcask.git
make

Install

go get go.mills.io/bitcask/v2

Usage (library)

Install the package into your project:

go get go.mills.io/bitcask/v2

package main

import (
	"log"
	"go.mills.io/bitcask/v2"
)

func main() {
	db, err := bitcask.Open("/tmp/db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.Put([]byte("Hello"), []byte("World"))
	val, _ := db.Get([]byte("Hello"))
	log.Printf("%s", val)
}

See the GoDoc for further documentation and other examples.

See also the examples directory.

Configuration Options

The default options (if none are specified) default to a Bitcask instance with:

  • Maximum Key size of 64 bytes
  • Maximum Value size of 65KB
  • Maximum Datafile size of 1MB (before rotating)
  • Synchronous Writes: off
  • Auto Recovery: on

The defaults are designed with high performance in mind, with recovery on startup, and support limits of ~16M keys and ~1GB of persistent storage under the default file descriptor limits on most Linux systems.

Any of these options can be changed with any of the WithXXX(...) options.

[!NOTE] If you require better reliability over performance, please enable synchronous writes with the WithSyncWrites(true) option.

Bitcask is an embedded key/value store designed for handling write-intensive workloads. However, frequent writes creating a large number of new key/value pairs over time can result in errors like "Too many open files" (#193) due to the creation of numerous datafiles. These problems can be mitigated by periodically compacting the data with a .Merge() operation, increasing the maximum datafile size with the WithMaxDatafileSize() option, and increasing the process file descriptor limit. Example: with WithMaxDatafileSize(1<<30) (1GB) and a file descriptor limit of 1M (million) files, you are able to store up to 1PB (petabyte) of (compacted) data before you hit "Too many open files", assuming a single machine can even handle this.
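
A minimal sketch of these mitigations (the path, 1GB datafile size and hourly merge interval are illustrative):

// Open with a larger maximum datafile size to reduce the number of
// datafiles created under heavy write load.
db, err := bitcask.Open("/tmp/db", bitcask.WithMaxDatafileSize(1<<30)) // 1GB
if err != nil {
	log.Fatal(err)
}
defer db.Close()

// Periodically compact datafiles to reclaim disk space and keep the
// number of open files down.
go func() {
	for range time.Tick(time.Hour) {
		if err := db.Merge(); err != nil {
			log.Printf("merge failed: %v", err)
		}
	}
}()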

You should consider your read/write workloads carefully and ensure you set appropriate file descriptor limits with ulimit -n that suit your needs.

Usage (tool)

bitcask -p /tmp/db set Hello World
bitcask -p /tmp/db get Hello
World

Usage (server)

There is also a builtin, very simple Redis-compatible server called bitcaskd:

./bitcaskd ./tmp
INFO[0000] starting bitcaskd v0.0.7@146f777              bind=":6379" path=./tmp

Example session:

telnet localhost 6379
Trying ::1...
Connected to localhost.
Escape character is '^]'.
SET foo bar
+OK
GET foo
$3
bar
DEL foo
:1
GET foo
$-1
PING
+PONG
QUIT
+OK
Connection closed by foreign host.

Docker

You can also use the Bitcask Docker Image:

docker pull prologic/bitcask
docker run -d -p 6379:6379 prologic/bitcask

Performance

make bench
...

BenchmarkGet/128B-10             1029229              1191 ns/op         107.46 MB/s        4864 B/op         14 allocs/op
BenchmarkGet/256B-10              916785              1190 ns/op         215.16 MB/s        4992 B/op         14 allocs/op
BenchmarkGet/512B-10              831607              1261 ns/op         406.19 MB/s        5280 B/op         14 allocs/op
BenchmarkGet/1K-10                796448              1384 ns/op         740.06 MB/s        5856 B/op         14 allocs/op
BenchmarkGet/2K-10                612469              1724 ns/op        1187.83 MB/s        7008 B/op         14 allocs/op
BenchmarkGet/4K-10                515680              2314 ns/op        1770.19 MB/s        9568 B/op         14 allocs/op
BenchmarkGet/8K-10                375813              3231 ns/op        2535.32 MB/s       14176 B/op         14 allocs/op
BenchmarkGet/16K-10               236959              5115 ns/op        3203.28 MB/s       23136 B/op         14 allocs/op
BenchmarkGet/32K-10               129828              9449 ns/op        3467.77 MB/s       45664 B/op         14 allocs/op

BenchmarkPut/128BNoSync-10        249405              5116 ns/op          25.02 MB/s        6737 B/op         46 allocs/op
BenchmarkPut/256BNoSync-10        155542              6896 ns/op          37.12 MB/s        6867 B/op         46 allocs/op
BenchmarkPut/1KNoSync-10           72939             19902 ns/op          51.45 MB/s        7740 B/op         46 allocs/op
BenchmarkPut/2KNoSync-10           37819             33780 ns/op          60.63 MB/s        8904 B/op         46 allocs/op
BenchmarkPut/4KNoSync-10           18554             70200 ns/op          58.35 MB/s       18914 B/op         47 allocs/op
BenchmarkPut/8KNoSync-10            8276            167674 ns/op          48.86 MB/s       20249 B/op         47 allocs/op
BenchmarkPut/16KNoSync-10           3660            333656 ns/op          49.10 MB/s       29291 B/op         47 allocs/op
BenchmarkPut/32KNoSync-10           2275            561683 ns/op          58.34 MB/s       52000 B/op         48 allocs/op

BenchmarkPut/128BSync-10             258           5149745 ns/op           0.02 MB/s        6736 B/op         46 allocs/op
BenchmarkPut/256BSync-10             211           5138904 ns/op           0.05 MB/s        6864 B/op         46 allocs/op
BenchmarkPut/1KSync-10               207           5356101 ns/op           0.19 MB/s        7728 B/op         46 allocs/op
BenchmarkPut/2KSync-10               247           5212069 ns/op           0.39 MB/s        8932 B/op         46 allocs/op
BenchmarkPut/4KSync-10               207           5043624 ns/op           0.81 MB/s       18924 B/op         47 allocs/op
BenchmarkPut/8KSync-10               208           5411918 ns/op           1.51 MB/s       20204 B/op         47 allocs/op
BenchmarkPut/16KSync-10              234           5367222 ns/op           3.05 MB/s       29261 B/op         47 allocs/op
BenchmarkPut/32KSync-10              198           5594519 ns/op           5.86 MB/s       51996 B/op         48 allocs/op

BenchmarkScan-10                 1112818              1066 ns/op            4986 B/op         22 allocs/op

For 128B values:

  • ~1,000,000 reads/sec
  • ~250,000 writes/sec
  • ~1,100,000 scans/sec

The full benchmark above shows linear performance as you increase key/value sizes.

As far as benchmarks go, this is all contrived and generally not typical of any real workloads. These benchmarks were run on a 2022 Mac Studio M1 Max with 32GB of RAM. Your results may differ.

Contributors

Thank you to all those that have contributed to this project, battle-tested it, used it in their own projects or products, fixed bugs, improved performance and even fixed tiny typos in documentation! Thank you and keep contributing!

You can find an AUTHORS file where we keep a list of contributors to the project. If you contribute a PR please consider adding your name there.

Related Projects

  • bitraft -- A Distributed Key/Value store (using Raft) with a Redis compatible protocol.
  • bitcaskfs -- A FUSE file system for mounting a Bitcask database.
  • bitcask-bench -- A benchmarking tool comparing Bitcask and several other Go key/value libraries.

License

bitcask is licensed under the terms of the MIT License.


Documentation

Overview

Package bitcask is a high-performance embedded key/value store that uses on-disk LSM and WAL data structures and an in-memory radix tree of keys, as per the Bitcask paper and as seen in the Riak database.

Example
_, _ = Open("path/to/db")
Output:

Example (WithOptions)
opts := []Option{
	WithMaxKeySize(1024),
	WithMaxValueSize(4096),
}
_, _ = Open("path/to/db", opts...)
Output:

Index

Examples

Constants

const (
	// DefaultDebug is the default debug mode
	DefaultDebug = false

	// DefaultPath is the default path
	DefaultPath = "/tmp/bitcask.db"

	// DefaultDirMode is the default os.FileMode used when creating directories
	DefaultDirMode = os.FileMode(0700)

	// DefaultFileMode is the default os.FileMode used when creating files
	DefaultFileMode = os.FileMode(0600)

	// DefaultMaxDatafileSize is the default maximum datafile size in bytes
	DefaultMaxDatafileSize = 1 << 20 // 1MB

	// DefaultMaxKeySize is the default maximum key size in bytes
	DefaultMaxKeySize = uint32(64) // 64 bytes

	// DefaultMaxValueSize is the default value size in bytes
	DefaultMaxValueSize = uint64(1 << 16) // 65KB

	// DefaultSyncWrites is the default file synchronization action
	DefaultSyncWrites = false

	// DefaultOpenReadonly if true opens the database in read-only mode if it is already opened for writing by another process.
	DefaultOpenReadonly = false

	// DefaultAutoRecovery is the default auto-recovery action, if set will attempt to automatically recover the database if required
	DefaultAutoRecovery = true

	// CurrentDBVersion is the current database version
	CurrentDBVersion = uint32(2)
)

Variables

var (
	// ErrKeyNotFound is the error returned when a key is not found
	ErrKeyNotFound = errors.New("error: key not found")

	// ErrKeyTooLarge is the error returned for a key that exceeds the
	// maximum allowed key size (configured with WithMaxKeySize).
	ErrKeyTooLarge = errors.New("error: key too large")

	// ErrEmptyKey is the error returned for a value with an empty key.
	ErrEmptyKey = errors.New("error: empty key")

	// ErrValueTooLarge is the error returned for a value that exceeds the
	// maximum allowed value size (configured with WithMaxValueSize).
	ErrValueTooLarge = errors.New("error: value too large")

	// ErrChecksumFailed is the error returned if a key/value retrieved does
	// not match its CRC checksum
	ErrChecksumFailed = errors.New("error: checksum failed")

	// ErrDatabaseLocked is the error returned if the database is locked
	// (typically opened by another process)
	ErrDatabaseLocked = errors.New("error: database locked")

	// ErrDatabaseReadonly is the error returned when the database has been opened in readonly mode
	ErrDatabaseReadonly = errors.New("error: database is readonly")

	// ErrInvalidRange is the error returned when the range scan is invalid
	ErrInvalidRange = errors.New("error: invalid range")

	// ErrInvalidVersion is the error returned when the database version is invalid
	ErrInvalidVersion = errors.New("error: invalid db version")

	// ErrMergeInProgress is the error returned if merge is called when already a merge
	// is in progress
	ErrMergeInProgress = errors.New("error: merge already in progress")
)
var (
	// ErrIteratorClosed ...
	ErrIteratorClosed = errors.New("error: iterator is closed")

	// ErrStopIteration ...
	ErrStopIteration = errors.New("error: iterator has no more items")
)
var (
	// ErrInvalidKeyFormat ...
	ErrInvalidKeyFormat = errors.New("invalid key format includes +[],")
)
var (
	// ErrObjectNotFound is the error returned when an object is not found in the collection.
	ErrObjectNotFound = errors.New("error: object not found")
)

Functions

func Byte2Int

func Byte2Int(b []byte) int64

Byte2Int returns an int from a little endian encoded byte sequence

func Int2Byte

func Int2Byte(i int64) []byte

Int2Byte returns an 8-byte little endian representation of the integer i
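
A quick round-trip sketch:

b := Int2Byte(42)        // 8-byte little-endian encoding of 42
fmt.Println(Byte2Int(b)) // prints 42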

func ScoreToFloat64

func ScoreToFloat64(b Score) float64

ScoreToFloat64 ...

func ScoreToInt64

func ScoreToInt64(b Score) int64

ScoreToInt64 ...

Types

type Batch

type Batch struct {
	// contains filtered or unexported fields
}

Batch ...

func (*Batch) Clear

func (b *Batch) Clear()

Clear clears the batch

func (*Batch) Delete

func (b *Batch) Delete(key Key) (internal.Entry, error)

Delete deletes a key from the batch

func (*Batch) Entries

func (b *Batch) Entries() []internal.Entry

Entries returns all the entries in the batch

func (*Batch) Put

func (b *Batch) Put(key Key, value Value) (internal.Entry, error)

Put writes a new key/value to the batch

type Bitcask added in v2.0.7

type Bitcask struct {
	// contains filtered or unexported fields
}

Bitcask ...

func Open

func Open(path string, options ...Option) (*Bitcask, error)

Open opens the database at the given path with optional options, provided as `WithXXX` functional options.

func (*Bitcask) Backup added in v2.0.7

func (b *Bitcask) Backup(path string) error

Backup copies the database directory to the given path, creating the path if it does not exist.

func (*Bitcask) Batch added in v2.0.7

func (b *Bitcask) Batch() *Batch

Batch returns a new batch that allows multiple keys to be deleted and written in a single operation.

func (*Bitcask) Close added in v2.0.7

func (b *Bitcask) Close() error

Close closes the database and removes the lock. It is important to call Close() as this is the only way to cleanup the lock held by the open database.

func (*Bitcask) Collection added in v2.0.7

func (b *Bitcask) Collection(name string) *Collection

Collection ...

func (*Bitcask) Delete added in v2.0.7

func (b *Bitcask) Delete(key Key) error

Delete deletes the named key.

func (*Bitcask) ForEach added in v2.0.7

func (b *Bitcask) ForEach(f KeyFunc) (err error)

ForEach iterates over all keys in the database calling the function `f` for each key. If the function returns an error, no further keys are processed and the error is returned.

func (*Bitcask) Get added in v2.0.7

func (b *Bitcask) Get(key Key) (Value, error)

Get fetches value for a key

func (*Bitcask) GetReader added in v2.0.7

func (b *Bitcask) GetReader(key Key) (io.ReadSeeker, error)

GetReader fetches value for a key and returns an io.ReadSeeker

func (*Bitcask) Has added in v2.0.7

func (b *Bitcask) Has(key Key) bool

Has returns true if the key exists in the database, false otherwise.

func (*Bitcask) Hash added in v2.0.7

func (b *Bitcask) Hash(key Key) *Hash

func (*Bitcask) Iterator added in v2.0.7

func (b *Bitcask) Iterator(opts ...IteratorOption) *Iterator

Iterator returns an iterator for iterating through keys in key order
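
A minimal sketch of manual iteration (assuming db is an open *Bitcask, and that Next reports ErrStopIteration once the iterator is exhausted, per that error's description):

it := db.Iterator()
defer it.Close()

for {
	item, err := it.Next()
	if errors.Is(err, ErrStopIteration) {
		break // assumed end-of-iteration signal
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s = %s\n", item.Key(), item.Value())
}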

func (*Bitcask) Len added in v2.0.7

func (b *Bitcask) Len() int

Len returns the total number of keys in the database

func (*Bitcask) List added in v2.0.7

func (b *Bitcask) List(key Key) *List

func (*Bitcask) Merge added in v2.0.7

func (b *Bitcask) Merge() error

Merge merges all datafiles in the database. Old keys are squashed, deleted keys are removed, and duplicate key/value pairs are also removed. Call this function periodically to reclaim disk space.

func (*Bitcask) Path added in v2.0.7

func (b *Bitcask) Path() string

Path returns the database path

func (*Bitcask) Put added in v2.0.7

func (b *Bitcask) Put(key Key, value Value) error

Put stores the key and value in the database.

func (*Bitcask) Range added in v2.0.7

func (b *Bitcask) Range(start, end Key, f KeyFunc) (err error)

Range performs a range scan over keys between the start key and the end key, calling the function `f` for each key found. If the function returns an error, no further keys are processed and the first error is returned.

func (*Bitcask) Readonly added in v2.0.7

func (b *Bitcask) Readonly() bool

Readonly returns true if the database is currently opened in readonly mode, false otherwise

func (*Bitcask) Scan added in v2.0.7

func (b *Bitcask) Scan(prefix Key, f KeyFunc) (err error)

Scan performs a prefix scan of keys matching the given prefix, calling the function `f` for each key found. If the function returns an error, no further keys are processed and the first error is returned.
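
For example, a prefix scan might look like this (a sketch; the "user:" prefix is illustrative and db is an open *Bitcask):

err := db.Scan(Key("user:"), func(key Key) error {
	fmt.Printf("matched: %s\n", key)
	return nil // returning an error stops the scan early
})
if err != nil {
	log.Fatal(err)
}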

func (*Bitcask) SortedSet added in v2.0.7

func (b *Bitcask) SortedSet(key Key) *SortedSet

func (*Bitcask) Stats added in v2.0.7

func (b *Bitcask) Stats() (stats Stats, err error)

Stats returns statistics about the database including the number of data files, keys and overall size on disk of the data

func (*Bitcask) Sync added in v2.0.7

func (b *Bitcask) Sync() error

Sync flushes all buffers to disk ensuring all data is written

func (*Bitcask) Transaction added in v2.0.7

func (b *Bitcask) Transaction() *Txn

Transaction returns a new transaction that is a snapshot of the key space of the database and isolated from other transactions. Key/Value pairs written in a transaction are batched together. Transactions are not thread safe and should only be used in a single goroutine.

func (*Bitcask) WriteBatch added in v2.0.7

func (b *Bitcask) WriteBatch(batch *Batch) error

WriteBatch writes the batch to the database. The batch is not cleared when the write has completed, so call Clear() to reset the batch if needed.
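
A minimal sketch of staging writes in a Batch and applying them with WriteBatch (Put/Delete return values elided for brevity; db is an open *Bitcask):

batch := db.Batch()
batch.Put(Key("a"), Value("1")) // staged; nothing hits the database yet
batch.Put(Key("b"), Value("2"))
batch.Delete(Key("c")) // deletes can be staged too

if err := db.WriteBatch(batch); err != nil {
	log.Fatal(err)
}
batch.Clear() // reset manually before reusing the batch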

type Collection added in v2.0.4

type Collection struct {
	// contains filtered or unexported fields
}

Collection allows you to manage a collection of objects encoded as JSON documents with a path-based key based on the provided name. This is convenient for storing complex normalized collections of objects.
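
A minimal sketch (the User type and id are illustrative; db is an open *Bitcask):

type User struct {
	Name string `json:"name"`
}

users := db.Collection("users")
if err := users.Add("1", User{Name: "James"}); err != nil {
	log.Fatal(err)
}

var u User
if err := users.Get("1", &u); err != nil {
	log.Fatal(err)
}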

func (*Collection) Add added in v2.0.4

func (c *Collection) Add(id string, obj any) error

Add adds a new object to the collection

func (*Collection) Count added in v2.0.4

func (c *Collection) Count() int

Count returns the number of objects in this collection.

func (*Collection) Delete added in v2.0.4

func (c *Collection) Delete(id string) error

Delete deletes an object from the collection

func (*Collection) Drop added in v2.0.4

func (c *Collection) Drop() error

Drop deletes the entire collection

func (*Collection) Exists added in v2.0.4

func (c *Collection) Exists() bool

Exists returns true if the collection exists at all, which really just means whether there are any objects in the collection, so this is the same as calling Count() > 0.

func (*Collection) Get added in v2.0.4

func (c *Collection) Get(id string, obj any) error

Get returns an object from the collection

func (*Collection) Has added in v2.0.4

func (c *Collection) Has(id string) bool

Has returns true if an object exists by the provided id.

func (*Collection) List added in v2.0.4

func (c *Collection) List(dest any) error

List returns a list of all objects in this collection.

type ErrBadConfig

type ErrBadConfig struct {
	Err error
}

ErrBadConfig is the error returned on failure to load the database config.

func (*ErrBadConfig) Error

func (e *ErrBadConfig) Error() string

Error implements the error interface.

func (*ErrBadConfig) Is

func (e *ErrBadConfig) Is(target error) bool

Is returns true if the provided target error is the same as ErrBadConfig.

func (*ErrBadConfig) Unwrap

func (e *ErrBadConfig) Unwrap() error

Unwrap returns the underlying wrapped error that caused ErrBadConfig to be returned.

type ErrBadMetadata

type ErrBadMetadata struct {
	Err error
}

ErrBadMetadata is the error returned on failure to load the database metadata.

func (*ErrBadMetadata) Error

func (e *ErrBadMetadata) Error() string

Error implements the error interface.

func (*ErrBadMetadata) Is

func (e *ErrBadMetadata) Is(target error) bool

Is returns true if the provided target error is the same as ErrBadMetadata.

func (*ErrBadMetadata) Unwrap

func (e *ErrBadMetadata) Unwrap() error

Unwrap returns the underlying wrapped error that caused ErrBadMetadata to be returned.

type Hash

type Hash struct {
	// contains filtered or unexported fields
}

Hash ...

+key,h = ""
h[key]name = "James"
h[key]age = "21"
h[key]sex = "Male"
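
A minimal sketch of hash operations mirroring the layout above (db is an open *Bitcask):

h := db.Hash(Key("user:1"))
if err := h.Set([]byte("name"), []byte("James")); err != nil {
	log.Fatal(err)
}

fields, err := h.GetAll()
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(fields["name"])) // James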

func (*Hash) Drop

func (h *Hash) Drop() error

Drop ...

func (*Hash) Get

func (h *Hash) Get(field []byte) ([]byte, error)

Get ...

func (*Hash) GetAll

func (h *Hash) GetAll() (map[string][]byte, error)

GetAll ...

func (*Hash) MGet

func (h *Hash) MGet(fields ...[]byte) ([][]byte, error)

MGet ...

func (*Hash) MSet

func (h *Hash) MSet(pairs ...[]byte) error

MSet ...

func (*Hash) Remove

func (h *Hash) Remove(fields ...[]byte) error

Remove ...

func (*Hash) Set

func (h *Hash) Set(field, value []byte) error

Set ...

type Item

type Item struct {
	// contains filtered or unexported fields
}

Item is a single key/value pair

func (*Item) Key

func (i *Item) Key() Key

Key returns the key

func (*Item) KeySize

func (i *Item) KeySize() int

KeySize returns the size of the key

func (*Item) String

func (i *Item) String() string

String implements the fmt.Stringer interface and returns a representation of a key

func (*Item) Value

func (i *Item) Value() Value

Value returns the value

func (*Item) ValueSize

func (i *Item) ValueSize() int

ValueSize returns the size of the value

type Iterator

type Iterator struct {
	// contains filtered or unexported fields
}

Iterator ...

func (*Iterator) Close

func (it *Iterator) Close() error

func (*Iterator) Next

func (it *Iterator) Next() (*Item, error)

func (*Iterator) SeekPrefix

func (it *Iterator) SeekPrefix(prefix Key) (*Item, error)

type IteratorOption

type IteratorOption func(it *Iterator)

IteratorOption ...

func Reverse

func Reverse() IteratorOption

Reverse ...

type Key

type Key []byte

Key is a slice of bytes that represents a key in a key/value pair

type KeyFunc

type KeyFunc func(Key) error

KeyFunc is a function that takes a key and performs some operation on it possibly returning an error

type List

type List struct {
	// contains filtered or unexported fields
}

List ...

+key,l = ""
l[key]0 = "a"
l[key]1 = "b"
l[key]2 = "c"
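
A minimal sketch of list operations mirroring the layout above (db is an open *Bitcask):

l := db.List(Key("letters"))
if err := l.Append(Value("a"), Value("b"), Value("c")); err != nil {
	log.Fatal(err)
}

v, err := l.Index(0)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(v)) // a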

func (*List) Append

func (l *List) Append(values ...Value) error

Append ...

func (*List) Index

func (l *List) Index(i int64) ([]byte, error)

Index ...

func (*List) Len

func (l *List) Len() (int64, error)

Len ...

func (*List) Pop

func (l *List) Pop() ([]byte, error)

Pop ...

func (*List) Range

func (l *List) Range(start, stop int64, fn func(i int64, value []byte, quit *bool)) error

Range enumerates values by index. <start> must be >= 0; <stop> should be equal to -1 or larger than <start>.

type Option

type Option func(*config.Config) error

Option is a function that takes a config struct and modifies it

func WithAutoRecovery

func WithAutoRecovery(enabled bool) Option

WithAutoRecovery sets auto recovery of data and index file recreation. IMPORTANT: This flag MUST BE used only if a proper backup was made of all the existing datafiles.

func WithDirMode

func WithDirMode(mode os.FileMode) Option

WithDirMode sets the FileMode used for each new directory created.

func WithFileMode

func WithFileMode(mode os.FileMode) Option

WithFileMode sets the FileMode used for each new file created.

func WithMaxDatafileSize

func WithMaxDatafileSize(size uint64) Option

WithMaxDatafileSize sets the maximum datafile size option

func WithMaxKeySize

func WithMaxKeySize(size uint32) Option

WithMaxKeySize sets the maximum key size option

func WithMaxValueSize

func WithMaxValueSize(size uint64) Option

WithMaxValueSize sets the maximum value size option

func WithOpenReadonly added in v2.0.6

func WithOpenReadonly(enabled bool) Option

WithOpenReadonly opens the database in readonly mode.

func WithSyncWrites added in v2.0.4

func WithSyncWrites(enabled bool) Option

WithSyncWrites causes Sync() to be called on every key/value written increasing durability and safety at the expense of write performance.

type Score

type Score []byte

Score indicates that a number can be encoded as a sortable []byte. Score is used in SortedSet; you can implement your own encode and decode functions like the ones below.

func Float64ToScore

func Float64ToScore(f float64) Score

Float64ToScore ...

func Int64ToScore

func Int64ToScore(i int64) Score

Int64ToScore ...

type SortedSet

type SortedSet struct {
	// contains filtered or unexported fields
}

SortedSet ...

+key,z = ""
z[key]m member = score
z[key]s score member = ""

func (*SortedSet) Add

func (s *SortedSet) Add(scoreMembers ...[]byte) (int, error)

Add adds score and member pairs: SortedSet.Add(Score, []byte, Score, []byte, ...)

func (*SortedSet) Range

func (s *SortedSet) Range(from, to Score, fn func(i int64, score Score, member []byte, quit *bool)) error

Range ... <from> must be less than <to>
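
A minimal sketch of adding scored members and ranging over them (member names and scores are illustrative; db is an open *Bitcask):

z := db.SortedSet(Key("scores"))
if _, err := z.Add(Int64ToScore(1), []byte("alice"), Int64ToScore(2), []byte("bob")); err != nil {
	log.Fatal(err)
}

err := z.Range(Int64ToScore(1), Int64ToScore(2), func(i int64, score Score, member []byte, quit *bool) {
	fmt.Printf("%d: %s\n", ScoreToInt64(score), member)
})
if err != nil {
	log.Fatal(err)
}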

func (*SortedSet) Remove

func (s *SortedSet) Remove(members ...[]byte) (int, error)

Remove ...

func (*SortedSet) Score

func (s *SortedSet) Score(member []byte) (Score, error)

Score ...

type Stats

type Stats struct {
	Datafiles   int
	Keys        int
	Size        int64
	Reclaimable int64
}

Stats is a struct returned by Stats() on an open bitcask instance

type Txn added in v2.0.7

type Txn struct {
	// contains filtered or unexported fields
}

Txn is a transaction that represents a snapshot view of the current key space of the database. Transactions are isolated from each other, and key/value pairs written in a transaction are batched together. Transactions are not thread safe and should only be used in a single goroutine. Transactions writing the same key follow a last-transaction-wins strategy; there is no versioning of keys or collision detection.

func (*Txn) Commit added in v2.0.7

func (t *Txn) Commit() error

Commit commits the transaction and writes the current batch to the database

func (*Txn) Delete added in v2.0.7

func (t *Txn) Delete(key Key) error

Delete deletes the named key.

func (*Txn) Discard added in v2.0.7

func (t *Txn) Discard()

Discard discards the transaction

func (*Txn) ForEach added in v2.0.7

func (t *Txn) ForEach(f KeyFunc) (err error)

ForEach iterates over all keys in the database calling the function `f` for each key. If the function returns an error, no further keys are processed and the error is returned.

func (*Txn) Get added in v2.0.7

func (t *Txn) Get(key Key) (Value, error)

Get fetches value for a key

func (*Txn) GetReader added in v2.0.7

func (t *Txn) GetReader(key Key) (io.ReadSeeker, error)

GetReader fetches value for a key and returns an io.ReadSeeker

func (*Txn) Has added in v2.0.7

func (t *Txn) Has(key Key) bool

Has returns true if the key exists in the database, false otherwise.

func (*Txn) Iterator added in v2.0.7

func (t *Txn) Iterator(opts ...IteratorOption) *Iterator

Iterator returns an iterator for iterating through keys in key order

func (*Txn) Put added in v2.0.7

func (t *Txn) Put(key Key, value Value) error

Put stores the key and value in the database.

func (*Txn) Range added in v2.0.7

func (t *Txn) Range(start Key, end Key, f KeyFunc) (err error)

Range performs a range scan over keys between the start key and the end key, calling the function `f` for each key found. If the function returns an error, no further keys are processed and the first error is returned.

func (*Txn) Scan added in v2.0.7

func (t *Txn) Scan(prefix Key, f KeyFunc) (err error)

Scan performs a prefix scan of keys matching the given prefix, calling the function `f` for each key found. If the function returns an error, no further keys are processed and the first error is returned.

type Value

type Value []byte

Value is a slice of bytes that represents a value in key/value pair

Directories

Path Synopsis
cmd
bitcask
Package main is a simple command-line interface for reading/writing and performing operations on a Bitcask database
bitcaskd
Package main is a simple Bitcask server that implements a basic Redis API
examples
kitchensink
Package main demonstrates all features of Bitcask (the kitchen sink)
redisserver
Package main implements a simple Redis-like server using Bitcask as the store
internal
Package internal contains internal implementation details
codec
Package codec implements binary encoding and decoding for database entries
config
Package config defines configuration details and functions to load and save configuration to disk
data
Package data implements on disk and in memory storage for data files
index
Package index deals with reading and writing database indexes
migrations
Package migrations contains functions to run migrations
