leveldb

package
v0.0.0-...-03c569e
Published: Jan 6, 2024 License: MIT Imports: 27 Imported by: 14

Documentation

Overview

Package leveldb provides an implementation of the LevelDB key/value database.

Create or open a database:

// The returned DB instance is safe for concurrent use, which means that all
// of the DB's methods may be called concurrently from multiple goroutines.
db, err := leveldb.OpenFile("path/to/db", nil)
...
defer db.Close()
...

Read or modify the database content:

// Remember that the contents of the returned slice should not be modified.
data, err := db.Get([]byte("key"), nil)
...
err = db.Put([]byte("key"), []byte("value"), nil)
...
err = db.Delete([]byte("key"), nil)
...

Iterate over database content:

iter := db.NewIterator(nil, nil)
for iter.Next() {
	// Remember that the contents of the returned slice should not be modified, and
	// are only valid until the next call to Next.
	key := iter.Key()
	value := iter.Value()
	...
}
iter.Release()
err = iter.Error()
...

Iterate over a subset of the database content with a particular prefix:

iter := db.NewIterator(util.BytesPrefix([]byte("foo-")), nil)
for iter.Next() {
	// Use key/value.
	...
}
iter.Release()
err = iter.Error()
...

Seek-then-Iterate:

iter := db.NewIterator(nil, nil)
for ok := iter.Seek(key); ok; ok = iter.Next() {
	// Use key/value.
	...
}
iter.Release()
err = iter.Error()
...

Iterate over a subset of the database content:

iter := db.NewIterator(&util.Range{Start: []byte("foo"), Limit: []byte("xoo")}, nil)
for iter.Next() {
	// Use key/value.
	...
}
iter.Release()
err = iter.Error()
...

Batch writes:

batch := new(leveldb.Batch)
batch.Put([]byte("foo"), []byte("value"))
batch.Put([]byte("bar"), []byte("another value"))
batch.Delete([]byte("baz"))
err = db.Write(batch, nil)
...

Use a bloom filter:

o := &opt.Options{
	Filter: filter.NewBloomFilter(10),
}
db, err := leveldb.OpenFile("path/to/db", o)
...
defer db.Close()
...

Index

Constants

This section is empty.

Variables

var (
	ErrNotFound         = errors.ErrNotFound
	ErrReadOnly         = errors.New("leveldb: read-only mode")
	ErrSnapshotReleased = errors.New("leveldb: snapshot released")
	ErrIterReleased     = errors.New("leveldb: iterator released")
	ErrClosed           = errors.New("leveldb: closed")
)

Common errors.
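
For example, a missing key is reported via ErrNotFound and is usually handled as a normal condition rather than a fatal error (a minimal sketch):

data, err := db.Get([]byte("key"), nil)
if err == leveldb.ErrNotFound {
	// The key is absent; not a fatal error.
}
...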

Functions

This section is empty.

Types

type Batch

type Batch struct {
	// contains filtered or unexported fields
}

Batch is a write batch.

func MakeBatch

func MakeBatch(n int) *Batch

MakeBatch returns an empty batch with a preallocated buffer.

func (*Batch) Delete

func (b *Batch) Delete(key []byte)

Delete appends a 'delete operation' for the given key to the batch. It is safe to modify the contents of the argument after Delete returns but not before.

func (*Batch) Dump

func (b *Batch) Dump() []byte

Dump dumps batch contents. The returned slice can be loaded into the batch using Load method. The returned slice is not its own copy, so the contents should not be modified.

func (*Batch) Len

func (b *Batch) Len() int

Len returns the number of records in the batch.

func (*Batch) Load

func (b *Batch) Load(data []byte) error

Load loads the given slice into the batch. Previous contents of the batch will be discarded. The given slice will not be copied and will be used as the batch buffer, so it is not safe to modify its contents afterwards.
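
For example, a batch can be serialized with Dump and restored with Load; a minimal sketch (error handling elided as in the overview):

b1 := new(leveldb.Batch)
b1.Put([]byte("foo"), []byte("value"))
b1.Delete([]byte("bar"))

// Serialize the batch, e.g. to ship or persist it.
data := b1.Dump()

// Restore it into a fresh batch; data is used as the batch buffer
// and must not be modified afterwards.
b2 := new(leveldb.Batch)
err := b2.Load(data)
...
err = db.Write(b2, nil)
...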

func (*Batch) Put

func (b *Batch) Put(key, value []byte)

Put appends a 'put operation' for the given key/value pair to the batch. It is safe to modify the contents of the arguments after Put returns but not before.

func (*Batch) Replay

func (b *Batch) Replay(r BatchReplay) error

Replay replays batch contents.

func (*Batch) Reset

func (b *Batch) Reset()

Reset resets the batch.

type BatchReplay

type BatchReplay interface {
	Put(key, value []byte)
	Delete(key []byte)
}

BatchReplay wraps basic batch operations.
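
For example, a minimal sketch of a BatchReplay implementation that prints every operation recorded in a batch (the batchLogger type is illustrative, not part of the package):

type batchLogger struct{}

func (batchLogger) Put(key, value []byte) {
	fmt.Printf("put %q = %q\n", key, value)
}

func (batchLogger) Delete(key []byte) {
	fmt.Printf("delete %q\n", key)
}

batch := new(leveldb.Batch)
batch.Put([]byte("foo"), []byte("value"))
batch.Delete([]byte("bar"))

// Replay the recorded operations against the logger.
err := batch.Replay(batchLogger{})
...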

type DB

type DB struct {
	// contains filtered or unexported fields
}

DB is a LevelDB database.

func Open

func Open(stor storage.Storage, o *opt.Options) (db *DB, err error)

Open opens or creates a DB for the given storage. The DB will be created if it does not exist, unless ErrorIfMissing is true. Also, if ErrorIfExist is true and the DB exists, Open will return an os.ErrExist error.

Open will return an error of type ErrCorrupted if corruption is detected in the DB. Use errors.IsCorrupted to test whether an error is due to corruption. A corrupted DB can be recovered with the Recover function.

The returned DB instance is safe for concurrent use. The DB must be closed after use, by calling Close method.

func OpenFile

func OpenFile(path string, o *opt.Options) (db *DB, err error)

OpenFile opens or creates a DB for the given path. The DB will be created if it does not exist, unless ErrorIfMissing is true. Also, if ErrorIfExist is true and the DB exists, OpenFile will return an os.ErrExist error.

OpenFile uses standard file-system backed storage implementation as described in the leveldb/storage package.

OpenFile will return an error of type ErrCorrupted if corruption is detected in the DB. Use errors.IsCorrupted to test whether an error is due to corruption. A corrupted DB can be recovered with the Recover function.

The returned DB instance is safe for concurrent use. The DB must be closed after use, by calling Close method.

func Recover

func Recover(stor storage.Storage, o *opt.Options) (db *DB, err error)

Recover recovers and opens a DB with missing or corrupted manifest files for the given storage. It will ignore any manifest files, valid or not. The DB must already exist or it will return an error. Also, Recover will ignore the ErrorIfMissing and ErrorIfExist options.

The returned DB instance is safe for concurrent use. The DB must be closed after use, by calling Close method.

func RecoverFile

func RecoverFile(path string, o *opt.Options) (db *DB, err error)

RecoverFile recovers and opens a DB with missing or corrupted manifest files for the given path. It will ignore any manifest files, valid or not. The DB must already exist or it will return an error. Also, RecoverFile will ignore the ErrorIfMissing and ErrorIfExist options.

RecoverFile uses standard file-system backed storage implementation as described in the leveldb/storage package.

The returned DB instance is safe for concurrent use. The DB must be closed after use, by calling Close method.
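
One possible recovery flow, sketched here, is to fall back to RecoverFile when OpenFile reports corruption (errors.IsCorrupted is provided by the leveldb/errors package):

db, err := leveldb.OpenFile("path/to/db", nil)
if errors.IsCorrupted(err) {
	// Attempt to rebuild the manifest from the existing data files.
	db, err = leveldb.RecoverFile("path/to/db", nil)
}
...
defer db.Close()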

func (*DB) Close

func (db *DB) Close() error

Close closes the DB. This will also release any outstanding snapshots, abort any in-flight compaction and discard any open transaction.

It is not safe to close a DB until all outstanding iterators are released. It is valid to call Close multiple times. Other methods should not be called after the DB has been closed.

func (*DB) CompactRange

func (db *DB) CompactRange(r util.Range) error

CompactRange compacts the underlying DB for the given key range. In particular, deleted and overwritten versions are discarded, and the data is rearranged to reduce the cost of operations needed to access the data. This operation should typically only be invoked by users who understand the underlying implementation.

A nil Range.Start is treated as a key before all keys in the DB, and a nil Range.Limit is treated as a key after all keys in the DB. Therefore, if both are nil the entire DB will be compacted.
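
For example, a zero-value range compacts the entire DB, while util.BytesPrefix limits compaction to a key prefix (a sketch, assuming the leveldb/util package is imported):

// Compact the entire DB.
err := db.CompactRange(util.Range{})
...
// Compact only keys starting with "foo-".
err = db.CompactRange(*util.BytesPrefix([]byte("foo-")))
...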

func (*DB) Delete

func (db *DB) Delete(key []byte, wo *opt.WriteOptions) error

Delete deletes the value for the given key. Delete will not return an error if the key doesn't exist. Write merge also applies for Delete, see Write.

It is safe to modify the contents of the arguments after Delete returns but not before.

func (*DB) Get

func (db *DB) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error)

Get gets the value for the given key. It returns ErrNotFound if the DB does not contain the key.

The returned slice is its own copy; it is safe to modify its contents. It is safe to modify the contents of the argument after Get returns.

func (*DB) GetProperty

func (db *DB) GetProperty(name string) (value string, err error)

GetProperty returns the value of the given property name.

Property names:

leveldb.num-files-at-level{n}
	Returns the number of files at level 'n'.
leveldb.stats
	Returns statistics of the underlying DB.
leveldb.iostats
	Returns statistics of effective disk read and write.
leveldb.writedelay
	Returns cumulative write delay caused by compaction.
leveldb.sstables
	Returns sstables list for each level.
leveldb.blockpool
	Returns block pool stats.
leveldb.cachedblock
	Returns size of cached block.
leveldb.openedtables
	Returns number of opened tables.
leveldb.alivesnaps
	Returns number of alive snapshots.
leveldb.aliveiters
	Returns number of alive iterators.
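
For example, to print the compaction statistics of the underlying DB (a minimal sketch):

stats, err := db.GetProperty("leveldb.stats")
...
fmt.Println(stats)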

func (*DB) GetSnapshot

func (db *DB) GetSnapshot() (*Snapshot, error)

GetSnapshot returns the latest snapshot of the underlying DB. A snapshot is a frozen view of the DB state at a particular point in time. The contents of a snapshot are guaranteed to be consistent.

The snapshot must be released after use, by calling Release method.
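
A minimal usage sketch; reads through the snapshot observe the DB as it was when the snapshot was taken, regardless of later writes:

snap, err := db.GetSnapshot()
...
defer snap.Release()

// Unaffected by subsequent db.Put or db.Delete calls.
data, err := snap.Get([]byte("key"), nil)
...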

func (*DB) Has

func (db *DB) Has(key []byte, ro *opt.ReadOptions) (ret bool, err error)

Has returns true if the DB contains the given key.

It is safe to modify the contents of the argument after Has returns.

func (*DB) NewIterator

func (db *DB) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator

NewIterator returns an iterator for the latest snapshot of the underlying DB. The returned iterator is not safe for concurrent use, but it is safe to use multiple iterators concurrently, with each in a dedicated goroutine. It is also safe to use an iterator concurrently with modifying its underlying DB. The resultant key/value pairs are guaranteed to be consistent.

Slice allows slicing the iterator to only contain keys in the given range. A nil Range.Start is treated as a key before all keys in the DB, and a nil Range.Limit is treated as a key after all keys in the DB.

WARNING: The contents of any slice returned by the iterator (e.g. a slice returned by the Iterator.Key() or Iterator.Value() methods) should not be modified unless noted otherwise.

The iterator must be released after use, by calling Release method.

Also read Iterator documentation of the leveldb/iterator package.

func (*DB) OpenTransaction

func (db *DB) OpenTransaction() (*Transaction, error)

OpenTransaction opens an atomic DB transaction. Only one transaction can be opened at a time. Subsequent calls to Write and OpenTransaction will be blocked until the in-flight transaction is committed or discarded. The returned transaction handle is safe for concurrent use.

Transactions are very expensive and can overwhelm compaction, especially if the transaction size is small. Use with caution. The rule of thumb is: if you need to merge at least `Options.WriteBuffer` worth of data, use a transaction; otherwise don't.

The transaction must be closed once done, either by committing or discarding it. Closing the DB will discard any open transaction.
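
A sketch of a typical transaction lifecycle, committing on success and discarding on failure:

tr, err := db.OpenTransaction()
...
err = tr.Put([]byte("key"), []byte("value"), nil)
if err != nil {
	// Discard the transaction so the DB can accept new writes and transactions.
	tr.Discard()
	...
}
err = tr.Commit()
...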

func (*DB) Put

func (db *DB) Put(key, value []byte, wo *opt.WriteOptions) error

Put sets the value for the given key. It overwrites any previous value for that key; a DB is not a multi-map. Write merge also applies for Put, see Write.

It is safe to modify the contents of the arguments after Put returns but not before.

func (*DB) SetReadOnly

func (db *DB) SetReadOnly() error

SetReadOnly makes the DB read-only. It will stay read-only until reopened.

func (*DB) SizeOf

func (db *DB) SizeOf(ranges []util.Range) (Sizes, error)

SizeOf calculates approximate sizes of the given key ranges. The length of the returned sizes is equal to the length of the given ranges. The returned sizes measure storage space usage, so if the user data compresses by a factor of ten, the returned sizes will be one-tenth the size of the corresponding user data. The results may not include the sizes of recently written data.
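
For example, to approximate the storage used by all keys with the prefix "foo-" (a sketch, assuming the leveldb/util package is imported):

sizes, err := db.SizeOf([]util.Range{*util.BytesPrefix([]byte("foo-"))})
...
fmt.Println("approximate size in bytes:", sizes.Sum())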

func (*DB) Stats

func (db *DB) Stats(s *DBStats) error

Stats populates s with database statistics.
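
A minimal sketch:

var stats leveldb.DBStats
err := db.Stats(&stats)
...
fmt.Println("alive iterators:", stats.AliveIterators)
fmt.Println("cumulative write delay:", stats.WriteDelayDuration)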

func (*DB) Write

func (db *DB) Write(batch *Batch, wo *opt.WriteOptions) error

Write applies the given batch to the DB. The batch records will be applied sequentially. Write may be called concurrently; when it is and the batches are small enough, Write will try to merge the batches. Set the NoWriteMerge option to true to disable write merge.

It is safe to modify the contents of the arguments after Write returns but not before. Write will not modify the contents of the batch.

type DBStats

type DBStats struct {
	WriteDelayCount    int32
	WriteDelayDuration time.Duration
	WritePaused        bool

	AliveSnapshots int32
	AliveIterators int32

	IOWrite uint64
	IORead  uint64

	BlockCacheSize    int
	OpenedTablesCount int

	LevelSizes        Sizes
	LevelTablesCounts []int
	LevelRead         Sizes
	LevelWrite        Sizes
	LevelDurations    []time.Duration

	MemComp       uint32
	Level0Comp    uint32
	NonLevel0Comp uint32
	SeekComp      uint32
}

DBStats is database statistics.

type ErrBatchCorrupted

type ErrBatchCorrupted struct {
	Reason string
}

ErrBatchCorrupted records reason of batch corruption. This error will be wrapped with errors.ErrCorrupted.

func (*ErrBatchCorrupted) Error

func (e *ErrBatchCorrupted) Error() string

type ErrInternalKeyCorrupted

type ErrInternalKeyCorrupted struct {
	Ikey   []byte
	Reason string
}

ErrInternalKeyCorrupted records internal key corruption.

func (*ErrInternalKeyCorrupted) Error

func (e *ErrInternalKeyCorrupted) Error() string

type ErrManifestCorrupted

type ErrManifestCorrupted struct {
	Field  string
	Reason string
}

ErrManifestCorrupted records manifest corruption. This error will be wrapped with errors.ErrCorrupted.

func (*ErrManifestCorrupted) Error

func (e *ErrManifestCorrupted) Error() string

type Reader

type Reader interface {
	Get(key []byte, ro *opt.ReadOptions) (value []byte, err error)
	NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator
}

Reader is the interface that wraps the basic Get and NewIterator methods. This interface is implemented by both DB and Snapshot.
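
This makes it possible to write read-only helpers once and use them with either a live DB or a snapshot; a hypothetical sketch (countPrefix is illustrative, not part of the package):

// countPrefix counts the keys with the given prefix, reading through
// any leveldb.Reader (a *leveldb.DB or a *leveldb.Snapshot).
func countPrefix(r leveldb.Reader, prefix []byte) (int, error) {
	iter := r.NewIterator(util.BytesPrefix(prefix), nil)
	defer iter.Release()
	n := 0
	for iter.Next() {
		n++
	}
	return n, iter.Error()
}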

type Sizes

type Sizes []int64

Sizes is a list of sizes.

func (Sizes) Sum

func (sizes Sizes) Sum() int64

Sum returns the sum of the sizes.

type Snapshot

type Snapshot struct {
	// contains filtered or unexported fields
}

Snapshot is a DB snapshot.

func (*Snapshot) Get

func (snap *Snapshot) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error)

Get gets the value for the given key. It returns ErrNotFound if the DB does not contain the key.

The caller should not modify the contents of the returned slice, but it is safe to modify the contents of the argument after Get returns.

func (*Snapshot) Has

func (snap *Snapshot) Has(key []byte, ro *opt.ReadOptions) (ret bool, err error)

Has returns true if the DB contains the given key.

It is safe to modify the contents of the argument after Has returns.

func (*Snapshot) NewIterator

func (snap *Snapshot) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator

NewIterator returns an iterator for the snapshot of the underlying DB. The returned iterator is not safe for concurrent use, but it is safe to use multiple iterators concurrently, with each in a dedicated goroutine. It is also safe to use an iterator concurrently with modifying its underlying DB. The resultant key/value pairs are guaranteed to be consistent.

Slice allows slicing the iterator to only contain keys in the given range. A nil Range.Start is treated as a key before all keys in the DB, and a nil Range.Limit is treated as a key after all keys in the DB.

WARNING: The contents of any slice returned by the iterator (e.g. a slice returned by the Iterator.Key() or Iterator.Value() methods) should not be modified unless noted otherwise.

The iterator must be released after use, by calling the Release method. Releasing the snapshot doesn't release the iterator too; the iterator remains valid until released.

Also read Iterator documentation of the leveldb/iterator package.

func (*Snapshot) Release

func (snap *Snapshot) Release()

Release releases the snapshot. This will not release any returned iterators; those iterators remain valid until released or until the underlying DB is closed.

Other methods should not be called after the snapshot has been released.

func (*Snapshot) String

func (snap *Snapshot) String() string

type Transaction

type Transaction struct {
	// contains filtered or unexported fields
}

Transaction is the transaction handle.

func (*Transaction) Commit

func (tr *Transaction) Commit() error

Commit commits the transaction. If the returned error is not nil, the transaction is not committed; it can then either be retried or discarded.

Other methods should not be called after the transaction has been committed.

func (*Transaction) Delete

func (tr *Transaction) Delete(key []byte, wo *opt.WriteOptions) error

Delete deletes the value for the given key. Please note that the transaction is not compacted until committed, so if you write the same key 10 times, all 10 writes remain in the transaction.

It is safe to modify the contents of the arguments after Delete returns.

func (*Transaction) Discard

func (tr *Transaction) Discard()

Discard discards the transaction. This method is a no-op if the transaction is already closed (either committed or discarded).

Other methods should not be called after the transaction has been discarded.

func (*Transaction) Get

func (tr *Transaction) Get(key []byte, ro *opt.ReadOptions) ([]byte, error)

Get gets the value for the given key. It returns ErrNotFound if the DB does not contain the key.

The returned slice is its own copy; it is safe to modify its contents. It is safe to modify the contents of the argument after Get returns.

func (*Transaction) Has

func (tr *Transaction) Has(key []byte, ro *opt.ReadOptions) (bool, error)

Has returns true if the DB contains the given key.

It is safe to modify the contents of the argument after Has returns.

func (*Transaction) NewIterator

func (tr *Transaction) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator

NewIterator returns an iterator for the latest snapshot of the transaction. The returned iterator is not safe for concurrent use, but it is safe to use multiple iterators concurrently, with each in a dedicated goroutine. It is also safe to use an iterator concurrently with writes to the transaction. The resultant key/value pairs are guaranteed to be consistent.

Slice allows slicing the iterator to only contain keys in the given range. A nil Range.Start is treated as a key before all keys in the DB, and a nil Range.Limit is treated as a key after all keys in the DB.

The returned iterator holds locks on its own resources, so it can live beyond the lifetime of the transaction that created it.

WARNING: The contents of any slice returned by the iterator (e.g. a slice returned by the Iterator.Key() or Iterator.Value() methods) should not be modified unless noted otherwise.

The iterator must be released after use, by calling Release method.

Also read Iterator documentation of the leveldb/iterator package.

func (*Transaction) Put

func (tr *Transaction) Put(key, value []byte, wo *opt.WriteOptions) error

Put sets the value for the given key. It overwrites any previous value for that key; a DB is not a multi-map. Please note that the transaction is not compacted until committed, so if you write the same key 10 times, all 10 writes remain in the transaction.

It is safe to modify the contents of the arguments after Put returns.

func (*Transaction) Write

func (tr *Transaction) Write(b *Batch, wo *opt.WriteOptions) error

Write applies the given batch to the transaction. The batch will be applied sequentially. Please note that the transaction is not compacted until committed, so if you write the same key 10 times, all 10 writes remain in the transaction.

It is safe to modify the contents of the arguments after Write returns.

Directories

Path	Synopsis
cache	Package cache provides interface and implementation of cache algorithms.
comparer	Package comparer provides interface and implementation for ordering sets of data.
errors	Package errors provides common error types used throughout leveldb.
filter	Package filter provides interface and implementation of probabilistic data structures.
iterator	Package iterator provides interface and implementation to traverse over contents of a database.
journal	Package journal reads and writes sequences of journals.
memdb	Package memdb provides an in-memory key/value database implementation.
opt	Package opt provides sets of options used by LevelDB.
storage	Package storage provides storage abstraction for LevelDB.
table	Package table allows reading and writing of sorted key/value data.
util	Package util provides utilities used throughout leveldb.
