Documentation ¶
Overview ¶
Package ethdb defines the interfaces for an Ethereum data store.
Index ¶
- Constants
- type AncientReader
- type AncientReaderOp
- type AncientStater
- type AncientStore
- type AncientWriteOp
- type AncientWriter
- type Batch
- type Batcher
- type Compacter
- type Database
- type HookedBatch
- type Iteratee
- type Iterator
- type KeyValueReader
- type KeyValueStater
- type KeyValueStore
- type KeyValueWriter
- type Reader
- type ResettableAncientStore
- type Stater
- type Writer
Constants ¶
const IdealBatchSize = 100 * 1024
IdealBatchSize defines the size that data batches should ideally reach in one write.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AncientReader ¶
type AncientReader interface {
    AncientReaderOp

    // ReadAncients runs the given read operation while ensuring that no writes take place
    // on the underlying ancient store.
    ReadAncients(fn func(AncientReaderOp) error) (err error)
}
AncientReader is the extended ancient reader interface including 'batched' or 'atomic' reading.
type AncientReaderOp ¶
type AncientReaderOp interface {
    // HasAncient returns an indicator whether the specified data exists in the
    // ancient store.
    HasAncient(kind string, number uint64) (bool, error)

    // Ancient retrieves an ancient binary blob from the append-only immutable files.
    Ancient(kind string, number uint64) ([]byte, error)

    // AncientRange retrieves multiple items in sequence, starting from the index 'start'.
    // It will return
    //   - at most 'count' items,
    //   - if maxBytes is specified: at least 1 item (even if exceeding the maxBytes limit),
    //     but will otherwise return as many items as fit into maxBytes,
    //   - if maxBytes is not specified: 'count' items, if they are present.
    AncientRange(kind string, start, count, maxBytes uint64) ([][]byte, error)

    // Ancients returns the number of ancient items in the ancient store.
    Ancients() (uint64, error)

    // Tail returns the number of the first stored item in the ancient store.
    // This number can also be interpreted as the total number of deleted items.
    Tail() (uint64, error)

    // AncientSize returns the ancient size of the specified category.
    AncientSize(kind string) (uint64, error)
}
AncientReaderOp contains the methods required to read from immutable ancient data.
type AncientStater ¶
type AncientStater interface {
    // AncientDatadir returns the path of the ancient store directory.
    //
    // If the ancient store is not activated, an error is returned.
    // If an ephemeral ancient store is used, an empty path is returned.
    //
    // The path returned by AncientDatadir can be used as the root path
    // of the ancient store to construct paths for other sub ancient stores.
    AncientDatadir() (string, error)
}
AncientStater wraps the AncientDatadir method of a backing ancient store.
type AncientStore ¶
type AncientStore interface {
    AncientReader
    AncientWriter
    io.Closer
}
AncientStore contains all the methods required for handling the different ancient data stores that back the immutable data store.
type AncientWriteOp ¶
type AncientWriteOp interface {
    // Append adds an RLP-encoded item.
    Append(kind string, number uint64, item interface{}) error

    // AppendRaw adds an item without RLP-encoding it.
    AppendRaw(kind string, number uint64, item []byte) error
}
AncientWriteOp is given to the function argument of ModifyAncients.
type AncientWriter ¶
type AncientWriter interface {
    // ModifyAncients runs a write operation on the ancient store.
    // If the function returns an error, any changes to the underlying store are reverted.
    // The integer return value is the total size of the written data.
    ModifyAncients(func(AncientWriteOp) error) (int64, error)

    // TruncateHead discards all but the first n ancient data from the ancient store.
    // After the truncation, the latest available item is item_n-1 (counting from 0).
    TruncateHead(n uint64) (uint64, error)

    // TruncateTail discards the first n ancient data from the ancient store. Already
    // deleted items are ignored. After the truncation, the earliest accessible item
    // is item_n (counting from 0). The deleted items may not be removed from the
    // ancient store immediately; they are only removed all together once the
    // accumulated deleted data reaches a threshold.
    TruncateTail(n uint64) (uint64, error)

    // Sync flushes all in-memory ancient store data to disk.
    Sync() error
}
AncientWriter contains the methods required to write to immutable ancient data.
type Batch ¶
type Batch interface {
    KeyValueWriter

    // ValueSize retrieves the amount of data queued up for writing.
    ValueSize() int

    // Write flushes any accumulated data to disk.
    Write() error

    // Reset resets the batch for reuse.
    Reset()

    // Replay replays the batch contents.
    Replay(w KeyValueWriter) error
}
Batch is a write-only database that commits changes to its host database when Write is called. A batch cannot be used concurrently.
type Batcher ¶
type Batcher interface {
    // NewBatch creates a write-only database that buffers changes to its host db
    // until a final write is called.
    NewBatch() Batch

    // NewBatchWithSize creates a write-only database batch with a pre-allocated buffer.
    NewBatchWithSize(size int) Batch
}
Batcher wraps the NewBatch method of a backing data store.
type Compacter ¶
type Compacter interface {
    // Compact flattens the underlying data store for the given key range. In essence,
    // deleted and overwritten versions are discarded, and the data is rearranged to
    // reduce the cost of operations needed to access them.
    //
    // A nil start is treated as a key before all keys in the data store; a nil limit
    // is treated as a key after all keys in the data store. If both are nil, the
    // entire data store is compacted.
    Compact(start []byte, limit []byte) error
}
Compacter wraps the Compact method of a backing data store.
type Database ¶
Database contains all the methods required by the high level database to not only access the key-value data store but also the ancient chain store.
type HookedBatch ¶
type HookedBatch struct {
    Batch

    OnPut    func(key []byte, value []byte) // Callback if a key is inserted
    OnDelete func(key []byte)               // Callback if a key is deleted
}
HookedBatch wraps an arbitrary batch so that each operation can be hooked into, allowing black-box code to be monitored.
func (HookedBatch) Delete ¶
func (b HookedBatch) Delete(key []byte) error
Delete removes the key from the key-value data store.
type Iteratee ¶
type Iteratee interface {
    // NewIterator creates a binary-alphabetical iterator over a subset
    // of database content with a particular key prefix, starting at a particular
    // initial key (or after, if it does not exist).
    //
    // Note: This method assumes that the prefix is NOT part of the start, so there's
    // no need for the caller to prepend the prefix to the start.
    NewIterator(prefix []byte, start []byte) Iterator
}
Iteratee wraps the NewIterator methods of a backing data store.
type Iterator ¶
type Iterator interface {
    // Next moves the iterator to the next key/value pair. It returns whether the
    // iterator is exhausted.
    Next() bool

    // Error returns any accumulated error. Exhausting all the key/value pairs
    // is not considered to be an error.
    Error() error

    // Key returns the key of the current key/value pair, or nil if done. The caller
    // should not modify the contents of the returned slice, and its contents may
    // change on the next call to Next.
    Key() []byte

    // Value returns the value of the current key/value pair, or nil if done. The
    // caller should not modify the contents of the returned slice, and its contents
    // may change on the next call to Next.
    Value() []byte

    // Release releases associated resources. Release should always succeed and can
    // be called multiple times without causing error.
    Release()
}
Iterator iterates over a database's key/value pairs in ascending key order.
When it encounters an error, any seek will return false and yield no key/value pairs. The error can be queried by calling the Error method. Calling Release is still necessary.
An iterator must be released after use, but it is not necessary to read an iterator until exhaustion. An iterator is not safe for concurrent use, but it is safe to use multiple iterators concurrently.
type KeyValueReader ¶
type KeyValueReader interface {
    // Has retrieves if a key is present in the key-value data store.
    Has(key []byte) (bool, error)

    // Get retrieves the given key if it's present in the key-value data store.
    Get(key []byte) ([]byte, error)
}
KeyValueReader wraps the Has and Get methods of a backing data store.
type KeyValueStater ¶
type KeyValueStater interface {
    // Stat returns the statistic data of the database.
    Stat() (string, error)
}
KeyValueStater wraps the Stat method of a backing data store.
type KeyValueStore ¶
type KeyValueStore interface {
    KeyValueReader
    KeyValueWriter
    KeyValueStater
    Batcher
    Iteratee
    Compacter
    io.Closer
}
KeyValueStore contains all the methods required to allow handling different key-value data stores backing the high level database.
type KeyValueWriter ¶
type KeyValueWriter interface {
    // Put inserts the given value into the key-value data store.
    Put(key []byte, value []byte) error

    // Delete removes the key from the key-value data store.
    Delete(key []byte) error
}
KeyValueWriter wraps the Put and Delete methods of a backing data store.
type Reader ¶
type Reader interface {
    KeyValueReader
    AncientReader
}
Reader contains the methods required to read data from both key-value as well as immutable ancient data.
type ResettableAncientStore ¶
type ResettableAncientStore interface {
    AncientStore

    // Reset is designed to reset the entire ancient store to its default state.
    Reset() error
}
ResettableAncientStore extends the AncientStore interface by adding a Reset method.
type Stater ¶
type Stater interface {
    KeyValueStater
    AncientStater
}
Stater contains the methods required to retrieve states from both key-value as well as immutable ancient data.
type Writer ¶
type Writer interface {
    KeyValueWriter
    AncientWriter
}
Writer contains the methods required to write data to both key-value as well as immutable ancient data.
Directories ¶
Path | Synopsis
---|---
leveldb | Package leveldb implements the key-value database layer based on LevelDB.
memorydb | Package memorydb implements the key-value database layer based on memory maps.
pebble | Package pebble implements the key-value database layer based on pebble.
remotedb | Package remotedb implements the key-value database layer based on a remote geth node.