database

package
v1.0.3
Published: Sep 26, 2024 License: GPL-3.0 Imports: 49 Imported by: 0

Documentation

Overview

Package database implements various types of databases used in Kaia. This package is used to read/write data from/to the persistent layer.

Overview of database package

DBManager is the interface used by consumers of the database package. databaseManager is the implementation of the DBManager interface; it contains a cacheManager and a list of Database interfaces. cacheManager caches data stored in the persistent layer to reduce direct access to it. Database is the interface for persistent layer implementations; current implementations include levelDB, memDB, badgerDB, dynamoDB, and shardedDB.
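The sketch below shows how a consumer might exercise DBManager. It assumes the package lives at github.com/kaiachain/kaia/storage/database and that common.HexToHash behaves as in go-ethereum-derived codebases; both import paths are assumptions, not confirmed by this page.

package main

import (
	"fmt"

	"github.com/kaiachain/kaia/common"           // assumed import path
	"github.com/kaiachain/kaia/storage/database" // assumed import path
)

func main() {
	// In-memory DBManager: convenient for tests, nothing is persisted.
	dbm := database.NewMemoryDBManager()
	defer dbm.Close()

	// Round-trip a canonical hash through the manager; repeated reads are
	// served by cacheManager instead of the persistent layer.
	hash := common.HexToHash("0x01")
	dbm.WriteCanonicalHash(hash, 1)
	fmt.Println(dbm.ReadCanonicalHash(1) == hash) // true
}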

Source Files

  • badger_database.go : implementation of badgerDB, which wraps github.com/dgraph-io/badger
  • cache_manager.go : implementation of cacheManager, which manages cache layer over persistent layer
  • db_manager.go : contains DBManager and databaseManager
  • dynamodb.go : implementation of dynamoDB, which wraps github.com/aws/aws-sdk-go/service/dynamodb
  • interface.go : interfaces used outside database package
  • leveldb_database.go : implementation of levelDB, which wraps github.com/syndtr/goleveldb
  • memory_database.go : implementation of MemDB, which wraps the Go native map structure
  • metrics.go : metrics used in database package, mostly related to cacheManager
  • sharded_database.go : implementation of shardedDB, which wraps a list of Database interfaces
  • schema.go : prefixes and suffixes for database keys and database key generating functions

Package pebble implements the key-value database layer based on pebble.

Index

Constants

const (
	LevelDB   DBType = "LevelDB"
	RocksDB          = "RocksDB"
	BadgerDB         = "BadgerDB"
	MemoryDB         = "MemoryDB"
	DynamoDB         = "DynamoDBS3"
	ShardedDB        = "ShardedDB"
	PebbleDB         = "PebbleDB"
)
const IdealBatchSize = 100 * 1024

IdealBatchSize defines the size of the data batches that should ideally be added in one write.
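A minimal sketch of the intended pattern, assuming a DBManager value obtained elsewhere; flushInChunks is a hypothetical helper, and MiscDB is chosen only for illustration.

// flushInChunks accumulates writes in a Batch and flushes whenever the queued
// data reaches IdealBatchSize, reusing the batch between flushes.
func flushInChunks(dbm database.DBManager, keys, vals [][]byte) error {
	batch := dbm.NewBatch(database.MiscDB)
	for i := range keys {
		if err := batch.Put(keys[i], vals[i]); err != nil {
			return err
		}
		if batch.ValueSize() >= database.IdealBatchSize {
			if err := batch.Write(); err != nil {
				return err
			}
			batch.Reset()
		}
	}
	return batch.Write() // flush the remainder
}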

const (
	MinOpenFilesCacheCapacity = 16
)
const (
	WorkerNum = 10
)

Constants used for batch writes.

Variables

var (
	HeadBlockQ backupHashQueue
	FastBlockQ backupHashQueue
)
var (

	// SnapshotGeneratorKey tracks the snapshot generation marker across restarts.
	SnapshotGeneratorKey = []byte("SnapshotGenerator")

	SnapshotAccountPrefix = []byte("a") // SnapshotAccountPrefix + account hash -> account trie value
	SnapshotStoragePrefix = []byte("o") // SnapshotStoragePrefix + account hash + storage hash -> storage trie value

	// Chain index prefixes (use `i` + single byte to avoid mixing data types).
	BloomBitsIndexPrefix = []byte("iB") // BloomBitsIndexPrefix is the data table of a chain indexer to track its progress

)

The variables above define the low-level database schema prefixing.

var ErrRocksDBNotBuilt = errors.New("rocksdb is not built")
var OpenFileLimit = 64

Functions

func AccountSnapshotKey

func AccountSnapshotKey(hash common.Hash) []byte

AccountSnapshotKey = SnapshotAccountPrefix + hash

func BloomBitsKey

func BloomBitsKey(bit uint, section uint64, hash common.Hash) []byte

bloomBitsKey = bloomBitsPrefix + bit (uint16 big endian) + section (uint64 big endian) + hash

func CodeKey

func CodeKey(hash common.Hash) []byte

CodeKey = codePrefix + hash

func GetDefaultLevelDBOption

func GetDefaultLevelDBOption() *opt.Options

GetDefaultLevelDBOption returns default LevelDB option copied from defaultLevelDBOption. defaultLevelDBOption has fields with minimum values.

func GetOpenFilesLimit

func GetOpenFilesLimit() int

GetOpenFilesLimit raises the number of allowed file handles per process for Kaia and returns half of the allowance to assign to the database.

func IsCodeKey

func IsCodeKey(key []byte) (bool, []byte)

IsCodeKey reports whether the given byte slice is the key of contract code; if so, it also returns the raw code hash.

func IsPow2

func IsPow2(num uint) bool

IsPow2 checks whether the given number is a power of two.

func NewBadgerDB

func NewBadgerDB(dbDir string) (*badgerDB, error)

func NewLevelDB

func NewLevelDB(dbc *DBConfig, entryType DBEntryType) (*levelDB, error)

func NewLevelDBWithOption

func NewLevelDBWithOption(dbPath string, ldbOption *opt.Options) (*levelDB, error)

NewLevelDBWithOption explicitly receives LevelDB option to construct a LevelDB object.
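A hedged sketch of opening a standalone LevelDB with tuned options. The "./chaindata" path is hypothetical, and OpenFilesCacheCapacity is a field of goleveldb's opt.Options.

// openTunedLevelDB opens a LevelDB directory with a raised file-handle budget.
func openTunedLevelDB() (database.Database, error) {
	opts := database.GetDefaultLevelDBOption()
	opts.OpenFilesCacheCapacity = database.GetOpenFilesLimit()
	ldb, err := database.NewLevelDBWithOption("./chaindata", opts)
	if err != nil {
		return nil, err
	}
	return ldb, nil
}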

func NewPebbleDB added in v1.0.2

func NewPebbleDB(dbc *DBConfig, file string) (*pebbleDB, error)

NewPebbleDB returns a wrapped pebble DB object created at the given file path.

func SenderTxHashToTxHashKey

func SenderTxHashToTxHashKey(senderTxHash common.Hash) []byte

func StorageSnapshotKey

func StorageSnapshotKey(accountHash, storageHash common.Hash) []byte

StorageSnapshotKey = SnapshotStoragePrefix + account hash + storage hash

func StorageSnapshotsKey

func StorageSnapshotsKey(accountHash common.Hash) []byte

StorageSnapshotsKey = SnapshotStoragePrefix + account hash

func TrieNodeKey

func TrieNodeKey(hash common.ExtHash) []byte

TrieNodeKey = hash (32 bytes) for a legacy node; otherwise, the ExtHash.

func TxLookupKey

func TxLookupKey(hash common.Hash) []byte

TxLookupKey = txLookupPrefix + hash

func WriteBatches

func WriteBatches(batches ...Batch) (int, error)

func WriteBatchesOverThreshold

func WriteBatchesOverThreshold(batches ...Batch) (int, error)

func WriteBatchesParallel

func WriteBatchesParallel(batches ...Batch) (int, error)
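A sketch of flushing several independent batches concurrently; the returned count is not interpreted here because its exact meaning is not documented in this section, and flushAll is a hypothetical helper.

// flushAll writes all given batches using the parallel path.
func flushAll(batches ...database.Batch) error {
	if _, err := database.WriteBatchesParallel(batches...); err != nil {
		return err
	}
	return nil
}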

Types

type Batch

type Batch interface {
	KeyValueWriter

	// ValueSize retrieves the amount of data queued up for writing.
	ValueSize() int

	// Write flushes any accumulated data to disk.
	Write() error

	// Reset resets the Batch for reuse.
	Reset()

	// Release deallocates the WriteBatch object of RocksDB.
	Release()

	// Replay replays the Batch contents.
	Replay(w KeyValueWriter) error
}

Batch is a write-only database that commits changes to its host database when Write is called. A Batch cannot be used concurrently.

func NewStateTrieDBBatch

func NewStateTrieDBBatch(batches []Batch) Batch

type Batcher

type Batcher interface {
	// NewBatch creates a write-only database that buffers changes to its host db
	// until a final write is called.
	NewBatch() Batch
}

Batcher wraps the NewBatch method of a backing data store.

type Compacter

type Compacter interface {
	// Compact flattens the underlying data store for the given key range. In essence,
	// deleted and overwritten versions are discarded, and the data is rearranged to
	// reduce the cost of operations needed to access them.
	//
	// A nil start is treated as a key before all keys in the data store; a nil limit
	// is treated as a key after all keys in the data store. If both are nil, then the
	// entire data store will be compacted.
	Compact(start []byte, limit []byte) error
}

Compacter wraps the Compact method of a backing data store.

type CustomRetryer

type CustomRetryer struct {
	client.DefaultRetryer
}

CustomRetryer wraps the AWS SDK's built-in DefaultRetryer, adding additional custom features. The AWS SDK's DefaultRetryer has its own standard for which situations are retryable, but it is not suitable when the network environment is unstable. CustomRetryer conservatively retries in all error cases because a DB failure is critical for Kaia.

func (CustomRetryer) ShouldRetry

func (r CustomRetryer) ShouldRetry(req *request.Request) bool

ShouldRetry overrides AWS SDK's built in DefaultRetryer to retry in all error cases.

type DBConfig

type DBConfig struct {
	// General configurations for all types of DB.
	Dir                 string
	DBType              DBType
	SingleDB            bool // whether DBs (such as MiscDB, headerDB, etc.) share one physical DB
	NumStateTrieShards  uint // the number of shards of state trie db
	ParallelDBWrite     bool
	OpenFilesLimit      int
	EnableDBPerfMetrics bool // If true, read and write performance will be logged

	// LevelDB related configurations.
	LevelDBCacheSize   int // LevelDBCacheSize = BlockCacheCapacity + WriteBuffer
	LevelDBCompression LevelDBCompressionType
	LevelDBBufferPool  bool

	// PebbleDB related configurations
	PebbleDBCacheSize int

	// RocksDB related configurations
	RocksDBConfig *RocksDBConfig

	// DynamoDB related configurations
	DynamoDBConfig *DynamoDBConfig
}

DBConfig handles database related configurations.
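A sketch of a LevelDB-backed configuration; all field values are illustrative, the data directory is hypothetical, and newLevelDBManager is a hypothetical helper.

func newLevelDBManager() database.DBManager {
	dbc := &database.DBConfig{
		Dir:                "./chaindata", // hypothetical data directory
		DBType:             database.LevelDB,
		SingleDB:           false,
		NumStateTrieShards: 4,
		ParallelDBWrite:    true,
		OpenFilesLimit:     database.GetOpenFilesLimit(),
		LevelDBCacheSize:   768, // split between block cache and write buffer
		LevelDBCompression: database.AllSnappyCompression,
		LevelDBBufferPool:  true,
	}
	return database.NewDBManager(dbc)
}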

type DBEntryType

type DBEntryType uint8
const (
	MiscDB DBEntryType = iota // Do not move MiscDB, which has the paths of the other DBs.

	BodyDB
	ReceiptsDB
	StateTrieDB
	StateTrieMigrationDB
	TxLookUpEntryDB

	SnapshotDB
)

func (DBEntryType) String

func (et DBEntryType) String() string

type DBManager

type DBManager interface {
	IsParallelDBWrite() bool
	IsSingle() bool
	InMigration() bool
	MigrationBlockNumber() uint64

	Close()
	NewBatch(dbType DBEntryType) Batch

	GetMemDB() *MemDB
	GetDBConfig() *DBConfig

	CreateMigrationDBAndSetStatus(blockNum uint64) error
	FinishStateMigration(succeed bool) chan struct{}
	GetStateTrieDB() Database
	GetStateTrieMigrationDB() Database
	GetMiscDB() Database
	GetSnapshotDB() Database

	// from accessors_chain.go
	ReadCanonicalHash(number uint64) common.Hash
	WriteCanonicalHash(hash common.Hash, number uint64)
	DeleteCanonicalHash(number uint64)

	ReadHeadHeaderHash() common.Hash
	WriteHeadHeaderHash(hash common.Hash)

	ReadHeadBlockHash() common.Hash
	ReadHeadBlockBackupHash() common.Hash
	WriteHeadBlockHash(hash common.Hash)

	ReadHeadFastBlockHash() common.Hash
	ReadHeadFastBlockBackupHash() common.Hash
	WriteHeadFastBlockHash(hash common.Hash)

	ReadFastTrieProgress() uint64
	WriteFastTrieProgress(count uint64)

	HasHeader(hash common.Hash, number uint64) bool
	ReadHeader(hash common.Hash, number uint64) *types.Header
	ReadHeaderRLP(hash common.Hash, number uint64) rlp.RawValue
	WriteHeader(header *types.Header)
	DeleteHeader(hash common.Hash, number uint64)
	ReadHeaderNumber(hash common.Hash) *uint64

	HasBody(hash common.Hash, number uint64) bool
	ReadBody(hash common.Hash, number uint64) *types.Body
	ReadBodyInCache(hash common.Hash) *types.Body
	ReadBodyRLP(hash common.Hash, number uint64) rlp.RawValue
	ReadBodyRLPByHash(hash common.Hash) rlp.RawValue
	WriteBody(hash common.Hash, number uint64, body *types.Body)
	PutBodyToBatch(batch Batch, hash common.Hash, number uint64, body *types.Body)
	WriteBodyRLP(hash common.Hash, number uint64, rlp rlp.RawValue)
	DeleteBody(hash common.Hash, number uint64)

	ReadTd(hash common.Hash, number uint64) *big.Int
	WriteTd(hash common.Hash, number uint64, td *big.Int)
	DeleteTd(hash common.Hash, number uint64)

	ReadReceipt(txHash common.Hash) (*types.Receipt, common.Hash, uint64, uint64)
	ReadReceipts(blockHash common.Hash, number uint64) types.Receipts
	ReadReceiptsByBlockHash(hash common.Hash) types.Receipts
	WriteReceipts(hash common.Hash, number uint64, receipts types.Receipts)
	PutReceiptsToBatch(batch Batch, hash common.Hash, number uint64, receipts types.Receipts)
	DeleteReceipts(hash common.Hash, number uint64)

	ReadBlock(hash common.Hash, number uint64) *types.Block
	ReadBlockByHash(hash common.Hash) *types.Block
	ReadBlockByNumber(number uint64) *types.Block
	HasBlock(hash common.Hash, number uint64) bool
	WriteBlock(block *types.Block)
	DeleteBlock(hash common.Hash, number uint64)

	ReadBadBlock(hash common.Hash) *types.Block
	WriteBadBlock(block *types.Block)
	ReadAllBadBlocks() ([]*types.Block, error)
	DeleteBadBlocks()

	FindCommonAncestor(a, b *types.Header) *types.Header

	ReadIstanbulSnapshot(hash common.Hash) ([]byte, error)
	WriteIstanbulSnapshot(hash common.Hash, blob []byte)
	DeleteIstanbulSnapshot(hash common.Hash)

	WriteMerkleProof(key, value []byte)

	// Bytecodes related operations
	ReadCode(hash common.Hash) []byte
	ReadCodeWithPrefix(hash common.Hash) []byte
	WriteCode(hash common.Hash, code []byte)
	PutCodeToBatch(batch Batch, hash common.Hash, code []byte)
	DeleteCode(hash common.Hash)
	HasCode(hash common.Hash) bool

	// State Trie Database related operations
	ReadTrieNode(hash common.ExtHash) ([]byte, error)
	HasTrieNode(hash common.ExtHash) (bool, error)
	HasCodeWithPrefix(hash common.Hash) bool
	ReadPreimage(hash common.Hash) []byte

	// Read StateTrie from new DB
	ReadTrieNodeFromNew(hash common.ExtHash) ([]byte, error)
	HasTrieNodeFromNew(hash common.ExtHash) (bool, error)
	HasCodeWithPrefixFromNew(hash common.Hash) bool
	ReadPreimageFromNew(hash common.Hash) []byte

	// Read StateTrie from old DB
	ReadTrieNodeFromOld(hash common.ExtHash) ([]byte, error)
	HasTrieNodeFromOld(hash common.ExtHash) (bool, error)
	HasCodeWithPrefixFromOld(hash common.Hash) bool
	ReadPreimageFromOld(hash common.Hash) []byte

	// Write StateTrie
	WriteTrieNode(hash common.ExtHash, node []byte)
	PutTrieNodeToBatch(batch Batch, hash common.ExtHash, node []byte)
	DeleteTrieNode(hash common.ExtHash)
	WritePreimages(number uint64, preimages map[common.Hash][]byte)

	// Trie pruning
	ReadPruningEnabled() bool
	WritePruningEnabled()
	DeletePruningEnabled()

	WritePruningMarks(marks []PruningMark)
	ReadPruningMarks(startNumber, endNumber uint64) []PruningMark
	DeletePruningMarks(marks []PruningMark)
	PruneTrieNodes(marks []PruningMark)
	WriteLastPrunedBlockNumber(blockNumber uint64)
	ReadLastPrunedBlockNumber() (uint64, error)

	// from accessors_indexes.go
	ReadTxLookupEntry(hash common.Hash) (common.Hash, uint64, uint64)
	WriteTxLookupEntries(block *types.Block)
	WriteAndCacheTxLookupEntries(block *types.Block) error
	PutTxLookupEntriesToBatch(batch Batch, block *types.Block)
	DeleteTxLookupEntry(hash common.Hash)

	ReadTxAndLookupInfo(hash common.Hash) (*types.Transaction, common.Hash, uint64, uint64)

	NewSenderTxHashToTxHashBatch() Batch
	PutSenderTxHashToTxHashToBatch(batch Batch, senderTxHash, txHash common.Hash)
	ReadTxHashFromSenderTxHash(senderTxHash common.Hash) common.Hash

	ReadBloomBits(bloomBitsKey []byte) ([]byte, error)
	WriteBloomBits(bloomBitsKey []byte, bits []byte)

	ReadValidSections() ([]byte, error)
	WriteValidSections(encodedSections []byte)

	ReadSectionHead(encodedSection []byte) ([]byte, error)
	WriteSectionHead(encodedSection []byte, hash common.Hash)
	DeleteSectionHead(encodedSection []byte)

	// from accessors_metadata.go
	ReadDatabaseVersion() *uint64
	WriteDatabaseVersion(version uint64)

	ReadChainConfig(hash common.Hash) *params.ChainConfig
	WriteChainConfig(hash common.Hash, cfg *params.ChainConfig)

	// from accessors_snapshot.go
	ReadSnapshotJournal() []byte
	WriteSnapshotJournal(journal []byte)
	DeleteSnapshotJournal()

	ReadSnapshotGenerator() []byte
	WriteSnapshotGenerator(generator []byte)
	DeleteSnapshotGenerator()

	ReadSnapshotDisabled() bool
	WriteSnapshotDisabled()
	DeleteSnapshotDisabled()

	ReadSnapshotRecoveryNumber() *uint64
	WriteSnapshotRecoveryNumber(number uint64)
	DeleteSnapshotRecoveryNumber()

	ReadSnapshotSyncStatus() []byte
	WriteSnapshotSyncStatus(status []byte)
	DeleteSnapshotSyncStatus()

	ReadSnapshotRoot() common.Hash
	WriteSnapshotRoot(root common.Hash)
	DeleteSnapshotRoot()

	ReadAccountSnapshot(hash common.Hash) []byte
	WriteAccountSnapshot(hash common.Hash, entry []byte)
	DeleteAccountSnapshot(hash common.Hash)

	ReadStorageSnapshot(accountHash, storageHash common.Hash) []byte
	WriteStorageSnapshot(accountHash, storageHash common.Hash, entry []byte)
	DeleteStorageSnapshot(accountHash, storageHash common.Hash)

	NewSnapshotDBIterator(prefix []byte, start []byte) Iterator

	NewSnapshotDBBatch() SnapshotDBBatch

	// below operations are used in parent chain side, not child chain side.
	WriteChildChainTxHash(ccBlockHash common.Hash, ccTxHash common.Hash)
	ConvertChildChainBlockHashToParentChainTxHash(scBlockHash common.Hash) common.Hash

	WriteLastIndexedBlockNumber(blockNum uint64)
	GetLastIndexedBlockNumber() uint64

	// below operations are used in child chain side, not parent chain side.
	WriteAnchoredBlockNumber(blockNum uint64)
	ReadAnchoredBlockNumber() uint64

	WriteReceiptFromParentChain(blockHash common.Hash, receipt *types.Receipt)
	ReadReceiptFromParentChain(blockHash common.Hash) *types.Receipt

	WriteHandleTxHashFromRequestTxHash(rTx, hTx common.Hash)
	ReadHandleTxHashFromRequestTxHash(rTx common.Hash) common.Hash

	WriteParentOperatorFeePayer(feePayer common.Address)
	WriteChildOperatorFeePayer(feePayer common.Address)
	ReadParentOperatorFeePayer() common.Address
	ReadChildOperatorFeePayer() common.Address

	// cacheManager related functions.
	ClearHeaderChainCache()
	ClearBlockChainCache()
	ReadTxAndLookupInfoInCache(hash common.Hash) (*types.Transaction, common.Hash, uint64, uint64)
	ReadBlockReceiptsInCache(blockHash common.Hash) types.Receipts
	ReadTxReceiptInCache(txHash common.Hash) *types.Receipt

	// snapshot in clique(ConsensusClique) consensus
	WriteCliqueSnapshot(snapshotBlockHash common.Hash, encodedSnapshot []byte)
	ReadCliqueSnapshot(snapshotBlockHash common.Hash) ([]byte, error)

	// Governance related functions
	WriteGovernance(data map[string]interface{}, num uint64) error
	WriteGovernanceIdx(num uint64) error
	ReadGovernance(num uint64) (map[string]interface{}, error)
	ReadRecentGovernanceIdx(count int) ([]uint64, error)
	ReadGovernanceAtNumber(num uint64, epoch uint64) (uint64, map[string]interface{}, error)
	WriteGovernanceState(b []byte)
	ReadGovernanceState() ([]byte, error)
	DeleteGovernance(num uint64)

	// StakingInfo related functions
	ReadStakingInfo(blockNum uint64) ([]byte, error)
	WriteStakingInfo(blockNum uint64, stakingInfo []byte) error
	HasStakingInfo(blockNum uint64) (bool, error)
	DeleteStakingInfo(blockNum uint64)

	// TotalSupply checkpoint functions
	ReadSupplyCheckpoint(blockNum uint64) *SupplyCheckpoint
	WriteSupplyCheckpoint(blockNum uint64, checkpoint *SupplyCheckpoint)
	DeleteSupplyCheckpoint(blockNum uint64)
	ReadLastSupplyCheckpointNumber() uint64
	WriteLastSupplyCheckpointNumber(blockNum uint64)

	// DB migration related function
	StartDBMigration(DBManager) error

	// ChainDataFetcher checkpoint function
	WriteChainDataFetcherCheckpoint(checkpoint uint64)
	ReadChainDataFetcherCheckpoint() (uint64, error)

	TryCatchUpWithPrimary() error

	Stat(string) (string, error)
	Compact([]byte, []byte) error
	// contains filtered or unexported methods
}

func NewDBManager

func NewDBManager(dbc *DBConfig) DBManager

NewDBManager returns a DBManager interface. If SingleDB is false, each Database will have its own physical DB; otherwise, all Databases will share one common DB.

func NewLevelDBManagerForTest

func NewLevelDBManagerForTest(dbc *DBConfig, levelDBOption *opt.Options) (DBManager, error)

NewLevelDBManagerForTest returns a DBManager consisting only of LevelDB. It also accepts a LevelDB option, opt.Options.
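A sketch of test usage, assuming the standard library testing package; fields of DBConfig other than Dir and DBType are left at their zero values.

func TestLevelDBManager(t *testing.T) {
	dbc := &database.DBConfig{Dir: t.TempDir(), DBType: database.LevelDB}
	dbm, err := database.NewLevelDBManagerForTest(dbc, database.GetDefaultLevelDBOption())
	if err != nil {
		t.Fatal(err)
	}
	defer dbm.Close()
}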

func NewMemoryDBManager

func NewMemoryDBManager() DBManager

type DBType

type DBType string

func (DBType) ToValid

func (db DBType) ToValid() DBType

ToValid converts a DBType to a valid one. If it cannot be converted, "" is returned.
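A sketch of validating a database type taken from configuration; validateDBType and cfgValue are hypothetical.

func validateDBType(cfgValue string) (database.DBType, error) {
	dbType := database.DBType(cfgValue).ToValid()
	if dbType == "" {
		return "", fmt.Errorf("unsupported database type: %q", cfgValue)
	}
	return dbType, nil
}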

type Database

type Database interface {
	KeyValueWriter
	KeyValueStater
	Compacter

	Get(key []byte) ([]byte, error)
	Has(key []byte) (bool, error)
	Close()
	NewBatch() Batch
	Type() DBType
	Meter(prefix string)
	Iteratee

	TryCatchUpWithPrimary() error
}

Database wraps all database operations. All methods are safe for concurrent use.
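A sketch exercising the Database interface against the in-memory implementation (MemDB, documented below); demoDatabase is a hypothetical helper.

func demoDatabase() error {
	var db database.Database = database.NewMemDB() // *MemDB satisfies Database
	defer db.Close()

	if err := db.Put([]byte("key"), []byte("value")); err != nil {
		return err
	}
	if ok, err := db.Has([]byte("key")); err != nil || !ok {
		return fmt.Errorf("expected key to be present: %v", err)
	}
	val, err := db.Get([]byte("key"))
	if err != nil {
		return err
	}
	fmt.Printf("value: %s\n", val)
	return nil
}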

func NewDynamoDB

func NewDynamoDB(config *DynamoDBConfig) (Database, error)

NewDynamoDB creates either dynamoDB or dynamoDBReadOnly depending on config.ReadOnly.

func NewRocksDB

func NewRocksDB(path string, config *RocksDBConfig) (Database, error)

type DynamoDBConfig

type DynamoDBConfig struct {
	TableName          string
	Region             string // AWS region
	Endpoint           string // Where DynamoDB resides (used to specify the localstack endpoint in tests)
	S3Endpoint         string // Where S3 resides
	IsProvisioned      bool   // Billing mode
	ReadCapacityUnits  int64  // read capacity when provisioned
	WriteCapacityUnits int64  // write capacity when provisioned
	ReadOnly           bool   // disables write
	PerfCheck          bool
}

func GetDefaultDynamoDBConfig

func GetDefaultDynamoDBConfig() *DynamoDBConfig

GetDefaultDynamoDBConfig returns a DynamoDB config for an actual AWS DynamoDB test.

If you use this config, you will be charged for what you use. You need to set AWS credentials to access DynamoDB.

$ export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
$ export AWS_SECRET_ACCESS_KEY=YOUR_SECRET

type DynamoData

type DynamoData struct {
	Key []byte `json:"Key" dynamodbav:"Key"`
	Val []byte `json:"Val" dynamodbav:"Val"`
}

type Iteratee

type Iteratee interface {
	// NewIterator creates a binary-alphabetical iterator over a subset
	// of database content with a particular key prefix, starting at a particular
	// initial key (or after, if it does not exist).
	//
	// Note: This method assumes that the prefix is NOT part of the start, so there's
	// no need for the caller to prepend the prefix to the start.
	NewIterator(prefix []byte, start []byte) Iterator
}

Iteratee wraps the NewIterator methods of a backing data store.

type Iterator

type Iterator interface {
	// Next moves the iterator to the next key/value pair. It returns whether the
	// iterator is exhausted.
	Next() bool

	// Error returns any accumulated error. Exhausting all the key/value pairs
	// is not considered to be an error.
	Error() error

	// Key returns the key of the current key/value pair, or nil if done. The caller
	// should not modify the contents of the returned slice, and its contents may
	// change on the next call to Next.
	Key() []byte

	// Value returns the value of the current key/value pair, or nil if done. The
	// caller should not modify the contents of the returned slice, and its contents
	// may change on the next call to Next.
	Value() []byte

	// Release releases associated resources. Release should always succeed and can
	// be called multiple times without causing error.
	Release()
}

Iterator iterates over a database's key/value pairs in ascending key order.

When it encounters an error, any seek will return false and will yield no key/value pairs. The error can be queried by calling the Error method. Calling Release is still necessary.

An iterator must be released after use, but it is not necessary to read an iterator until exhaustion. An iterator is not safe for concurrent use, but it is safe to use multiple iterators concurrently.
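A sketch of the iterator idiom implied above: release the iterator and check Error after the loop; countWithPrefix is a hypothetical helper.

func countWithPrefix(db database.Database, prefix []byte) (int, error) {
	it := db.NewIterator(prefix, nil)
	defer it.Release() // always release, even on early return

	n := 0
	for it.Next() {
		// it.Key() and it.Value() are only valid until the next call to Next.
		n++
	}
	return n, it.Error()
}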

type KeyValueStater

type KeyValueStater interface {
	// Stat returns a particular internal stat of the database.
	Stat(property string) (string, error)
}

KeyValueStater wraps the Stat method of a backing data store.

type KeyValueWriter

type KeyValueWriter interface {
	// Put inserts the given value into the key-value data store.
	Put(key []byte, value []byte) error

	// Delete removes the key from the key-value data store.
	Delete(key []byte) error
}

KeyValueWriter wraps the Put and Delete methods of a backing data store.

type LevelDBCompressionType

type LevelDBCompressionType uint8
const (
	AllNoCompression LevelDBCompressionType = iota
	ReceiptOnlySnappyCompression
	StateTrieOnlyNoCompression
	AllSnappyCompression
)

type MemDB

type MemDB struct {
	// contains filtered or unexported fields
}

This is a test memory database. Do not use it in production; it is not persisted.

func NewMemDB

func NewMemDB() *MemDB

func NewMemDBWithCap

func NewMemDBWithCap(size int) *MemDB

func (*MemDB) Close

func (db *MemDB) Close()

Close deallocates the internal map and ensures any subsequent data access operation fails with an error.

func (*MemDB) Compact

func (db *MemDB) Compact(start []byte, limit []byte) error

Compact is not supported on a memory database, but there's no need either as a memory database doesn't waste space anyway.

func (*MemDB) Delete

func (db *MemDB) Delete(key []byte) error

Delete removes the key from the key-value store.

func (*MemDB) Get

func (db *MemDB) Get(key []byte) ([]byte, error)

Get retrieves the given key if it's present in the key-value store.

func (*MemDB) Has

func (db *MemDB) Has(key []byte) (bool, error)

Has reports whether a key is present in the key-value store.

func (*MemDB) Keys

func (db *MemDB) Keys() [][]byte

func (*MemDB) Len

func (db *MemDB) Len() int

Len returns the number of entries currently present in the memory database.

Note: this method is only used for testing (i.e. not generally public) and does not have explicit checks for closedness, to allow simpler testing code.

func (*MemDB) Meter

func (db *MemDB) Meter(prefix string)

func (*MemDB) NewBatch

func (db *MemDB) NewBatch() Batch

func (*MemDB) NewIterator

func (db *MemDB) NewIterator(prefix []byte, start []byte) Iterator

NewIterator creates a binary-alphabetical iterator over a subset of database content with a particular key prefix, starting at a particular initial key (or after, if it does not exist).

func (*MemDB) Put

func (db *MemDB) Put(key []byte, value []byte) error

Put inserts the given value into the key-value store.

func (*MemDB) Stat

func (db *MemDB) Stat(property string) (string, error)

Stat returns a particular internal stat of the database.

func (*MemDB) TryCatchUpWithPrimary

func (db *MemDB) TryCatchUpWithPrimary() error

func (*MemDB) Type

func (db *MemDB) Type() DBType

type PruningMark

type PruningMark struct {
	Number uint64
	Hash   common.ExtHash
}

type RocksDBConfig

type RocksDBConfig struct {
	Secondary                 bool
	DumpMallocStat            bool
	DisableMetrics            bool
	CacheSize                 uint64
	CompressionType           string
	BottommostCompressionType string
	FilterPolicy              string
	MaxOpenFiles              int
	CacheIndexAndFilter       bool
}

func GetDefaultRocksDBConfig

func GetDefaultRocksDBConfig() *RocksDBConfig

type SnapshotDBBatch

type SnapshotDBBatch interface {
	Batch

	WriteSnapshotRoot(root common.Hash)
	DeleteSnapshotRoot()

	WriteAccountSnapshot(hash common.Hash, entry []byte)
	DeleteAccountSnapshot(hash common.Hash)

	WriteStorageSnapshot(accountHash, storageHash common.Hash, entry []byte)
	DeleteStorageSnapshot(accountHash, storageHash common.Hash)

	WriteSnapshotJournal(journal []byte)
	DeleteSnapshotJournal()

	WriteSnapshotGenerator(generator []byte)
	DeleteSnapshotGenerator()

	WriteSnapshotDisabled()
	DeleteSnapshotDisabled()

	WriteSnapshotRecoveryNumber(number uint64)
	DeleteSnapshotRecoveryNumber()
}
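A sketch of batching snapshot writes through SnapshotDBBatch, assuming a DBManager and pre-built snapshot entries; writeSnapshotEntries is a hypothetical helper.

func writeSnapshotEntries(dbm database.DBManager, root common.Hash, accounts map[common.Hash][]byte) error {
	sb := dbm.NewSnapshotDBBatch()
	defer sb.Release()

	sb.WriteSnapshotRoot(root)
	for hash, entry := range accounts {
		sb.WriteAccountSnapshot(hash, entry)
	}
	return sb.Write() // flush all queued snapshot writes at once
}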

type SupplyCheckpoint added in v1.0.2

type SupplyCheckpoint struct {
	Minted   *big.Int
	BurntFee *big.Int
}

func (*SupplyCheckpoint) Copy added in v1.0.2

type TransactionLookup

type TransactionLookup struct {
	Tx *types.Transaction
	*TxLookupEntry
}

type TxLookupEntry

type TxLookupEntry struct {
	BlockHash  common.Hash
	BlockIndex uint64
	Index      uint64
}

TxLookupEntry is positional metadata that helps look up the data content of a transaction or receipt given only its hash.
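A sketch of resolving a transaction's position via DBManager.ReadTxLookupEntry; findTransactionBlock is a hypothetical helper, and treating a zero block hash as "not found" is an assumption.

func findTransactionBlock(dbm database.DBManager, txHash common.Hash) (blockHash common.Hash, blockIndex uint64, found bool) {
	blockHash, blockIndex, _ = dbm.ReadTxLookupEntry(txHash)
	if blockHash == (common.Hash{}) {
		return common.Hash{}, 0, false // assumed: zero hash means no entry stored
	}
	return blockHash, blockIndex, true
}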
