Documentation ¶
Overview ¶
Package database implements various types of databases used in Klaytn. This package is used to read/write data from/to the persistent layer.
Overview of database package ¶
DBManager is the interface used by consumers of the database package; databaseManager is its implementation. databaseManager contains a cacheManager and a list of Database interfaces. cacheManager caches data stored in the persistent layer to reduce direct access to it. Database is the interface for persistent-layer implementations; there are currently five: levelDB, memDB, badgerDB, dynamoDB, and shardedDB.
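As a minimal sketch of how a consumer uses the package (import paths assume the github.com/klaytn/klaytn module layout), all reads and writes go through DBManager rather than through a Database directly:

    package main

    import (
        "fmt"

        "github.com/klaytn/klaytn/common"
        "github.com/klaytn/klaytn/storage/database"
    )

    func main() {
        // NewMemoryDBManager returns a DBManager backed by MemDB, handy for tests.
        dbm := database.NewMemoryDBManager()
        defer dbm.Close()

        // Consumers go through the DBManager, never a Database implementation.
        hash := common.HexToHash("0x1234")
        dbm.WriteCanonicalHash(hash, 7)
        fmt.Println(dbm.ReadCanonicalHash(7) == hash) // true
    }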
Source Files
- badger_database.go : implementation of badgerDB, which wraps github.com/dgraph-io/badger
- cache_manager.go : implementation of cacheManager, which manages cache layer over persistent layer
- db_manager.go : contains DBManager and databaseManager
- dynamodb.go : implementation of dynamoDB, which wraps github.com/aws/aws-sdk-go/service/dynamodb
- interface.go : interfaces used outside database package
- leveldb_database.go : implementation of levelDB, which wraps github.com/syndtr/goleveldb
- memory_database.go : implementation of MemDB, which wraps Go's native map structure
- metrics.go : metrics used in database package, mostly related to cacheManager
- sharded_database.go : implementation of shardedDB, which wraps a list of Database interface
- schema.go : prefixes and suffixes for database keys and database key generating functions
Index ¶
- Constants
- Variables
- func AccountSnapshotKey(hash common.Hash) []byte
- func BloomBitsKey(bit uint, section uint64, hash common.Hash) []byte
- func CodeKey(hash common.Hash) []byte
- func GetDefaultLevelDBOption() *opt.Options
- func GetOpenFilesLimit() int
- func IsCodeKey(key []byte) (bool, []byte)
- func IsPow2(num uint) bool
- func NewBadgerDB(dbDir string) (*badgerDB, error)
- func NewLevelDB(dbc *DBConfig, entryType DBEntryType) (*levelDB, error)
- func NewLevelDBWithOption(dbPath string, ldbOption *opt.Options) (*levelDB, error)
- func SenderTxHashToTxHashKey(senderTxHash common.Hash) []byte
- func StorageSnapshotKey(accountHash, storageHash common.Hash) []byte
- func StorageSnapshotsKey(accountHash common.Hash) []byte
- func TrieNodeKey(hash common.ExtHash) []byte
- func TxLookupKey(hash common.Hash) []byte
- func WriteBatches(batches ...Batch) (int, error)
- func WriteBatchesOverThreshold(batches ...Batch) (int, error)
- func WriteBatchesParallel(batches ...Batch) (int, error)
- type Batch
- type Batcher
- type Compacter
- type CustomRetryer
- type DBConfig
- type DBEntryType
- type DBManager
- type DBType
- type Database
- type DynamoDBConfig
- type DynamoData
- type Iteratee
- type Iterator
- type KeyValueStater
- type KeyValueWriter
- type LevelDBCompressionType
- type MemDB
- func (db *MemDB) Close()
- func (db *MemDB) Compact(start []byte, limit []byte) error
- func (db *MemDB) Delete(key []byte) error
- func (db *MemDB) Get(key []byte) ([]byte, error)
- func (db *MemDB) Has(key []byte) (bool, error)
- func (db *MemDB) Keys() [][]byte
- func (db *MemDB) Len() int
- func (db *MemDB) Meter(prefix string)
- func (db *MemDB) NewBatch() Batch
- func (db *MemDB) NewIterator(prefix []byte, start []byte) Iterator
- func (db *MemDB) Put(key []byte, value []byte) error
- func (db *MemDB) Stat(property string) (string, error)
- func (db *MemDB) TryCatchUpWithPrimary() error
- func (db *MemDB) Type() DBType
- type PruningMark
- type RocksDBConfig
- type SnapshotDBBatch
- type TransactionLookup
- type TxLookupEntry
Constants ¶
    const (
        LevelDB   DBType = "LevelDB"
        RocksDB          = "RocksDB"
        BadgerDB         = "BadgerDB"
        MemoryDB         = "MemoryDB"
        DynamoDB         = "DynamoDBS3"
        ShardedDB        = "ShardedDB"
    )
const IdealBatchSize = 100 * 1024
IdealBatchSize defines the ideal amount of data a batch should buffer before it is written.
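For example, a writer can compare Batch.ValueSize against IdealBatchSize to bound memory while streaming writes. The helper below is a hypothetical sketch, not part of the package:

    // flushInChunks streams key/value pairs into db, flushing the batch
    // whenever its buffered size passes IdealBatchSize.
    func flushInChunks(db database.Database, keys, vals [][]byte) error {
        batch := db.NewBatch()
        for i := range keys {
            if err := batch.Put(keys[i], vals[i]); err != nil {
                return err
            }
            if batch.ValueSize() >= database.IdealBatchSize {
                if err := batch.Write(); err != nil {
                    return err
                }
                batch.Reset()
            }
        }
        return batch.Write() // flush the remainder
    }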
    const (
        MinOpenFilesCacheCapacity = 16
    )

    const (
        WorkerNum = 10
    )

WorkerNum is the number of workers used for parallel batch writes.
Variables ¶
    var (
        HeadBlockQ backupHashQueue
        FastBlockQ backupHashQueue
    )
The fields below define the low level database schema prefixing.

    var (
        // SnapshotGeneratorKey tracks the snapshot generation marker across restarts.
        SnapshotGeneratorKey = []byte("SnapshotGenerator")

        SnapshotAccountPrefix = []byte("a") // SnapshotAccountPrefix + account hash -> account trie value
        SnapshotStoragePrefix = []byte("o") // SnapshotStoragePrefix + account hash + storage hash -> storage trie value

        // Chain index prefixes (use `i` + single byte to avoid mixing data types).
        BloomBitsIndexPrefix = []byte("iB") // BloomBitsIndexPrefix is the data table of a chain indexer to track its progress
    )
var ErrRocksDBNotBuilt = errors.New("rocksdb is not built")
var OpenFileLimit = 64
Functions ¶
func AccountSnapshotKey ¶ added in v1.8.0
AccountSnapshotKey = SnapshotAccountPrefix + hash
func BloomBitsKey ¶
bloomBitsKey = bloomBitsPrefix + bit (uint16 big endian) + section (uint64 big endian) + hash
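A hypothetical helper that reproduces this layout (the prefix is passed in because the real bloomBitsPrefix is unexported; assumes "encoding/binary" and the common package are imported):

    // makeBloomBitsKey reproduces the prefix + 2-byte bit + 8-byte section +
    // 32-byte hash layout described above.
    func makeBloomBitsKey(bloomBitsPrefix []byte, bit uint, section uint64, hash common.Hash) []byte {
        key := append(append([]byte{}, bloomBitsPrefix...), make([]byte, 10)...)
        key = append(key, hash.Bytes()...)
        binary.BigEndian.PutUint16(key[len(bloomBitsPrefix):], uint16(bit))
        binary.BigEndian.PutUint64(key[len(bloomBitsPrefix)+2:], section)
        return key
    }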
func GetDefaultLevelDBOption ¶
GetDefaultLevelDBOption returns the default LevelDB options, copied from defaultLevelDBOption, whose fields are set to minimum values.
func GetOpenFilesLimit ¶
func GetOpenFilesLimit() int
GetOpenFilesLimit raises the number of file handles allowed per process for Klaytn and returns half of the allowance to assign to the database.
func IsCodeKey ¶ added in v1.9.0
IsCodeKey reports whether the given byte slice is the key of contract code; if so, it also returns the raw code hash.
func NewBadgerDB ¶
func NewLevelDB ¶
func NewLevelDB(dbc *DBConfig, entryType DBEntryType) (*levelDB, error)
func NewLevelDBWithOption ¶
NewLevelDBWithOption explicitly receives LevelDB option to construct a LevelDB object.
func SenderTxHashToTxHashKey ¶
func StorageSnapshotKey ¶ added in v1.8.0
StorageSnapshotKey = SnapshotStoragePrefix + account hash + storage hash
func StorageSnapshotsKey ¶ added in v1.8.0
StorageSnapshotsKey = SnapshotStoragePrefix + account hash
func TrieNodeKey ¶ added in v1.11.0
TrieNodeKey = hash32 if the node is legacy; otherwise, exthash
func WriteBatches ¶
func WriteBatchesParallel ¶ added in v1.5.0
Types ¶
type Batch ¶
    type Batch interface {
        KeyValueWriter

        // ValueSize retrieves the amount of data queued up for writing.
        ValueSize() int

        // Write flushes any accumulated data to disk.
        Write() error

        // Reset resets the Batch for reuse.
        Reset()

        // Release deallocates the RocksDB WriteBatch object, if any.
        Release()

        // Replay replays the Batch contents.
        Replay(w KeyValueWriter) error
    }
Batch is a write-only database that commits changes to its host database when Write is called. A Batch cannot be used concurrently.
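A typical lifecycle, sketched below: queue writes, flush them with Write, then optionally re-apply them into a second store via Replay (the mirror argument here is hypothetical; any KeyValueWriter works):

    func copyViaBatch(src database.Database, mirror database.KeyValueWriter) error {
        batch := src.NewBatch()
        if err := batch.Put([]byte("key"), []byte("value")); err != nil {
            return err
        }
        if err := batch.Write(); err != nil { // commit to the host database
            return err
        }
        if err := batch.Replay(mirror); err != nil { // re-apply the same writes elsewhere
            return err
        }
        batch.Release() // free any backend resources (e.g. a RocksDB WriteBatch)
        return nil
    }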
func NewStateTrieDBBatch ¶ added in v1.5.0
type Batcher ¶ added in v1.6.0
    type Batcher interface {
        // NewBatch creates a write-only database that buffers changes to its host db
        // until a final write is called.
        NewBatch() Batch
    }
Batcher wraps the NewBatch method of a backing data store.
type Compacter ¶ added in v1.12.1
    type Compacter interface {
        // Compact flattens the underlying data store for the given key range. In essence,
        // deleted and overwritten versions are discarded, and the data is rearranged to
        // reduce the cost of operations needed to access them.
        //
        // A nil start is treated as a key before all keys in the data store; a nil limit
        // is treated as a key after all keys in the data store. If both are nil, the
        // entire data store will be compacted.
        Compact(start []byte, limit []byte) error
    }
Compacter wraps the Compact method of a backing data store.
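For instance, passing nil for both bounds compacts the whole store; a minimal sketch:

    // compactAll flattens the entire key range of db; nil bounds mean
    // "before all keys" and "after all keys" respectively.
    func compactAll(db database.Database) error {
        return db.Compact(nil, nil)
    }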
type CustomRetryer ¶ added in v1.7.0
type CustomRetryer struct {
client.DefaultRetryer
}
CustomRetryer wraps the AWS SDK's built-in DefaultRetryer, adding custom features. The AWS SDK's DefaultRetryer has its own standard for retryable situations, which is not suitable when the network environment is unstable. CustomRetryer conservatively retries in all error cases because a DB failure in Klaytn is critical.
func (CustomRetryer) ShouldRetry ¶ added in v1.7.0
func (r CustomRetryer) ShouldRetry(req *request.Request) bool
ShouldRetry overrides AWS SDK's built in DefaultRetryer to retry in all error cases.
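As an illustration of where the retryer plugs in, an AWS client config can be built with aws-sdk-go's request.WithRetryer; the region and retry count below are illustrative, not package defaults:

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/client"
        "github.com/aws/aws-sdk-go/aws/request"

        "github.com/klaytn/klaytn/storage/database"
    )

    // newRetryingConfig builds an AWS client config that uses CustomRetryer.
    func newRetryingConfig() *aws.Config {
        return request.WithRetryer(
            aws.NewConfig().WithRegion("ap-northeast-2"),
            database.CustomRetryer{
                DefaultRetryer: client.DefaultRetryer{NumMaxRetries: 10},
            },
        )
    }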
type DBConfig ¶
    type DBConfig struct {
        // General configurations for all types of DB.
        Dir                 string
        DBType              DBType
        SingleDB            bool // whether DBs (such as MiscDB, headerDB, etc.) share one physical DB
        NumStateTrieShards  uint // the number of shards of the state trie DB
        ParallelDBWrite     bool
        OpenFilesLimit      int
        EnableDBPerfMetrics bool // if true, read and write performance will be logged

        // LevelDB related configurations.
        LevelDBCacheSize   int // LevelDBCacheSize = BlockCacheCapacity + WriteBuffer
        LevelDBCompression LevelDBCompressionType
        LevelDBBufferPool  bool

        // RocksDB related configurations.
        RocksDBConfig *RocksDBConfig

        // DynamoDB related configurations.
        DynamoDBConfig *DynamoDBConfig
    }
DBConfig handles database related configurations.
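A hedged sketch of a LevelDB-backed configuration; the values are illustrative, not recommended defaults, and NewDBManager is assumed to take *DBConfig:

    func openLevelDBManager() database.DBManager {
        cfg := &database.DBConfig{
            Dir:                "chaindata",
            DBType:             database.LevelDB,
            SingleDB:           false,
            NumStateTrieShards: 4,
            ParallelDBWrite:    true,
            OpenFilesLimit:     database.GetOpenFilesLimit(),
            LevelDBCacheSize:   768, // split between block cache and write buffer
            LevelDBCompression: database.AllSnappyCompression,
            LevelDBBufferPool:  true,
        }
        return database.NewDBManager(cfg)
    }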
type DBEntryType ¶
type DBEntryType uint8
    const (
        MiscDB DBEntryType = iota // Do not move MiscDB, which holds the paths of the other DBs.
        BodyDB
        ReceiptsDB
        StateTrieDB
        StateTrieMigrationDB
        TxLookUpEntryDB
        SnapshotDB
    )
func (DBEntryType) String ¶ added in v1.6.0
func (et DBEntryType) String() string
type DBManager ¶
    type DBManager interface {
        IsParallelDBWrite() bool
        IsSingle() bool
        InMigration() bool
        MigrationBlockNumber() uint64
        Close()
        NewBatch(dbType DBEntryType) Batch
        GetMemDB() *MemDB
        GetDBConfig() *DBConfig
        CreateMigrationDBAndSetStatus(blockNum uint64) error
        FinishStateMigration(succeed bool) chan struct{}
        GetStateTrieDB() Database
        GetStateTrieMigrationDB() Database
        GetMiscDB() Database
        GetSnapshotDB() Database

        // from accessors_chain.go
        ReadCanonicalHash(number uint64) common.Hash
        WriteCanonicalHash(hash common.Hash, number uint64)
        DeleteCanonicalHash(number uint64)
        ReadHeadHeaderHash() common.Hash
        WriteHeadHeaderHash(hash common.Hash)
        ReadHeadBlockHash() common.Hash
        ReadHeadBlockBackupHash() common.Hash
        WriteHeadBlockHash(hash common.Hash)
        ReadHeadFastBlockHash() common.Hash
        ReadHeadFastBlockBackupHash() common.Hash
        WriteHeadFastBlockHash(hash common.Hash)
        ReadFastTrieProgress() uint64
        WriteFastTrieProgress(count uint64)
        HasHeader(hash common.Hash, number uint64) bool
        ReadHeader(hash common.Hash, number uint64) *types.Header
        ReadHeaderRLP(hash common.Hash, number uint64) rlp.RawValue
        WriteHeader(header *types.Header)
        DeleteHeader(hash common.Hash, number uint64)
        ReadHeaderNumber(hash common.Hash) *uint64
        HasBody(hash common.Hash, number uint64) bool
        ReadBody(hash common.Hash, number uint64) *types.Body
        ReadBodyInCache(hash common.Hash) *types.Body
        ReadBodyRLP(hash common.Hash, number uint64) rlp.RawValue
        ReadBodyRLPByHash(hash common.Hash) rlp.RawValue
        WriteBody(hash common.Hash, number uint64, body *types.Body)
        PutBodyToBatch(batch Batch, hash common.Hash, number uint64, body *types.Body)
        WriteBodyRLP(hash common.Hash, number uint64, rlp rlp.RawValue)
        DeleteBody(hash common.Hash, number uint64)
        ReadTd(hash common.Hash, number uint64) *big.Int
        WriteTd(hash common.Hash, number uint64, td *big.Int)
        DeleteTd(hash common.Hash, number uint64)
        ReadReceipt(txHash common.Hash) (*types.Receipt, common.Hash, uint64, uint64)
        ReadReceipts(blockHash common.Hash, number uint64) types.Receipts
        ReadReceiptsByBlockHash(hash common.Hash) types.Receipts
        WriteReceipts(hash common.Hash, number uint64, receipts types.Receipts)
        PutReceiptsToBatch(batch Batch, hash common.Hash, number uint64, receipts types.Receipts)
        DeleteReceipts(hash common.Hash, number uint64)
        ReadBlock(hash common.Hash, number uint64) *types.Block
        ReadBlockByHash(hash common.Hash) *types.Block
        ReadBlockByNumber(number uint64) *types.Block
        HasBlock(hash common.Hash, number uint64) bool
        WriteBlock(block *types.Block)
        DeleteBlock(hash common.Hash, number uint64)
        ReadBadBlock(hash common.Hash) *types.Block
        WriteBadBlock(block *types.Block)
        ReadAllBadBlocks() ([]*types.Block, error)
        DeleteBadBlocks()
        FindCommonAncestor(a, b *types.Header) *types.Header
        ReadIstanbulSnapshot(hash common.Hash) ([]byte, error)
        WriteIstanbulSnapshot(hash common.Hash, blob []byte) error
        DeleteIstanbulSnapshot(hash common.Hash)
        WriteMerkleProof(key, value []byte)

        // Bytecodes related operations
        ReadCode(hash common.Hash) []byte
        ReadCodeWithPrefix(hash common.Hash) []byte
        WriteCode(hash common.Hash, code []byte)
        PutCodeToBatch(batch Batch, hash common.Hash, code []byte)
        DeleteCode(hash common.Hash)
        HasCode(hash common.Hash) bool

        // State Trie Database related operations
        ReadTrieNode(hash common.ExtHash) ([]byte, error)
        HasTrieNode(hash common.ExtHash) (bool, error)
        HasCodeWithPrefix(hash common.Hash) bool
        ReadPreimage(hash common.Hash) []byte

        // Read StateTrie from new DB
        ReadTrieNodeFromNew(hash common.ExtHash) ([]byte, error)
        HasTrieNodeFromNew(hash common.ExtHash) (bool, error)
        HasCodeWithPrefixFromNew(hash common.Hash) bool
        ReadPreimageFromNew(hash common.Hash) []byte

        // Read StateTrie from old DB
        ReadTrieNodeFromOld(hash common.ExtHash) ([]byte, error)
        HasTrieNodeFromOld(hash common.ExtHash) (bool, error)
        HasCodeWithPrefixFromOld(hash common.Hash) bool
        ReadPreimageFromOld(hash common.Hash) []byte

        // Write StateTrie
        WriteTrieNode(hash common.ExtHash, node []byte)
        PutTrieNodeToBatch(batch Batch, hash common.ExtHash, node []byte)
        DeleteTrieNode(hash common.ExtHash)
        WritePreimages(number uint64, preimages map[common.Hash][]byte)

        // Trie pruning
        ReadPruningEnabled() bool
        WritePruningEnabled()
        DeletePruningEnabled()
        WritePruningMarks(marks []PruningMark)
        ReadPruningMarks(startNumber, endNumber uint64) []PruningMark
        DeletePruningMarks(marks []PruningMark)
        PruneTrieNodes(marks []PruningMark)
        WriteLastPrunedBlockNumber(blockNumber uint64)
        ReadLastPrunedBlockNumber() (uint64, error)

        // from accessors_indexes.go
        ReadTxLookupEntry(hash common.Hash) (common.Hash, uint64, uint64)
        WriteTxLookupEntries(block *types.Block)
        WriteAndCacheTxLookupEntries(block *types.Block) error
        PutTxLookupEntriesToBatch(batch Batch, block *types.Block)
        DeleteTxLookupEntry(hash common.Hash)
        ReadTxAndLookupInfo(hash common.Hash) (*types.Transaction, common.Hash, uint64, uint64)
        NewSenderTxHashToTxHashBatch() Batch
        PutSenderTxHashToTxHashToBatch(batch Batch, senderTxHash, txHash common.Hash) error
        ReadTxHashFromSenderTxHash(senderTxHash common.Hash) common.Hash
        ReadBloomBits(bloomBitsKey []byte) ([]byte, error)
        WriteBloomBits(bloomBitsKey []byte, bits []byte) error
        ReadValidSections() ([]byte, error)
        WriteValidSections(encodedSections []byte)
        ReadSectionHead(encodedSection []byte) ([]byte, error)
        WriteSectionHead(encodedSection []byte, hash common.Hash)
        DeleteSectionHead(encodedSection []byte)

        // from accessors_metadata.go
        ReadDatabaseVersion() *uint64
        WriteDatabaseVersion(version uint64)
        ReadChainConfig(hash common.Hash) *params.ChainConfig
        WriteChainConfig(hash common.Hash, cfg *params.ChainConfig)

        // from accessors_snapshot.go
        ReadSnapshotJournal() []byte
        WriteSnapshotJournal(journal []byte)
        DeleteSnapshotJournal()
        ReadSnapshotGenerator() []byte
        WriteSnapshotGenerator(generator []byte)
        DeleteSnapshotGenerator()
        ReadSnapshotDisabled() bool
        WriteSnapshotDisabled()
        DeleteSnapshotDisabled()
        ReadSnapshotRecoveryNumber() *uint64
        WriteSnapshotRecoveryNumber(number uint64)
        DeleteSnapshotRecoveryNumber()
        ReadSnapshotSyncStatus() []byte
        WriteSnapshotSyncStatus(status []byte)
        DeleteSnapshotSyncStatus()
        ReadSnapshotRoot() common.Hash
        WriteSnapshotRoot(root common.Hash)
        DeleteSnapshotRoot()
        ReadAccountSnapshot(hash common.Hash) []byte
        WriteAccountSnapshot(hash common.Hash, entry []byte)
        DeleteAccountSnapshot(hash common.Hash)
        ReadStorageSnapshot(accountHash, storageHash common.Hash) []byte
        WriteStorageSnapshot(accountHash, storageHash common.Hash, entry []byte)
        DeleteStorageSnapshot(accountHash, storageHash common.Hash)
        NewSnapshotDBIterator(prefix []byte, start []byte) Iterator
        NewSnapshotDBBatch() SnapshotDBBatch

        // below operations are used in parent chain side, not child chain side.
        WriteChildChainTxHash(ccBlockHash common.Hash, ccTxHash common.Hash)
        ConvertChildChainBlockHashToParentChainTxHash(scBlockHash common.Hash) common.Hash
        WriteLastIndexedBlockNumber(blockNum uint64)
        GetLastIndexedBlockNumber() uint64

        // below operations are used in child chain side, not parent chain side.
        WriteAnchoredBlockNumber(blockNum uint64)
        ReadAnchoredBlockNumber() uint64
        WriteReceiptFromParentChain(blockHash common.Hash, receipt *types.Receipt)
        ReadReceiptFromParentChain(blockHash common.Hash) *types.Receipt
        WriteHandleTxHashFromRequestTxHash(rTx, hTx common.Hash)
        ReadHandleTxHashFromRequestTxHash(rTx common.Hash) common.Hash
        WriteParentOperatorFeePayer(feePayer common.Address)
        WriteChildOperatorFeePayer(feePayer common.Address)
        ReadParentOperatorFeePayer() common.Address
        ReadChildOperatorFeePayer() common.Address

        // cacheManager related functions.
        ClearHeaderChainCache()
        ClearBlockChainCache()
        ReadTxAndLookupInfoInCache(hash common.Hash) (*types.Transaction, common.Hash, uint64, uint64)
        ReadBlockReceiptsInCache(blockHash common.Hash) types.Receipts
        ReadTxReceiptInCache(txHash common.Hash) *types.Receipt

        // snapshot in clique(ConsensusClique) consensus
        WriteCliqueSnapshot(snapshotBlockHash common.Hash, encodedSnapshot []byte) error
        ReadCliqueSnapshot(snapshotBlockHash common.Hash) ([]byte, error)

        // Governance related functions
        WriteGovernance(data map[string]interface{}, num uint64) error
        WriteGovernanceIdx(num uint64) error
        ReadGovernance(num uint64) (map[string]interface{}, error)
        ReadRecentGovernanceIdx(count int) ([]uint64, error)
        ReadGovernanceAtNumber(num uint64, epoch uint64) (uint64, map[string]interface{}, error)
        WriteGovernanceState(b []byte) error
        ReadGovernanceState() ([]byte, error)
        DeleteGovernance(num uint64)

        // StakingInfo related functions
        ReadStakingInfo(blockNum uint64) ([]byte, error)
        WriteStakingInfo(blockNum uint64, stakingInfo []byte) error
        HasStakingInfo(blockNum uint64) (bool, error)
        DeleteStakingInfo(blockNum uint64)

        // DB migration related function
        StartDBMigration(DBManager) error

        // ChainDataFetcher checkpoint function
        WriteChainDataFetcherCheckpoint(checkpoint uint64) error
        ReadChainDataFetcherCheckpoint() (uint64, error)

        TryCatchUpWithPrimary() error
        Stat(string) (string, error)
        Compact([]byte, []byte) error
        // contains filtered or unexported methods
    }
func NewDBManager ¶
NewDBManager returns a DBManager interface. If SingleDB is false, each Database will have its own physical DB; otherwise, all Databases will share one common physical DB.
func NewLevelDBManagerForTest ¶
NewLevelDBManagerForTest returns a DBManager consisting only of LevelDBs. It also accepts a LevelDB option, opt.Options.
func NewMemoryDBManager ¶
func NewMemoryDBManager() DBManager
type Database ¶
    type Database interface {
        KeyValueWriter
        KeyValueStater
        Compacter

        Get(key []byte) ([]byte, error)
        Has(key []byte) (bool, error)
        Close()
        NewBatch() Batch
        Type() DBType
        Meter(prefix string)

        Iteratee

        TryCatchUpWithPrimary() error
    }
Database wraps all database operations. All methods are safe for concurrent use.
func NewDynamoDB ¶ added in v1.5.2
func NewDynamoDB(config *DynamoDBConfig) (Database, error)
NewDynamoDB creates either dynamoDB or dynamoDBReadOnly depending on config.ReadOnly.
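A hedged sketch of opening a DynamoDB-backed Database; the table name is a placeholder, and note that the default config targets actual AWS endpoints (see GetDefaultDynamoDBConfig below):

    func openDynamo() (database.Database, error) {
        cfg := database.GetDefaultDynamoDBConfig()
        cfg.TableName = "klaytn-chaindata" // placeholder
        cfg.ReadOnly = false               // true would yield the read-only variant
        return database.NewDynamoDB(cfg)
    }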
func NewRocksDB ¶ added in v1.11.0
func NewRocksDB(path string, config *RocksDBConfig) (Database, error)
type DynamoDBConfig ¶ added in v1.5.2
    type DynamoDBConfig struct {
        TableName          string
        Region             string // AWS region
        Endpoint           string // where DynamoDB resides (used to specify the localstack endpoint in tests)
        S3Endpoint         string // where S3 resides
        IsProvisioned      bool   // billing mode
        ReadCapacityUnits  int64  // read capacity when provisioned
        WriteCapacityUnits int64  // write capacity when provisioned
        ReadOnly           bool   // disables writes
        PerfCheck          bool
    }
func GetDefaultDynamoDBConfig ¶ added in v1.5.2
func GetDefaultDynamoDBConfig() *DynamoDBConfig
GetDefaultDynamoDBConfig returns a DynamoDB config for testing against actual AWS DynamoDB.

If you use this config, you will be charged for what you use. You need to set AWS credentials to access DynamoDB:

    $ export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
    $ export AWS_SECRET_ACCESS_KEY=YOUR_SECRET
type DynamoData ¶ added in v1.5.2
type Iteratee ¶ added in v1.5.0
    type Iteratee interface {
        // NewIterator creates a binary-alphabetical iterator over a subset
        // of database content with a particular key prefix, starting at a particular
        // initial key (or after, if it does not exist).
        //
        // Note: This method assumes that the prefix is NOT part of the start, so there's
        // no need for the caller to prepend the prefix to the start.
        NewIterator(prefix []byte, start []byte) Iterator
    }
Iteratee wraps the NewIterator methods of a backing data store.
type Iterator ¶ added in v1.5.0
    type Iterator interface {
        // Next moves the iterator to the next key/value pair. It returns whether the
        // iterator is exhausted.
        Next() bool

        // Error returns any accumulated error. Exhausting all the key/value pairs
        // is not considered to be an error.
        Error() error

        // Key returns the key of the current key/value pair, or nil if done. The caller
        // should not modify the contents of the returned slice, and its contents may
        // change on the next call to Next.
        Key() []byte

        // Value returns the value of the current key/value pair, or nil if done. The
        // caller should not modify the contents of the returned slice, and its contents
        // may change on the next call to Next.
        Value() []byte

        // Release releases associated resources. Release should always succeed and can
        // be called multiple times without causing error.
        Release()
    }
Iterator iterates over a database's key/value pairs in ascending key order.
When it encounters an error, any seek will return false and yield no key/value pairs. The error can be queried by calling the Error method. Calling Release is still necessary.
An iterator must be released after use, but it is not necessary to read an iterator until exhaustion. An iterator is not safe for concurrent use, but it is safe to use multiple iterators concurrently.
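This contract translates into the usual iteration pattern; a sketch over keys with a given prefix (assumes "fmt" and the database package are imported):

    // dumpPrefix prints every key/value pair whose key carries the given prefix.
    func dumpPrefix(db database.Database, prefix []byte) error {
        it := db.NewIterator(prefix, nil)
        defer it.Release() // release is mandatory, even on early return
        for it.Next() {
            // Key/Value may be invalidated by the next call to Next; copy to keep.
            k := append([]byte{}, it.Key()...)
            v := append([]byte{}, it.Value()...)
            fmt.Printf("%x => %x\n", k, v)
        }
        return it.Error() // exhaustion alone is not an error
    }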
type KeyValueStater ¶ added in v1.12.1
    type KeyValueStater interface {
        // Stat returns a particular internal stat of the database.
        Stat(property string) (string, error)
    }
KeyValueStater wraps the Stat method of a backing data store.
type KeyValueWriter ¶ added in v1.6.0
    type KeyValueWriter interface {
        // Put inserts the given value into the key-value data store.
        Put(key []byte, value []byte) error

        // Delete removes the key from the key-value data store.
        Delete(key []byte) error
    }
KeyValueWriter wraps the Put and Delete methods of a backing data store.
type LevelDBCompressionType ¶
type LevelDBCompressionType uint8
    const (
        AllNoCompression LevelDBCompressionType = iota
        ReceiptOnlySnappyCompression
        StateTrieOnlyNoCompression
        AllSnappyCompression
    )
type MemDB ¶
type MemDB struct {
// contains filtered or unexported fields
}
This is a test memory database. Do not use it in production; it does not get persisted.
func NewMemDBWithCap ¶
func (*MemDB) Close ¶
func (db *MemDB) Close()
Close deallocates the internal map and ensures that any subsequent data access operation fails with an error.
func (*MemDB) Compact ¶ added in v1.6.0
Compact is not supported on a memory database, but there's no need either as a memory database doesn't waste space anyway.
func (*MemDB) Len ¶
Len returns the number of entries currently present in the memory database.
Note, this method is only used for testing (i.e. not public in general) and does not have explicit checks for closed-ness to allow simpler testing code.
func (*MemDB) NewIterator ¶ added in v1.5.0
NewIterator creates a binary-alphabetical iterator over a subset of database content with a particular key prefix, starting at a particular initial key (or after, if it does not exist).
func (*MemDB) TryCatchUpWithPrimary ¶ added in v1.12.0
type PruningMark ¶ added in v1.11.0
type RocksDBConfig ¶ added in v1.11.0
    type RocksDBConfig struct {
        Secondary                 bool
        DumpMallocStat            bool
        DisableMetrics            bool
        CacheSize                 uint64
        CompressionType           string
        BottommostCompressionType string
        FilterPolicy              string
        MaxOpenFiles              int
        CacheIndexAndFilter       bool
    }
func GetDefaultRocksDBConfig ¶ added in v1.11.0
func GetDefaultRocksDBConfig() *RocksDBConfig
type SnapshotDBBatch ¶ added in v1.8.0
    type SnapshotDBBatch interface {
        Batch

        WriteSnapshotRoot(root common.Hash)
        DeleteSnapshotRoot()

        WriteAccountSnapshot(hash common.Hash, entry []byte)
        DeleteAccountSnapshot(hash common.Hash)

        WriteStorageSnapshot(accountHash, storageHash common.Hash, entry []byte)
        DeleteStorageSnapshot(accountHash, storageHash common.Hash)

        WriteSnapshotJournal(journal []byte)
        DeleteSnapshotJournal()

        WriteSnapshotGenerator(generator []byte)
        DeleteSnapshotGenerator()

        WriteSnapshotDisabled()
        DeleteSnapshotDisabled()

        WriteSnapshotRecoveryNumber(number uint64)
        DeleteSnapshotRecoveryNumber()
    }
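A hedged sketch of writing snapshot entries through the batch; the hashes and entry payload are placeholders:

    // writeSnapshotEntries queues snapshot writes and flushes them in one batch.
    func writeSnapshotEntries(dbm database.DBManager, root, acc common.Hash, entry []byte) error {
        batch := dbm.NewSnapshotDBBatch()
        defer batch.Release()

        batch.WriteSnapshotRoot(root)
        batch.WriteAccountSnapshot(acc, entry)
        return batch.Write() // flush all queued snapshot writes at once
    }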
type TransactionLookup ¶
    type TransactionLookup struct {
        Tx *types.Transaction
        *TxLookupEntry
    }
Source Files ¶
- badger_database.go
- batch.go
- cache_manager.go
- database_test_util.go
- db_manager.go
- db_manager_stakinginfo.go
- db_migration.go
- doc.go
- dynamodb.go
- dynamodb_readonly.go
- filedb.go
- interface.go
- iterator.go
- leveldb_database.go
- memory_database.go
- metrics.go
- rocksdb_database_config.go
- rocksdb_database_nobuild.go
- s3filedb.go
- schema.go
- sharded_database.go