Documentation ¶
Overview ¶
Package database implements various types of databases used in Klaytn. This package is used to read/write data from/to the persistent layer.
Overview of database package ¶
DBManager is the interface used by the consumers of the database package. databaseManager is the implementation of the DBManager interface. It contains cacheManager and a list of Database interfaces. cacheManager caches data stored in the persistent layer to reduce direct access to it. Database is the interface for persistent layer implementations. Currently there are five implementations: levelDB, memDB, badgerDB, dynamoDB, and shardedDB.
Source Files ¶
- badger_database.go : implementation of badgerDB, which wraps github.com/dgraph-io/badger
- cache_manager.go : implementation of cacheManager, which manages cache layer over persistent layer
- db_manager.go : contains DBManager and databaseManager
- dynamodb.go : implementation of dynamoDB, which wraps github.com/aws/aws-sdk-go/service/dynamodb
- interface.go : interfaces used outside database package
- leveldb_database.go : implementation of levelDB, which wraps github.com/syndtr/goleveldb
- memory_database.go : implementation of MemDB, which wraps go native map structure
- metrics.go : metrics used in database package, mostly related to cacheManager
- schema.go : prefixes and suffixes for database keys and database key generating functions
- sharded_database.go : implementation of shardedDB, which wraps a list of Database interface
Index ¶
- Constants
- Variables
- func BloomBitsKey(bit uint, section uint64, hash common.Hash) []byte
- func GetDefaultLevelDBOption() *opt.Options
- func GetOpenFilesLimit() int
- func IsPow2(num uint) bool
- func NewBadgerDB(dbDir string) (*badgerDB, error)
- func NewLevelDB(dbc *DBConfig, entryType DBEntryType) (*levelDB, error)
- func NewLevelDBWithOption(dbPath string, ldbOption *opt.Options) (*levelDB, error)
- func PutAndWriteBatchesOverThreshold(batch Batch, key, val []byte) error
- func SenderTxHashToTxHashKey(senderTxHash common.Hash) []byte
- func TxLookupKey(hash common.Hash) []byte
- func WriteBatches(batches ...Batch) (int, error)
- func WriteBatchesOverThreshold(batches ...Batch) (int, error)
- func WriteBatchesParallel(batches ...Batch) (int, error)
- type Batch
- type Batcher
- type DBConfig
- type DBEntryType
- type DBManager
- type DBType
- type Database
- type DynamoDBConfig
- type DynamoData
- type Iteratee
- type Iterator
- type KeyValueWriter
- type LevelDBCompressionType
- type MemDB
- func (db *MemDB) Close()
- func (db *MemDB) Compact(start []byte, limit []byte) error
- func (db *MemDB) Delete(key []byte) error
- func (db *MemDB) Get(key []byte) ([]byte, error)
- func (db *MemDB) Has(key []byte) (bool, error)
- func (db *MemDB) Keys() [][]byte
- func (db *MemDB) Len() int
- func (db *MemDB) Meter(prefix string)
- func (db *MemDB) NewBatch() Batch
- func (db *MemDB) NewIterator(prefix []byte, start []byte) Iterator
- func (db *MemDB) Put(key []byte, value []byte) error
- func (db *MemDB) Stat(property string) (string, error)
- func (db *MemDB) Type() DBType
- type TransactionLookup
- type TxLookupEntry
Constants ¶
const (
    LevelDB   DBType = "LevelDB"
    BadgerDB         = "BadgerDB"
    MemoryDB         = "MemoryDB"
    DynamoDB         = "DynamoDBS3"
    ShardedDB        = "ShardedDB"
)
const IdealBatchSize = 100 * 1024
IdealBatchSize defines the amount of data a batch should ideally accumulate in one write.
const (
MinOpenFilesCacheCapacity = 16
)
const WorkerNum = 10
WorkerNum is the number of workers used for concurrent batch writes.
Variables ¶
var (
    // Chain index prefixes (use `i` + single byte to avoid mixing data types).
    BloomBitsIndexPrefix = []byte("iB") // BloomBitsIndexPrefix is the data table of a chain indexer to track its progress
)
The fields below define the low-level database schema prefixing.
var OpenFileLimit = 64
Functions ¶
func BloomBitsKey ¶
func BloomBitsKey(bit uint, section uint64, hash common.Hash) []byte
bloomBitsKey = bloomBitsPrefix + bit (uint16 big endian) + section (uint64 big endian) + hash
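A rough, self-contained sketch of that layout follows; the prefix byte and the fixed-size hash type are illustrative stand-ins for the definitions in schema.go, not the package's actual values.

package main

import (
    "encoding/binary"
    "fmt"
)

// bloomBitsPrefix is a hypothetical stand-in for the real prefix defined in schema.go.
var bloomBitsPrefix = []byte("B")

// hash stands in for common.Hash.
type hash [32]byte

// bloomBitsKey assembles prefix + bit (uint16 big endian) + section (uint64 big endian) + hash.
func bloomBitsKey(bit uint, section uint64, h hash) []byte {
    key := append(append([]byte{}, bloomBitsPrefix...), make([]byte, 10)...)
    key = append(key, h[:]...)
    binary.BigEndian.PutUint16(key[len(bloomBitsPrefix):], uint16(bit))
    binary.BigEndian.PutUint64(key[len(bloomBitsPrefix)+2:], section)
    return key
}

func main() {
    fmt.Printf("%x\n", bloomBitsKey(3, 7, hash{}))
}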
func GetDefaultLevelDBOption ¶
func GetDefaultLevelDBOption() *opt.Options
GetDefaultLevelDBOption returns the default LevelDB option copied from defaultLevelDBOption. defaultLevelDBOption has fields set to minimum values.
func GetOpenFilesLimit ¶
func GetOpenFilesLimit() int
GetOpenFilesLimit raises the number of allowed file handles per process for Klaytn and returns half of the allowance to assign to the database.
func NewBadgerDB ¶
func NewBadgerDB(dbDir string) (*badgerDB, error)
func NewLevelDB ¶
func NewLevelDB(dbc *DBConfig, entryType DBEntryType) (*levelDB, error)
func NewLevelDBWithOption ¶
func NewLevelDBWithOption(dbPath string, ldbOption *opt.Options) (*levelDB, error)
NewLevelDBWithOption explicitly receives a LevelDB option to construct a levelDB object.
func SenderTxHashToTxHashKey ¶
func SenderTxHashToTxHashKey(senderTxHash common.Hash) []byte
func WriteBatches ¶
func WriteBatches(batches ...Batch) (int, error)
func WriteBatchesParallel ¶ added in v1.5.0
func WriteBatchesParallel(batches ...Batch) (int, error)
Types ¶
type Batch ¶
type Batch interface {
    KeyValueWriter

    // ValueSize retrieves the amount of data queued up for writing.
    ValueSize() int

    // Write flushes any accumulated data to disk.
    Write() error

    // Reset resets the Batch for reuse.
    Reset()

    // Replay replays the Batch contents.
    Replay(w KeyValueWriter) error
}
Batch is a write-only database that commits changes to its host database when Write is called. A Batch cannot be used concurrently.
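As an illustrative usage pattern (writeAll is a hypothetical helper, not part of this package), a caller can buffer writes into a Batch and flush whenever the queued data exceeds IdealBatchSize:

// writeAll accumulates key/value pairs into a batch and flushes them to the
// host database in roughly IdealBatchSize-sized chunks.
func writeAll(db Database, pairs map[string][]byte) error {
    batch := db.NewBatch()
    for k, v := range pairs {
        if err := batch.Put([]byte(k), v); err != nil {
            return err
        }
        // Flush early so a single batch never grows unbounded.
        if batch.ValueSize() >= IdealBatchSize {
            if err := batch.Write(); err != nil {
                return err
            }
            batch.Reset()
        }
    }
    return batch.Write() // write any remainder
}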
func NewStateTrieDBBatch ¶ added in v1.5.0
type Batcher ¶ added in v1.6.0
type Batcher interface {
    // NewBatch creates a write-only database that buffers changes to its host db
    // until a final write is called.
    NewBatch() Batch
}
Batcher wraps the NewBatch method of a backing data store.
type DBConfig ¶
type DBConfig struct {
    // General configurations for all types of DB.
    Dir    string
    DBType DBType

    SingleDB            bool // whether dbs (such as MiscDB, headerDB and etc) share one physical DB
    NumStateTrieShards  uint // the number of shards of state trie db
    ParallelDBWrite     bool
    OpenFilesLimit      int
    EnableDBPerfMetrics bool // If true, read and write performance will be logged

    // LevelDB related configurations.
    LevelDBCacheSize   int // LevelDBCacheSize = BlockCacheCapacity + WriteBuffer
    LevelDBCompression LevelDBCompressionType
    LevelDBBufferPool  bool

    // DynamoDB related configurations
    DynamoDBConfig *DynamoDBConfig
}
DBConfig handles database related configurations.
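For illustration, a LevelDB-backed configuration could be populated as below; the field values are arbitrary examples rather than recommended defaults.

cfg := &database.DBConfig{
    Dir:                "chaindata", // example path
    DBType:             database.LevelDB,
    SingleDB:           false,
    NumStateTrieShards: 4,
    ParallelDBWrite:    true,
    OpenFilesLimit:     1024,
    LevelDBCacheSize:   768, // split between block cache and write buffer
    LevelDBCompression: database.AllSnappyCompression,
    LevelDBBufferPool:  true,
}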
type DBEntryType ¶
type DBEntryType uint8
const (
    MiscDB DBEntryType = iota // Do not move MiscDB which has the path of others DB.
    BodyDB
    ReceiptsDB
    StateTrieDB
    StateTrieMigrationDB
    TxLookUpEntryDB
)
func (DBEntryType) String ¶ added in v1.6.0
func (et DBEntryType) String() string
type DBManager ¶
type DBManager interface {
    IsParallelDBWrite() bool
    IsSingle() bool
    InMigration() bool
    MigrationBlockNumber() uint64

    Close()
    NewBatch(dbType DBEntryType) Batch
    GetMemDB() *MemDB
    GetDBConfig() *DBConfig

    CreateMigrationDBAndSetStatus(blockNum uint64) error
    FinishStateMigration(succeed bool) chan struct{}
    GetStateTrieDB() Database
    GetStateTrieMigrationDB() Database
    GetMiscDB() Database

    // from accessors_chain.go
    ReadCanonicalHash(number uint64) common.Hash
    WriteCanonicalHash(hash common.Hash, number uint64)
    DeleteCanonicalHash(number uint64)

    ReadHeadHeaderHash() common.Hash
    WriteHeadHeaderHash(hash common.Hash)

    ReadHeadBlockHash() common.Hash
    WriteHeadBlockHash(hash common.Hash)

    ReadHeadFastBlockHash() common.Hash
    WriteHeadFastBlockHash(hash common.Hash)

    ReadFastTrieProgress() uint64
    WriteFastTrieProgress(count uint64)

    HasHeader(hash common.Hash, number uint64) bool
    ReadHeader(hash common.Hash, number uint64) *types.Header
    ReadHeaderRLP(hash common.Hash, number uint64) rlp.RawValue
    WriteHeader(header *types.Header)
    DeleteHeader(hash common.Hash, number uint64)
    ReadHeaderNumber(hash common.Hash) *uint64

    HasBody(hash common.Hash, number uint64) bool
    ReadBody(hash common.Hash, number uint64) *types.Body
    ReadBodyInCache(hash common.Hash) *types.Body
    ReadBodyRLP(hash common.Hash, number uint64) rlp.RawValue
    ReadBodyRLPByHash(hash common.Hash) rlp.RawValue
    WriteBody(hash common.Hash, number uint64, body *types.Body)
    PutBodyToBatch(batch Batch, hash common.Hash, number uint64, body *types.Body)
    WriteBodyRLP(hash common.Hash, number uint64, rlp rlp.RawValue)
    DeleteBody(hash common.Hash, number uint64)

    ReadTd(hash common.Hash, number uint64) *big.Int
    WriteTd(hash common.Hash, number uint64, td *big.Int)
    DeleteTd(hash common.Hash, number uint64)

    ReadReceipt(txHash common.Hash) (*types.Receipt, common.Hash, uint64, uint64)
    ReadReceipts(blockHash common.Hash, number uint64) types.Receipts
    ReadReceiptsByBlockHash(hash common.Hash) types.Receipts
    WriteReceipts(hash common.Hash, number uint64, receipts types.Receipts)
    PutReceiptsToBatch(batch Batch, hash common.Hash, number uint64, receipts types.Receipts)
    DeleteReceipts(hash common.Hash, number uint64)

    ReadBlock(hash common.Hash, number uint64) *types.Block
    ReadBlockByHash(hash common.Hash) *types.Block
    ReadBlockByNumber(number uint64) *types.Block
    HasBlock(hash common.Hash, number uint64) bool
    WriteBlock(block *types.Block)
    DeleteBlock(hash common.Hash, number uint64)

    FindCommonAncestor(a, b *types.Header) *types.Header

    ReadIstanbulSnapshot(hash common.Hash) ([]byte, error)
    WriteIstanbulSnapshot(hash common.Hash, blob []byte) error

    WriteMerkleProof(key, value []byte)

    // State Trie Database related operations
    ReadCachedTrieNode(hash common.Hash) ([]byte, error)
    ReadCachedTrieNodePreimage(secureKey []byte) ([]byte, error)
    ReadStateTrieNode(key []byte) ([]byte, error)
    HasStateTrieNode(key []byte) (bool, error)
    ReadPreimage(hash common.Hash) []byte

    // Read StateTrie from new DB
    ReadCachedTrieNodeFromNew(hash common.Hash) ([]byte, error)
    ReadCachedTrieNodePreimageFromNew(secureKey []byte) ([]byte, error)
    ReadStateTrieNodeFromNew(key []byte) ([]byte, error)
    HasStateTrieNodeFromNew(key []byte) (bool, error)
    ReadPreimageFromNew(hash common.Hash) []byte

    // Read StateTrie from old DB
    ReadCachedTrieNodeFromOld(hash common.Hash) ([]byte, error)
    ReadCachedTrieNodePreimageFromOld(secureKey []byte) ([]byte, error)
    ReadStateTrieNodeFromOld(key []byte) ([]byte, error)
    HasStateTrieNodeFromOld(key []byte) (bool, error)
    ReadPreimageFromOld(hash common.Hash) []byte

    WritePreimages(number uint64, preimages map[common.Hash][]byte)

    // from accessors_indexes.go
    ReadTxLookupEntry(hash common.Hash) (common.Hash, uint64, uint64)
    WriteTxLookupEntries(block *types.Block)
    WriteAndCacheTxLookupEntries(block *types.Block) error
    PutTxLookupEntriesToBatch(batch Batch, block *types.Block)
    DeleteTxLookupEntry(hash common.Hash)

    ReadTxAndLookupInfo(hash common.Hash) (*types.Transaction, common.Hash, uint64, uint64)

    NewSenderTxHashToTxHashBatch() Batch
    PutSenderTxHashToTxHashToBatch(batch Batch, senderTxHash, txHash common.Hash) error
    ReadTxHashFromSenderTxHash(senderTxHash common.Hash) common.Hash

    ReadBloomBits(bloomBitsKey []byte) ([]byte, error)
    WriteBloomBits(bloomBitsKey []byte, bits []byte) error

    ReadValidSections() ([]byte, error)
    WriteValidSections(encodedSections []byte)

    ReadSectionHead(encodedSection []byte) ([]byte, error)
    WriteSectionHead(encodedSection []byte, hash common.Hash)
    DeleteSectionHead(encodedSection []byte)

    // from accessors_metadata.go
    ReadDatabaseVersion() *uint64
    WriteDatabaseVersion(version uint64)

    ReadChainConfig(hash common.Hash) *params.ChainConfig
    WriteChainConfig(hash common.Hash, cfg *params.ChainConfig)

    // below operations are used in parent chain side, not child chain side.
    WriteChildChainTxHash(ccBlockHash common.Hash, ccTxHash common.Hash)
    ConvertChildChainBlockHashToParentChainTxHash(scBlockHash common.Hash) common.Hash

    WriteLastIndexedBlockNumber(blockNum uint64)
    GetLastIndexedBlockNumber() uint64

    // below operations are used in child chain side, not parent chain side.
    WriteAnchoredBlockNumber(blockNum uint64)
    ReadAnchoredBlockNumber() uint64

    WriteReceiptFromParentChain(blockHash common.Hash, receipt *types.Receipt)
    ReadReceiptFromParentChain(blockHash common.Hash) *types.Receipt

    WriteHandleTxHashFromRequestTxHash(rTx, hTx common.Hash)
    ReadHandleTxHashFromRequestTxHash(rTx common.Hash) common.Hash

    WriteParentOperatorFeePayer(feePayer common.Address)
    WriteChildOperatorFeePayer(feePayer common.Address)
    ReadParentOperatorFeePayer() common.Address
    ReadChildOperatorFeePayer() common.Address

    // cacheManager related functions.
    ClearHeaderChainCache()
    ClearBlockChainCache()
    ReadTxAndLookupInfoInCache(hash common.Hash) (*types.Transaction, common.Hash, uint64, uint64)
    ReadBlockReceiptsInCache(blockHash common.Hash) types.Receipts
    ReadTxReceiptInCache(txHash common.Hash) *types.Receipt

    // snapshot in clique(ConsensusClique) consensus
    WriteCliqueSnapshot(snapshotBlockHash common.Hash, encodedSnapshot []byte) error
    ReadCliqueSnapshot(snapshotBlockHash common.Hash) ([]byte, error)

    // Governance related functions
    WriteGovernance(data map[string]interface{}, num uint64) error
    WriteGovernanceIdx(num uint64) error
    ReadGovernance(num uint64) (map[string]interface{}, error)
    ReadRecentGovernanceIdx(count int) ([]uint64, error)
    ReadGovernanceAtNumber(num uint64, epoch uint64) (uint64, map[string]interface{}, error)
    WriteGovernanceState(b []byte) error
    ReadGovernanceState() ([]byte, error)

    // StakingInfo related functions
    ReadStakingInfo(blockNum uint64) ([]byte, error)
    WriteStakingInfo(blockNum uint64, stakingInfo []byte) error

    // DB migration related function
    StartDBMigration(DBManager) error

    // ChainDataFetcher checkpoint function
    WriteChainDataFetcherCheckpoint(checkpoint uint64) error
    ReadChainDataFetcherCheckpoint() (uint64, error)
    // contains filtered or unexported methods
}
func NewDBManager ¶
NewDBManager returns a DBManager interface. If SingleDB is false, each Database has its own physical DB; otherwise, all Databases share one common DB.
func NewLevelDBManagerForTest ¶
NewLevelDBManagerForTest returns a DBManager consisting only of LevelDB. It also accepts a LevelDB option, opt.Options.
func NewMemoryDBManager ¶
func NewMemoryDBManager() DBManager
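As a quick smoke test, an in-memory manager can be exercised without touching disk. The hash and block number below are arbitrary example values, and common.HexToHash is assumed to come from Klaytn's common package.

dbm := database.NewMemoryDBManager()
defer dbm.Close()

hash := common.HexToHash("0x01") // example hash
dbm.WriteCanonicalHash(hash, 1)

if got := dbm.ReadCanonicalHash(1); got != hash {
    // unexpected value; handle the mismatch
}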
type Database ¶
type Database interface {
    KeyValueWriter

    Get(key []byte) ([]byte, error)
    Has(key []byte) (bool, error)
    Close()
    NewBatch() Batch
    Type() DBType
    Meter(prefix string)
    Iteratee
}
Database wraps all database operations. All methods are safe for concurrent use.
func NewDynamoDB ¶ added in v1.5.2
func NewDynamoDB(config *DynamoDBConfig) (Database, error)
NewDynamoDB creates either dynamoDB or dynamoDBReadOnly depending on config.ReadOnly.
type DynamoDBConfig ¶ added in v1.5.2
type DynamoDBConfig struct {
    TableName          string
    Region             string // AWS region
    Endpoint           string // Where DynamoDB reside (Used to specify the localstack endpoint on the test)
    S3Endpoint         string // Where S3 reside
    IsProvisioned      bool   // Billing mode
    ReadCapacityUnits  int64  // read capacity when provisioned
    WriteCapacityUnits int64  // write capacity when provisioned
    ReadOnly           bool   // disables write
    PerfCheck          bool
}
func GetDefaultDynamoDBConfig ¶ added in v1.5.2
func GetDefaultDynamoDBConfig() *DynamoDBConfig
GetDefaultDynamoDBConfig returns a DynamoDB config for testing against the actual AWS DynamoDB.
If you use this config, you will be charged for what you use. You need to set AWS credentials to access DynamoDB:
$ export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
$ export AWS_SECRET_ACCESS_KEY=YOUR_SECRET
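An illustrative way to obtain a DynamoDB-backed Database is to start from the default config and override what you need; the table name here is an example.

cfg := database.GetDefaultDynamoDBConfig()
cfg.TableName = "klaytn-example" // example table name

db, err := database.NewDynamoDB(cfg)
if err != nil {
    // handle error, e.g. missing AWS credentials
}
defer db.Close()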
type DynamoData ¶ added in v1.5.2
type Iteratee ¶ added in v1.5.0
type Iteratee interface {
    // NewIterator creates a binary-alphabetical iterator over a subset
    // of database content with a particular key prefix, starting at a particular
    // initial key (or after, if it does not exist).
    //
    // Note: This method assumes that the prefix is NOT part of the start, so there's
    // no need for the caller to prepend the prefix to the start
    NewIterator(prefix []byte, start []byte) Iterator
}
Iteratee wraps the NewIterator methods of a backing data store.
type Iterator ¶ added in v1.5.0
type Iterator interface {
    // Next moves the iterator to the next key/value pair. It returns whether the
    // iterator is exhausted.
    Next() bool

    // Error returns any accumulated error. Exhausting all the key/value pairs
    // is not considered to be an error.
    Error() error

    // Key returns the key of the current key/value pair, or nil if done. The caller
    // should not modify the contents of the returned slice, and its contents may
    // change on the next call to Next.
    Key() []byte

    // Value returns the value of the current key/value pair, or nil if done. The
    // caller should not modify the contents of the returned slice, and its contents
    // may change on the next call to Next.
    Value() []byte

    // Release releases associated resources. Release should always succeed and can
    // be called multiple times without causing error.
    Release()
}
Iterator iterates over a database's key/value pairs in ascending key order.
When it encounters an error, any seek will return false and will yield no key/value pairs. The error can be queried by calling the Error method. Calling Release is still necessary.
An iterator must be released after use, but it is not necessary to read an iterator until exhaustion. An iterator is not safe for concurrent use, but it is safe to use multiple iterators concurrently.
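The usual drain-then-check pattern, given any db value implementing Iteratee (the prefix below is an example), looks like this:

it := db.NewIterator([]byte("p-"), nil) // iterate all keys under an example prefix
defer it.Release()

for it.Next() {
    k, v := it.Key(), it.Value()
    _, _ = k, v // consume the pair; copy the slices if they must outlive Next
}
if err := it.Error(); err != nil {
    // handle iteration error
}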
type KeyValueWriter ¶ added in v1.6.0
type KeyValueWriter interface {
    // Put inserts the given value into the key-value data store.
    Put(key []byte, value []byte) error

    // Delete removes the key from the key-value data store.
    Delete(key []byte) error
}
KeyValueWriter wraps the Put method of a backing data store.
type LevelDBCompressionType ¶
type LevelDBCompressionType uint8
const (
    AllNoCompression LevelDBCompressionType = iota
    ReceiptOnlySnappyCompression
    StateTrieOnlyNoCompression
    AllSnappyCompression
)
type MemDB ¶
type MemDB struct {
// contains filtered or unexported fields
}
This is a test memory database. Do not use it for production; it does not get persisted.
func NewMemDBWithCap ¶
func (*MemDB) Close ¶
func (db *MemDB) Close()
Close deallocates the internal map and ensures any subsequent data access operation fails with an error.
func (*MemDB) Compact ¶ added in v1.6.0
Compact is not supported on a memory database, but there's no need either as a memory database doesn't waste space anyway.
func (*MemDB) Len ¶
Len returns the number of entries currently present in the memory database.
Note, this method is only used for testing (i.e. not public in general) and does not have explicit checks for closed-ness to allow simpler testing code.
func (*MemDB) NewIterator ¶ added in v1.5.0
NewIterator creates a binary-alphabetical iterator over a subset of database content with a particular key prefix, starting at a particular initial key (or after, if it does not exist).
type TransactionLookup ¶
type TransactionLookup struct {
    Tx *types.Transaction
    *TxLookupEntry
}