Documentation ¶
Overview ¶
Package database implements the various types of databases used in Klaytn. This package is used to read data from, and write data to, the persistent layer.
Overview of database package ¶
DBManager is the interface used by the consumers of the database package, and databaseManager is its implementation. It contains a cacheManager and a list of Database interfaces. cacheManager caches data stored in the persistent layer in order to reduce direct access to that layer. Database is the interface for persistent-layer implementations. Currently there are four implementations: levelDB, memDB, badgerDB and partitionedDB.
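As a quick illustration of how these pieces fit together, the sketch below builds a memory-backed DBManager via NewMemoryDBManager (listed in the index) and round-trips a canonical hash through it. The import paths are assumptions based on the Klaytn repository layout and are not confirmed by this page.

package main

import (
	"fmt"

	"github.com/klaytn/klaytn/common"           // assumed import path
	"github.com/klaytn/klaytn/storage/database" // assumed import path
)

func main() {
	// A memory-backed DBManager touches no files, which makes it
	// convenient for tests and quick experiments.
	dbm := database.NewMemoryDBManager()
	defer dbm.Close()

	// Round-trip a canonical hash through the manager's accessors.
	hash := common.HexToHash("0x1234")
	dbm.WriteCanonicalHash(hash, 10)
	fmt.Println(dbm.ReadCanonicalHash(10) == hash) // true
}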
Source Files ¶
- badger_database.go : implementation of badgerDB, which wraps github.com/dgraph-io/badger
- cache_manager.go : implementation of cacheManager, which manages cache layer over persistent layer
- db_manager.go : contains DBManager and databaseManager
- interface.go : interfaces used outside database package
- leveldb_database.go : implementation of levelDB, which wraps github.com/syndtr/goleveldb
- memory_database.go : implementation of MemDB, which wraps Go's native map structure
- metrics.go : metrics used in database package, mostly related to cacheManager
- partitioned_database.go : implementation of partitionedDB, which wraps a list of Database interface
- schema.go : prefixes and suffixes for database keys and database key generating functions
Index ¶
- Constants
- Variables
- func BloomBitsKey(bit uint, section uint64, hash common.Hash) []byte
- func GetDefaultLevelDBOption() *opt.Options
- func GetOpenFilesLimit() int
- func IsPow2(num uint) bool
- func NewBadgerDB(dbDir string) (*badgerDB, error)
- func NewLevelDB(dbc *DBConfig, entryType DBEntryType) (*levelDB, error)
- func NewLevelDBWithOption(dbPath string, ldbOption *opt.Options) (*levelDB, error)
- func PutAndWriteBatchesOverThreshold(batch Batch, key, val []byte) error
- func SenderTxHashToTxHashKey(senderTxHash common.Hash) []byte
- func TxLookupKey(hash common.Hash) []byte
- func WriteBatches(batches ...Batch) (int, error)
- func WriteBatchesOverThreshold(batches ...Batch) (int, error)
- type Batch
- type DBConfig
- type DBEntryType
- type DBManager
- type DBType
- type Database
- type LevelDBCompressionType
- type MemDB
- func (db *MemDB) Close()
- func (db *MemDB) Delete(key []byte) error
- func (db *MemDB) Get(key []byte) ([]byte, error)
- func (db *MemDB) Has(key []byte) (bool, error)
- func (db *MemDB) Keys() [][]byte
- func (db *MemDB) Len() int
- func (db *MemDB) Meter(prefix string)
- func (db *MemDB) NewBatch() Batch
- func (db *MemDB) Put(key []byte, value []byte) error
- func (db *MemDB) Type() DBType
- type Putter
- type TransactionLookup
- type TxLookupEntry
Constants ¶
const IdealBatchSize = 100 * 1024
const (
	MinOpenFilesCacheCapacity = 16
)
Variables ¶
var (
	// Chain index prefixes (use `i` + single byte to avoid mixing data types).
	BloomBitsIndexPrefix = []byte("iB") // BloomBitsIndexPrefix is the data table of a chain indexer to track its progress
)
The fields below define the low-level database schema prefixing.
var OpenFileLimit = 64
Functions ¶
func BloomBitsKey ¶
func BloomBitsKey(bit uint, section uint64, hash common.Hash) []byte

bloomBitsKey = bloomBitsPrefix + bit (uint16 big endian) + section (uint64 big endian) + hash
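The layout above can be reproduced mechanically. The following sketch is illustrative only: the real bloomBitsPrefix is unexported, so the "B" prefix byte here is an assumption, as is the common package import path.

package main

import (
	"encoding/binary"
	"fmt"

	"github.com/klaytn/klaytn/common" // assumed import path
)

// illustrativeBloomBitsKey mirrors the documented layout:
// prefix + bit (uint16 big endian) + section (uint64 big endian) + hash.
func illustrativeBloomBitsKey(bit uint, section uint64, hash common.Hash) []byte {
	key := append(append([]byte("B"), make([]byte, 10)...), hash.Bytes()...)
	binary.BigEndian.PutUint16(key[1:], uint16(bit)) // bytes 1..2
	binary.BigEndian.PutUint64(key[3:], section)     // bytes 3..10
	return key
}

func main() {
	fmt.Printf("%x\n", illustrativeBloomBitsKey(1, 2, common.HexToHash("0xabcd")))
}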
func GetDefaultLevelDBOption ¶
func GetDefaultLevelDBOption() *opt.Options

GetDefaultLevelDBOption returns the default LevelDB option, copied from defaultLevelDBOption. defaultLevelDBOption has its fields set to minimum values.
func GetOpenFilesLimit ¶
func GetOpenFilesLimit() int
GetOpenFilesLimit raises the number of allowed file handles per process for Klaytn and returns half of the allowance to assign to the database.
func NewBadgerDB ¶

func NewBadgerDB(dbDir string) (*badgerDB, error)
func NewLevelDB ¶
func NewLevelDB(dbc *DBConfig, entryType DBEntryType) (*levelDB, error)
func NewLevelDBWithOption ¶
func NewLevelDBWithOption(dbPath string, ldbOption *opt.Options) (*levelDB, error)

NewLevelDBWithOption explicitly receives a LevelDB option to construct a levelDB object.
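For instance, one might start from GetDefaultLevelDBOption and tune a few goleveldb knobs before handing them over. The field values and import path below are illustrative assumptions; the returned levelDB is unexported but usable through its exported methods.

package main

import (
	"log"

	"github.com/klaytn/klaytn/storage/database" // assumed import path
)

func main() {
	ldbOption := database.GetDefaultLevelDBOption()
	// Standard goleveldb options; the values here are illustrative only.
	ldbOption.OpenFilesCacheCapacity = database.GetOpenFilesLimit()
	ldbOption.WriteBuffer = 64 * 1024 * 1024 // 64 MiB

	ldb, err := database.NewLevelDBWithOption("/tmp/klaytn-ldb", ldbOption)
	if err != nil {
		log.Fatal(err)
	}
	defer ldb.Close()
}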
func SenderTxHashToTxHashKey ¶

func SenderTxHashToTxHashKey(senderTxHash common.Hash) []byte
func WriteBatches ¶

func WriteBatches(batches ...Batch) (int, error)
Types ¶
type Batch ¶
type Batch interface {
	Putter
	ValueSize() int // amount of data in the batch
	Write() error
	// Reset resets the batch for reuse
	Reset()
}
Batch is a write-only database that commits changes to its host database when Write is called. Batch cannot be used concurrently.
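A common pattern, and roughly what PutAndWriteBatchesOverThreshold automates, is to accumulate puts and flush whenever ValueSize crosses IdealBatchSize. A hedged sketch, assuming the package is imported as database:

import "github.com/klaytn/klaytn/storage/database" // assumed import path

// writeAll streams key/value pairs through a batch, committing whenever the
// accumulated payload crosses IdealBatchSize. Illustrative helper only, not
// part of the package API.
func writeAll(db database.Database, pairs map[string][]byte) error {
	batch := db.NewBatch()
	for k, v := range pairs {
		if err := batch.Put([]byte(k), v); err != nil {
			return err
		}
		if batch.ValueSize() >= database.IdealBatchSize {
			if err := batch.Write(); err != nil {
				return err
			}
			batch.Reset() // reuse the batch after committing
		}
	}
	return batch.Write() // commit the remainder
}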
type DBConfig ¶
type DBConfig struct {
	// General configurations for all types of DB.
	Dir                    string
	DBType                 DBType
	Partitioned            bool
	NumStateTriePartitions uint
	ParallelDBWrite        bool
	OpenFilesLimit         int

	// LevelDB related configurations.
	LevelDBCacheSize   int // LevelDBCacheSize = BlockCacheCapacity + WriteBuffer
	LevelDBCompression LevelDBCompressionType
	LevelDBBufferPool  bool
}
DBConfig handles database related configurations.
type DBEntryType ¶
type DBEntryType uint8
const (
	BodyDB DBEntryType = iota
	ReceiptsDB
	StateTrieDB
	TxLookUpEntryDB
	MiscDB
)
type DBManager ¶
type DBManager interface {
	IsParallelDBWrite() bool
	IsPartitioned() bool
	InMigration() bool

	Close()
	NewBatch(dbType DBEntryType) Batch
	GetMemDB() *MemDB
	GetDBConfig() *DBConfig
	SetStateTrieMigrationDB(blockNum uint64)

	// from accessors_chain.go
	ReadCanonicalHash(number uint64) common.Hash
	WriteCanonicalHash(hash common.Hash, number uint64)
	DeleteCanonicalHash(number uint64)

	ReadHeadHeaderHash() common.Hash
	WriteHeadHeaderHash(hash common.Hash)

	ReadHeadBlockHash() common.Hash
	WriteHeadBlockHash(hash common.Hash)

	ReadHeadFastBlockHash() common.Hash
	WriteHeadFastBlockHash(hash common.Hash)

	ReadFastTrieProgress() uint64
	WriteFastTrieProgress(count uint64)

	HasHeader(hash common.Hash, number uint64) bool
	ReadHeader(hash common.Hash, number uint64) *types.Header
	ReadHeaderRLP(hash common.Hash, number uint64) rlp.RawValue
	WriteHeader(header *types.Header)
	DeleteHeader(hash common.Hash, number uint64)
	ReadHeaderNumber(hash common.Hash) *uint64

	HasBody(hash common.Hash, number uint64) bool
	ReadBody(hash common.Hash, number uint64) *types.Body
	ReadBodyInCache(hash common.Hash) *types.Body
	ReadBodyRLP(hash common.Hash, number uint64) rlp.RawValue
	ReadBodyRLPByHash(hash common.Hash) rlp.RawValue
	WriteBody(hash common.Hash, number uint64, body *types.Body)
	PutBodyToBatch(batch Batch, hash common.Hash, number uint64, body *types.Body)
	WriteBodyRLP(hash common.Hash, number uint64, rlp rlp.RawValue)
	DeleteBody(hash common.Hash, number uint64)

	ReadTd(hash common.Hash, number uint64) *big.Int
	WriteTd(hash common.Hash, number uint64, td *big.Int)
	DeleteTd(hash common.Hash, number uint64)

	ReadReceipt(txHash common.Hash) (*types.Receipt, common.Hash, uint64, uint64)
	ReadReceipts(blockHash common.Hash, number uint64) types.Receipts
	ReadReceiptsByBlockHash(hash common.Hash) types.Receipts
	WriteReceipts(hash common.Hash, number uint64, receipts types.Receipts)
	PutReceiptsToBatch(batch Batch, hash common.Hash, number uint64, receipts types.Receipts)
	DeleteReceipts(hash common.Hash, number uint64)

	ReadBlock(hash common.Hash, number uint64) *types.Block
	ReadBlockByHash(hash common.Hash) *types.Block
	ReadBlockByNumber(number uint64) *types.Block
	HasBlock(hash common.Hash, number uint64) bool
	WriteBlock(block *types.Block)
	DeleteBlock(hash common.Hash, number uint64)

	FindCommonAncestor(a, b *types.Header) *types.Header

	ReadIstanbulSnapshot(hash common.Hash) ([]byte, error)
	WriteIstanbulSnapshot(hash common.Hash, blob []byte) error

	WriteMerkleProof(key, value []byte)

	// State Trie Database related operations
	ReadCachedTrieNode(hash common.Hash) ([]byte, error)
	ReadCachedTrieNodePreimage(secureKey []byte) ([]byte, error)
	ReadStateTrieNode(key []byte) ([]byte, error)
	HasStateTrieNode(key []byte) (bool, error)
	ReadPreimage(hash common.Hash) []byte

	ReadCachedTrieNodeFromOld(hash common.Hash) ([]byte, error)
	ReadCachedTrieNodePreimageFromOld(secureKey []byte) ([]byte, error)
	ReadStateTrieNodeFromOld(key []byte) ([]byte, error)
	HasStateTrieNodeFromOld(key []byte) (bool, error)
	ReadPreimageFromOld(hash common.Hash) []byte

	WritePreimages(number uint64, preimages map[common.Hash][]byte)

	// from accessors_indexes.go
	ReadTxLookupEntry(hash common.Hash) (common.Hash, uint64, uint64)
	WriteTxLookupEntries(block *types.Block)
	WriteAndCacheTxLookupEntries(block *types.Block) error
	PutTxLookupEntriesToBatch(batch Batch, block *types.Block)
	DeleteTxLookupEntry(hash common.Hash)

	ReadTxAndLookupInfo(hash common.Hash) (*types.Transaction, common.Hash, uint64, uint64)

	NewSenderTxHashToTxHashBatch() Batch
	PutSenderTxHashToTxHashToBatch(batch Batch, senderTxHash, txHash common.Hash) error
	ReadTxHashFromSenderTxHash(senderTxHash common.Hash) common.Hash

	ReadBloomBits(bloomBitsKey []byte) ([]byte, error)
	WriteBloomBits(bloomBitsKey []byte, bits []byte) error

	ReadValidSections() ([]byte, error)
	WriteValidSections(encodedSections []byte)

	ReadSectionHead(encodedSection []byte) ([]byte, error)
	WriteSectionHead(encodedSection []byte, hash common.Hash)
	DeleteSectionHead(encodedSection []byte)

	// from accessors_metadata.go
	ReadDatabaseVersion() *uint64
	WriteDatabaseVersion(version uint64)

	ReadChainConfig(hash common.Hash) *params.ChainConfig
	WriteChainConfig(hash common.Hash, cfg *params.ChainConfig)

	// below operations are used in parent chain side, not child chain side.
	WriteChildChainTxHash(ccBlockHash common.Hash, ccTxHash common.Hash)
	ConvertChildChainBlockHashToParentChainTxHash(scBlockHash common.Hash) common.Hash

	WriteLastIndexedBlockNumber(blockNum uint64)
	GetLastIndexedBlockNumber() uint64

	// below operations are used in child chain side, not parent chain side.
	WriteAnchoredBlockNumber(blockNum uint64)
	ReadAnchoredBlockNumber() uint64

	WriteReceiptFromParentChain(blockHash common.Hash, receipt *types.Receipt)
	ReadReceiptFromParentChain(blockHash common.Hash) *types.Receipt

	WriteHandleTxHashFromRequestTxHash(rTx, hTx common.Hash)
	ReadHandleTxHashFromRequestTxHash(rTx common.Hash) common.Hash

	WriteParentOperatorFeePayer(feePayer common.Address)
	WriteChildOperatorFeePayer(feePayer common.Address)
	ReadParentOperatorFeePayer() common.Address
	ReadChildOperatorFeePayer() common.Address

	// cacheManager related functions.
	ClearHeaderChainCache()
	ClearBlockChainCache()
	ReadTxAndLookupInfoInCache(hash common.Hash) (*types.Transaction, common.Hash, uint64, uint64)
	ReadBlockReceiptsInCache(blockHash common.Hash) types.Receipts
	ReadTxReceiptInCache(txHash common.Hash) *types.Receipt

	// snapshot in clique(ConsensusClique) consensus
	WriteCliqueSnapshot(snapshotBlockHash common.Hash, encodedSnapshot []byte) error
	ReadCliqueSnapshot(snapshotBlockHash common.Hash) ([]byte, error)

	// Governance related functions
	WriteGovernance(data map[string]interface{}, num uint64) error
	WriteGovernanceIdx(num uint64) error
	ReadGovernance(num uint64) (map[string]interface{}, error)
	ReadRecentGovernanceIdx(count int) ([]uint64, error)
	ReadGovernanceAtNumber(num uint64, epoch uint64) (uint64, map[string]interface{}, error)
	WriteGovernanceState(b []byte) error
	ReadGovernanceState() ([]byte, error)
}
func NewDBManager ¶
NewDBManager returns a DBManager interface. If Partitioned is true, each Database will have its own LevelDB; otherwise, all Databases share one common LevelDB.
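Putting DBConfig and NewDBManager together, a sketch might look like the following. The DBType constant name (LevelDB) and the NewDBManager signature (taking a *DBConfig) are assumptions not confirmed by this page; the field values are illustrative only.

dbc := &database.DBConfig{
	Dir:                    "chaindata",
	DBType:                 database.LevelDB, // assumed constant name
	Partitioned:            true,             // one LevelDB per DBEntryType
	NumStateTriePartitions: 4,
	ParallelDBWrite:        true,
	OpenFilesLimit:         database.GetOpenFilesLimit(),

	LevelDBCacheSize:   768, // split between block cache and write buffer
	LevelDBCompression: database.AllSnappyCompression,
	LevelDBBufferPool:  true,
}
dbm := database.NewDBManager(dbc) // assumed signature: NewDBManager(*DBConfig) DBManager
defer dbm.Close()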
func NewLevelDBManagerForTest ¶
NewLevelDBManagerForTest returns a DBManager consisting only of LevelDB. It also accepts a LevelDB option, opt.Options.
func NewMemoryDBManager ¶
func NewMemoryDBManager() DBManager
type Database ¶
type Database interface {
	Putter
	Get(key []byte) ([]byte, error)
	Has(key []byte) (bool, error)
	Delete(key []byte) error
	Close()
	NewBatch() Batch
	Type() DBType
	Meter(prefix string)
}
Database wraps all database operations. All methods are safe for concurrent use.
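Because every backend satisfies this one interface, helpers can be written once against Database. For example, a hedged sketch that copies one entry between two arbitrary backends:

// copyKey moves one entry from src to dst, whatever their concrete types
// (levelDB, memDB, badgerDB, or partitionedDB). Illustrative helper only.
func copyKey(dst, src database.Database, key []byte) error {
	ok, err := src.Has(key)
	if err != nil || !ok {
		return err // nil if the key simply does not exist
	}
	val, err := src.Get(key)
	if err != nil {
		return err
	}
	return dst.Put(key, val)
}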
type LevelDBCompressionType ¶
type LevelDBCompressionType uint8
const (
	AllNoCompression LevelDBCompressionType = iota
	ReceiptOnlySnappyCompression
	StateTrieOnlyNoCompression
	AllSnappyCompression
)
type MemDB ¶
type MemDB struct {
// contains filtered or unexported fields
}
This is a test memory database. Do not use it in production; it does not get persisted.
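In tests, the usual route to a MemDB is through a memory-backed manager. A small sketch, assuming the package is imported as database:

dbm := database.NewMemoryDBManager()
mem := dbm.GetMemDB() // the *MemDB backing the manager

_ = mem.Put([]byte("k"), []byte("v"))
ok, _ := mem.Has([]byte("k"))
fmt.Println(ok, mem.Len()) // true 1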
func NewMemDBWithCap ¶
type Putter ¶
Putter wraps the database write operation supported by both batches and regular databases.
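The interface body is not shown on this page. Given that both Database and Batch embed it and that MemDB's Put has the signature listed in the index, it presumably reduces to a single method, along these lines:

// Presumed shape of Putter (an assumption inferred from MemDB.Put above).
type Putter interface {
	Put(key []byte, value []byte) error
}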
type TransactionLookup ¶
type TransactionLookup struct {
	Tx *types.Transaction
	*TxLookupEntry
}