merkledb

package v1.10.12-rc.7
Published: Oct 10, 2023 License: BSD-3-Clause Imports: 34 Imported by: 26

README

Path-Based Merkleized Radix Trie

TODOs

  • Remove special casing around the root node from the physical structure of the hashed tree.
  • Analyze performance of using database snapshots rather than in-memory history
  • Improve intermediate node regeneration after ungraceful shutdown by reusing successfully written subtrees

Introduction

The Merkle Trie is a data structure that allows efficient and secure verification of its contents. It is a combination of a Merkle Tree and a Radix Trie.

The trie contains Merkle Nodes, which store key/value and children information.

Each Merkle Node represents a key path into the trie. It stores the key, the value (if one exists), its ID, and the IDs of its child nodes. The children have keys that contain the current node's key path as a prefix, and the index of each child indicates the next nibble in that child's key. For example, if we have two nodes, Node 1 with key path 0x91A and Node 2 with key path 0x91A4, Node 2 is stored in index 0x4 of Node 1's children (since 0x4 is the first nibble after the common prefix).

To reduce the depth of nodes in the trie, a Merkle Node utilizes path compression. Instead of having a long chain of nodes each containing only a single nibble of the key, we can "compress" the path by recording additional key information with each of a node's children. For example, if we have three nodes, Node 1 with key path 0x91A, Node 2 with key path 0x91A4, and Node 3 with key path 0x91A5132, then Node 1 has a key of 0x91A. Node 2 is stored at index 0x4 of Node 1's children since 4 is the next nibble in Node 2's key after skipping the common nibbles from Node 1's key. Node 3 is stored at index 0x5 of Node 1's children. Rather than have extra nodes for the remainder of Node 3's key, we instead store the rest of the key (132) in Node 1's children info.

+-----------------------------------+
| Merkle Node                       |
|                                   |
| ID: 0x0131                        |  an id representing the current node, derived from the node's value and all children ids
| Key: 0x91                         |  prefix of the key path, representing the location of the node in the trie
| Value: 0x00                       |  the value, if one exists, that is stored at the key path (pathPrefix + compressedPath)
| Children:                         |  a map of children node ids for any nodes in the trie that have this node's key path as a prefix
|   0: [:0x00542F]                  |  child 0 represents a node with key 0x910 with ID 0x00542F
|   1: [0x432:0xA0561C]             |  child 1 represents a node with key 0x911432 with ID 0xA0561C
|   ...                             |
|   15: [0x9A67B:0x02FB093]         |  child 15 represents a node with key 0x91F9A67B with ID 0x02FB093
+-----------------------------------+
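
Below is a minimal Go sketch of the node shape implied by the diagram above. All names here are hypothetical illustrations; the package's real node and child types differ in detail (for example, keys are stored as nibble paths rather than plain byte slices).

package main

import "fmt"

// childEntry mirrors the per-child info in the diagram: the compressed path
// (the remaining key tokens between this node and the child) and the child's ID.
type childEntry struct {
	compressedPath []byte
	id             [32]byte
}

// merkleNode mirrors the diagram: an ID, a key-path prefix, an optional value,
// and children keyed by the next nibble after this node's key.
type merkleNode struct {
	id       [32]byte
	key      []byte // key-path prefix locating this node in the trie
	value    []byte // nil if no value is stored at this key path
	children map[byte]childEntry
}

func main() {
	n := merkleNode{
		key:   []byte{0x9, 0x1}, // nibbles of 0x91
		value: []byte{0x00},
		children: map[byte]childEntry{
			0x0: {},
			0x1: {compressedPath: []byte{0x4, 0x3, 0x2}},
		},
	}
	fmt.Println("children:", len(n.children))
}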

Serialization

Node

Nodes are persisted in an underlying database. In order to persist nodes, we must first serialize them. Serialization is done by the encoder interface defined in codec.go.

The node serialization format is as follows:

+----------------------------------------------------+
| Value existence flag (1 byte)                      |
+----------------------------------------------------+
| Value length (varint) (optional)                   |
+----------------------------------------------------+
| Value (variable length bytes) (optional)           |
+----------------------------------------------------+
| Number of children (varint)                        |
+----------------------------------------------------+
| Child index (varint)                               |
+----------------------------------------------------+
| Child compressed path length (varint)              |
+----------------------------------------------------+
| Child compressed path (variable length bytes)      |
+----------------------------------------------------+
| Child ID (32 bytes)                                |
+----------------------------------------------------+
| Child has value (1 byte)                           |
+----------------------------------------------------+
| Child index (varint)                               |
+----------------------------------------------------+
| Child compressed path length (varint)              |
+----------------------------------------------------+
| Child compressed path (variable length bytes)      |
+----------------------------------------------------+
| Child ID (32 bytes)                                |
+----------------------------------------------------+
| Child has value (1 byte)                           |
+----------------------------------------------------+
|...                                                 |
+----------------------------------------------------+

Where:

  • Value existence flag is 1 if this node has a value, otherwise 0.
  • Value length is the length of the value, if it exists (i.e. if the Value existence flag is 1). Otherwise not serialized.
  • Value is the value, if it exists (i.e. if the Value existence flag is 1). Otherwise not serialized.
  • Number of children is the number of children this node has.
  • Child index is the index of a child node within the list of the node's children.
  • Child compressed path length is the length of the child node's compressed path.
  • Child compressed path is the child node's compressed path.
  • Child ID is the child node's ID.
  • Child has value indicates if that child has a value.

For each child of the node, we have an additional:

+----------------------------------------------------+
| Child index (varint)                               |
+----------------------------------------------------+
| Child compressed path length (varint)              |
+----------------------------------------------------+
| Child compressed path (variable length bytes)      |
+----------------------------------------------------+
| Child ID (32 bytes)                                |
+----------------------------------------------------+
| Child has value (1 byte)                           |
+----------------------------------------------------+

Note that the Child index values are not necessarily sequential. For example, if a node has 3 children, the Child index values could be 0, 2, and 15. However, the Child index values must be strictly increasing; for example, they cannot be 0, 0, 1 (a repeated index) or 1, 0 (decreasing).

Since a node can have up to 16 children, there can be up to 16 such blocks of children data.
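
The following is a minimal sketch of this layout. It writes the varints with the standard library as described in "Encoding Varints and Bytes" below; the real encoder lives in codec.go and may differ in detail (for example in the exact varint flavor and in how compressed paths are packed into bytes). Type and function names are hypothetical.

package main

import (
	"encoding/binary"
	"fmt"
)

// child holds the per-child fields from the format above (hypothetical type).
type child struct {
	index          byte
	compressedPath []byte // the child's compressed path, already packed into bytes
	id             [32]byte
	hasValue       bool
}

// encodeNode writes the fields in the order shown in the format above.
// Children must be supplied in strictly increasing index order.
func encodeNode(value []byte, hasValue bool, children []child) []byte {
	buf := make([]byte, 0, 64)
	if hasValue {
		buf = append(buf, 1) // value existence flag
		buf = binary.AppendUvarint(buf, uint64(len(value)))
		buf = append(buf, value...)
	} else {
		buf = append(buf, 0)
	}
	buf = binary.AppendUvarint(buf, uint64(len(children)))
	for _, c := range children {
		buf = binary.AppendUvarint(buf, uint64(c.index))
		buf = binary.AppendUvarint(buf, uint64(len(c.compressedPath)))
		buf = append(buf, c.compressedPath...)
		buf = append(buf, c.id[:]...)
		if c.hasValue {
			buf = append(buf, 1)
		} else {
			buf = append(buf, 0)
		}
	}
	return buf
}

func main() {
	// A leaf-like node with value 0x02 and no children.
	fmt.Printf("%x\n", encodeNode([]byte{0x02}, true, nil))
}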

Example

Let's take a look at an example node.

Its byte representation (in hex) is: 0x01020204000210579EB3718A7E437D2DDCE931AC7CC05A0BC695A9C2084F5DF12FB96AD0FA32660E06FFF09845893C4F9D92C4E097FCF2589BC9D6882B1F18D1C2FC91D7DF1D3FCBDB4238

The node's key is empty (it's the root) and its value is 0x02. It has two children. The first is at child index 0, has compressed path 0x01, and ID (in hex) 0x579eb3718a7e437d2ddce931ac7cc05a0bc695a9c2084f5df12fb96ad0fa3266. The second is at child index 14, has compressed path 0x0F0F0F, and ID (in hex) 0x9845893c4f9d92c4e097fcf2589bc9d6882b1f18d1c2fc91d7df1d3fcbdb4238.

+--------------------------------------------------------------------+
| Value existence flag (1 byte)                                      |
| 0x01                                                               |
+--------------------------------------------------------------------+
| Value length (varint) (optional)                                   |
| 0x02                                                               |
+--------------------------------------------------------------------+
| Value (variable length bytes) (optional)                           |
| 0x02                                                               |
+--------------------------------------------------------------------+
| Number of children (varint)                                        |
| 0x04                                                               |
+--------------------------------------------------------------------+
| Child index (varint)                                               |
| 0x00                                                               |
+--------------------------------------------------------------------+
| Child compressed path length (varint)                              |
| 0x02                                                               |
+--------------------------------------------------------------------+
| Child compressed path (variable length bytes)                      |
| 0x10                                                               |
+--------------------------------------------------------------------+
| Child ID (32 bytes)                                                |
| 0x579EB3718A7E437D2DDCE931AC7CC05A0BC695A9C2084F5DF12FB96AD0FA3266 |
+--------------------------------------------------------------------+
| Child index (varint)                                               |
| 0x0E                                                               |
+--------------------------------------------------------------------+
| Child compressed path length (varint)                              |
| 0x06                                                               |
+--------------------------------------------------------------------+
| Child compressed path (variable length bytes)                      |
| 0xFFF0                                                             |
+--------------------------------------------------------------------+
| Child ID (32 bytes)                                                |
| 0x9845893C4F9D92C4E097FCF2589BC9D6882B1F18D1C2FC91D7DF1D3FCBDB4238 |
+--------------------------------------------------------------------+

Node Hashing

Each node must have a unique ID that identifies it. This ID is calculated by hashing the following values:

  • The node's children
  • The node's value digest
  • The node's key

Specifically, we encode these values in the following way:

+----------------------------------------------------+
| Number of children (varint)                        |
+----------------------------------------------------+
| Child index (varint)                               |
+----------------------------------------------------+
| Child ID (32 bytes)                                |
+----------------------------------------------------+
| Child index (varint)                               |
+----------------------------------------------------+
| Child ID (32 bytes)                                |
+----------------------------------------------------+
|...                                                 |
+----------------------------------------------------+
| Value existence flag (1 byte)                      |
+----------------------------------------------------+
| Value length (varint) (optional)                   |
+----------------------------------------------------+
| Value (variable length bytes) (optional)           |
+----------------------------------------------------+
| Key length (varint)                                |
+----------------------------------------------------+
| Key (variable length bytes)                        |
+----------------------------------------------------+

Where:

  • Number of children is the number of children this node has.
  • Child index is the index of a child node within the list of the node's children.
  • Child ID is the child node's ID.
  • Value existence flag is 1 if this node has a value, otherwise 0.
  • Value length is the length of the value, if it exists (i.e. if the Value existence flag is 1). Otherwise not serialized.
  • Value is the value, if it exists (i.e. if the Value existence flag is 1). Otherwise not serialized.
  • Key length is the number of nibbles in this node's key.
  • Key is the node's key.

Note that, as with the node serialization format, the Child index values aren't necessarily sequential, but they are unique and strictly increasing. Also like the node serialization format, there can be up to 16 blocks of children data. However, note that child compressed paths are not included in the node ID calculation.

Once this is encoded, we sha256 hash the resulting bytes to get the node's ID.
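
As a rough illustration, the sketch below builds the preimage in the order described above and hashes it with sha256 from the standard library. Names are hypothetical, the nibble packing of the key is simplified, and the exact varint flavor may differ; the package's real hashing code is authoritative.

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// nodeID builds the preimage in the order shown above and hashes it.
// childIndices[i] corresponds to childIDs[i], in strictly increasing index order.
func nodeID(childIndices []uint64, childIDs [][32]byte, value []byte, hasValue bool, keyNibbles []byte) [32]byte {
	buf := make([]byte, 0, 128)
	buf = binary.AppendUvarint(buf, uint64(len(childIDs))) // number of children
	for i, id := range childIDs {
		buf = binary.AppendUvarint(buf, childIndices[i]) // child index
		buf = append(buf, id[:]...)                      // child ID
	}
	if hasValue {
		buf = append(buf, 1) // value existence flag
		buf = binary.AppendUvarint(buf, uint64(len(value)))
		buf = append(buf, value...)
	} else {
		buf = append(buf, 0)
	}
	buf = binary.AppendUvarint(buf, uint64(len(keyNibbles))) // key length in nibbles
	buf = append(buf, keyNibbles...)                         // the real codec packs nibbles into bytes
	return sha256.Sum256(buf)
}

func main() {
	id := nodeID(nil, nil, []byte{0x02}, true, nil)
	fmt.Printf("%x\n", id)
}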

Encoding Varints and Bytes

Varints are encoded with binary.PutUvarint from the standard library's encoding/binary package. Bytes are encoded by simply copying them onto the buffer.
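
For example, the child index 14 from the example above encodes to the single byte 0x0E:

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, 14) // child index 14 from the example above
	fmt.Printf("%x\n", buf[:n])     // 0e
}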

Design choices

[]byte copying

Each node contains a []byte that represents its value. This slice should never be edited internally, which allows it to be used within the library without defensive copies. Any time these values leave the library, for example in Get, GetValue, GetProof, GetRangeProof, etc., they must be copied into a new slice to prevent edits made outside the library from being reflected in the DB/TrieViews.
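
A hypothetical helper illustrating this copy-on-the-way-out pattern (not the package's actual code):

package main

import "fmt"

// returnCopy hands callers their own copy so external edits can't mutate
// the slice the trie still holds internally.
func returnCopy(internal []byte) []byte {
	if internal == nil {
		return nil
	}
	out := make([]byte, len(internal))
	copy(out, internal)
	return out
}

func main() {
	stored := []byte{1, 2, 3}
	got := returnCopy(stored)
	got[0] = 9
	fmt.Println(stored[0]) // still 1: the internal slice is untouched
}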

Split Node Storage

The nodes are stored under two different prefixes depending on whether the node contains a value.
Nodes with a value are stored in the ValueNodeDB; nodes without a value are stored in the IntermediateNodeDB. Splitting the nodes up by value allows better key/value iteration and a more compact key format.

Single node type

A Merkle Node holds the IDs of its children, its value, and any path extension. This simplifies some logic and allows all of the data about a node to be loaded in a single database read. This trades off a small amount of storage efficiency (some fields may be nil but are still stored for every node).

Validity

A trieView is built atop another trie, and there may be other trieViews built atop the same trie. We call these siblings. If one sibling is committed to the database, we invalidate all other siblings and their descendants. Operations on an invalid trie return ErrInvalid. The children of the committed trieView are updated so that their new parentTrie is the database.
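
A hedged usage sketch of this invalidation rule. The import paths are assumed to be the avalanchego ones, the database setup is omitted, and error handling is abbreviated:

package example

import (
	"context"
	"errors"
	"fmt"

	"github.com/ava-labs/avalanchego/utils/maybe"
	"github.com/ava-labs/avalanchego/x/merkledb"
)

// siblingInvalidation builds two sibling views over the same db and commits
// one of them; operations on the other are then expected to return ErrInvalid.
func siblingInvalidation(ctx context.Context, db merkledb.MerkleDB) error {
	changes := merkledb.ViewChanges{
		MapOps: map[string]maybe.Maybe[[]byte]{"k": maybe.Some([]byte("v"))},
	}
	viewA, err := db.NewView(ctx, changes)
	if err != nil {
		return err
	}
	viewB, err := db.NewView(ctx, changes) // sibling of viewA
	if err != nil {
		return err
	}
	if err := viewA.CommitToDB(ctx); err != nil { // invalidates viewB and its descendants
		return err
	}
	_, err = viewB.GetValue(ctx, []byte("k"))
	fmt.Println(errors.Is(err, merkledb.ErrInvalid)) // expected: true
	return nil
}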

Locking

merkleDB has a RWMutex named lock. Its read operations don't store data in a map, so a read lock suffices for read operations. merkleDB has a Mutex named commitLock. It enforces that only a single view/batch is attempting to commit to the database at one time. lock is insufficient because there is a period of view preparation where read access should still be allowed, followed by a period where a full write lock is needed. The commitLock ensures that only a single goroutine makes the transition from read => write.

A trieView is built atop another trie, which may be the underlying merkleDB or another trieView. We use locking to guarantee atomicity/consistency of trie operations.

trieView has a RWMutex named commitLock which ensures that we don't create a view atop the trieView while it's being committed. It also has a RWMutex named validityTrackingLock that is held during methods that change the view's validity, tracking of child views' validity, or of the trieView parent trie. This lock ensures that writing/reading from trieView or any of its descendants is safe. The CommitToDB method grabs the merkleDB's commitLock. This is the only trieView method that modifies the underlying merkleDB.

In some of merkleDB's methods, we create a trieView and call unexported methods on it without locking it. We do so because the exported counterpart of the method read locks the merkleDB, which is already locked. This pattern is safe because the merkleDB is locked, so no data under the view is changing, and nobody else has a reference to the view, so there can't be any concurrent access.

To prevent deadlocks, trieView and merkleDB never acquire the commitLock of descendant views. That is, locking is always done from a view toward the underlying merkleDB, never the other way around. The validityTrackingLock goes the opposite way. A view can lock the validityTrackingLock of its children, but not its ancestors. Because of this, any function that takes the validityTrackingLock must not take the commitLock, as this may cause a deadlock. Keeping commitLock solely in the ancestor direction and validityTrackingLock solely in the descendant direction prevents deadlocks from occurring.

Documentation

Overview

Package merkledb is a generated GoMock package.

Index

Constants

const HashLength = 32

Variables

var (
	ErrInvalidProof                = errors.New("proof obtained an invalid root ID")
	ErrInvalidMaxLength            = errors.New("expected max length to be > 0")
	ErrNonIncreasingValues         = errors.New("keys sent are not in increasing order")
	ErrStateFromOutsideOfRange     = errors.New("state key falls outside of the start->end range")
	ErrNonIncreasingProofNodes     = errors.New("each proof node key must be a strict prefix of the next")
	ErrExtraProofNodes             = errors.New("extra proof nodes in path")
	ErrDataInMissingRootProof      = errors.New("there should be no state or deleted keys in a change proof that had a missing root")
	ErrNoMerkleProof               = errors.New("empty key response must include merkle proof")
	ErrShouldJustBeRoot            = errors.New("end proof should only contain root")
	ErrNoStartProof                = errors.New("no start proof")
	ErrNoEndProof                  = errors.New("no end proof")
	ErrNoProof                     = errors.New("proof has no nodes")
	ErrProofNodeNotForKey          = errors.New("the provided node has a key that is not a prefix of the specified key")
	ErrProofValueDoesntMatch       = errors.New("the provided value does not match the proof node for the provided key's value")
	ErrProofNodeHasUnincludedValue = errors.New("the provided proof has a value for a key within the range that is not present in the provided key/values")
	ErrInvalidMaybe                = errors.New("maybe is nothing but has value")
	ErrInvalidChildIndex           = errors.New("child index must be less than branch factor")
	ErrNilProofNode                = errors.New("proof node is nil")
	ErrNilValueOrHash              = errors.New("proof node's valueOrHash field is nil")
	ErrNilPath                     = errors.New("path is nil")
	ErrInvalidPathLength           = errors.New("path length doesn't match bytes length, check specified branchFactor")
	ErrNilRangeProof               = errors.New("range proof is nil")
	ErrNilChangeProof              = errors.New("change proof is nil")
	ErrNilMaybeBytes               = errors.New("maybe bytes is nil")
	ErrNilProof                    = errors.New("proof is nil")
	ErrNilValue                    = errors.New("value is nil")
	ErrUnexpectedEndProof          = errors.New("end proof should be empty")
	ErrInconsistentBranchFactor    = errors.New("all keys in proof nodes should have the same branch factor")
)
var (
	ErrCommitted                  = errors.New("view has been committed")
	ErrInvalid                    = errors.New("the trie this view was based on has changed, rendering this view invalid")
	ErrPartialByteLengthWithValue = errors.New(
		"the underlying db only supports whole number of byte keys, so cannot record changes with partial byte lengths",
	)
	ErrGetPathToFailure       = errors.New("GetPathTo failed to return the closest node")
	ErrStartAfterEnd          = errors.New("start key > end key")
	ErrNoValidRoot            = errors.New("a valid root was not provided to the trieView constructor")
	ErrParentNotDatabase      = errors.New("parent trie is not database")
	ErrNodesAlreadyCalculated = errors.New("cannot modify the trie after the node changes have been calculated")
)
var ErrInsufficientHistory = errors.New("insufficient history to generate proof")

Functions

This section is empty.

Types

type BranchFactor added in v1.10.12

type BranchFactor int
const (
	BranchFactor2   BranchFactor = 2
	BranchFactor4   BranchFactor = 4
	BranchFactor16  BranchFactor = 16
	BranchFactor256 BranchFactor = 256
)

func (BranchFactor) Valid added in v1.10.12

func (f BranchFactor) Valid() error

type ChangeOrRangeProof added in v1.10.9

type ChangeOrRangeProof struct {
	ChangeProof *ChangeProof
	RangeProof  *RangeProof
}

Exactly one of ChangeProof or RangeProof is non-nil.

type ChangeProof

type ChangeProof struct {

	// A proof that the smallest key in the requested range does/doesn't
	// exist in the trie with the requested start root.
	// Empty if no lower bound on the requested range was given.
	// Note that this may not be an entire proof -- nodes are omitted if
	// they are also in [EndProof].
	StartProof []ProofNode

	// If [KeyChanges] is non-empty, this is a proof of the largest key
	// in [KeyChanges].
	//
	// If [KeyChanges] is empty and an upper range bound was given,
	// this is a proof of the upper range bound.
	//
	// If [KeyChanges] is empty and no upper range bound was given,
	// this is empty.
	EndProof []ProofNode

	// A subset of key-values that were added, removed, or had their values
	// modified between the requested start root (exclusive) and the requested
	// end root (inclusive).
	// Each key is in the requested range (inclusive).
	// The first key-value is the first key-value at/after the range start.
	// The key-value pairs are consecutive. That is, if keys k1 and k2 are
	// in [KeyChanges] then there is no k3 that was modified between the start and
	// end roots such that k1 < k3 < k2.
	// This is a subset of the requested key-value range, rather than the entire
	// range, because otherwise the proof may be too large.
	// Sorted by increasing key and with no duplicate keys.
	//
	// Example: Suppose that between the start root and the end root, the following
	// key-value pairs were added, removed, or modified:
	//
	// [kv1, kv2, kv3, kv4, kv5]
	// where start <= kv1 < ... < kv5 <= end.
	//
	// The following are possible values of [KeyChanges]:
	//
	// []
	// [kv1]
	// [kv1, kv2]
	// [kv1, kv2, kv3]
	// [kv1, kv2, kv3, kv4]
	// [kv1, kv2, kv3, kv4, kv5]
	//
	// The following values of [KeyChanges] are always invalid, for example:
	//
	// [kv2] (Doesn't include kv1, the first key-value at/after the range start)
	// [kv1, kv3] (Doesn't include kv2, the key-value between kv1 and kv3)
	// [kv1, kv3, kv2] (Not sorted by increasing key)
	// [kv1, kv1] (Duplicate key-value pairs)
	// [kv0, kv1] (For some kv0 < start)
	// [kv1, kv2, kv3, kv4, kv5, kv6] (For some kv6 > end)
	KeyChanges []KeyChange
}

A change proof proves that a set of key-value changes occurred between two trie roots, where each key-value pair's key is between some lower and upper bound (inclusive).

func (*ChangeProof) Empty

func (proof *ChangeProof) Empty() bool

func (*ChangeProof) ToProto added in v1.10.3

func (proof *ChangeProof) ToProto() *pb.ChangeProof

func (*ChangeProof) UnmarshalProto added in v1.10.3

func (proof *ChangeProof) UnmarshalProto(pbProof *pb.ChangeProof, bf BranchFactor) error

type ChangeProofer added in v1.10.3

type ChangeProofer interface {
	// GetChangeProof returns a proof for a subset of the key/value changes in key range
	// [start, end] that occurred between [startRootID] and [endRootID].
	// Returns at most [maxLength] key/value pairs.
	// Returns [ErrInsufficientHistory] if this node has insufficient history
	// to generate the proof.
	GetChangeProof(
		ctx context.Context,
		startRootID ids.ID,
		endRootID ids.ID,
		start maybe.Maybe[[]byte],
		end maybe.Maybe[[]byte],
		maxLength int,
	) (*ChangeProof, error)

	// Returns nil iff all of the following hold:
	//   - [start] <= [end].
	//   - [proof] is non-empty.
	//   - All keys in [proof.KeyValues] and [proof.DeletedKeys] are in [start, end].
	//     If [start] is nothing, all keys are considered > [start].
	//     If [end] is nothing, all keys are considered < [end].
	//   - [proof.KeyValues] and [proof.DeletedKeys] are sorted in order of increasing key.
	//   - [proof.StartProof] and [proof.EndProof] are well-formed.
	//   - When the changes in [proof.KeyChanges] are applied,
	//     the root ID of the database is [expectedEndRootID].
	VerifyChangeProof(
		ctx context.Context,
		proof *ChangeProof,
		start maybe.Maybe[[]byte],
		end maybe.Maybe[[]byte],
		expectedEndRootID ids.ID,
	) error

	// CommitChangeProof commits the key/value pairs within the [proof] to the db.
	CommitChangeProof(ctx context.Context, proof *ChangeProof) error
}
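
A hedged sketch of using these methods to move changes between two databases. Import paths are assumed to be the avalanchego ones and error handling is abbreviated:

package example

import (
	"context"

	"github.com/ava-labs/avalanchego/ids"
	"github.com/ava-labs/avalanchego/utils/maybe"
	"github.com/ava-labs/avalanchego/x/merkledb"
)

// applyChanges fetches up to maxLength key/value changes between two roots
// from source and, after verification, commits them to target.
func applyChanges(
	ctx context.Context,
	source, target merkledb.MerkleDB,
	startRoot, endRoot ids.ID,
	maxLength int,
) error {
	proof, err := source.GetChangeProof(ctx, startRoot, endRoot,
		maybe.Nothing[[]byte](), maybe.Nothing[[]byte](), maxLength)
	if err != nil {
		return err // e.g. ErrInsufficientHistory
	}
	if err := target.VerifyChangeProof(ctx, proof,
		maybe.Nothing[[]byte](), maybe.Nothing[[]byte](), endRoot); err != nil {
		return err
	}
	return target.CommitChangeProof(ctx, proof)
}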

type Config

type Config struct {
	// BranchFactor determines the number of children each node can have.
	BranchFactor BranchFactor

	// RootGenConcurrency is the number of goroutines to use when
	// generating a new state root.
	//
	// If 0 is specified, [runtime.NumCPU] will be used.
	RootGenConcurrency uint
	// The number of bytes to write to disk when intermediate nodes are evicted
	// from their cache and written to disk.
	EvictionBatchSize uint
	// The number of changes to the database that we store in memory in order to
	// serve change proofs.
	HistoryLength uint
	// The number of bytes to cache nodes with values.
	ValueNodeCacheSize uint
	// The number of bytes to cache nodes without values.
	IntermediateNodeCacheSize uint
	// If [Reg] is nil, metrics are collected locally but not exported through
	// Prometheus.
	// This may be useful for testing.
	Reg        prometheus.Registerer
	TraceLevel TraceLevel
	Tracer     trace.Tracer
}

type KeyChange added in v1.10.1

type KeyChange struct {
	Key   []byte
	Value maybe.Maybe[[]byte]
}

type KeyValue

type KeyValue struct {
	Key   []byte
	Value []byte
}

type MerkleDB added in v1.10.2

func New

func New(ctx context.Context, db database.Database, config Config) (MerkleDB, error)

New returns a new merkle database.
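
A hedged setup sketch: creating a MerkleDB over an in-memory database. The import paths and the field values below are assumptions chosen for illustration, not prescribed defaults; trace.Noop is assumed to be the no-op tracer from avalanchego's trace package.

package main

import (
	"context"
	"log"

	"github.com/ava-labs/avalanchego/database/memdb"
	"github.com/ava-labs/avalanchego/trace"
	"github.com/ava-labs/avalanchego/x/merkledb"
)

func main() {
	ctx := context.Background()
	db, err := merkledb.New(ctx, memdb.New(), merkledb.Config{
		BranchFactor:              merkledb.BranchFactor16,
		HistoryLength:             128,     // changes kept in memory to serve change proofs
		EvictionBatchSize:         1 << 20, // bytes written per eviction batch
		ValueNodeCacheSize:        1 << 20, // bytes
		IntermediateNodeCacheSize: 1 << 20, // bytes
		Tracer:                    trace.Noop,
		// Reg left nil: metrics are collected locally but not exported (see Config docs).
	})
	if err != nil {
		log.Fatal(err)
	}

	if err := db.Put([]byte("key"), []byte("value")); err != nil {
		log.Fatal(err)
	}
	root, err := db.GetMerkleRoot(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("root: %s", root)
}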

type MerkleRootGetter added in v1.10.3

type MerkleRootGetter interface {
	// GetMerkleRoot returns the merkle root of the Trie
	GetMerkleRoot(ctx context.Context) (ids.ID, error)
}

type MockMerkleDB added in v1.10.2

type MockMerkleDB struct {
	// contains filtered or unexported fields
}

MockMerkleDB is a mock of MerkleDB interface.

func NewMockMerkleDB added in v1.10.2

func NewMockMerkleDB(ctrl *gomock.Controller) *MockMerkleDB

NewMockMerkleDB creates a new mock instance.
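
A hedged sketch of using the mock in a test. The gomock import path depends on which mock module this avalanchego version uses (go.uber.org/mock or github.com/golang/mock); adjust accordingly.

package example

import (
	"context"
	"testing"

	"github.com/ava-labs/avalanchego/ids"
	"github.com/ava-labs/avalanchego/x/merkledb"
	"go.uber.org/mock/gomock" // or github.com/golang/mock/gomock, depending on the version
)

func TestUsesMerkleDB(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	db := merkledb.NewMockMerkleDB(ctrl)
	db.EXPECT().GetMerkleRoot(gomock.Any()).Return(ids.Empty, nil)

	root, err := db.GetMerkleRoot(context.Background())
	if err != nil || root != ids.Empty {
		t.Fatalf("unexpected result: %s, %v", root, err)
	}
}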

func (*MockMerkleDB) Close added in v1.10.2

func (m *MockMerkleDB) Close() error

Close mocks base method.

func (*MockMerkleDB) CommitChangeProof added in v1.10.2

func (m *MockMerkleDB) CommitChangeProof(arg0 context.Context, arg1 *ChangeProof) error

CommitChangeProof mocks base method.

func (*MockMerkleDB) CommitRangeProof added in v1.10.2

func (m *MockMerkleDB) CommitRangeProof(arg0 context.Context, arg1, arg2 maybe.Maybe[[]uint8], arg3 *RangeProof) error

CommitRangeProof mocks base method.

func (*MockMerkleDB) Compact added in v1.10.2

func (m *MockMerkleDB) Compact(arg0, arg1 []byte) error

Compact mocks base method.

func (*MockMerkleDB) Delete added in v1.10.2

func (m *MockMerkleDB) Delete(arg0 []byte) error

Delete mocks base method.

func (*MockMerkleDB) EXPECT added in v1.10.2

func (m *MockMerkleDB) EXPECT() *MockMerkleDBMockRecorder

EXPECT returns an object that allows the caller to indicate expected use.

func (*MockMerkleDB) Get added in v1.10.2

func (m *MockMerkleDB) Get(arg0 []byte) ([]byte, error)

Get mocks base method.

func (*MockMerkleDB) GetChangeProof added in v1.10.2

func (m *MockMerkleDB) GetChangeProof(arg0 context.Context, arg1, arg2 ids.ID, arg3, arg4 maybe.Maybe[[]uint8], arg5 int) (*ChangeProof, error)

GetChangeProof mocks base method.

func (*MockMerkleDB) GetMerkleRoot added in v1.10.2

func (m *MockMerkleDB) GetMerkleRoot(arg0 context.Context) (ids.ID, error)

GetMerkleRoot mocks base method.

func (*MockMerkleDB) GetProof added in v1.10.2

func (m *MockMerkleDB) GetProof(arg0 context.Context, arg1 []byte) (*Proof, error)

GetProof mocks base method.

func (*MockMerkleDB) GetRangeProof added in v1.10.2

func (m *MockMerkleDB) GetRangeProof(arg0 context.Context, arg1, arg2 maybe.Maybe[[]uint8], arg3 int) (*RangeProof, error)

GetRangeProof mocks base method.

func (*MockMerkleDB) GetRangeProofAtRoot added in v1.10.2

func (m *MockMerkleDB) GetRangeProofAtRoot(arg0 context.Context, arg1 ids.ID, arg2, arg3 maybe.Maybe[[]uint8], arg4 int) (*RangeProof, error)

GetRangeProofAtRoot mocks base method.

func (*MockMerkleDB) GetValue added in v1.10.2

func (m *MockMerkleDB) GetValue(arg0 context.Context, arg1 []byte) ([]byte, error)

GetValue mocks base method.

func (*MockMerkleDB) GetValues added in v1.10.2

func (m *MockMerkleDB) GetValues(arg0 context.Context, arg1 [][]byte) ([][]byte, []error)

GetValues mocks base method.

func (*MockMerkleDB) Has added in v1.10.2

func (m *MockMerkleDB) Has(arg0 []byte) (bool, error)

Has mocks base method.

func (*MockMerkleDB) HealthCheck added in v1.10.2

func (m *MockMerkleDB) HealthCheck(arg0 context.Context) (interface{}, error)

HealthCheck mocks base method.

func (*MockMerkleDB) NewBatch added in v1.10.2

func (m *MockMerkleDB) NewBatch() database.Batch

NewBatch mocks base method.

func (*MockMerkleDB) NewIterator added in v1.10.2

func (m *MockMerkleDB) NewIterator() database.Iterator

NewIterator mocks base method.

func (*MockMerkleDB) NewIteratorWithPrefix added in v1.10.2

func (m *MockMerkleDB) NewIteratorWithPrefix(arg0 []byte) database.Iterator

NewIteratorWithPrefix mocks base method.

func (*MockMerkleDB) NewIteratorWithStart added in v1.10.2

func (m *MockMerkleDB) NewIteratorWithStart(arg0 []byte) database.Iterator

NewIteratorWithStart mocks base method.

func (*MockMerkleDB) NewIteratorWithStartAndPrefix added in v1.10.2

func (m *MockMerkleDB) NewIteratorWithStartAndPrefix(arg0, arg1 []byte) database.Iterator

NewIteratorWithStartAndPrefix mocks base method.

func (*MockMerkleDB) NewView added in v1.10.2

func (m *MockMerkleDB) NewView(arg0 context.Context, arg1 ViewChanges) (TrieView, error)

NewView mocks base method.

func (*MockMerkleDB) Put added in v1.10.2

func (m *MockMerkleDB) Put(arg0, arg1 []byte) error

Put mocks base method.

func (*MockMerkleDB) VerifyChangeProof added in v1.10.2

func (m *MockMerkleDB) VerifyChangeProof(arg0 context.Context, arg1 *ChangeProof, arg2, arg3 maybe.Maybe[[]uint8], arg4 ids.ID) error

VerifyChangeProof mocks base method.

type MockMerkleDBMockRecorder added in v1.10.2

type MockMerkleDBMockRecorder struct {
	// contains filtered or unexported fields
}

MockMerkleDBMockRecorder is the mock recorder for MockMerkleDB.

func (*MockMerkleDBMockRecorder) Close added in v1.10.2

func (mr *MockMerkleDBMockRecorder) Close() *gomock.Call

Close indicates an expected call of Close.

func (*MockMerkleDBMockRecorder) CommitChangeProof added in v1.10.2

func (mr *MockMerkleDBMockRecorder) CommitChangeProof(arg0, arg1 interface{}) *gomock.Call

CommitChangeProof indicates an expected call of CommitChangeProof.

func (*MockMerkleDBMockRecorder) CommitRangeProof added in v1.10.2

func (mr *MockMerkleDBMockRecorder) CommitRangeProof(arg0, arg1, arg2, arg3 interface{}) *gomock.Call

CommitRangeProof indicates an expected call of CommitRangeProof.

func (*MockMerkleDBMockRecorder) Compact added in v1.10.2

func (mr *MockMerkleDBMockRecorder) Compact(arg0, arg1 interface{}) *gomock.Call

Compact indicates an expected call of Compact.

func (*MockMerkleDBMockRecorder) Delete added in v1.10.2

func (mr *MockMerkleDBMockRecorder) Delete(arg0 interface{}) *gomock.Call

Delete indicates an expected call of Delete.

func (*MockMerkleDBMockRecorder) Get added in v1.10.2

func (mr *MockMerkleDBMockRecorder) Get(arg0 interface{}) *gomock.Call

Get indicates an expected call of Get.

func (*MockMerkleDBMockRecorder) GetChangeProof added in v1.10.2

func (mr *MockMerkleDBMockRecorder) GetChangeProof(arg0, arg1, arg2, arg3, arg4, arg5 interface{}) *gomock.Call

GetChangeProof indicates an expected call of GetChangeProof.

func (*MockMerkleDBMockRecorder) GetMerkleRoot added in v1.10.2

func (mr *MockMerkleDBMockRecorder) GetMerkleRoot(arg0 interface{}) *gomock.Call

GetMerkleRoot indicates an expected call of GetMerkleRoot.

func (*MockMerkleDBMockRecorder) GetProof added in v1.10.2

func (mr *MockMerkleDBMockRecorder) GetProof(arg0, arg1 interface{}) *gomock.Call

GetProof indicates an expected call of GetProof.

func (*MockMerkleDBMockRecorder) GetRangeProof added in v1.10.2

func (mr *MockMerkleDBMockRecorder) GetRangeProof(arg0, arg1, arg2, arg3 interface{}) *gomock.Call

GetRangeProof indicates an expected call of GetRangeProof.

func (*MockMerkleDBMockRecorder) GetRangeProofAtRoot added in v1.10.2

func (mr *MockMerkleDBMockRecorder) GetRangeProofAtRoot(arg0, arg1, arg2, arg3, arg4 interface{}) *gomock.Call

GetRangeProofAtRoot indicates an expected call of GetRangeProofAtRoot.

func (*MockMerkleDBMockRecorder) GetValue added in v1.10.2

func (mr *MockMerkleDBMockRecorder) GetValue(arg0, arg1 interface{}) *gomock.Call

GetValue indicates an expected call of GetValue.

func (*MockMerkleDBMockRecorder) GetValues added in v1.10.2

func (mr *MockMerkleDBMockRecorder) GetValues(arg0, arg1 interface{}) *gomock.Call

GetValues indicates an expected call of GetValues.

func (*MockMerkleDBMockRecorder) Has added in v1.10.2

func (mr *MockMerkleDBMockRecorder) Has(arg0 interface{}) *gomock.Call

Has indicates an expected call of Has.

func (*MockMerkleDBMockRecorder) HealthCheck added in v1.10.2

func (mr *MockMerkleDBMockRecorder) HealthCheck(arg0 interface{}) *gomock.Call

HealthCheck indicates an expected call of HealthCheck.

func (*MockMerkleDBMockRecorder) NewBatch added in v1.10.2

func (mr *MockMerkleDBMockRecorder) NewBatch() *gomock.Call

NewBatch indicates an expected call of NewBatch.

func (*MockMerkleDBMockRecorder) NewIterator added in v1.10.2

func (mr *MockMerkleDBMockRecorder) NewIterator() *gomock.Call

NewIterator indicates an expected call of NewIterator.

func (*MockMerkleDBMockRecorder) NewIteratorWithPrefix added in v1.10.2

func (mr *MockMerkleDBMockRecorder) NewIteratorWithPrefix(arg0 interface{}) *gomock.Call

NewIteratorWithPrefix indicates an expected call of NewIteratorWithPrefix.

func (*MockMerkleDBMockRecorder) NewIteratorWithStart added in v1.10.2

func (mr *MockMerkleDBMockRecorder) NewIteratorWithStart(arg0 interface{}) *gomock.Call

NewIteratorWithStart indicates an expected call of NewIteratorWithStart.

func (*MockMerkleDBMockRecorder) NewIteratorWithStartAndPrefix added in v1.10.2

func (mr *MockMerkleDBMockRecorder) NewIteratorWithStartAndPrefix(arg0, arg1 interface{}) *gomock.Call

NewIteratorWithStartAndPrefix indicates an expected call of NewIteratorWithStartAndPrefix.

func (*MockMerkleDBMockRecorder) NewView added in v1.10.2

func (mr *MockMerkleDBMockRecorder) NewView(arg0, arg1 interface{}) *gomock.Call

NewView indicates an expected call of NewView.

func (*MockMerkleDBMockRecorder) Put added in v1.10.2

func (mr *MockMerkleDBMockRecorder) Put(arg0, arg1 interface{}) *gomock.Call

Put indicates an expected call of Put.

func (*MockMerkleDBMockRecorder) VerifyChangeProof added in v1.10.2

func (mr *MockMerkleDBMockRecorder) VerifyChangeProof(arg0, arg1, arg2, arg3, arg4 interface{}) *gomock.Call

VerifyChangeProof indicates an expected call of VerifyChangeProof.

type Path added in v1.10.12

type Path struct {
	// contains filtered or unexported fields
}

func NewPath added in v1.10.12

func NewPath(p []byte, branchFactor BranchFactor) Path

NewPath returns [p] as a new path with the given [branchFactor]. Assumes [branchFactor] is valid.

func (Path) Append added in v1.10.12

func (p Path) Append(token byte) Path

Append returns a new Path that equals the current Path with [token] appended to the end.

func (Path) Bytes added in v1.10.12

func (p Path) Bytes() []byte

Bytes returns the raw bytes of the Path. Invariant: The returned value must not be modified.

func (Path) Extend added in v1.10.12

func (p Path) Extend(path Path) Path

Extend returns a new Path that equals the passed Path appended to the current Path

func (Path) Greater added in v1.10.12

func (p Path) Greater(other Path) bool

Greater returns true if current Path is greater than other Path

func (Path) HasPrefix added in v1.10.12

func (p Path) HasPrefix(prefix Path) bool

HasPrefix returns true iff [prefix] is a prefix of [p] or equal to it.

func (Path) HasStrictPrefix added in v1.10.12

func (p Path) HasStrictPrefix(prefix Path) bool

HasStrictPrefix returns true iff [prefix] is a prefix of [p] but is not equal to it.

func (Path) Less added in v1.10.12

func (p Path) Less(other Path) bool

Less returns true if current Path is less than other Path

func (Path) Skip added in v1.10.12

func (p Path) Skip(tokensToSkip int) Path

Skip returns a new Path that contains the last p.length-tokensToSkip tokens of [p].

func (Path) Take added in v1.10.12

func (p Path) Take(tokensToTake int) Path

Take returns a new Path that contains the first tokensToTake tokens of the current Path

func (Path) Token added in v1.10.12

func (p Path) Token(index int) byte

Token returns the token at the specified index.

func (Path) TokensLength added in v1.10.12

func (p Path) TokensLength() int

TokensLength returns the number of tokens in [p].
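
A small sketch of working with Path values via the methods above; the values in the comments follow from the documented nibble (token) semantics for BranchFactor16:

package example

import (
	"fmt"

	"github.com/ava-labs/avalanchego/x/merkledb"
)

func pathDemo() {
	p := merkledb.NewPath([]byte{0x91, 0xA4}, merkledb.BranchFactor16)
	fmt.Println(p.TokensLength())    // 4 tokens: 0x9, 0x1, 0xA, 0x4
	prefix := p.Take(3)              // tokens 0x9, 0x1, 0xA
	fmt.Println(p.HasPrefix(prefix)) // true
	fmt.Println(p.Token(3))          // 4, the last nibble
}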

type Proof

type Proof struct {
	// Nodes in the proof path from root --> target key
	// (or node that would be where key is if it doesn't exist).
	// Must always be non-empty (i.e. have the root node).
	Path []ProofNode
	// This is a proof that [key] exists/doesn't exist.
	Key Path

	// Nothing if [Key] isn't in the trie.
	// Otherwise the value corresponding to [Key].
	Value maybe.Maybe[[]byte]
}

An inclusion/exclusion proof of a key.

func (*Proof) ToProto added in v1.10.3

func (proof *Proof) ToProto() *pb.Proof

func (*Proof) UnmarshalProto added in v1.10.3

func (proof *Proof) UnmarshalProto(pbProof *pb.Proof, bf BranchFactor) error

func (*Proof) Verify

func (proof *Proof) Verify(ctx context.Context, expectedRootID ids.ID) error

Returns nil if the trie given in [proof] has root [expectedRootID]. That is, this is a valid proof that [proof.Key] exists/doesn't exist in the trie with root [expectedRootID].
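
A hedged sketch of proving a single key against the trie's current root, using the documented GetMerkleRoot, GetProof, and Verify signatures:

package example

import (
	"context"

	"github.com/ava-labs/avalanchego/x/merkledb"
)

// proveKey fetches an inclusion/exclusion proof for key and verifies it
// against the trie's current root.
func proveKey(ctx context.Context, db merkledb.MerkleDB, key []byte) error {
	root, err := db.GetMerkleRoot(ctx)
	if err != nil {
		return err
	}
	proof, err := db.GetProof(ctx, key)
	if err != nil {
		return err
	}
	return proof.Verify(ctx, root)
}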

type ProofGetter added in v1.10.3

type ProofGetter interface {
	// GetProof generates a proof of the value associated with a particular key,
	// or a proof of its absence from the trie
	GetProof(ctx context.Context, bytesPath []byte) (*Proof, error)
}

type ProofNode

type ProofNode struct {
	KeyPath Path
	// Nothing if this is an intermediate node.
	// The value in this node if its length < [HashLen].
	// The hash of the value in this node otherwise.
	ValueOrHash maybe.Maybe[[]byte]
	Children    map[byte]ids.ID
}

func (*ProofNode) ToProto added in v1.10.3

func (node *ProofNode) ToProto() *pb.ProofNode

Assumes [node.Key.KeyPath.length] <= math.MaxUint64.

func (*ProofNode) UnmarshalProto added in v1.10.3

func (node *ProofNode) UnmarshalProto(pbNode *pb.ProofNode, bf BranchFactor) error

type RangeProof

type RangeProof struct {

	// A proof that the smallest key in the requested range does/doesn't exist.
	// Note that this may not be an entire proof -- nodes are omitted if
	// they are also in [EndProof].
	StartProof []ProofNode

	// If no upper range bound was given, [KeyValues] is empty,
	// and [StartProof] is non-empty, this is empty.
	//
	// If no upper range bound was given, [KeyValues] is empty,
	// and [StartProof] is empty, this is the root.
	//
	// If an upper range bound was given and [KeyValues] is empty,
	// this is a proof for the upper range bound.
	//
	// Otherwise, this is a proof for the largest key in [KeyValues].
	EndProof []ProofNode

	// This proof proves that the key-value pairs in [KeyValues] are in the trie.
	// Sorted by increasing key.
	KeyValues []KeyValue
}

A proof that a given set of key-value pairs are in a trie.

func (*RangeProof) ToProto added in v1.10.3

func (proof *RangeProof) ToProto() *pb.RangeProof

func (*RangeProof) UnmarshalProto added in v1.10.3

func (proof *RangeProof) UnmarshalProto(pbProof *pb.RangeProof, bf BranchFactor) error

func (*RangeProof) Verify

func (proof *RangeProof) Verify(
	ctx context.Context,
	start maybe.Maybe[[]byte],
	end maybe.Maybe[[]byte],
	expectedRootID ids.ID,
) error

Verify returns nil iff all the following hold:

  • The invariants of RangeProof hold.
  • [start] <= [end].
  • [proof] proves the key-value pairs in [proof.KeyValues] are in the trie whose root is [expectedRootID].

All keys in [proof.KeyValues] are in the range [start, end].

If [start] is Nothing, all keys are considered > [start].
If [end] is Nothing, all keys are considered < [end].
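
A hedged sketch of requesting and verifying a bounded range proof; maybe is assumed to be avalanchego's utils/maybe package:

package example

import (
	"context"

	"github.com/ava-labs/avalanchego/utils/maybe"
	"github.com/ava-labs/avalanchego/x/merkledb"
)

// proveRange fetches a proof for up to maxLength key-value pairs in
// [start, end] and verifies it against the trie's current root.
func proveRange(ctx context.Context, db merkledb.MerkleDB, start, end []byte, maxLength int) (*merkledb.RangeProof, error) {
	root, err := db.GetMerkleRoot(ctx)
	if err != nil {
		return nil, err
	}
	proof, err := db.GetRangeProof(ctx, maybe.Some(start), maybe.Some(end), maxLength)
	if err != nil {
		return nil, err
	}
	if err := proof.Verify(ctx, maybe.Some(start), maybe.Some(end), root); err != nil {
		return nil, err
	}
	return proof, nil
}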

type RangeProofer added in v1.10.3

type RangeProofer interface {
	// GetRangeProofAtRoot returns a proof for the key/value pairs in this trie within the range
	// [start, end] when the root of the trie was [rootID].
	// If [start] is Nothing, there's no lower bound on the range.
	// If [end] is Nothing, there's no upper bound on the range.
	GetRangeProofAtRoot(
		ctx context.Context,
		rootID ids.ID,
		start maybe.Maybe[[]byte],
		end maybe.Maybe[[]byte],
		maxLength int,
	) (*RangeProof, error)

	// CommitRangeProof commits the key/value pairs within the [proof] to the db.
	// [start] is the smallest possible key in the range this [proof] covers.
	// [end] is the largest possible key in the range this [proof] covers.
	CommitRangeProof(ctx context.Context, start, end maybe.Maybe[[]byte], proof *RangeProof) error
}

type ReadOnlyTrie

type ReadOnlyTrie interface {
	MerkleRootGetter
	ProofGetter

	// GetValue gets the value associated with the specified key
	// database.ErrNotFound if the key is not present
	GetValue(ctx context.Context, key []byte) ([]byte, error)

	// GetValues gets the values associated with the specified keys
	// database.ErrNotFound if the key is not present
	GetValues(ctx context.Context, keys [][]byte) ([][]byte, []error)

	// GetRangeProof returns a proof of up to [maxLength] key-value pairs with
	// keys in range [start, end].
	// If [start] is Nothing, there's no lower bound on the range.
	// If [end] is Nothing, there's no upper bound on the range.
	GetRangeProof(ctx context.Context, start maybe.Maybe[[]byte], end maybe.Maybe[[]byte], maxLength int) (*RangeProof, error)

	database.Iteratee
	// contains filtered or unexported methods
}

type TraceLevel added in v1.10.10

type TraceLevel int
const (
	DebugTrace TraceLevel = iota - 1
	InfoTrace             // Default
	NoTrace
)

type Trie

type Trie interface {
	ReadOnlyTrie

	// NewView returns a new view on top of this Trie where the passed changes
	// have been applied.
	NewView(
		ctx context.Context,
		changes ViewChanges,
	) (TrieView, error)
}

type TrieView

type TrieView interface {
	Trie

	// CommitToDB writes the changes in this view to the database.
	// Takes the DB commit lock.
	CommitToDB(ctx context.Context) error
}

type ViewChanges added in v1.10.10

type ViewChanges struct {
	BatchOps []database.BatchOp
	MapOps   map[string]maybe.Maybe[[]byte]
	// ConsumeBytes when set to true will skip copying of bytes and assume
	// ownership of the provided bytes.
	ConsumeBytes bool
}
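
A hedged sketch of staging changes in a view and committing them. The BatchOp fields and the delete-via-Nothing semantics of MapOps are assumptions based on the avalanchego database and maybe packages:

package example

import (
	"context"

	"github.com/ava-labs/avalanchego/database"
	"github.com/ava-labs/avalanchego/utils/maybe"
	"github.com/ava-labs/avalanchego/x/merkledb"
)

// commitBatch applies a mix of BatchOps and MapOps in a new view and
// commits the view to the underlying database.
func commitBatch(ctx context.Context, db merkledb.MerkleDB) error {
	view, err := db.NewView(ctx, merkledb.ViewChanges{
		BatchOps: []database.BatchOp{
			{Key: []byte("alpha"), Value: []byte{1}},
			{Key: []byte("beta"), Delete: true},
		},
		MapOps: map[string]maybe.Maybe[[]byte]{
			"gamma": maybe.Some([]byte{2}),
			"delta": maybe.Nothing[[]byte](), // assumed to delete "delta"
		},
	})
	if err != nil {
		return err
	}
	return view.CommitToDB(ctx) // takes the DB commit lock
}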
