atree

package module
v0.6.0 (not the latest version of this module)
Published: Mar 23, 2023 License: Apache-2.0 Imports: 14 Imported by: 27

README

Atree

Atree provides scalable arrays and scalable ordered maps. It is used by Cadence in the Flow blockchain.

Inspired by patterns used in modern variants of B+ Trees, Atree provides two types of data structures: Scalable Array Type (SAT) and Ordered Map Type (OMT).

  • Scalable Array Type (SAT) is a heterogeneous, variable-size array that stores values of any type as an ordered list and provides efficient lookup, insertion, and removal of elements anywhere in the array.

  • Ordered Map Type (OMT) is an ordered map of key-value pairs; keys can be any hashable type and values any serializable value type. It supports heterogeneous key and value types (e.g. the first key storing a boolean and the second key storing a string). OMT keeps values in a specific sorted order, and operations are deterministic, so the state of the segments after a given sequence of operations is uniquely determined. OMT uses CircleHash64f with Deferred+Segmented BLAKE3 Digests.
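For orientation, here is a minimal sketch (not taken from the Atree docs) of how both structures are created through the public API. The SlabStorage, Address, TypeInfo, and Value implementations are assumed to be supplied by the embedding application (e.g. Cadence):

package example

import (
	"github.com/onflow/atree"
)

// buildCollections sketches creating a SAT (Array) and an OMT (OrderedMap).
// storage, address, typeInfo, and values are assumed to be supplied by the
// embedding application; they are not provided by atree itself.
func buildCollections(
	storage atree.SlabStorage,
	address atree.Address,
	typeInfo atree.TypeInfo,
	values []atree.Value,
) (*atree.Array, *atree.OrderedMap, error) {
	// Scalable Array Type: create an array and append values in order.
	array, err := atree.NewArray(storage, address, typeInfo)
	if err != nil {
		return nil, nil, err
	}
	for _, v := range values {
		if err := array.Append(v); err != nil {
			return nil, nil, err
		}
	}

	// Ordered Map Type: the default digester builder provides the
	// CircleHash64f + BLAKE3 hashing described above.
	orderedMap, err := atree.NewMap(storage, address, atree.NewDefaultDigesterBuilder(), typeInfo)
	if err != nil {
		return nil, nil, err
	}
	return array, orderedMap, nil
}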

Under the Hood

Atree uses a new type of high-fanout B+ tree and some heuristics to balance the trade-off between the latency of operations and the number of reads and writes.

Each data structure holds its data in several relatively fixed-size segments of bytes (also known as slabs) forming a tree; as the data structure grows or shrinks, it adjusts the number of segments used. After each operation, Atree tries to keep segment sizes within an acceptable range by merging segments when needed (below the min threshold), splitting large slabs (above the max threshold), or moving some values to neighbouring segments (rebalancing). For ordered maps and arrays with a small number of elements, Atree is designed to have very minimal overhead compared to less scalable standard arrays and ordered maps (using a single data segment at the start).
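As an illustration only, the following sketch captures the shape of that size-threshold heuristic; the slab type and threshold values here are made-up placeholders, not Atree internals:

package example

// slab is a made-up stand-in for a segment of encoded bytes, and the
// thresholds are illustrative placeholders, not atree's real limits.
type slab struct{ size uint32 }

const (
	minThreshold uint32 = 512
	maxThreshold uint32 = 1024
)

// adjustAfterMutation returns which adjustment the size heuristic would
// apply after an operation changes a slab's size: split when too large,
// do nothing while in range, otherwise merge with or borrow from a sibling.
func adjustAfterMutation(s, sibling *slab) string {
	switch {
	case s.size > maxThreshold:
		return "split"
	case s.size >= minThreshold || sibling == nil:
		return "none" // in range (or no sibling to merge with): nothing else is touched
	case s.size+sibling.size <= maxThreshold:
		return "merge with sibling"
	default:
		return "rebalance: move some elements over from the sibling"
	}
}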

In order to minimize the number of bytes touched after each operation, Atree uses a deterministic greedy approach ("Optimistic Encasing Algorithm") to postpone merging, splitting, and rebalancing the tree as much as possible. In other words, it tolerates the tree becoming unbalanced at the cost of reserving some space for future insertions or letting a segment grow a bit larger than it otherwise should, which minimizes the number of segments (and bytes) touched by each operation.

Example

1 - An ordered map metadata slab keeps the first key hash of each child to navigate the path. It uses a combination of linear scan and binary search to find the next slab.

2 - Similarly, an array metadata slab keeps the element count of each child and uses that to navigate the path.

3 - Nested structures (e.g. a map holding an array under a key) are handled by storing the nested map or array as a separate object and using a one-way reference from the parent to the nested object.

4 - Extremely large objects are handled by storing them as an external data slab and using a pointer to that slab. This way we maintain the size requirements of slabs and preserve the performance of atree. As future work, external data slabs can be broken into a sequence of smaller slabs.

5 - Atree Ordered Map uses a collision handling design that is performant and resilient against hash-flooding attacks. It uses multi-level hashing that combines a fast 64-bit non-cryptographic hash with a 256-bit cryptographic hash. For speed, the cryptographic hash is only computed if there's a collision. For smaller storage size, the digests are divided into 64-bit segments with only the minimum required being stored. Collisions that cannot be resolved by hashes will eventually use linear lookup, but that is very unlikely as it would require collisions on two different hashes (CircleHash64f + BLAKE3) from the same input.

6 - Forwarding data slab pointers are used to make sequential iterations more efficient (a small iteration sketch follows this list).
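As a usage-level illustration of sequential iteration (item 6), the sketch below walks both data structures with the public Iterate methods; the visit callbacks are assumed to be caller-supplied and are not part of atree:

package example

import (
	"github.com/onflow/atree"
)

// iterateAll sketches sequential iteration over both data structures,
// which is where forwarding data slab pointers help.
func iterateAll(
	array *atree.Array,
	orderedMap *atree.OrderedMap,
	visitElement func(atree.Value) error,
	visitEntry func(key, value atree.Value) error,
) error {
	// Array iteration: returning false from the callback stops early.
	err := array.Iterate(func(element atree.Value) (bool, error) {
		if err := visitElement(element); err != nil {
			return false, err
		}
		return true, nil
	})
	if err != nil {
		return err
	}

	// Ordered map iteration visits entries in the map's deterministic order.
	return orderedMap.Iterate(func(key atree.Value, value atree.Value) (bool, error) {
		if err := visitEntry(key, value); err != nil {
			return false, err
		}
		return true, nil
	})
}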

OMT uses CircleHash64f with Deferred+Segmented BLAKE3 Digests

Inputs hashed by OMT are typically short inputs (usually smaller than 128 bytes). OMT uses state-of-the-art hash algorithms and a novel collision-handling design to balance speed, security, and storage space.

Input size   CircleHash64f 🏅 (seeded)   SipHash (seeded)   BLAKE3 🏅 (crypto)   SHA3-256 (crypto)
4 bytes      1.34 GB/s                   0.361 GB/s         0.027 GB/s           0.00491 GB/s
8 bytes      2.70 GB/s                   0.642 GB/s         0.106 GB/s           0.00984 GB/s
16 bytes     5.48 GB/s                   1.03 GB/s          0.217 GB/s           0.0197 GB/s
32 bytes     8.01 GB/s                   1.46 GB/s          0.462 GB/s           0.0399 GB/s
64 bytes     10.3 GB/s                   1.83 GB/s          0.911 GB/s           0.0812 GB/s
128 bytes    12.8 GB/s                   2.09 GB/s          1.03 GB/s            0.172 GB/s
192 bytes    14.2 GB/s                   2.17 GB/s          1.04 GB/s            0.158 GB/s
256 bytes    15.0 GB/s                   2.22 GB/s          1.06 GB/s            0.219 GB/s
  • Using Go 1.17.7, darwin_amd64, i7-1068NG7 CPU.
  • Results are from go test -bench=. -count=20 and benchstat.
  • Some hash libraries have slowdowns at some larger sizes.
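The hashing scheme is reachable through the DigesterBuilder and Digester interfaces documented below. The following sketch shows how a first-level digest for a key might be obtained; the builder, hash input provider, and key are assumed to come from the caller, and level 0 is assumed here to be the fast non-cryptographic digest:

package example

import (
	"github.com/onflow/atree"
)

// firstLevelDigest sketches obtaining a key's first-level digest through
// the DigesterBuilder interface (e.g. the one returned by
// NewDefaultDigesterBuilder).
func firstLevelDigest(
	builder atree.DigesterBuilder,
	hip atree.HashInputProvider,
	key atree.Value,
) (atree.Digest, error) {
	digester, err := builder.Digest(hip, key)
	if err != nil {
		return 0, err
	}
	// Level 0 is the fast 64-bit digest; deeper levels (the segmented
	// cryptographic digests) are only consulted when collisions occur.
	return digester.Digest(0)
}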

API Reference

Atree's API is documented with godoc at pkg.go.dev and will be updated when new versions of Atree are tagged.

Contributing

If you would like to contribute to Atree, have a look at the contributing guide.

Additionally, all non-error code paths must be covered by tests, and pull requests should not lower the code coverage percentage.

License

The Atree library is licensed under the terms of the Apache license. See LICENSE for more information.

Logo is based on the artwork of Raisul Hadi licensed under Creative Commons.

Copyright © 2021-2022 Dapper Labs, Inc.

Documentation

Overview

Package atree provides scalable arrays and scalable ordered maps. It is used by Cadence in the Flow blockchain.

Atree is maintained at https://github.com/onflow/atree

Index

Constants

const (
	CBORTagInlineCollisionGroup   = 253
	CBORTagExternalCollisionGroup = 254

	CBORTagStorageID = 255
)
const LedgerBaseStorageSlabPrefix = "$"

Variables

var (
	MaxInlineArrayElementSize uint64

	MaxInlineMapKeyOrValueSize uint64
)
var (
	AddressUndefined      = Address{}
	StorageIndexUndefined = StorageIndex{}
	StorageIDUndefined    = StorageID{}
)
var MaxCollisionLimitPerDigest = uint32(255)

MaxCollisionLimitPerDigest is the noncryptographic hash collision limit (per digest per map) we enforce in the first level. In the same map for the same digest, having a non-intentional collision should be rare and several collisions should be extremely rare. The default limit should be high enough to ignore accidental collisions while mitigating attacks.

Functions

func CheckStorageHealth

func CheckStorageHealth(storage SlabStorage, expectedNumberOfRootSlabs int) (map[StorageID]struct{}, error)

CheckStorageHealth checks the health of slab storage. It traverses the slabs and checks these factors:
- all non-root slabs have only a single parent reference (no double referencing)
- every child of a parent shares the same ownership (childStorageID.Address == parentStorageID.Address)
- the number of root slabs is equal to the expected number (skipped if expectedNumberOfRootSlabs is -1)
This should be used for testing purposes only, as it might be slow to process.
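A small usage sketch, assuming the slab storage has been set up elsewhere in a test:

package example

import (
	"github.com/onflow/atree"
)

// checkHealth sketches calling CheckStorageHealth from a test helper.
// Passing -1 as expectedRootSlabs skips the root-slab count check.
func checkHealth(storage atree.SlabStorage, expectedRootSlabs int) (map[atree.StorageID]struct{}, error) {
	rootIDs, err := atree.CheckStorageHealth(storage, expectedRootSlabs)
	if err != nil {
		return nil, err
	}
	return rootIDs, nil
}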

func DumpArraySlabs

func DumpArraySlabs(a *Array) ([]string, error)

func DumpMapSlabs

func DumpMapSlabs(m *OrderedMap) ([]string, error)

func Encode

func Encode(storable Storable, encMode cbor.EncMode) ([]byte, error)

Encode is a wrapper for Storable.Encode()

func GetUintCBORSize

func GetUintCBORSize(n uint64) uint32

TODO: make it inline

func HasPointers

func HasPointers(slabData []byte) (bool, error)

func HasSizeLimit

func HasSizeLimit(slabData []byte) (bool, error)

func IsRootOfAnObject

func IsRootOfAnObject(slabData []byte) (bool, error)

func LedgerKeyIsSlabKey

func LedgerKeyIsSlabKey(key string) bool

func NewCollisionLimitError added in v0.4.0

func NewCollisionLimitError(collisionLimitPerDigest uint32) error

NewCollisionLimitError constructs a CollisionLimitError

func NewDecodingError

func NewDecodingError(err error) error

NewDecodingError constructs a DecodingError

func NewDecodingErrorf

func NewDecodingErrorf(msg string, args ...interface{}) error

NewDecodingErrorf constructs a new DecodingError with error formatting

func NewDuplicateKeyError

func NewDuplicateKeyError(key interface{}) error

func NewEncodingError

func NewEncodingError(err error) error

NewEncodingError constructs an EncodingError

func NewEncodingErrorf

func NewEncodingErrorf(msg string, args ...interface{}) error

NewEncodingErrorf constructs a new EncodingError with error formatting

func NewExternalError added in v0.6.0

func NewExternalError(err error, msg string) error

func NewFatalError

func NewFatalError(err error) error

func NewHashError

func NewHashError(err error) error

NewHashError constructs a HashError

func NewHashLevelErrorf

func NewHashLevelErrorf(msg string, args ...interface{}) error

NewHashLevelErrorf constructs a HashLevelError with error formatting

func NewHashSeedUninitializedError

func NewHashSeedUninitializedError() error

func NewIndexOutOfBoundsError

func NewIndexOutOfBoundsError(index, min, max uint64) error

NewIndexOutOfBoundsError constructs an IndexOutOfBoundsError

func NewInvalidSliceIndexError added in v0.2.0

func NewInvalidSliceIndexError(startIndex, endIndex uint64) error

NewInvalidSliceIndexError constructs an InvalidSliceIndexError

func NewKeyNotFoundError

func NewKeyNotFoundError(key interface{}) error

NewKeyNotFoundError constructs a KeyNotFoundError

func NewMapElementCountError added in v0.4.0

func NewMapElementCountError(msg string) error

NewMapElementCountError constructs a MapElementCountError.

func NewNotApplicableError

func NewNotApplicableError(typeName, interfaceName, methodName string) error

NewNotApplicableError constructs a NotApplicableError

func NewNotImplementedError

func NewNotImplementedError(methodName string) error

NewNotImplementedError constructs a NotImplementedError

func NewNotValueError

func NewNotValueError(id StorageID) error

NewNotValueError constructs a NotValueError.

func NewSlabDataError

func NewSlabDataError(err error) error

NewSlabDataError constructs a SlabDataError

func NewSlabDataErrorf

func NewSlabDataErrorf(msg string, args ...interface{}) error

NewSlabDataErrorf constructs a new SlabDataError with error formatting

func NewSlabMergeError

func NewSlabMergeError(err error) error

NewSlabMergeError constructs a SlabMergeError

func NewSlabMergeErrorf

func NewSlabMergeErrorf(msg string, args ...interface{}) error

NewSlabMergeErrorf constructs a new SlabMergeError with error formatting

func NewSlabNotFoundError

func NewSlabNotFoundError(storageID StorageID, err error) error

NewSlabNotFoundError constructs a SlabNotFoundError

func NewSlabNotFoundErrorf

func NewSlabNotFoundErrorf(storageID StorageID, msg string, args ...interface{}) error

NewSlabNotFoundErrorf constructs a new SlabNotFoundError with error formatting

func NewSlabRebalanceError

func NewSlabRebalanceError(err error) error

NewSlabRebalanceError constructs a SlabRebalanceError

func NewSlabRebalanceErrorf

func NewSlabRebalanceErrorf(msg string, args ...interface{}) error

NewSlabRebalanceErrorf constructs a new SlabRebalanceError with error formatting

func NewSlabSplitError

func NewSlabSplitError(err error) error

NewSlabSplitError constructs a SlabSplitError

func NewSlabSplitErrorf

func NewSlabSplitErrorf(msg string, args ...interface{}) error

NewSlabSplitErrorf constructs a new SlabSplitError with error formatting

func NewSliceOutOfBoundsError added in v0.2.0

func NewSliceOutOfBoundsError(startIndex, endIndex, min, max uint64) error

NewSliceOutOfBoundsError constructs a SliceOutOfBoundsError.

func NewStorageIDError

func NewStorageIDError(msg string) error

NewStorageIDError constructs a fatal error of StorageIDError.

func NewStorageIDErrorf

func NewStorageIDErrorf(msg string, args ...interface{}) error

NewStorageIDErrorf constructs a fatal error of StorageIDError.

func NewUnreachableError

func NewUnreachableError() error

func NewUserError added in v0.6.0

func NewUserError(err error) error

func PrintArray

func PrintArray(a *Array)

PrintArray prints array slab data to stdout.

func PrintMap

func PrintMap(m *OrderedMap)

func SetThreshold

func SetThreshold(threshold uint64) (uint64, uint64, uint64, uint64)

func SlabIndexToLedgerKey

func SlabIndexToLedgerKey(ind StorageIndex) []byte

func ValidArray

func ValidArray(a *Array, typeInfo TypeInfo, tic TypeInfoComparator, hip HashInputProvider) error

func ValidArraySerialization

func ValidArraySerialization(
	a *Array,
	cborDecMode cbor.DecMode,
	cborEncMode cbor.EncMode,
	decodeStorable StorableDecoder,
	decodeTypeInfo TypeInfoDecoder,
	compare StorableComparator,
) error

ValidArraySerialization traverses the array tree and verifies serialization by encoding, decoding, and re-encoding slabs. It compares the in-memory objects of the original slab with the decoded slab. It also compares the encoded data of the original slab with the encoded data of the decoded slab.

func ValidMap

func ValidMap(m *OrderedMap, typeInfo TypeInfo, tic TypeInfoComparator, hip HashInputProvider) error

func ValidMapSerialization

func ValidMapSerialization(
	m *OrderedMap,
	cborDecMode cbor.DecMode,
	cborEncMode cbor.EncMode,
	decodeStorable StorableDecoder,
	decodeTypeInfo TypeInfoDecoder,
	compare StorableComparator,
) error

ValidMapSerialization traverses the ordered map tree and verifies serialization by encoding, decoding, and re-encoding slabs. It compares the in-memory objects of the original slab with the decoded slab. It also compares the encoded data of the original slab with the encoded data of the decoded slab.

func ValidValue

func ValidValue(value Value, typeInfo TypeInfo, tic TypeInfoComparator, hip HashInputProvider) error

func ValidValueSerialization

func ValidValueSerialization(
	value Value,
	cborDecMode cbor.DecMode,
	cborEncMode cbor.EncMode,
	decodeStorable StorableDecoder,
	decodeTypeInfo TypeInfoDecoder,
	compare StorableComparator,
) error

Types

type Address

type Address [8]byte

type Array

type Array struct {
	Storage SlabStorage
	// contains filtered or unexported fields
}

Array is a tree.

func NewArray

func NewArray(storage SlabStorage, address Address, typeInfo TypeInfo) (*Array, error)

func NewArrayFromBatchData

func NewArrayFromBatchData(storage SlabStorage, address Address, typeInfo TypeInfo, fn ArrayElementProvider) (*Array, error)

func NewArrayWithRootID

func NewArrayWithRootID(storage SlabStorage, rootID StorageID) (*Array, error)

func (*Array) Address

func (a *Array) Address() Address

func (*Array) Append

func (a *Array) Append(value Value) error

func (*Array) Count

func (a *Array) Count() uint64

func (*Array) Get

func (a *Array) Get(i uint64) (Storable, error)

func (*Array) Insert

func (a *Array) Insert(index uint64, value Value) error

func (*Array) Iterate

func (a *Array) Iterate(fn ArrayIterationFunc) error

func (*Array) IterateRange added in v0.2.0

func (a *Array) IterateRange(startIndex uint64, endIndex uint64, fn ArrayIterationFunc) error

func (*Array) Iterator

func (a *Array) Iterator() (*ArrayIterator, error)

func (*Array) PopIterate

func (a *Array) PopIterate(fn ArrayPopIterationFunc) error

PopIterate iterates and removes elements backward. Each element is passed to ArrayPopIterationFunc callback before removal.
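A short sketch of draining an array with PopIterate; collecting the removed Storables into a slice is an arbitrary choice for illustration:

package example

import (
	"github.com/onflow/atree"
)

// drainArray empties the array from the back, handing each removed
// element's Storable to the callback in pop order.
func drainArray(a *atree.Array) ([]atree.Storable, error) {
	var popped []atree.Storable
	err := a.PopIterate(func(s atree.Storable) {
		popped = append(popped, s)
	})
	return popped, err
}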

func (*Array) RangeIterator added in v0.2.0

func (a *Array) RangeIterator(startIndex uint64, endIndex uint64) (*ArrayIterator, error)

func (*Array) Remove

func (a *Array) Remove(index uint64) (Storable, error)

func (*Array) Set

func (a *Array) Set(index uint64, value Value) (Storable, error)

func (*Array) Storable

func (a *Array) Storable(_ SlabStorage, _ Address, _ uint64) (Storable, error)

func (*Array) StorageID

func (a *Array) StorageID() StorageID

func (*Array) String

func (a *Array) String() string

func (*Array) Type

func (a *Array) Type() TypeInfo

type ArrayDataSlab

type ArrayDataSlab struct {
	// contains filtered or unexported fields
}

ArrayDataSlab is a leaf node implementing ArraySlab.

func (*ArrayDataSlab) BorrowFromRight

func (a *ArrayDataSlab) BorrowFromRight(slab Slab) error

BorrowFromRight rebalances slabs by moving elements from right slab to left slab.

func (*ArrayDataSlab) ByteSize

func (a *ArrayDataSlab) ByteSize() uint32

func (*ArrayDataSlab) CanLendToLeft

func (a *ArrayDataSlab) CanLendToLeft(size uint32) bool

CanLendToLeft returns true if elements on the left of the slab could be removed so that the slab still stores more than the min threshold.

func (*ArrayDataSlab) CanLendToRight

func (a *ArrayDataSlab) CanLendToRight(size uint32) bool

CanLendToRight returns true if elements on the right of the slab could be removed so that the slab still stores more than the min threshold.

func (*ArrayDataSlab) ChildStorables

func (a *ArrayDataSlab) ChildStorables() []Storable

func (*ArrayDataSlab) Encode

func (a *ArrayDataSlab) Encode(enc *Encoder) error

Encode encodes this array data slab to the given encoder.

Header (18 bytes):

+-------------------------------+--------------------------------+
| slab version + flag (2 bytes) | next sib storage ID (16 bytes) |
+-------------------------------+--------------------------------+

Content (for now):

CBOR encoded array of elements

If this is root slab, extra data section is prepended to slab's encoded content. See ArrayExtraData.Encode() for extra data section format.

func (*ArrayDataSlab) ExtraData

func (a *ArrayDataSlab) ExtraData() *ArrayExtraData

func (*ArrayDataSlab) Get

func (a *ArrayDataSlab) Get(_ SlabStorage, index uint64) (Storable, error)

func (*ArrayDataSlab) Header

func (a *ArrayDataSlab) Header() ArraySlabHeader

func (*ArrayDataSlab) ID

func (a *ArrayDataSlab) ID() StorageID

func (*ArrayDataSlab) Insert

func (a *ArrayDataSlab) Insert(storage SlabStorage, address Address, index uint64, value Value) error

func (*ArrayDataSlab) IsData

func (a *ArrayDataSlab) IsData() bool

func (*ArrayDataSlab) IsFull

func (a *ArrayDataSlab) IsFull() bool

func (*ArrayDataSlab) IsUnderflow

func (a *ArrayDataSlab) IsUnderflow() (uint32, bool)

IsUnderflow returns the number of bytes needed for the data slab to reach the min threshold. Returns true if the min threshold has not been reached yet.

func (*ArrayDataSlab) LendToRight

func (a *ArrayDataSlab) LendToRight(slab Slab) error

LendToRight rebalances slabs by moving elements from left slab to right slab

func (*ArrayDataSlab) Merge

func (a *ArrayDataSlab) Merge(slab Slab) error

func (*ArrayDataSlab) PopIterate

func (a *ArrayDataSlab) PopIterate(storage SlabStorage, fn ArrayPopIterationFunc) error

func (*ArrayDataSlab) Remove

func (a *ArrayDataSlab) Remove(storage SlabStorage, index uint64) (Storable, error)

func (*ArrayDataSlab) RemoveExtraData

func (a *ArrayDataSlab) RemoveExtraData() *ArrayExtraData

func (*ArrayDataSlab) Set

func (a *ArrayDataSlab) Set(storage SlabStorage, address Address, index uint64, value Value) (Storable, error)

func (*ArrayDataSlab) SetExtraData

func (a *ArrayDataSlab) SetExtraData(extraData *ArrayExtraData)

func (*ArrayDataSlab) SetID

func (a *ArrayDataSlab) SetID(id StorageID)

func (*ArrayDataSlab) Split

func (a *ArrayDataSlab) Split(storage SlabStorage) (Slab, Slab, error)

func (*ArrayDataSlab) StoredValue

func (a *ArrayDataSlab) StoredValue(storage SlabStorage) (Value, error)

func (*ArrayDataSlab) String

func (a *ArrayDataSlab) String() string

type ArrayElementProvider

type ArrayElementProvider func() (Value, error)

type ArrayExtraData

type ArrayExtraData struct {
	TypeInfo TypeInfo // array type
}

func (*ArrayExtraData) Encode

func (a *ArrayExtraData) Encode(enc *Encoder, flag byte) error

Encode encodes extra data to the given encoder.

Header (2 bytes):

+-----------------------------+--------------------------+
| extra data version (1 byte) | extra data flag (1 byte) |
+-----------------------------+--------------------------+

Content (for now):

CBOR encoded array of extra data: cborArray{type info}

Extra data flag is the same as the slab flag it prepends.

type ArrayIterationFunc

type ArrayIterationFunc func(element Value) (resume bool, err error)

type ArrayIterator

type ArrayIterator struct {
	// contains filtered or unexported fields
}

func (*ArrayIterator) Next

func (i *ArrayIterator) Next() (Value, error)

type ArrayMetaDataSlab

type ArrayMetaDataSlab struct {
	// contains filtered or unexported fields
}

ArrayMetaDataSlab is an internal node implementing ArraySlab.

func (*ArrayMetaDataSlab) BorrowFromRight

func (a *ArrayMetaDataSlab) BorrowFromRight(slab Slab) error

func (*ArrayMetaDataSlab) ByteSize

func (a *ArrayMetaDataSlab) ByteSize() uint32

func (*ArrayMetaDataSlab) CanLendToLeft

func (a *ArrayMetaDataSlab) CanLendToLeft(size uint32) bool

func (*ArrayMetaDataSlab) CanLendToRight

func (a *ArrayMetaDataSlab) CanLendToRight(size uint32) bool

func (*ArrayMetaDataSlab) ChildStorables

func (a *ArrayMetaDataSlab) ChildStorables() []Storable

func (*ArrayMetaDataSlab) Encode

func (a *ArrayMetaDataSlab) Encode(enc *Encoder) error

Encode encodes this array meta-data slab to the given encoder.

Header (4 bytes):

+-----------------------+--------------------+------------------------------+
| slab version (1 byte) | slab flag (1 byte) | child header count (2 bytes) |
+-----------------------+--------------------+------------------------------+

Content (n * 16 bytes):

[[count, size, storage id], ...]

If this is root slab, extra data section is prepended to slab's encoded content. See ArrayExtraData.Encode() for extra data section format.

func (*ArrayMetaDataSlab) ExtraData

func (a *ArrayMetaDataSlab) ExtraData() *ArrayExtraData

func (*ArrayMetaDataSlab) Get

func (a *ArrayMetaDataSlab) Get(storage SlabStorage, index uint64) (Storable, error)

func (*ArrayMetaDataSlab) Header

func (a *ArrayMetaDataSlab) Header() ArraySlabHeader

func (*ArrayMetaDataSlab) ID

func (a *ArrayMetaDataSlab) ID() StorageID

func (*ArrayMetaDataSlab) Insert

func (a *ArrayMetaDataSlab) Insert(storage SlabStorage, address Address, index uint64, value Value) error

Insert inserts v into the correct child slab. index must be >=0 and <= a.header.count. If index == a.header.count, Insert appends v to the end of the underlying slab.

func (ArrayMetaDataSlab) IsData

func (a ArrayMetaDataSlab) IsData() bool

func (ArrayMetaDataSlab) IsFull

func (a ArrayMetaDataSlab) IsFull() bool

func (ArrayMetaDataSlab) IsUnderflow

func (a ArrayMetaDataSlab) IsUnderflow() (uint32, bool)

func (*ArrayMetaDataSlab) LendToRight

func (a *ArrayMetaDataSlab) LendToRight(slab Slab) error

func (*ArrayMetaDataSlab) Merge

func (a *ArrayMetaDataSlab) Merge(slab Slab) error

func (*ArrayMetaDataSlab) MergeOrRebalanceChildSlab

func (a *ArrayMetaDataSlab) MergeOrRebalanceChildSlab(
	storage SlabStorage,
	child ArraySlab,
	childHeaderIndex int,
	underflowSize uint32,
) error

MergeOrRebalanceChildSlab merges or rebalances child slab. If merged, then parent slab's data is adjusted.

+-----------------------+-----------------------+----------------------+-----------------------+
|                       | no left sibling (sib) | left sib can't lend  | left sib can lend     |
+=======================+=======================+======================+=======================+
| no right sib          | panic                 | merge with left      | rebalance with left   |
+-----------------------+-----------------------+----------------------+-----------------------+
| right sib can't lend  | merge with right      | merge with smaller   | rebalance with left   |
+-----------------------+-----------------------+----------------------+-----------------------+
| right sib can lend    | rebalance with right  | rebalance with right | rebalance with bigger |
+-----------------------+-----------------------+----------------------+-----------------------+

func (*ArrayMetaDataSlab) PopIterate

func (a *ArrayMetaDataSlab) PopIterate(storage SlabStorage, fn ArrayPopIterationFunc) error

func (*ArrayMetaDataSlab) Remove

func (a *ArrayMetaDataSlab) Remove(storage SlabStorage, index uint64) (Storable, error)

func (*ArrayMetaDataSlab) RemoveExtraData

func (a *ArrayMetaDataSlab) RemoveExtraData() *ArrayExtraData

func (*ArrayMetaDataSlab) Set

func (a *ArrayMetaDataSlab) Set(storage SlabStorage, address Address, index uint64, value Value) (Storable, error)

func (*ArrayMetaDataSlab) SetExtraData

func (a *ArrayMetaDataSlab) SetExtraData(extraData *ArrayExtraData)

func (*ArrayMetaDataSlab) SetID

func (a *ArrayMetaDataSlab) SetID(id StorageID)

func (*ArrayMetaDataSlab) Split

func (a *ArrayMetaDataSlab) Split(storage SlabStorage) (Slab, Slab, error)

func (*ArrayMetaDataSlab) SplitChildSlab

func (a *ArrayMetaDataSlab) SplitChildSlab(storage SlabStorage, child ArraySlab, childHeaderIndex int) error

func (*ArrayMetaDataSlab) StoredValue

func (a *ArrayMetaDataSlab) StoredValue(storage SlabStorage) (Value, error)

func (*ArrayMetaDataSlab) String

func (a *ArrayMetaDataSlab) String() string

type ArrayPopIterationFunc

type ArrayPopIterationFunc func(Storable)

type ArraySlab

type ArraySlab interface {
	Slab
	fmt.Stringer

	Get(storage SlabStorage, index uint64) (Storable, error)
	Set(storage SlabStorage, address Address, index uint64, value Value) (Storable, error)
	Insert(storage SlabStorage, address Address, index uint64, value Value) error
	Remove(storage SlabStorage, index uint64) (Storable, error)

	IsData() bool

	IsFull() bool
	IsUnderflow() (uint32, bool)
	CanLendToLeft(size uint32) bool
	CanLendToRight(size uint32) bool

	SetID(StorageID)

	Header() ArraySlabHeader

	ExtraData() *ArrayExtraData
	RemoveExtraData() *ArrayExtraData
	SetExtraData(*ArrayExtraData)

	PopIterate(SlabStorage, ArrayPopIterationFunc) error
}

type ArraySlabHeader

type ArraySlabHeader struct {
	// contains filtered or unexported fields
}

type ArrayStats

type ArrayStats struct {
	Levels            uint64
	ElementCount      uint64
	MetaDataSlabCount uint64
	DataSlabCount     uint64
	StorableSlabCount uint64
}

func GetArrayStats

func GetArrayStats(a *Array) (ArrayStats, error)

GetArrayStats returns stats about array slabs.
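A sketch of reading the returned counters; the printed format is illustrative only:

package example

import (
	"fmt"

	"github.com/onflow/atree"
)

// printArrayStats reads the per-slab counters exposed by ArrayStats.
func printArrayStats(a *atree.Array) error {
	stats, err := atree.GetArrayStats(a)
	if err != nil {
		return err
	}
	fmt.Printf("levels=%d elements=%d slabs=%d (meta=%d data=%d storable=%d)\n",
		stats.Levels,
		stats.ElementCount,
		stats.SlabCount(),
		stats.MetaDataSlabCount,
		stats.DataSlabCount,
		stats.StorableSlabCount,
	)
	return nil
}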

func (*ArrayStats) SlabCount

func (s *ArrayStats) SlabCount() uint64

type BaseStorage

type BaseStorage interface {
	Store(StorageID, []byte) error
	Retrieve(StorageID) ([]byte, bool, error)
	Remove(StorageID) error
	GenerateStorageID(Address) (StorageID, error)
	SegmentCounts() int // number of segments stored in the storage
	Size() int          // total byte size stored
	BaseStorageUsageReporter
}

type BaseStorageUsageReporter

type BaseStorageUsageReporter interface {
	BytesRetrieved() int
	BytesStored() int
	SegmentsReturned() int
	SegmentsUpdated() int
	SegmentsTouched() int
	ResetReporter()
}

type BasicArray

type BasicArray struct {
	// contains filtered or unexported fields
}

func NewBasicArray

func NewBasicArray(storage SlabStorage, address Address) (*BasicArray, error)

func NewBasicArrayWithRootID

func NewBasicArrayWithRootID(storage SlabStorage, id StorageID) (*BasicArray, error)

func (*BasicArray) Address

func (a *BasicArray) Address() Address

func (*BasicArray) Append

func (a *BasicArray) Append(v Value) error

func (*BasicArray) Count

func (a *BasicArray) Count() uint64

func (*BasicArray) Get

func (a *BasicArray) Get(index uint64) (Value, error)

func (*BasicArray) Insert

func (a *BasicArray) Insert(index uint64, v Value) error

func (*BasicArray) Remove

func (a *BasicArray) Remove(index uint64) (Value, error)

func (*BasicArray) Set

func (a *BasicArray) Set(index uint64, v Value) error

func (*BasicArray) Storable

func (a *BasicArray) Storable(_ SlabStorage, _ Address, _ uint64) (Storable, error)

func (*BasicArray) StorageID

func (a *BasicArray) StorageID() StorageID

func (*BasicArray) String

func (a *BasicArray) String() string

type BasicArrayDataSlab

type BasicArrayDataSlab struct {
	// contains filtered or unexported fields
}

func (*BasicArrayDataSlab) BorrowFromRight

func (a *BasicArrayDataSlab) BorrowFromRight(_ Slab) error

func (*BasicArrayDataSlab) ByteSize

func (a *BasicArrayDataSlab) ByteSize() uint32

func (*BasicArrayDataSlab) ChildStorables

func (a *BasicArrayDataSlab) ChildStorables() []Storable

func (*BasicArrayDataSlab) Count

func (a *BasicArrayDataSlab) Count() uint64

func (*BasicArrayDataSlab) Encode

func (a *BasicArrayDataSlab) Encode(enc *Encoder) error

func (*BasicArrayDataSlab) Get

func (a *BasicArrayDataSlab) Get(_ SlabStorage, index uint64) (Storable, error)

func (*BasicArrayDataSlab) Header

func (a *BasicArrayDataSlab) Header() ArraySlabHeader

func (*BasicArrayDataSlab) ID

func (a *BasicArrayDataSlab) ID() StorageID

func (*BasicArrayDataSlab) Insert

func (a *BasicArrayDataSlab) Insert(storage SlabStorage, index uint64, v Storable) error

func (*BasicArrayDataSlab) LendToRight

func (a *BasicArrayDataSlab) LendToRight(_ Slab) error

func (*BasicArrayDataSlab) Merge

func (a *BasicArrayDataSlab) Merge(_ Slab) error

func (*BasicArrayDataSlab) Remove

func (a *BasicArrayDataSlab) Remove(storage SlabStorage, index uint64) (Storable, error)

func (*BasicArrayDataSlab) Set

func (a *BasicArrayDataSlab) Set(storage SlabStorage, index uint64, v Storable) error

func (*BasicArrayDataSlab) Split

func (a *BasicArrayDataSlab) Split(_ SlabStorage) (Slab, Slab, error)

func (*BasicArrayDataSlab) StoredValue

func (a *BasicArrayDataSlab) StoredValue(storage SlabStorage) (Value, error)

func (*BasicArrayDataSlab) String

func (a *BasicArrayDataSlab) String() string

type BasicSlabStorage

type BasicSlabStorage struct {
	Slabs map[StorageID]Slab

	DecodeStorable StorableDecoder
	DecodeTypeInfo TypeInfoDecoder
	// contains filtered or unexported fields
}

func NewBasicSlabStorage

func NewBasicSlabStorage(
	cborEncMode cbor.EncMode,
	cborDecMode cbor.DecMode,
	decodeStorable StorableDecoder,
	decodeTypeInfo TypeInfoDecoder,
) *BasicSlabStorage

func (*BasicSlabStorage) Count

func (s *BasicSlabStorage) Count() int

func (*BasicSlabStorage) Encode

func (s *BasicSlabStorage) Encode() (map[StorageID][]byte, error)

Encode returns serialized slabs in storage. This is currently used for testing.

func (*BasicSlabStorage) GenerateStorageID

func (s *BasicSlabStorage) GenerateStorageID(address Address) (StorageID, error)

func (*BasicSlabStorage) Remove

func (s *BasicSlabStorage) Remove(id StorageID) error

func (*BasicSlabStorage) Retrieve

func (s *BasicSlabStorage) Retrieve(id StorageID) (Slab, bool, error)

func (*BasicSlabStorage) SlabIterator

func (s *BasicSlabStorage) SlabIterator() (SlabIterator, error)

func (*BasicSlabStorage) StorageIDs

func (s *BasicSlabStorage) StorageIDs() []StorageID

func (*BasicSlabStorage) Store

func (s *BasicSlabStorage) Store(id StorageID, slab Slab) error

type CollisionLimitError added in v0.4.0

type CollisionLimitError struct {
	// contains filtered or unexported fields
}

CollisionLimitError is a fatal error returned when a noncryptographic hash collision would exceed the collision limit (per digest per map) enforced at the first level.

func (*CollisionLimitError) Error added in v0.4.0

func (e *CollisionLimitError) Error() string

type DecodingError

type DecodingError struct {
	// contains filtered or unexported fields
}

DecodingError is a fatal error returned when a decoding operation fails

func (*DecodingError) Error

func (e *DecodingError) Error() string

type Digest

type Digest uint64

type Digester

type Digester interface {
	// DigestPrefix returns digests before specified level.
	// If level is 0, DigestPrefix returns nil.
	DigestPrefix(level uint) ([]Digest, error)

	// Digest returns digest at specified level.
	Digest(level uint) (Digest, error)

	// Reset data for reuse
	Reset()

	Levels() uint
}

type DigesterBuilder

type DigesterBuilder interface {
	SetSeed(k0 uint64, k1 uint64)
	Digest(HashInputProvider, Value) (Digester, error)
}

func NewDefaultDigesterBuilder

func NewDefaultDigesterBuilder() DigesterBuilder

type DuplicateKeyError

type DuplicateKeyError struct {
	// contains filtered or unexported fields
}

DuplicateKeyError is returned when a duplicate key is found in the dictionary and none is expected.

func (*DuplicateKeyError) Error

func (e *DuplicateKeyError) Error() string

type Encoder

type Encoder struct {
	io.Writer
	CBOR    *cbor.StreamEncoder
	Scratch [64]byte
}

Encoder writes atree slabs to io.Writer.

func NewEncoder

func NewEncoder(w io.Writer, encMode cbor.EncMode) *Encoder

type EncodingError

type EncodingError struct {
	// contains filtered or unexported fields
}

EncodingError is a fatal error returned when an encoding operation fails

func (*EncodingError) Error

func (e *EncodingError) Error() string

type ExternalError added in v0.6.0

type ExternalError struct {
	// contains filtered or unexported fields
}

func (*ExternalError) Error added in v0.6.0

func (e *ExternalError) Error() string

func (*ExternalError) Unwrap added in v0.6.0

func (e *ExternalError) Unwrap() error

type FatalError

type FatalError struct {
	// contains filtered or unexported fields
}

func (*FatalError) Error

func (e *FatalError) Error() string

func (*FatalError) Unwrap

func (e *FatalError) Unwrap() error

type HashError

type HashError struct {
	// contains filtered or unexported fields
}

HashError is a fatal error returned when hash calculation fails

func (*HashError) Error

func (e *HashError) Error() string

type HashInputProvider

type HashInputProvider func(value Value, buffer []byte) ([]byte, error)

type HashLevelError

type HashLevelError struct {
	// contains filtered or unexported fields
}

HashLevelError is a fatal error returned when hash level is wrong.

func (*HashLevelError) Error

func (e *HashLevelError) Error() string

type HashSeedUninitializedError

type HashSeedUninitializedError struct {
}

HashSeedUninitializedError is a fatal error returned when hash seed is uninitialized.

func (*HashSeedUninitializedError) Error

type IndexOutOfBoundsError

type IndexOutOfBoundsError struct {
	// contains filtered or unexported fields
}

IndexOutOfBoundsError is returned when a get, insert, or delete operation is attempted on an array index that is out of bounds.

func (*IndexOutOfBoundsError) Error

func (e *IndexOutOfBoundsError) Error() string

type InvalidSliceIndexError added in v0.2.0

type InvalidSliceIndexError struct {
	// contains filtered or unexported fields
}

InvalidSliceIndexError is returned when an array slice index is invalid, such as startIndex > endIndex. This error can be returned even when startIndex and endIndex are both within bounds.

func (*InvalidSliceIndexError) Error added in v0.2.0

func (e *InvalidSliceIndexError) Error() string

type KeyNotFoundError

type KeyNotFoundError struct {
	// contains filtered or unexported fields
}

KeyNotFoundError is returned when the key is not found in the dictionary

func (*KeyNotFoundError) Error

func (e *KeyNotFoundError) Error() string

type Ledger

type Ledger interface {
	// GetValue gets a value for the given key in the storage, owned by the given account.
	GetValue(owner, key []byte) (value []byte, err error)
	// SetValue sets a value for the given key in the storage, owned by the given account.
	SetValue(owner, key, value []byte) (err error)
	// ValueExists returns true if the given key exists in the storage, owned by the given account.
	ValueExists(owner, key []byte) (exists bool, err error)
	// AllocateStorageIndex allocates a new storage index under the given account.
	AllocateStorageIndex(owner []byte) (StorageIndex, error)
}
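A minimal in-memory Ledger sketch, assuming it is only used for tests; the type name and key encoding are illustrative, not part of atree:

package example

import (
	"encoding/binary"
	"fmt"

	"github.com/onflow/atree"
)

// memoryLedger is an illustrative in-memory Ledger; in production the
// ledger is provided by the embedding environment (e.g. Flow's storage).
type memoryLedger struct {
	values    map[string][]byte
	nextIndex uint64
}

// Compile-time check that memoryLedger satisfies atree.Ledger.
var _ atree.Ledger = (*memoryLedger)(nil)

func newMemoryLedger() *memoryLedger {
	return &memoryLedger{values: make(map[string][]byte)}
}

// ledgerMapKey combines owner and key into one map key (illustrative).
func ledgerMapKey(owner, key []byte) string {
	return fmt.Sprintf("%x|%x", owner, key)
}

func (l *memoryLedger) GetValue(owner, key []byte) ([]byte, error) {
	return l.values[ledgerMapKey(owner, key)], nil
}

func (l *memoryLedger) SetValue(owner, key, value []byte) error {
	l.values[ledgerMapKey(owner, key)] = value
	return nil
}

func (l *memoryLedger) ValueExists(owner, key []byte) (bool, error) {
	_, ok := l.values[ledgerMapKey(owner, key)]
	return ok, nil
}

func (l *memoryLedger) AllocateStorageIndex(owner []byte) (atree.StorageIndex, error) {
	l.nextIndex++
	var index atree.StorageIndex
	binary.BigEndian.PutUint64(index[:], l.nextIndex)
	return index, nil
}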

type LedgerBaseStorage

type LedgerBaseStorage struct {
	// contains filtered or unexported fields
}

func NewLedgerBaseStorage

func NewLedgerBaseStorage(ledger Ledger) *LedgerBaseStorage

func (*LedgerBaseStorage) BytesRetrieved

func (s *LedgerBaseStorage) BytesRetrieved() int

func (*LedgerBaseStorage) BytesStored

func (s *LedgerBaseStorage) BytesStored() int

func (*LedgerBaseStorage) GenerateStorageID

func (s *LedgerBaseStorage) GenerateStorageID(address Address) (StorageID, error)

func (*LedgerBaseStorage) Remove

func (s *LedgerBaseStorage) Remove(id StorageID) error

func (*LedgerBaseStorage) ResetReporter

func (s *LedgerBaseStorage) ResetReporter()

func (*LedgerBaseStorage) Retrieve

func (s *LedgerBaseStorage) Retrieve(id StorageID) ([]byte, bool, error)

func (*LedgerBaseStorage) SegmentCounts

func (s *LedgerBaseStorage) SegmentCounts() int

func (*LedgerBaseStorage) SegmentsReturned

func (s *LedgerBaseStorage) SegmentsReturned() int

func (*LedgerBaseStorage) SegmentsTouched

func (s *LedgerBaseStorage) SegmentsTouched() int

func (*LedgerBaseStorage) SegmentsUpdated

func (s *LedgerBaseStorage) SegmentsUpdated() int

func (*LedgerBaseStorage) Size

func (s *LedgerBaseStorage) Size() int

func (*LedgerBaseStorage) Store

func (s *LedgerBaseStorage) Store(id StorageID, data []byte) error

type MapDataSlab

type MapDataSlab struct {
	// contains filtered or unexported fields
}

MapDataSlab is a leaf node implementing MapSlab. anySize is true for a data slab that isn't restricted by the size requirement.

func (*MapDataSlab) BorrowFromRight

func (m *MapDataSlab) BorrowFromRight(slab Slab) error

func (*MapDataSlab) ByteSize

func (m *MapDataSlab) ByteSize() uint32

func (*MapDataSlab) CanLendToLeft

func (m *MapDataSlab) CanLendToLeft(size uint32) bool

CanLendToLeft returns true if elements on the left of the slab could be removed so that the slab still stores more than the min threshold.

func (*MapDataSlab) CanLendToRight

func (m *MapDataSlab) CanLendToRight(size uint32) bool

CanLendToRight returns true if elements on the right of the slab could be removed so that the slab still stores more than the min threshold.

func (*MapDataSlab) ChildStorables

func (m *MapDataSlab) ChildStorables() []Storable

func (*MapDataSlab) Encode

func (m *MapDataSlab) Encode(enc *Encoder) error

Encode encodes this map data slab to the given encoder.

Header (18 bytes):

+-------------------------------+--------------------------------+
| slab version + flag (2 bytes) | next sib storage ID (16 bytes) |
+-------------------------------+--------------------------------+

Content (for now):

CBOR array of 3 elements (level, hkeys, elements)

If this is root slab, extra data section is prepended to slab's encoded content. See MapExtraData.Encode() for extra data section format.

func (*MapDataSlab) ExtraData

func (m *MapDataSlab) ExtraData() *MapExtraData

func (*MapDataSlab) Header

func (m *MapDataSlab) Header() MapSlabHeader

func (*MapDataSlab) ID

func (m *MapDataSlab) ID() StorageID

func (*MapDataSlab) IsData

func (m *MapDataSlab) IsData() bool

func (*MapDataSlab) IsFull

func (m *MapDataSlab) IsFull() bool

func (*MapDataSlab) IsUnderflow

func (m *MapDataSlab) IsUnderflow() (uint32, bool)

IsUnderflow returns the number of bytes needed for the data slab to reach the min threshold. Returns true if the min threshold has not been reached yet.

func (*MapDataSlab) LendToRight

func (m *MapDataSlab) LendToRight(slab Slab) error

func (*MapDataSlab) Merge

func (m *MapDataSlab) Merge(slab Slab) error

func (*MapDataSlab) PopIterate

func (m *MapDataSlab) PopIterate(storage SlabStorage, fn MapPopIterationFunc) error

func (*MapDataSlab) Remove

func (m *MapDataSlab) Remove(storage SlabStorage, digester Digester, level uint, hkey Digest, comparator ValueComparator, key Value) (MapKey, MapValue, error)

func (*MapDataSlab) RemoveExtraData

func (m *MapDataSlab) RemoveExtraData() *MapExtraData

func (*MapDataSlab) Set

func (m *MapDataSlab) Set(storage SlabStorage, b DigesterBuilder, digester Digester, level uint, hkey Digest, comparator ValueComparator, hip HashInputProvider, key Value, value Value) (MapValue, error)

func (*MapDataSlab) SetExtraData

func (m *MapDataSlab) SetExtraData(extraData *MapExtraData)

func (*MapDataSlab) SetID

func (m *MapDataSlab) SetID(id StorageID)

func (*MapDataSlab) Split

func (m *MapDataSlab) Split(storage SlabStorage) (Slab, Slab, error)

func (*MapDataSlab) StoredValue

func (m *MapDataSlab) StoredValue(storage SlabStorage) (Value, error)

func (*MapDataSlab) String

func (m *MapDataSlab) String() string

type MapElementCountError added in v0.4.0

type MapElementCountError struct {
	// contains filtered or unexported fields
}

MapElementCountError is a fatal error returned when the element count is unexpected. It is an implementation error.

func (*MapElementCountError) Error added in v0.4.0

func (e *MapElementCountError) Error() string

type MapElementIterationFunc

type MapElementIterationFunc func(Value) (resume bool, err error)

type MapElementIterator

type MapElementIterator struct {
	// contains filtered or unexported fields
}

func (*MapElementIterator) Next

func (i *MapElementIterator) Next() (key MapKey, value MapValue, err error)

type MapElementProvider

type MapElementProvider func() (Value, Value, error)

type MapEntryIterationFunc

type MapEntryIterationFunc func(Value, Value) (resume bool, err error)

type MapExtraData

type MapExtraData struct {
	TypeInfo TypeInfo
	Count    uint64
	Seed     uint64
}

func (*MapExtraData) Encode

func (m *MapExtraData) Encode(enc *Encoder, version byte, flag byte) error

Encode encodes extra data to the given encoder.

Header (2 bytes):

+-----------------------------+--------------------------+
| extra data version (1 byte) | extra data flag (1 byte) |
+-----------------------------+--------------------------+

Content (for now):

CBOR encoded array of extra data

Extra data flag is the same as the slab flag it prepends.

type MapIterator

type MapIterator struct {
	// contains filtered or unexported fields
}

func (*MapIterator) Next

func (i *MapIterator) Next() (key Value, value Value, err error)

func (*MapIterator) NextKey

func (i *MapIterator) NextKey() (key Value, err error)

func (*MapIterator) NextValue

func (i *MapIterator) NextValue() (value Value, err error)

type MapKey

type MapKey Storable

type MapMetaDataSlab

type MapMetaDataSlab struct {
	// contains filtered or unexported fields
}

MapMetaDataSlab is an internal node implementing MapSlab.

func (*MapMetaDataSlab) BorrowFromRight

func (m *MapMetaDataSlab) BorrowFromRight(slab Slab) error

func (*MapMetaDataSlab) ByteSize

func (m *MapMetaDataSlab) ByteSize() uint32

func (*MapMetaDataSlab) CanLendToLeft

func (m *MapMetaDataSlab) CanLendToLeft(size uint32) bool

func (*MapMetaDataSlab) CanLendToRight

func (m *MapMetaDataSlab) CanLendToRight(size uint32) bool

func (*MapMetaDataSlab) ChildStorables

func (m *MapMetaDataSlab) ChildStorables() []Storable

func (*MapMetaDataSlab) Encode

func (m *MapMetaDataSlab) Encode(enc *Encoder) error

Encode encodes this map meta-data slab to the given encoder.

Header (4 bytes):

+-----------------------+--------------------+------------------------------+
| slab version (1 byte) | slab flag (1 byte) | child header count (2 bytes) |
+-----------------------+--------------------+------------------------------+

Content (n * 28 bytes):

[[storage id, first key, size], ...]

If this is root slab, extra data section is prepended to slab's encoded content. See MapExtraData.Encode() for extra data section format.

func (*MapMetaDataSlab) ExtraData

func (m *MapMetaDataSlab) ExtraData() *MapExtraData

func (*MapMetaDataSlab) Get

func (m *MapMetaDataSlab) Get(storage SlabStorage, digester Digester, level uint, hkey Digest, comparator ValueComparator, key Value) (MapValue, error)

func (*MapMetaDataSlab) Header

func (m *MapMetaDataSlab) Header() MapSlabHeader

func (*MapMetaDataSlab) ID

func (m *MapMetaDataSlab) ID() StorageID

func (MapMetaDataSlab) IsData

func (m MapMetaDataSlab) IsData() bool

func (MapMetaDataSlab) IsFull

func (m MapMetaDataSlab) IsFull() bool

func (MapMetaDataSlab) IsUnderflow

func (m MapMetaDataSlab) IsUnderflow() (uint32, bool)

func (*MapMetaDataSlab) LendToRight

func (m *MapMetaDataSlab) LendToRight(slab Slab) error

func (*MapMetaDataSlab) Merge

func (m *MapMetaDataSlab) Merge(slab Slab) error

func (*MapMetaDataSlab) MergeOrRebalanceChildSlab

func (m *MapMetaDataSlab) MergeOrRebalanceChildSlab(
	storage SlabStorage,
	child MapSlab,
	childHeaderIndex int,
	underflowSize uint32,
) error

MergeOrRebalanceChildSlab merges or rebalances the child slab. If merged, the parent slab's data is adjusted.

+-----------------------+-----------------------+----------------------+-----------------------+
|                       | no left sibling (sib) | left sib can't lend  | left sib can lend     |
+=======================+=======================+======================+=======================+
| no right sib          | panic                 | merge with left      | rebalance with left   |
+-----------------------+-----------------------+----------------------+-----------------------+
| right sib can't lend  | merge with right      | merge with smaller   | rebalance with left   |
+-----------------------+-----------------------+----------------------+-----------------------+
| right sib can lend    | rebalance with right  | rebalance with right | rebalance with bigger |
+-----------------------+-----------------------+----------------------+-----------------------+

func (*MapMetaDataSlab) PopIterate

func (m *MapMetaDataSlab) PopIterate(storage SlabStorage, fn MapPopIterationFunc) error

func (*MapMetaDataSlab) Remove

func (m *MapMetaDataSlab) Remove(storage SlabStorage, digester Digester, level uint, hkey Digest, comparator ValueComparator, key Value) (MapKey, MapValue, error)

func (*MapMetaDataSlab) RemoveExtraData

func (m *MapMetaDataSlab) RemoveExtraData() *MapExtraData

func (*MapMetaDataSlab) Set

func (m *MapMetaDataSlab) Set(storage SlabStorage, b DigesterBuilder, digester Digester, level uint, hkey Digest, comparator ValueComparator, hip HashInputProvider, key Value, value Value) (MapValue, error)

func (*MapMetaDataSlab) SetExtraData

func (m *MapMetaDataSlab) SetExtraData(extraData *MapExtraData)

func (*MapMetaDataSlab) SetID

func (m *MapMetaDataSlab) SetID(id StorageID)

func (*MapMetaDataSlab) Split

func (m *MapMetaDataSlab) Split(storage SlabStorage) (Slab, Slab, error)

func (*MapMetaDataSlab) SplitChildSlab

func (m *MapMetaDataSlab) SplitChildSlab(storage SlabStorage, child MapSlab, childHeaderIndex int) error

func (*MapMetaDataSlab) StoredValue

func (m *MapMetaDataSlab) StoredValue(storage SlabStorage) (Value, error)

func (*MapMetaDataSlab) String

func (m *MapMetaDataSlab) String() string

type MapPopIterationFunc

type MapPopIterationFunc func(Storable, Storable)

type MapSlab

type MapSlab interface {
	Slab
	fmt.Stringer

	Get(storage SlabStorage, digester Digester, level uint, hkey Digest, comparator ValueComparator, key Value) (MapValue, error)
	Set(storage SlabStorage, b DigesterBuilder, digester Digester, level uint, hkey Digest, comparator ValueComparator, hip HashInputProvider, key Value, value Value) (existingValue MapValue, err error)
	Remove(storage SlabStorage, digester Digester, level uint, hkey Digest, comparator ValueComparator, key Value) (MapKey, MapValue, error)

	IsData() bool

	IsFull() bool
	IsUnderflow() (uint32, bool)
	CanLendToLeft(size uint32) bool
	CanLendToRight(size uint32) bool

	SetID(StorageID)

	Header() MapSlabHeader

	ExtraData() *MapExtraData
	RemoveExtraData() *MapExtraData
	SetExtraData(*MapExtraData)

	PopIterate(SlabStorage, MapPopIterationFunc) error
}

type MapSlabHeader

type MapSlabHeader struct {
	// contains filtered or unexported fields
}

type MapStats

type MapStats struct {
	Levels                 uint64
	ElementCount           uint64
	MetaDataSlabCount      uint64
	DataSlabCount          uint64
	CollisionDataSlabCount uint64
	StorableSlabCount      uint64
}

func GetMapStats

func GetMapStats(m *OrderedMap) (MapStats, error)

GetMapStats returns stats about the map slabs.

func (*MapStats) SlabCount

func (s *MapStats) SlabCount() uint64

type MapValue

type MapValue Storable

type NotApplicableError

type NotApplicableError struct {
	// contains filtered or unexported fields
}

NotApplicableError is a fatal error returned when a not applicable method is called

func (*NotApplicableError) Error

func (e *NotApplicableError) Error() string

type NotImplementedError

type NotImplementedError struct {
	// contains filtered or unexported fields
}

NotImplementedError is a fatal error returned when a method that is not yet implemented is called. This is a temporary error.

func (*NotImplementedError) Error

func (e *NotImplementedError) Error() string

type NotValueError

type NotValueError struct {
	// contains filtered or unexported fields
}

NotValueError is returned when we try to create Value objects from non-root slabs.

func (*NotValueError) Error

func (e *NotValueError) Error() string

type OrderedMap

type OrderedMap struct {
	Storage SlabStorage
	// contains filtered or unexported fields
}

func NewMap

func NewMap(storage SlabStorage, address Address, digestBuilder DigesterBuilder, typeInfo TypeInfo) (*OrderedMap, error)

func NewMapFromBatchData

func NewMapFromBatchData(
	storage SlabStorage,
	address Address,
	digesterBuilder DigesterBuilder,
	typeInfo TypeInfo,
	comparator ValueComparator,
	hip HashInputProvider,
	seed uint64,
	fn MapElementProvider,
) (
	*OrderedMap,
	error,
)

NewMapFromBatchData returns a new map with elements provided by the fn callback. The provided seed must be the same seed used to create the original map, and the callback function must return elements in the same order as the original map. The new map uses and stores the same seed as the original map. This function should only be used for copying a map.
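A sketch of such a copy, assuming the same digester builder, comparator, and hash input provider that were used for the original map, and assuming the map iterator signals exhaustion in the way the element provider expects:

package example

import (
	"github.com/onflow/atree"
)

// copyMap sketches copying an ordered map with NewMapFromBatchData,
// reusing the original map's type info and seed and feeding elements
// in the original iteration order.
func copyMap(
	storage atree.SlabStorage,
	address atree.Address,
	digesterBuilder atree.DigesterBuilder,
	comparator atree.ValueComparator,
	hip atree.HashInputProvider,
	original *atree.OrderedMap,
) (*atree.OrderedMap, error) {
	iterator, err := original.Iterator()
	if err != nil {
		return nil, err
	}
	return atree.NewMapFromBatchData(
		storage,
		address,
		digesterBuilder,
		original.Type(),
		comparator,
		hip,
		original.Seed(),
		iterator.Next, // provider: yields entries in the original order
	)
}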

func NewMapWithRootID

func NewMapWithRootID(storage SlabStorage, rootID StorageID, digestBuilder DigesterBuilder) (*OrderedMap, error)

func (*OrderedMap) Address

func (m *OrderedMap) Address() Address

func (*OrderedMap) Count

func (m *OrderedMap) Count() uint64

func (*OrderedMap) Get

func (m *OrderedMap) Get(comparator ValueComparator, hip HashInputProvider, key Value) (Storable, error)

func (*OrderedMap) Has

func (m *OrderedMap) Has(comparator ValueComparator, hip HashInputProvider, key Value) (bool, error)

func (*OrderedMap) Iterate

func (m *OrderedMap) Iterate(fn MapEntryIterationFunc) error

func (*OrderedMap) IterateKeys

func (m *OrderedMap) IterateKeys(fn MapElementIterationFunc) error

func (*OrderedMap) IterateValues

func (m *OrderedMap) IterateValues(fn MapElementIterationFunc) error

func (*OrderedMap) Iterator

func (m *OrderedMap) Iterator() (*MapIterator, error)

func (*OrderedMap) PopIterate

func (m *OrderedMap) PopIterate(fn MapPopIterationFunc) error

PopIterate iterates and removes elements backward. Each element is passed to MapPopIterationFunc callback before removal.

func (*OrderedMap) Remove

func (m *OrderedMap) Remove(comparator ValueComparator, hip HashInputProvider, key Value) (Storable, Storable, error)

func (*OrderedMap) Seed

func (m *OrderedMap) Seed() uint64

func (*OrderedMap) Set

func (m *OrderedMap) Set(comparator ValueComparator, hip HashInputProvider, key Value, value Value) (Storable, error)

func (*OrderedMap) Storable

func (m *OrderedMap) Storable(_ SlabStorage, _ Address, _ uint64) (Storable, error)

func (*OrderedMap) StorageID

func (m *OrderedMap) StorageID() StorageID

func (*OrderedMap) StoredValue

func (m *OrderedMap) StoredValue(_ SlabStorage) (Value, error)

func (*OrderedMap) String

func (m *OrderedMap) String() string

func (*OrderedMap) Type

func (m *OrderedMap) Type() TypeInfo

type PersistentSlabStorage

type PersistentSlabStorage struct {
	DecodeStorable StorableDecoder
	DecodeTypeInfo TypeInfoDecoder
	// contains filtered or unexported fields
}

func NewPersistentSlabStorage

func NewPersistentSlabStorage(
	base BaseStorage,
	cborEncMode cbor.EncMode,
	cborDecMode cbor.DecMode,
	decodeStorable StorableDecoder,
	decodeTypeInfo TypeInfoDecoder,
	opts ...StorageOption,
) *PersistentSlabStorage
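A sketch of wiring a Ledger-backed base storage into a PersistentSlabStorage; the decode callbacks are assumed to be application-defined, and the CBOR modes here use default options from github.com/fxamacker/cbor/v2:

package example

import (
	"github.com/fxamacker/cbor/v2"
	"github.com/onflow/atree"
)

// newStorage builds a PersistentSlabStorage on top of a Ledger.
func newStorage(
	ledger atree.Ledger,
	decodeStorable atree.StorableDecoder,
	decodeTypeInfo atree.TypeInfoDecoder,
) (*atree.PersistentSlabStorage, error) {
	encMode, err := cbor.EncOptions{}.EncMode()
	if err != nil {
		return nil, err
	}
	decMode, err := cbor.DecOptions{}.DecMode()
	if err != nil {
		return nil, err
	}
	baseStorage := atree.NewLedgerBaseStorage(ledger)
	return atree.NewPersistentSlabStorage(
		baseStorage,
		encMode,
		decMode,
		decodeStorable,
		decodeTypeInfo,
	), nil
}

Commit (or FastCommit with a worker count) can then be used to write pending deltas through the base storage.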

func (*PersistentSlabStorage) Commit

func (s *PersistentSlabStorage) Commit() error

func (*PersistentSlabStorage) Count

func (s *PersistentSlabStorage) Count() int

Warning: Count doesn't consider new segments in the deltas and only returns committed values.

func (*PersistentSlabStorage) Deltas added in v0.4.0

func (s *PersistentSlabStorage) Deltas() uint

Deltas returns number of uncommitted slabs, including slabs with temp addresses.

func (*PersistentSlabStorage) DeltasSizeWithoutTempAddresses added in v0.5.0

func (s *PersistentSlabStorage) DeltasSizeWithoutTempAddresses() uint64

DeltasSizeWithoutTempAddresses returns total size of uncommitted slabs (in bytes), excluding slabs with temp addresses.

func (*PersistentSlabStorage) DeltasWithoutTempAddresses added in v0.4.0

func (s *PersistentSlabStorage) DeltasWithoutTempAddresses() uint

DeltasWithoutTempAddresses returns number of uncommitted slabs, excluding slabs with temp addresses.

func (*PersistentSlabStorage) DropCache

func (s *PersistentSlabStorage) DropCache()

func (*PersistentSlabStorage) DropDeltas

func (s *PersistentSlabStorage) DropDeltas()

func (*PersistentSlabStorage) FastCommit

func (s *PersistentSlabStorage) FastCommit(numWorkers int) error

func (*PersistentSlabStorage) GenerateStorageID

func (s *PersistentSlabStorage) GenerateStorageID(address Address) (StorageID, error)

func (*PersistentSlabStorage) Remove

func (s *PersistentSlabStorage) Remove(id StorageID) error

func (*PersistentSlabStorage) Retrieve

func (s *PersistentSlabStorage) Retrieve(id StorageID) (Slab, bool, error)

func (*PersistentSlabStorage) RetrieveIgnoringDeltas

func (s *PersistentSlabStorage) RetrieveIgnoringDeltas(id StorageID) (Slab, bool, error)

func (*PersistentSlabStorage) SlabIterator

func (s *PersistentSlabStorage) SlabIterator() (SlabIterator, error)

func (*PersistentSlabStorage) Store

func (s *PersistentSlabStorage) Store(id StorageID, slab Slab) error

type Slab

type Slab interface {
	Storable

	ID() StorageID
	Split(SlabStorage) (Slab, Slab, error)
	Merge(Slab) error
	// LendToRight rebalances slabs by moving elements from left to right
	LendToRight(Slab) error
	// BorrowFromRight rebalances slabs by moving elements from right to left
	BorrowFromRight(Slab) error
}

func DecodeSlab

func DecodeSlab(
	id StorageID,
	data []byte,
	decMode cbor.DecMode,
	decodeStorable StorableDecoder,
	decodeTypeInfo TypeInfoDecoder,
) (
	Slab,
	error,
)

type SlabDataError

type SlabDataError struct {
	// contains filtered or unexported fields
}

SlabDataError is an always-fatal error returned when something is wrong with the content or type of the slab. You can make this a fatal error by calling Fatal().

func (*SlabDataError) Error

func (e *SlabDataError) Error() string

type SlabIterator

type SlabIterator func() (StorageID, Slab)

type SlabMergeError

type SlabMergeError struct {
	// contains filtered or unexported fields
}

SlabMergeError is always a fatal error returned when merging two slabs fails

func (*SlabMergeError) Error

func (e *SlabMergeError) Error() string

type SlabNotFoundError

type SlabNotFoundError struct {
	// contains filtered or unexported fields
}

SlabNotFoundError is always a fatal error returned when a slab is not found

func (*SlabNotFoundError) Error

func (e *SlabNotFoundError) Error() string

type SlabRebalanceError

type SlabRebalanceError struct {
	// contains filtered or unexported fields
}

SlabRebalanceError is always a fatal error returned when rebalancing a slab has failed

func (*SlabRebalanceError) Error

func (e *SlabRebalanceError) Error() string

type SlabSplitError

type SlabSplitError struct {
	// contains filtered or unexported fields
}

SlabSplitError is always a fatal error returned when splitting a slab has failed

func (*SlabSplitError) Error

func (e *SlabSplitError) Error() string

type SlabStorage

type SlabStorage interface {
	Store(StorageID, Slab) error
	Retrieve(StorageID) (Slab, bool, error)
	Remove(StorageID) error
	GenerateStorageID(address Address) (StorageID, error)
	Count() int
	SlabIterator() (SlabIterator, error)
}

type SliceOutOfBoundsError added in v0.2.0

type SliceOutOfBoundsError struct {
	// contains filtered or unexported fields
}

SliceOutOfBoundsError is returned when index for array slice is out of bounds.

func (*SliceOutOfBoundsError) Error added in v0.2.0

func (e *SliceOutOfBoundsError) Error() string

type Storable

type Storable interface {
	Encode(*Encoder) error

	ByteSize() uint32

	StoredValue(storage SlabStorage) (Value, error)

	// ChildStorables only returns child storables in this storable
	// (not recursive).  This function shouldn't load extra slabs.
	ChildStorables() []Storable
}

func DecodeStorageIDStorable

func DecodeStorageIDStorable(dec *cbor.StreamDecoder) (Storable, error)

type StorableComparator

type StorableComparator func(Storable, Storable) bool

type StorableDecoder

type StorableDecoder func(
	decoder *cbor.StreamDecoder,
	storableSlabStorageID StorageID,
) (
	Storable,
	error,
)

type StorableSlab

type StorableSlab struct {
	StorageID StorageID
	Storable  Storable
}

StorableSlab allows storing storables (CBOR encoded data) directly in a slab. Eventually we will only have a dictionary at the account storage root, so this won't be needed, but during the refactor we need to store other non-dictionary values (e.g. strings, integers, etc.) directly in accounts (i.e. directly in slabs, aka registers).

func (StorableSlab) BorrowFromRight

func (StorableSlab) BorrowFromRight(_ Slab) error

func (StorableSlab) ByteSize

func (s StorableSlab) ByteSize() uint32

func (StorableSlab) ChildStorables

func (s StorableSlab) ChildStorables() []Storable

func (StorableSlab) Encode

func (s StorableSlab) Encode(enc *Encoder) error

func (StorableSlab) ID

func (s StorableSlab) ID() StorageID

func (StorableSlab) LendToRight

func (StorableSlab) LendToRight(_ Slab) error

func (StorableSlab) Merge

func (StorableSlab) Merge(_ Slab) error

func (StorableSlab) Split

func (StorableSlab) Split(_ SlabStorage) (Slab, Slab, error)

func (StorableSlab) StoredValue

func (s StorableSlab) StoredValue(storage SlabStorage) (Value, error)

type StorageID

type StorageID struct {
	Address Address
	Index   StorageIndex
}

func NewStorageID

func NewStorageID(address Address, index StorageIndex) StorageID

func NewStorageIDFromRawBytes

func NewStorageIDFromRawBytes(b []byte) (StorageID, error)

func (StorageID) AddressAsUint64

func (id StorageID) AddressAsUint64() uint64

func (StorageID) Compare

func (id StorageID) Compare(other StorageID) int

func (StorageID) IndexAsUint64

func (id StorageID) IndexAsUint64() uint64

func (StorageID) String

func (id StorageID) String() string

func (StorageID) ToRawBytes

func (id StorageID) ToRawBytes(b []byte) (int, error)

func (StorageID) Valid

func (id StorageID) Valid() error

type StorageIDError

type StorageIDError struct {
	// contains filtered or unexported fields
}

StorageIDError is returned when a storage ID can't be created or is invalid.

func (*StorageIDError) Error

func (e *StorageIDError) Error() string

type StorageIDStorable

type StorageIDStorable StorageID

func (StorageIDStorable) ByteSize

func (v StorageIDStorable) ByteSize() uint32

func (StorageIDStorable) ChildStorables

func (v StorageIDStorable) ChildStorables() []Storable

func (StorageIDStorable) Encode

func (v StorageIDStorable) Encode(enc *Encoder) error

Encode encodes StorageIDStorable as

cbor.Tag{
		Number:  cborTagStorageID,
		Content: byte(v),
}

func (StorageIDStorable) StoredValue

func (v StorageIDStorable) StoredValue(storage SlabStorage) (Value, error)

func (StorageIDStorable) String

func (v StorageIDStorable) String() string

type StorageIndex

type StorageIndex [8]byte

func (StorageIndex) Next

func (index StorageIndex) Next() StorageIndex

Next returns new StorageIndex with index+1 value. The caller is responsible for preventing overflow by checking if the index value is valid before calling this function.

type StorageOption

type StorageOption func(st *PersistentSlabStorage) *PersistentSlabStorage

type TypeInfo

type TypeInfo interface {
	Encode(*cbor.StreamEncoder) error
}

type TypeInfoComparator

type TypeInfoComparator func(TypeInfo, TypeInfo) bool

type TypeInfoDecoder

type TypeInfoDecoder func(
	decoder *cbor.StreamDecoder,
) (
	TypeInfo,
	error,
)

type UnreachableError

type UnreachableError struct {
	Stack []byte
}

UnreachableError is used by panic when unreachable code is reached. This is copied from Cadence.

func (UnreachableError) Error

func (e UnreachableError) Error() string

type UserError added in v0.6.0

type UserError struct {
	// contains filtered or unexported fields
}

func (*UserError) Error added in v0.6.0

func (e *UserError) Error() string

func (*UserError) Unwrap added in v0.6.0

func (e *UserError) Unwrap() error

type Value

type Value interface {
	Storable(SlabStorage, Address, uint64) (Storable, error)
}

type ValueComparator

type ValueComparator func(SlabStorage, Value, Storable) (bool, error)

Directories

Path Synopsis
cmd
