Documentation ¶
Overview ¶
Package bmt provides a binary Merkle tree implementation used for swarm chunk hashing.
Package bmt is a simple non-concurrent reference implementation of a segment-based binary Merkle tree hash over an arbitrary but fixed maximum chunk size, where the segment size equals the hash size.
This implementation does not take advantage of any parallelism and uses far more memory than necessary, but it is easy to see that it is correct. It can be used for generating test cases for optimized implementations. There are extra checks on reference hasher correctness in bmt_test.go:
- TestRefHasher
- the testBMTHasherCorrectness function
Index ¶
- Constants
- Variables
- func LengthToSpan(length int) []byte
- type BaseHasherFunc
- type Hasher
- func (h *Hasher) BlockSize() int
- func (h *Hasher) Branches() int
- func (h *Hasher) GetCursor() int
- func (h *Hasher) GetHasher() hash.Hash
- func (h *Hasher) GetTree() *tree
- func (h *Hasher) GetZeroHash() []byte
- func (h *Hasher) ReleaseTree()
- func (h *Hasher) Reset()
- func (h *Hasher) SectionSize() int
- func (h *Hasher) SetCursor(c int)
- func (h *Hasher) SetSpan(length int)
- func (h *Hasher) SetSpanBytes(b []byte)
- func (h *Hasher) SetWriter(_ file.SectionWriterFunc) file.SectionWriter
- func (h *Hasher) Size() int
- func (h *Hasher) Sum(b []byte) (s []byte)
- func (h *Hasher) Write(b []byte) (int, error)
- func (h *Hasher) WriteSection(i int, section []byte, double bool, final bool)
- type RefHasher
- type TreePool
Constants ¶
const (
	// PoolSize is the maximum number of bmt trees used by the hashers, i.e.,
	// the maximum number of concurrent BMT hashing operations performed by the same hasher
	PoolSize = 8
)
Variables ¶
var (
ZeroSpan = make([]byte, 8)
)
Functions ¶
func LengthToSpan ¶
LengthToSpan creates the binary data span size representation. It is required for calculating the BMT hash.
Types ¶
type BaseHasherFunc ¶
BaseHasherFunc is a hash.Hash constructor function used for the base hash of the BMT. It is implemented by the Keccak256 SHA3 hash, sha3.NewLegacyKeccak256.
type Hasher ¶
type Hasher struct {
// contains filtered or unexported fields
}
Hasher is a reusable hasher for fixed maximum size chunks representing a BMT:
- implements the hash.Hash interface
- reuses a pool of trees for amortised memory allocation and resource control
- supports order-agnostic concurrent segment writes and section (double segment) writes as well as sequential read and write
- the same hasher instance must not be called concurrently on more than one chunk
- the same hasher instance is synchronously reusable
- Sum gives the tree back to the pool and is guaranteed to leave the tree and itself in a state reusable for hashing a new chunk
- generates and verifies segment inclusion proofs (TODO:)
func New ¶
New creates a reusable BMT Hasher that pulls a new tree from a resource pool for hashing each chunk
func (*Hasher) GetTree ¶
func (h *Hasher) GetTree() *tree
GetTree gets the underlying tree in use by the Hasher
func (*Hasher) GetZeroHash ¶
GetZeroHash returns the zero hash of the full depth of the Hasher instance
func (*Hasher) ReleaseTree ¶
func (h *Hasher) ReleaseTree()
ReleaseTree releases the underlying tree in use by the Hasher
func (*Hasher) SectionSize ¶
SectionSize implements file.SectionWriter
func (*Hasher) SetSpanBytes ¶
SetSpanBytes implements storage.SwarmHash
func (*Hasher) SetWriter ¶
func (h *Hasher) SetWriter(_ file.SectionWriterFunc) file.SectionWriter
SetWriter implements file.SectionWriter
func (*Hasher) Sum ¶
Sum returns the BMT root hash of the buffer. Sum presupposes sequential synchronous writes (io.Writer interface). Implements hash.Hash in file.SectionWriter.
func (*Hasher) Write ¶
Write sequentially adds to the buffer to be hashed; with every full segment it calls WriteSection in a goroutine. Implements hash.Hash and file.SectionWriter.
func (*Hasher) WriteSection ¶
WriteSection writes data to the data level in the section at index i. Setting final to true tells the hasher that no further data will be written and prepares the data for h.Sum(). TODO: remove double as argument, push responsibility for handling data context to caller.
type RefHasher ¶
type RefHasher struct {
// contains filtered or unexported fields
}
RefHasher is the non-optimized easy-to-read reference implementation of BMT
func NewRefHasher ¶
func NewRefHasher(hasher BaseHasherFunc, count int) *RefHasher
NewRefHasher returns a new RefHasher
type TreePool ¶
type TreePool struct {
	SegmentSize  int // size of leaf segments, stipulated to be = hash size
	SegmentCount int // the number of segments on the base level of the BMT
	Capacity     int // pool capacity, controls concurrency
	Depth        int // depth of the bmt trees = int(log2(segmentCount))+1
	Size         int // the total length of the data (count * size)
	// contains filtered or unexported fields
}
TreePool provides a pool of trees used as resources by the BMT Hasher. A tree popped from the pool is guaranteed to have a clean state ready for hashing a new chunk.
func NewTreePool ¶
func NewTreePool(hasher BaseHasherFunc, segmentCount, capacity int) *TreePool
NewTreePool creates a tree pool with hasher, segment count and capacity. On Hasher.getTree it reuses free trees or creates a new one if capacity is not reached.