Documentation ¶
Index ¶
- Constants
- Variables
- func CreateCommitment(blob *Blob) ([]byte, error)
- func CreateCommitments(blobs []*Blob) ([][]byte, error)
- func HashFromByteSlices(items [][]byte) []byte
- func HashFromByteSlicesIterative(input [][]byte) []byte
- func New() hash.Hash
- func NewTruncated() hash.Hash
- func SplitBlobs(blobs ...Blob) ([]share.Share, error)
- func Sum(bz []byte) []byte
- func SumTruncated(bz []byte) []byte
- type Blob
- type Commitment
- type Proof
Constants ¶
const (
	Size      = sha256.Size
	BlockSize = sha256.BlockSize
)
const (
	// NMTIgnoreMaxNamespace is the currently used value for the
	// IgnoreMaxNamespace option in NMT. IgnoreMaxNamespace defines whether
	// the largest possible Namespace MAX_NID should be 'ignored'. If set to
	// true, this allows for shorter proofs in particular use-cases.
	NMTIgnoreMaxNamespace = true
)
const (
TruncatedSize = 20
)
Variables ¶
var (
	ErrBlobNotFound = errors.New("blob: not found")
	ErrInvalidProof = errors.New("blob: invalid proof")
)
Functions ¶
func CreateCommitment ¶ added in v0.1.2
func CreateCommitment(blob *Blob) ([]byte, error)

CreateCommitment generates the share commitment for a given blob. See [data square layout rationale] and [blob share commitment rules].

[data square layout rationale]: ../../specs/src/specs/data_square_layout.md
[blob share commitment rules]: ../../specs/src/specs/data_square_layout.md#blob-share-commitment-rules

NOTE: We assume Blob.Namespace contains both the id and version bytes.
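The actual commitment follows the blob share commitment rules linked above. As a rough illustration only, the sketch below splits a blob's data into fixed-size shares and folds their hashes into a single root; the 512-byte share size and the plain pairwise SHA-256 tree are assumptions for this sketch, not the real Celestia layout or the namespaced Merkle tree the library uses.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// shareSize is an assumed share size for illustration only.
const shareSize = 512

// splitIntoShares chunks data into fixed-size shares,
// zero-padding the final share.
func splitIntoShares(data []byte) [][]byte {
	var shares [][]byte
	for len(data) > 0 {
		n := shareSize
		if len(data) < n {
			n = len(data)
		}
		share := make([]byte, shareSize)
		copy(share, data[:n])
		shares = append(shares, share)
		data = data[n:]
	}
	return shares
}

// simpleRoot hashes each share, then repeatedly pairs adjacent
// hashes; an odd trailing node is carried up unchanged.
func simpleRoot(shares [][]byte) []byte {
	level := make([][]byte, 0, len(shares))
	for _, s := range shares {
		h := sha256.Sum256(s)
		level = append(level, h[:])
	}
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i]) // odd node carries up
				continue
			}
			h := sha256.Sum256(append(append([]byte{}, level[i]...), level[i+1]...))
			next = append(next, h[:])
		}
		level = next
	}
	return level[0]
}

func main() {
	data := make([]byte, 1500) // spans three shares
	shares := splitIntoShares(data)
	fmt.Printf("%d shares, root %x\n", len(shares), simpleRoot(shares)[:4])
}
```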
func CreateCommitments ¶ added in v0.1.2

func CreateCommitments(blobs []*Blob) ([][]byte, error)
func HashFromByteSlices ¶ added in v0.1.2

func HashFromByteSlices(items [][]byte) []byte

HashFromByteSlices computes a Merkle tree where the leaves are the byte slices, in the provided order. It follows RFC 6962.
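As a self-contained sketch of the RFC 6962 tree shape described here (not the library's code), the function below hashes leaves with a 0x00 prefix, interior nodes with a 0x01 prefix, and recurses on the largest power of two less than the item count:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// leafHash and innerHash use the RFC 6962 domain-separation
// prefixes: 0x00 for leaves, 0x01 for interior nodes.
func leafHash(leaf []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, leaf...))
	return h[:]
}

func innerHash(left, right []byte) []byte {
	buf := append([]byte{0x01}, left...)
	h := sha256.Sum256(append(buf, right...))
	return h[:]
}

// splitPoint returns the largest power of two strictly less than n.
func splitPoint(n int) int {
	k := 1
	for k*2 < n {
		k *= 2
	}
	return k
}

// hashFromByteSlices computes the RFC 6962 Merkle root of the items
// in the provided order.
func hashFromByteSlices(items [][]byte) []byte {
	switch n := len(items); n {
	case 0:
		h := sha256.Sum256(nil) // empty tree: hash of the empty string
		return h[:]
	case 1:
		return leafHash(items[0])
	default:
		k := splitPoint(n)
		return innerHash(hashFromByteSlices(items[:k]), hashFromByteSlices(items[k:]))
	}
}

func main() {
	root := hashFromByteSlices([][]byte{[]byte("a"), []byte("b"), []byte("c")})
	fmt.Printf("%x\n", root)
}
```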
func HashFromByteSlicesIterative ¶ added in v0.1.2

func HashFromByteSlicesIterative(input [][]byte) []byte

HashFromByteSlicesIterative is an iterative alternative to HashFromByteSlices, motivated by potential performance improvements. (#2611) suggested that an iterative version of HashFromByteSlices would be faster, presumably because some overhead accumulates from stack frames and function calls. Additionally, a recursive algorithm risks hitting the stack limit and causing a stack overflow should the tree be too large.
Provided here is an iterative alternative, a test to assert correctness and a benchmark. On the performance side, there appears to be no overall difference:
BenchmarkHashAlternatives/recursive-4    20000    77677 ns/op
BenchmarkHashAlternatives/iterative-4    20000    76802 ns/op
On the surface it might seem that the additional overhead is due to the different allocation patterns of the implementations. The recursive version uses a single [][]byte slice, which it re-slices at each level of the tree. The iterative version copies the [][]byte once within the function and then rewrites sub-slices of that array at each level of the tree.
Experimenting by modifying the code to simply calculate the hash and not store the result shows little to no difference in performance.
These preliminary results suggest:

1. The performance of HashFromByteSlices is pretty good.
2. Go has low overhead for recursive functions.
3. The performance of the HashFromByteSlices routine is dominated by the actual hashing of data.
Although this work is in no way exhaustive, point #3 suggests that optimization of this routine would need to take an alternative approach to make significant improvements on the current performance.
Finally, considering that the recursive implementation is easier to read, it might not be worthwhile to switch to a less intuitive implementation for so little benefit.
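For comparison, here is a self-contained sketch of the iterative, level-by-level approach discussed above (not the library's implementation): hash the leaves once, then repeatedly pair adjacent nodes, carrying an odd trailing node up unchanged. For the split point the recursive version uses (largest power of two less than n), this pairing produces the same tree shape.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// RFC 6962 domain-separation prefixes.
func leafHash(leaf []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, leaf...))
	return h[:]
}

func innerHash(left, right []byte) []byte {
	buf := append([]byte{0x01}, left...)
	h := sha256.Sum256(append(buf, right...))
	return h[:]
}

// iterativeRoot hashes the leaves, then collapses each level by
// pairing adjacent nodes; an odd trailing node is promoted as-is.
func iterativeRoot(items [][]byte) []byte {
	if len(items) == 0 {
		h := sha256.Sum256(nil) // empty tree: hash of the empty string
		return h[:]
	}
	level := make([][]byte, len(items))
	for i, it := range items {
		level[i] = leafHash(it)
	}
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i])
			} else {
				next = append(next, innerHash(level[i], level[i+1]))
			}
		}
		level = next
	}
	return level[0]
}

func main() {
	root := iterativeRoot([][]byte{[]byte("a"), []byte("b"), []byte("c")})
	fmt.Printf("%x\n", root)
}
```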
func NewTruncated ¶ added in v0.1.2

func NewTruncated() hash.Hash

NewTruncated returns a new hash.Hash computing a SHA-256 checksum truncated to TruncatedSize bytes.
func SplitBlobs ¶ added in v0.1.2

func SplitBlobs(blobs ...Blob) ([]share.Share, error)

SplitBlobs splits the provided blobs into shares.
func SumTruncated ¶ added in v0.1.2

func SumTruncated(bz []byte) []byte

SumTruncated returns the first TruncatedSize (20) bytes of the SHA-256 checksum of bz.
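A minimal sketch of the Sum/SumTruncated pair using only the standard library; the constant matches TruncatedSize above:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const TruncatedSize = 20

// Sum returns the full 32-byte SHA-256 checksum of bz.
func Sum(bz []byte) []byte {
	h := sha256.Sum256(bz)
	return h[:]
}

// SumTruncated returns the first TruncatedSize bytes of the checksum.
func SumTruncated(bz []byte) []byte {
	h := sha256.Sum256(bz)
	return h[:TruncatedSize]
}

func main() {
	fmt.Println(len(Sum([]byte("hello"))))          // 32
	fmt.Println(len(SumTruncated([]byte("hello")))) // 20
}
```

The truncated form is simply a prefix of the full digest, so the two always agree on the first 20 bytes.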
Types ¶
type Blob ¶
type Blob struct {
	// NOTE: Namespace _must_ include both version and id bytes
	Namespace        []byte `protobuf:"bytes,1,opt,name=namespace,proto3" json:"namespace,omitempty"`
	Data             []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
	NamespaceVersion uint32 `protobuf:"varint,4,opt,name=namespace_version,json=namespaceVersion,proto3" json:"namespace_version,omitempty"`
	Commitment       []byte `protobuf:"bytes,5,opt,name=commitment,proto3" json:"commitment,omitempty"`
}
Blob represents any application-specific binary data that anyone can submit to Celestia.
func NewBlobV0 ¶
NewBlobV0 constructs a new blob from the provided Namespace and data. The blob will be formatted as v0 shares.
func (*Blob) MarshalJSON ¶
func (*Blob) UnmarshalJSON ¶
type Commitment ¶
type Commitment []byte
Commitment is the Merkle root of the subtree built from the shares of the Blob. It is computed by splitting the blob into shares and building the Merkle subtree, and it is included after Submit.
func (Commitment) Equal ¶
func (com Commitment) Equal(c Commitment) bool
Equal reports whether two commitments are equal.
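Since a Commitment is just raw root bytes, a sketch of the documented behavior (the library's actual method may differ) is a byte-wise comparison with bytes.Equal:

```go
package main

import (
	"bytes"
	"fmt"
)

// Commitment mirrors the documented type: a Merkle root as raw bytes.
type Commitment []byte

// Equal reports whether two commitments hold the same bytes.
func (com Commitment) Equal(c Commitment) bool {
	return bytes.Equal(com, c)
}

func main() {
	a := Commitment{0x01, 0x02}
	b := Commitment{0x01, 0x02}
	fmt.Println(a.Equal(b), a.Equal(Commitment{0x03})) // true false
}
```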
func (Commitment) String ¶
func (com Commitment) String() string