blob

package blob v0.3.0

Note: this package is not in the latest version of its module.
Published: Oct 17, 2023 License: Apache-2.0 Imports: 10 Imported by: 0

Documentation

Index

Constants

const (
	Size      = sha256.Size
	BlockSize = sha256.BlockSize
)
const (
	// NMTIgnoreMaxNamespace is the currently used value for the IgnoreMaxNamespace option in the NMT.
	// IgnoreMaxNamespace defines whether the largest possible namespace MAX_NID should be 'ignored'.
	// If set to true, this allows for shorter proofs in particular use cases.
	NMTIgnoreMaxNamespace = true
)
const (
	TruncatedSize = 20
)

Variables

var (
	ErrBlobNotFound = errors.New("blob: not found")
	ErrInvalidProof = errors.New("blob: invalid proof")
)

Functions

func CreateCommitment added in v0.1.2

func CreateCommitment(blob *Blob) ([]byte, error)

CreateCommitment generates the share commitment for a given blob. See [data square layout rationale] and [blob share commitment rules].

[data square layout rationale]: ../../specs/src/specs/data_square_layout.md
[blob share commitment rules]: ../../specs/src/specs/data_square_layout.md#blob-share-commitment-rules

NOTE: We assume Blob.Namespace contains both id and version bytes.

func CreateCommitments added in v0.1.2

func CreateCommitments(blobs []*Blob) ([][]byte, error)

func HashFromByteSlices added in v0.1.2

func HashFromByteSlices(items [][]byte) []byte

HashFromByteSlices computes a Merkle tree where the leaves are the byte slices, in the provided order. It follows RFC 6962.

func HashFromByteSlicesIterative added in v0.1.2

func HashFromByteSlicesIterative(input [][]byte) []byte

HashFromByteSlicesIterative is an iterative alternative to HashFromByteSlices, motivated by potential performance improvements. (#2611) suggested that an iterative version of HashFromByteSlices would be faster, presumably because we can envision some overhead accumulating from stack frames and function calls. Additionally, a recursive algorithm risks hitting the stack limit and causing a stack overflow should the tree be too large.

Provided here is an iterative alternative, a test to assert correctness and a benchmark. On the performance side, there appears to be no overall difference:

BenchmarkHashAlternatives/recursive-4    20000    77677 ns/op
BenchmarkHashAlternatives/iterative-4    20000    76802 ns/op

On the surface it might seem that the additional overhead is due to the different allocation patterns of the implementations. The recursive version uses a single [][]byte slice which it then re-slices at each level of the tree. The iterative version reproduces [][]byte once within the function and then rewrites sub-slices of that array at each level of the tree.

Experimenting by modifying the code to simply calculate the hash and not store the result shows little to no difference in performance.

These preliminary results suggest:

  1. The performance of HashFromByteSlices is pretty good
  2. Go has low overhead for recursive functions
  3. The performance of the HashFromByteSlices routine is dominated by the actual hashing of data

Although this work is in no way exhaustive, point #3 suggests that optimization of this routine would need to take an alternative approach to make significant improvements on the current performance.

Finally, considering that the recursive implementation is easier to read, it might not be worthwhile to switch to a less intuitive implementation for so little benefit.

func New added in v0.1.2

func New() hash.Hash

New returns a new SHA-256 hash.Hash.

func NewTruncated added in v0.1.2

func NewTruncated() hash.Hash

NewTruncated returns a new hash.Hash whose digest is truncated to TruncatedSize (20) bytes.

func SplitBlobs added in v0.1.2

func SplitBlobs(blobs ...Blob) ([]share.Share, error)

SplitBlobs splits the provided blobs into shares.

func Sum added in v0.1.2

func Sum(bz []byte) []byte

Sum returns the SHA-256 checksum of bz.

func SumTruncated added in v0.1.2

func SumTruncated(bz []byte) []byte

SumTruncated returns the first TruncatedSize (20) bytes of the SHA-256 checksum of bz.

Types

type Blob

type Blob struct {
	// NOTE: Namespace _must_ include both version and id bytes
	Namespace        []byte `protobuf:"bytes,1,opt,name=namespace,proto3" json:"namespace,omitempty"`
	Data             []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
	ShareVersion     uint32 `protobuf:"varint,3,opt,name=share_version,json=shareVersion,proto3" json:"share_version,omitempty"`
	NamespaceVersion uint32 `protobuf:"varint,4,opt,name=namespace_version,json=namespaceVersion,proto3" json:"namespace_version,omitempty"`
	Commitment       []byte `protobuf:"bytes,5,opt,name=commitment,proto3" json:"commitment,omitempty"`
}

Blob represents any application-specific binary data that anyone can submit to Celestia.

func NewBlob

func NewBlob(shareVersion uint8, namespace share.Namespace, data []byte) (*Blob, error)

NewBlob constructs a new blob from the provided Namespace, data and share version.

func NewBlobV0

func NewBlobV0(namespace share.Namespace, data []byte) (*Blob, error)

NewBlobV0 constructs a new blob from the provided Namespace and data. The blob will be formatted as v0 shares.

func (*Blob) MarshalJSON

func (b *Blob) MarshalJSON() ([]byte, error)

func (*Blob) UnmarshalJSON

func (b *Blob) UnmarshalJSON(data []byte) error

type Commitment

type Commitment []byte

Commitment is the Merkle root of the subtree built from the Blob's shares. It is computed by splitting the blob into shares and building the Merkle subtree, and it is included after Submit.

func (Commitment) Equal

func (com Commitment) Equal(c Commitment) bool

Equal reports whether two commitments are equal.

func (Commitment) String

func (com Commitment) String() string

type Proof

type Proof []*nmt.Proof

Proof is a collection of nmt.Proofs that verifies the inclusion of the data.

func (Proof) Len

func (p Proof) Len() int
