v0

package
v0.0.0-...-3805acb
Published: Dec 24, 2024 License: Apache-2.0 Imports: 20 Imported by: 0

Documentation

Index

Constants

const (

	// These also impact the circuit constraints (compile / setup time)
	MaxUncompressedBytes = 800_000   // defines the max size we can handle for a blob (uncompressed) input
	MaxUsableBytes       = 32 * 4096 // TODO @gbotrel confirm this value // defines the number of bytes available in a blob
)

Variables

This section is empty.

Functions

func DecodeBlockFromUncompressed

func DecodeBlockFromUncompressed(r *bytes.Reader) (encode.DecodedBlockData, error)

DecodeBlockFromUncompressed inverts EncodeBlockForCompression. It is primarily meant for testing and ensuring the encoding is bijective.
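
A minimal round-trip sketch along those lines (the import path and the empty test block are assumptions, not taken from this package's docs):

package main

import (
	"bytes"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"

	// Assumed import path for the documented package.
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

func main() {
	// A trivial block with no transactions, just to exercise the codec.
	block := types.NewBlockWithHeader(&types.Header{Number: big.NewInt(1)})

	var buf bytes.Buffer
	if err := v0.EncodeBlockForCompression(block, &buf); err != nil {
		log.Fatal(err)
	}

	decoded, err := v0.DecodeBlockFromUncompressed(bytes.NewReader(buf.Bytes()))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("decoded block data: %+v\n", decoded)
}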

func DecodeTxFromUncompressed

func DecodeTxFromUncompressed(r *bytes.Reader, from *common.Address) (types.TxData, error)

DecodeTxFromUncompressed puts all the transaction data into the output, except for the from address, which is written to the location referenced by the "from" argument.

func EncodeBlockForCompression

func EncodeBlockForCompression(block *types.Block, w io.Writer, encodingOptions ...encode.Option) error

EncodeBlockForCompression encodes a block for compression.

func EncodeTxForCompression

func EncodeTxForCompression(tx *types.Transaction, w io.Writer, encodingOptions ...encode.Option) error

EncodeTxForCompression encodes a transaction for compression. This code is from zk-evm-monorepo/prover/... but does not include the chainID.
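
A round-trip sketch pairing EncodeTxForCompression with DecodeTxFromUncompressed; the import paths, chain ID, and signed test transaction are assumptions:

package main

import (
	"bytes"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"

	// Assumed import path for the documented package.
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

func main() {
	// Build and sign a throwaway legacy transaction so the sender is recoverable.
	key, err := crypto.GenerateKey()
	if err != nil {
		log.Fatal(err)
	}
	to := common.HexToAddress("0x000000000000000000000000000000000000dead")
	tx, err := types.SignNewTx(key, types.NewEIP155Signer(big.NewInt(1)), &types.LegacyTx{
		Nonce:    7,
		To:       &to,
		Value:    big.NewInt(1),
		Gas:      21000,
		GasPrice: big.NewInt(1),
	})
	if err != nil {
		log.Fatal(err)
	}

	var buf bytes.Buffer
	if err := v0.EncodeTxForCompression(tx, &buf); err != nil {
		log.Fatal(err)
	}

	// The sender is not part of the returned TxData; it is written into `from`.
	var from common.Address
	txData, err := v0.DecodeTxFromUncompressed(bytes.NewReader(buf.Bytes()), &from)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("decoded nonce %d, sender %s\n", types.NewTx(txData).Nonce(), from.Hex())
}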

func PackAlign

func PackAlign(w io.Writer, a, b []byte) (n int64, err error)

PackAlign writes a and b to w, aligned to fr.Element (bls12-377) boundary. It returns the length of the data written to w.

func PackAlignSize

func PackAlignSize(a, b []byte) (n int)

PackAlignSize returns the size of the data when packed with PackAlign.

func ReadTxAsRlp

func ReadTxAsRlp(r *bytes.Reader) (fields []any, _type uint8, err error)

func UnpackAlign

func UnpackAlign(r []byte) ([]byte, error)

UnpackAlign unpacks r (packed with PackAlign) and returns the unpacked data.
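
A pack/unpack sketch (the v0 import path is an assumption; per the docs above, UnpackAlign should return the payload that PackAlign wrote):

package main

import (
	"bytes"
	"fmt"
	"log"

	// Assumed import path for the documented package.
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

func main() {
	a, b := []byte("hello, "), []byte("world")

	// PackAlignSize predicts how many bytes PackAlign will write for a and b.
	fmt.Println("packed size:", v0.PackAlignSize(a, b))

	var buf bytes.Buffer
	n, err := v0.PackAlign(&buf, a, b)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("bytes written:", n)

	// UnpackAlign recovers the unpacked payload (assumed here to be a followed by b).
	unpacked, err := v0.UnpackAlign(buf.Bytes())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("round trip ok:", bytes.Equal(unpacked, append(append([]byte{}, a...), b...)))
}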

Types

type BlobMaker

type BlobMaker struct {
	// contains filtered or unexported fields
}

BlobMaker is a blob maker for RLP-encoded blocks (see EIP-4844). It takes a batch of blocks as input (see StartNewBatch and Write) and compresses them into a "blob" (see Bytes).
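
A sketch of the typical flow, assuming the import paths below and a hypothetical dictionary file path ("compressor_dict.bin"); the constructor and methods are documented below:

package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"

	// Assumed import path for the documented package.
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

// blocksToPack is a placeholder for whatever supplies the blocks to compress.
func blocksToPack() []*types.Block { return nil }

func main() {
	// "compressor_dict.bin" is a hypothetical dictionary path.
	bm, err := v0.NewBlobMaker(v0.MaxUsableBytes, "compressor_dict.bin")
	if err != nil {
		log.Fatal(err)
	}

	bm.StartNewBatch()
	for _, block := range blocksToPack() {
		rlpBlock, err := rlp.EncodeToBytes(block)
		if err != nil {
			log.Fatal(err)
		}
		ok, err := bm.Write(rlpBlock, false)
		if err != nil {
			log.Fatal(err)
		}
		if !ok {
			break // the blob is full
		}
	}

	blob := bm.Bytes() // aliases bm's internal buffer; see the note on Bytes below
	fmt.Printf("built a blob of %d bytes (Len reports %d)\n", len(blob), bm.Len())
}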

func NewBlobMaker

func NewBlobMaker(dataLimit int, dictPath string) (*BlobMaker, error)

NewBlobMaker returns a new BlobMaker.

func (*BlobMaker) Bytes

func (bm *BlobMaker) Bytes() []byte

Bytes returns the compressed data. Note that it returns a slice of the internal buffer; it is the caller's responsibility to copy the data if needed.
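
A small sketch of the copy-before-reuse pattern this note implies (v0 is the assumed import alias used in the earlier sketches):

package blobutil // illustrative helper, not part of the documented package

import (
	// Assumed import path for the documented package.
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

// snapshotBlob copies the current blob out of bm so the result stays valid
// after bm is Reset or written to again.
func snapshotBlob(bm *v0.BlobMaker) []byte {
	return append([]byte{}, bm.Bytes()...)
}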

func (*BlobMaker) Clone

func (bm *BlobMaker) Clone() *BlobMaker

Clone returns an (almost) deep copy of the BlobMaker; it is used for test purposes.

func (*BlobMaker) Equals

func (bm *BlobMaker) Equals(other *BlobMaker) bool

Equals returns true if the two compressors are approximately equal; it is used for test purposes.

func (*BlobMaker) Len

func (bm *BlobMaker) Len() int

Len returns the length of the compressed data, which includes the header.

func (*BlobMaker) RawCompressedSize

func (bm *BlobMaker) RawCompressedSize(data []byte) (int, error)

RawCompressedSize compresses the (raw) input and returns the length of the compressed data. The returned length accounts for the "padding" used by the blob maker to fit the data in field elements. The input size must be less than 256kB. If an error occurs, it returns -1.

This function is thread-safe. Concurrent calls are allowed, but the other functions are not thread-safe.
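
Since RawCompressedSize is documented as safe for concurrent use, it can size several payloads in parallel against one shared BlobMaker; a sketch (import path and dictionary path are assumptions):

package main

import (
	"fmt"
	"log"
	"sync"

	// Assumed import path for the documented package.
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

func main() {
	bm, err := v0.NewBlobMaker(v0.MaxUsableBytes, "compressor_dict.bin") // hypothetical dictionary path
	if err != nil {
		log.Fatal(err)
	}

	payloads := [][]byte{[]byte("payload-a"), []byte("payload-b")}
	sizes := make([]int, len(payloads))

	var wg sync.WaitGroup
	for i, p := range payloads {
		wg.Add(1)
		go func(i int, p []byte) {
			defer wg.Done()
			n, err := bm.RawCompressedSize(p) // concurrent calls are allowed
			if err != nil {
				log.Println(err)
				return
			}
			sizes[i] = n
		}(i, p)
	}
	wg.Wait()
	fmt.Println("compressed sizes:", sizes)
}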

func (*BlobMaker) Reset

func (bm *BlobMaker) Reset()

Reset resets the BlobMaker to its initial state.

func (*BlobMaker) StartNewBatch

func (bm *BlobMaker) StartNewBatch()

StartNewBatch starts a new batch of blocks.

func (*BlobMaker) WorstCompressedBlockSize

func (bm *BlobMaker) WorstCompressedBlockSize(rlpBlock []byte) (bool, int, error)

WorstCompressedBlockSize returns the size of the given block as compressed by an "empty" blob maker. That is, with more context the blob maker could compress the block further, so this function returns the maximum (worst-case) compressed size.

The input is an RLP-encoded block. Returns the length of the compressed data, or -1 if an error occurred.

This function is thread-safe. Concurrent calls are allowed, but the other functions may not be thread-safe.
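
A sketch of using this worst-case estimate as a pre-filter (the helper, its limit parameter, and the v0 import alias are illustrative assumptions; the returned bool is ignored here since its meaning is not described above):

package blobutil // illustrative helper, not part of the documented package

import (
	// Assumed import path for the documented package.
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

// couldEverFit reports whether rlpBlock's worst-case compressed size is within
// limit, i.e. whether the block could fit even with no compression context.
func couldEverFit(bm *v0.BlobMaker, rlpBlock []byte, limit int) (bool, error) {
	_, worstCase, err := bm.WorstCompressedBlockSize(rlpBlock)
	if err != nil {
		return false, err
	}
	return worstCase <= limit, nil
}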

func (*BlobMaker) WorstCompressedTxSize

func (bm *BlobMaker) WorstCompressedTxSize(rlpTx []byte) (int, error)

WorstCompressedTxSize returns the size of the given transaction as compressed by an "empty" blob maker. That is, with more context the blob maker could compress the transaction further, so this function returns the maximum (worst-case) compressed size.

The input is an RLP-encoded transaction. Returns the length of the compressed data, or -1 if an error occurred.

This function is thread-safe. Concurrent calls are allowed, but the other functions may not be thread-safe.

func (*BlobMaker) Write

func (bm *BlobMaker) Write(rlpBlock []byte, forceReset bool, encodingOptions ...encode.Option) (ok bool, err error)

Write attempts to append the RLP-encoded block to the current batch. If forceReset is set, Write does NOT append the bytes, but still returns true if the block could have been appended.
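
A sketch of using forceReset to probe whether a block would fit, without mutating the blob (illustrative helper, with the assumed v0 import alias):

package blobutil // illustrative helper, not part of the documented package

import (
	// Assumed import path for the documented package.
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

// wouldFit reports whether rlpBlock could be appended to the current blob.
// With forceReset=true, Write does not actually append the bytes.
func wouldFit(bm *v0.BlobMaker, rlpBlock []byte) (bool, error) {
	return bm.Write(rlpBlock, true)
}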

func (*BlobMaker) Written

func (bm *BlobMaker) Written() int

type Header

type Header struct {
	DictChecksum [fr.Bytes]byte
	// contains filtered or unexported fields
}

A Header is a list of batches of block lengths:
len(Header) == number of batches in the blob
len(Header[i]) == number of blocks in batch i
Header[i][j] == length (in bytes) of the j-th block in batch i
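
A sketch that walks this layout using the accessors documented below (the v0 import alias is assumed):

package blobutil // illustrative helper, not part of the documented package

import (
	"fmt"

	// Assumed import path for the documented package.
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

// printLayout prints the per-block byte lengths recorded in the header.
func printLayout(h *v0.Header) {
	for i := 0; i < h.NbBatches(); i++ {
		for j := 0; j < h.NbBlocksInBatch(i); j++ {
			fmt.Printf("batch %d, block %d: %d bytes\n", i, j, h.BlockLength(i, j))
		}
	}
}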

func DecompressBlob

func DecompressBlob(b []byte, dictStore dictionary.Store) (blobHeader *Header, rawBlocks []byte, blocks [][]byte, err error)

DecompressBlob decompresses a blob and returns the header and the blocks as they were compressed. rawBlocks is the raw payload of the blob, delivered in packed format (@TODO bad idea, fix).
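
A decompression sketch; the dictionary.Store is taken as a parameter because constructing it belongs to the separate dictionary package, and both import paths are assumptions:

package blobutil // illustrative helper, not part of the documented package

import (
	"fmt"

	// Assumed import paths for the documented package and its dictionary store.
	"github.com/consensys/linea-monorepo/prover/lib/compressor/blob/dictionary"
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

// dumpBlob decompresses blob and prints a summary of its contents.
func dumpBlob(blob []byte, store dictionary.Store) error {
	header, _, blocks, err := v0.DecompressBlob(blob, store)
	if err != nil {
		return err
	}
	fmt.Printf("%d batches, %d blocks in total\n", header.NbBatches(), len(blocks))
	return nil
}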

func (*Header) BlockLength

func (s *Header) BlockLength(i, j int) int

func (*Header) ByteSize

func (s *Header) ByteSize() int

func (*Header) ByteSizePacked

func (s *Header) ByteSizePacked() int

func (*Header) CheckEquality

func (s *Header) CheckEquality(other *Header) error

CheckEquality is similar to Equals but returns a description of the mismatch; it returns nil if the objects are equal.

func (*Header) Equals

func (s *Header) Equals(other *Header) bool

func (*Header) NbBatches

func (s *Header) NbBatches() int

func (*Header) NbBlocksInBatch

func (s *Header) NbBlocksInBatch(i int) int

func (*Header) ReadFrom

func (s *Header) ReadFrom(r io.Reader) (int64, error)

ReadFrom reads the header table from r.

func (*Header) WriteTo

func (s *Header) WriteTo(w io.Writer) (int64, error)

WriteTo writes the header table to w.
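
A serialization round-trip sketch for the header table (illustrative, using the assumed v0 import alias; if WriteTo and ReadFrom are exact inverses, CheckEquality should return nil):

package blobutil // illustrative helper, not part of the documented package

import (
	"bytes"

	// Assumed import path for the documented package.
	v0 "github.com/consensys/linea-monorepo/prover/lib/compressor/blob/v0"
)

// roundTripHeader serializes h, reads it back, and reports any mismatch.
func roundTripHeader(h *v0.Header) error {
	var buf bytes.Buffer
	if _, err := h.WriteTo(&buf); err != nil {
		return err
	}
	var got v0.Header
	if _, err := got.ReadFrom(&buf); err != nil {
		return err
	}
	return got.CheckEquality(h)
}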

Directories

Path Synopsis
lzss/internal/suffixarray
Package suffixarray implements substring search in logarithmic time using an in-memory suffix array.
