package file

v2.3.0-rc5 Latest
Warning

This package is not in the latest version of its module.

Published: Nov 5, 2024 License: BSD-3-Clause Imports: 8 Imported by: 3

Documentation

Overview

Package file provides interfaces for file-oriented operations.

Index

Constants

This section is empty.

Variables

var Spans []int64

Functions

func ChunkAddresses

func ChunkAddresses(data []byte, parities, reflen int) (addrs []swarm.Address, shardCnt int)

ChunkAddresses returns the data shard and parity addresses of an intermediate chunk. It assumes data has been truncated by ChunkPayloadSize.

func ChunkPayloadSize

func ChunkPayloadSize(data []byte) (int, error)

ChunkPayloadSize returns the effective byte length of an intermediate chunk. It assumes data is always chunk-sized (without the span).

func GenerateSpanSizes

func GenerateSpanSizes(levels, branches int) []int64

GenerateSpanSizes generates a lookup table of the maximum span length per level, each represented by one SectionSize() of data.

func JoinReadAll

func JoinReadAll(ctx context.Context, j Joiner, outFile io.Writer) (int64, error)

JoinReadAll reads all output from the provided Joiner.

func Levels

func Levels(length int64, sectionSize, branches int) int

Levels calculates the last level index that a particular data section count results in. The returned level is the level of the root hash.
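The calculation can be sketched with the standard library alone (a hypothetical rendering of the documented behaviour, not the package's code): count the data sections, then repeatedly divide by the branching factor until a single root reference remains, counting the levels climbed.

```go
package main

import "fmt"

// levels is an illustrative stand-in for the documented Levels function:
// it returns the level index of the root hash for the given data length.
func levels(length int64, sectionSize, branches int) int {
	if length == 0 {
		return 0
	}
	// Number of sections, rounding the last partial section up.
	sections := (length + int64(sectionSize) - 1) / int64(sectionSize)
	level := 0
	for sections > 1 {
		// Each level up, one reference covers `branches` references below.
		sections = (sections + int64(branches) - 1) / int64(branches)
		level++
	}
	return level
}

func main() {
	// One 4096-byte chunk of 32-byte sections with branching factor 128
	// collapses to the root in one step; 128 such chunks need two.
	fmt.Println(levels(4096, 32, 128), levels(4096*128, 32, 128)) // 1 2
}
```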

func NewAbortError

func NewAbortError(err error) error

NewAbortError creates a new AbortError instance.

func NewChunkPipe

func NewChunkPipe() io.ReadWriteCloser

NewChunkPipe creates a new ChunkPipe.

func NewHashError

func NewHashError(err error) error

NewHashError creates a new HashError instance.

func NewSimpleReadCloser

func NewSimpleReadCloser(buffer []byte) io.ReadCloser

NewSimpleReadCloser creates a new simpleReadCloser.

func ReferenceCount

func ReferenceCount(span uint64, level redundancy.Level, encrytedChunk bool) (int, int)

ReferenceCount brute-forces the data shard count of a subtree, from which the parity count is identified as well. It assumes span > swarm.chunkSize and returns the data and parity shard counts.

func SplitWriteAll

func SplitWriteAll(ctx context.Context, s Splitter, r io.Reader, l int64, toEncrypt bool) (swarm.Address, error)

SplitWriteAll writes all input from the provided reader to the provided splitter.

Types

type AbortError

type AbortError struct {
	// contains filtered or unexported fields
}

AbortError should be returned whenever a file operation is terminated before it has completed.

func (*AbortError) Error

func (e *AbortError) Error() string

Error implements the standard Go error interface.

func (*AbortError) Unwrap

func (e *AbortError) Unwrap() error

Unwrap returns the underlying error.

type ChunkPipe

type ChunkPipe struct {
	io.ReadCloser
	// contains filtered or unexported fields
}

ChunkPipe ensures that only the last read is smaller than the chunk size, regardless of size of individual writes.

func (*ChunkPipe) Close

func (c *ChunkPipe) Close() error

Close implements io.Closer

func (*ChunkPipe) Read

func (c *ChunkPipe) Read(b []byte) (int, error)

Read implements io.Reader

func (*ChunkPipe) Write

func (c *ChunkPipe) Write(b []byte) (int, error)

Write implements io.Writer

type HashError

type HashError struct {
	// contains filtered or unexported fields
}

HashError should be returned whenever a file operation is terminated before it has completed.

func (*HashError) Error

func (e *HashError) Error() string

Error implements the standard Go error interface.

func (*HashError) Unwrap

func (e *HashError) Unwrap() error

Unwrap returns the underlying error.

type Joiner

type Joiner interface {
	Reader
	// IterateChunkAddresses is used to iterate over chunks addresses of some root hash.
	IterateChunkAddresses(swarm.AddressIterFunc) error
	// Size returns the span of the hash trie represented by the joiner's root hash.
	Size() int64
}

Joiner provides the inverse functionality of the Splitter.

type LoadSaver

type LoadSaver interface {
	Loader
	Saver
}

type Loader

type Loader interface {
	// Load a reference in byte slice representation and return all content associated with the reference.
	Load(context.Context, []byte) ([]byte, error)
}

type Reader

type Reader interface {
	io.ReadSeeker
	io.ReaderAt
}

type Saver

type Saver interface {
	// Save an arbitrary byte slice and return the reference byte slice representation.
	Save(context.Context, []byte) ([]byte, error)
}

type Splitter

type Splitter interface {
	Split(ctx context.Context, dataIn io.ReadCloser, dataLength int64, toEncrypt bool) (addr swarm.Address, err error)
}

Splitter starts a new file splitting job.

Data is read from the provided reader. If the dataLength parameter is 0, data is read until io.EOF is encountered. When EOF is received and splitting is done, the resulting Swarm Address is returned.

Directories

Path	Synopsis
joiner	Package joiner provides implementations of the file.Joiner interface.
loadsave	Package loadsave provides lightweight persistence abstraction for manifest operations.
pipeline	Package pipeline provides functionality for hashing pipelines needed to create different flavors of merkelised representations of arbitrary data.
bmt
splitter	Package splitter provides implementations of the file.Splitter interface.
