Documentation ¶
Overview ¶
Package file provides interfaces for file-oriented operations.
Index ¶
- Variables
- func ChunkAddresses(data []byte, parities, reflen int) (addrs []swarm.Address, shardCnt int)
- func ChunkPayloadSize(data []byte) (int, error)
- func GenerateSpanSizes(levels, branches int) []int64
- func JoinReadAll(ctx context.Context, j Joiner, outFile io.Writer) (int64, error)
- func Levels(length int64, sectionSize, branches int) int
- func NewAbortError(err error) error
- func NewChunkPipe() io.ReadWriteCloser
- func NewHashError(err error) error
- func NewSimpleReadCloser(buffer []byte) io.ReadCloser
- func ReferenceCount(span uint64, level redundancy.Level, encrytedChunk bool) (int, int)
- func SplitWriteAll(ctx context.Context, s Splitter, r io.Reader, l int64, toEncrypt bool) (swarm.Address, error)
- type AbortError
- type ChunkPipe
- type HashError
- type Joiner
- type LoadSaver
- type Loader
- type Reader
- type Saver
- type Splitter
Constants ¶
This section is empty.
Variables ¶
var Spans []int64
Functions ¶
func ChunkAddresses ¶
func ChunkAddresses(data []byte, parities, reflen int) (addrs []swarm.Address, shardCnt int)
ChunkAddresses returns the addresses of the data shards and parities of an intermediate chunk. It assumes data is truncated by ChunkPayloadSize.
func ChunkPayloadSize ¶
func ChunkPayloadSize(data []byte) (int, error)
ChunkPayloadSize returns the effective byte length of an intermediate chunk. It assumes the data is always of chunk size (without the span).
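For illustration, a minimal sketch combining the two functions to list the references held by an intermediate chunk. The parseIntermediate helper and its chunkData, parities, and refLen inputs are assumptions for the example, not documented API; it assumes the usual imports (fmt, the swarm package, and this package as file):
func parseIntermediate(chunkData []byte, parities, refLen int) error {
	payload := chunkData[swarm.SpanSize:] // drop the 8-byte span prefix
	effective, err := file.ChunkPayloadSize(payload)
	if err != nil {
		return err
	}
	addrs, shardCnt := file.ChunkAddresses(payload[:effective], parities, refLen)
	for i, addr := range addrs {
		fmt.Printf("ref %d (data shard: %v): %s\n", i, i < shardCnt, addr)
	}
	return nil
}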
func GenerateSpanSizes ¶
func GenerateSpanSizes(levels, branches int) []int64
GenerateSpanSizes generates a lookup table of the maximum span length per level, where each level is represented by one SectionSize() of data.
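A quick sketch; the choice of 9 levels is arbitrary for the example, and swarm.Branches is Swarm's default branching factor:
spans := file.GenerateSpanSizes(9, swarm.Branches)
for level, span := range spans {
	fmt.Printf("level %d: max span %d\n", level, span)
}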
func JoinReadAll ¶
func JoinReadAll(ctx context.Context, j Joiner, outFile io.Writer) (int64, error)
JoinReadAll reads all output from the provided Joiner.
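A minimal sketch, assuming j is a Joiner for some root hash (obtained, for example, from the joiner subpackage) and ctx is the caller's context:
var buf bytes.Buffer
n, err := file.JoinReadAll(ctx, j, &buf)
if err != nil {
	return err
}
fmt.Printf("joined %d bytes\n", n)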
func Levels ¶
func Levels(length int64, sectionSize, branches int) int
Levels calculates the last level index that a particular data section count will result in. The returned level is the level of the root hash.
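For example, with 32-byte sections and 128 branches (one chunk then spans 4096 bytes of data), the call shapes look like this; the inputs are illustrative:
fmt.Println(file.Levels(4096, 32, 128))     // data that fits a single chunk
fmt.Println(file.Levels(4096*128, 32, 128)) // data that fills a whole intermediate level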
func NewAbortError ¶
func NewAbortError(err error) error
NewAbortError creates a new AbortError instance.
func NewChunkPipe ¶
func NewChunkPipe() io.ReadWriteCloser
NewChunkPipe creates a new ChunkPipe.
func NewHashError ¶
func NewHashError(err error) error
NewHashError creates a new HashError instance.
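For illustration, a sketch of how these constructors pair with errors.As on the consuming side; copyChunks is a hypothetical operation that may be interrupted:
// producer side: wrap the underlying cause before returning
if err := copyChunks(ctx); err != nil {
	return file.NewAbortError(err)
}

// consumer side: detect the abort by its concrete type
var abortErr *file.AbortError
if errors.As(err, &abortErr) {
	// the file operation was terminated before completion
}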
func NewSimpleReadCloser ¶
func NewSimpleReadCloser(buffer []byte) io.ReadCloser
NewSimpleReadCloser creates a new simpleReadCloser.
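For example, adapting an in-memory buffer to an io.ReadCloser parameter:
rc := file.NewSimpleReadCloser([]byte("hello"))
defer rc.Close()
data, err := io.ReadAll(rc)
if err != nil {
	return err
}
fmt.Printf("%s\n", data)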
func ReferenceCount ¶
func ReferenceCount(span uint64, level redundancy.Level, encrytedChunk bool) (int, int)
ReferenceCount brute-forces the data shard count, from which the parity count in a subtree can be identified as well. It assumes span > swarm.ChunkSize and returns the data and parity shard counts.
func SplitWriteAll ¶
func SplitWriteAll(ctx context.Context, s Splitter, r io.Reader, l int64, toEncrypt bool) (swarm.Address, error)
SplitWriteAll writes all input from the provided reader to the provided Splitter.
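A minimal sketch of SplitWriteAll, assuming s is a Splitter (obtained, for example, from the splitter subpackage) and ctx is the caller's context:
payload := []byte("hello swarm")
addr, err := file.SplitWriteAll(ctx, s, bytes.NewReader(payload), int64(len(payload)), false)
if err != nil {
	return err
}
fmt.Println("root address:", addr)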
Types ¶
type AbortError ¶
type AbortError struct {
// contains filtered or unexported fields
}
AbortError should be returned whenever a file operation is terminated before it has completed.
func (*AbortError) Error ¶
func (e *AbortError) Error() string
Error implements the standard Go error interface.
type ChunkPipe ¶
type ChunkPipe struct {
	io.ReadCloser
	// contains filtered or unexported fields
}
ChunkPipe ensures that only the last read is smaller than the chunk size, regardless of the size of individual writes.
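A sketch of the intended producer/consumer pattern; the write sizes are arbitrary, and swarm.ChunkSize is Swarm's 4096-byte chunk size:
pipe := file.NewChunkPipe()
go func() {
	defer pipe.Close()
	_, _ = pipe.Write(make([]byte, 1000)) // writes of any size...
	_, _ = pipe.Write(make([]byte, 5000))
}()
buf := make([]byte, swarm.ChunkSize)
for {
	n, err := pipe.Read(buf) // ...surface as chunk-sized reads, except the last
	if n > 0 {
		fmt.Println("read", n, "bytes")
	}
	if err != nil { // io.EOF once the writer has closed
		break
	}
}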
type HashError ¶
type HashError struct {
// contains filtered or unexported fields
}
HashError should be returned whenever a file operation is terminated before it has completed.
type Joiner ¶
type Joiner interface {
	Reader
	// IterateChunkAddresses is used to iterate over the chunk addresses of some root hash.
	IterateChunkAddresses(swarm.AddressIterFunc) error
	// Size returns the span of the hash trie represented by the joiner's root hash.
	Size() int64
}
Joiner provides the inverse functionality of the Splitter.
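A sketch of walking every chunk address under the joiner's root hash; the error-returning callback shape is assumed here for swarm.AddressIterFunc:
err := j.IterateChunkAddresses(func(addr swarm.Address) error {
	fmt.Println(addr)
	return nil // returning a non-nil error stops the iteration
})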
type Splitter ¶
type Splitter interface {
Split(ctx context.Context, dataIn io.ReadCloser, dataLength int64, toEncrypt bool) (addr swarm.Address, err error)
}
Splitter starts a new file splitting job.
Data is read from the provided reader. If the dataLength parameter is 0, data is read until io.EOF is encountered. When EOF is received and splitting is done, the resulting Swarm Address is returned.
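A direct Split call, as a sketch; s is a Splitter implementation assumed to come from elsewhere (such as the splitter subpackage):
payload := []byte("hello swarm")
addr, err := s.Split(ctx, io.NopCloser(bytes.NewReader(payload)), int64(len(payload)), false)
if err != nil {
	return err
}
fmt.Println("root:", addr)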
Directories ¶
Path | Synopsis
---|---
joiner | Package joiner provides implementations of the file.Joiner interface.
loadsave | Package loadsave provides lightweight persistence abstraction for manifest operations.
pipeline | Package pipeline provides functionality for hashing pipelines needed to create different flavors of merkelised representations of arbitrary data.
splitter | Package splitter provides implementations of the file.Splitter interface.