storage

package
v2.1.1+incompatible
Published: Aug 22, 2018 License: GPL-3.0 Imports: 32 Imported by: 0

Documentation

Overview

Copyright 2016 The eosclassic & go-ethereum Authors This file is part of the eosclassic library.

The eosclassic library is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

The eosclassic library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with the eosclassic library. If not, see <http://www.gnu.org/licenses/>.


Index

Constants

const (
	ErrInit = iota
	ErrNotFound
	ErrIO
	ErrUnauthorized
	ErrInvalidValue
	ErrDataOverflow
	ErrNothingToReturn
	ErrCorruptData
	ErrInvalidSignature
	ErrNotSynced
	ErrPeriodDepth
	ErrCnt
)
const (
	DataChunk = 0
	TreeChunk = 1
)
const (
	BMTHash     = "BMT"
	SHA3Hash    = "SHA3" // http://golang.org/pkg/hash/#Hash
	DefaultHash = BMTHash
)
const (
	ChunkProcessors = 8
)
const KeyLength = 32
const MaxPO = 16

Variables

var (
	ErrChunkNotFound    = errors.New("chunk not found")
	ErrFetching         = errors.New("chunk still fetching")
	ErrChunkInvalid     = errors.New("invalid chunk")
	ErrChunkForward     = errors.New("cannot forward")
	ErrChunkUnavailable = errors.New("chunk unavailable")
	ErrChunkTimeout     = errors.New("timeout")
)
var ZeroAddr = Address(common.Hash{}.Bytes())

Functions

func BytesToU64

func BytesToU64(data []byte) uint64

func IsZeroAddr added in v1.8.15

func IsZeroAddr(addr Address) bool

func NewHasherStore added in v1.8.15

func NewHasherStore(chunkStore ChunkStore, hashFunc SwarmHasher, toEncrypt bool) *hasherStore

NewHasherStore creates a hasherStore object, which implements the Putter and Getter interfaces. With the hasherStore you can put and get chunk data (which is just []byte) into a ChunkStore, and the hasherStore will take care of encryption/decryption of the data if necessary.

func Proximity added in v1.8.15

func Proximity(one, other []byte) (ret int)
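Proximity is undocumented on this page; as an illustration, the following self-contained sketch re-implements the usual Swarm proximity order: the number of leading bits in which the two byte slices agree (the index of the first differing bit), capped at MaxPO. The exact capping behaviour is an assumption.

```go
package main

import "fmt"

const MaxPO = 16

// proximity returns the index of the first bit at which one and other
// differ, scanning from the most significant bit, capped at MaxPO.
// Identical inputs (within the cap) yield MaxPO.
func proximity(one, other []byte) int {
	b := (MaxPO-1)/8 + 1 // number of bytes needed to cover MaxPO bits
	if len(one) < b {
		b = len(one)
	}
	if len(other) < b {
		b = len(other)
	}
	for i := 0; i < b; i++ {
		oxo := one[i] ^ other[i]
		for j := 0; j < 8; j++ {
			if (oxo>>uint(7-j))&0x01 != 0 {
				po := i*8 + j
				if po > MaxPO {
					return MaxPO
				}
				return po
			}
		}
	}
	return MaxPO
}

func main() {
	fmt.Println(proximity([]byte{0x80, 0x00}, []byte{0x00, 0x00})) // first bit differs: 0
	fmt.Println(proximity([]byte{0xff, 0x00}, []byte{0xff, 0x80})) // differs at bit 8: 8
	fmt.Println(proximity([]byte{0xaa}, []byte{0xaa}))             // identical: MaxPO
}
```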

func PutChunks added in v1.8.15

func PutChunks(store *LocalStore, chunks ...*Chunk)

PutChunks adds chunks to the localstore. It waits for a receive on the stored channel. It logs, but does not fail on, delivery errors.

func U64ToBytes

func U64ToBytes(val uint64) []byte
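The byte order used by this conversion pair is not documented here; a plausible self-contained sketch using a big-endian 8-byte encoding (the encoding choice is an assumption):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// u64ToBytes encodes a uint64 as an 8-byte big-endian slice.
func u64ToBytes(val uint64) []byte {
	data := make([]byte, 8)
	binary.BigEndian.PutUint64(data, val)
	return data
}

// bytesToU64 decodes an 8-byte big-endian slice; short input yields 0.
func bytesToU64(data []byte) uint64 {
	if len(data) < 8 {
		return 0
	}
	return binary.BigEndian.Uint64(data)
}

func main() {
	b := u64ToBytes(1024)
	fmt.Println(bytesToU64(b)) // round-trips: 1024
}
```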

Types

type Address added in v1.8.15

type Address []byte

func PyramidAppend added in v1.8.15

func PyramidAppend(ctx context.Context, addr Address, reader io.Reader, putter Putter, getter Getter) (Address, func(context.Context) error, error)

func PyramidSplit added in v1.8.15

func PyramidSplit(ctx context.Context, reader io.Reader, putter Putter, getter Getter) (Address, func(context.Context) error, error)

When splitting, data is given as a SectionReader, and the key is a hashSize-long byte slice (Key); the root hash of the entire content will fill this once processing finishes. New chunks to store are stored using the putter which the caller provides.

func TreeSplit added in v1.8.15

func TreeSplit(ctx context.Context, data io.Reader, size int64, putter Putter) (k Address, wait func(context.Context) error, err error)

When splitting, data is given as a SectionReader, and the key is a hashSize-long byte slice (Key); the root hash of the entire content will fill this once processing finishes. New chunks to store are stored using the putter which the caller provides.

func (Address) Hex added in v1.8.15

func (a Address) Hex() string

func (Address) Log added in v1.8.15

func (a Address) Log() string

func (Address) MarshalJSON added in v1.8.15

func (a Address) MarshalJSON() (out []byte, err error)

func (Address) Size added in v1.8.15

func (a Address) Size() uint

func (Address) String added in v1.8.15

func (a Address) String() string

func (*Address) UnmarshalJSON added in v1.8.15

func (a *Address) UnmarshalJSON(value []byte) error

type AddressCollection added in v1.8.15

type AddressCollection []Address

func NewAddressCollection added in v1.8.15

func NewAddressCollection(l int) AddressCollection

func (AddressCollection) Len added in v1.8.15

func (c AddressCollection) Len() int

func (AddressCollection) Less added in v1.8.15

func (c AddressCollection) Less(i, j int) bool

func (AddressCollection) Swap added in v1.8.15

func (c AddressCollection) Swap(i, j int)

type Chunk

type Chunk struct {
	Addr  Address // always
	SData []byte  // nil if request, to be supplied by dpa
	Size  int64   // size of the data covered by the subtree encoded in this chunk
	//Source   Peer           // peer
	C    chan bool // to signal data delivery by the dpa
	ReqC chan bool // to signal the request done
	// contains filtered or unexported fields
}

Chunk also serves as a request object passed to ChunkStores. In case it is a retrieval request, Data is nil and Size is 0. Note that Size is not the size of the data chunk, which is Data.Size(), but the size of the subtree encoded in the chunk; it is 0 if this is a request, to be supplied by the dpa.

func GenerateRandomChunk added in v1.8.15

func GenerateRandomChunk(dataSize int64) *Chunk

func GenerateRandomChunks added in v1.8.15

func GenerateRandomChunks(dataSize int64, count int) (chunks []*Chunk)

func NewChunk

func NewChunk(addr Address, reqC chan bool) *Chunk

func (*Chunk) GetErrored added in v1.8.15

func (c *Chunk) GetErrored() error

func (*Chunk) SetErrored added in v1.8.15

func (c *Chunk) SetErrored(err error)

func (*Chunk) String

func (c *Chunk) String() string

String() for pretty printing

func (*Chunk) WaitToStore added in v1.8.15

func (c *Chunk) WaitToStore() error

type ChunkData added in v1.8.15

type ChunkData []byte

func (ChunkData) Data added in v1.8.15

func (c ChunkData) Data() []byte

func (ChunkData) Size added in v1.8.15

func (c ChunkData) Size() int64

NOTE: this returns invalid data if chunk is encrypted

type ChunkStore

type ChunkStore interface {
	Put(context.Context, *Chunk) // effectively there is no error even if there is an error
	Get(context.Context, Address) (*Chunk, error)
	Close()
}

ChunkStore interface is implemented by :

- MemStore: a memory cache
- DbStore: local disk/db store
- LocalStore: a combination (sequence of) memStore and dbStore
- NetStore: cloud storage abstraction layer
- FakeChunkStore: dummy store which doesn't store anything, just implements the interface

type ChunkValidator added in v1.8.15

type ChunkValidator interface {
	Validate(addr Address, data []byte) bool
}

type ChunkerParams

type ChunkerParams struct {
	// contains filtered or unexported fields
}

type ContentAddressValidator added in v1.8.15

type ContentAddressValidator struct {
	Hasher SwarmHasher
}

Provides a method for validation of content addresses in chunks. Holds the corresponding hasher used to create the address.

func NewContentAddressValidator added in v1.8.15

func NewContentAddressValidator(hasher SwarmHasher) *ContentAddressValidator

Constructor

func (*ContentAddressValidator) Validate added in v1.8.15

func (v *ContentAddressValidator) Validate(addr Address, data []byte) bool

Validate that the given key is a valid content address for the given data

type DBAPI added in v1.8.15

type DBAPI struct {
	// contains filtered or unexported fields
}

Wrapper around the underlying databases, providing mockable custom local chunk store access to the syncer.

func NewDBAPI added in v1.8.15

func NewDBAPI(loc *LocalStore) *DBAPI

func (*DBAPI) CurrentBucketStorageIndex added in v1.8.15

func (d *DBAPI) CurrentBucketStorageIndex(po uint8) uint64

current storage counter of chunk db

func (*DBAPI) Get added in v1.8.15

func (d *DBAPI) Get(ctx context.Context, addr Address) (*Chunk, error)

to obtain the chunks from address or request db entry only

func (*DBAPI) GetOrCreateRequest added in v1.8.15

func (d *DBAPI) GetOrCreateRequest(ctx context.Context, addr Address) (*Chunk, bool)

to obtain the chunks from address or request db entry only

func (*DBAPI) Iterator added in v1.8.15

func (d *DBAPI) Iterator(from uint64, to uint64, po uint8, f func(Address, uint64) bool) error

Iterator iterates over db entries by storage counter within the given range and proximity order.

func (*DBAPI) Put added in v1.8.15

func (d *DBAPI) Put(ctx context.Context, chunk *Chunk)

to store the chunk in the local chunk db

type FileStore added in v1.8.15

type FileStore struct {
	ChunkStore
	// contains filtered or unexported fields
}

func NewFileStore added in v1.8.15

func NewFileStore(store ChunkStore, params *FileStoreParams) *FileStore

func NewLocalFileStore added in v1.8.15

func NewLocalFileStore(datadir string, basekey []byte) (*FileStore, error)

for testing locally

func (*FileStore) HashSize added in v1.8.15

func (f *FileStore) HashSize() int

func (*FileStore) Retrieve added in v1.8.15

func (f *FileStore) Retrieve(ctx context.Context, addr Address) (reader *LazyChunkReader, isEncrypted bool)

Public API. Main entry point for document retrieval directly. Used by the FS-aware API and httpaccess. Chunk retrieval blocks on netStore requests with a timeout, so the reader will report an error if retrieval of chunks within the requested range times out. It returns a reader with the chunk data and whether the content was encrypted.

func (*FileStore) Store added in v1.8.15

func (f *FileStore) Store(ctx context.Context, data io.Reader, size int64, toEncrypt bool) (addr Address, wait func(context.Context) error, err error)

Public API. Main entry point for document storage directly. Used by the FS-aware API and httpaccess

type FileStoreParams added in v1.8.15

type FileStoreParams struct {
	Hash string
}

func NewFileStoreParams added in v1.8.15

func NewFileStoreParams() *FileStoreParams

type Getter added in v1.8.15

type Getter interface {
	Get(context.Context, Reference) (ChunkData, error)
}

Getter is an interface to retrieve a chunk's data by its reference

type HashWithLength

type HashWithLength struct {
	hash.Hash
}

func (*HashWithLength) ResetWithLength

func (h *HashWithLength) ResetWithLength(length []byte)

type Hasher

type Hasher func() hash.Hash

type JoinerParams added in v1.8.15

type JoinerParams struct {
	ChunkerParams
	// contains filtered or unexported fields
}

type LDBDatabase

type LDBDatabase struct {
	// contains filtered or unexported fields
}

func NewLDBDatabase

func NewLDBDatabase(file string) (*LDBDatabase, error)

func (*LDBDatabase) Close

func (db *LDBDatabase) Close()

func (*LDBDatabase) Delete

func (db *LDBDatabase) Delete(key []byte) error

func (*LDBDatabase) Get

func (db *LDBDatabase) Get(key []byte) ([]byte, error)

func (*LDBDatabase) LastKnownTD

func (db *LDBDatabase) LastKnownTD() []byte

func (*LDBDatabase) NewIterator

func (db *LDBDatabase) NewIterator() iterator.Iterator

func (*LDBDatabase) Put

func (db *LDBDatabase) Put(key []byte, value []byte)

func (*LDBDatabase) Write

func (db *LDBDatabase) Write(batch *leveldb.Batch) error

type LDBStore added in v1.8.15

type LDBStore struct {
	// contains filtered or unexported fields
}

func NewLDBStore added in v1.8.15

func NewLDBStore(params *LDBStoreParams) (s *LDBStore, err error)

TODO: Instead of passing the distance function, just pass the address from which distances are calculated to avoid the appearance of a pluggable distance metric and opportunities of bugs associated with providing a function different from the one that is actually used.

func NewMockDbStore added in v1.8.15

func NewMockDbStore(params *LDBStoreParams, mockStore *mock.NodeStore) (s *LDBStore, err error)

NewMockDbStore creates a new instance of DbStore with mockStore set to a provided value. If mockStore argument is nil, this function behaves exactly as NewDbStore.

func (*LDBStore) Cleanup added in v1.8.15

func (s *LDBStore) Cleanup()

func (*LDBStore) Close added in v1.8.15

func (s *LDBStore) Close()

func (*LDBStore) CurrentBucketStorageIndex added in v1.8.15

func (s *LDBStore) CurrentBucketStorageIndex(po uint8) uint64

func (*LDBStore) CurrentStorageIndex added in v1.8.15

func (s *LDBStore) CurrentStorageIndex() uint64

func (*LDBStore) Export added in v1.8.15

func (s *LDBStore) Export(out io.Writer) (int64, error)

Export writes all chunks from the store to a tar archive, returning the number of chunks written.

func (*LDBStore) Get added in v1.8.15

func (s *LDBStore) Get(ctx context.Context, addr Address) (chunk *Chunk, err error)

func (*LDBStore) Import added in v1.8.15

func (s *LDBStore) Import(in io.Reader) (int64, error)

Import reads chunks into the store from a tar archive, returning the number of chunks read.

func (*LDBStore) Put added in v1.8.15

func (s *LDBStore) Put(ctx context.Context, chunk *Chunk)

func (*LDBStore) ReIndex added in v1.8.15

func (s *LDBStore) ReIndex()

func (*LDBStore) Size added in v1.8.15

func (s *LDBStore) Size() uint64

func (*LDBStore) SyncIterator added in v1.8.15

func (s *LDBStore) SyncIterator(since uint64, until uint64, po uint8, f func(Address, uint64) bool) error

SyncIterator(start, stop, po, f) calls f on each hash of a bin po from start to stop

type LDBStoreParams added in v1.8.15

type LDBStoreParams struct {
	*StoreParams
	Path string
	Po   func(Address) uint8
}

func NewLDBStoreParams added in v1.8.15

func NewLDBStoreParams(storeparams *StoreParams, path string) *LDBStoreParams

NewLDBStoreParams constructs LDBStoreParams with the specified values.

type LazyChunkReader

type LazyChunkReader struct {
	Ctx context.Context
	// contains filtered or unexported fields
}

LazyChunkReader implements LazySectionReader

func TreeJoin added in v1.8.15

func TreeJoin(ctx context.Context, addr Address, getter Getter, depth int) *LazyChunkReader

Join reconstructs original content based on a root key. When joining, the caller gets returned a LazySectionReader, which is seekable and implements on-demand fetching of chunks as and where it is read. New chunks to retrieve come from the getter, which the caller provides. If an error is encountered during joining, it appears as a reader error on the LazySectionReader. As a result, partial reads from a document are possible even if other parts are corrupt or lost. The chunks are not meant to be validated by the chunker when joining; this is left to the DPA, which decides which sources are trusted.

func (*LazyChunkReader) Context

func (r *LazyChunkReader) Context() context.Context

func (*LazyChunkReader) Read

func (r *LazyChunkReader) Read(b []byte) (read int, err error)

Read keeps a cursor, so it cannot be called simultaneously; see ReadAt.

func (*LazyChunkReader) ReadAt

func (r *LazyChunkReader) ReadAt(b []byte, off int64) (read int, err error)

ReadAt can be called numerous times; concurrent reads are allowed. Size() needs to be called synchronously on the LazyChunkReader first.

func (*LazyChunkReader) Seek

func (r *LazyChunkReader) Seek(offset int64, whence int) (int64, error)

func (*LazyChunkReader) Size

func (r *LazyChunkReader) Size(ctx context.Context, quitC chan bool) (n int64, err error)

Size is meant to be called on the LazySectionReader

type LazySectionReader

type LazySectionReader interface {
	Context() context.Context
	Size(context.Context, chan bool) (int64, error)
	io.Seeker
	io.Reader
	io.ReaderAt
}

Size, Seek, Read, ReadAt
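LazySectionReader combines the stdlib Seeker/Reader/ReaderAt interfaces; the stdlib io.SectionReader exhibits the same Seek/Read/ReadAt semantics on in-memory data, as this self-contained sketch shows:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// sectionPreview demonstrates section-reader semantics: ReadAt reads at an
// absolute offset without moving the cursor, while Seek+Read consume from
// the cursor position.
func sectionPreview(src string, off, length int64) (head, tail string) {
	r := io.NewSectionReader(strings.NewReader(src), off, length)

	buf := make([]byte, 5)
	n, _ := r.ReadAt(buf, 0) // absolute read; cursor stays at 0
	head = string(buf[:n])

	r.Seek(6, io.SeekStart) // skip "chunk " within the section
	rest, _ := io.ReadAll(r)
	tail = string(rest)
	return head, tail
}

func main() {
	// The section covers offset 6, length 10 of the source: "chunk data".
	head, tail := sectionPreview("swarm chunk data", 6, 10)
	fmt.Println(head, tail) // chunk data
}
```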

type LazyTestSectionReader

type LazyTestSectionReader struct {
	*io.SectionReader
}

func (*LazyTestSectionReader) Context

func (r *LazyTestSectionReader) Context() context.Context

func (*LazyTestSectionReader) Size

type LocalStore

type LocalStore struct {
	Validators []ChunkValidator

	DbStore *LDBStore
	// contains filtered or unexported fields
}

LocalStore is a combination of an in-memory db over a disk-persisted db. It implements Get/Put with fallback (caching) logic using any two ChunkStores.

func NewLocalStore

func NewLocalStore(params *LocalStoreParams, mockStore *mock.NodeStore) (*LocalStore, error)

This constructor uses MemStore and DbStore as components

func NewTestLocalStoreForAddr added in v1.8.15

func NewTestLocalStoreForAddr(params *LocalStoreParams) (*LocalStore, error)

func (*LocalStore) Close

func (ls *LocalStore) Close()

Close the local store

func (*LocalStore) Get

func (ls *LocalStore) Get(ctx context.Context, addr Address) (chunk *Chunk, err error)

Get(chunk *Chunk) looks up a chunk in the local stores. This method is blocking until the chunk is retrieved, so an additional timeout may be needed to wrap this call if ChunkStores are remote and can have long latency.

func (*LocalStore) GetOrCreateRequest added in v1.8.15

func (ls *LocalStore) GetOrCreateRequest(ctx context.Context, addr Address) (chunk *Chunk, created bool)

retrieve logic common for local and network chunk retrieval requests

func (*LocalStore) Put

func (ls *LocalStore) Put(ctx context.Context, chunk *Chunk)

Put is responsible for doing validation and storage of the chunk by using configured ChunkValidators, MemStore and LDBStore. If the chunk is not valid, its GetErrored function will return ErrChunkInvalid. This method will check if the chunk is already in the MemStore and it will return it if it is. If there is an error from the MemStore.Get, it will be returned by calling GetErrored on the chunk. This method is responsible for closing Chunk.ReqC channel when the chunk is stored in memstore. After the LDBStore.Put, it is ensured that the MemStore contains the chunk with the same data, but nil ReqC channel.

func (*LocalStore) RequestsCacheLen added in v1.8.15

func (ls *LocalStore) RequestsCacheLen() int

RequestsCacheLen returns the current number of outgoing requests stored in the cache

type LocalStoreParams added in v1.8.15

type LocalStoreParams struct {
	*StoreParams
	ChunkDbPath string
	Validators  []ChunkValidator `toml:"-"`
}

func NewDefaultLocalStoreParams added in v1.8.15

func NewDefaultLocalStoreParams() *LocalStoreParams

func (*LocalStoreParams) Init added in v1.8.15

func (p *LocalStoreParams) Init(path string)

This can only finally be set after all config options (file, cmd line, env vars) have been evaluated.

type MapChunkStore added in v1.8.15

type MapChunkStore struct {
	// contains filtered or unexported fields
}

MapChunkStore is a very simple ChunkStore implementation to store chunks in a map in memory.

func NewMapChunkStore added in v1.8.15

func NewMapChunkStore() *MapChunkStore

func (*MapChunkStore) Close added in v1.8.15

func (m *MapChunkStore) Close()

func (*MapChunkStore) Get added in v1.8.15

func (m *MapChunkStore) Get(ctx context.Context, addr Address) (*Chunk, error)

func (*MapChunkStore) Put added in v1.8.15

func (m *MapChunkStore) Put(ctx context.Context, chunk *Chunk)

type MemStore

type MemStore struct {
	// contains filtered or unexported fields
}

func NewMemStore

func NewMemStore(params *StoreParams, _ *LDBStore) (m *MemStore)

NewMemStore is instantiating a MemStore cache. We are keeping a record of all outgoing requests for chunks, that should later be delivered by peer nodes, in the `requests` LRU cache. We are also keeping all frequently requested chunks in the `cache` LRU cache.

`requests` LRU cache capacity should ideally never be reached, this is why for the time being it should be initialised with the same value as the LDBStore capacity.
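The `cache` and `requests` structures described above are LRU caches; a minimal, self-contained LRU sketch follows (the package itself uses an LRU library, not this code):

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is a fixed-capacity least-recently-used cache: front of the list is
// the most recently used entry; the back is evicted when capacity overflows.
type lru struct {
	cap   int
	order *list.List
	items map[string]*list.Element
}

type entry struct {
	key string
	val []byte
}

func newLRU(cap int) *lru {
	return &lru{cap: cap, order: list.New(), items: make(map[string]*list.Element)}
}

func (l *lru) Put(key string, val []byte) {
	if el, ok := l.items[key]; ok {
		l.order.MoveToFront(el)
		el.Value.(*entry).val = val
		return
	}
	l.items[key] = l.order.PushFront(&entry{key, val})
	if l.order.Len() > l.cap {
		oldest := l.order.Back()
		l.order.Remove(oldest)
		delete(l.items, oldest.Value.(*entry).key)
	}
}

func (l *lru) Get(key string) ([]byte, bool) {
	el, ok := l.items[key]
	if !ok {
		return nil, false
	}
	l.order.MoveToFront(el) // touching an entry refreshes its recency
	return el.Value.(*entry).val, true
}

func main() {
	c := newLRU(2)
	c.Put("a", []byte("1"))
	c.Put("b", []byte("2"))
	c.Get("a")              // touch "a" so "b" becomes the eviction candidate
	c.Put("c", []byte("3")) // evicts "b"
	_, ok := c.Get("b")
	fmt.Println(ok) // false
}
```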

func (*MemStore) Close

func (s *MemStore) Close()

func (*MemStore) Get

func (m *MemStore) Get(ctx context.Context, addr Address) (*Chunk, error)

func (*MemStore) Put

func (m *MemStore) Put(ctx context.Context, c *Chunk)

type NetStore

type NetStore struct {
	// contains filtered or unexported fields
}

NetStore implements the ChunkStore interface. This chunk access layer assumes two chunk stores: local storage (e.g. LocalStore) and network storage (e.g. NetStore). Access by calling the network is blocking, with a timeout.

func NewNetStore

func NewNetStore(localStore *LocalStore, retrieve func(ctx context.Context, chunk *Chunk) error) *NetStore

func (*NetStore) Close

func (ns *NetStore) Close()

Close chunk store

func (*NetStore) Get

func (ns *NetStore) Get(ctx context.Context, addr Address) (chunk *Chunk, err error)

Get is the entrypoint for local retrieve requests; it waits for a response or times out.

Get uses get method to retrieve request, but retries if the ErrChunkNotFound is returned by get, until the netStoreRetryTimeout is reached.

func (*NetStore) GetWithTimeout added in v1.8.15

func (ns *NetStore) GetWithTimeout(ctx context.Context, addr Address, timeout time.Duration) (chunk *Chunk, err error)

GetWithTimeout makes a single retrieval attempt for a chunk with an explicit timeout parameter.

func (*NetStore) Put

func (ns *NetStore) Put(ctx context.Context, chunk *Chunk)

Put is the entrypoint for local store requests coming from storeLoop

type Peer

type Peer interface{}

Peer is recorded as Source on the chunk. It should probably not be here; rather, the network should wrap the chunk object.

type Putter added in v1.8.15

type Putter interface {
	Put(context.Context, ChunkData) (Reference, error)
	// RefSize returns the length of the Reference created by this Putter
	RefSize() int64
	// Close is to indicate that no more chunk data will be Put on this Putter
	Close()
	// Wait returns when all data has been stored and Close() was called.
	Wait(context.Context) error
}

Putter is responsible for storing data and creating a reference for it.

type PyramidChunker

type PyramidChunker struct {
	// contains filtered or unexported fields
}

func NewPyramidSplitter added in v1.8.15

func NewPyramidSplitter(params *PyramidSplitterParams) (pc *PyramidChunker)

func (*PyramidChunker) Append

func (pc *PyramidChunker) Append(ctx context.Context) (k Address, wait func(context.Context) error, err error)

func (*PyramidChunker) Join

func (pc *PyramidChunker) Join(addr Address, getter Getter, depth int) LazySectionReader

func (*PyramidChunker) Split

func (pc *PyramidChunker) Split(ctx context.Context) (k Address, wait func(context.Context) error, err error)

type PyramidSplitterParams added in v1.8.15

type PyramidSplitterParams struct {
	SplitterParams
	// contains filtered or unexported fields
}

func NewPyramidSplitterParams added in v1.8.15

func NewPyramidSplitterParams(addr Address, reader io.Reader, putter Putter, getter Getter, chunkSize int64) *PyramidSplitterParams

type Reference added in v1.8.15

type Reference []byte

type SplitterParams added in v1.8.15

type SplitterParams struct {
	ChunkerParams
	// contains filtered or unexported fields
}

type StoreParams

type StoreParams struct {
	Hash                       SwarmHasher `toml:"-"`
	DbCapacity                 uint64
	CacheCapacity              uint
	ChunkRequestsCacheCapacity uint
	BaseKey                    []byte
}

func NewDefaultStoreParams

func NewDefaultStoreParams() *StoreParams

func NewStoreParams added in v1.8.15

func NewStoreParams(ldbCap uint64, cacheCap uint, requestsCap uint, hash SwarmHasher, basekey []byte) *StoreParams

type SwarmHash

type SwarmHash interface {
	hash.Hash
	ResetWithLength([]byte)
}

type SwarmHasher

type SwarmHasher func() SwarmHash

func MakeHashFunc

func MakeHashFunc(hash string) SwarmHasher

type TreeChunker

type TreeChunker struct {
	// contains filtered or unexported fields
}

func NewTreeJoiner added in v1.8.15

func NewTreeJoiner(params *JoinerParams) *TreeChunker

func NewTreeSplitter added in v1.8.15

func NewTreeSplitter(params *TreeSplitterParams) *TreeChunker

func (*TreeChunker) Append

func (tc *TreeChunker) Append() (Address, func(), error)

func (*TreeChunker) Join

func (tc *TreeChunker) Join(ctx context.Context) *LazyChunkReader

func (*TreeChunker) Split

func (tc *TreeChunker) Split(ctx context.Context) (k Address, wait func(context.Context) error, err error)

type TreeEntry

type TreeEntry struct {
	// contains filtered or unexported fields
}

Entry to create a tree node

func NewTreeEntry

func NewTreeEntry(pyramid *PyramidChunker) *TreeEntry

type TreeSplitterParams added in v1.8.15

type TreeSplitterParams struct {
	SplitterParams
	// contains filtered or unexported fields
}

Directories

Path Synopsis
mock
Package mock defines types that are used by different implementations of mock storages.
db
Package db implements a mock store that keeps all chunk data in a LevelDB database.
mem
Package mem implements a mock store that keeps all chunk data in memory.
rpc
Package rpc implements an RPC client that connects to a centralized mock store.
test
Package test provides functions that are used for testing GlobalStorer implementations.
mru
Package mru defines Mutable resource updates.
