Documentation ¶
Overview ¶
Copyright 2016 The go-ethereum Authors This file is part of the go-ethereum library.
The go-ethereum library is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
The go-ethereum library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
Index ¶
- Constants
- Variables
- func BytesToU64(data []byte) uint64
- func IsZeroAddr(addr Address) bool
- func NewHasherStore(chunkStore ChunkStore, hashFunc SwarmHasher, toEncrypt bool) *hasherStore
- func Proximity(one, other []byte) (ret int)
- func PutChunks(store *LocalStore, chunks ...*Chunk)
- func U64ToBytes(val uint64) []byte
- type Address
- type AddressCollection
- type Chunk
- type ChunkData
- type ChunkStore
- type ChunkValidator
- type ChunkerParams
- type ContentAddressValidator
- type DBAPI
- func (d *DBAPI) CurrentBucketStorageIndex(po uint8) uint64
- func (d *DBAPI) Get(addr Address) (*Chunk, error)
- func (d *DBAPI) GetOrCreateRequest(addr Address) (*Chunk, bool)
- func (d *DBAPI) Iterator(from uint64, to uint64, po uint8, f func(Address, uint64) bool) error
- func (d *DBAPI) Put(chunk *Chunk)
- type FileStore
- type FileStoreParams
- type Getter
- type HashWithLength
- type Hasher
- type JoinerParams
- type LDBDatabase
- func (db *LDBDatabase) Close()
- func (db *LDBDatabase) Delete(key []byte) error
- func (db *LDBDatabase) Get(key []byte) ([]byte, error)
- func (db *LDBDatabase) LastKnownTD() []byte
- func (db *LDBDatabase) NewIterator() iterator.Iterator
- func (db *LDBDatabase) Put(key []byte, value []byte)
- func (db *LDBDatabase) Write(batch *leveldb.Batch) error
- type LDBStore
- func (s *LDBStore) Cleanup()
- func (s *LDBStore) Close()
- func (s *LDBStore) CurrentBucketStorageIndex(po uint8) uint64
- func (s *LDBStore) CurrentStorageIndex() uint64
- func (s *LDBStore) Export(out io.Writer) (int64, error)
- func (s *LDBStore) Get(addr Address) (chunk *Chunk, err error)
- func (s *LDBStore) Import(in io.Reader) (int64, error)
- func (s *LDBStore) Put(chunk *Chunk)
- func (s *LDBStore) ReIndex()
- func (s *LDBStore) Size() uint64
- func (s *LDBStore) SyncIterator(since uint64, until uint64, po uint8, f func(Address, uint64) bool) error
- type LDBStoreParams
- type LazyChunkReader
- type LazySectionReader
- type LazyTestSectionReader
- type LocalStore
- type LocalStoreParams
- type MapChunkStore
- type MemStore
- type NetStore
- type Peer
- type Putter
- type PyramidChunker
- type PyramidSplitterParams
- type Reference
- type SplitterParams
- type StoreParams
- type SwarmHash
- type SwarmHasher
- type TreeChunker
- type TreeEntry
- type TreeSplitterParams
Constants ¶
const (
	ErrInit = iota
	ErrNotFound
	ErrIO
	ErrInvalidValue
	ErrDataOverflow
	ErrNothingToReturn
	ErrCorruptData
	ErrInvalidSignature
	ErrNotSynced
	ErrPeriodDepth
	ErrCnt
)
const (
	DataChunk = 0
	TreeChunk = 1
)
const (
	BMTHash  = "BMT"
	SHA3Hash = "SHA3" // http://golang.org/pkg/hash/#Hash

	DefaultHash = BMTHash
)
const (
ChunkProcessors = 8
)
const (
DefaultChunkSize int64 = 4096
)
const KeyLength = 32
const MaxPO = 16
Variables ¶
var (
	ErrChunkNotFound = errors.New("chunk not found")
	ErrFetching      = errors.New("chunk still fetching")
	ErrChunkInvalid  = errors.New("invalid chunk")
	ErrChunkForward  = errors.New("cannot forward")
	ErrChunkTimeout  = errors.New("timeout")
)
var ZeroAddr = Address(common.Hash{}.Bytes())
Functions ¶
func BytesToU64 ¶
func IsZeroAddr ¶
func NewHasherStore ¶
func NewHasherStore(chunkStore ChunkStore, hashFunc SwarmHasher, toEncrypt bool) *hasherStore
NewHasherStore creates a hasherStore object, which implements the Putter and Getter interfaces. With the hasherStore you can put and get chunk data (which is just []byte) into a ChunkStore, and the hasherStore will take care of encryption/decryption of the data if necessary.
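A minimal sketch of the Putter lifecycle on a hasherStore, backed by the in-memory MapChunkStore from this package; chunkData is assumed to be an already well-formed ChunkData value:

func storeOnce(chunkData ChunkData) (Reference, error) {
	// NewMapChunkStore and DefaultHash are documented on this page;
	// toEncrypt=false stores the data unencrypted.
	hs := NewHasherStore(NewMapChunkStore(), MakeHashFunc(DefaultHash), false)
	ref, err := hs.Put(chunkData)
	if err == nil {
		hs.Close() // signal that no more chunk data will be Put
		hs.Wait()  // block until all data has been stored
	}
	return ref, err
}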
func PutChunks ¶
func PutChunks(store *LocalStore, chunks ...*Chunk)
PutChunks adds chunks to the localstore. It waits for a receive on the stored channel, and it logs but does not fail on delivery errors.
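For example, a test can seed a LocalStore with random chunks; GenerateRandomChunks is listed above but its signature is not shown on this page, so the (dataSize, count) form below is an assumption:

func seed(store *LocalStore) {
	// Assumed signature: GenerateRandomChunks(dataSize int64, count int) []*Chunk
	chunks := GenerateRandomChunks(DefaultChunkSize, 10)
	PutChunks(store, chunks...)
}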
func U64ToBytes ¶
Types ¶
type Address ¶
type Address []byte
func PyramidAppend ¶
func PyramidSplit ¶
When splitting, data is given as a SectionReader, and the key is a hashSize-long byte slice (Key); the root hash of the entire content will fill this once processing finishes. New chunks to store are stored using the Putter which the caller provides.
func TreeSplit ¶
When splitting, data is given as a SectionReader, and the key is a hashSize-long byte slice (Key); the root hash of the entire content will fill this once processing finishes. New chunks to store are stored using the Putter which the caller provides.
func (Address) MarshalJSON ¶
func (*Address) UnmarshalJSON ¶
type AddressCollection ¶
type AddressCollection []Address
func NewAddressCollection ¶
func NewAddressCollection(l int) AddressCollection
func (AddressCollection) Len ¶
func (c AddressCollection) Len() int
func (AddressCollection) Less ¶
func (c AddressCollection) Less(i, j int) bool
func (AddressCollection) Swap ¶
func (c AddressCollection) Swap(i, j int)
type Chunk ¶
type Chunk struct {
	Addr  Address // always
	SData []byte  // nil if request, to be supplied by dpa
	Size  int64   // size of the data covered by the subtree encoded in this chunk
	//Source Peer // peer
	C    chan bool // to signal data delivery by the dpa
	ReqC chan bool // to signal the request done
	// contains filtered or unexported fields
}
Chunk also serves as a request object passed to ChunkStores. In case it is a retrieval request, Data is nil and Size is 0. Note that Size is not the size of the data chunk, which is Data.Size(), but the size of the subtree encoded in the chunk (0 if it is a request, to be supplied by the dpa).
func GenerateRandomChunk ¶
func GenerateRandomChunks ¶
func (*Chunk) GetErrored ¶
func (*Chunk) SetErrored ¶
func (*Chunk) WaitToStore ¶
type ChunkStore ¶
type ChunkStore interface {
	Put(*Chunk) // effectively there is no error even if there is an error
	Get(Address) (*Chunk, error)
	Close()
}
The ChunkStore interface is implemented by:

- MemStore: a memory cache
- DbStore: local disk/db store
- LocalStore: a combination (sequence) of memStore and dbStore
- NetStore: cloud storage abstraction layer
- FakeChunkStore: dummy store which doesn't store anything, just implements the interface (a minimal version is sketched below)
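As an illustration of how small the contract is, a hedged sketch of such a no-op store (not the package's own type):

// discardStore stores nothing and reports every chunk as missing.
type discardStore struct{}

func (discardStore) Put(*Chunk)                  {}
func (discardStore) Get(Address) (*Chunk, error) { return nil, ErrChunkNotFound }
func (discardStore) Close()                      {}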
type ChunkValidator ¶
type ChunkerParams ¶
type ChunkerParams struct {
// contains filtered or unexported fields
}
type ContentAddressValidator ¶
type ContentAddressValidator struct {
Hasher SwarmHasher
}
ContentAddressValidator provides a method for validating the content address of chunks. It holds the corresponding hasher to create the address.
func NewContentAddressValidator ¶
func NewContentAddressValidator(hasher SwarmHasher) *ContentAddressValidator
NewContentAddressValidator constructs a ContentAddressValidator with the given hasher.
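For instance, a validator can be wired into LocalStoreParams so that LocalStore.Put can reject chunks whose address does not match their content; this sketch assumes *ContentAddressValidator satisfies ChunkValidator, whose method set is not shown on this page:

func validatingParams() *LocalStoreParams {
	params := NewDefaultLocalStoreParams()
	// Assumption: *ContentAddressValidator implements ChunkValidator.
	params.Validators = append(params.Validators,
		NewContentAddressValidator(MakeHashFunc(DefaultHash)))
	return params
}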
type DBAPI ¶
type DBAPI struct {
// contains filtered or unexported fields
}
DBAPI is a wrapper of the DBs, providing the syncer with mockable, custom access to the local chunk store.
func NewDBAPI ¶
func NewDBAPI(loc *LocalStore) *DBAPI
func (*DBAPI) CurrentBucketStorageIndex ¶
CurrentBucketStorageIndex returns the current storage counter of the chunk db.
func (*DBAPI) GetOrCreateRequest ¶
GetOrCreateRequest obtains the chunk for an address, or creates a request db entry only.
type FileStore ¶
type FileStore struct {
	ChunkStore
	// contains filtered or unexported fields
}
func NewFileStore ¶
func NewFileStore(store ChunkStore, params *FileStoreParams) *FileStore
func NewLocalFileStore ¶
NewLocalFileStore is for testing locally.
func (*FileStore) Retrieve ¶
func (f *FileStore) Retrieve(addr Address) (reader *LazyChunkReader, isEncrypted bool)
Retrieve is a public API and the main direct entry point for document retrieval. It is used by the FS-aware API and httpaccess. Chunk retrieval blocks on netStore requests with a timeout, so the reader will report an error if retrieval of chunks within the requested range times out. It returns a reader with the chunk data and whether the content was encrypted.
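A sketch of streaming a whole document through the lazy reader, relying only on Retrieve and LazyChunkReader.Read as documented on this page; any non-EOF error is treated as a timed-out or missing chunk:

// Requires the io package for io.EOF.
func readAll(f *FileStore, addr Address) ([]byte, error) {
	reader, _ := f.Retrieve(addr) // second value reports whether content is encrypted
	var out []byte
	buf := make([]byte, DefaultChunkSize)
	for {
		n, err := reader.Read(buf)
		out = append(out, buf[:n]...)
		if err == io.EOF {
			return out, nil
		}
		if err != nil {
			return out, err // e.g. a chunk within the requested range timed out
		}
	}
}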
type FileStoreParams ¶
type FileStoreParams struct {
Hash string
}
func NewFileStoreParams ¶
func NewFileStoreParams() *FileStoreParams
type HashWithLength ¶
func (*HashWithLength) ResetWithLength ¶
func (h *HashWithLength) ResetWithLength(length []byte)
type JoinerParams ¶
type JoinerParams struct {
	ChunkerParams
	// contains filtered or unexported fields
}
type LDBDatabase ¶
type LDBDatabase struct {
// contains filtered or unexported fields
}
func NewLDBDatabase ¶
func NewLDBDatabase(file string) (*LDBDatabase, error)
func (*LDBDatabase) Close ¶
func (db *LDBDatabase) Close()
func (*LDBDatabase) Delete ¶
func (db *LDBDatabase) Delete(key []byte) error
func (*LDBDatabase) LastKnownTD ¶
func (db *LDBDatabase) LastKnownTD() []byte
func (*LDBDatabase) NewIterator ¶
func (db *LDBDatabase) NewIterator() iterator.Iterator
func (*LDBDatabase) Put ¶
func (db *LDBDatabase) Put(key []byte, value []byte)
type LDBStore ¶
type LDBStore struct {
// contains filtered or unexported fields
}
func NewLDBStore ¶
func NewLDBStore(params *LDBStoreParams) (s *LDBStore, err error)
TODO: Instead of passing the distance function, just pass the address from which distances are calculated, to avoid the appearance of a pluggable distance metric and the opportunities for bugs associated with providing a function different from the one that is actually used.
func NewMockDbStore ¶
func NewMockDbStore(params *LDBStoreParams, mockStore *mock.NodeStore) (s *LDBStore, err error)
NewMockDbStore creates a new instance of DbStore with mockStore set to a provided value. If mockStore argument is nil, this function behaves exactly as NewDbStore.
func (*LDBStore) CurrentBucketStorageIndex ¶
func (*LDBStore) CurrentStorageIndex ¶
type LDBStoreParams ¶
type LDBStoreParams struct {
	*StoreParams
	Path string
	Po   func(Address) uint8
}
func NewLDBStoreParams ¶
func NewLDBStoreParams(storeparams *StoreParams, path string) *LDBStoreParams
NewLDBStoreParams constructs LDBStoreParams with the specified values.
type LazyChunkReader ¶
type LazyChunkReader struct {
// contains filtered or unexported fields
}
LazyChunkReader implements LazySectionReader
func TreeJoin ¶
func TreeJoin(addr Address, getter Getter, depth int) *LazyChunkReader
Join reconstructs the original content based on a root key. When joining, the caller gets a LazySectionReader in return, which is seekable and implements on-demand fetching of chunks as and where they are read. New chunks to retrieve come from the Getter, which the caller provides. If an error is encountered during joining, it appears as a reader error on the SectionReader. As a result, partial reads from a document are possible even if other parts are corrupt or lost. The chunks are not meant to be validated by the chunker when joining; this is left to the DPA, which decides which sources are trusted.
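A sketch of a partial read through TreeJoin; getter is any Getter over the store the content was split into, the offset and length are arbitrary, and passing 0 for depth is an assumption since the parameter's semantics are not documented here:

func readSlice(addr Address, getter Getter) ([]byte, error) {
	reader := TreeJoin(addr, getter, 0)
	// Read 1 KiB starting at offset 8192; corrupt or lost chunks outside
	// this range do not prevent the partial read.
	buf := make([]byte, 1024)
	n, err := reader.ReadAt(buf, 8192)
	return buf[:n], err
}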
func (*LazyChunkReader) Read ¶
func (r *LazyChunkReader) Read(b []byte) (read int, err error)
Read keeps a cursor, so it cannot be called simultaneously; see ReadAt.
func (*LazyChunkReader) ReadAt ¶
func (r *LazyChunkReader) ReadAt(b []byte, off int64) (read int, err error)
ReadAt can be called numerous times; concurrent reads are allowed. Size() needs to be called synchronously on the LazyChunkReader first.
type LazySectionReader ¶
LazySectionReader provides Size, Seek, Read, and ReadAt.
type LazyTestSectionReader ¶
type LazyTestSectionReader struct {
*io.SectionReader
}
type LocalStore ¶
type LocalStore struct {
	Validators []ChunkValidator
	DbStore    *LDBStore
	// contains filtered or unexported fields
}
LocalStore is a combination of an in-memory db over a disk-persisted db. It implements Get/Put with fallback (caching) logic using any two ChunkStores.
func NewLocalStore ¶
func NewLocalStore(params *LocalStoreParams, mockStore *mock.NodeStore) (*LocalStore, error)
This constructor uses MemStore and DbStore as components
func NewTestLocalStoreForAddr ¶
func NewTestLocalStoreForAddr(params *LocalStoreParams) (*LocalStore, error)
func (*LocalStore) Get ¶
func (ls *LocalStore) Get(addr Address) (chunk *Chunk, err error)
Get looks up a chunk in the local stores. This method blocks until the chunk is retrieved, so an additional timeout may be needed to wrap this call if ChunkStores are remote and can have long latency.
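A hedged sketch of such a caller-side timeout; the helper name and the reuse of ErrChunkTimeout are illustrative:

// Requires the time package.
func getWithTimeout(ls *LocalStore, addr Address, d time.Duration) (*Chunk, error) {
	type result struct {
		chunk *Chunk
		err   error
	}
	res := make(chan result, 1)
	go func() {
		chunk, err := ls.Get(addr)
		res <- result{chunk, err}
	}()
	select {
	case r := <-res:
		return r.chunk, r.err
	case <-time.After(d):
		return nil, ErrChunkTimeout
	}
}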
func (*LocalStore) GetOrCreateRequest ¶
func (ls *LocalStore) GetOrCreateRequest(addr Address) (chunk *Chunk, created bool)
GetOrCreateRequest implements the retrieve logic common to local and network chunk retrieval requests.
func (*LocalStore) Put ¶
func (ls *LocalStore) Put(chunk *Chunk)
Put is responsible for doing validation and storage of the chunk by using configured ChunkValidators, MemStore and LDBStore. If the chunk is not valid, its GetErrored function will return ErrChunkInvalid. This method will check if the chunk is already in the MemStore and will return it if it is. If there is an error from MemStore.Get, it will be returned by calling GetErrored on the chunk. This method is responsible for closing the Chunk.ReqC channel when the chunk is stored in the memstore. After the LDBStore.Put, it is ensured that the MemStore contains the chunk with the same data, but a nil ReqC channel.
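A sketch of the resulting calling pattern; that GetErrored returns the error recorded during Put is inferred from the description above, since its exact signature is not shown here:

func putChecked(ls *LocalStore, chunk *Chunk) error {
	ls.Put(chunk)
	// Assumed: GetErrored() returns the error recorded during Put, e.g.
	// ErrChunkInvalid when no configured validator accepted the chunk.
	return chunk.GetErrored()
}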
func (*LocalStore) RequestsCacheLen ¶
func (ls *LocalStore) RequestsCacheLen() int
RequestsCacheLen returns the current number of outgoing requests stored in the cache
type LocalStoreParams ¶
type LocalStoreParams struct {
	*StoreParams
	ChunkDbPath string
	Validators  []ChunkValidator `toml:"-"`
}
func NewDefaultLocalStoreParams ¶
func NewDefaultLocalStoreParams() *LocalStoreParams
func (*LocalStoreParams) Init ¶
func (p *LocalStoreParams) Init(path string)
Init sets the chunk db path. This can only be set once all config options (file, cmd line, env vars) have been evaluated.
type MapChunkStore ¶
type MapChunkStore struct {
// contains filtered or unexported fields
}
MapChunkStore is a very simple ChunkStore implementation to store chunks in a map in memory.
func NewMapChunkStore ¶
func NewMapChunkStore() *MapChunkStore
func (*MapChunkStore) Close ¶
func (m *MapChunkStore) Close()
func (*MapChunkStore) Put ¶
func (m *MapChunkStore) Put(chunk *Chunk)
type MemStore ¶
type MemStore struct {
// contains filtered or unexported fields
}
func NewMemStore ¶
func NewMemStore(params *StoreParams, _ *LDBStore) (m *MemStore)
NewMemStore instantiates a MemStore cache. It keeps a record of all outgoing requests for chunks that should later be delivered by peer nodes, in the `requests` LRU cache, and it also keeps all frequently requested chunks in the `cache` LRU cache.
The `requests` LRU cache capacity should ideally never be reached; this is why, for the time being, it should be initialised with the same value as the LDBStore capacity.
type NetStore ¶
type NetStore struct {
// contains filtered or unexported fields
}
NetStore implements the ChunkStore interface. This chunk access layer assumes two chunk stores: local storage (e.g. LocalStore) and network storage (e.g. NetStore). Access by calling the network is blocking, with a timeout.
func NewNetStore ¶
func NewNetStore(localStore *LocalStore, retrieve func(chunk *Chunk) error) *NetStore
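A hedged construction sketch; the retrieve callback is where the caller plugs in its network layer, and requestFromNetwork is a hypothetical hook, not part of this package:

func newNetworkedStore(local *LocalStore, requestFromNetwork func(Address) error) *NetStore {
	return NewNetStore(local, func(chunk *Chunk) error {
		// Forward local misses to the caller-supplied network hook.
		return requestFromNetwork(chunk.Addr)
	})
}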
func (*NetStore) Get ¶
Get is the entry point for local retrieve requests; it waits for a response or times out.
Get uses the get method to retrieve the request, but retries if ErrChunkNotFound is returned by get, until netStoreRetryTimeout is reached.
func (*NetStore) GetWithTimeout ¶
GetWithTimeout makes a single retrieval attempt for a chunk with an explicit timeout parameter.
type Peer ¶
type Peer interface{}
Peer is recorded as the Source on the chunk. It should probably not be here; rather, the network should wrap the chunk object.
type Putter ¶
type Putter interface {
	Put(ChunkData) (Reference, error)
	// RefSize returns the length of the Reference created by this Putter
	RefSize() int64
	// Close is to indicate that no more chunk data will be Put on this Putter
	Close()
	// Wait returns when all data has been stored and Close() was called.
	Wait()
}
Putter is responsible for storing data and creating a reference for it.
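The intended calling sequence, per the method comments above, is Put, then Close, then Wait; a sketch over already-prepared chunk data:

func putAll(p Putter, pieces []ChunkData) ([]Reference, error) {
	refs := make([]Reference, 0, len(pieces))
	for _, cd := range pieces {
		ref, err := p.Put(cd)
		if err != nil {
			return nil, err
		}
		refs = append(refs, ref)
	}
	p.Close() // no more chunk data will be Put
	p.Wait()  // block until everything is stored
	return refs, nil
}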
type PyramidChunker ¶
type PyramidChunker struct {
// contains filtered or unexported fields
}
func NewPyramidSplitter ¶
func NewPyramidSplitter(params *PyramidSplitterParams) (pc *PyramidChunker)
func (*PyramidChunker) Append ¶
func (pc *PyramidChunker) Append() (k Address, wait func(), err error)
func (*PyramidChunker) Join ¶
func (pc *PyramidChunker) Join(addr Address, getter Getter, depth int) LazySectionReader
func (*PyramidChunker) Split ¶
func (pc *PyramidChunker) Split() (k Address, wait func(), err error)
type PyramidSplitterParams ¶
type PyramidSplitterParams struct {
	SplitterParams
	// contains filtered or unexported fields
}
type SplitterParams ¶
type SplitterParams struct {
	ChunkerParams
	// contains filtered or unexported fields
}
type StoreParams ¶
type StoreParams struct {
	Hash                       SwarmHasher `toml:"-"`
	DbCapacity                 uint64
	CacheCapacity              uint
	ChunkRequestsCacheCapacity uint
	BaseKey                    []byte
}
func NewDefaultStoreParams ¶
func NewDefaultStoreParams() *StoreParams
func NewStoreParams ¶
func NewStoreParams(ldbCap uint64, cacheCap uint, requestsCap uint, hash SwarmHasher, basekey []byte) *StoreParams
type SwarmHasher ¶
type SwarmHasher func() SwarmHash
func MakeHashFunc ¶
func MakeHashFunc(hash string) SwarmHasher
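A sketch of computing a chunk address with the resulting hasher; that SwarmHash embeds hash.Hash and adds ResetWithLength (as HashWithLength above suggests), and that the 8-byte length prefix is encoded with U64ToBytes, are assumptions from context:

func addressOf(data []byte) Address {
	h := MakeHashFunc(DefaultHash)()
	h.ResetWithLength(U64ToBytes(uint64(len(data)))) // assumed length-prefix convention
	h.Write(data)
	return Address(h.Sum(nil))
}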
type TreeChunker ¶
type TreeChunker struct {
// contains filtered or unexported fields
}
func NewTreeJoiner ¶
func NewTreeJoiner(params *JoinerParams) *TreeChunker
func NewTreeSplitter ¶
func NewTreeSplitter(params *TreeSplitterParams) *TreeChunker
func (*TreeChunker) Append ¶
func (tc *TreeChunker) Append() (Address, func(), error)
func (*TreeChunker) Join ¶
func (tc *TreeChunker) Join() *LazyChunkReader
func (*TreeChunker) Split ¶
func (tc *TreeChunker) Split() (k Address, wait func(), err error)
type TreeEntry ¶
type TreeEntry struct {
// contains filtered or unexported fields
}
TreeEntry is an entry to create a tree node.
func NewTreeEntry ¶
func NewTreeEntry(pyramid *PyramidChunker) *TreeEntry
type TreeSplitterParams ¶
type TreeSplitterParams struct { SplitterParams // contains filtered or unexported fields }
Source Files ¶
Directories ¶
Path | Synopsis
---|---
mock | Package mock defines types that are used by different implementations of mock storages.
mock/db | Package db implements a mock store that keeps all chunk data in a LevelDB database.
mock/mem | Package mem implements a mock store that keeps all chunk data in memory.
mock/rpc | Package rpc implements an RPC client that connects to a centralized mock store.
mock/test | Package test provides functions that are used for testing GlobalStorer implementations.