Documentation ¶
Overview ¶
Package localstore provides a disk storage layer for Swarm Chunk persistence. It uses the swarm/shed abstractions.
The main type is DB which manages the storage by providing methods to access and add Chunks and to manage their status.
Modes are abstractions that do specific changes to Chunks. There are three mode types:
- ModeGet, for Chunk access
- ModePut, for adding Chunks to the database
- ModeSet, for changing Chunk statuses
Every mode type has a corresponding type (Getter, Putter and Setter) that provides the methods to perform that operation, and that type should be injected into localstore consumers instead of the whole DB. This gives clearer insight into which operations a consumer performs on the database.
Getters, Putters and Setters accept different get, put and set modes to perform different actions. For example, ModeGet has two variants, ModeGetRequest and ModeGetSync, and two different Getters can be constructed with them: one used when a chunk is requested and one when it is synced, as these two events change the database differently, as sketched below.
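A minimal sketch of the two read modes, assuming the bee module's import paths and the Get method and mode constants listed in the Index below:

package example

import (
	"context"

	"github.com/ethersphere/bee/pkg/localstore"
	"github.com/ethersphere/bee/pkg/storage"
	"github.com/ethersphere/bee/pkg/swarm"
)

// retrieve reads a chunk on behalf of an end-user request; access
// indexes are updated so the chunk counts as recently used.
func retrieve(ctx context.Context, db *localstore.DB, addr swarm.Address) (swarm.Chunk, error) {
	return db.Get(ctx, storage.ModeGetRequest, addr)
}

// readForSync performs the same lookup for syncing; the database is
// updated differently because a sync read is not an end-user access.
func readForSync(ctx context.Context, db *localstore.DB, addr swarm.Address) (swarm.Chunk, error) {
	return db.Get(ctx, storage.ModeGetSync, addr)
}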
Subscription methods are implemented for the specific purpose of continuous iteration over Chunks that should be provided to push and pull syncing.
DB implements an internal garbage collector that removes only synced Chunks from the database based on their most recent access time.
Internally, DB stores Chunk data and any required information, such as store and access timestamps, in different shed indexes that can be iterated over by the garbage collector or by subscriptions.
Index ¶
- Constants
- Variables
- type DB
- func (db *DB) Close() error
- func (db *DB) ComputeReserveSize(startPO uint8) (uint64, error)
- func (db *DB) DebugIndices() (indexInfo map[string]int, err error)
- func (db *DB) Export(w io.Writer) (count int64, err error)
- func (db *DB) Get(ctx context.Context, mode storage.ModeGet, addr swarm.Address) (ch swarm.Chunk, err error)
- func (db *DB) GetMulti(ctx context.Context, mode storage.ModeGet, addrs ...swarm.Address) (chunks []swarm.Chunk, err error)
- func (db *DB) Has(ctx context.Context, addr swarm.Address) (bool, error)
- func (db *DB) HasMulti(ctx context.Context, addrs ...swarm.Address) ([]bool, error)
- func (db *DB) Import(ctx context.Context, r io.Reader) (count int64, err error)
- func (db *DB) LastPullSubscriptionBinID(bin uint8) (id uint64, err error)
- func (db *DB) Metrics() []prometheus.Collector
- func (db *DB) Put(ctx context.Context, mode storage.ModePut, chs ...swarm.Chunk) (exist []bool, err error)
- func (db *DB) ReserveCapacity() uint64
- func (db *DB) ReserveSample(ctx context.Context, anchor []byte, storageRadius uint8, consensusTime uint64) (storage.Sample, error)
- func (db *DB) Set(ctx context.Context, mode storage.ModeSet, addrs ...swarm.Address) (err error)
- func (db *DB) SubscribePull(ctx context.Context, bin uint8, since, until uint64) (c <-chan storage.Descriptor, closed <-chan struct{}, stop func())
- func (db *DB) SubscribePush(ctx context.Context, skipf func([]byte) bool) (c <-chan swarm.Chunk, reset, stop func())
- func (db *DB) UnreserveBatch(id []byte, radius uint8) (evicted uint64, err error)
- type Options
Constants ¶
const DBSchemaBatchIndex = "batch-index"
DBSchemaBatchIndex is the bee schema identifier for batch index.
const DBSchemaCatharsis = "catharsis"
DBSchemaCatharsis is the bee schema identifier for resetting gcIndex.
const DBSchemaCode = "code"
DBSchemaCode is the first bee schema identifier.
const DBSchemaDeadPush = "dead-push"
DBSchemaDeadPush is the bee schema identifier for dead-push.
const DBSchemaSharky = "sharky"
DBSchemaSharky is the bee schema identifier for sharky.
const DBSchemaYuj = "yuj"
DBSchemaYuj is the bee schema identifier for storage incentives initial iteration.
Variables ¶
var (
	ErrOverwriteImmutable = errors.New("index already exists - double issuance on immutable batch")
	ErrOverwrite          = errors.New("index already exists with newer timestamp - double issuance on batch")
)
var DBSchemaCurrent = DBSchemaCatharsis
DBSchemaCurrent represents the DB schema we want to use. The actual/current DB schema might differ until migrations are run.
var (
	// ErrInvalidMode is returned when an unknown Mode
	// is provided to the function.
	ErrInvalidMode = errors.New("invalid mode")
)
Functions ¶
This section is empty.
Types ¶
type DB ¶
type DB struct {
// contains filtered or unexported fields
}
DB is the local store implementation and holds database related objects.
func New ¶
func New(path string, baseKey []byte, ss storage.StateStorer, o *Options, logger log.Logger) (db *DB, err error)
New returns a new DB. All fields and indexes are initialized and possible conflicts with the schema of an existing database are checked. One goroutine for writing batches is created.
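A construction sketch; the path and option values below are placeholders rather than recommended settings, and baseKey, stateStore and logger would come from node setup:

func openStore(baseKey []byte, stateStore storage.StateStorer, logger log.Logger) (*localstore.DB, error) {
	db, err := localstore.New(
		"/var/lib/bee/localstore", // on-disk path; placeholder
		baseKey,                   // the node's overlay address bytes
		stateStore,
		&localstore.Options{
			Capacity:        1_000_000, // gc trigger threshold in chunks; illustrative
			ReserveCapacity: 4_194_304, // illustrative
		},
		logger,
	)
	if err != nil {
		// schema conflicts with an existing database surface here
		return nil, err
	}
	return db, nil
}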
func (*DB) ComputeReserveSize ¶ added in v1.8.2
ComputeReserveSize iterates on the pull index to count all chunks starting at some proximity order, using a generated address whose PO is used as the starting prefix by the index.
func (*DB) DebugIndices ¶
DebugIndices returns the sizes of all indexes in localstore. The returned map's keys are the index names and its values are the number of elements in each index.
func (*DB) Export ¶
Export writes all chunks in the retrieval data index to the writer as tar-structured data. It returns the number of chunks exported.
func (*DB) Get ¶
func (db *DB) Get(ctx context.Context, mode storage.ModeGet, addr swarm.Address) (ch swarm.Chunk, err error)
Get returns a chunk from the database. If the chunk is not found, storage.ErrNotFound will be returned. All indexes required by the Getter mode will be updated. Get is required to implement the chunk.Store interface.
func (*DB) GetMulti ¶
func (db *DB) GetMulti(ctx context.Context, mode storage.ModeGet, addrs ...swarm.Address) (chunks []swarm.Chunk, err error)
GetMulti returns chunks from the database. If one of the chunks is not found, storage.ErrNotFound will be returned. All indexes required by the Getter mode will be updated. GetMulti is required to implement the chunk.Store interface.
func (*DB) HasMulti ¶
HasMulti returns a slice of booleans indicating whether each of the provided chunks is stored in the database.
func (*DB) Import ¶
Import reads tar-structured data from the reader and stores chunks in the database. It returns the number of chunks imported.
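A round-trip sketch using Export and Import; the file name is a placeholder:

import (
	"context"
	"os"
)

// copyStore exports all chunks from src into a tar file and imports
// them into dst.
func copyStore(ctx context.Context, src, dst *localstore.DB) error {
	f, err := os.Create("localstore.tar")
	if err != nil {
		return err
	}
	if _, err = src.Export(f); err != nil {
		f.Close()
		return err
	}
	if err = f.Close(); err != nil {
		return err
	}
	r, err := os.Open("localstore.tar")
	if err != nil {
		return err
	}
	defer r.Close()
	_, err = dst.Import(ctx, r)
	return err
}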
func (*DB) LastPullSubscriptionBinID ¶
LastPullSubscriptionBinID returns the bin ID of the latest Chunk in the pull syncing index for the provided bin. If there are no chunks in that bin, 0 is returned.
func (*DB) Metrics ¶
func (db *DB) Metrics() []prometheus.Collector
func (*DB) Put ¶
func (db *DB) Put(ctx context.Context, mode storage.ModePut, chs ...swarm.Chunk) (exist []bool, err error)
Put stores Chunks in the database and, depending on the Putter mode, updates the required indexes. Put is required to implement the storage.Store interface.
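For example, storing locally uploaded chunks; ModePutUpload is assumed here to be the put mode for uploaded content:

// storeUploaded puts chunks and reports how many were new to the
// database, using the returned exist flags.
func storeUploaded(ctx context.Context, db *localstore.DB, chunks ...swarm.Chunk) (newChunks int, err error) {
	exist, err := db.Put(ctx, storage.ModePutUpload, chunks...)
	if err != nil {
		return 0, err
	}
	for _, already := range exist {
		if !already {
			newChunks++ // this chunk was not present before the call
		}
	}
	return newChunks, nil
}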
func (*DB) ReserveCapacity ¶ added in v1.8.0
ReserveCapacity returns the configured reserve capacity.
func (*DB) ReserveSample ¶ added in v1.9.0
func (db *DB) ReserveSample(ctx context.Context, anchor []byte, storageRadius uint8, consensusTime uint64) (storage.Sample, error)
ReserveSample generates the sample of a node's reserve storage that the storage incentives agent needs in order to participate in the lottery round. To generate this sample we iterate through all the chunks in the node's reserve and calculate the transformed hashes of all the chunks, using the anchor as the salt. To generate the transformed hashes, we use the standard library hmac keyed-hash implementation with the anchor as the key.
Nodes need to calculate the sample as efficiently as possible: the lottery round is time-based, so nodes participating in the round must finish this calculation within the round limits. To achieve this we use a simple pipeline pattern:
Iterate chunk addresses -> Get the chunk data and calculate transformed hash -> Assemble the sample
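A sketch of the transformed-hash step only. The text above states that the standard library hmac is keyed with the anchor; SHA-256 as the inner hash is an assumption for illustration and may differ from the hasher the sampler actually uses:

import (
	"crypto/hmac"
	"crypto/sha256"
)

// transformedHash keys a standard HMAC with the round anchor (the
// salt) and hashes the chunk data with it.
func transformedHash(anchor, chunkData []byte) []byte {
	mac := hmac.New(sha256.New, anchor) // the anchor acts as the key
	mac.Write(chunkData)                // hash.Hash writes never fail
	return mac.Sum(nil)
}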
func (*DB) Set ¶
Set updates database indexes for chunks represented by the provided addresses. Set is required to implement the chunk.Store interface.
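For example, marking chunks once push syncing has confirmed them; ModeSetSync is assumed to be the relevant set mode:

// markSynced is a sketch of a post-sync status update.
func markSynced(ctx context.Context, db *localstore.DB, addrs ...swarm.Address) error {
	return db.Set(ctx, storage.ModeSetSync, addrs...)
}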
func (*DB) SubscribePull ¶
func (db *DB) SubscribePull(ctx context.Context, bin uint8, since, until uint64) (c <-chan storage.Descriptor, closed <-chan struct{}, stop func())
SubscribePull returns a channel that provides chunk descriptors (storage.Descriptor values) from the pull syncing index. A subscription can only be made for one particular proximity order bin. If since is not 0, the iteration will start from the since item (the item with binID == since). If until is not 0, only chunks stored up to this bin ID will be sent to the channel, after which the returned channel is closed. The since-until interval is closed on both sides: [since, until]. The returned stop function terminates current and further iterations without errors and closes the returned channel. Make sure to check the second value of each channel receive and stop iterating when it is false.
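A consumption sketch that resumes iteration for one bin; since the [since, until] interval is inclusive, the resume point after the last processed item is last+1. storage.Descriptor is assumed to carry the chunk address and bin ID:

// drainBin resumes pull iteration for a single bin until the
// subscription ends or the database closes.
func drainBin(ctx context.Context, db *localstore.DB, bin uint8) error {
	last, err := db.LastPullSubscriptionBinID(bin)
	if err != nil {
		return err
	}
	c, closed, stop := db.SubscribePull(ctx, bin, last+1, 0)
	defer stop()
	for {
		select {
		case d, ok := <-c:
			if !ok {
				return nil // stop was called or until was reached
			}
			_ = d.Address // offer d.Address / d.BinID to the peer here
		case <-closed:
			return nil // the database was closed
		}
	}
}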
func (*DB) SubscribePush ¶
func (db *DB) SubscribePush(ctx context.Context, skipf func([]byte) bool) (c <-chan swarm.Chunk, reset, stop func())
SubscribePush returns a channel that provides chunks in the order of the push syncing index. The returned stop function terminates current and further iterations and closes the returned channel without errors. Make sure to check the second value of each channel receive and stop iterating when it is false.
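A minimal consumer sketch; skipf is assumed to report whether the chunk at the given address should be skipped, and reset to restart iteration from the beginning of the index:

// pushLoop feeds chunks from the push syncing index to a consumer.
func pushLoop(ctx context.Context, db *localstore.DB) {
	c, reset, stop := db.SubscribePush(ctx, func(addr []byte) bool {
		return false // skip nothing in this sketch
	})
	defer stop()
	_ = reset // call to re-iterate chunks that failed to sync
	for ch := range c {
		_ = ch // hand the chunk to the push-sync protocol here
	}
}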
func (*DB) UnreserveBatch ¶ added in v0.6.0
UnreserveBatch atomically unpins the chunks of a batch in proximity order, up to and including the given proximity order (the radius). Unpinning causes all chunks with a pin counter of 0 to be put in the gc index, so if a chunk was only pinned by the reserve, unreserving it makes it gc-able.
type Options ¶
type Options struct {
	// Capacity is a limit that triggers garbage collection when
	// the number of items in gcIndex equals or exceeds it.
	Capacity uint64
	// ReserveCapacity is the capacity of the reserve.
	ReserveCapacity uint64
	// UnreserveFunc is an iterator needed to facilitate reserve
	// eviction once ReserveCapacity is reached.
	UnreserveFunc func(postage.UnreserveIteratorFn) error
	// OpenFilesLimit defines the upper bound of open files that the
	// localstore should maintain at any point of time. It is
	// passed on to the shed constructor.
	OpenFilesLimit uint64
	// BlockCacheCapacity defines the block cache capacity and is passed
	// on to shed.
	BlockCacheCapacity uint64
	// WriteBufferSize defines the size of the write buffer and is passed
	// on to shed.
	WriteBufferSize uint64
	// DisableSeeksCompaction toggles the seek-driven compactions feature on leveldb
	// and is passed on to shed.
	DisableSeeksCompaction bool
	// ValidStamp is the stamp validator for the reserve sampler.
	ValidStamp postage.ValidStampFn
	// MetricsPrefix defines a prefix for metrics names.
	MetricsPrefix string
	Tags          *tags.Tags
}
Options struct holds optional parameters for configuring DB.
Source Files ¶
- disaster_recovery.go
- doc.go
- export.go
- gc.go
- localstore.go
- metrics.go
- migration.go
- migration_batch_index.go
- migration_dead_push.go
- migration_gc_reset.go
- migration_sharky.go
- migration_yuj.go
- mode_get.go
- mode_get_multi.go
- mode_has.go
- mode_put.go
- mode_set.go
- pin.go
- reserve.go
- sampler.go
- schema.go
- subscription_pull.go
- subscription_push.go