localstore

package
v1.1.0
Published: Aug 19, 2021 License: BSD-3-Clause Imports: 24 Imported by: 0

Documentation

Overview

Package localstore provides a disk storage layer for Swarm Chunk persistence. It uses swarm/shed abstractions.

The main type is DB which manages the storage by providing methods to access and add Chunks and to manage their status.

Modes are abstractions that do specific changes to Chunks. There are three mode types:

  • ModeGet, for Chunk access
  • ModePut, for adding Chunks to the database
  • ModeSet, for changing Chunk statuses

Every mode type has a corresponding type (Getter, Putter and Setter) that provides the methods to perform the operation, and that type should be injected into localstore consumers instead of the whole DB. This gives clearer insight into which operations a consumer performs on the database.

Getters, Putters and Setters accept different get, put and set modes to perform different actions. For example, ModeGet has two variants, ModeGetRequest and ModeGetSync, and a different Getter can be constructed with each: one is used when a chunk is requested and the other when it is synced, as these two events change the database differently.
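The injection pattern described above can be sketched in isolation. The interface and type names below are illustrative stand-ins, not the package's exact declarations; the point is that a consumer depends on a narrow, mode-bound interface rather than on the whole DB:

```go
package main

import "fmt"

// Getter mirrors the narrow, mode-bound read interface described above.
type Getter interface {
	Get(addr string) (string, error)
}

// requestGetter stands in for a Getter constructed with a request mode
// (e.g. ModeGetRequest in the real package).
type requestGetter struct{}

func (requestGetter) Get(addr string) (string, error) {
	return "chunk@" + addr, nil
}

// fetch depends only on Getter, so it is clear from its signature that it
// can only read from the store, never write to it.
func fetch(g Getter, addr string) string {
	ch, _ := g.Get(addr)
	return ch
}

func main() {
	fmt.Println(fetch(requestGetter{}, "beef")) // chunk@beef
}
```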

Subscription methods are implemented for the specific purpose of continuous iteration over Chunks that should be provided to Push and Pull syncing.

DB implements an internal garbage collector that removes only synced Chunks from the database based on their most recent access time.

Internally, DB stores Chunk data and any required information, such as store and access timestamps, in different shed indexes that can be iterated over by the garbage collector or by subscriptions.

Index

Constants

const DBSchemaBatchIndex = "batch-index"

DBSchemaBatchIndex is the bee schema identifier for the batch index.

const DBSchemaCode = "code"

DBSchemaCode is the first bee schema identifier.

const DBSchemaYuj = "yuj"

DBSchemaYuj is the bee schema identifier for the initial iteration of storage incentives.

Variables

var DBSchemaCurrent = DBSchemaBatchIndex

DBSchemaCurrent represents the DB schema we want to use. The actual/current DB schema might differ until migrations are run.

var (
	// ErrInvalidMode is returned when an unknown Mode
	// is provided to the function.
	ErrInvalidMode = errors.New("invalid mode")
)
var (
	ErrOverwrite = errors.New("index already exists - double issuance on immutable batch")
)

Functions

This section is empty.

Types

type DB

type DB struct {
	// contains filtered or unexported fields
}

DB is the local store implementation and holds database related objects.

func New

func New(path string, baseKey []byte, ss storage.StateStorer, o *Options, logger logging.Logger) (db *DB, err error)

New returns a new DB. All fields and indexes are initialized, and possible conflicts with the schema of an existing database are checked. One goroutine for writing batches is created.

func (*DB) Close

func (db *DB) Close() (err error)

Close closes the underlying database.

func (*DB) DebugIndices

func (db *DB) DebugIndices() (indexInfo map[string]int, err error)

DebugIndices returns the sizes of all indexes in localstore. The keys of the returned map are index names; the values are the numbers of elements in each index.

func (*DB) Export

func (db *DB) Export(w io.Writer) (count int64, err error)

Export writes tar structured data of all chunks in the retrieval data index to the writer. It returns the number of chunks exported.

func (*DB) Get

func (db *DB) Get(ctx context.Context, mode storage.ModeGet, addr swarm.Address) (ch swarm.Chunk, err error)

Get returns a chunk from the database. If the chunk is not found, storage.ErrNotFound is returned. All indexes required by the Getter mode are updated. Get is required to implement the chunk.Store interface.

func (*DB) GetMulti

func (db *DB) GetMulti(ctx context.Context, mode storage.ModeGet, addrs ...swarm.Address) (chunks []swarm.Chunk, err error)

GetMulti returns chunks from the database. If one of the chunks is not found, storage.ErrNotFound is returned. All indexes required by the Getter mode are updated. GetMulti is required to implement the chunk.Store interface.

func (*DB) Has

func (db *DB) Has(ctx context.Context, addr swarm.Address) (bool, error)

Has returns true if the chunk is stored in the database.

func (*DB) HasMulti

func (db *DB) HasMulti(ctx context.Context, addrs ...swarm.Address) ([]bool, error)

HasMulti returns a slice of booleans indicating whether each of the provided chunks is stored in the database.

func (*DB) Import

func (db *DB) Import(ctx context.Context, r io.Reader) (count int64, err error)

Import reads tar structured data from the reader and stores chunks in the database. It returns the number of chunks imported.
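The Export/Import round trip can be sketched with only the standard library's archive/tar package. The entry naming below (one tar entry per chunk, named by address) is an assumption made for illustration, not the package's exact on-tape format:

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
)

// exportTar writes one tar entry per chunk, keyed by address, the way
// Export streams chunks to an io.Writer.
func exportTar(chunks map[string][]byte) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	for name, data := range chunks {
		hdr := &tar.Header{Name: name, Mode: 0600, Size: int64(len(data))}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(data); err != nil {
			return nil, err
		}
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// importTar reads the stream back and returns the number of chunks,
// mirroring Import's count return value.
func importTar(b []byte) (count int64, err error) {
	tr := tar.NewReader(bytes.NewReader(b))
	for {
		if _, err := tr.Next(); err == io.EOF {
			break
		} else if err != nil {
			return count, err
		}
		if _, err := io.Copy(io.Discard, tr); err != nil {
			return count, err
		}
		count++
	}
	return count, nil
}

func main() {
	b, _ := exportTar(map[string][]byte{"aa": []byte("one"), "bb": []byte("two")})
	n, _ := importTar(b)
	fmt.Println(n) // 2
}
```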

func (*DB) LastPullSubscriptionBinID

func (db *DB) LastPullSubscriptionBinID(bin uint8) (id uint64, err error)

LastPullSubscriptionBinID returns the bin ID of the latest Chunk in the pull syncing index for the provided bin. If there are no chunks in that bin, 0 is returned.

func (*DB) Metrics

func (db *DB) Metrics() []prometheus.Collector

func (*DB) Put

func (db *DB) Put(ctx context.Context, mode storage.ModePut, chs ...swarm.Chunk) (exist []bool, err error)

Put stores Chunks to database and depending on the Putter mode, it updates required indexes. Put is required to implement storage.Store interface.

func (*DB) Set

func (db *DB) Set(ctx context.Context, mode storage.ModeSet, addrs ...swarm.Address) (err error)

Set updates database indexes for chunks represented by provided addresses. Set is required to implement chunk.Store interface.

func (*DB) SubscribePull

func (db *DB) SubscribePull(ctx context.Context, bin uint8, since, until uint64) (c <-chan storage.Descriptor, closed <-chan struct{}, stop func())

SubscribePull returns a channel that provides chunk addresses and stored times from the pull syncing index. The pull syncing index can only be subscribed to for a particular proximity order bin. If since is not 0, the iteration starts from the item with binID == since. If until is not 0, only chunks stored up to that ID are sent to the channel, and the returned channel is then closed. The since-until interval is closed on both sides: [since, until]. The returned stop function terminates current and further iterations without errors and closes the returned channel. Make sure to check the second value received from the channel and stop iterating when it is false.
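The closed-interval semantics and the "stop when the channel closes" consumption pattern can be sketched standalone. The producer below simulates the subscription channel for one bin; it is not the real index walk, and delivers only bin IDs rather than full descriptors:

```go
package main

import "fmt"

// pull simulates the channel returned by SubscribePull for one bin: items
// with binID in the closed interval [since, until] are delivered, then the
// channel is closed.
func pull(since, until uint64) <-chan uint64 {
	c := make(chan uint64)
	go func() {
		defer close(c)
		for id := since; id <= until; id++ { // both endpoints included
			c <- id
		}
	}()
	return c
}

func main() {
	var got []uint64
	// Receiving with `range` ends when the channel closes, which is the
	// same condition as the second (ok) receive value turning false.
	for id := range pull(2, 4) {
		got = append(got, id)
	}
	fmt.Println(got) // [2 3 4]
}
```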

func (*DB) SubscribePush

func (db *DB) SubscribePush(ctx context.Context) (c <-chan swarm.Chunk, stop func())

SubscribePush returns a channel that provides chunks in the order of the push syncing index. The returned stop function terminates current and further iterations and closes the returned channel without errors. Make sure to check the second value received from the channel and stop iterating when it is false.
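The channel-plus-stop shape can be sketched standalone. The subscription below streams integers instead of chunks and is only an illustration of the contract, not the real push index iteration:

```go
package main

import (
	"fmt"
	"sync"
)

// subscribe mimics the SubscribePush shape: a receive-only channel plus a
// stop function that terminates the producer and closes the channel.
func subscribe() (<-chan int, func()) {
	c := make(chan int)
	quit := make(chan struct{})
	var once sync.Once
	stop := func() { once.Do(func() { close(quit) }) } // idempotent in this sketch
	go func() {
		defer close(c)
		for i := 0; ; i++ {
			select {
			case c <- i:
			case <-quit:
				return
			}
		}
	}()
	return c, stop
}

func main() {
	c, stop := subscribe()
	for v := range c {
		fmt.Println(v)
		if v == 2 {
			stop() // terminate current and further iterations
			break
		}
	}
	// prints 0, 1, 2
}
```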

func (*DB) UnreserveBatch added in v0.6.0

func (db *DB) UnreserveBatch(id []byte, radius uint8) (evicted uint64, err error)

UnreserveBatch atomically unpins chunks of a batch in proximity order up to and including the given radius. Unpinning puts every chunk whose pin counter drops to 0 into the gc index, so if a chunk was pinned only by the reserve, unreserving it makes it gc-able.

type Options

type Options struct {
	// Capacity is a limit that triggers garbage collection when
	// number of items in gcIndex equals or exceeds it.
	Capacity uint64
	// ReserveCapacity is the capacity of the reserve.
	ReserveCapacity uint64
	// UnreserveFunc is an iterator needed to facilitate reserve
	// eviction once ReserveCapacity is reached.
	UnreserveFunc func(postage.UnreserveIteratorFn) error
	// OpenFilesLimit defines the upper bound of open files that the
	// localstore should maintain at any point in time. It is
	// passed on to the shed constructor.
	OpenFilesLimit uint64
	// BlockCacheCapacity defines the block cache capacity and is passed
	// on to shed.
	BlockCacheCapacity uint64
	// WriteBufferSize defines the size of the write buffer and is passed on to shed.
	WriteBufferSize uint64
	// DisableSeeksCompaction toggles the seek driven compactions feature on leveldb
	// and is passed on to shed.
	DisableSeeksCompaction bool

	// MetricsPrefix defines a prefix for metrics names.
	MetricsPrefix string
	Tags          *tags.Tags
}

Options struct holds optional parameters for configuring DB.
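As an illustration, an Options value might be filled in like this before being passed to New. This is a config fragment, not a runnable program; the field values are arbitrary examples rather than recommended defaults, and path, baseKey, stateStore and logger are assumed to come from the caller:

```go
o := &localstore.Options{
	Capacity:               1_000_000,        // run GC once gcIndex holds this many items
	ReserveCapacity:        4_194_304,        // example value
	OpenFilesLimit:         256,              // passed on to the shed constructor
	BlockCacheCapacity:     32 * 1024 * 1024, // passed on to shed
	WriteBufferSize:        32 * 1024 * 1024, // passed on to shed
	DisableSeeksCompaction: false,
	MetricsPrefix:          "localstore",
}
db, err := localstore.New(path, baseKey, stateStore, o, logger)
```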
