Documentation ¶
Overview ¶
Package localstore provides a disk storage layer for Swarm Chunk persistence. It uses swarm/shed abstractions on top of the github.com/syndtr/goleveldb LevelDB implementation.
The main type is DB which manages the storage by providing methods to access and add Chunks and to manage their status.
Modes are abstractions that do specific changes to Chunks. There are three mode types:
- ModeGet, for Chunk access
- ModePut, for adding Chunks to the database
- ModeSet, for changing Chunk statuses
Every mode type has a corresponding type (Getter, Putter and Setter) that provides the appropriate method to perform the operation, and that type should be injected into localstore consumers instead of the whole DB. This gives a clearer insight into which operations a consumer performs on the database.
Getters, Putters and Setters accept different get, put and set modes to perform different actions. For example, ModeGet has two different variables, ModeGetRequest and ModeGetSync, and two different Getters can be constructed with them: one used when a chunk is requested and one used when a chunk is synced, as these two events change the database differently.
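A minimal sketch of how the mode selects the behaviour of the same lookup, assuming an open db *DB, a context ctx and a chunk address addr obtained elsewhere, with the chunk package imported:

// Serving a retrieve request: the request mode also updates
// access time related indexes.
ch, err := db.Get(ctx, chunk.ModeGetRequest, addr)
if err != nil {
	// handle the error, e.g. chunk.ErrChunkNotFound
}
_ = ch

// The same lookup issued during syncing uses the sync mode,
// which changes the database differently.
ch, err = db.Get(ctx, chunk.ModeGetSync, addr)
if err != nil {
	// handle the error
}
_ = ch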
Subscription methods are implemented for the specific purpose of continuous iteration over Chunks that should be provided to Push and Pull syncing.
DB implements an internal garbage collector that removes only synced Chunks from the database based on their most recent access time.
Internally, DB stores Chunk data and any required information, such as store and access timestamps, in different shed indexes that can be iterated on by the garbage collector or by subscriptions.
Index ¶
- Constants
- Variables
- func IsLegacyDatabase(datadir string) bool
- type DB
- func (db *DB) Close() (err error)
- func (db *DB) Export(w io.Writer) (count int64, err error)
- func (db *DB) Get(ctx context.Context, mode chunk.ModeGet, addr chunk.Address) (ch chunk.Chunk, err error)
- func (db *DB) Has(ctx context.Context, addr chunk.Address) (bool, error)
- func (db *DB) Import(r io.Reader, legacy bool) (count int64, err error)
- func (db *DB) LastPullSubscriptionBinID(bin uint8) (id uint64, err error)
- func (db *DB) Put(ctx context.Context, mode chunk.ModePut, ch chunk.Chunk) (exists bool, err error)
- func (db *DB) Set(ctx context.Context, mode chunk.ModeSet, addr chunk.Address) (err error)
- func (db *DB) SubscribePull(ctx context.Context, bin uint8, since, until uint64) (c <-chan chunk.Descriptor, stop func())
- func (db *DB) SubscribePush(ctx context.Context) (c <-chan chunk.Chunk, stop func())
- type Options
Constants ¶
const CurrentDbSchema = DbSchemaSanctuary
The DB schema we want to use. The actual/current DB schema might differ until migrations are run.
const DbSchemaHalloween = "halloween"
"halloween" is here because we had a screw in the garbage collector index. Because of that we had to rebuild the GC index to get rid of erroneous entries and that takes a long time. This schema is used for bookkeeping, so rebuild index will run just once.
const DbSchemaNone = ""
There was a time when we had no schema at all.
const DbSchemaPurity = "purity"
"purity" is the first formal schema of LevelDB we release together with Swarm 0.3.5
const DbSchemaSanctuary = "sanctuary"
Variables ¶
var (
	// ErrInvalidMode is returned when an unknown Mode
	// is provided to the function.
	ErrInvalidMode = errors.New("invalid mode")
	// ErrAddressLockTimeout is returned when the same chunk
	// is updated in parallel and one of the updates
	// takes longer than the configured timeout duration.
	ErrAddressLockTimeout = errors.New("address lock timeout")
)
Functions ¶
func IsLegacyDatabase ¶
IsLegacyDatabase returns true if a legacy database is present in the datadir.
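A minimal sketch; the data directory path is a placeholder:

if localstore.IsLegacyDatabase("/path/to/datadir") {
	// the data directory holds a legacy database
}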
Types ¶
type DB ¶
type DB struct {
// contains filtered or unexported fields
}
DB is the local store implementation and holds database related objects.
func New ¶
New returns a new DB. All fields and indexes are initialized and possible conflicts with the schema of an existing database are checked. One goroutine for writing batches is created.
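New's signature is not shown on this page; the sketch below assumes it takes a data directory path, a base key and an *Options value, as in the swarm storage/localstore package. The path, key and capacity are placeholders:

baseKey := make([]byte, 32) // placeholder; normally derived from the node's overlay address
db, err := localstore.New("/path/to/datadir", baseKey, &localstore.Options{
	Capacity: 5000000, // placeholder garbage collection target, in chunks
})
if err != nil {
	// handle schema conflict or initialization errors
}
defer db.Close()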
func (*DB) Export ¶
Export writes tar-structured data for all chunks in the retrieval data index to the writer. It returns the number of chunks exported.
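A usage sketch, assuming an open db *DB and the os and log packages imported; the file name is a placeholder:

f, err := os.Create("chunks.tar") // placeholder file name
if err != nil {
	// handle error
}
defer f.Close()

count, err := db.Export(f)
if err != nil {
	// handle error
}
log.Printf("exported %d chunks", count)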
func (*DB) Get ¶
func (db *DB) Get(ctx context.Context, mode chunk.ModeGet, addr chunk.Address) (ch chunk.Chunk, err error)
Get returns a chunk from the database. If the chunk is not found, chunk.ErrChunkNotFound will be returned. All indexes required by the Getter Mode will be updated. Get is required to implement the chunk.Store interface.
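A sketch of the not-found check, assuming db, ctx and addr as in the overview example:

ch, err := db.Get(ctx, chunk.ModeGetRequest, addr)
if err == chunk.ErrChunkNotFound {
	// the chunk is not stored locally
} else if err != nil {
	// other database error
}
_ = ch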
func (*DB) Import ¶
Import reads tar-structured data from the reader and stores chunks in the database. It returns the number of chunks imported.
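A usage sketch mirroring the Export example, assuming db and the os and log packages; the file name is a placeholder:

f, err := os.Open("chunks.tar") // placeholder file name
if err != nil {
	// handle error
}
defer f.Close()

// the second argument marks the archive as exported from a
// legacy database format
count, err := db.Import(f, false)
if err != nil {
	// handle error
}
log.Printf("imported %d chunks", count)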
func (*DB) LastPullSubscriptionBinID ¶
LastPullSubscriptionBinID returns the bin ID of the latest Chunk in the pull syncing index for the provided bin. If there are no chunks in that bin, 0 is returned.
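One possible use, sketched here under the assumption of an open db *DB, a context ctx and a proximity order bin po, is to subscribe only to items stored after the latest known one:

since, err := db.LastPullSubscriptionBinID(po)
if err != nil {
	// handle error
}
// iterate chunks stored after the latest known item in this bin
c, stop := db.SubscribePull(ctx, po, since, 0)
defer stop()
_ = c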
func (*DB) Put ¶
Put stores the Chunk in the database and, depending on the Putter mode, updates the required indexes. Put is required to implement the chunk.Store interface.
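A minimal sketch of storing an uploaded chunk. chunk.NewChunk and chunk.ModePutUpload are not documented on this page and are assumed from the chunk package; addr and data are placeholders:

ch := chunk.NewChunk(addr, data) // assumed constructor from the chunk package
exists, err := db.Put(ctx, chunk.ModePutUpload, ch)
if err != nil {
	// handle error
}
if exists {
	// the chunk was already stored in the database
}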
func (*DB) Set ¶
Set updates database indexes for a specific chunk represented by the address. Set is required to implement the chunk.Store interface.
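For example, after a chunk has been push-synced its status can be updated. chunk.ModeSetSync is not documented on this page and is assumed from the chunk package:

// mark the chunk as synced; per the overview, only synced chunks
// are removed by the garbage collector
if err := db.Set(ctx, chunk.ModeSetSync, addr); err != nil {
	// handle error
}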
func (*DB) SubscribePull ¶
func (db *DB) SubscribePull(ctx context.Context, bin uint8, since, until uint64) (c <-chan chunk.Descriptor, stop func())
SubscribePull returns a channel that provides chunk addresses and stored times from the pull syncing index. The pull syncing index can only be subscribed to for a particular proximity order bin. If since is not 0, the iteration will start from the first item stored after that id. If until is not 0, only chunks stored up to this id will be sent to the channel, and the returned channel will be closed. The since-until interval is open on the since side and closed on the until side: (since,until] <=> [since+1,until]. The returned stop function will terminate current and further iterations without errors and close the returned channel. Make sure to check the second value returned on channel receive and stop the iteration when it is false.
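A consumption sketch, assuming db, ctx and a proximity order bin po; the ok check detects the closed channel as described above:

c, stop := db.SubscribePull(ctx, po, 0, 0)
defer stop()

for {
	d, ok := <-c
	if !ok {
		// channel closed: stop was called, the context was canceled,
		// or until (when non-zero) was reached
		break
	}
	// d is a chunk.Descriptor for a chunk stored in this bin
	_ = d
}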
func (*DB) SubscribePush ¶
SubscribePush returns a channel that provides chunks from the push syncing index in the order they were stored. The returned stop function will terminate current and further iterations and close the returned channel without errors. Make sure to check the second value returned on channel receive and stop the iteration when it is false.
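A consumption sketch, assuming db and ctx as above:

c, stop := db.SubscribePush(ctx)
defer stop()

for {
	ch, ok := <-c
	if !ok {
		// channel closed by stop or context cancelation
		break
	}
	// deliver ch to the network, then mark it as synced with Set
	_ = ch
}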
type Options ¶
type Options struct {
	// MockStore is a mock node store that is used to store
	// chunk data in a central store. It can be used to reduce
	// total storage space requirements in testing large number
	// of swarm nodes with chunk data deduplication provided by
	// the mock global store.
	MockStore *mock.NodeStore
	// Capacity is a limit that triggers garbage collection when
	// number of items in gcIndex equals or exceeds it.
	Capacity uint64
	// MetricsPrefix defines a prefix for metrics names.
	MetricsPrefix string
}
Options struct holds optional parameters for configuring DB.
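A configuration sketch with placeholder values; per the field documentation, MockStore is normally left nil outside of tests:

o := &localstore.Options{
	MockStore:     nil,          // nil stores chunk data locally rather than in a mock global store
	Capacity:      5000000,      // placeholder garbage collection trigger, in chunks
	MetricsPrefix: "localstore", // placeholder metrics prefix
}
_ = o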