das

package
v0.5.0-rc5
Published: Nov 15, 2022 License: Apache-2.0 Imports: 20 Imported by: 2

Documentation

Overview

Package das contains the most important functionality provided by celestia-node. It contains logic for running data availability sampling (DAS) routines on block headers in the network. DAS is the process of verifying the availability of block data by sampling chunks or shares of those blocks.

Package das can confirm the availability of block data in the network via the Availability interface, which is implemented in both `full` and `light` modes. `Full` availability ensures full repair of a block's data square: the instance samples enough shares to fully reconstruct the square. `Light` availability instead samples shares at random until it is sufficiently likely that all block data is available, relying on the assumption that enough `light` availability instances are actively sampling the same block to collectively verify its availability.

The central component of this package is the `samplingCoordinator`. It launches parallel workers that perform DAS on new ExtendedHeaders in the network. The DASer kicks off this loop by loading its last DASed headers snapshot (`checkpoint`) and spawning a worker pool to DAS all headers between the checkpoint and the current network head. It also subscribes to notifications about new ExtendedHeaders, received via gossipsub. Newly received headers are put into a higher-priority queue and are sampled by the next available worker.
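The catch-up phase described above can be sketched in a self-contained way. `splitIntoJobs` and the worker loop below are simplified stand-ins for the coordinator's internals, not the package's actual code; in the real coordinator the number of parallel workers is bounded by `ConcurrencyLimit`:

```go
package main

import (
	"fmt"
	"sync"
)

// job is a contiguous range of header heights to sample, mirroring how
// the coordinator cuts checkpoint..networkHead into SamplingRange-sized jobs.
type job struct{ from, to uint64 }

// splitIntoJobs divides the catch-up range [from, to] into jobs of at
// most samplingRange headers each.
func splitIntoJobs(from, to, samplingRange uint64) []job {
	var jobs []job
	for h := from; h <= to; h += samplingRange {
		end := h + samplingRange - 1
		if end > to {
			end = to
		}
		jobs = append(jobs, job{h, end})
	}
	return jobs
}

func main() {
	// checkpoint=1, network head=25, SamplingRange=10 -> jobs 1-10, 11-20, 21-25
	jobs := splitIntoJobs(1, 25, 10)
	var wg sync.WaitGroup
	results := make([]int, len(jobs))
	for i, j := range jobs {
		wg.Add(1)
		go func(i int, j job) {
			defer wg.Done()
			results[i] = int(j.to - j.from + 1) // stand-in for sampling each header
		}(i, j)
	}
	wg.Wait()
	total := 0
	for _, r := range results {
		total += r
	}
	fmt.Println("jobs:", len(jobs), "headers sampled:", total)
}
```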

Index

Constants

This section is empty.

Variables

var ErrInvalidOption = fmt.Errorf("das: invalid option")

ErrInvalidOption is an error returned by Parameters.Validate when supplied with invalid values. This error is also returned by NewDASer if supplied with an invalid option.

Functions

This section is empty.

Types

type DASer

type DASer struct {
	// contains filtered or unexported fields
}

DASer continuously validates availability of data committed to headers.

func NewDASer

func NewDASer(
	da share.Availability,
	hsub header.Subscriber,
	getter header.Getter,
	dstore datastore.Datastore,
	bcast fraud.Broadcaster,
	options ...Option,
) (*DASer, error)

NewDASer creates a new DASer.

func (*DASer) InitMetrics added in v0.4.0

func (d *DASer) InitMetrics() error

func (*DASer) SamplingStats added in v0.3.1

func (d *DASer) SamplingStats(ctx context.Context) (SamplingStats, error)

SamplingStats returns the current statistics over the DA sampling process.

func (*DASer) Start

func (d *DASer) Start(ctx context.Context) error

Start initiates subscription for new ExtendedHeaders and spawns a sampling routine.

func (*DASer) Stop

func (d *DASer) Stop(ctx context.Context) error

Stop stops sampling.

type Option added in v0.5.0

type Option func(*DASer)

Option is a functional option applied to the DASer instance to configure DASing parameters (the Parameters struct).
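The `type Option func(*DASer)` shape is the standard Go functional-options pattern. The self-contained sketch below reimplements it on a hypothetical `daser` stand-in; the default values shown are illustrative, not the package's actual defaults:

```go
package main

import "fmt"

// daser is a stand-in for the real DASer; only its params matter here.
type daser struct{ params params }

type params struct {
	SamplingRange    uint64
	ConcurrencyLimit int
}

// Option mirrors the package's `type Option func(*DASer)` shape.
type Option func(*daser)

func WithSamplingRange(r uint64) Option {
	return func(d *daser) { d.params.SamplingRange = r }
}

func WithConcurrencyLimit(l int) Option {
	return func(d *daser) { d.params.ConcurrencyLimit = l }
}

// newDaser applies defaults first, then each supplied option in order.
func newDaser(options ...Option) *daser {
	d := &daser{params: params{SamplingRange: 100, ConcurrencyLimit: 16}} // hypothetical defaults
	for _, opt := range options {
		opt(d)
	}
	return d
}

func main() {
	d := newDaser(WithSamplingRange(10))
	fmt.Println(d.params.SamplingRange, d.params.ConcurrencyLimit)
}
```

Applying options after defaults is what lets callers override only the parameters they care about.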

func WithBackgroundStoreInterval added in v0.5.0

func WithBackgroundStoreInterval(backgroundStoreInterval time.Duration) Option

WithBackgroundStoreInterval is a functional option to configure the daser's `BackgroundStoreInterval` parameter. Refer to the WithSamplingRange documentation for a usage example.

func WithConcurrencyLimit added in v0.5.0

func WithConcurrencyLimit(concurrencyLimit int) Option

WithConcurrencyLimit is a functional option to configure the daser's `ConcurrencyLimit` parameter. Refer to the WithSamplingRange documentation for a usage example.

func WithPriorityQueueSize added in v0.5.0

func WithPriorityQueueSize(priorityQueueSize int) Option

WithPriorityQueueSize is a functional option to configure the daser's `PriorityQueueSize` parameter. Refer to the WithSamplingRange documentation for a usage example.

func WithSampleFrom added in v0.5.0

func WithSampleFrom(sampleFrom uint64) Option

WithSampleFrom is a functional option to configure the daser's `SampleFrom` parameter. Refer to the WithSamplingRange documentation for a usage example.

func WithSamplingRange added in v0.5.0

func WithSamplingRange(samplingRange uint64) Option

WithSamplingRange is a functional option to configure the daser's `SamplingRange` parameter.

Usage:
```
	WithSamplingRange(10)(daser)
```

or

```
	option := WithSamplingRange(10)
	// shenanigans to create daser
	option(daser)
```

type Parameters added in v0.5.0

type Parameters struct {
	// SamplingRange is the maximum amount of headers processed in one job.
	SamplingRange uint64

	// ConcurrencyLimit defines the maximum amount of sampling workers running in parallel.
	ConcurrencyLimit int

	// BackgroundStoreInterval is the period of time for background checkpointStore to perform a
	// checkpoint backup.
	BackgroundStoreInterval time.Duration

	// PriorityQueueSize defines the size limit of the priority queue.
	PriorityQueueSize int

	// SampleFrom is the height sampling will start from.
	SampleFrom uint64
}

Parameters is the set of parameters that must be configured for the daser.

func DefaultParameters added in v0.5.0

func DefaultParameters() Parameters

DefaultParameters returns the default configuration values for the daser parameters.

func (*Parameters) Validate added in v0.5.0

func (p *Parameters) Validate() error

Validate validates the values in Parameters.

All parameters must be positive and non-zero, except:
	BackgroundStoreInterval = 0 disables the background storer,
	PriorityQueueSize = 0 disables prioritization of recently produced blocks for sampling.

type SamplingStats added in v0.3.1

type SamplingStats struct {
	// all headers before SampledChainHead were successfully sampled
	SampledChainHead uint64 `json:"head_of_sampled_chain"`
	// all headers before CatchupHead were submitted to sampling workers
	CatchupHead uint64 `json:"head_of_catchup"`
	// NetworkHead is the height of the most recent header in the network
	NetworkHead uint64 `json:"network_head_height"`
	// Failed contains the heights of all skipped headers with their corresponding retry counts
	Failed map[uint64]int `json:"failed,omitempty"`
	// Workers holds stats for each currently running worker
	Workers []WorkerStats `json:"workers,omitempty"`
	// Concurrency is the number of currently running parallel workers
	Concurrency int `json:"concurrency"`
	// CatchUpDone indicates whether all known headers are sampled
	CatchUpDone bool `json:"catch_up_done"`
	// IsRunning tracks whether the DASer service is running
	IsRunning bool `json:"is_running"`
}

SamplingStats collects information about the DASer process. Currently, there are only two sampling routines: the main sampling routine which performs sampling over current network headers, and the `catchUp` routine which performs sampling over past headers from the last sampled checkpoint.

type WorkerStats added in v0.3.1

type WorkerStats struct {
	Curr uint64 `json:"current"`
	From uint64 `json:"from"`
	To   uint64 `json:"to"`

	ErrMsg string `json:"error,omitempty"`
}
