Documentation ¶
Overview ¶
Package ethash implements the ethash proof-of-work consensus engine.
Index ¶
- Variables
- func CalcDifficulty(config *params.ChainConfig, time uint64, parent *types.Header) *big.Int
- func MakeCache(block uint64, dir string)
- func MakeDataset(block uint64, dir string)
- func SeedHash(block uint64) []byte
- type Config
- type Ethash
- func (ethash *Ethash) APIs(chain consensus.ChainReader) []rpc.API
- func (ethash *Ethash) Author(header *types.Header) (common.Address, error)
- func (ethash *Ethash) CalcDifficulty(chain consensus.ChainReader, time uint64, parent *types.Header) *big.Int
- func (ethash *Ethash) Finalize(chain consensus.ChainReader, header *types.Header, state *state.StateDB, ...) (*types.Block, error)
- func (ethash *Ethash) Hashrate() float64
- func (ethash *Ethash) Prepare(chain consensus.ChainReader, header *types.Header) error
- func (ethash *Ethash) Seal(chain consensus.ChainReader, block *types.Block, stop <-chan struct{}) (*types.Block, error)
- func (ethash *Ethash) SetThreads(threads int)
- func (ethash *Ethash) Threads() int
- func (ethash *Ethash) VerifyHeader(chain consensus.ChainReader, header *types.Header, seal bool) error
- func (ethash *Ethash) VerifyHeaders(chain consensus.ChainReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error)
- func (ethash *Ethash) VerifySeal(chain consensus.ChainReader, header *types.Header) error
- type Mode
Constants ¶
This section is empty.
Variables ¶
var ErrInvalidDumpMagic = errors.New("invalid dump magic")
Functions ¶
func CalcDifficulty ¶
CalcDifficulty is the difficulty adjustment algorithm. It returns the difficulty that a new block should have when created at the given time, based on the parent block's time and difficulty.
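A minimal sketch, assuming the canonical go-ethereum import path and a parent header obtained from an existing chain; the helper name is hypothetical:

import (
	"math/big"

	"github.com/ethereum/go-ethereum/consensus/ethash"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/params"
)

// expectedDifficulty is a hypothetical helper: it returns the difficulty a
// child block created at childTime should carry on mainnet, given its parent.
func expectedDifficulty(parent *types.Header, childTime uint64) *big.Int {
	return ethash.CalcDifficulty(params.MainnetChainConfig, childTime, parent)
}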
func MakeDataset ¶
MakeDataset generates a new ethash dataset and optionally stores it to disk.
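A sketch of pre-generating the verification cache and the full mining dataset (DAG) for the epoch containing a given block, so a miner does not pay the generation cost at startup; the block number and output directory are placeholders:

package main

import "github.com/ethereum/go-ethereum/consensus/ethash"

func main() {
	const block = 5000000    // any block number within the target epoch
	dir := "/var/lib/ethash" // placeholder output directory

	ethash.MakeCache(block, dir)   // small verification cache
	ethash.MakeDataset(block, dir) // full mining dataset (DAG), much larger
}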
Types ¶
type Config ¶
type Config struct {
	CacheDir       string
	CachesInMem    int
	CachesOnDisk   int
	DatasetDir     string
	DatasetsInMem  int
	DatasetsOnDisk int
	PowMode        Mode
}
Config holds the configuration parameters of the ethash engine.
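A sketch of a possible Config value; the directory paths and counts are placeholders, and ethash.ModeNormal is an assumed constant of the Mode type (not expanded in this excerpt):

import "github.com/ethereum/go-ethereum/consensus/ethash"

// exampleConfig is a hypothetical value illustrating what the fields control.
var exampleConfig = ethash.Config{
	CacheDir:       "ethash",          // where verification caches are stored on disk
	CachesInMem:    2,                 // recent verification caches kept in memory
	CachesOnDisk:   3,                 // verification caches persisted to disk
	DatasetDir:     "/var/lib/ethash", // where full mining datasets (DAGs) are stored
	DatasetsInMem:  1,                 // full DAGs kept in memory (each is very large)
	DatasetsOnDisk: 2,                 // full DAGs persisted to disk
	PowMode:        ethash.ModeNormal, // real proof-of-work (assumed constant name)
}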
type Ethash ¶
type Ethash struct {
// contains filtered or unexported fields
}
Ethash is a consensus engine based on proof-of-work implementing the ethash algorithm.
func NewFakeDelayer ¶
NewFakeDelayer creates an ethash consensus engine with a fake PoW scheme that accepts all blocks as valid but delays verifications by some time, though they still have to conform to the Ethereum consensus rules.
func NewFakeFailer ¶
NewFakeFailer creates an ethash consensus engine with a fake PoW scheme that accepts all blocks as valid apart from the single one specified, though they still have to conform to the Ethereum consensus rules.
func NewFaker ¶
func NewFaker() *Ethash
NewFaker creates an ethash consensus engine with a fake PoW scheme that accepts all blocks' seals as valid, though they still have to conform to the Ethereum consensus rules.
func NewFullFaker ¶
func NewFullFaker() *Ethash
NewFullFaker creates an ethash consensus engine with a full fake scheme that accepts all blocks as valid, without checking any consensus rules whatsoever.
func NewShared ¶
func NewShared() *Ethash
NewShared creates a full-sized ethash PoW shared between all requesters running in the same process.
func NewTester ¶
func NewTester() *Ethash
NewTester creates a small-sized ethash PoW scheme useful only for testing purposes.
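A sketch contrasting the constructors shown above, e.g. when wiring up a test; each returns a ready *Ethash:

import "github.com/ethereum/go-ethereum/consensus/ethash"

func testEngines() {
	faker := ethash.NewFaker()         // seals always accepted; other consensus rules still checked
	fullFaker := ethash.NewFullFaker() // nothing checked at all; fastest when consensus is irrelevant
	shared := ethash.NewShared()       // one full-sized engine shared by every requester in the process
	tester := ethash.NewTester()       // real ethash over a tiny dataset; slower but faithful

	_, _, _, _ = faker, fullFaker, shared, tester
}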
func (*Ethash) APIs ¶
func (ethash *Ethash) APIs(chain consensus.ChainReader) []rpc.API
APIs implements consensus.Engine, returning the user-facing RPC APIs. Currently none are exposed.
func (*Ethash) Author ¶
Author implements consensus.Engine, returning the header's coinbase as the proof-of-work verified author of the block.
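A minimal sketch (the helper name is hypothetical); for ethash this simply reads the header's coinbase back:

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/consensus/ethash"
	"github.com/ethereum/go-ethereum/core/types"
)

// blockAuthor is a hypothetical wrapper returning the address credited with the block.
func blockAuthor(engine *ethash.Ethash, header *types.Header) (common.Address, error) {
	return engine.Author(header)
}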
func (*Ethash) CalcDifficulty ¶
func (ethash *Ethash) CalcDifficulty(chain consensus.ChainReader, time uint64, parent *types.Header) *big.Int
CalcDifficulty is the difficulty adjustment algorithm. It returns the difficulty that a new block should have when created at the given time, based on the parent block's time and difficulty.
func (*Ethash) Finalize ¶
func (ethash *Ethash) Finalize(chain consensus.ChainReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, receipts []*types.Receipt) (*types.Block, error)
Finalize implements consensus.Engine, accumulating the block rewards, setting the final state and assembling the block.
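A sketch of where Finalize fits when producing a block; the helper name and how its arguments are obtained are hypothetical, and the call matches the signature shown above:

import (
	"github.com/ethereum/go-ethereum/consensus"
	"github.com/ethereum/go-ethereum/consensus/ethash"
	"github.com/ethereum/go-ethereum/core/state"
	"github.com/ethereum/go-ethereum/core/types"
)

// assembleBlock is a hypothetical helper: the transactions have already been
// executed against statedb; Finalize credits the block reward, fixes the final
// state in the header and returns the assembled block.
func assembleBlock(engine *ethash.Ethash, chain consensus.ChainReader, header *types.Header,
	statedb *state.StateDB, txs []*types.Transaction, receipts []*types.Receipt) (*types.Block, error) {
	return engine.Finalize(chain, header, statedb, txs, receipts)
}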
func (*Ethash) Hashrate ¶
Hashrate implements PoW, returning the measured number of search invocations per second over the last minute.
func (*Ethash) Prepare ¶
Prepare implements consensus.Engine, initializing the difficulty field of a header to conform to the ethash protocol. The changes are done inline.
func (*Ethash) Seal ¶
func (ethash *Ethash) Seal(chain consensus.ChainReader, block *types.Block, stop <-chan struct{}) (*types.Block, error)
Seal implements consensus.Engine, attempting to find a nonce that satisfies the block's difficulty requirements.
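A sketch of the Prepare/Seal flow on the mining side; the helper name is hypothetical, and in practice the block is assembled (e.g. via Finalize) between the two calls:

import (
	"github.com/ethereum/go-ethereum/consensus"
	"github.com/ethereum/go-ethereum/consensus/ethash"
	"github.com/ethereum/go-ethereum/core/types"
)

// prepareAndSeal is a hypothetical helper: Prepare sets the header's difficulty
// in place, Seal then searches for a satisfying nonce until it succeeds or the
// stop channel is closed (e.g. because a competing block arrived).
func prepareAndSeal(engine *ethash.Ethash, chain consensus.ChainReader, header *types.Header,
	block *types.Block, stop <-chan struct{}) (*types.Block, error) {
	if err := engine.Prepare(chain, header); err != nil {
		return nil, err
	}
	return engine.Seal(chain, block, stop)
}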
func (*Ethash) SetThreads ¶
SetThreads updates the number of mining threads currently enabled. Calling this method does not start mining; it only sets the thread count. If zero is specified, the miner will use all cores of the machine. Setting a thread count below zero is allowed and will cause the miner to idle, without any work being done.
func (*Ethash) Threads ¶
Threads returns the number of mining threads currently enabled. This doesn't necessarily mean that mining is running!
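A sketch of tuning and observing the miner; the helper and the reporting interval are hypothetical:

import (
	"fmt"
	"time"

	"github.com/ethereum/go-ethereum/consensus/ethash"
)

// reportHashrate is a hypothetical helper: zero means "use every core"; note
// that SetThreads alone does not start mining.
func reportHashrate(engine *ethash.Ethash) {
	engine.SetThreads(0)
	fmt.Println("mining threads:", engine.Threads())

	for range time.Tick(30 * time.Second) {
		fmt.Printf("hashrate: %.0f H/s\n", engine.Hashrate())
	}
}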
func (*Ethash) VerifyHeader ¶
func (ethash *Ethash) VerifyHeader(chain consensus.ChainReader, header *types.Header, seal bool) error
VerifyHeader checks whether a header conforms to the consensus rules of the stock Ethereum ethash engine.
func (*Ethash) VerifyHeaders ¶
func (ethash *Ethash) VerifyHeaders(chain consensus.ChainReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error)
VerifyHeaders is similar to VerifyHeader, but verifies a batch of headers concurrently. The method returns a quit channel to abort the operations and a results channel to retrieve the async verifications.
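A sketch of both verification paths (the helper name is hypothetical): a single header via VerifyHeader, and a batch via VerifyHeaders, draining the results channel and closing the quit channel to abort outstanding work:

import (
	"github.com/ethereum/go-ethereum/consensus"
	"github.com/ethereum/go-ethereum/consensus/ethash"
	"github.com/ethereum/go-ethereum/core/types"
)

// verifyBatch is a hypothetical helper returning the first verification error.
func verifyBatch(engine *ethash.Ethash, chain consensus.ChainReader, headers []*types.Header) error {
	// Single-header path, including the PoW seal.
	if len(headers) == 1 {
		return engine.VerifyHeader(chain, headers[0], true)
	}

	// Batch path: request seal checks for every header.
	seals := make([]bool, len(headers))
	for i := range seals {
		seals[i] = true
	}
	quit, results := engine.VerifyHeaders(chain, headers, seals)
	defer close(quit) // abort any outstanding verifications if we bail out early

	for range headers {
		if err := <-results; err != nil {
			return err // one result is delivered per input header
		}
	}
	return nil
}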
func (*Ethash) VerifySeal ¶
VerifySeal implements consensus.Engine, checking whether the given block satisfies the PoW difficulty requirements.