hotstuff

package
v0.28.14
Published: Nov 21, 2022 License: AGPL-3.0 Imports: 9 Imported by: 30

README

Flow's HotStuff

We use a BFT consensus algorithm with deterministic finality in Flow for

  • Consensus Nodes: deciding on the content of blocks
  • Cluster of Collector Nodes: batching transactions into collections.

Flow uses a derivative of HotStuff (see paper HotStuff: BFT Consensus in the Lens of Blockchain (version 6) for the original algorithm). Our implementation is a mixture of Chained HotStuff and Event-Driven HotStuff with a few additional, minor modifications:

  • We employ the rules for locking on blocks and block finalization from Event-Driven HotStuff. Specifically, we lock the newest block that has a (direct or indirect) 2-chain built on top of it. We finalize a block when it has a 3-chain built on top, with the first two chain links being direct (see the sketch after this list).
  • The way we progress through views follows the Chained HotStuff algorithm. A replica only votes or proposes for its current view.
  • Flow's HotStuff does not implement ViewChange messages (akin to the approach taken in Streamlet: Textbook Streamlined Blockchains)
  • Flow uses a decentralized random beacon (based on Dfinity's proposal). The random beacon is run by Flow's consensus nodes and integrated into the consensus voting process.
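To make the locking and finalization rule concrete, here is a minimal sketch in Go. The Block type, directLink, and isFinalized are hypothetical illustrations of the rule stated above, not the package's model.Block or its actual finalization code:

// Block is a hypothetical, simplified block used for illustration only.
type Block struct {
	ID       string
	ParentID string
	View     uint64
}

// directLink reports whether child extends parent with consecutive views,
// i.e. child was produced in the view immediately after parent's view.
func directLink(parent, child *Block) bool {
	return child.ParentID == parent.ID && child.View == parent.View+1
}

// isFinalized sketches the finalization rule: block b is finalized once a
// 3-chain b <- c1 <- c2 <- c3 is built on top of it, with the first two
// chain links (b <- c1 and c1 <- c2) being direct.
func isFinalized(b, c1, c2, c3 *Block) bool {
	return directLink(b, c1) && directLink(c1, c2) && c3.ParentID == c2.ID
}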
Node weights

In Flow, there are multiple HotStuff instances running in parallel. Specifically, the consensus nodes form a HotStuff committee and each collector cluster is its own committee. In the following, we refer to an authorized set of nodes that runs a particular HotStuff instance as a (HotStuff) committee.

  • Flow allows nodes to have different weights, reflecting how much they are trusted by the protocol. The weight of a node can change over time due to stake changes or discovered protocol violations. A super-majority of nodes is defined as a subset of the consensus committee whose members hold more than 2/3 of the entire committee's accumulated weight (sketched after this list).
  • The messages from zero-weighted nodes are ignored by all committee members.
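As an illustration of the super-majority rule, the following sketch (isSuperMajority is a hypothetical helper, not part of this package) checks whether a subset's accumulated weight strictly exceeds 2/3 of the committee's total weight. Cross-multiplying avoids integer-division truncation, assuming the products do not overflow uint64:

// isSuperMajority reports whether subsetWeight > (2/3) * totalWeight,
// using cross-multiplication to stay in exact integer arithmetic.
func isSuperMajority(subsetWeight, totalWeight uint64) bool {
	return 3*subsetWeight > 2*totalWeight
}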

Architecture

Determining block validity

In addition to HotStuff's requirements on block validity, the Flow protocol imposes additional requirements. For example, it is illegal to repeatedly include the same payload entities (e.g. collections, challenges) in the same fork. Generally, payload entities expire. However, within the expiry horizon, all ancestors of a block need to be known to verify that payload entities are not repeated.

We exclude the entire logic for determining payload validity from the HotStuff core implementation. This functionality is encapsulated in the Chain Compliance Layer (CCL) which precedes HotStuff. The CCL forwards a block to the HotStuff core logic only if

  • the block's payload is valid,
  • the block is connected to the most recently finalized block, and
  • all ancestors have previously been processed by HotStuff.

If ancestors of a block are missing, the CCL caches the respective block and (iteratively) requests missing ancestors.
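The CCL's gating decision can be summarized by the following sketch. It reuses the hypothetical Block type from the earlier sketch, and all function parameters are hypothetical stand-ins for the behaviors described above, not actual APIs of this package:

// processBlock sketches the Chain Compliance Layer's gating decision:
// a block is forwarded to the HotStuff core only once all three conditions
// listed above are satisfied; otherwise it is cached while missing
// ancestors are requested.
func processBlock(
	b *Block,
	hasAllAncestors func(*Block) bool,
	payloadValid func(*Block) bool,
	connectedToFinalized func(*Block) bool,
	cacheAndRequestAncestors func(*Block),
	forwardToHotStuff func(*Block),
) {
	if !hasAllAncestors(b) {
		cacheAndRequestAncestors(b) // cache block, iteratively request missing ancestors
		return
	}
	if payloadValid(b) && connectedToFinalized(b) {
		forwardToHotStuff(b)
	}
}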

Payload generation

Payloads are generated outside of the HotStuff core logic. HotStuff only incorporates the payload root hash into the block header.

Structure of votes

In Flow's HotStuff implementation, votes are used for two purposes:

  1. Prove that a super-majority of committee nodes considers the respective block a valid extension of the chain. Therefore, nodes include a StakingSignature (BLS with curve BLS12-381) in their vote.
  2. Construct a Source of Randomness as described in Dfinity's Random Beacon. Therefore, consensus nodes include a RandomBeaconSignature (also BLS with curve BLS12-381, used in a threshold signature scheme) in their vote.

When the primary collects the votes, it verifies the StakingSignature and the RandomBeaconSignature. If either signature is invalid, the entire vote is discarded. From all valid votes, the StakingSignatures and the RandomBeaconSignatures are aggregated separately. (note: Only the aggregation of the RandomBeaconSignature into a threshold signature is currently implemented. Aggregation of the BLS StakingSignatures will be added later.)

  • Following version 6 of the HotStuff paper, replicas forward their votes for block b to the leader of the next view, i.e. the primary for view b.View + 1.
  • A proposer will attach its own vote for its proposal in the block proposal message (instead of signing the block proposal for authenticity and separately sending a vote).
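For illustration, a consensus node's vote therefore carries two BLS signatures. The struct below is a hypothetical, simplified layout; the actual vote type lives in /consensus/hotstuff/model:

// consensusVote is a simplified sketch of a consensus node's vote,
// carrying both signatures described above.
type consensusVote struct {
	BlockID         flow.Identifier
	View            uint64
	StakingSig      []byte // BLS (BLS12-381) staking signature over the block
	RandomBeaconSig []byte // BLS (BLS12-381) threshold-signature share for the random beacon
}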
Primary selection

For primary selection, we use a randomized, weight-proportional algorithm.

Component Overview

HotStuff's core logic is broken down into multiple components. The figure below illustrates the dependencies of the core components and information flow between these components.

  • HotStuffEngine and ChainComplianceLayer do not contain any core HotStuff logic. They are responsible for relaying HotStuff messages and validating block payloads respectively.
  • EventLoop buffers all incoming events, so EventHandler can process one event at a time in a single thread.
  • EventHandler orchestrates all HotStuff components and implements HotStuff's state machine. The event handler is designed to be executed single-threaded.
  • Communicator relays outgoing HotStuff messages (votes and block proposals).
  • PaceMaker is a basic pacemaker that ensures liveness by keeping the majority of committee replicas in the same view.
  • Forks maintains an in-memory representation of all blocks b whose view is greater than or equal to the view of the latest finalized block (known to this specific replica). As blocks with missing ancestors are cached outside of HotStuff (by the Chain Compliance Layer), all blocks stored in Forks are guaranteed to be connected to the genesis block (or the trusted checkpoint from which the replica started). Internally, Forks consists of multiple layers:
    • LevelledForest: A blockchain forms a tree. When removing all blocks with views strictly smaller than the last finalized block, the chain decomposes into multiple disconnected trees. In graph theory, such a structure is a forest. To separate general graph-theoretical concepts from the concrete blockchain application, LevelledForest refers to blocks as graph vertices and to a block's view number as level. The LevelledForest is an in-memory data structure to store and maintain a levelled forest. It provides functions to add vertices, query vertices by their ID (block's hash), query vertices by level, query the children of a vertex, and prune vertices by level (remove them from memory).
    • Finalizer: the finalizer uses the LevelledForest to store blocks. The Finalizer tracks the locked and finalized blocks. Furthermore, it evaluates whether a block is safe to vote for.
    • ForkChoice: implements the fork choice rule. Currently, NewestForkChoice always uses the quorum certificate (QC) with the largest view to build a new block (i.e. the fork a super-majority voted for most recently).
    • Forks is the highest level. It bundles the Finalizer and ForkChoice into one component.
  • Validator validates the HotStuff-relevant aspects of
    • QC: total weight of all signers is more than 2/3 of the committee weight, validity of signatures, view number is strictly monotonically increasing
    • block proposal: from designated primary for the block's respective view, contains proposer's vote for its own block, QC in block is valid
    • vote: validity of signature, voter has positive weight
  • VoteAggregator caches votes on a per-block basis and builds QC if enough votes have been accumulated.
  • Voter tracks the view of the latest vote and determines whether or not to vote for a block (by calling forks.IsSafeBlock)
  • Committee maintains the list of all authorized network members and their respective weight on a per-block basis. Furthermore, the committee contains the primary selection algorithm.
  • BlockProducer constructs the payload of a block, after the HotStuff core logic has decided which fork to extend

Implementation

We have translated the Chained HotStuff protocol into a state machine shown below. The state machine is implemented in EventHandler.

PaceMaker

The HotStuff state machine interacts with the PaceMaker, which triggers view changes. Conceptually, the PaceMaker interfaces with the EventHandler in two different modes:

  • [asynchronous] On timeouts, the PaceMaker will emit a timeout event, which is processed as any other event (such as incoming blocks or votes) through the EventLoop.
  • [synchronous] When progress is made following the happy-path business logic, the EventHandler will inform the PaceMaker about completing the respective processing step via a direct method call (see PaceMaker interface). If the PaceMaker changed the view in response, it returns a NewViewEvent which will be synchronously processed by the EventHandler.

Flow's PaceMaker is simple (compared to Libra's PaceMaker for instance). It solely relies on increasing the timeouts exponentially if no progress is made and exponentially decreasing timeouts on the happy path. The timeout values are limited by lower and upper-bounds to ensure that the PaceMaker can change from large to small timeouts in a reasonable number of views. The specific values for lower and upper timeout bounds are protocol-specified; we envision the bounds to be on the order of 1sec (lower bound) and multiple days (upper bound).
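The following sketch illustrates this timeout adjustment; the adjustment factor and the concrete bounds are illustrative assumptions, not the protocol-specified values:

const (
	minTimeout = 1 * time.Second // illustrative lower bound
	maxTimeout = 48 * time.Hour  // illustrative upper bound
	factor     = 2               // illustrative adjustment factor
)

// onTimeout increases the timeout exponentially when no progress is made,
// clamped to the upper bound.
func onTimeout(current time.Duration) time.Duration {
	if next := current * factor; next < maxTimeout {
		return next
	}
	return maxTimeout
}

// onProgress decreases the timeout exponentially on the happy path,
// clamped to the lower bound.
func onProgress(current time.Duration) time.Duration {
	if next := current / factor; next > minTimeout {
		return next
	}
	return minTimeout
}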

Progress, from the perspective of the PaceMaker, is defined as entering view V for which the replica knows a QC with V = QC.view + 1. In other words, we transition into the next view because we reached quorum in the last view.

A central, non-trivial functionality of the PaceMaker is to skip views. Specifically, given a QC with view qc.view, the PaceMaker will skip ahead to view qc.view + 1 if currentView ≤ qc.view.
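Expressed as a minimal sketch (skipAhead is a hypothetical helper operating on plain view numbers):

// skipAhead returns the view the PaceMaker should be in after processing
// a QC for view qcView: if the replica is not already past it, it jumps
// directly to qcView + 1, skipping any intermediate views.
func skipAhead(currentView, qcView uint64) uint64 {
	if currentView <= qcView {
		return qcView + 1
	}
	return currentView
}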

Code structure

All relevant code implementing the core HotStuff business logic is contained in /consensus/hotstuff/ (folder containing this README). When starting to look into the code, we suggest starting with /consensus/hotstuff/event_loop.go and /consensus/hotstuff/event_handler.go.

Folder structure

All files in the /consensus/hotstuff/ folder, except for event_loop.go and follower_loop.go, are interfaces for HotStuff-related components. The concrete implementations for all HotStuff-relevant components are in corresponding sub-folders. For completeness, we list the component implemented in each subfolder below:

  • /consensus/hotstuff/blockproducer builds a block proposal for a specified QC, interfaces with the logic for assembling a block payload, combines all relevant fields into a new block proposal.
  • /consensus/hotstuff/committee maintains the list of all authorized network members and their respective weight on a per-block basis; contains the primary selection algorithm.
  • /consensus/hotstuff/eventhandler orchestrates all HotStuff components and implements the HotStuff state machine. The event handler is designed to be executed single-threaded.
  • /consensus/hotstuff/follower This component is only used by nodes that are not participating in the HotStuff committee. As Flow has dedicated node roles with specialized network functions, only a subset of nodes run the full HotStuff protocol. Nevertheless, all nodes need to be able to act on blocks being finalized. The approach we have taken for Flow is that block proposals are broadcast to all nodes (including non-committee nodes). Non-committee nodes locally determine block finality by applying HotStuff's finality rules. The HotStuff Follower contains the functionality to consume block proposals and trigger downstream processing of finalized blocks. The Follower does not actively participate in HotStuff.
  • /consensus/hotstuff/forks maintains an in-memory representation of all blocks b whose view is greater than or equal to the view of the latest finalized block (known to this specific HotStuff replica). It tracks the last finalized block and the currently locked block, evaluates whether it is safe to vote for a given block, and provides a fork choice when the replica is primary.
  • /consensus/hotstuff/helper contains broadly-used helper functions for testing
  • /consensus/hotstuff/integration integration tests for verifying correct interaction of multiple HotStuff replicas
  • /consensus/hotstuff/model contains the HotStuff data models, including block proposal, QC, etc. Many HotStuff data models are built on top of basic data models defined in /model/flow/.
  • /consensus/hotstuff/notifications: All relevant events within the HotStuff logic are exported through a notification system. While the notifications are not used HotStuff-internally, they notify other components within the same node of relevant progress and are used for collecting HotStuff metrics.
  • /consensus/hotstuff/pacemaker contains the implementation of Flow's basic PaceMaker, as described above.
  • /consensus/hotstuff/persister for performance reasons, the implementation maintains the consensus state largely in-memory. The persister stores the last entered view and the view of the latest voted block persistently on disk. This allows recovery after a crash without the risk of equivocation.
  • /consensus/hotstuff/runner helper code for starting and shutting down the HotStuff logic safely in a multithreaded environment.
  • /consensus/hotstuff/validator holds the logic for validating the HotStuff-relevant aspects of blocks, QCs, and votes
  • /consensus/hotstuff/verification contains integration of Flow's cryptographic primitives (signing and signature verification)
  • /consensus/hotstuff/voteaggregator caches votes on a per-block basis and builds a QC if enough votes have been accumulated.
  • /consensus/hotstuff/voter tracks the view of the latest vote and determines whether or not to vote for a block

Pending Extensions

  • BLS Aggregation of the StakingSignatures
  • include Epochs
  • upgrade PaceMaker to include Timeout Certificates
  • refactor crypto integration (code in verification and dependent modules) for better auditability

Telemetry

The HotStuff state machine exposes some details about its internal progress as notifications through the hotstuff.Consumer. The following figure depicts at which points notifications are emitted.

We have implemented a telemetry system (hotstuff.notifications.TelemetryConsumer) which implements the Consumer interface. The TelemetryConsumer groups all events that were emitted during one path through the state machine as belonging together; each path through the state machine is identified by a unique ID. Generally, the TelemetryConsumer could export the collected data to a variety of backends. For now, we export the data to a logger.

Documentation

Constants

This section is empty.

Variables

This section is empty.

Functions

func ComputeWeightThresholdForBuildingQC added in v0.25.0

func ComputeWeightThresholdForBuildingQC(totalWeight uint64) uint64

ComputeWeightThresholdForBuildingQC returns the weight that is minimally required for building a QC.
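One arithmetic consistent with this contract (a sketch, not necessarily the exact implementation) computes the smallest integer strictly greater than 2/3 of the total weight, assuming 2*totalWeight does not overflow uint64:

// The smallest t with t > (2/3)*totalWeight is floor(2*totalWeight/3) + 1.
func computeWeightThreshold(totalWeight uint64) uint64 {
	return 2*totalWeight/3 + 1
}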

Types

type BlockProducer

type BlockProducer interface {
	// MakeBlockProposal builds a new HotStuff block proposal using the given view and
	// the given quorum certificate for its parent.
	MakeBlockProposal(qc *flow.QuorumCertificate, view uint64) (*model.Proposal, error)
}

BlockProducer builds a new block proposal by building a new block payload with the builder module, and uses VoteCollectorFactory to create a disposable VoteCollector for producing the proposal vote. BlockProducer assembles the new block proposal using the block payload, block header and the proposal vote.

type BlockSignatureData added in v0.23.9

type BlockSignatureData struct {
	StakingSigners               flow.IdentifierList
	RandomBeaconSigners          flow.IdentifierList
	AggregatedStakingSig         []byte // if BLS is used, this is equivalent to crypto.Signature
	AggregatedRandomBeaconSig    []byte // if BLS is used, this is equivalent to crypto.Signature
	ReconstructedRandomBeaconSig crypto.Signature
}

BlockSignatureData is an intermediate struct for Packer to pack the aggregated signature data into raw bytes or unpack from raw bytes.

type BlockSigner added in v0.23.9

type BlockSigner interface {
	// CreateVote returns a vote for the given block.
	// It returns:
	//  - (vote, nil) if vote is created
	//  - (nil , module.InvalidBlockError) if the block is invalid.
	CreateVote(*model.Block) (*model.Vote, error)
}

BlockSigner abstracts the implementation of how a signature of a block or a vote is produced and stored in a stateful crypto object for aggregation. The VoteAggregator implements both the VoteAggregator interface and the BlockSigner interface, so that the EventHandler can use the VoteAggregator interface to sign a Block, and Voter/BlockProducer can use the BlockSigner interface to create votes. When `CreateVote` is called, it internally creates a stateful VoteCollector object, which also has the ability to verify the block and generate the vote signature. The created vote collector is added to the vote collectors map. These implementation details are abstracted from Voter/BlockProducer.

type BlockSignerDecoder added in v0.26.17

type BlockSignerDecoder interface {
	// DecodeSignerIDs decodes the signer indices from the given block header into full node IDs.
	// Expected Error returns during normal operations:
	//  * state.UnknownBlockError if block has not been ingested yet
	//    TODO: this sentinel could be changed to `ErrViewForUnknownEpoch` once we merge the active pacemaker
	//  * signature.InvalidSignerIndicesError if signer indices included in the header do
	//    not encode a valid subset of the consensus committee
	DecodeSignerIDs(header *flow.Header) (flow.IdentifierList, error)
}

BlockSignerDecoder defines how to convert the SignerIndices field within a particular block header to the identifiers of the nodes which signed the block.

type Committee

type Committee interface {
	// Identities returns an IdentityList with legitimate HotStuff participants for the specified block.
	// The returned list of HotStuff participants
	//   * contains nodes that are allowed to sign the specified block (legitimate participants with NON-ZERO WEIGHT)
	//   * is ordered in the canonical order
	//   * contains no duplicates.
	// TODO: document possible error returns -> https://github.com/dapperlabs/flow-go/issues/6327
	Identities(blockID flow.Identifier) (flow.IdentityList, error)

	// Identity returns the full Identity for specified HotStuff participant.
	// The node must be a legitimate HotStuff participant with NON-ZERO WEIGHT at the specified block.
	// ERROR conditions:
	//  * model.InvalidSignerError if participantID does NOT correspond to an authorized HotStuff participant at the specified block.
	Identity(blockID flow.Identifier, participantID flow.Identifier) (*flow.Identity, error)

	// LeaderForView returns the identity of the leader for a given view.
	// CAUTION: per liveness requirement of HotStuff, the leader must be fork-independent.
	//          Therefore, a node retains its proposer view slots even if it is slashed.
	//          Its proposal is simply considered invalid, as it is not from a legitimate participant.
	// Returns the following expected errors for invalid inputs:
	//  * epoch containing the requested view has not been set up (protocol.ErrNextEpochNotSetup)
	//  * epoch is too far in the past (leader.InvalidViewError)
	LeaderForView(view uint64) (flow.Identifier, error)

	// Self returns our own node identifier.
	// TODO: ultimately, the own identity of the node is necessary for signing.
	//       Ideally, we would move the method for checking whether an Identifier refers to this node to the signer.
	//       This would require some refactoring of EventHandler (postponed to later)
	Self() flow.Identifier

	// DKG returns the DKG info for the given block.
	DKG(blockID flow.Identifier) (DKG, error)
}

Committee accounts for the fact that we might have multiple HotStuff instances (collector committees and main consensus committee). Each HotStuff instance is supposed to have a dedicated Committee state. A Committee provides a subset of the protocol.State, which is restricted to exactly those nodes that participate in the current HotStuff instance: the state of all legitimate HotStuff participants for the specified block. Legitimate HotStuff participants have NON-ZERO WEIGHT.

The intended use case is to support collectors running HotStuff within Flow. Specifically, the collectors produce their own blocks, independently of the Consensus Nodes (aka the main consensus). Given a collector block, some logic is required to find the main consensus block that determines the valid collector-HotStuff participants.

type Communicator

type Communicator interface {

	// SendVote sends a vote for the given parameters to the specified recipient.
	SendVote(blockID flow.Identifier, view uint64, sigData []byte, recipientID flow.Identifier) error

	// BroadcastProposal broadcasts the given block proposal to all actors of
	// the consensus process.
	BroadcastProposal(proposal *flow.Header) error

	// BroadcastProposalWithDelay broadcasts the given block proposal to all actors of
	// the consensus process.
	// delay is to hold the proposal before broadcasting it. Useful to control the block production rate.
	BroadcastProposalWithDelay(proposal *flow.Header, delay time.Duration) error
}

Communicator is the component that allows the HotStuff core algorithm to communicate with the other actors of the consensus process.

type Consumer

type Consumer interface {
	FinalizationConsumer

	// OnEventProcessed notifications are produced by the EventHandler when it is done processing
	// and hands control back to the EventLoop to wait for the next event.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnEventProcessed()

	// OnReceiveVote notifications are produced by the EventHandler when it starts processing a vote.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnReceiveVote(currentView uint64, vote *model.Vote)

	// OnReceiveProposal notifications are produced by the EventHandler when it starts processing a block.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnReceiveProposal(currentView uint64, proposal *model.Proposal)

	// OnEnteringView notifications are produced by the EventHandler when it enters a new view.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnEnteringView(viewNumber uint64, leader flow.Identifier)

	// OnQcTriggeredViewChange notifications are produced by PaceMaker when it moves to a new view
	// based on processing a QC. The arguments specify the qc (first argument), which triggered
	// the view change, and the newView to which the PaceMaker transitioned (second argument).
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnQcTriggeredViewChange(qc *flow.QuorumCertificate, newView uint64)

	// OnProposingBlock notifications are produced by the EventHandler when the replica, as
	// leader for the respective view, proposes a block.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnProposingBlock(proposal *model.Proposal)

	// OnVoting notifications are produced by the EventHandler when the replica votes for a block.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnVoting(vote *model.Vote)

	// OnQcConstructedFromVotes notifications are produced by the VoteAggregator
	// component, whenever it constructs a QC from votes.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnQcConstructedFromVotes(curView uint64, qc *flow.QuorumCertificate)

	// OnStartingTimeout notifications are produced by PaceMaker. Such a notification indicates that the
	// PaceMaker is now waiting for the system to (receive and) process blocks or votes.
	// The specific timeout type is contained in the TimerInfo.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnStartingTimeout(*model.TimerInfo)

	// OnReachedTimeout notifications are produced by PaceMaker. Such a notification indicates that the
	// PaceMaker's timeout was processed by the system. The specific timeout type is contained in the TimerInfo.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnReachedTimeout(timeout *model.TimerInfo)

	// OnQcIncorporated notifications are produced by ForkChoice
	// whenever a quorum certificate is incorporated into the consensus state.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnQcIncorporated(*flow.QuorumCertificate)

	// OnForkChoiceGenerated notifications are produced by ForkChoice whenever a fork choice is generated.
	// The arguments specify the view (first argument) of the block which is to be built and the
	// quorum certificate (second argument) that is supposed to be in the block.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnForkChoiceGenerated(uint64, *flow.QuorumCertificate)

	// OnDoubleVotingDetected notifications are produced by the Vote Aggregation logic
	// whenever a double voting (same voter voting for different blocks at the same view) was detected.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnDoubleVotingDetected(*model.Vote, *model.Vote)

	// OnInvalidVoteDetected notifications are produced by the Vote Aggregation logic
	// whenever an invalid vote was detected.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnInvalidVoteDetected(*model.Vote)

	// OnVoteForInvalidBlockDetected notifications are produced by the Vote Aggregation logic
	// whenever a vote for an invalid proposal was detected.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnVoteForInvalidBlockDetected(vote *model.Vote, invalidProposal *model.Proposal)
}

Consumer consumes outbound notifications produced by HotStuff and its components. Notifications are consensus-internal state changes which are potentially relevant to the larger node in which HotStuff is running. The notifications are emitted in the order in which the HotStuff algorithm makes the respective steps.

Implementations must:

  • be concurrency safe
  • be non-blocking
  • handle repetition of the same events (with some processing overhead).

type DKG

type DKG interface {
	protocol.DKG
}

type EventHandler

type EventHandler interface {

	// OnQCConstructed processes a valid qc constructed by internal vote aggregator.
	OnQCConstructed(qc *flow.QuorumCertificate) error

	// OnReceiveProposal processes a block proposal received from another HotStuff
	// consensus participant.
	OnReceiveProposal(proposal *model.Proposal) error

	// OnLocalTimeout will check if there was a local timeout.
	OnLocalTimeout() error

	// TimeoutChannel returns a channel that sends a signal on timeout.
	TimeoutChannel() <-chan time.Time

	// Start starts the event handler.
	Start() error
}

EventHandler runs a state machine to process proposals, QCs, and local timeouts.

type EventLoop

type EventLoop interface {
	module.HotStuff

	// SubmitTrustedQC accepts QC for processing. QC will be dispatched on worker thread.
	// CAUTION: QC is trusted (_not_ validated again), as it's built by ourselves.
	SubmitTrustedQC(qc *flow.QuorumCertificate)
}

EventLoop buffers and processes incoming proposals and QCs.

type FinalizationConsumer

type FinalizationConsumer interface {

	// OnBlockIncorporated notifications are produced by the Finalization Logic
	// whenever a block is incorporated into the consensus state.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnBlockIncorporated(*model.Block)

	// OnFinalizedBlock notifications are produced by the Finalization Logic whenever
	// a block has been finalized. They are emitted in the order the blocks are finalized.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnFinalizedBlock(*model.Block)

	// OnDoubleProposeDetected notifications are produced by the Finalization Logic
	// whenever a double block proposal (equivocation) was detected.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnDoubleProposeDetected(*model.Block, *model.Block)
}

FinalizationConsumer consumes outbound notifications produced by the finalization logic. Notifications represent finalization-specific state changes which are potentially relevant to the larger node. The notifications are emitted in the order in which the finalization algorithm makes the respective steps.

Implementations must:

  • be concurrency safe
  • be non-blocking
  • handle repetition of the same events (with some processing overhead).

type FollowerLogic

type FollowerLogic interface {
	// FinalizedBlock returns the latest finalized block
	FinalizedBlock() *model.Block

	// AddBlock processes a block proposal
	AddBlock(proposal *model.Proposal) error
}

FollowerLogic runs a state machine to process proposals

type FollowerLoop

type FollowerLoop struct {
	// contains filtered or unexported fields
}

FollowerLoop implements the event loop for processing blocks as a HotStuff follower.

func NewFollowerLoop

func NewFollowerLoop(log zerolog.Logger, followerLogic FollowerLogic) (*FollowerLoop, error)

NewFollowerLoop creates an instance of FollowerLoop.

func (*FollowerLoop) Done

func (fl *FollowerLoop) Done() <-chan struct{}

Done implements interface module.ReadyDoneAware

func (*FollowerLoop) Ready

func (fl *FollowerLoop) Ready() <-chan struct{}

Ready implements interface module.ReadyDoneAware. Calling this method starts the FollowerLoop's internal processing loop. Multiple calls are handled gracefully and the follower will only start once.

func (*FollowerLoop) SubmitProposal

func (fl *FollowerLoop) SubmitProposal(proposalHeader *flow.Header, parentView uint64) <-chan struct{}

SubmitProposal feeds a new block proposal (header) into the FollowerLoop. This method blocks until the proposal is accepted into the event queue.

Block proposals must be submitted in order, i.e. a proposal's parent must have been previously processed by the FollowerLoop.

type Forks

type Forks interface {
	ForksReader

	// AddBlock adds the block to Forks. This might cause an update of the finalized block
	// and pruning of older blocks.
	// Handles duplicated addition of blocks (at the potential cost of additional computation time).
	// PREREQUISITE:
	// Forks must be able to connect `block` to its latest finalized block
	// (without missing interim ancestors). Otherwise, an error is raised.
	// Might error with ByzantineThresholdExceededError (e.g. if the new block would cause
	// finalization of conflicting forks).
	AddBlock(block *model.Block) error

	// AddQC adds a quorum certificate to Forks.
	// Will error in case the block referenced by the qc is unknown.
	// Might error with ByzantineThresholdExceededError (e.g. if two conflicting QCs for the
	// same view are found)
	AddQC(qc *flow.QuorumCertificate) error

	// MakeForkChoice prompts the ForkChoice to generate a fork choice for the
	// current view `curView`. The fork choice is a qc that should be used for
	// building the primary's block.
	// It returns a qc and the block that the qc is pointing to.
	//
	// PREREQUISITE:
	// ForkChoice cannot generate ForkChoices retroactively for past views.
	// If used correctly, MakeForkChoice should only ever have processed QCs
	// whose view is smaller than curView, for the following reason:
	// Processing a QC with view v should result in the PaceMaker being in
	// view v+1 or larger. Hence, given that the current View is curView,
	// all QCs should have view < curView.
	// To prevent accidental misusage, ForkChoices will error if `curView`
	// is smaller than the view of any qc ForkChoice has seen.
	// Note that tracking the view of the newest qc is for safety purposes
	// and _independent_ of the fork-choice rule.
	MakeForkChoice(curView uint64) (*flow.QuorumCertificate, *model.Block, error)
}

Forks encapsulates the finalization logic and the fork choice rule in one component. Forks maintains an in-memory data structure of all blocks whose view number is greater than or equal to the view of the latest finalized block. The latest finalized block is defined as the finalized block with the largest view number. When adding blocks, Forks automatically updates its internal state (including finalized blocks). Furthermore, blocks whose view number is smaller than that of the latest finalized block are pruned automatically.

PREREQUISITES: Forks expects that only blocks are added that can be connected to its latest finalized block (without missing interim ancestors). If this condition is violated, Forks will raise an error and ignore the block.
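A hypothetical happy-path usage by a primary, based solely on the interface above: ingest a block and its QC, then request the fork choice when building a proposal for curView. buildOnForkChoice is an illustrative helper, not part of this package:

// buildOnForkChoice adds a block and a QC to Forks and returns the fork
// choice that a new proposal for curView should build on.
func buildOnForkChoice(forks hotstuff.Forks, block *model.Block, qc *flow.QuorumCertificate, curView uint64) (*flow.QuorumCertificate, *model.Block, error) {
	if err := forks.AddBlock(block); err != nil {
		return nil, nil, fmt.Errorf("could not add block to forks: %w", err)
	}
	if err := forks.AddQC(qc); err != nil {
		return nil, nil, fmt.Errorf("could not add qc to forks: %w", err)
	}
	// The returned QC is the one the primary's new block should extend.
	return forks.MakeForkChoice(curView)
}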

type ForksReader

type ForksReader interface {

	// IsSafeBlock returns true if block is safe to vote for
	// (according to the definition in https://arxiv.org/abs/1803.05069v6).
	//
	// In the current architecture, the block is stored _before_ evaluating its safety.
	// Consequently, IsSafeBlock accepts only known, valid blocks. Should a block be
	// unknown (not previously added to Forks) or violate some consistency requirements,
	// IsSafeBlock errors. All errors are fatal.
	IsSafeBlock(block *model.Block) bool

	// GetBlocksForView returns all BlockProposals at the given view number.
	GetBlocksForView(view uint64) []*model.Block

	// GetBlock returns (BlockProposal, true) if the block with the specified
	// id was found (nil, false) otherwise.
	GetBlock(id flow.Identifier) (*model.Block, bool)

	// FinalizedView returns the largest view number where a finalized block is known
	FinalizedView() uint64

	// FinalizedBlock returns the finalized block with the largest view number
	FinalizedBlock() *model.Block
}

ForksReader only reads the forks' state

type OnQCCreated added in v0.23.9

type OnQCCreated func(*flow.QuorumCertificate)

OnQCCreated is a callback which will be used by VoteCollector to submit a QC when it's able to create it

type PaceMaker

type PaceMaker interface {

	// CurView returns the current view.
	CurView() uint64

	// UpdateCurViewWithQC will check if the given QC will allow PaceMaker to fast
	// forward to QC.view+1. If PaceMaker incremented the current View, a NewViewEvent will be returned.
	UpdateCurViewWithQC(qc *flow.QuorumCertificate) (*model.NewViewEvent, bool)

	// UpdateCurViewWithBlock will check if the given block will allow PaceMaker to fast forward
	// to the BlockProposal's view. If yes, the PaceMaker will update its internal value for
	// CurView and return a NewViewEvent.
	//
	// The parameter `nextPrimary` indicates to the PaceMaker whether or not this replica is the
	// primary for the NEXT view taking block.view as reference.
	// True corresponds to this replica being the next primary.
	UpdateCurViewWithBlock(block *model.Block, isLeaderForNextView bool) (*model.NewViewEvent, bool)

	// TimeoutChannel returns the timeout channel for the CURRENTLY ACTIVE timeout.
	// Each time the pace maker starts a new timeout, this channel is replaced.
	TimeoutChannel() <-chan time.Time

	// OnTimeout is called when a timeout, which was previously created by the PaceMaker, has
	// looped through the event loop. When used correctly, OnTimeout will always return
	// a NewViewEvent.
	// It is the responsibility of the calling code to ensure that NO STALE timeouts are
	// delivered to the PaceMaker.
	OnTimeout() *model.NewViewEvent

	// Start starts the PaceMaker (i.e. the timeout for the configured starting value for view).
	Start()

	// BlockRateDelay returns the delay used to control the block production rate.
	BlockRateDelay() time.Duration
}

PaceMaker for HotStuff. The component is passive in the sense that it only reacts to method calls; the PaceMaker does not perform state transitions on its own. Timeouts are emitted through channels. Each timeout has its own dedicated channel, which is garbage collected after the respective state has been passed. It is the EventHandler's responsibility to pick up timeouts from the currently active TimeoutChannel, process them first, and subsequently inform the PaceMaker about processing the timeout. Specifically, the intended usage pattern for the TimeoutChannels is as follows:

  • Each time the PaceMaker starts a new timeout, it creates a new TimeoutChannel.
  • The channel for the CURRENTLY ACTIVE timeout is returned by PaceMaker.TimeoutChannel().
  • Each time the EventHandler processes an event, it might call into the PaceMaker, potentially resulting in a state transition and the PaceMaker starting a new timeout.
  • Hence, after processing any event, the EventHandler should retrieve the current TimeoutChannel from the PaceMaker.

For example:

for {
	timeoutChannel := el.eventHandler.TimeoutChannel()
	select {
	case <-timeoutChannel:
		el.eventHandler.OnLocalTimeout()
	case <other events>
	}
}

type Packer added in v0.23.9

type Packer interface {
	// Pack serializes the provided BlockSignatureData into a precursor format of a QC.
	// blockID is the block that the aggregated signature is for.
	// sig is the aggregated signature data.
	// Expected error returns during normal operations:
	//  * none; all errors are symptoms of inconsistent input data or corrupted internal state.
	Pack(blockID flow.Identifier, sig *BlockSignatureData) (signerIndices []byte, sigData []byte, err error)

	// Unpack de-serializes the provided signature data.
	// sig is the aggregated signature data
	// It returns:
	//  - (sigData, nil) if successfully unpacked the signature data
	//  - (nil, model.InvalidFormatError) if failed to unpack the signature data
	Unpack(signerIdentities flow.IdentityList, sigData []byte) (*BlockSignatureData, error)
}

Packer packs aggregated signature data into raw bytes to be used in the block header.
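Based on the interface above, packing and then unpacking should round-trip the aggregated signature data. The helper below is an illustrative sketch, not part of this package:

// packUnpack packs the BlockSignatureData and immediately unpacks the result.
// signers must be the identities corresponding to the packed signer indices.
func packUnpack(p hotstuff.Packer, blockID flow.Identifier, sig *hotstuff.BlockSignatureData, signers flow.IdentityList) (*hotstuff.BlockSignatureData, error) {
	_, sigData, err := p.Pack(blockID, sig)
	if err != nil {
		return nil, err // symptom of inconsistent input or corrupted internal state
	}
	return p.Unpack(signers, sigData)
}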

type Persister

type Persister interface {

	// GetStarted will retrieve the last started view.
	GetStarted() (uint64, error)

	// GetVoted will retrieve the last voted view.
	GetVoted() (uint64, error)

	// PutStarted persists the last started view.
	PutStarted(view uint64) error

	// PutVoted persists the last voted view.
	PutVoted(view uint64) error
}

Persister is responsible for persisting state we need to bootstrap after a restart or crash.
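A sketch of how the persisted values could be read back during bootstrapping; recoverState is a hypothetical helper, the actual recovery logic lives elsewhere in the consensus module:

// recoverState restores the last started and last voted views after a crash,
// so the replica neither re-enters an older view nor votes twice in a view.
func recoverState(p hotstuff.Persister) (lastStarted, lastVoted uint64, err error) {
	if lastStarted, err = p.GetStarted(); err != nil {
		return 0, 0, err
	}
	if lastVoted, err = p.GetVoted(); err != nil {
		return 0, 0, err
	}
	return lastStarted, lastVoted, nil
}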

type QCCreatedConsumer added in v0.23.9

type QCCreatedConsumer interface {
	// OnQcConstructedFromVotes notifications are produced by the VoteAggregator
	// component, whenever it constructs a QC from votes.
	// Prerequisites:
	// Implementation must be concurrency safe; Non-blocking;
	// and must handle repetition of the same events (with some processing overhead).
	OnQcConstructedFromVotes(*flow.QuorumCertificate)
}

QCCreatedConsumer consumes outbound notifications produced by HotStuff and its components. Notifications are consensus-internal state changes which are potentially relevant to the larger node in which HotStuff is running. The notifications are emitted in the order in which the HotStuff algorithm makes the respective steps.

Implementations must:

  • be concurrency safe
  • be non-blocking
  • handle repetition of the same events (with some processing overhead).

type RandomBeaconInspector added in v0.23.9

type RandomBeaconInspector interface {
	// Verify verifies the signature share under the signer's public key and the message agreed upon.
	// The function is thread-safe and wait-free (i.e. allowing arbitrary many routines to
	// execute the business logic, without interfering with each other).
	// It allows concurrent verification of the given signature.
	// Returns :
	//  - model.InvalidSignerError if signerIndex is invalid
	//  - model.ErrInvalidSignature if signerIndex is valid but signature is cryptographically invalid
	//  - other error if there is an unexpected exception.
	Verify(signerIndex int, share crypto.Signature) error

	// TrustedAdd adds a share to the internal signature shares store.
	// There is no pre-check of the signature's validity _before_ adding it.
	// It is the caller's responsibility to make sure the signature was previously verified.
	// Nevertheless, the implementation guarantees safety (only correct threshold signatures
	// are returned) through a post-check (verifying the threshold signature
	// _after_ reconstruction before returning it).
	// The function is thread-safe but locks its internal state, thereby permitting only
	// one routine at a time to add a signature.
	// Returns:
	//  - (true, nil) if the signature has been added, and enough shares have been collected.
	//  - (false, nil) if the signature has been added, but not enough shares were collected.
	//  - (false, error) if there is any exception adding the signature share.
	//      - model.InvalidSignerError if signerIndex is invalid (out of the valid range)
	//  	- model.DuplicatedSignerError if the signer has been already added
	//      - other error if there is an unexpected exception.
	TrustedAdd(signerIndex int, share crypto.Signature) (enoughshares bool, exception error)

	// EnoughShares indicates whether enough shares have been accumulated in order to reconstruct
	// a group signature. The function is thread-safe.
	EnoughShares() bool

	// Reconstruct reconstructs the group signature. The function is thread-safe but locks
	// its internal state, thereby permitting only one routine at a time.
	//
	// Returns:
	// - (signature, nil) if no error occurred
	// - (nil, model.InsufficientSignaturesError) if not enough shares were collected
	// - (nil, model.InvalidSignatureIncluded) if at least one collected share does not serialize to a valid BLS signature,
	//    or if the constructed signature failed to verify against the group public key and stored message. This post-verification
	//    is required  for safety, as `TrustedAdd` allows adding invalid signatures.
	// - (nil, error) for any other unexpected error.
	Reconstruct() (crypto.Signature, error)
}

RandomBeaconInspector encapsulates all methods needed by a HotStuff leader to validate the beacon votes and reconstruct a beacon signature. The random beacon methods are based on a threshold signature scheme.

type RandomBeaconReconstructor added in v0.23.9

type RandomBeaconReconstructor interface {
	// Verify verifies the signature share under the signer's public key and the message agreed upon.
	// The function is thread-safe and wait-free (i.e. allowing arbitrary many routines to
	// execute the business logic, without interfering with each other).
	// It allows concurrent verification of the given signature.
	// Returns :
	//  - model.InvalidSignerError if signerIndex is invalid
	//  - model.ErrInvalidSignature if signerID is valid but signature is cryptographically invalid
	//  - other error if there is an unexpected exception.
	Verify(signerID flow.Identifier, sig crypto.Signature) error

	// TrustedAdd adds a share to the internal signature shares store.
	// There is no pre-check of the signature's validity _before_ adding it.
	// It is the caller's responsibility to make sure the signature was previously verified.
	// Nevertheless, the implementation guarantees safety (only correct threshold signatures
	// are returned) through a post-check (verifying the threshold signature
	// _after_ reconstruction before returning it).
	// The function is thread-safe but locks its internal state, thereby permitting only
	// one routine at a time to add a signature.
	// Returns:
	//  - (true, nil) if the signature has been added, and enough shares have been collected.
	//  - (false, nil) if the signature has been added, but not enough shares were collected.
	//  - (false, error) if there is any exception adding the signature share.
	//      - model.InvalidSignerError if signerIndex is invalid (out of the valid range)
	//  	- model.DuplicatedSignerError if the signer has been already added
	//      - other error if there is an unexpected exception.
	TrustedAdd(signerID flow.Identifier, sig crypto.Signature) (EnoughShares bool, err error)

	// EnoughShares indicates whether enough shares have been accumulated in order to reconstruct
	// a group signature. The function is thread-safe.
	EnoughShares() bool

	// Reconstruct reconstructs the group signature. The function is thread-safe but locks
	// its internal state, thereby permitting only one routine at a time.
	//
	// Returns:
	// - (signature, nil) if no error occurred
	// - (nil, model.InsufficientSignaturesError) if not enough shares were collected
	// - (nil, model.InvalidSignatureIncluded) if at least one collected share does not serialize to a valid BLS signature,
	//    or if the constructed signature failed to verify against the group public key and stored message. This post-verification
	//    is required  for safety, as `TrustedAdd` allows adding invalid signatures.
	// - (nil, error) for any other unexpected error.
	Reconstruct() (crypto.Signature, error)
}

RandomBeaconReconstructor encapsulates all methods needed by a HotStuff leader to validate the beacon votes and reconstruct a beacon signature. The random beacon methods are based on a threshold signature scheme.

type Signer

type Signer interface {
	// CreateProposal creates a proposal for the given block. No error returns
	// are expected during normal operations (incl. presence of byz. actors).
	CreateProposal(block *model.Block) (*model.Proposal, error)

	// CreateVote creates a vote for the given block. No error returns are
	// expected during normal operations (incl. presence of byz. actors).
	CreateVote(block *model.Block) (*model.Vote, error)
}

Signer is responsible for creating votes and proposals for a given block.

type Validator

type Validator interface {

	// ValidateQC checks the validity of a QC for a given block.
	// During normal operations, the following error returns are expected:
	//  * model.InvalidBlockError if the QC is invalid
	ValidateQC(qc *flow.QuorumCertificate, block *model.Block) error

	// ValidateProposal checks the validity of a proposal.
	// During normal operations, the following error returns are expected:
	//  * model.InvalidBlockError if the block is invalid
	ValidateProposal(proposal *model.Proposal) error

	// ValidateVote checks the validity of a vote for a given block and
	// returns the full entity for the voter. During normal operations,
	// the following errors are expected:
	//  * model.InvalidVoteError for invalid votes
	ValidateVote(vote *model.Vote, block *model.Block) (*flow.Identity, error)
}

Validator provides functions to validate QC, proposals and votes.

type Verifier

type Verifier interface {

	// VerifyVote checks the cryptographic validity of a vote's `SigData` w.r.t.
	// the given block. It is the responsibility of the calling code to ensure
	// that `voter` is authorized to vote.
	// Return values:
	//  * nil if `sigData` is cryptographically valid
	//  * model.InvalidFormatError if the signature has an incompatible format.
	//  * model.ErrInvalidSignature if the signature is invalid
	//  * model.InvalidSignerError is only relevant for extended signature schemes,
	//    where special signing authority is only given to a _subset_ of consensus
	//    participants (e.g. random beacon). In case a participant signed despite not
	//    being authorized, an InvalidSignerError is returned.
	//  * unexpected errors should be treated as symptoms of bugs or uncovered
	//    edge cases in the logic (i.e. as fatal)
	VerifyVote(voter *flow.Identity, sigData []byte, block *model.Block) error

	// VerifyQC checks the cryptographic validity of a QC's `SigData` w.r.t. the
	// given block. It is the responsibility of the calling code to ensure that
	// all `signers` are authorized, without duplicates.
	// Return values:
	//  * nil if `sigData` is cryptographically valid
	//  * model.InvalidFormatError if `sigData` has an incompatible format
	//  * model.InsufficientSignaturesError if `signers` is empty.
	//    Depending on the order of checks in the higher-level logic this error might
	//    be an indicator of an external byzantine input or an internal bug.
	//  * model.ErrInvalidSignature if a signature is invalid
	//  * model.InvalidSignerError is only relevant for extended signature schemes,
	//    where special signing authority is only given to a _subset_ of consensus
	//    participants (e.g. random beacon). In case a participant signed despite not
	//    being authorized, an InvalidSignerError is returned.
	//  * unexpected errors should be treated as symptoms of bugs or uncovered
	//	  edge cases in the logic (i.e. as fatal)
	VerifyQC(signers flow.IdentityList, sigData []byte, block *model.Block) error
}

Verifier is the component responsible for the cryptographic integrity of votes, proposals and QCs against the block they are signing. Overall, there are two criteria for the validity of a vote or QC:

(1) the signer ID(s) must correspond to authorized consensus participants, and (2) the signature must be cryptographically valid.

Note that Verifier only implements (2). This API design allows us to decouple (i) the common logic for checking that a super-majority of the consensus committee voted from (ii) the handling of combined staking+RandomBeacon votes (consensus nodes) vs. staking-only votes (collector nodes).

On the one hand, this API design makes code less concise, as the two checks are now distributed over API boundaries. On the other hand, we can avoid repeated Identity lookups in the implementation, which increases performance.

type VerifyingVoteProcessor added in v0.23.9

type VerifyingVoteProcessor interface {
	VoteProcessor

	// Block returns the block that the processor is collecting votes for. A transition to the
	// VerifyingVoteCollector can occur only after the block proposal has been received, so this information is available.
	Block() *model.Block
}

VerifyingVoteProcessor is a VoteProcessor that attempts to construct a QC for the given block.

type VoteAggregator

type VoteAggregator interface {
	module.ReadyDoneAware
	module.Startable

	// AddVote verifies and aggregates a vote.
	// The voting block could either be known or unknown.
	// If the voting block is unknown, the vote won't be processed until AddBlock is called with the block.
	// This method can be called concurrently, votes will be queued and processed asynchronously.
	AddVote(vote *model.Vote)

	// AddBlock notifies the VoteAggregator that it should start processing votes for the given block.
	// AddBlock is a _synchronous_ call (logic is executed by the calling go routine). It also verifies
	// validity of the proposer's vote for its own block.
	// Expected error returns during normal operations:
	// * model.InvalidBlockError if the proposer's vote for its own block is invalid
	// * mempool.DecreasingPruningHeightError if the block's view has already been pruned
	AddBlock(block *model.Proposal) error

	// InvalidBlock notifies the VoteAggregator about an invalid proposal, so that it
	// can process votes for the invalid block and slash the voters. Expected error
	// returns during normal operations:
	// * mempool.DecreasingPruningHeightError if proposal's view has already been pruned
	InvalidBlock(block *model.Proposal) error

	// PruneUpToView deletes all votes _below_ the given view, as well as
	// related indices. We only retain and process votes whose view is equal to or larger
	// than `lowestRetainedView`. If `lowestRetainedView` is smaller than the
	// previous value, the previous value is kept and the method call is a NoOp.
	PruneUpToView(view uint64)
}

VoteAggregator verifies and aggregates votes to build QCs. When enough votes have been collected, it builds a QC and sends it to the EventLoop. VoteAggregator also detects protocol violations, including invalid votes, double voting, etc., and notifies a HotStuff consumer for slashing.

type VoteCollector added in v0.23.9

type VoteCollector interface {
	// ProcessBlock performs validation of the block signature and processes the block with the respective collector.
	// Calling this function will mark conflicting collectors as stale and change the state of valid collectors.
	// It returns nil if the block is valid.
	// It returns model.InvalidBlockError if the block is invalid.
	// It returns another error if there is an exception processing the block.
	ProcessBlock(block *model.Proposal) error

	// AddVote adds a vote to the collector.
	// It returns an error if the signature is invalid.
	// When enough votes have been added to produce a QC, the QC will be created asynchronously, and
	// passed to EventLoop through a callback.
	AddVote(vote *model.Vote) error

	// RegisterVoteConsumer registers a VoteConsumer. Upon registration, the collector
	// feeds all cached votes into the consumer in the order they arrived.
	// CAUTION, VoteConsumer implementations must be
	//  * NON-BLOCKING and consume the votes without noteworthy delay, and
	//  * CONCURRENCY SAFE
	RegisterVoteConsumer(consumer VoteConsumer)

	// View returns the view that this instance is collecting votes for.
	// This method is useful when adding the newly created vote collector to the vote collectors map.
	View() uint64

	// Status returns the status of the vote collector
	Status() VoteCollectorStatus
}

VoteCollector collects all votes for a specified view. On the happy path, it generates a QC when enough votes have been collected. The VoteCollector internally delegates the vote-format specific processing to the VoteProcessor.

type VoteCollectorStatus added in v0.23.9

type VoteCollectorStatus int

VoteCollectorStatus indicates the VoteCollector's status. It has three different statuses.

const (
	// VoteCollectorStatusCaching is for the status when the block has not been received.
	// The vote collector in this status will cache all the votes without verifying them
	VoteCollectorStatusCaching VoteCollectorStatus = iota

	// VoteCollectorStatusVerifying is for the status when the block has been received,
	// and is able to process all votes for it.
	VoteCollectorStatusVerifying

	// VoteCollectorStatusInvalid is for the status when the block has been verified and
	// is invalid. All votes to this block will be collected to slash the voter.
	VoteCollectorStatusInvalid
)

func (VoteCollectorStatus) String added in v0.23.9

func (ps VoteCollectorStatus) String() string

type VoteCollectors added in v0.23.9

type VoteCollectors interface {
	module.ReadyDoneAware
	module.Startable

	// GetOrCreateCollector retrieves the hotstuff.VoteCollector for the specified
	// view or creates one if none exists.
	// When creating a vote collector, the view is used to determine the epoch, and the epoch determines
	// the DKG, which in turn determines the random beacon committee for the random beacon signer object.
	// It returns:
	//  -  (collector, true, nil) if no collector could be found for the view, and a new collector was created.
	//  -  (collector, false, nil) if a collector for the view already exists.
	//  -  (nil, false, error) if running into any exception creating the vote collector state machine
	// Expected error returns during normal operations:
	//  * mempool.DecreasingPruningHeightError
	GetOrCreateCollector(view uint64) (collector VoteCollector, created bool, err error)

	// PruneUpToView prunes the vote collectors with views _below_ the given value, i.e.
	// we only retain and process collectors whose view is equal to or larger than `lowestRetainedView`.
	// If `lowestRetainedView` is smaller than the previous value, the previous value is
	// kept and the method call is a NoOp.
	PruneUpToView(lowestRetainedView uint64)
}

VoteCollectors is an interface which allows VoteAggregator to interact with collectors structured by view and blockID. Implementations of this interface are responsible for state transitions of `VoteCollector`s and pruning of stale and outdated collectors by view.

type VoteConsumer added in v0.23.9

type VoteConsumer func(vote *model.Vote)

VoteConsumer consumes all votes for one specific view. It is registered with the `VoteCollector` for the respective view. Upon registration, the `VoteCollector` feeds votes into the consumer in the order they are received (already cached votes as well as votes received in the future). Only votes that pass de-duplication and equivocation detection are passed on. CAUTION, VoteConsumer implementations must be

  • NON-BLOCKING and consume the votes without noteworthy delay, and
  • CONCURRENCY SAFE

type VoteProcessor added in v0.23.9

type VoteProcessor interface {
	// Process performs processing of a single vote. This function is safe to call from multiple goroutines.
	// Expected error returns during normal operations:
	// * VoteForIncompatibleBlockError - submitted vote for incompatible block
	// * VoteForIncompatibleViewError - submitted vote for incompatible view
	// * model.InvalidVoteError - submitted vote with invalid signature
	// * model.DuplicatedSignerError - vote from a signer whose vote was previously already processed
	// All other errors should be treated as exceptions.
	Process(vote *model.Vote) error

	// Status returns the status of the vote processor
	Status() VoteCollectorStatus
}

VoteProcessor processes votes. It implements the vote-format specific processing logic. Depending on their implementation, a VoteProcessor might drop votes or attempt to construct a QC.

type VoteProcessorFactory added in v0.23.9

type VoteProcessorFactory interface {
	// Create instantiates a VerifyingVoteProcessor for processing votes for a specific proposal.
	// The caller can be sure that the proposal's vote was successfully verified and processed.
	// Expected error returns during normal operations:
	// * model.InvalidBlockError - proposal has invalid proposer vote
	Create(log zerolog.Logger, proposal *model.Proposal) (VerifyingVoteProcessor, error)
}

VoteProcessorFactory is a factory that creates verifying vote processors for a specific proposal. Depending on the factory implementation, it returns processors for the main consensus or for collection clusters.

type Voter

type Voter interface {
	// ProduceVoteIfVotable takes a block and current view, and decides whether to vote for the block.
	// Returns:
	//  * (vote, nil): On the _first_ block for the current view that is safe to vote for.
	//    Subsequently, voter does _not_ vote for any other block with the same (or lower) view.
	//  * (nil, model.NoVoteError): If the voter decides that it does not want to vote for the given block.
	//    This is a sentinel error and _expected_ during normal operation.
	// All other errors are unexpected and potential symptoms of uncovered edge cases or corrupted internal state (fatal).
	ProduceVoteIfVotable(block *model.Block, curView uint64) (*model.Vote, error)
}

Voter produces votes for the given block according to voting rules.

type WeightedSignatureAggregator added in v0.23.9

type WeightedSignatureAggregator interface {
	// Verify verifies the signature under the stored public keys and message.
	// Expected errors during normal operations:
	//  - model.InvalidSignerError if signerID is invalid (not a consensus participant)
	//  - model.ErrInvalidSignature if signerID is valid but signature is cryptographically invalid
	Verify(signerID flow.Identifier, sig crypto.Signature) error

	// TrustedAdd adds a signature to the internal set of signatures and adds the signer's
	// weight to the total collected weight, iff the signature is _not_ a duplicate. The
	// total weight of all collected signatures (excluding duplicates) is returned regardless
	// of any returned error.
	// Expected errors during normal operations:
	//  - model.InvalidSignerError if signerID is invalid (not a consensus participant)
	//  - model.DuplicatedSignerError if the signer has been already added
	TrustedAdd(signerID flow.Identifier, sig crypto.Signature) (totalWeight uint64, exception error)

	// TotalWeight returns the total weight presented by the collected signatures.
	TotalWeight() uint64

	// Aggregate aggregates the signatures and returns the aggregated signature.
	// The function performs a final verification and errors if the aggregated
	// signature is not valid. This is required for the function safety since
	// `TrustedAdd` allows adding invalid signatures.
	// Expected errors during normal operations:
	//  - model.InsufficientSignaturesError if no signatures have been added yet
	//  - model.InvalidSignatureIncludedError if some signature(s), included via TrustedAdd, are invalid
	Aggregate() (flow.IdentifierList, []byte, error)
}

WeightedSignatureAggregator aggregates signatures of the same signature scheme and the same message from different signers. The public keys and message are agreed upon upfront. It is also recommended to only aggregate signatures generated with keys representing equivalent security-bit level. Furthermore, a weight [unsigned int64] is assigned to each signer ID. The WeightedSignatureAggregator internally tracks the total weight of all collected signatures. Implementations must be concurrency safe.
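A hedged usage sketch tying the aggregator to the QC weight threshold. addAndMaybeAggregate is illustrative and not part of this package; threshold would typically come from ComputeWeightThresholdForBuildingQC:

// addAndMaybeAggregate verifies a signature, adds it to the aggregator, and
// aggregates once the accumulated weight reaches the threshold for building a QC.
func addAndMaybeAggregate(agg hotstuff.WeightedSignatureAggregator, signerID flow.Identifier, sig crypto.Signature, threshold uint64) (flow.IdentifierList, []byte, bool, error) {
	if err := agg.Verify(signerID, sig); err != nil {
		return nil, nil, false, err // invalid signer or cryptographically invalid signature
	}
	totalWeight, err := agg.TrustedAdd(signerID, sig)
	if err != nil {
		return nil, nil, false, err // e.g. duplicated signer
	}
	if totalWeight < threshold {
		return nil, nil, false, nil // not enough weight collected yet
	}
	signers, aggSig, err := agg.Aggregate() // includes final validity check
	if err != nil {
		return nil, nil, false, err
	}
	return signers, aggSig, true, nil
}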

type Workerpool added in v0.23.9

type Workerpool interface {
	Workers

	// StopWait stops the worker pool and waits for all queued tasks to
	// complete.  No additional tasks may be submitted, but all pending tasks are
	// executed by workers before this function returns.
	StopWait()
}

Workerpool extends the Workers interface with the functionality to terminate the workers.

type Workers added in v0.23.9

type Workers interface {
	// Submit enqueues a function for a worker to execute. Submit will not block
	// regardless of the number of tasks submitted. Each task is immediately
	// given to an available worker or queued otherwise. Tasks are processed in
	// FIFO order.
	Submit(task func())
}

Workers queues and processes submitted tasks. We explicitly do not expose any functionality to terminate the worker pool.

