Documentation ¶
Index ¶
- type AggregatingSigner
- type AggregatingVerifier
- type BlockRequester
- type Builder
- type CacheMetrics
- type ChunkAssigner
- type ChunkVerifier
- type CleanerMetrics
- type ClusterRootQCVoter
- type ColdStuff
- type CollectionMetrics
- type ComplianceMetrics
- type ConsensusMetrics
- type Engine
- type EngineMetrics
- type ExecutionMetrics
- type Finalizer
- type HotStuff
- type HotStuffFollower
- type HotstuffMetrics
- type LedgerMetrics
- type Local
- type MempoolMetrics
- type Merger
- type Network
- type NetworkMetrics
- type PendingBlockBuffer
- type PendingClusterBlockBuffer
- type PingMetrics
- type ProviderMetrics
- type QCContractClient
- type ReadyDoneAware
- type ReceiptValidator
- type Requester
- type RuntimeMetrics
- type SealValidator
- type Signer
- type SyncCore
- type ThresholdSigner
- type ThresholdVerifier
- type Tracer
- type TransactionMetrics
- type VerificationMetrics
- type Verifier
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AggregatingSigner ¶
type AggregatingSigner interface {
    AggregatingVerifier
    Sign(msg []byte) (crypto.Signature, error)
    Aggregate(sigs []crypto.Signature) (crypto.Signature, error)
}
AggregatingSigner is a signer that can sign a simple message and aggregate multiple signatures into a single aggregated signature.
type AggregatingVerifier ¶
type AggregatingVerifier interface {
    Verifier
    VerifyMany(msg []byte, sig crypto.Signature, keys []crypto.PublicKey) (bool, error)
}
AggregatingVerifier can verify a message against a signature from either a single key or many keys.
type BlockRequester ¶
type BlockRequester interface {
    // RequestBlock indicates that the given block should be queued for retrieval.
    RequestBlock(blockID flow.Identifier)

    // RequestHeight indicates that the given block height should be queued for retrieval.
    RequestHeight(height uint64)

    // Prune manually prunes requests below the given finalized header.
    Prune(final *flow.Header)
}
BlockRequester enables components to request particular blocks by ID from the synchronization system.
type Builder ¶
type Builder interface {
    // BuildOn generates a new payload that is valid with respect to the parent
    // being built upon, with the view being provided by the consensus algorithm.
    // The builder stores the block and validates it against the protocol state
    // before returning it.
    //
    // NOTE: Since the block is stored within Builder, HotStuff MUST propose the
    // block once BuildOn successfully returns.
    BuildOn(parentID flow.Identifier, setter func(*flow.Header) error) (*flow.Header, error)
}
Builder represents an abstracted block construction module that can be used in more than one consensus algorithm. The resulting block is consistent within itself and can be wrapped with additional consensus information such as QCs.
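A minimal sketch of a call site, assuming the flow-go import paths shown in the comments; the view and proposer values are illustrative placeholders supplied by the consensus logic:

package example

import (
    "fmt"

    "github.com/onflow/flow-go/model/flow" // import paths assumed for illustration
    "github.com/onflow/flow-go/module"
)

// proposeNext sketches how a consensus component might call BuildOn: the
// setter stamps consensus-controlled fields onto the candidate header.
func proposeNext(builder module.Builder, parentID flow.Identifier, view uint64, proposer flow.Identifier) (*flow.Header, error) {
    header, err := builder.BuildOn(parentID, func(h *flow.Header) error {
        h.View = view
        h.ProposerID = proposer
        return nil
    })
    if err != nil {
        return nil, fmt.Errorf("could not build on parent %x: %w", parentID, err)
    }
    // The builder has already stored and validated the block, so HotStuff
    // must now propose the returned header.
    return header, nil
}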
type CacheMetrics ¶
type ChunkAssigner ¶
type ChunkAssigner interface {
    // Assign generates the assignment
    Assign(result *flow.ExecutionResult, blockID flow.Identifier) (*chmodels.Assignment, error)
}
ChunkAssigner presents an interface for assigning chunks to the verifier nodes.
type ChunkVerifier ¶
type ChunkVerifier interface {
    // Verify verifies the given VerifiableChunk by executing it and checking the final state commitment.
    // It returns a SPoCK secret as a byte array, a verification fault of the chunk, and an error.
    // Note: Verify should only be executed on non-system chunks. It returns an error if it is invoked on
    // a system chunk.
    // TODO return challenges plus errors
    Verify(ch *verification.VerifiableChunkData) ([]byte, chmodels.ChunkFault, error)

    // SystemChunkVerify verifies a given VerifiableChunk corresponding to a system chunk
    // by executing it and checking the final state commitment.
    // It returns a SPoCK secret as a byte array, a verification fault of the chunk, and an error.
    // Note: SystemChunkVerify should only be executed on system chunks. It returns an error if it is invoked on
    // non-system chunks.
    SystemChunkVerify(ch *verification.VerifiableChunkData) ([]byte, chmodels.ChunkFault, error)
}
ChunkVerifier provides functionality to verify chunks.
type CleanerMetrics ¶
type ClusterRootQCVoter ¶
type ClusterRootQCVoter interface {
    // Vote handles the full procedure of generating a vote, submitting it to the
    // epoch smart contract, and verifying submission. Returns an error only if
    // there is a critical error that would make it impossible for the vote to be
    // submitted. Otherwise, exits when the vote has been successfully submitted.
    //
    // It is safe to run Vote multiple times within a single setup phase.
    Vote(context.Context, protocol.Epoch) error
}
ClusterRootQCVoter is responsible for submitting a vote to the cluster QC contract to coordinate generation of a valid root quorum certificate for the next epoch.
type ColdStuff ¶
ColdStuff is the interface for accepting proposals, votes, and commits to the ColdStuff consensus algorithm.
NOTE: It re-uses as much of the HotStuff interface and models as possible to simplify swapping between the two.
type CollectionMetrics ¶
type CollectionMetrics interface {
    // TransactionIngested is called when a new transaction is ingested by the
    // node. It increments the total count of ingested transactions and starts
    // a tx->col span for the transaction.
    TransactionIngested(txID flow.Identifier)

    // ClusterBlockProposed is called when a new collection is proposed by us or
    // any other node in the cluster.
    ClusterBlockProposed(block *cluster.Block)

    // ClusterBlockFinalized is called when a collection is finalized.
    ClusterBlockFinalized(block *cluster.Block)
}
type ComplianceMetrics ¶
type ConsensusMetrics ¶
type ConsensusMetrics interface {
    // StartCollectionToFinalized reports Metrics C1: Collection Received by CCL → Collection Included in Finalized Block
    StartCollectionToFinalized(collectionID flow.Identifier)

    // FinishCollectionToFinalized reports Metrics C1: Collection Received by CCL → Collection Included in Finalized Block
    FinishCollectionToFinalized(collectionID flow.Identifier)

    // StartBlockToSeal reports Metrics C4: Block Received by CCL → Block Seal in finalized block
    StartBlockToSeal(blockID flow.Identifier)

    // FinishBlockToSeal reports Metrics C4: Block Received by CCL → Block Seal in finalized block
    FinishBlockToSeal(blockID flow.Identifier)

    // EmergencySeal increments the number of seals that were created in emergency mode
    EmergencySeal()

    // OnReceiptProcessingDuration records the number of seconds spent processing a receipt
    OnReceiptProcessingDuration(duration time.Duration)

    // OnApprovalProcessingDuration records the number of seconds spent processing an approval
    OnApprovalProcessingDuration(duration time.Duration)

    // CheckSealingDuration records absolute time for the full sealing check by the consensus match engine
    CheckSealingDuration(duration time.Duration)
}
type Engine ¶
type Engine interface {
    ReadyDoneAware
    network.Engine
}
Engine is the interface all engines should implement in order to have a manageable lifecycle and receive messages from the networking layer.
type EngineMetrics ¶
type ExecutionMetrics ¶
type ExecutionMetrics interface {
    LedgerMetrics
    RuntimeMetrics
    ProviderMetrics

    // StartBlockReceivedToExecuted starts a span to trace the duration of a block
    // from being received for execution to execution being finished
    StartBlockReceivedToExecuted(blockID flow.Identifier)

    // FinishBlockReceivedToExecuted finishes a span to trace the duration of a block
    // from being received for execution to execution being finished
    FinishBlockReceivedToExecuted(blockID flow.Identifier)

    // ExecutionGasUsedPerBlock reports gas used per block
    ExecutionGasUsedPerBlock(gas uint64)

    // ExecutionStateReadsPerBlock reports number of state access/read operations per block
    ExecutionStateReadsPerBlock(reads uint64)

    // ExecutionStorageStateCommitment reports the storage size of a state commitment in bytes
    ExecutionStorageStateCommitment(bytes int64)

    // ExecutionLastExecutedBlockHeight reports last executed block height
    ExecutionLastExecutedBlockHeight(height uint64)

    // ExecutionTotalExecutedTransactions adds num to the total number of executed transactions
    ExecutionTotalExecutedTransactions(numExecuted int)

    // ExecutionCollectionRequestSent reports when a request for a collection is sent to a collection node
    ExecutionCollectionRequestSent()

    // Unused
    ExecutionCollectionRequestRetried()

    // ExecutionSync reports when the state syncing is triggered or stopped.
    ExecutionSync(syncing bool)
}
type Finalizer ¶
type Finalizer interface {
    // MakeValid will mark a block as having passed the consensus algorithm's
    // internal validation.
    MakeValid(blockID flow.Identifier) error

    // MakeFinal will declare a block and all of its ancestors as finalized, which
    // makes it an immutable part of the blockchain. Returning an error indicates
    // some fatal condition and will cause the finalization logic to terminate.
    MakeFinal(blockID flow.Identifier) error
}
Finalizer is used by the consensus algorithm to inform other components (such as the protocol state) about the validity of block headers and the finalization of blocks.
Since we have two different protocol states (one for the main consensus, the other for the collection cluster consensus), the Finalizer interface allows the two protocol states to provide different implementations for updating their state when a block has been validated or finalized.
Why do MakeValid and MakeFinal need to return an error? Updating the protocol state should always succeed when the data is consistent. However, in case the protocol state is corrupted, an error should be returned and the consensus algorithm should halt. So the error returned from MakeValid and MakeFinal allows the protocol state to report exceptions.
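A minimal sketch of that error contract at a call site, assuming the flow-go import paths shown below; an error from MakeFinal is treated as fatal rather than retried:

package example

import (
    "log"

    "github.com/onflow/flow-go/model/flow" // import paths assumed for illustration
    "github.com/onflow/flow-go/module"
)

// finalize halts the node if MakeFinal reports an error, since such an error
// signals an inconsistent or corrupted protocol state.
func finalize(finalizer module.Finalizer, blockID flow.Identifier) {
    if err := finalizer.MakeFinal(blockID); err != nil {
        log.Fatalf("protocol state failure while finalizing block %x: %v", blockID, err)
    }
}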
type HotStuff ¶
type HotStuff interface {
    ReadyDoneAware

    // SubmitProposal submits a new block proposal to the HotStuff event loop.
    // This method blocks until the proposal is accepted to the event queue.
    //
    // Block proposals must be submitted in order and only if they extend a
    // block already known to HotStuff core.
    SubmitProposal(proposal *flow.Header, parentView uint64)

    // SubmitVote submits a new vote to the HotStuff event loop.
    // This method blocks until the vote is accepted to the event queue.
    //
    // Votes may be submitted in any order.
    SubmitVote(originID flow.Identifier, blockID flow.Identifier, view uint64, sigData []byte)
}
HotStuff defines the interface to the core HotStuff algorithm. It includes a method to start the event loop, and utilities to submit block proposals and votes received from other replicas.
type HotStuffFollower ¶
type HotStuffFollower interface {
    ReadyDoneAware

    // SubmitProposal feeds a new block proposal into the HotStuffFollower.
    // This method blocks until the proposal is accepted to the event queue.
    //
    // Block proposals must be submitted in order, i.e. a proposal's parent must
    // have been previously processed by the HotStuffFollower.
    SubmitProposal(proposal *flow.Header, parentView uint64)
}
HotStuffFollower is run by non-consensus nodes to observe the blockchain and make local determinations about block finalization. While the process of reaching consensus (while guaranteeing its safety and liveness) is very intricate, the criteria to confirm that consensus has been reached are relatively straightforward. Each non-consensus node can simply observe the blockchain and determine locally which blocks have been finalized, without requiring additional information from the consensus nodes.
Specifically, the HotStuffFollower informs other components within the node about the finalization of blocks. It consumes block proposals broadcast by the consensus nodes, verifies the block headers, and locally evaluates the finalization rules.
Notes:
- HotStuffFollower does not handle disconnected blocks. Each block's parent must have been previously processed by the HotStuffFollower.
- HotStuffFollower internally prunes blocks below the last finalized view. When receiving a block proposal, it might not have the proposal's parent anymore. Nevertheless, HotStuffFollower needs the parent's view, which must be supplied in addition to the proposal.
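A short sketch of the second note above, assuming the flow-go import paths shown in the comments; the parent's view is supplied explicitly because the follower may have pruned the parent:

package example

import (
    "github.com/onflow/flow-go/model/flow" // import paths assumed for illustration
    "github.com/onflow/flow-go/module"
)

// relayProposal forwards a verified proposal to the follower, passing the
// parent's view alongside it because the follower may no longer hold the
// parent below the last finalized view.
func relayProposal(follower module.HotStuffFollower, proposal *flow.Header, parent *flow.Header) {
    // Blocks until the proposal has been accepted into the event queue.
    follower.SubmitProposal(proposal, parent.View)
}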
type HotstuffMetrics ¶
type HotstuffMetrics interface {
    // HotStuffBusyDuration reports Metrics C6 HotStuff Busy Duration
    HotStuffBusyDuration(duration time.Duration, event string)

    // HotStuffIdleDuration reports Metrics C6 HotStuff Idle Duration
    HotStuffIdleDuration(duration time.Duration)

    // HotStuffWaitDuration reports Metrics C6 HotStuff Wait Duration
    HotStuffWaitDuration(duration time.Duration, event string)

    // SetCurView reports Metrics C8: Current View
    SetCurView(view uint64)

    // SetQCView reports Metrics C9: View of Newest Known QC
    SetQCView(view uint64)

    // CountSkipped reports the number of times we skipped ahead.
    CountSkipped()

    // CountTimeout reports the number of times we timed out.
    CountTimeout()

    // SetTimeout sets the current timeout duration
    SetTimeout(duration time.Duration)

    // CommitteeProcessingDuration measures the time which the HotStuff's core logic
    // spends in the hotstuff.Committee component, i.e. the time determining consensus
    // committee relations.
    CommitteeProcessingDuration(duration time.Duration)

    // SignerProcessingDuration measures the time which the HotStuff's core logic
    // spends in the hotstuff.Signer component, i.e. the time spent on crypto-related operations.
    SignerProcessingDuration(duration time.Duration)

    // ValidatorProcessingDuration measures the time which the HotStuff's core logic
    // spends in the hotstuff.Validator component, i.e. the time spent verifying
    // consensus messages.
    ValidatorProcessingDuration(duration time.Duration)

    // PayloadProductionDuration measures the time which the HotStuff's core logic
    // spends in the module.Builder component, i.e. the time spent generating block payloads.
    PayloadProductionDuration(duration time.Duration)
}
type LedgerMetrics ¶
type LedgerMetrics interface {
    // ForestApproxMemorySize records approximate memory usage of the forest (all in-memory trees)
    ForestApproxMemorySize(bytes uint64)

    // ForestNumberOfTrees records the current number of trees in the forest (in memory)
    ForestNumberOfTrees(number uint64)

    // LatestTrieRegCount records the number of unique registers allocated (in the latest created trie)
    LatestTrieRegCount(number uint64)

    // LatestTrieRegCountDiff records the difference between the number of unique registers allocated in the latest created trie and its parent trie
    LatestTrieRegCountDiff(number uint64)

    // LatestTrieMaxDepth records the maximum depth of the latest created trie
    LatestTrieMaxDepth(number uint64)

    // LatestTrieMaxDepthDiff records the difference between the max depth of the latest created trie and its parent trie
    LatestTrieMaxDepthDiff(number uint64)

    // UpdateCount increases a counter of performed updates
    UpdateCount()

    // ProofSize records a proof size
    ProofSize(bytes uint32)

    // UpdateValuesNumber accumulates the number of updated values
    UpdateValuesNumber(number uint64)

    // UpdateValuesSize records the total size (in bytes) of updated values
    UpdateValuesSize(byte uint64)

    // UpdateDuration records absolute time for the update of a trie
    UpdateDuration(duration time.Duration)

    // UpdateDurationPerItem records update time for a single value (total duration / number of updated values)
    UpdateDurationPerItem(duration time.Duration)

    // ReadValuesNumber accumulates the number of read values
    ReadValuesNumber(number uint64)

    // ReadValuesSize records the total size (in bytes) of read values
    ReadValuesSize(byte uint64)

    // ReadDuration records absolute time for the read from a trie
    ReadDuration(duration time.Duration)

    // ReadDurationPerItem records read time for a single value (total duration / number of read values)
    ReadDurationPerItem(duration time.Duration)

    // DiskSize records the amount of disk space used by the storage (in bytes)
    DiskSize(uint64)
}
LedgerMetrics provides an interface to record Ledger Storage metrics. Ledger storage is non-linear (fork-aware), so certain metrics are averaged and computed before being emitted, for better visibility.
type Local ¶
type Local interface {
    // NodeID returns the node ID of the local node.
    NodeID() flow.Identifier

    // Address returns the (listen) address of the local node.
    Address() string

    // Sign provides a signature oracle that, given a message and a hasher,
    // generates and returns a signature over the message using the node's
    // private key and the input hasher.
    Sign([]byte, hash.Hasher) (crypto.Signature, error)

    // NotMeFilter returns a handy not-me filter for searching identities.
    NotMeFilter() flow.IdentityFilter

    // SignFunc provides a signature oracle that, given a message, a hasher, and a signing function,
    // generates and returns a signature over the message by invoking the given signing function
    // with the node's private key and the input hasher. The overall idea of this function
    // is to not expose the private key to the caller.
    SignFunc([]byte, hash.Hasher, func(crypto.PrivateKey, []byte, hash.Hasher) (crypto.Signature, error)) (crypto.Signature, error)
}
Local encapsulates the stable local node information.
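A minimal sketch of signing with the node's key, assuming the flow-go import paths shown below; the SHA3-256 hasher is an illustrative choice, real call sites use the hasher mandated by the protocol for the message in question:

package example

import (
    "github.com/onflow/flow-go/crypto"      // import paths assumed for illustration
    "github.com/onflow/flow-go/crypto/hash"
    "github.com/onflow/flow-go/module"
)

// signMessage signs an arbitrary message with the local node's private key,
// without the caller ever touching the key itself.
func signMessage(me module.Local, msg []byte) (crypto.Signature, error) {
    return me.Sign(msg, hash.NewSHA3_256())
}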
type MempoolMetrics ¶
type Merger ¶
type Merger interface {
    Join(sigs ...crypto.Signature) ([]byte, error)
    Split(combined []byte) ([]crypto.Signature, error)
}
Merger is responsible for combining multiple signatures into a single byte slice and splitting it back, and it must do so in a cryptographically unaware way (agnostic of the byte structure of the signatures).
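A hypothetical implementation sketch (not the one used in the codebase), assuming the flow-go crypto import path shown below; it length-prefixes each signature, which keeps it agnostic of the signatures' internal structure:

package example

import (
    "encoding/binary"
    "fmt"

    "github.com/onflow/flow-go/crypto" // import paths assumed for illustration
    "github.com/onflow/flow-go/module"
)

// prefixMerger joins signatures by prepending a 4-byte big-endian length to each.
type prefixMerger struct{}

var _ module.Merger = (*prefixMerger)(nil)

// Join concatenates the signatures, each preceded by its length prefix.
func (m *prefixMerger) Join(sigs ...crypto.Signature) ([]byte, error) {
    var out []byte
    for _, sig := range sigs {
        length := make([]byte, 4)
        binary.BigEndian.PutUint32(length, uint32(len(sig)))
        out = append(out, length...)
        out = append(out, sig...)
    }
    return out, nil
}

// Split reverses Join by reading each length prefix and slicing out the signature.
func (m *prefixMerger) Split(combined []byte) ([]crypto.Signature, error) {
    var sigs []crypto.Signature
    for len(combined) > 0 {
        if len(combined) < 4 {
            return nil, fmt.Errorf("truncated length prefix")
        }
        size := binary.BigEndian.Uint32(combined[:4])
        combined = combined[4:]
        if uint64(len(combined)) < uint64(size) {
            return nil, fmt.Errorf("truncated signature")
        }
        sigs = append(sigs, crypto.Signature(combined[:size]))
        combined = combined[size:]
    }
    return sigs, nil
}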
type Network ¶
type Network interface {
    // Register will subscribe to the channel with the given engine and
    // the engine will be notified with incoming messages on the channel.
    // The returned Conduit can be used to send messages to engines on other nodes subscribed to the same channel.
    // On a single node, only one engine can be subscribed to a channel at any given time.
    Register(channel network.Channel, engine network.Engine) (network.Conduit, error)
}
Network represents the network layer of the node. It allows processes that work across the peer-to-peer network to register themselves as an engine with a unique engine ID. The returned conduit allows the process to communicate to the same engine on other nodes across the network in a network-agnostic way.
type NetworkMetrics ¶
type NetworkMetrics interface {
    // NetworkMessageSent records the size in bytes and count of network messages sent
    NetworkMessageSent(sizeBytes int, topic string, messageType string)

    // NetworkMessageReceived records the size in bytes and count of network messages received
    NetworkMessageReceived(sizeBytes int, topic string, messageType string)

    // NetworkDuplicateMessagesDropped counts the number of messages dropped due to duplicate detection
    NetworkDuplicateMessagesDropped(topic string, messageType string)

    // Message receive queue metrics
    //
    // MessageAdded increments the metric tracking the number of messages in the queue with the given priority
    MessageAdded(priority int)

    // MessageRemoved decrements the metric tracking the number of messages in the queue with the given priority
    MessageRemoved(priority int)

    // QueueDuration tracks the time spent by a message with the given priority in the queue
    QueueDuration(duration time.Duration, priority int)

    // InboundProcessDuration tracks the time a queue worker is blocked by an engine while processing an incoming message on the specified topic (i.e., channel).
    InboundProcessDuration(topic string, duration time.Duration)

    // OutboundConnections updates the metric tracking the number of outbound connections of this node
    OutboundConnections(connectionCount uint)

    // InboundConnections updates the metric tracking the number of inbound connections of this node
    InboundConnections(connectionCount uint)
}
NetworkMetrics captures metrics for the networking layer.
type PendingBlockBuffer ¶
type PendingBlockBuffer interface {
    Add(originID flow.Identifier, proposal *messages.BlockProposal) bool
    ByID(blockID flow.Identifier) (*flow.PendingBlock, bool)
    ByParentID(parentID flow.Identifier) ([]*flow.PendingBlock, bool)
    DropForParent(parentID flow.Identifier)
    PruneByHeight(height uint64)
    Size() uint
}
PendingBlockBuffer defines an interface for a cache of pending blocks that cannot yet be processed because they do not connect to the rest of the chain state. They are indexed by parent ID to enable processing all of a parent's children once the parent is received.
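A usage sketch of that parent-indexed lookup, assuming the flow-go import paths shown below; the processing callback stands in for whatever block handling the caller performs:

package example

import (
    "github.com/onflow/flow-go/model/flow" // import paths assumed for illustration
    "github.com/onflow/flow-go/module"
)

// processChildren drains any proposals that were buffered while waiting for
// the given parent, hands them to the supplied callback, and then removes
// them from the cache.
func processChildren(buffer module.PendingBlockBuffer, parentID flow.Identifier, process func(*flow.PendingBlock)) {
    children, found := buffer.ByParentID(parentID)
    if !found {
        return // no children were waiting on this parent
    }
    for _, child := range children {
        process(child)
    }
    buffer.DropForParent(parentID)
}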
type PendingClusterBlockBuffer ¶
type PendingClusterBlockBuffer interface {
    Add(originID flow.Identifier, proposal *messages.ClusterBlockProposal) bool
    ByID(blockID flow.Identifier) (*cluster.PendingBlock, bool)
    ByParentID(parentID flow.Identifier) ([]*cluster.PendingBlock, bool)
    DropForParent(parentID flow.Identifier)
    PruneByHeight(height uint64)
    Size() uint
}
PendingClusterBlockBuffer is the same thing as PendingBlockBuffer, but for collection node cluster consensus.
type PingMetrics ¶
type ProviderMetrics ¶
type ProviderMetrics interface {
    // ChunkDataPackRequested is executed every time a chunk data pack request arrives at the execution node.
    // It increases the request counter by one.
    ChunkDataPackRequested()
}
type QCContractClient ¶
type QCContractClient interface {
    // SubmitVote submits the given vote to the cluster QC aggregator smart
    // contract. This function returns only once the transaction has been
    // processed by the network. An error is returned if the transaction has
    // failed and should be re-submitted.
    SubmitVote(ctx context.Context, vote *model.Vote) error

    // Voted returns true if we have successfully submitted a vote to the
    // cluster QC aggregator smart contract for the current epoch.
    Voted(ctx context.Context) (bool, error)
}
QCContractClient enables interacting with the cluster QC aggregator smart contract. This contract is deployed to the service account as part of a collection of smart contracts that facilitate and manage epoch transitions.
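A sketch of the check-then-vote pattern these two methods suggest, assuming the flow-go import paths shown below (including the hotstuff model package for the vote type):

package example

import (
    "context"
    "fmt"

    "github.com/onflow/flow-go/consensus/hotstuff/model" // import paths assumed for illustration
    "github.com/onflow/flow-go/module"
)

// ensureVoted submits the vote only if no vote has been registered yet for
// the current epoch.
func ensureVoted(ctx context.Context, client module.QCContractClient, vote *model.Vote) error {
    voted, err := client.Voted(ctx)
    if err != nil {
        return fmt.Errorf("could not check vote status: %w", err)
    }
    if voted {
        return nil // our vote is already registered, nothing to do
    }
    // SubmitVote returns once the transaction has been processed; an error
    // means the vote should be re-submitted.
    return client.SubmitVote(ctx, vote)
}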
type ReadyDoneAware ¶
type ReadyDoneAware interface {
    Ready() <-chan struct{}
    Done() <-chan struct{}
}
ReadyDoneAware provides an easy interface to wait for module startup and shutdown.
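A minimal sketch of a component satisfying the interface; `noopComponent` is a hypothetical name, and since it has nothing to start or stop, both channels are closed up front:

package example

// noopComponent is a minimal ReadyDoneAware implementation: both channels
// are closed immediately because there is nothing to start up or tear down.
type noopComponent struct {
    ready chan struct{}
    done  chan struct{}
}

func newNoopComponent() *noopComponent {
    c := &noopComponent{
        ready: make(chan struct{}),
        done:  make(chan struct{}),
    }
    close(c.ready)
    close(c.done)
    return c
}

func (c *noopComponent) Ready() <-chan struct{} { return c.ready }
func (c *noopComponent) Done() <-chan struct{}  { return c.done }

Callers typically block on `<-component.Ready()` during node startup and on `<-component.Done()` during shutdown.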
type ReceiptValidator ¶ added in v0.14.0
type ReceiptValidator interface {
    Validate(receipts []*flow.ExecutionReceipt) error
}
ReceiptValidator is an interface used for validating receipts with respect to the current protocol state. Returns the following errors:

- nil - in case of success
- sentinel engine.InvalidInputError when a receipt is not valid due to conflicting protocol state; this can happen when an invalid receipt has been passed or there is not enough data to validate the state
- an exception in case of any other error; usually this is not expected
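A sketch of handling that error contract at a call site, assuming the flow-go import paths shown below and that the engine package exposes an IsInvalidInputError helper for the sentinel mentioned above:

package example

import (
    "fmt"

    "github.com/onflow/flow-go/engine"     // import paths assumed for illustration
    "github.com/onflow/flow-go/model/flow"
    "github.com/onflow/flow-go/module"
)

// handleReceipts distinguishes invalid receipts (expected, non-fatal) from
// unexpected failures (treated as exceptions by the caller).
func handleReceipts(validator module.ReceiptValidator, receipts []*flow.ExecutionReceipt) error {
    err := validator.Validate(receipts)
    switch {
    case err == nil:
        return nil // receipts are valid; continue processing
    case engine.IsInvalidInputError(err):
        return nil // expected case: drop the invalid receipts and move on
    default:
        return fmt.Errorf("unexpected failure while validating receipts: %w", err)
    }
}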
type Requester ¶
type Requester interface {
    // EntityByID will request an entity through the request engine backing
    // the interface. The additional selector will be applied to the subset
    // of valid providers for the entity and allows finer-grained control
    // over which providers to request a given entity from. Use `filter.Any`
    // if no additional restrictions are required.
    EntityByID(entityID flow.Identifier, selector flow.IdentityFilter)

    // Force will force the dispatcher to send all possible batches immediately.
    // It can be used in cases where responsiveness is of utmost importance, at
    // the cost of additional network messages.
    Force()
}
type RuntimeMetrics ¶
type RuntimeMetrics interface {
    // TransactionParsed reports the time spent parsing a single transaction
    TransactionParsed(dur time.Duration)

    // TransactionChecked reports the time spent checking a single transaction
    TransactionChecked(dur time.Duration)

    // TransactionInterpreted reports the time spent interpreting a single transaction
    TransactionInterpreted(dur time.Duration)
}
type SealValidator ¶ added in v0.14.0
SealValidator checks seals with respect to the current protocol state. It accepts a `candidate` block whose seals need to be verified for protocol state validity. Returns the following values:

- the last seal in the `candidate` block - in case of success
- engine.InvalidInputError - in case the `candidate` block violates the protocol state
- an exception in case of any other error; usually this is not expected

PREREQUISITE: The SealValidator can only process blocks which are attached to the main chain (without any missing ancestors). This is the case because:
- the seal validator walks the chain backwards and requires the relevant ancestors to be known and validated
- storage.Seals only holds seals for blocks that are attached to the main chain.
type Signer ¶
Signer is a simple cryptographic signer that can sign a simple message to generate a signature, and verify the signature against the message.
type SyncCore ¶
type SyncCore interface {
    // HandleBlock handles receiving a new block. It returns true if the block
    // should be passed along to the rest of the system for processing, or false
    // if it should be discarded.
    HandleBlock(header *flow.Header) bool

    // HandleHeight handles receiving a new highest finalized height from another node.
    HandleHeight(final *flow.Header, height uint64)

    // ScanPending scans all pending block statuses for blocks that should be
    // requested. It apportions requestable items into range and batch requests
    // according to configured maximums, giving precedence to range requests.
    ScanPending(final *flow.Header) ([]flow.Range, []flow.Batch)

    // WithinTolerance returns whether or not the given height is within the configured
    // height tolerance, with respect to the given local finalized header.
    WithinTolerance(final *flow.Header, height uint64) bool

    // RangeRequested updates sync state after a range is requested.
    RangeRequested(ran flow.Range)

    // BatchRequested updates sync state after a batch is requested.
    BatchRequested(batch flow.Batch)
}
SyncCore represents state management for chain state synchronization.
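A sketch of a periodic sync tick built on ScanPending, assuming the flow-go import paths shown below; the request callbacks are placeholders for whatever network layer the caller uses to send requests:

package example

import (
    "github.com/onflow/flow-go/model/flow" // import paths assumed for illustration
    "github.com/onflow/flow-go/module"
)

// syncTick turns pending block statuses into outgoing requests and records
// each request with the sync core so it is not re-issued prematurely.
func syncTick(
    core module.SyncCore,
    final *flow.Header,
    requestRange func(flow.Range),
    requestBatch func(flow.Batch),
) {
    ranges, batches := core.ScanPending(final)
    for _, r := range ranges {
        requestRange(r)
        core.RangeRequested(r) // record that the range is now in flight
    }
    for _, b := range batches {
        requestBatch(b)
        core.BatchRequested(b) // record that the batch is now in flight
    }
}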
type ThresholdSigner ¶
type ThresholdSigner interface {
    ThresholdVerifier
    Sign(msg []byte) (crypto.Signature, error)
    Combine(size uint, shares []crypto.Signature, indices []uint) (crypto.Signature, error)
}
ThresholdSigner is a signer that can sign a message to generate a signature share and construct a threshold signature from the given shares.
type ThresholdVerifier ¶
type ThresholdVerifier interface {
    Verifier
    VerifyThreshold(msg []byte, sig crypto.Signature, key crypto.PublicKey) (bool, error)
}
ThresholdVerifier can verify a message against a signature share from a single key or a threshold signature against many keys.
type Tracer ¶
type Tracer interface {
    ReadyDoneAware

    StartSpan(entity flow.Identifier, spanName trace.SpanName, opts ...opentracing.StartSpanOption) opentracing.Span

    FinishSpan(entity flow.Identifier, spanName trace.SpanName)

    GetSpan(entity flow.Identifier, spanName trace.SpanName) (opentracing.Span, bool)

    StartSpanFromContext(
        ctx context.Context,
        operationName trace.SpanName,
        opts ...opentracing.StartSpanOption,
    ) (opentracing.Span, context.Context)

    StartSpanFromParent(
        span opentracing.Span,
        operationName trace.SpanName,
        opts ...opentracing.StartSpanOption,
    ) opentracing.Span

    // RecordSpanFromParent records a span at finish time;
    // the start time is computed by subtracting the duration from time.Now().
    RecordSpanFromParent(
        span opentracing.Span,
        operationName trace.SpanName,
        duration time.Duration,
        logs []opentracing.LogRecord,
        opts ...opentracing.StartSpanOption,
    )
}
Tracer is the interface for tracers in Flow. It uses OpenTracing span definitions.
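A sketch of wrapping per-block work in a span, assuming the flow-go import paths shown below and that trace.SpanName is a string-based type, as the signatures above suggest; the span name "example-work" is a placeholder, not a span defined by the trace package:

package example

import (
    "github.com/onflow/flow-go/model/flow"   // import paths assumed for illustration
    "github.com/onflow/flow-go/module"
    "github.com/onflow/flow-go/module/trace"
)

// tracedWork opens a span keyed by the block ID, runs the work, and finishes
// the span when the work returns.
func tracedWork(tracer module.Tracer, blockID flow.Identifier, work func()) {
    span := tracer.StartSpan(blockID, trace.SpanName("example-work"))
    defer tracer.FinishSpan(blockID, trace.SpanName("example-work"))
    _ = span // tags or logs could be attached to the span here
    work()
}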
type TransactionMetrics ¶
type TransactionMetrics interface {
    // TransactionReceived starts tracking of transaction execution/finalization/sealing
    TransactionReceived(txID flow.Identifier, when time.Time)

    // TransactionFinalized reports the time spent between the transaction being received and finalized. Reporting only
    // works if the transaction was earlier added as received.
    TransactionFinalized(txID flow.Identifier, when time.Time)

    // TransactionExecuted reports the time spent between the transaction being received and executed. Reporting only
    // works if the transaction was earlier added as received.
    TransactionExecuted(txID flow.Identifier, when time.Time)

    // TransactionExpired tracks the number of expired transactions
    TransactionExpired(txID flow.Identifier)

    // TransactionSubmissionFailed should be called whenever we try to submit a transaction and it fails
    TransactionSubmissionFailed()
}
type VerificationMetrics ¶
type VerificationMetrics interface {
    // Finder Engine
    //
    // OnExecutionReceiptReceived is called whenever a new execution receipt arrives
    // at the Finder engine. It increments the total number of received receipts.
    OnExecutionReceiptReceived()

    // OnExecutionResultSent is called whenever a new execution result is sent by
    // the Finder engine to the Match engine. It increments the total number of sent execution results.
    OnExecutionResultSent()

    // Match Engine
    //
    // OnExecutionResultReceived is called whenever a new execution result is successfully received
    // by the Match engine from the Finder engine.
    // It increments the total number of received execution results.
    OnExecutionResultReceived()

    // OnVerifiableChunkSent is called on a successful submission of a matched chunk
    // by the Match engine to the Verifier engine.
    // It increments the total number of chunks matched by the Match engine.
    OnVerifiableChunkSent()

    // OnChunkDataPackReceived is called on receiving a chunk data pack by the Match engine.
    // It increments the total number of chunk data packs received.
    OnChunkDataPackReceived()

    // OnChunkDataPackRequested is called on requesting a chunk data pack by the Match engine.
    // It increments the total number of chunk data packs requested.
    OnChunkDataPackRequested()

    // Verifier Engine
    //
    // OnVerifiableChunkReceived is called whenever a verifiable chunk is received by the Verifier engine
    // from the Match engine. It increments the total number of received verifiable chunks.
    OnVerifiableChunkReceived()

    // OnResultApproval is called whenever a result approval is emitted to consensus nodes.
    // It increases the total number of result approvals.
    OnResultApproval()

    // LogVerifiableChunkSize is called whenever a verifiable chunk is shaped for a specific
    // chunk. It adds the size of the verifiable chunk to the histogram. A verifiable chunk is assumed
    // to capture all the resources needed to verify a chunk.
    // The purpose of this function is to track the overall chunk resources size on disk.
    // Todo wire this up to do monitoring (3183)
    LogVerifiableChunkSize(size float64)
}
Source Files ¶
Directories ¶
Path | Synopsis
---|---
builder |
finalizer |
ingress | Package ingress implements accepting transactions into the system.
stdmap | (c) 2019 Dapper Labs - ALL RIGHTS RESERVED
mocks | Package mocks is a generated GoMock package.