Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
var (
ErrDisconnectedBatch = errors.New("batch must be a sequence of connected blocks")
)
Functions ¶
This section is empty.
Types ¶
type BlocksByID ¶
type BlocksByID map[flow.Identifier]*flow.Block
type Cache ¶
type Cache struct {
// contains filtered or unexported fields
}
Cache stores pending blocks received from other replicas, caches blocks by blockID, and maintains secondary indices to look up blocks by view or by parent ID. Additional indices are used to track proposal equivocation (multiple valid proposals for the same block) and to find blocks not only by parent but also by child. It resolves certified blocks when processing incoming batches. Concurrency safe.
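The indexing scheme described above can be sketched as follows. This is a minimal, self-contained illustration, not the real implementation: `Identifier`, `Block`, and `miniCache` are simplified stand-ins for the flow-go types, and the real Cache is backed by a HeroCache rather than plain maps.

```go
package main

import (
	"fmt"
	"sync"
)

// Identifier and Block are simplified stand-ins for flow.Identifier
// and flow.Block; the real types carry much more structure.
type Identifier string

type Block struct {
	ID       Identifier
	ParentID Identifier
	View     uint64
}

// miniCache mirrors the indexing idea: blocks keyed by ID, with
// secondary indices by view (to detect equivocation) and by parent ID
// (to find children). A mutex makes it concurrency safe.
type miniCache struct {
	mu       sync.Mutex
	byID     map[Identifier]*Block
	byView   map[uint64][]Identifier // >1 entry per view => equivocation
	byParent map[Identifier][]Identifier
}

func newMiniCache() *miniCache {
	return &miniCache{
		byID:     make(map[Identifier]*Block),
		byView:   make(map[uint64][]Identifier),
		byParent: make(map[Identifier][]Identifier),
	}
}

// add stores the block and reports whether a different proposal for the
// same view was already cached (i.e. potential equivocation).
func (c *miniCache) add(b *Block) (equivocation bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.byID[b.ID]; ok {
		return false // deduplicate repeated deliveries of the same block
	}
	equivocation = len(c.byView[b.View]) > 0
	c.byID[b.ID] = b
	c.byView[b.View] = append(c.byView[b.View], b.ID)
	c.byParent[b.ParentID] = append(c.byParent[b.ParentID], b.ID)
	return equivocation
}

// childrenOf returns the cached children of the given parent ID.
func (c *miniCache) childrenOf(parent Identifier) []Identifier {
	c.mu.Lock()
	defer c.mu.Unlock()
	return append([]Identifier(nil), c.byParent[parent]...)
}

func main() {
	c := newMiniCache()
	c.add(&Block{ID: "A", ParentID: "G", View: 1})
	// A second proposal for view 1 is flagged as equivocation.
	fmt.Println(c.add(&Block{ID: "A2", ParentID: "G", View: 1}))
	fmt.Println(c.childrenOf("G"))
}
```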
func NewCache ¶
func NewCache(log zerolog.Logger, limit uint32, collector module.HeroCacheMetrics, onEquivocation OnEquivocation) *Cache
NewCache creates a new instance of Cache.
func (*Cache) AddBlocks ¶
func (c *Cache) AddBlocks(batch []*flow.Block) (certifiedBatch []*flow.Block, certifyingQC *flow.QuorumCertificate, err error)
AddBlocks atomically adds the given batch of blocks to the cache. We require that the incoming batch is sorted in ascending height order and contains no skipped blocks; otherwise the cache returns an `ErrDisconnectedBatch` error. When receiving a batch [first, ..., last], we are only interested in the first and last blocks. All blocks before `last` are certified by construction (by the QC included in `last`). The following two cases are possible:
- for the first block:
  - no parent is available for the first block;
  - the parent of the first block is available in the cache, allowing us to certify it; we can certify one extra block (the parent).
- for the last block:
  - no child is available for the last block; we need to wait for a child to certify it;
  - a child of the last block is available in the cache, allowing us to certify it; we can certify one extra block (the child).
All blocks from the batch are stored in the cache to provide deduplication. The function returns any new certified chain of blocks created by the addition of the batch. It returns `certifiedBatch, certifyingQC` if the input batch has more than one block, and/or if either a child or parent of the batch is in the cache. The implementation correctly handles cases with `len(batch) == 1` or `len(batch) == 0`, returning `nil, nil` in the following cases:
- the input batch has exactly one block and neither its parent nor its child is in the cache;
- the input batch is empty.
If message equivocation was detected, it will be reported via a notification. Concurrency safe.
Expected errors during normal operations:
- ErrDisconnectedBatch
func (*Cache) Peek ¶
func (c *Cache) Peek(blockID flow.Identifier) *flow.Block
Peek performs a lookup of a cached block by blockID. Concurrency safe.
func (*Cache) PruneUpToView ¶
PruneUpToView sets the lowest view for which we are accepting blocks. Any blocks with a view _strictly smaller_ than the given threshold are removed from the cache. Concurrency safe.
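The "strictly smaller than the threshold" semantics can be shown with a minimal sketch. The map-based index here is a simplified assumption, not the Cache's actual internal layout.

```go
package main

import "fmt"

// pruneUpToView drops every entry whose view is strictly below the
// threshold; blocks at exactly the threshold view survive, matching
// the semantics described above. A map from view to block ID stands
// in for the cache's internal indices.
func pruneUpToView(byView map[uint64]string, threshold uint64) {
	for view := range byView {
		if view < threshold {
			delete(byView, view) // deleting during range is safe in Go
		}
	}
}

func main() {
	byView := map[uint64]string{10: "A", 11: "B", 12: "C"}
	pruneUpToView(byView, 11)
	fmt.Println(len(byView)) // view 10 removed; views 11 and 12 survive
}
```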
type HeroCacheDistributor ¶
type HeroCacheDistributor struct {
// contains filtered or unexported fields
}
HeroCacheDistributor implements herocache.Tracer and allows subscribers to receive events for entries ejected from the cache via the herocache.Tracer API. This structure is NOT concurrency safe.
func NewDistributor ¶
func NewDistributor() *HeroCacheDistributor
func (*HeroCacheDistributor) AddConsumer ¶
func (d *HeroCacheDistributor) AddConsumer(consumer OnEntityEjected)
AddConsumer adds a subscriber for entity-ejected events. NOT concurrency safe.
func (*HeroCacheDistributor) EntityEjectionDueToEmergency ¶
func (d *HeroCacheDistributor) EntityEjectionDueToEmergency(ejectedEntity flow.Entity)
func (*HeroCacheDistributor) EntityEjectionDueToFullCapacity ¶
func (d *HeroCacheDistributor) EntityEjectionDueToFullCapacity(ejectedEntity flow.Entity)
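The distributor's fan-out pattern can be sketched as follows. `Entity`, `OnEntityEjected`, and `distributor` here are simplified stand-ins for the flow-go types; the real distributor receives its events through the herocache.Tracer callbacks listed above.

```go
package main

import "fmt"

// Entity is a simplified stand-in for flow.Entity.
type Entity struct{ ID string }

// OnEntityEjected is the subscriber callback type.
type OnEntityEjected func(Entity)

// distributor sketches the fan-out pattern: consumers register once
// during setup (hence no locking, matching the "NOT concurrency safe"
// note above), and every ejection event is delivered to all of them.
type distributor struct {
	consumers []OnEntityEjected
}

// AddConsumer registers a subscriber for ejection events.
func (d *distributor) AddConsumer(c OnEntityEjected) {
	d.consumers = append(d.consumers, c)
}

// notify fans one event out to every registered consumer.
func (d *distributor) notify(e Entity) {
	for _, c := range d.consumers {
		c(e)
	}
}

func main() {
	d := &distributor{}
	var seen []string
	d.AddConsumer(func(e Entity) { seen = append(seen, e.ID) })
	d.AddConsumer(func(e Entity) { seen = append(seen, e.ID+"!") })
	d.notify(Entity{ID: "block-1"}) // both subscribers receive the event
	fmt.Println(seen)
}
```

Registering all consumers before the cache starts ejecting entries is what makes the unsynchronized `AddConsumer` acceptable in practice.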