Package iss

v0.3.2 (not the latest version of this module)
Published: Mar 24, 2023 License: Apache-2.0 Imports: 30 Imported by: 0

README

Insanely Scalable SMR (ISS)

ISS, published in March 2022, is the first modular algorithm to make leader-driven total order broadcast scale in a robust way. At its interface, ISS is a classic state machine replication (SMR) system that establishes a total order of client requests with typical liveness and safety properties, applicable to any replicated service, such as resilient databases or a blockchain ordering layer. It is a further development of, and successor to, the Mir-BFT protocol (not to be confused with the Mir library used to implement ISS; see the description of Mir).

ISS achieves scalability without requiring a primary node to periodically decide on the protocol configuration. It multiplexes multiple instances of a leader-driven consensus protocol that operate concurrently and (almost) independently. We abstract away the logic of the underlying consensus protocol and only define an interface, called "Sequenced Broadcast" (SB), that such a consensus protocol must use to interact with ISS.

ISS maintains a contiguous log of (batches of) client requests at each node. Each position in the log corresponds to a unique sequence number and ISS agrees on the assignment of a unique request batch to each sequence number. Our goal is to introduce as much parallelism as possible in assigning batches to sequence numbers while avoiding request duplication, i.e., assigning the same request to more than one sequence number. To this end, ISS subdivides the log into non-overlapping segments. Each segment, representing a subset of the log's sequence numbers, corresponds to an independent consensus protocol instance that has its own leader and executes concurrently with other instances.
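As a purely illustrative example (not the iss package's actual code), the following self-contained sketch partitions an epoch's sequence numbers into segments by assigning them to leaders round-robin, so that each segment is a disjoint, interleaved subset of the log:

package main

import "fmt"

// segments illustrates one simple way of splitting an epoch's sequence numbers
// into disjoint segments, one per leader, by assigning them round-robin.
// This is a conceptual sketch only; the partitioning used by ISS may differ.
func segments(firstSN, epochLength uint64, leaders []string) map[string][]uint64 {
	segs := make(map[string][]uint64, len(leaders))
	for i := uint64(0); i < epochLength; i++ {
		leader := leaders[int(i%uint64(len(leaders)))]
		segs[leader] = append(segs[leader], firstSN+i)
	}
	return segs
}

func main() {
	// Three leaders and an epoch of 9 sequence numbers starting at 100.
	for leader, sns := range segments(100, 9, []string{"node0", "node1", "node2"}) {
		fmt.Println(leader, sns) // e.g., node0 [100 103 106]
	}
}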

To prevent the leaders of two different segments from concurrently proposing the same request, and thus wasting resources, while also preventing malicious leaders from censoring (i.e., not proposing) certain requests, we adopt and generalize the partitioning of the request space introduced by Mir-BFT. At any point in time, ISS assigns a different subset of client requests (that we call a bucket) to each segment. ISS periodically changes this assignment, such that each request is guaranteed to eventually be assigned to a segment with a correct leader.
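Again as an illustration only (not the library's code), the sketch below shows the two ingredients of this scheme: hashing requests into buckets, and rotating the bucket-to-leader assignment with the epoch number so that over time every bucket is assigned to every leader:

package main

import (
	"fmt"
	"hash/fnv"
)

// bucketOf hashes a request (identified here by its raw bytes) into one of
// numBuckets buckets. Illustrative only; the iss package uses its own hashing.
func bucketOf(req []byte, numBuckets int) int {
	h := fnv.New32a()
	h.Write(req)
	return int(h.Sum32() % uint32(numBuckets))
}

// leaderOf rotates the bucket-to-leader assignment with the epoch number, so
// that, over many epochs, every bucket is eventually assigned to every leader.
func leaderOf(bucket int, epoch uint64, leaders []string) string {
	return leaders[(bucket+int(epoch))%len(leaders)]
}

func main() {
	leaders := []string{"node0", "node1", "node2", "node3"}
	req := []byte("client42:request7")
	b := bucketOf(req, 8)
	for epoch := uint64(0); epoch < 4; epoch++ {
		fmt.Printf("epoch %d: bucket %d handled by %s\n", epoch, b, leaderOf(b, epoch, leaders))
	}
}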

The figure below shows the high-level architecture of the ISS protocol.

[Figure: High-level architecture of the ISS protocol]

Documentation

Overview

Package iss contains the implementation of the ISS protocol, the new generation of Mir. For the details of the protocol, see README.md in this directory. To use ISS, instantiate it by calling `iss.New` and use it as the Protocol module when instantiating a mir.Node. A default configuration (to pass, among other arguments, to `iss.New`) can be obtained from `issutil.DefaultParams`.
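A minimal sketch of this instantiation is shown below. It relies only on functions documented on this page plus a few assumptions, flagged in the comments: the import paths, the checkpoint.Genesis helper used to wrap the initial state snapshot into a starting checkpoint, and the console logger. The params value would typically be obtained from issutil.DefaultParams, and hashImpl and chkpVerifier from Mir's crypto and checkpoint packages; their construction is omitted here.

package issexample

import (
	// The import paths below are assumptions and may differ between versions
	// of the Mir library.
	"github.com/filecoin-project/mir/pkg/checkpoint"
	"github.com/filecoin-project/mir/pkg/crypto"
	"github.com/filecoin-project/mir/pkg/iss"
	"github.com/filecoin-project/mir/pkg/logging"

	issconfig "github.com/filecoin-project/mir/pkg/iss/config"
	t "github.com/filecoin-project/mir/pkg/types"
)

// newISSProtocol sketches the instantiation flow described above. The params
// value would typically come from issutil.DefaultParams; hashImpl and
// chkpVerifier come from Mir's crypto and checkpoint packages. Their
// construction is not shown here.
func newISSProtocol(
	ownID t.NodeID,
	params *issconfig.ModuleParams,
	hashImpl crypto.HashImpl,
	chkpVerifier checkpoint.Verifier,
) (*iss.ISS, error) {

	// Module IDs under which ISS expects to find the modules it depends on.
	moduleConfig := iss.DefaultModuleConfig()

	// State snapshot for epoch 0, here based on an empty application state.
	var appState []byte
	snapshot, err := iss.InitialStateSnapshot(appState, params)
	if err != nil {
		return nil, err
	}

	// Wrap the snapshot in a stable checkpoint to start from.
	// checkpoint.Genesis is an assumption; any stable checkpoint of the
	// desired starting state can be used instead.
	startingChkp := checkpoint.Genesis(snapshot)

	// Create the ISS protocol module itself, to be used as the Protocol
	// module when instantiating a mir.Node.
	return iss.New(
		ownID,
		moduleConfig,
		params,
		startingChkp,
		hashImpl,
		chkpVerifier,
		logging.ConsoleInfoLogger,
	)
}

The returned *ISS value is then stored under moduleConfig.Self in the node's module map (with DefaultModules optionally filling in any remaining ISS-related modules) before the map is passed to mir.NewNode.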

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func DefaultModules

func DefaultModules(orig modules.Modules, moduleConfig *ModuleConfig) (modules.Modules, error)

DefaultModules takes a Modules object (as a value, not a pointer to it) and returns a new Modules object with default ISS modules inserted where no module has been specified.

func Event

func Event(destModule t.ModuleID, event *isspb.ISSEvent) *eventpb.Event

func HashOrigin

func HashOrigin(ownModuleID t.ModuleID, origin *isspb.ISSHashOrigin) *eventpb.HashOrigin

func InitialStateSnapshot

func InitialStateSnapshot(
	appState []byte,
	params *issconfig.ModuleParams,
) (*commonpb.StateSnapshot, error)

InitialStateSnapshot creates and returns a default initial snapshot that can be used to instantiate ISS. The created snapshot corresponds to epoch 0, without any committed transactions, and with the initial membership (found in params and used for epoch 0) repeated for all the params.ConfigOffset following epochs.

func LogEntryHashOrigin

func LogEntryHashOrigin(ownModuleID t.ModuleID, logEntrySN t.SeqNr) *eventpb.HashOrigin

func Message

func Message(msg *isspb.ISSMessage) *messagepb.Message

func PushCheckpoint

func PushCheckpoint(ownModuleID t.ModuleID) *eventpb.Event

func SigVerOrigin

func SigVerOrigin(ownModuleID t.ModuleID, origin *isspb.ISSSigVerOrigin) *eventpb.SigVerOrigin

func StableCheckpointMessage

func StableCheckpointMessage(stableCheckpoint *checkpoint.StableCheckpoint) *messagepb.Message

func StableCheckpointSigVerOrigin

func StableCheckpointSigVerOrigin(
	ownModuleID t.ModuleID,
	stableCheckpoint *checkpointpb.StableCheckpoint,
) *eventpb.SigVerOrigin

Types

type BucketGroup

type BucketGroup []*RequestBucket

BucketGroup represents a group of request buckets. It is used to represent both the set of all buckets used by ISS throughout the whole execution (across epochs), and subsets of it used to create request batches.

func NewBuckets

func NewBuckets(numBuckets int, logger logging.Logger) *BucketGroup

NewBuckets returns a new group of numBuckets initialized buckets. The logger will be used to output bucket-related debugging messages.

func (BucketGroup) CutBatch

func (buckets BucketGroup) CutBatch(maxBatchSize t.NumRequests) *requestpb.Batch

CutBatch assembles and returns a new request batch from requests in the bucket group, removing those requests from their respective buckets. The size of the returned batch will be min(buckets.TotalRequests(), maxBatchSize). If possible, requests are taken from every non-empty bucket in the group.

func (BucketGroup) Distribute

func (buckets BucketGroup) Distribute(leaders []t.NodeID, epoch t.EpochNr) map[t.NodeID][]int

Distribute takes a list of node IDs (representing the leaders of the given epoch) and assigns a list of bucket IDs to each of the node (leader) IDs, such that the ID of each bucket is assigned to exactly one leader. Distribute guarantees that if some node is part of `leaders` for infinitely many consecutive epochs (i.e., infinitely many invocations of Distribute with the `epoch` parameter values increasing monotonically), the ID of each bucket in the group will be assigned to the node infinitely many times. Distribute also makes a best effort to distribute the buckets evenly among the leaders. If `leaders` is empty, Distribute returns an empty map.

TODO: Update this to have a more sophisticated, liveness-ensuring implementation, to actually implement what is written above. An additional parameter with all the nodes (even non-leaders) might help there.

func (BucketGroup) Get

func (buckets BucketGroup) Get(bID int) *RequestBucket

Get returns the bucket with ID bID.

func (BucketGroup) RequestBucket

func (buckets BucketGroup) RequestBucket(req *requestpb.HashedRequest) *RequestBucket

RequestBucket returns the bucket from this group to which the given request maps. Note that this depends on the whole bucket group (not just the request), as RequestBucket employs a hash function to evenly distribute requests among the buckets in the group. Thus, the same request may map to some bucket in one group and to a different bucket in a different group, even if the former bucket is part of the latter group.

func (BucketGroup) Select

func (buckets BucketGroup) Select(bucketIDs []int) BucketGroup

Select returns a subgroup of buckets consisting only of buckets from this group with the given IDs. Select does not make deep copies of the selected buckets and the buckets underlying both the original and the new group are the same. If any of the given IDs is not represented in this group, Select panics.

func (BucketGroup) TotalRequests

func (buckets BucketGroup) TotalRequests() t.NumRequests

TotalRequests returns the total number of requests in all buckets of this group.
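Taken together, NewBuckets, Distribute, Select, and CutBatch suggest the flow a leader follows when assembling a proposal. The sketch below uses only the signatures documented above; the import paths and the console logger are assumptions, and since no requests are added to the buckets here, the cut batch is simply empty:

package issexample

import (
	// Assumed import paths; they may differ between versions of the Mir library.
	"github.com/filecoin-project/mir/pkg/iss"
	"github.com/filecoin-project/mir/pkg/logging"
	"github.com/filecoin-project/mir/pkg/pb/requestpb"
	t "github.com/filecoin-project/mir/pkg/types"
)

// cutOwnBatch sketches how a leader could assemble a batch from the buckets
// assigned to it in the given epoch. With no requests added to the buckets,
// the returned batch contains no requests; adding requests is out of scope here.
func cutOwnBatch(ownID t.NodeID, epoch t.EpochNr, allLeaders []t.NodeID) *requestpb.Batch {
	// All buckets used throughout the execution (8 is an arbitrary choice).
	buckets := iss.NewBuckets(8, logging.ConsoleInfoLogger)

	// Assign bucket IDs to the epoch's leaders.
	assignment := buckets.Distribute(allLeaders, epoch)

	// Select this node's buckets and cut a batch of at most 16 requests from them.
	ownBuckets := buckets.Select(assignment[ownID])
	return ownBuckets.CutBatch(t.NumRequests(16))
}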

type CommitLogEntry

type CommitLogEntry struct {
	// Sequence number at which this entry has been ordered.
	Sn t.SeqNr

	// The delivered availability certificate data.
	// TODO: Replace by actual certificate when deterministic serialization of certificates is implemented.
	CertData []byte

	// The digest (hash) of the entry.
	Digest []byte

	// A flag indicating whether this entry is an actual certificate (false)
	// or whether the orderer delivered a special abort value (true).
	Aborted bool

	// In case Aborted is true, this field indicates the ID of the node
	// that is suspected to be the reason for the orderer aborting (usually the leader).
	// This information can be used by the leader selection policy at epoch transition.
	Suspect t.NodeID
}

The CommitLogEntry type represents an entry of the commit log, the final output of the ordering process. Whenever an orderer delivers an availability certificate (or a special abort value), it is inserted into the commit log in the form of a CommitLogEntry.

type ISS

type ISS struct {

	// The ISS configuration parameters (e.g., segment length, proposal frequency, etc.)
	// passed to New() when creating an ISS protocol instance.
	Params *issconfig.ModuleParams

	// The logic for selecting leader nodes in each epoch.
	// For details see the documentation of the LeaderSelectionPolicy type.
	// ATTENTION: The leader selection policy is stateful!
	// Must not be nil.
	LeaderPolicy lsp.LeaderSelectionPolicy
	// contains filtered or unexported fields
}

The ISS type represents the ISS protocol module to be used when instantiating a node. The type should not be instantiated directly; only properly initialized values returned by the New() function should be used.

func New

func New(
	ownID t.NodeID,
	moduleConfig *ModuleConfig,
	params *issconfig.ModuleParams,
	startingChkp *checkpoint.StableCheckpoint,
	hashImpl crypto.HashImpl,
	chkpVerifier checkpoint.Verifier,
	logger logging.Logger,
) (*ISS, error)

New returns a new initialized instance of the ISS protocol module to be used when instantiating a mir.Node.

func (*ISS) ApplyEvent

func (iss *ISS) ApplyEvent(event *eventpb.Event) (*events.EventList, error)

ApplyEvent receives one event and applies it to the ISS protocol state machine, potentially altering its state and producing a (potentially empty) list of more events to be applied to other modules.

func (*ISS) ApplyEvents

func (iss *ISS) ApplyEvents(eventsIn *events.EventList) (*events.EventList, error)

ApplyEvents receives a list of events, processes them sequentially, and returns a list of resulting events.

func (*ISS) ImplementsModule

func (iss *ISS) ImplementsModule()

The ImplementsModule method only serves the purpose of indicating that this is a Module and must not be called.

type ModuleConfig

type ModuleConfig struct {
	Self         t.ModuleID
	Net          t.ModuleID
	App          t.ModuleID
	Timer        t.ModuleID
	Availability t.ModuleID
	Checkpoint   t.ModuleID
	Ordering     t.ModuleID
}

ModuleConfig contains the names of modules ISS depends on. The corresponding modules are expected by ISS to be stored under these keys by the Node.

func DefaultModuleConfig

func DefaultModuleConfig() *ModuleConfig
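DefaultModuleConfig returns a ModuleConfig pre-populated with ISS's default module IDs (the concrete values are not listed on this page). The sketch below, with the purely hypothetical module ID "myapp", shows how an individual field can be overridden; the overriding ID must match the key under which the node stores the corresponding module:

package issexample

import (
	"github.com/filecoin-project/mir/pkg/iss" // assumed import path
)

// issModuleConfig returns the default ISS module configuration with the
// application module ID overridden. "myapp" is a hypothetical ID that must
// match the key under which the node stores the application module.
func issModuleConfig() *iss.ModuleConfig {
	moduleConfig := iss.DefaultModuleConfig()
	moduleConfig.App = "myapp"
	return moduleConfig
}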

type RequestBucket

type RequestBucket struct {

	// Numeric ID of the bucket.
	// A hash function maps each request to a bucket ID.
	ID int
	// contains filtered or unexported fields
}

RequestBucket represents a subset of received requests (called a Bucket in ISS) that retains the order in which the requests have been added. Each request deterministically maps to a single bucket. A hash function (BucketGroup.RequestBucket()) decides which bucket a request falls into.

The implementation of a bucket must support efficient additions (new requests), removals of the n oldest requests (cutting a batch), as well as removals of random requests (committing requests).
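For illustration (this is not the iss package's implementation), such a bucket can be built from a doubly linked list, which preserves addition order, combined with maps indexing requests by an identifier:

package bucketexample

import "container/list"

// bucket is an illustrative request bucket: a FIFO list of request IDs plus an
// index for O(1) membership tests and removals of arbitrary requests.
// It is not the data structure used by the iss package.
type bucket struct {
	order *list.List               // request IDs in addition order (front = oldest)
	index map[string]*list.Element // request ID -> its element in order
	seen  map[string]bool          // IDs ever added or removed; blocks re-adding
}

func newBucket() *bucket {
	return &bucket{order: list.New(), index: map[string]*list.Element{}, seen: map[string]bool{}}
}

// add appends a request unless it was already added (even if since removed).
func (b *bucket) add(id string) bool {
	if b.seen[id] {
		return false
	}
	b.seen[id] = true
	b.index[id] = b.order.PushBack(id)
	return true
}

// remove deletes a request (if present) and blocks future additions of it.
func (b *bucket) remove(id string) {
	b.seen[id] = true
	if e, ok := b.index[id]; ok {
		b.order.Remove(e)
		delete(b.index, id)
	}
}

// removeFirst pops up to n of the oldest requests (batch cutting).
func (b *bucket) removeFirst(n int, acc []string) []string {
	for i := 0; i < n && b.order.Len() > 0; i++ {
		front := b.order.Front()
		id := front.Value.(string)
		b.order.Remove(front)
		delete(b.index, id)
		acc = append(acc, id)
	}
	return acc
}

// resurrect puts a previously removed request back at the front of the bucket.
func (b *bucket) resurrect(id string) {
	if _, ok := b.index[id]; !ok {
		b.index[id] = b.order.PushFront(id)
	}
}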

func (*RequestBucket) Add

func (b *RequestBucket) Add(req *requestpb.HashedRequest) bool

Add adds a new request to the bucket if that request has not yet been added. Note that even requests that have been added and removed cannot be added again (except through request resurrection, which is done using the Resurrect() method). Returns true if the request was not yet in the bucket and has just been added, false if the request has already been added.

func (*RequestBucket) Contains

func (b *RequestBucket) Contains(req *requestpb.HashedRequest) bool

Contains returns true if the given request is in the bucket, false otherwise. Only requests that have been added but not removed count as contained in the bucket, as well as resurrected requests that have not been removed since resurrection.

func (*RequestBucket) Len

func (b *RequestBucket) Len() int

Len returns the number of requests in the bucket.

func (*RequestBucket) Remove

func (b *RequestBucket) Remove(req *requestpb.HashedRequest)

Remove removes a request from the bucket. Note that even after removal, the request cannot be added again using the Add() method. Moreover, removing a request from the bucket, even if the request is not present in the bucket, also prevents the request from being added in the future using Add(). It can, however, be returned to the bucket through request resurrection; see Resurrect().

func (*RequestBucket) RemoveFirst

func (b *RequestBucket) RemoveFirst(n int, acc []*requestpb.HashedRequest) []*requestpb.HashedRequest

RemoveFirst removes up to the first n requests from the bucket and appends them to the accumulator acc. Returns the resulting slice obtained by appending the removed requests to acc.

func (*RequestBucket) Resurrect

func (b *RequestBucket) Resurrect(req *requestpb.HashedRequest)

Resurrect re-adds a previously removed request to the bucket, effectively undoing the removal of the request. The request is added to the "front" of the bucket, i.e., it will be the first request to be removed by RemoveFirst(). Request resurrection is performed when a leader proposes a batch from this bucket, but, for some reason, the batch is not committed in the same epoch. In such a case, the requests contained in the batch need to be resurrected (i.e., put back in their respective buckets) so they can be committed in a future epoch.
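As a hedged sketch of this use (the import paths are assumptions, and collecting the uncommitted requests is out of scope), resurrection at an epoch transition could look as follows:

package issexample

import (
	// Assumed import paths; they may differ between versions of the Mir library.
	"github.com/filecoin-project/mir/pkg/iss"
	"github.com/filecoin-project/mir/pkg/pb/requestpb"
)

// resurrectUncommitted puts the requests of a proposed-but-uncommitted batch
// back into their respective buckets so they can be proposed again later.
func resurrectUncommitted(buckets iss.BucketGroup, uncommitted []*requestpb.HashedRequest) {
	for _, req := range uncommitted {
		buckets.RequestBucket(req).Resurrect(req)
	}
}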

