Documentation ¶
Overview ¶
Package raft implements a Consensus component for IPFS Cluster which uses Raft (go-libp2p-raft).
Index ¶
- Constants
- Variables
- func CleanupRaft(dataFolder string, keep int) error
- func LastStateRaw(cfg *Config) (io.Reader, bool, error)
- func SnapshotSave(cfg *Config, newState state.State, pids []peer.ID) error
- type Config
- type Consensus
- func (cc *Consensus) AddPeer(pid peer.ID) error
- func (cc *Consensus) Clean() error
- func (cc *Consensus) Leader() (peer.ID, error)
- func (cc *Consensus) LogPin(pin api.Pin) error
- func (cc *Consensus) LogUnpin(pin api.Pin) error
- func (cc *Consensus) Peers() ([]peer.ID, error)
- func (cc *Consensus) Ready() <-chan struct{}
- func (cc *Consensus) RmPeer(pid peer.ID) error
- func (cc *Consensus) Rollback(state state.State) error
- func (cc *Consensus) SetClient(c *rpc.Client)
- func (cc *Consensus) Shutdown() error
- func (cc *Consensus) State() (state.State, error)
- func (cc *Consensus) WaitForSync() error
- type LogOp
- type LogOpType
Constants ¶
const (
	LogOpPin = iota + 1
	LogOpUnpin
)
Type of consensus operation
Variables ¶
var (
	DefaultDataSubFolder        = "raft"
	DefaultWaitForLeaderTimeout = 15 * time.Second
	DefaultCommitRetries        = 1
	DefaultNetworkTimeout       = 10 * time.Second
	DefaultCommitRetryDelay     = 200 * time.Millisecond
	DefaultBackupsRotate        = 6
)
Configuration defaults
var RaftLogCacheSize = 512
RaftLogCacheSize is the maximum number of logs to cache in-memory. This is used to reduce disk I/O for the recently committed entries.
var RaftMaxSnapshots = 5
RaftMaxSnapshots indicates how many snapshots to keep in the consensus data folder. TODO: Maybe include this in Config. Not sure how useful it is to touch this anyway.
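Both are plain package variables, so deployments can tune them before the consensus component is built. A minimal sketch; the import path follows the historical ipfs-cluster layout and may differ in your version:

package main

import (
	raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

func main() {
	// Tune the package-level knobs before constructing the Consensus:
	// keep more snapshots around and cache more log entries in memory.
	raft.RaftMaxSnapshots = 10
	raft.RaftLogCacheSize = 1024
}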
Functions ¶
func CleanupRaft ¶
CleanupRaft moves the current data folder to a backup location. The keep parameter limits how many backup copies are retained.
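A hedged usage sketch; the data folder path is hypothetical and keep mirrors DefaultBackupsRotate:

package main

import (
	"log"

	raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

func main() {
	// Rotate the current Raft data folder into a backup, retaining at
	// most 6 older backups. The path below is hypothetical.
	if err := raft.CleanupRaft("/var/lib/ipfs-cluster/raft", 6); err != nil {
		log.Fatal(err)
	}
}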
func LastStateRaw ¶
LastStateRaw returns the bytes of the last snapshot stored, its metadata, and a flag indicating whether any snapshot was found.
func SnapshotSave ¶
SnapshotSave saves the provided state to a snapshot in the raft data path. Old raft data is backed up and replaced by the new snapshot. pids contains the config-specified peer IDs to include in the snapshot metadata when no previous snapshot exists from which to copy the Raft metadata.
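Together with LastStateRaw, this supports offline snapshot inspection and migration. Below is a non-authoritative sketch; decodeState is a hypothetical helper standing in for a concrete state.State implementation, and the import paths follow the historical layout:

package raftutil

import (
	"errors"
	"io/ioutil"
	"log"

	raft "github.com/ipfs/ipfs-cluster/consensus/raft"
	"github.com/ipfs/ipfs-cluster/state"
	peer "github.com/libp2p/go-libp2p-peer"
)

// decodeState is a hypothetical stand-in: plug in whatever concrete
// state.State implementation and decoding your deployment uses.
func decodeState(raw []byte) (state.State, error) {
	return nil, errors.New("decodeState: not implemented in this sketch")
}

// resnapshot reads the last stored snapshot and writes it back, as one
// would when transforming the state between versions.
func resnapshot(cfg *raft.Config, pids []peer.ID) error {
	r, found, err := raft.LastStateRaw(cfg)
	if err != nil {
		return err
	}
	if !found {
		log.Println("no snapshot found; nothing to do")
		return nil
	}
	raw, err := ioutil.ReadAll(r)
	if err != nil {
		return err
	}
	st, err := decodeState(raw)
	if err != nil {
		return err
	}
	// Backs up old Raft data and replaces it with a snapshot holding st.
	return raft.SnapshotSave(cfg, st, pids)
}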
Types ¶
type Config ¶
type Config struct {
	config.Saver

	// A folder to store Raft's data.
	DataFolder string

	// InitPeerset provides the list of initial cluster peers for new Raft
	// peers (with no prior state). It is ignored when Raft was already
	// initialized or when starting in staging mode.
	InitPeerset []peer.ID

	// WaitForLeaderTimeout specifies how long to wait for a leader before
	// failing an operation.
	WaitForLeaderTimeout time.Duration

	// NetworkTimeout specifies how long before a Raft network
	// operation is timed out.
	NetworkTimeout time.Duration

	// CommitRetries specifies how many times we retry a failed commit until
	// we give up.
	CommitRetries int

	// How long to wait between retries.
	CommitRetryDelay time.Duration

	// BackupsRotate specifies the maximum number of Raft's DataFolder
	// copies that we keep as backups (renaming) after cleanup.
	BackupsRotate int

	// A Hashicorp Raft's configuration object.
	RaftConfig *hraft.Config

	// contains filtered or unexported fields
}
Config allows configuring the Raft Consensus component for ipfs-cluster. The component's configuration section is represented by ConfigJSON. Config implements the ComponentConfig interface.
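A hedged sketch of populating a Config by hand with the exported fields above, mirroring the package defaults; hraft.DefaultConfig is hashicorp/raft's stock configuration, the import paths follow the historical layout, and in practice LoadJSON (below) is the usual entry point:

package raftutil

import (
	"time"

	hraft "github.com/hashicorp/raft"
	raft "github.com/ipfs/ipfs-cluster/consensus/raft"
	peer "github.com/libp2p/go-libp2p-peer"
)

func exampleConfig(initPeers []peer.ID) *raft.Config {
	return &raft.Config{
		DataFolder:           "", // empty: defaults apply (see DefaultDataSubFolder)
		InitPeerset:          initPeers,
		WaitForLeaderTimeout: 15 * time.Second,
		NetworkTimeout:       10 * time.Second,
		CommitRetries:        1,
		CommitRetryDelay:     200 * time.Millisecond,
		BackupsRotate:        6,
		RaftConfig:           hraft.DefaultConfig(),
	}
}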
func (*Config) GetDataFolder ¶
GetDataFolder returns the Raft data folder that we are using.
func (*Config) LoadJSON ¶
LoadJSON parses a JSON-encoded configuration (see jsonConfig). The Config will have default values for all fields not explicitly set in the given JSON object.
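A short sketch, assuming the usual ComponentConfig-style signature LoadJSON([]byte) error:

package raftutil

import (
	"io/ioutil"

	raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

// loadConfig parses the raft configuration section from a JSON file.
// Fields missing from the file keep their default values.
func loadConfig(path string) (*raft.Config, error) {
	raw, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, err
	}
	cfg := &raft.Config{}
	if err := cfg.LoadJSON(raw); err != nil {
		return nil, err
	}
	return cfg, nil
}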
type Consensus ¶
type Consensus struct {
// contains filtered or unexported fields
}
Consensus handles the work of keeping a shared state between the peers of an IPFS Cluster, as well as modifying that state and applying any updates in a thread-safe manner.
func NewConsensus ¶
func NewConsensus(
	host host.Host,
	cfg *Config,
	state state.State,
	staging bool,
) (*Consensus, error)
NewConsensus builds a new ClusterConsensus component using Raft. The state is used to initialize the Consensus system, so any information in it is discarded once the Raft state is loaded. The staging parameter controls whether this Raft peer is expected to join an existing cluster or should run on its own.
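A startup sketch tying NewConsensus to SetClient, Ready and WaitForSync. The caller supplies the libp2p host, configuration, initial state and RPC client; the import paths (go-libp2p-host, go-libp2p-gorpc) follow the historical layout and may differ:

package raftutil

import (
	rpc "github.com/libp2p/go-libp2p-gorpc"
	host "github.com/libp2p/go-libp2p-host"

	raft "github.com/ipfs/ipfs-cluster/consensus/raft"
	"github.com/ipfs/ipfs-cluster/state"
)

// startConsensus wires up and waits for a Consensus component.
func startConsensus(h host.Host, cfg *raft.Config, st state.State, client *rpc.Client) (*raft.Consensus, error) {
	cc, err := raft.NewConsensus(h, cfg, st, false) // false: not staging
	if err != nil {
		return nil, err
	}
	// Make the component able to perform RPC requests.
	cc.SetClient(client)
	// Wait until bootstrapping has finished...
	<-cc.Ready()
	// ...and until a leader is elected and the state is up to date.
	if err := cc.WaitForSync(); err != nil {
		return nil, err
	}
	return cc, nil
}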
func (*Consensus) AddPeer ¶
AddPeer adds a new peer to participate in this consensus. It will forward the operation to the leader if this is not it.
func (*Consensus) Clean ¶
Clean removes all Raft data from disk. On the next start, a fully new peer will be bootstrapped.
func (*Consensus) Leader ¶
Leader returns the peerID of the Leader of the cluster. It returns an error when there is no leader.
func (*Consensus) LogPin ¶
LogPin submits a Cid to the shared state of the cluster. It will forward the operation to the leader if this is not it.
func (*Consensus) LogUnpin ¶
LogUnpin removes a Cid from the shared state of the cluster. It will forward the operation to the leader if this is not it.
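A small sketch committing a pin and then an unpin; the api.Pin value is assumed to come from elsewhere (for example ipfs-cluster's api helpers):

package raftutil

import (
	"github.com/ipfs/ipfs-cluster/api"
	raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

// pinThenUnpin commits a pin and then an unpin to the shared state.
// Each call blocks until committed, retrying up to CommitRetries times,
// and forwards to the leader when this peer is not it.
func pinThenUnpin(cc *raft.Consensus, pin api.Pin) error {
	if err := cc.LogPin(pin); err != nil {
		return err
	}
	return cc.LogUnpin(pin)
}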
func (*Consensus) Peers ¶
Peers returns the current list of peers in the consensus. The list will be sorted alphabetically.
func (*Consensus) Ready ¶
func (cc *Consensus) Ready() <-chan struct{}
Ready returns a channel which is signaled when the Consensus algorithm has finished bootstrapping and is ready to use.
func (*Consensus) RmPeer ¶
RmPeer removes a peer from this consensus. It will forward the operation to the leader if this is not it.
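AddPeer, RmPeer and Peers compose into simple membership changes; a hedged sketch:

package raftutil

import (
	"fmt"

	raft "github.com/ipfs/ipfs-cluster/consensus/raft"
	peer "github.com/libp2p/go-libp2p-peer"
)

// replacePeer adds a new member and removes an old one, then prints
// the resulting peerset. Both mutations are forwarded to the leader
// automatically when this peer is not it.
func replacePeer(cc *raft.Consensus, oldPid, newPid peer.ID) error {
	if err := cc.AddPeer(newPid); err != nil {
		return err
	}
	if err := cc.RmPeer(oldPid); err != nil {
		return err
	}
	peers, err := cc.Peers()
	if err != nil {
		return err
	}
	fmt.Println("current peerset:", peers)
	return nil
}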
func (*Consensus) Rollback ¶
Rollback replaces the current agreed-upon state with the state provided. Only the consensus leader can perform this operation.
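func (*Consensus) SetClient ¶
SetClient makes the component ready to perform RPC requests.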
func (*Consensus) Shutdown ¶
Shutdown stops the component so it will not process any more updates. The underlying consensus is permanently shut down, along with the libp2p transport.
func (*Consensus) State ¶
State retrieves the current consensus State. It may error if no State has been agreed upon or the state is not consistent. The returned State is the last agreed-upon State known by this node.
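A reading sketch combining Leader and State:

package raftutil

import (
	"fmt"

	raft "github.com/ipfs/ipfs-cluster/consensus/raft"
	"github.com/ipfs/ipfs-cluster/state"
)

// inspect reports the current leader and returns the last agreed-upon
// state known by this peer.
func inspect(cc *raft.Consensus) (state.State, error) {
	leader, err := cc.Leader()
	if err != nil {
		return nil, err // no leader known
	}
	fmt.Println("current leader:", leader)
	return cc.State()
}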
func (*Consensus) WaitForSync ¶
WaitForSync waits for a leader and for the state to be up to date, then returns.