Documentation ¶
Overview ¶
Package raft implements a Consensus component for IPFS Cluster based on Raft (go-libp2p-raft).
Index ¶
- Constants
- Variables
- func LastStateRaw(cfg *Config) (io.Reader, bool, error)
- func SnapshotSave(cfg *Config, newState state.State, pid peer.ID) error
- type Config
- type Consensus
- func (cc *Consensus) AddPeer(pid peer.ID) error
- func (cc *Consensus) Clean() error
- func (cc *Consensus) Leader() (peer.ID, error)
- func (cc *Consensus) LogPin(pin api.Pin) error
- func (cc *Consensus) LogUnpin(pin api.Pin) error
- func (cc *Consensus) Peers() ([]peer.ID, error)
- func (cc *Consensus) Ready() <-chan struct{}
- func (cc *Consensus) RmPeer(pid peer.ID) error
- func (cc *Consensus) Rollback(state state.State) error
- func (cc *Consensus) SetClient(c *rpc.Client)
- func (cc *Consensus) Shutdown() error
- func (cc *Consensus) State() (state.State, error)
- func (cc *Consensus) WaitForSync() error
- type LogOp
- type LogOpType
Constants ¶
const (
    LogOpPin = iota + 1
    LogOpUnpin
)
Type of consensus operation
Variables ¶
var (
    DefaultDataSubFolder        = "ipfs-cluster-data"
    DefaultWaitForLeaderTimeout = 15 * time.Second
    DefaultCommitRetries        = 1
    DefaultNetworkTimeout       = 10 * time.Second
    DefaultCommitRetryDelay     = 200 * time.Millisecond
)
Configuration defaults
var RaftDataBackupKeep = 5
RaftDataBackupKeep indicates the number of data folders we keep around after consensus.Clean() has been called.
var RaftLogCacheSize = 512
RaftLogCacheSize is the maximum number of logs to cache in-memory. This is used to reduce disk I/O for the recently committed entries.
var RaftMaxSnapshots = 5
RaftMaxSnapshots indicates how many snapshots to keep in the consensus data folder. TODO: Maybe include this in Config. Not sure how useful it is to touch this anyway.
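Because these are package-level variables, they must be adjusted before the Consensus component is created. A minimal sketch, assuming the package is imported from github.com/ipfs/ipfs-cluster/consensus/raft:

package main

import (
    raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

func init() {
    // These knobs are read when the component starts, so set them early.
    raft.RaftMaxSnapshots = 10   // keep more snapshots in the data folder
    raft.RaftLogCacheSize = 1024 // cache more recent log entries in memory
    raft.RaftDataBackupKeep = 10 // keep more data-folder backups after Clean()
}

func main() {}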
Functions ¶
func LastStateRaw ¶ added in v0.3.1
func LastStateRaw(cfg *Config) (io.Reader, bool, error)

LastStateRaw returns a reader for the bytes of the last snapshot stored and a flag indicating whether any snapshot was found.
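A minimal sketch of inspecting the last stored snapshot. It assumes a *Config whose DataFolder points at existing cluster data; the folder name used here is hypothetical:

package main

import (
    "fmt"
    "io/ioutil"
    "log"

    raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

func main() {
    cfg := &raft.Config{}
    if err := cfg.Default(); err != nil {
        log.Fatal(err)
    }
    cfg.DataFolder = "ipfs-cluster-data" // hypothetical folder with Raft data

    r, found, err := raft.LastStateRaw(cfg)
    if err != nil {
        log.Fatal(err)
    }
    if !found {
        log.Println("no snapshot found")
        return
    }
    raw, err := ioutil.ReadAll(r) // r streams the raw snapshot bytes
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("last snapshot: %d bytes\n", len(raw))
}

func SnapshotSave ¶ added in v0.3.1

func SnapshotSave(cfg *Config, newState state.State, pid peer.ID) error

SnapshotSave saves the provided state to a snapshot in the Raft data folder. Existing Raft data is backed up and replaced by the new snapshot.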
Types ¶
type Config ¶ added in v0.2.0
type Config struct {
    config.Saver

    // A Hashicorp Raft's configuration object.
    RaftConfig *hraft.Config
    // A folder to store Raft's data.
    DataFolder string
    // WaitForLeaderTimeout specifies how long to wait for a leader before
    // failing an operation.
    WaitForLeaderTimeout time.Duration
    // NetworkTimeout specifies how long before a Raft network
    // operation times out.
    NetworkTimeout time.Duration
    // CommitRetries specifies how many times we retry a failed commit
    // before giving up.
    CommitRetries int
    // How long to wait between retries.
    CommitRetryDelay time.Duration
    // contains filtered or unexported fields
}
Config allows configuring the Raft Consensus component for ipfs-cluster. The component's configuration section is represented by ConfigJSON. Config implements the ComponentConfig interface.
func (*Config) ConfigKey ¶ added in v0.2.0
ConfigKey returns a human-friendly identifier for this Config.
func (*Config) Default ¶ added in v0.2.0
Default initializes this configuration with working defaults.
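A sketch of initializing a configuration programmatically; that Default returns an error (per the ComponentConfig interface) is an assumption here:

package main

import (
    "log"
    "time"

    raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

func main() {
    cfg := &raft.Config{}
    if err := cfg.Default(); err != nil {
        log.Fatal(err)
    }
    // Override selected defaults afterwards.
    cfg.WaitForLeaderTimeout = 30 * time.Second
    cfg.CommitRetries = 2
}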
func (*Config) LoadJSON ¶ added in v0.2.0
LoadJSON parses a JSON-encoded configuration (see jsonConfig). The Config will have default values for all fields not explicitly set in the given JSON object.
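A sketch of loading a configuration from JSON. The keys shown are illustrative assumptions; jsonConfig defines the authoritative names:

package main

import (
    "log"

    raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

func main() {
    raw := []byte(`{
        "data_folder": "ipfs-cluster-data",
        "wait_for_leader_timeout": "20s",
        "commit_retries": 2
    }`)

    cfg := &raft.Config{}
    if err := cfg.LoadJSON(raw); err != nil {
        log.Fatal(err)
    }
}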
type Consensus ¶
type Consensus struct {
// contains filtered or unexported fields
}
Consensus handles the work of keeping a shared state between the peers of an IPFS Cluster, as well as modifying that state and applying any updates in a thread-safe manner.
func NewConsensus ¶
func NewConsensus(clusterPeers []peer.ID, host host.Host, cfg *Config, state state.State) (*Consensus, error)
NewConsensus builds a new Consensus component. The state is only used to initialize the Consensus system; any information already in it is discarded.
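A sketch of constructing the component for a single-peer cluster. The libp2p, go-libp2p-peer and mapstate import paths are assumptions matching the era of this package:

package main

import (
    "context"
    "log"

    libp2p "github.com/libp2p/go-libp2p"
    peer "github.com/libp2p/go-libp2p-peer"

    raft "github.com/ipfs/ipfs-cluster/consensus/raft"
    "github.com/ipfs/ipfs-cluster/state/mapstate"
)

func main() {
    h, err := libp2p.New(context.Background()) // a default libp2p host
    if err != nil {
        log.Fatal(err)
    }

    cfg := &raft.Config{}
    if err := cfg.Default(); err != nil {
        log.Fatal(err)
    }
    cfg.DataFolder = "ipfs-cluster-data" // hypothetical data folder

    // The initial state only seeds the system; existing Raft data,
    // if any, takes precedence over its contents.
    cc, err := raft.NewConsensus([]peer.ID{h.ID()}, h, cfg, mapstate.NewMapState())
    if err != nil {
        log.Fatal(err)
    }
    defer cc.Shutdown()

    <-cc.Ready() // wait until bootstrapping finishes
}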
func (*Consensus) AddPeer ¶ added in v0.3.0
AddPeer adds a new peer to participate in this consensus. If this peer is not the leader, the operation is forwarded to the leader.
func (*Consensus) Clean ¶ added in v0.3.0
Clean removes all Raft data from disk. On the next start, the peer will be bootstrapped from scratch.
func (*Consensus) Leader ¶
Leader returns the peerID of the Leader of the cluster. It returns an error when there is no leader.
func (*Consensus) LogPin ¶
LogPin submits a Cid to the shared state of the cluster. If this peer is not the leader, the operation is forwarded to the leader.
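A sketch of submitting a pin and reversing it with LogUnpin (documented next). It assumes cc is a started *Consensus; api.PinCid, which wraps a Cid in an api.Pin with default options, is an assumption based on the ipfs-cluster api package:

package main

import (
    "log"

    cid "github.com/ipfs/go-cid"

    "github.com/ipfs/ipfs-cluster/api"
    raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

func pinAndUnpin(cc *raft.Consensus) {
    c, err := cid.Decode("QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG")
    if err != nil {
        log.Fatal(err)
    }
    pin := api.PinCid(c) // wrap the Cid in an api.Pin

    if err := cc.LogPin(pin); err != nil { // committed through the Raft log
        log.Fatal(err)
    }
    if err := cc.LogUnpin(pin); err != nil { // the reverse operation
        log.Fatal(err)
    }
}

func main() {}

func (*Consensus) LogUnpin ¶

func (cc *Consensus) LogUnpin(pin api.Pin) error

LogUnpin removes a Cid from the shared state of the cluster. If this peer is not the leader, the operation is forwarded to the leader.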
func (*Consensus) Peers ¶ added in v0.3.0
Peers returns the current list of peers in the consensus. The list will be sorted alphabetically.
func (*Consensus) Ready ¶
func (cc *Consensus) Ready() <-chan struct{}
Ready returns a channel which is signaled when the Consensus algorithm has finished bootstrapping and is ready to use.
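A caller can guard against waiting forever by pairing the channel with a timeout; a minimal sketch, assuming cc is a started *Consensus:

package main

import (
    "log"
    "time"

    raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

func waitReady(cc *raft.Consensus) {
    select {
    case <-cc.Ready(): // bootstrapping finished
        log.Println("consensus ready")
    case <-time.After(5 * time.Minute):
        log.Fatal("timed out waiting for consensus")
    }
}

func main() {}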
func (*Consensus) RmPeer ¶ added in v0.3.0
RmPeer removes a peer from this consensus. If this peer is not the leader, the operation is forwarded to the leader.
func (*Consensus) Rollback ¶
Rollback replaces the current agreed-upon state with the state provided. Only the consensus leader can perform this operation.
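func (*Consensus) SetClient ¶

func (cc *Consensus) SetClient(c *rpc.Client)

SetClient makes the component ready to perform RPC requests.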
func (*Consensus) Shutdown ¶
Shutdown stops the component so it will not process any more updates. The underlying consensus is permanently shut down, along with the libp2p transport.
func (*Consensus) State ¶
State retrieves the current consensus State. It may error if no State has been agreed upon or the state is not consistent. The returned State is the last agreed-upon State known by this node.
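A sketch of reading the agreed-upon state and listing its pins; the List method is assumed from the state.State implementations in ipfs-cluster (e.g. mapstate):

package main

import (
    "fmt"
    "log"

    raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

func dumpState(cc *raft.Consensus) {
    st, err := cc.State()
    if err != nil {
        log.Fatal(err) // no agreed-upon state, or state inconsistent
    }
    for _, pin := range st.List() { // List is assumed from state.State
        fmt.Println(pin.Cid)
    }
}

func main() {}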
func (*Consensus) WaitForSync ¶
WaitForSync waits for a leader and for the state to be up to date, then returns.
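Putting the pieces together, a typical startup sequence might look like this sketch; the go-libp2p-gorpc import path and the pre-built rpcClient are assumptions:

package main

import (
    "log"

    rpc "github.com/libp2p/go-libp2p-gorpc"

    raft "github.com/ipfs/ipfs-cluster/consensus/raft"
)

func start(cc *raft.Consensus, rpcClient *rpc.Client) {
    cc.SetClient(rpcClient) // enable RPC before syncing
    if err := cc.WaitForSync(); err != nil {
        log.Fatal(err)
    }
    <-cc.Ready() // bootstrap complete; safe to log pins now
    log.Println("consensus is synced and ready")
}

func main() {}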