ipfscluster

package module
v0.0.11
Published: Mar 28, 2017 License: MIT Imports: 24 Imported by: 0

README

ipfs-cluster


Collective pinning and composition for IPFS.

THIS SOFTWARE IS ALPHA

ipfs-cluster allows replicating content (by pinning) across multiple IPFS nodes:

  • Works on top of the IPFS daemon by running one cluster peer per IPFS node (ipfs-cluster-service)
  • A replication_factor controls how many times a CID is pinned in the cluster
  • Re-pins content on other peers when a peer goes down
  • Provides an HTTP API and a command-line wrapper (ipfs-cluster-ctl)
  • Provides an IPFS daemon API Proxy which intercepts any "pin"/"unpin" requests and does cluster pinning instead
  • The IPFS Proxy makes it possible to compose clusters, with a cluster peer acting as an IPFS daemon for another, higher-level cluster.
  • Peers share the state using Raft-based consensus, built on the LibP2P stack (go-libp2p-raft, go-libp2p-rpc...)

Maintainers and Roadmap

This project is captained by @hsanjuan. See the captain's log for a written summary of current status and upcoming features. You can also check out the project's Roadmap for a high level overview of what's coming and the project's Waffle Board to see what issues are being worked on at the moment.

Install

Pre-compiled binaries

You can download pre-compiled binaries for your platform from the dist.ipfs.io website.

Note that, since IPFS Cluster is evolving fast, these builds may not contain the latest features and bugfixes, as they are only updated bi-weekly.

Docker

You can build the ipfs-cluster Docker container yourself or download an automated build. This container runs both the IPFS daemon and ipfs-cluster-service, and includes ipfs-cluster-ctl. To launch the latest published version with Docker, run:

$ docker run ipfs/ipfs-cluster

To build the container manually, run:

$ docker build . -t ipfs-cluster

You can mount your local ipfs-cluster configuration and data folder by passing -v your-local-ipfs-cluster-folder:/data/ipfs-cluster to Docker.
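
For example, assuming your configuration and data live in $HOME/ipfs-cluster-data on the host (the host path here is only an illustration):

$ docker run -v $HOME/ipfs-cluster-data:/data/ipfs-cluster ipfs/ipfs-cluster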

Install from sources

Installing from master is the best way to get the latest features and bugfixes. In order to install the ipfs-cluster-service and ipfs-cluster-ctl tools you will need Go installed on your system; then run the following commands:

$ go get -u -d github.com/ipfs/ipfs-cluster
$ cd $GOPATH/src/github.com/ipfs/ipfs-cluster
$ make install

This will install ipfs-cluster-service and ipfs-cluster-ctl in your $GOPATH/bin folder. See the usage below.

Usage

ipfs-cluster-service

For information on how to configure and launch an IPFS Cluster peer see the ipfs-cluster-service README.

ipfs-cluster-ctl

For information on how to manage and perform operations on an IPFS Cluster peer see the ipfs-cluster-ctl README.

Go

IPFS Cluster nodes can be launched directly from Go. The Cluster object provides methods to interact with the cluster and perform actions.

Documentation and examples on how to use IPFS Cluster from Go can be found in godoc.org/github.com/ipfs/ipfs-cluster.
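
The following is a minimal, hedged sketch of what launching a peer from Go looks like. The component variables must be built from the corresponding subpackages (restapi, ipfshttp, maptracker, basic, ascendalloc, disk/numpin, mapstate); their constructors are not shown here and the state import path is inferred from the directory layout listed under Directories.

package main

import (
	ipfscluster "github.com/ipfs/ipfs-cluster"
	"github.com/ipfs/ipfs-cluster/state"
)

func main() {
	// Default configuration with a freshly generated identity.
	cfg, err := ipfscluster.NewDefaultConfig()
	if err != nil {
		panic(err)
	}

	// Component implementations come from the subpackages and must be
	// constructed before calling NewCluster (left out of this sketch).
	var apiComp ipfscluster.API
	var ipfs ipfscluster.IPFSConnector
	var st state.State // e.g. the mapstate implementation
	var tracker ipfscluster.PinTracker
	var monitor ipfscluster.PeerMonitor
	var allocator ipfscluster.PinAllocator
	var informer ipfscluster.Informer

	c, err := ipfscluster.NewCluster(cfg, apiComp, ipfs, st, tracker, monitor, allocator, informer)
	if err != nil {
		panic(err)
	}
	defer c.Shutdown()

	<-c.Ready() // wait until consensus has bootstrapped
	// ... use c.Pin(), c.Status(), c.Peers(), etc.
}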

Additional docs

You can find more information and detailed guides:

Note: please contribute to improve and add more documentation!

API

TODO: Swagger

This is a quick summary of the API endpoints offered by the REST API component (these may change before 1.0):

Method  Endpoint              Comment
GET     /id                   Cluster peer information
GET     /version              Cluster version
GET     /peers                Cluster peers
POST    /peers                Add new peer
DELETE  /peers/{peerID}       Remove a peer
GET     /pinlist              List of pins in the consensus state
GET     /pins                 Status of all tracked CIDs
POST    /pins/sync            Sync all
GET     /pins/{cid}           Status of single CID
POST    /pins/{cid}           Pin CID
DELETE  /pins/{cid}           Unpin CID
POST    /pins/{cid}/sync      Sync CID
POST    /pins/{cid}/recover   Recover CID
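
For example, with the REST API listening on the default address (/ip4/127.0.0.1/tcp/9094, see DefaultAPIAddr below), a few requests could look like this (<cid> is a placeholder):

$ curl http://127.0.0.1:9094/id
$ curl -X POST http://127.0.0.1:9094/pins/<cid>
$ curl http://127.0.0.1:9094/pins/<cid>
$ curl -X DELETE http://127.0.0.1:9094/pins/<cid>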

Architecture

The best place to get an overview of how cluster works, what components exist etc. is the architecture.md doc.

Contribute

PRs accepted.

Small note: If editing the README, please conform to the standard-readme specification.

License

MIT © Protocol Labs, Inc.

Documentation

Overview

Package ipfscluster implements a wrapper for the IPFS daemon which allows orchestrating pinning operations among several IPFS nodes.

IPFS Cluster uses go-libp2p-raft to keep a shared state between the different cluster peers. It also uses LibP2P to enable communication between its different components, which perform different tasks like managing the underlying IPFS daemons, or providing APIs for external control.

Constants

const (
	DefaultConfigCrypto              = crypto.RSA
	DefaultConfigKeyLength           = 2048
	DefaultAPIAddr                   = "/ip4/127.0.0.1/tcp/9094"
	DefaultIPFSProxyAddr             = "/ip4/127.0.0.1/tcp/9095"
	DefaultIPFSNodeAddr              = "/ip4/127.0.0.1/tcp/5001"
	DefaultClusterAddr               = "/ip4/0.0.0.0/tcp/9096"
	DefaultStateSyncSeconds          = 60
	DefaultMonitoringIntervalSeconds = 15
)

Default parameters for the configuration

const Version = "0.0.11"

Version is the current cluster version. Version alignment between components, APIs and tools ensures compatibility among them.

Variables

var Commit string

Commit is the current build commit of cluster. See the Makefile.

var RPCProtocol = protocol.ID("/ipfscluster/" + Version + "/rpc")

RPCProtocol is used to send libp2p messages between cluster peers

Functions

func SetFacilityLogLevel added in v0.0.3

func SetFacilityLogLevel(f, l string)

SetFacilityLogLevel sets the log level for a given module
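
For example, to raise the verbosity of a facility (the facility name and level strings here are assumptions; check the logging facilities defined by the package for the exact values):

ipfscluster.SetFacilityLogLevel("cluster", "debug")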

Types

type API

type API interface {
	Component
}

API is a component which offers an API for Cluster. This is a base component.

type Cluster

type Cluster struct {
	// contains filtered or unexported fields
}

Cluster is the main IPFS cluster component. It provides the go-API for it and orchestrates the components that make up the system.

func NewCluster

func NewCluster(
	cfg *Config,
	api API,
	ipfs IPFSConnector,
	st state.State,
	tracker PinTracker,
	monitor PeerMonitor,
	allocator PinAllocator,
	informer Informer) (*Cluster, error)

NewCluster builds a new IPFS Cluster peer. It initializes a LibP2P host, creates an RPC server and client, and sets up all components.

The new cluster peer may still be performing initialization tasks when this call returns (consensus may still be bootstrapping). Use Cluster.Ready() if you need to wait until the peer is fully up.
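
As a short sketch, a caller might combine Ready() and Done() (documented below) with a timeout before using the peer; the 30-second value is arbitrary and imports of the time and errors packages are omitted:

func waitForPeer(c *ipfscluster.Cluster) error {
	select {
	case <-c.Ready():
		return nil // fully initialized, consensus included
	case <-c.Done():
		return errors.New("peer was shut down while bootstrapping")
	case <-time.After(30 * time.Second):
		return errors.New("timed out waiting for the cluster peer")
	}
}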

func (*Cluster) Done added in v0.0.3

func (c *Cluster) Done() <-chan struct{}

Done provides a way to learn if the peer has been shut down (for example, because it has been removed from the Cluster).

func (*Cluster) ID

func (c *Cluster) ID() api.ID

ID returns information about the Cluster peer

func (*Cluster) Join added in v0.0.3

func (c *Cluster) Join(addr ma.Multiaddr) error

Join adds this peer to an existing cluster. The calling peer should be a single-peer cluster node. This is almost equivalent to calling PeerAdd on the destination cluster.

func (*Cluster) PeerAdd added in v0.0.3

func (c *Cluster) PeerAdd(addr ma.Multiaddr) (api.ID, error)

PeerAdd adds a new peer to this Cluster.

The new peer must be reachable. It will be added to the consensus and will receive the shared state (including the list of peers). The new peer should be a single-peer cluster, preferably without any relevant state.

func (*Cluster) PeerRemove added in v0.0.3

func (c *Cluster) PeerRemove(pid peer.ID) error

PeerRemove removes a peer from this Cluster.

The peer will be removed from the consensus peer set and will be shut down after this happens.

func (*Cluster) Peers added in v0.0.3

func (c *Cluster) Peers() []api.ID

Peers returns the IDs of the members of this Cluster

func (*Cluster) Pin

func (c *Cluster) Pin(pin api.Pin) error

Pin makes the cluster Pin a Cid. This implies adding the Cid to the IPFS Cluster peers' shared state. Depending on the cluster pinning strategy, the PinTracker may then request the IPFS daemon to pin the Cid.

Pin returns an error if the operation could not be persisted to the global state. Pin does not reflect the success or failure of underlying IPFS daemon pinning operations.

func (*Cluster) Pins

func (c *Cluster) Pins() []api.Pin

Pins returns the list of Cids managed by Cluster and which are part of the current global state. This is the source of truth as to which pins are managed, but does not indicate if the item is successfully pinned.

func (*Cluster) Ready added in v0.0.3

func (c *Cluster) Ready() <-chan struct{}

Ready returns a channel which signals when this peer is fully initialized (including consensus).

func (*Cluster) Recover added in v0.0.3

func (c *Cluster) Recover(h *cid.Cid) (api.GlobalPinInfo, error)

Recover triggers a recover operation for a given Cid in all cluster peers.

func (*Cluster) RecoverLocal added in v0.0.3

func (c *Cluster) RecoverLocal(h *cid.Cid) (api.PinInfo, error)

RecoverLocal triggers a recover operation for a given Cid

func (*Cluster) Shutdown

func (c *Cluster) Shutdown() error

Shutdown stops the IPFS cluster components

func (*Cluster) StateSync

func (c *Cluster) StateSync() ([]api.PinInfo, error)

StateSync syncs the consensus state to the Pin Tracker, ensuring that every Cid that should be tracked is tracked. It returns PinInfo for Cids which were added or deleted.

func (*Cluster) Status

func (c *Cluster) Status(h *cid.Cid) (api.GlobalPinInfo, error)

Status returns the GlobalPinInfo for a given Cid. If an error happens, the GlobalPinInfo should contain as much information as could be fetched.
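
A short sketch of checking a CID's global status from Go (c is a running *ipfscluster.Cluster, cid.Decode comes from the go-cid package, and the CID string is a placeholder):

h, err := cid.Decode("<some-cid>")
if err != nil {
	return err
}
info, err := c.Status(h)
if err != nil {
	// info still contains as much information as could be fetched
}
fmt.Printf("%+v\n", info)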

func (*Cluster) StatusAll added in v0.0.3

func (c *Cluster) StatusAll() ([]api.GlobalPinInfo, error)

StatusAll returns the GlobalPinInfo for all tracked Cids. If an error happens, the slice will contain as much information as could be fetched.

func (*Cluster) Sync added in v0.0.3

func (c *Cluster) Sync(h *cid.Cid) (api.GlobalPinInfo, error)

Sync triggers a LocalSyncCid() operation for a given Cid in all cluster peers.

func (*Cluster) SyncAll added in v0.0.3

func (c *Cluster) SyncAll() ([]api.GlobalPinInfo, error)

SyncAll triggers LocalSync() operations in all cluster peers.

func (*Cluster) SyncAllLocal added in v0.0.3

func (c *Cluster) SyncAllLocal() ([]api.PinInfo, error)

SyncAllLocal makes sure that the current state for all tracked items matches the state reported by the IPFS daemon.

SyncAllLocal returns the list of PinInfo that were updated because of the operation, along with those in error states.

func (*Cluster) SyncLocal added in v0.0.3

func (c *Cluster) SyncLocal(h *cid.Cid) (api.PinInfo, error)

SyncLocal performs a local sync operation for the given Cid. This will tell the tracker to verify the status of the Cid against the IPFS daemon. It returns the updated PinInfo for the Cid.

func (*Cluster) Unpin

func (c *Cluster) Unpin(h *cid.Cid) error

Unpin makes the cluster Unpin a Cid. This implies removing the Cid from the IPFS Cluster peers' shared state.

Unpin returns an error if the operation could not be persisted to the global state. Unpin does not reflect the success or failure of underlying IPFS daemon unpinning operations.

func (*Cluster) Version

func (c *Cluster) Version() string

Version returns the current IPFS Cluster version

type Component

type Component interface {
	SetClient(*rpc.Client)
	Shutdown() error
}

Component represents a piece of ipfscluster. Cluster components usually run their own goroutines (an HTTP server, for example). They communicate with the main Cluster component and other components (both local and remote) using an instance of rpc.Client.
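
As an illustration, a do-nothing component satisfying this interface could look like the sketch below (rpc refers to the go-libp2p-gorpc client used throughout; its import path is not shown on this page):

type noopComponent struct {
	rpcClient *rpc.Client
}

// SetClient stores the RPC client used to communicate with the
// rest of the cluster components and peers.
func (n *noopComponent) SetClient(c *rpc.Client) {
	n.rpcClient = c
}

// Shutdown releases any resources held by the component.
func (n *noopComponent) Shutdown() error {
	return nil
}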

type Config

type Config struct {
	// Libp2p ID and private key for Cluster communication (including
	// the Consensus component).
	ID         peer.ID
	PrivateKey crypto.PrivKey

	// ClusterPeers is the list of peers in the Cluster. They are used
	// as the initial peers in the consensus. When bootstrapping a peer,
	// ClusterPeers will be filled in automatically for the next run upon
	// shutdown.
	ClusterPeers []ma.Multiaddr

	// Bootstrap peers multiaddresses. This peer will attempt to
	// join the clusters of the peers in this list after booting.
	// Leave empty for a single-peer-cluster.
	Bootstrap []ma.Multiaddr

	// Leave Cluster on shutdown. Politely informs other peers
	// of the departure and removes itself from the consensus
	// peer set. The Cluster size will be reduced by one.
	LeaveOnShutdown bool

	// Listen parameters for the Cluster libp2p Host. Used by
	// the RPC and Consensus components.
	ClusterAddr ma.Multiaddr

	// Listen parameters for the Cluster HTTP API component.
	APIAddr ma.Multiaddr

	// Listen parameters for the IPFS Proxy. Used by the IPFS
	// connector component.
	IPFSProxyAddr ma.Multiaddr

	// Host/Port for the IPFS daemon.
	IPFSNodeAddr ma.Multiaddr

	// Storage folder for snapshots, log store etc. Used by
	// the Consensus component.
	ConsensusDataFolder string

	// Number of seconds between StateSync() operations
	StateSyncSeconds int

	// ReplicationFactor is the number of copies we keep for each pin
	ReplicationFactor int

	// MonitoringIntervalSeconds is the number of seconds that can
	// pass before a peer can be detected as down.
	MonitoringIntervalSeconds int

	// AllocationStrategy is used to decide on the
	// Informer/Allocator implementation to use.
	AllocationStrategy string
	// contains filtered or unexported fields
}

Config represents an ipfs-cluster configuration. It is used by Cluster components. An initialized version of it can be obtained with NewDefaultConfig().

func LoadConfig

func LoadConfig(path string) (*Config, error)

LoadConfig reads a JSON configuration file from the given path, parses it and returns a new Config object.

func NewDefaultConfig

func NewDefaultConfig() (*Config, error)

NewDefaultConfig returns a default configuration object with a randomly generated ID and private key.

func (*Config) Save

func (cfg *Config) Save(path string) error

Save stores a configuration as a JSON file in the given path. If no path is provided, it uses the path the configuration was loaded from.
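
A brief sketch of generating and persisting a configuration (the file path and replication factor are examples only):

cfg, err := ipfscluster.NewDefaultConfig()
if err != nil {
	return err
}
cfg.ReplicationFactor = 2 // keep two copies of every pin
if err := cfg.Save("cluster_config.json"); err != nil {
	return err
}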

func (*Config) ToJSONConfig

func (cfg *Config) ToJSONConfig() (j *JSONConfig, err error)

ToJSONConfig converts a Config object to its JSON representation which is focused on user presentation and easy understanding.

type Consensus

type Consensus interface {
	Component
	// Returns a channel to signal that the consensus
	// algorithm is ready
	Ready() <-chan struct{}
	// Logs a pin operation
	LogPin(c api.Pin) error
	// Logs an unpin operation
	LogUnpin(c api.Pin) error
	LogAddPeer(addr ma.Multiaddr) error
	LogRmPeer(p peer.ID) error
	State() (state.State, error)
	// Provide a node which is responsible for performing
	// specific tasks which must only run in one cluster peer
	Leader() (peer.ID, error)
	// Only returns when the consensus state has all log
	// updates applied to it
	WaitForSync() error
}

Consensus is a component which keeps a shared state in IPFS Cluster and triggers actions on updates to that state. Currently, Consensus needs to be able to elect/provide a Cluster Leader and the implementation is tightly coupled to the Cluster main component.

type IPFSConnector

type IPFSConnector interface {
	Component
	ID() (api.IPFSID, error)
	Pin(*cid.Cid) error
	Unpin(*cid.Cid) error
	PinLsCid(*cid.Cid) (api.IPFSPinStatus, error)
	PinLs(typeFilter string) (map[string]api.IPFSPinStatus, error)
	// ConnectSwarms makes sure this peer's IPFS daemon is connected to
	// other peers' IPFS daemons.
	ConnectSwarms() error
	// ConfigKey returns the value for a configuration key.
	// Subobjects are reached with keypaths as "Parent/Child/GrandChild...".
	ConfigKey(keypath string) (interface{}, error)
	// RepoSize returns the current repository size as expressed
	// by "repo stat".
	RepoSize() (int, error)
}

IPFSConnector is a component which allows cluster to interact with an IPFS daemon. This is a base component.

type Informer added in v0.0.3

type Informer interface {
	Component
	Name() string
	GetMetric() api.Metric
}

Informer provides Metric information from a peer. The metrics produced by informers are then passed to a PinAllocator which will use them to determine where to pin content. The metric is agnostic to the rest of Cluster.
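
A toy Informer skeleton is sketched below (reusing the rpc.Client type from the Component interface); a real implementation would fill in the api.Metric fields, which are not documented on this page, before returning it:

type staticInformer struct{}

func (i *staticInformer) SetClient(c *rpc.Client) {}
func (i *staticInformer) Shutdown() error         { return nil }

// Name identifies the metric this informer produces.
func (i *staticInformer) Name() string { return "static" }

// GetMetric returns the current metric for this peer. Here it is an
// empty value; real informers populate it (see the disk and numpin
// informers under Directories).
func (i *staticInformer) GetMetric() api.Metric {
	return api.Metric{}
}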

type JSONConfig

type JSONConfig struct {
	// Libp2p ID and private key for Cluster communication (including
	// the Consensus component).
	ID         string `json:"id"`
	PrivateKey string `json:"private_key"`

	// ClusterPeers is the list of peers' multiaddresses in the Cluster.
	// They are used as the initial peers in the consensus. When
	// bootstrapping a peer, ClusterPeers will be filled in automatically.
	ClusterPeers []string `json:"cluster_peers"`

	// Bootstrap peers multiaddresses. This peer will attempt to
	// join the clusters of the peers in the list. ONLY when ClusterPeers
	// is empty. Otherwise it is ignored. Leave empty for a single-peer
	// cluster.
	Bootstrap []string `json:"bootstrap"`

	// Leave Cluster on shutdown. Politely informs other peers
	// of the departure and removes itself from the consensus
	// peer set. The Cluster size will be reduced by one.
	LeaveOnShutdown bool `json:"leave_on_shutdown"`

	// Listen address for the Cluster libp2p host. This is used for
	// internal RPC and Consensus communications between cluster peers.
	ClusterListenMultiaddress string `json:"cluster_multiaddress"`

	// Listen address for the Cluster HTTP API component.
	// Tools like ipfs-cluster-ctl will connect to this endpoint to
	// manage the cluster.
	APIListenMultiaddress string `json:"api_listen_multiaddress"`

	// Listen address for the IPFS Proxy, which forwards requests to
	// an IPFS daemon.
	IPFSProxyListenMultiaddress string `json:"ipfs_proxy_listen_multiaddress"`

	// API address for the IPFS daemon.
	IPFSNodeMultiaddress string `json:"ipfs_node_multiaddress"`

	// Storage folder for snapshots, log store etc. Used by
	// the Consensus component.
	ConsensusDataFolder string `json:"consensus_data_folder"`

	// Number of seconds between syncs of the consensus state to the
	// tracker state. Normally states are synced anyway, but this helps
	// when new nodes are joining the cluster
	StateSyncSeconds int `json:"state_sync_seconds"`

	// ReplicationFactor indicates the number of nodes that must pin content.
	// For example, a replication_factor of 2 will prompt cluster to choose
	// two nodes for each pinned hash. A replication_factor of -1 will
	// use every available node for each pin.
	ReplicationFactor int `json:"replication_factor"`

	// Number of seconds between monitoring checks which detect
	// if a peer is down and consequently trigger a rebalance
	MonitoringIntervalSeconds int `json:"monitoring_interval_seconds"`

	// AllocationStrategy is used to set how pins are allocated to
	// different Cluster peers. Currently supports "reposize" and "pincount"
	// values.
	AllocationStrategy string `json:"allocation_strategy"`
}

JSONConfig represents a Cluster configuration as it will look when it is saved using JSON. Most configuration keys are converted into simple types like strings, and key names aim to be self-explanatory for the user.
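
Using the defaults from the Constants section above, a freshly generated configuration saved to disk might look roughly like this (id, private_key and consensus_data_folder are placeholders, and the replication_factor and allocation_strategy values are only examples):

{
  "id": "<peer-id>",
  "private_key": "<base64-private-key>",
  "cluster_peers": [],
  "bootstrap": [],
  "leave_on_shutdown": false,
  "cluster_multiaddress": "/ip4/0.0.0.0/tcp/9096",
  "api_listen_multiaddress": "/ip4/127.0.0.1/tcp/9094",
  "ipfs_proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
  "ipfs_node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
  "consensus_data_folder": "<data-folder>",
  "state_sync_seconds": 60,
  "replication_factor": -1,
  "monitoring_interval_seconds": 15,
  "allocation_strategy": "reposize"
}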

func (*JSONConfig) ToConfig

func (jcfg *JSONConfig) ToConfig() (c *Config, err error)

ToConfig converts a JSONConfig to its internal Config representation, where options are parsed into their native types.

type PeerMonitor added in v0.0.3

type PeerMonitor interface {
	Component
	// LogMetric stores a metric. Metrics are pushed regularly from each peer
	// to the active PeerMonitor.
	LogMetric(api.Metric)
	// LastMetrics returns a map with the latest metrics of matching name
	// for the current cluster peers.
	LastMetrics(name string) []api.Metric
	// Alerts delivers alerts generated when this peer monitor detects
	// a problem (i.e. metrics not arriving as expected). Alerts are used to
	// trigger rebalancing operations.
	Alerts() <-chan api.Alert
}

PeerMonitor is a component in charge of monitoring the peers in the cluster and providing candidates to the PinAllocator when a pin request arrives.

type Peered

type Peered interface {
	AddPeer(p peer.ID)
	RmPeer(p peer.ID)
}

Peered represents a component which needs to be aware of the peers in the Cluster and of any changes to the peer set.

type PinAllocator added in v0.0.3

type PinAllocator interface {
	Component
	// Allocate returns the list of peers that should be assigned to
	// Pin content in order of preference (from the most preferred to the
	// least). The "current" map contains valid metrics for peers
	// which are currently pinning the content. The candidates map
	// contains the metrics for all peers which are eligible for pinning
	// the content.
	Allocate(c *cid.Cid, current, candidates map[peer.ID]api.Metric) ([]peer.ID, error)
}

PinAllocator decides where to pin certain content. In order to make such a decision, it receives the pin arguments, the peers which are currently allocated to the content, and the metrics available for all peers which could allocate the content.

type PinTracker

type PinTracker interface {
	Component
	// Track tells the tracker that a Cid is now under its supervision.
	// The tracker may decide to perform an IPFS pin.
	Track(api.Pin) error
	// Untrack tells the tracker that a Cid is to be forgotten. The tracker
	// may perform an IPFS unpin operation.
	Untrack(*cid.Cid) error
	// StatusAll returns the list of pins with their local status.
	StatusAll() []api.PinInfo
	// Status returns the local status of a given Cid.
	Status(*cid.Cid) api.PinInfo
	// SyncAll makes sure that all tracked Cids reflect the real IPFS status.
	// It returns the list of pins which were updated by the call.
	SyncAll() ([]api.PinInfo, error)
	// Sync makes sure that the Cid status reflect the real IPFS status.
	// It returns the local status of the Cid.
	Sync(*cid.Cid) (api.PinInfo, error)
	// Recover retriggers a Pin/Unpin operation in Cids with error status.
	Recover(*cid.Cid) (api.PinInfo, error)
}

PinTracker represents a component which tracks the status of the pins in this cluster and ensures they are in sync with the IPFS daemon. This component should be thread safe.

type RPCAPI

type RPCAPI struct {
	// contains filtered or unexported fields
}

RPCAPI is a go-libp2p-gorpc service which provides the internal ipfs-cluster API, which enables components and cluster peers to communicate and request actions from each other.

The RPC API methods are usually redirects to the actual methods in the different components of ipfs-cluster, with very little added logic. Refer to documentation on those methods for details on their behaviour.

func (*RPCAPI) ConsensusLogAddPeer added in v0.0.3

func (rpcapi *RPCAPI) ConsensusLogAddPeer(in api.MultiaddrSerial, out *struct{}) error

ConsensusLogAddPeer runs Consensus.LogAddPeer().

func (*RPCAPI) ConsensusLogPin

func (rpcapi *RPCAPI) ConsensusLogPin(in api.PinSerial, out *struct{}) error

ConsensusLogPin runs Consensus.LogPin().

func (*RPCAPI) ConsensusLogRmPeer added in v0.0.3

func (rpcapi *RPCAPI) ConsensusLogRmPeer(in peer.ID, out *struct{}) error

ConsensusLogRmPeer runs Consensus.LogRmPeer().

func (*RPCAPI) ConsensusLogUnpin

func (rpcapi *RPCAPI) ConsensusLogUnpin(in api.PinSerial, out *struct{}) error

ConsensusLogUnpin runs Consensus.LogUnpin().

func (*RPCAPI) ID

func (rpcapi *RPCAPI) ID(in struct{}, out *api.IDSerial) error

ID runs Cluster.ID()

func (*RPCAPI) IPFSConfigKey added in v0.0.11

func (rpcapi *RPCAPI) IPFSConfigKey(in string, out *interface{}) error

IPFSConfigKey runs IPFSConnector.ConfigKey().

func (*RPCAPI) IPFSConnectSwarms added in v0.0.11

func (rpcapi *RPCAPI) IPFSConnectSwarms(in struct{}, out *struct{}) error

IPFSConnectSwarms runs IPFSConnector.ConnectSwarms().

func (*RPCAPI) IPFSPin

func (rpcapi *RPCAPI) IPFSPin(in api.PinSerial, out *struct{}) error

IPFSPin runs IPFSConnector.Pin().

func (*RPCAPI) IPFSPinLs added in v0.0.3

func (rpcapi *RPCAPI) IPFSPinLs(in string, out *map[string]api.IPFSPinStatus) error

IPFSPinLs runs IPFSConnector.PinLs().

func (*RPCAPI) IPFSPinLsCid added in v0.0.3

func (rpcapi *RPCAPI) IPFSPinLsCid(in api.PinSerial, out *api.IPFSPinStatus) error

IPFSPinLsCid runs IPFSConnector.PinLsCid().

func (*RPCAPI) IPFSRepoSize added in v0.0.11

func (rpcapi *RPCAPI) IPFSRepoSize(in struct{}, out *int) error

IPFSRepoSize runs IPFSConnector.RepoSize().

func (*RPCAPI) IPFSUnpin

func (rpcapi *RPCAPI) IPFSUnpin(in api.PinSerial, out *struct{}) error

IPFSUnpin runs IPFSConnector.Unpin().

func (*RPCAPI) Join added in v0.0.3

func (rpcapi *RPCAPI) Join(in api.MultiaddrSerial, out *struct{}) error

Join runs Cluster.Join().

func (*RPCAPI) PeerAdd added in v0.0.3

func (rpcapi *RPCAPI) PeerAdd(in api.MultiaddrSerial, out *api.IDSerial) error

PeerAdd runs Cluster.PeerAdd().

func (*RPCAPI) PeerManagerAddFromMultiaddrs added in v0.0.3

func (rpcapi *RPCAPI) PeerManagerAddFromMultiaddrs(in api.MultiaddrsSerial, out *struct{}) error

PeerManagerAddFromMultiaddrs runs peerManager.addFromMultiaddrs().

func (*RPCAPI) PeerManagerAddPeer added in v0.0.3

func (rpcapi *RPCAPI) PeerManagerAddPeer(in api.MultiaddrSerial, out *struct{}) error

PeerManagerAddPeer runs peerManager.addPeer().

func (*RPCAPI) PeerManagerPeers added in v0.0.3

func (rpcapi *RPCAPI) PeerManagerPeers(in struct{}, out *[]peer.ID) error

PeerManagerPeers runs peerManager.peers().

func (*RPCAPI) PeerManagerRmPeer added in v0.0.3

func (rpcapi *RPCAPI) PeerManagerRmPeer(in peer.ID, out *struct{}) error

PeerManagerRmPeer runs peerManager.rmPeer().

func (*RPCAPI) PeerManagerRmPeerShutdown added in v0.0.3

func (rpcapi *RPCAPI) PeerManagerRmPeerShutdown(in peer.ID, out *struct{}) error

PeerManagerRmPeerShutdown runs peerManager.rmPeer().

func (*RPCAPI) PeerMonitorLastMetrics added in v0.0.3

func (rpcapi *RPCAPI) PeerMonitorLastMetrics(in string, out *[]api.Metric) error

PeerMonitorLastMetrics runs PeerMonitor.LastMetrics().

func (*RPCAPI) PeerMonitorLogMetric added in v0.0.3

func (rpcapi *RPCAPI) PeerMonitorLogMetric(in api.Metric, out *struct{}) error

PeerMonitorLogMetric runs PeerMonitor.LogMetric().

func (*RPCAPI) PeerRemove added in v0.0.3

func (rpcapi *RPCAPI) PeerRemove(in peer.ID, out *struct{}) error

PeerRemove runs Cluster.PeerRemove().

func (*RPCAPI) Peers added in v0.0.3

func (rpcapi *RPCAPI) Peers(in struct{}, out *[]api.IDSerial) error

Peers runs Cluster.Peers().

func (*RPCAPI) Pin

func (rpcapi *RPCAPI) Pin(in api.PinSerial, out *struct{}) error

Pin runs Cluster.Pin().

func (*RPCAPI) PinList

func (rpcapi *RPCAPI) PinList(in struct{}, out *[]api.PinSerial) error

PinList runs Cluster.Pins().

func (*RPCAPI) Recover added in v0.0.3

func (rpcapi *RPCAPI) Recover(in api.PinSerial, out *api.GlobalPinInfoSerial) error

Recover runs Cluster.Recover().

func (*RPCAPI) RemoteMultiaddrForPeer added in v0.0.3

func (rpcapi *RPCAPI) RemoteMultiaddrForPeer(in peer.ID, out *api.MultiaddrSerial) error

RemoteMultiaddrForPeer returns the multiaddr of a peer as seen by this peer. This is necessary for a peer to figure out which of its multiaddresses the peers are seeing (also when crossing NATs). It should be called from the peer the IN parameter indicates.

func (*RPCAPI) StateSync

func (rpcapi *RPCAPI) StateSync(in struct{}, out *[]api.PinInfoSerial) error

StateSync runs Cluster.StateSync().

func (*RPCAPI) Status

func (rpcapi *RPCAPI) Status(in api.PinSerial, out *api.GlobalPinInfoSerial) error

Status runs Cluster.Status().

func (*RPCAPI) StatusAll added in v0.0.3

func (rpcapi *RPCAPI) StatusAll(in struct{}, out *[]api.GlobalPinInfoSerial) error

StatusAll runs Cluster.StatusAll().

func (*RPCAPI) Sync added in v0.0.3

func (rpcapi *RPCAPI) Sync(in api.PinSerial, out *api.GlobalPinInfoSerial) error

Sync runs Cluster.Sync().

func (*RPCAPI) SyncAll added in v0.0.3

func (rpcapi *RPCAPI) SyncAll(in struct{}, out *[]api.GlobalPinInfoSerial) error

SyncAll runs Cluster.SyncAll().

func (*RPCAPI) SyncAllLocal added in v0.0.3

func (rpcapi *RPCAPI) SyncAllLocal(in struct{}, out *[]api.PinInfoSerial) error

SyncAllLocal runs Cluster.SyncAllLocal().

func (*RPCAPI) SyncLocal added in v0.0.3

func (rpcapi *RPCAPI) SyncLocal(in api.PinSerial, out *api.PinInfoSerial) error

SyncLocal runs Cluster.SyncLocal().

func (*RPCAPI) Track

func (rpcapi *RPCAPI) Track(in api.PinSerial, out *struct{}) error

Track runs PinTracker.Track().

func (*RPCAPI) TrackerRecover added in v0.0.3

func (rpcapi *RPCAPI) TrackerRecover(in api.PinSerial, out *api.PinInfoSerial) error

TrackerRecover runs PinTracker.Recover().

func (*RPCAPI) TrackerStatus

func (rpcapi *RPCAPI) TrackerStatus(in api.PinSerial, out *api.PinInfoSerial) error

TrackerStatus runs PinTracker.Status().

func (*RPCAPI) TrackerStatusAll added in v0.0.3

func (rpcapi *RPCAPI) TrackerStatusAll(in struct{}, out *[]api.PinInfoSerial) error

TrackerStatusAll runs PinTracker.StatusAll().

func (*RPCAPI) Unpin

func (rpcapi *RPCAPI) Unpin(in api.PinSerial, out *struct{}) error

Unpin runs Cluster.Unpin().

func (*RPCAPI) Untrack

func (rpcapi *RPCAPI) Untrack(in api.PinSerial, out *struct{}) error

Untrack runs PinTracker.Untrack().

func (*RPCAPI) Version

func (rpcapi *RPCAPI) Version(in struct{}, out *api.Version) error

Version runs Cluster.Version().

Directories

Path Synopsis
allocator
ascendalloc
Package ascendalloc implements an ipfscluster.Allocator which returns allocations based on sorting the metrics in ascending order.
api
Package api holds declarations for types used in ipfs-cluster APIs to make them re-usable across different tools.
restapi
Package restapi implements an IPFS Cluster API component.
consensus
raft
Package raft implements a Consensus component for IPFS Cluster which uses Raft (go-libp2p-raft).
informer
disk
Package disk implements an ipfs-cluster informer which determines the current RepoSize of the ipfs daemon datastore and returns it as an api.Metric.
numpin
Package numpin implements an ipfs-cluster informer which determines how many items this peer is pinning and returns it as an api.Metric.
ipfsconn
ipfshttp
Package ipfshttp implements an IPFS Cluster IPFSConnector component.
monitor
basic
Package basic implements a basic PeerMonitor component for IPFS Cluster.
pintracker
maptracker
Package maptracker implements a PinTracker component for IPFS Cluster.
state
Package state holds the interface that any state implementation for IPFS Cluster must satisfy.
mapstate
Package mapstate implements the State interface for IPFS Cluster by using a map to keep track of the consensus-shared state.
test
Package test offers testing utilities to ipfs-cluster, like mocks.
