package modules

v1.4.5
Published: Mar 24, 2020 License: MIT Imports: 34 Imported by: 88

README

Modules

The modules package is the top-level package for all modules. It contains the interface for each module, the sub-packages which implement said modules, and other shared constants and code that needs to be accessible within all sub-packages.

Top-Level Modules

  • Consensus
  • Explorer
  • Gateway
  • Host
  • Miner
  • Renter
  • Transaction Pool
  • Wallet

Subsystems

  • Alert System
  • Dependencies
  • Negotiate
  • Network Addresses
  • Packing
  • Siad Configuration
  • Skylink
  • SiaPath
  • Storage Manager

Consensus

Key Files

TODO

  • fill out module explanation
Explorer

Key Files

TODO

  • fill out module explanation
Gateway

Key Files

TODO

  • fill out module explanation
Host

Key Files

TODO

  • fill out module explanation
Miner

Key Files

TODO

  • fill out module explanation
Renter

Key Files

TODO

  • fill out module explanation
Transaction Pool

Key Files

TODO

  • fill out module explanation
Wallet

Key Files

TODO

  • fill out module explanation
Alert System

Key Files

The Alert System provides the Alerter interface and an implementation of that interface, which modules can use to register alerts when irregularities occur during runtime. An Alert provides the following information:

  • Message: Some information about the issue
  • Cause: The cause for the issue if it is known
  • Module: The name of the module that registered the alert
  • Severity: The severity level associated with the alert

The following levels of severity are currently available:

  • Unknown: This should never be used and is a safeguard against developer errors
  • Warning: Warns the user about potential issues which might require preventive actions
  • Error: Alerts the user of an issue that requires immediate action to prevent further issues like loss of data
  • Critical: Indicates that a critical error is imminent, e.g. a lack of funds causing contracts to be lost
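The sketch below is a minimal illustration of how a module might fill in these fields, using the Alert type and severity constants documented further down this page. The import path, module name, and messages are assumptions made for the example.

package main

import (
	"errors"
	"fmt"

	"gitlab.com/NebulousLabs/Sia/modules" // import path assumed for v1.4.5
)

func main() {
	// A module that notices an irregularity at runtime could register an
	// alert carrying a message, a cause, the module's name, and a severity.
	cause := errors.New("wallet is locked") // hypothetical cause
	alert := modules.Alert{
		Cause:    cause.Error(),
		Msg:      "contracts cannot be renewed while the wallet is locked",
		Module:   "contractor",
		Severity: modules.SeverityWarning,
	}

	// AlertSeverity implements String, so the severity prints as text.
	fmt.Printf("[%s] %s: %s (%s)\n", alert.Severity, alert.Module, alert.Msg, alert.Cause)
}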
Dependencies

Key Files

TODO

  • fill out subsystem explanation
Negotiate

Key Files

TODO

  • fill out subsystem explanation
Network Addresses

Key Files

TODO

  • fill out subsystem explanation
Packing

Key Files

The smallest amount of data that can be uploaded to the Sia network is 4 MiB. This limitation can be overcome by packing multiple files together. The upload batching commands can pack many small files into the same sector, producing a unique skylink for each file.

Batch uploads work much the same as uploads, except that a JSON manifest is provided which pairs a list of source files to their destination siapaths. Every file in the manifest must be smaller than 4 MiB. The packing algorithm attempts to optimally pack the list of files into as few chunks as possible, where each chunk is 4 MiB in size.
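As a rough illustration of the bin-packing step described above, the toy sketch below uses a first-fit strategy to place file sizes into 4 MiB chunks. It is not the actual siad packing algorithm or its manifest format; every name in it is hypothetical.

package main

import "fmt"

const chunkSize = 1 << 22 // 4 MiB

// packFirstFit places each file size into the first chunk with enough room
// left, opening a new chunk when none fits. The real packer optimizes further.
func packFirstFit(fileSizes []uint64) (chunks [][]uint64) {
	var remaining []uint64
	for _, size := range fileSizes {
		placed := false
		for i := range chunks {
			if remaining[i] >= size {
				chunks[i] = append(chunks[i], size)
				remaining[i] -= size
				placed = true
				break
			}
		}
		if !placed {
			chunks = append(chunks, []uint64{size})
			remaining = append(remaining, chunkSize-size)
		}
	}
	return chunks
}

func main() {
	sizes := []uint64{3 << 20, 1 << 20, 2 << 20, 512 << 10} // 3 MiB, 1 MiB, 2 MiB, 512 KiB
	fmt.Println(packFirstFit(sizes)) // two chunks: [3 MiB, 1 MiB] and [2 MiB, 512 KiB]
}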

Siad Configuration

Key Files

TODO

  • fill out subsystem explanation

Skylink

Key Files

  • skylink.go

The skylink is a format for linking to data sectors stored on the Sia network. In addition to pointing to a data sector, the skylink contains a lossy offset and length that point to a data segment within the sector, allowing multiple small files to be packed into a single sector.

All told, a skylink contains 32 bytes encoding the Merkle root of the sector being linked, plus 2 bytes encoding the link version along with the offset and length of the data segment being fetched.

For more information, check out the documentation in the skylink.go file.
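For intuition only, the sketch below mirrors the 34-byte layout described above: a 32-byte Merkle root plus 2 bytes of packed version/offset/length metadata. The struct, the byte ordering, and the helper are assumptions for illustration; the real encoding lives in skylink.go.

package main

import (
	"encoding/binary"
	"fmt"
)

// skylinkSketch is a hypothetical stand-in for the real skylink type: a
// 32-byte sector Merkle root plus 2 bytes that pack the link version and a
// lossy offset/length into the sector.
type skylinkSketch struct {
	bitfield   uint16   // version + offset + length (packing assumed)
	merkleRoot [32]byte // Merkle root of the linked sector
}

// bytes lays the link out as 34 bytes; placing the bitfield first is an
// assumption made for this illustration.
func (s skylinkSketch) bytes() []byte {
	out := make([]byte, 34)
	binary.LittleEndian.PutUint16(out[:2], s.bitfield)
	copy(out[2:], s.merkleRoot[:])
	return out
}

func main() {
	var link skylinkSketch
	fmt.Println(len(link.bytes())) // 34
}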

SiaPath

Key Files

Siapaths are the format of filesystem paths on the Sia network. Internally they are handled as Linux-style paths that use the / separator. Siapaths are used to identify both directories and files on the Sia network. When manipulating Siapaths in memory, the strings package should be used so that the / separator is enforced. When Siapaths are translated to system paths, the filepath package is used to ensure the correct path separator for the operating system that is running.
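A short sketch of that convention: siapath segments are joined with the / separator using the strings package, while the on-disk location is built with the filepath package so the OS-specific separator is applied. The directory and file names are made up for the example.

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func main() {
	// In-memory siapath: always joined with "/", regardless of OS.
	siaPath := strings.Join([]string{"home", "user", "backups", "photo"}, "/")
	fmt.Println(siaPath) // home/user/backups/photo

	// On-disk system path: filepath applies the separator for the running OS.
	renterRoot := filepath.Join("siad-data", "renter", "fs") // hypothetical root
	systemPath := filepath.Join(renterRoot, filepath.FromSlash(siaPath))
	fmt.Println(systemPath) // e.g. siad-data/renter/fs/home/user/backups/photo on Linux
}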

Storage Manager

Key Files

TODO

  • fill out subsystem explanation

Documentation

Overview

Package modules contains definitions for all of the major modules of Sia, as well as some helper functions for performing actions that are common to multiple modules.

Constants

View Source
const (
	// SeverityUnknown is the value of an uninitialized severity and should never
	// be used.
	SeverityUnknown = iota
	// SeverityWarning warns the user about potential issues which might require
	// preventive actions.
	SeverityWarning
	// SeverityError should be used for information about the system where
	// immediate action is recommended to avoid further issues like loss of data.
	SeverityError
	// SeverityCritical should be used for critical errors. e.g. a lack of funds
	// causing data to get lost without immediate action.
	SeverityCritical
)

The following consts are the different types of severity levels available in the alert system.

View Source
const (

	// AlertIDWalletLockedDuringMaintenance is the id of the alert that is
	// registered if the wallet is locked during a contract renewal or formation.
	AlertIDWalletLockedDuringMaintenance = "wallet-locked"
	// AlertIDRenterAllowanceLowFunds is the id of the alert that is registered if at least one
	// contract failed to renew/form due to low allowance.
	AlertIDRenterAllowanceLowFunds = "low-funds"
	// AlertIDRenterContractRenewalError is the id of the alert that is
	// registered if at least one contract renewal or refresh failed.
	AlertIDRenterContractRenewalError = "contract-renewal-error"
	// AlertIDGatewayOffline is the id of the alert that is registered upon a
	// call to 'gateway.Offline' if the value returned is 'false' and
	// unregistered when it returns 'true'.
	AlertIDGatewayOffline = "gateway-offline"
	// AlertIDHostDiskTrouble is the id of the alert that is registered when the
	// host is encountering problems interacting with one or more of its disks.
	AlertIDHostDiskTrouble = "host-disk-trouble"
	// AlertIDHostInsufficientCollateral is the id of the alert that is
	// registered if the host has insufficient collateral budget left to form or
	// renew a contract
	AlertIDHostInsufficientCollateral = "host-insufficient-collateral"
)

The following consts are a list of AlertIDs. All IDs used throughout Sia should be unique and listed here.

View Source
const (
	// HostDir names the directory that contains the host persistence.
	HostDir = "host"

	// HostSettingsFile is the name of the host's persistence file.
	HostSettingsFile = "host.json"

	// HostSiaMuxSubscriberName is the name used by the host to register a
	// listener on the SiaMux.
	HostSiaMuxSubscriberName = "host"
)
View Source
const (
	// MDMProgramInitTime is the time it takes to execute a program. This is a
	// hardcoded value which is meant to be replaced in the future. TODO: The
	// time is hardcoded to 10 for now until we add time management in the
	// future.
	MDMProgramInitTime = 10

	// MDMTimeAppend is the time for executing an 'Append' instruction.
	MDMTimeAppend = 10000

	// MDMTimeCommit is the time used for executing managedFinalize.
	MDMTimeCommit = 50e3

	// MDMTimeHasSector is the time for executing a 'HasSector' instruction.
	MDMTimeHasSector = 1

	// MDMTimeReadSector is the time for executing a 'ReadSector' instruction.
	MDMTimeReadSector = 1000

	// MDMTimeWriteSector is the time for executing a 'WriteSector' instruction.
	MDMTimeWriteSector = 10000

	// RPCIAppendLen is the expected length of the 'Args' of an Append
	// instruction.
	RPCIAppendLen = 9

	// RPCIDropSectorsLen is the expected length of the 'Args' of a DropSectors
	// Instruction.
	RPCIDropSectorsLen = 9

	// RPCIHasSectorLen is the expected length of the 'Args' of a HasSector
	// instruction.
	RPCIHasSectorLen = 8

	// RPCIReadSectorLen is the expected length of the 'Args' of a ReadSector
	// instruction.
	RPCIReadSectorLen = 25
)
View Source
const (
	// MinimumSupportedRenterHostProtocolVersion is the minimum version of Sia
	// that supports the currently used version of the renter-host protocol.
	MinimumSupportedRenterHostProtocolVersion = "1.4.1"

	// V1420HostOutOfStorageErrString is the string used by hosts since before
	// version 1.4.2 to indicate that they have run out of storage.
	//
	// Any update to this string needs to be done by making a new variable. This
	// variable should not be changed. IsOOSErr() needs to be updated to include
	// the new string while also still checking the old string as well to
	// preserve compatibility.
	V1420HostOutOfStorageErrString = "not enough storage remaining to accept sector"

	// V1420ContractNotRecognizedErrString is the string used by hosts since
	// before version 1.4.2 to indicate that they do not recognize the
	// contract that the renter is trying to update.
	//
	// Any update to this string needs to be done by making a new variable. This
	// variable should not be changed. IsContractNotRecognizedErr() needs to be
	// updated to include the new string while also still checking the old
	// string as well to preserve compatibility.
	V1420ContractNotRecognizedErrString = "no record of that contract"
)
View Source
const (
	// AcceptResponse is the response given to an RPC call to indicate
	// acceptance, i.e. that the sender wishes to continue communication.
	AcceptResponse = "accept"

	// StopResponse is the response given to an RPC call to indicate graceful
	// termination, i.e. that the sender wishes to cease communication, but
	// not due to an error.
	StopResponse = "stop"
)
View Source
const (
	// NegotiateDownloadTime defines the amount of time that the renter and
	// host have to negotiate a download request batch. The time is set high
	// enough that two nodes behind Tor have a reasonable chance of completing
	// the negotiation.
	NegotiateDownloadTime = 600 * time.Second

	// NegotiateFileContractRevisionTime defines the minimum amount of time
	// that the renter and host have to negotiate a file contract revision. The
	// time is set high enough that a full 4MB can be piped through a
	// connection that is running over Tor.
	NegotiateFileContractRevisionTime = 600 * time.Second

	// NegotiateFileContractTime defines the amount of time that the renter and
	// host have to negotiate a file contract. The time is set high enough that
	// a node behind Tor has a reasonable chance at making the multiple
	// required round trips to complete the negotiation.
	NegotiateFileContractTime = 360 * time.Second

	// NegotiateMaxDownloadActionRequestSize defines the maximum size that a
	// download request can be. Note, this is not a max size for the data that
	// can be requested, but instead is a max size for the definition of the
	// data being requested.
	NegotiateMaxDownloadActionRequestSize = 50e3

	// NegotiateMaxErrorSize indicates the maximum number of bytes that can be
	// used to encode an error being sent during negotiation.
	NegotiateMaxErrorSize = 256

	// NegotiateMaxFileContractRevisionSize specifies the maximum size that a
	// file contract revision is allowed to have when being sent over the wire
	// during negotiation.
	NegotiateMaxFileContractRevisionSize = 3e3

	// NegotiateMaxFileContractSetLen determines the maximum allowed size of a
	// transaction set that can be sent when trying to negotiate a file
	// contract. The transaction set will contain all of the unconfirmed
	// dependencies of the file contract, meaning that it can be quite large.
	// The transaction pool's size limit for transaction sets has been chosen
	// as a reasonable guideline for determining what is too large.
	NegotiateMaxFileContractSetLen = TransactionSetSizeLimit - 1e3

	// NegotiateMaxHostExternalSettingsLen is the maximum allowed size of an
	// encoded HostExternalSettings.
	NegotiateMaxHostExternalSettingsLen = 16000

	// NegotiateMaxSiaPubkeySize defines the maximum size that a SiaPubkey is
	// allowed to be when being sent over the wire during negotiation.
	NegotiateMaxSiaPubkeySize = 1e3

	// NegotiateMaxTransactionSignatureSize defines the maximum size that a
	// transaction signature is allowed to be when being sent over the wire
	// during negotiation.
	NegotiateMaxTransactionSignatureSize = 2e3

	// NegotiateMaxTransactionSignaturesSize defines the maximum size that a
	// transaction signature slice is allowed to be when being sent over the
	// wire during negotiation.
	NegotiateMaxTransactionSignaturesSize = 5e3

	// NegotiateRecentRevisionTime establishes the minimum amount of time that
	// the connection deadline is expected to be set to when a recent file
	// contract revision is being requested from the host. The deadline is long
	// enough that the connection should be successful even if both parties are
	// running Tor.
	NegotiateRecentRevisionTime = 120 * time.Second

	// NegotiateRenewContractTime defines the minimum amount of time that the
	// renter and host have to negotiate a final contract renewal. The time is
	// high enough that the negotiation can occur over a Tor connection, and
	// that both the host and the renter can have time to process large Merkle
	// tree calculations that may be involved with renewing a file contract.
	NegotiateRenewContractTime = 600 * time.Second
)
View Source
const (
	// DefaultDirPerm defines the default permissions used for a new dir if no
	// permissions are supplied. Changing this value is a compatibility issue
	// since users expect dirs to have these permissions.
	DefaultDirPerm = 0755

	// DefaultFilePerm defines the default permissions used for a new file if no
	// permissions are supplied. Changing this value is a compatibility issue
	// since users expect files to have these permissions.
	DefaultFilePerm = 0644
)

Filesystem related consts.

View Source
const (
	// RenterDir is the name of the directory that is used to store the
	// renter's persistent data.
	RenterDir = "renter"

	// FileSystemRoot is the name of the directory that is used as the root of
	// the renter's filesystem.
	FileSystemRoot = "fs"

	// HomeFolderRoot is the name of the directory that is used to store all of
	// the user accessible data.
	HomeFolderRoot = "home"

	// UserRoot is the name of the directory that is used to store the
	// renter's siafiles.
	UserRoot = "user"

	// BackupRoot is the name of the directory that is used to store the renter's
	// snapshot siafiles.
	BackupRoot = "snapshots"

	// CombinedChunksRoot is the name of the directory that contains combined
	// chunks consisting of multiple partial chunks.
	CombinedChunksRoot = "combinedchunks"

	// EstimatedFileContractTransactionSetSize is the estimated blockchain size
	// of a transaction set between a renter and a host that contains a file
	// contract. This transaction set will contain a setup transaction from each
	// the host and the renter, and will also contain a file contract and file
	// contract revision that have each been signed by all parties.
	EstimatedFileContractTransactionSetSize = 2048

	// EstimatedFileContractRevisionAndProofTransactionSetSize is the
	// estimated blockchain size of a transaction set used by the host to
	// provide the storage proof at the end of the contract duration.
	EstimatedFileContractRevisionAndProofTransactionSetSize = 5000

	// StreamDownloadSize is the amount of data downloaded in a single streaming download
	// request.
	StreamDownloadSize = uint64(1 << 16) // 64 KiB

	// StreamUploadSize is the amount of data uploaded in a single streaming upload
	// request.
	StreamUploadSize = uint64(1 << 16) // 64 KiB
)
View Source
const (
	// ContractManagerDir is the standard name used for the directory that
	// contains all files directly related to the contract manager.
	ContractManagerDir = "contractmanager"

	// StorageManagerDir is standard name used for the directory that contains
	// all of the storage manager files.
	StorageManagerDir = "storagemanager"
)
View Source
const (
	// TransactionSetSizeLimit defines the largest set of dependent unconfirmed
	// transactions that will be accepted by the transaction pool.
	TransactionSetSizeLimit = 250e3

	// TransactionSizeLimit defines the size of the largest transaction that
	// will be accepted by the transaction pool according to the IsStandard
	// rules.
	TransactionSizeLimit = 32e3
)
View Source
const (
	// PublicKeysPerSeed define the number of public keys that get pregenerated
	// for a seed at startup when searching for balances in the blockchain.
	PublicKeysPerSeed = 2500

	// SeedChecksumSize is the number of bytes that are used to checksum
	// addresses to prevent accidental spending.
	SeedChecksumSize = 6

	// WalletDir is the directory that contains the wallet persistence.
	WalletDir = "wallet"
)
View Source
const (
	// ExplorerDir is the name of the directory that is typically used for the
	// explorer.
	ExplorerDir = "explorer"
)
View Source
const (
	// FeeManagerDir is the name of the directory that is used to store the
	// FeeManager's persistent data
	FeeManagerDir = "feemanager"
)
View Source
const (
	// GatewayDir is the name of the directory used to store the gateway's
	// persistent data.
	GatewayDir = "gateway"
)
View Source
const MaxEncodedNetAddressLength = 266

MaxEncodedNetAddressLength is the maximum length of a NetAddress encoded with the encode package. 266 was chosen because the maximum length for the hostname is 254 + 1 for the separating colon + 5 for the port + 8 byte string length prefix.

View Source
const (
	// MinerDir is the name of the directory that is used to store the miner's
	// persistent data.
	MinerDir = "miner"
)
View Source
const RPCMinLen = 4096

RPCMinLen is the minimum size of an RPC message. If an encoded message would be smaller than RPCMinLen, it is padded with random data.

View Source
const (

	// SiaMuxDir is the name of the siamux dir
	SiaMuxDir = "siamux"
)
View Source
const (
	// SkylinkMaxFetchSize defines the maximum fetch size that is supported by
	// the skylink format. This is intentionally the same number as
	// modules.SectorSize on the release build. We could not use
	// modules.SectorSize directly because during testing that value is too
	// small to properly test the link format.
	SkylinkMaxFetchSize = 1 << 22
)
View Source
const (
	// WithdrawalNonceSize is the size of the nonce in the WithdrawalMessage.
	WithdrawalNonceSize = 8
)

Variables

View Source
var (
	// ConsensusChangeBeginning is a special consensus change id that tells the
	// consensus set to provide all consensus changes starting from the very
	// first diff, which includes the genesis block diff.
	ConsensusChangeBeginning = ConsensusChangeID{}

	// ConsensusChangeRecent is a special consensus change id that tells the
	// consensus set to provide the most recent consensus change, instead of
	// starting from a specific value (which may not be known to the caller).
	ConsensusChangeRecent = ConsensusChangeID{1}

	// ErrBlockKnown is an error indicating that a block is already in the
	// database.
	ErrBlockKnown = errors.New("block already present in database")

	// ErrBlockUnsolved indicates that a block did not meet the required POW
	// target.
	ErrBlockUnsolved = errors.New("block does not meet target")

	// ErrInvalidConsensusChangeID indicates that ConsensusSetPersistSubscribe
	// was called with a consensus change id that is not recognized. Most
	// commonly, this means that the consensus set was deleted or replaced and
	// now the module attempting the subscription has desynchronized. This error
	// should be handled by the module, and not reported to the user.
	ErrInvalidConsensusChangeID = errors.New("consensus subscription has invalid id - files are inconsistent")

	// ErrNonExtendingBlock indicates that a block is valid but does not result
	// in a fork that is the heaviest known fork - the consensus set has not
	// changed as a result of seeing the block.
	ErrNonExtendingBlock = errors.New("block does not extend the longest fork")
)
View Source
var (
	// Hostv112PersistMetadata is the header of the v112 host persist file.
	Hostv112PersistMetadata = persist.Metadata{
		Header:  "Sia Host",
		Version: "0.5",
	}

	// Hostv120PersistMetadata is the header of the v120 host persist file.
	Hostv120PersistMetadata = persist.Metadata{
		Header:  "Sia Host",
		Version: "1.2.0",
	}

	// Hostv143PersistMetadata is the header of the v143 host persist file.
	Hostv143PersistMetadata = persist.Metadata{
		Header:  "Sia Host",
		Version: "1.4.3",
	}
)
View Source
var (
	// BlockBytesPerMonthTerabyte is the conversion rate between block-bytes and month-TB.
	BlockBytesPerMonthTerabyte = BytesPerTerabyte.Mul64(uint64(types.BlocksPerMonth))

	// BytesPerTerabyte is the conversion rate between bytes and terabytes.
	BytesPerTerabyte = types.NewCurrency64(1e12)
)
View Source
var (
	// HostConnectabilityStatusChecking is returned from ConnectabilityStatus()
	// if the host is still determining if it is connectable.
	HostConnectabilityStatusChecking = HostConnectabilityStatus("checking")

	// HostConnectabilityStatusConnectable is returned from
	// ConnectabilityStatus() if the host is connectable at its configured
	// netaddress.
	HostConnectabilityStatusConnectable = HostConnectabilityStatus("connectable")

	// HostConnectabilityStatusNotConnectable is returned from
	// ConnectabilityStatus() if the host is not connectable at its configured
	// netaddress.
	HostConnectabilityStatusNotConnectable = HostConnectabilityStatus("not connectable")

	// HostWorkingStatusChecking is returned from WorkingStatus() if the host is
	// still determining if it is working, that is, if settings calls are
	// incrementing.
	HostWorkingStatusChecking = HostWorkingStatus("checking")

	// HostWorkingStatusNotWorking is returned from WorkingStatus() if the host
	// has not received any settings calls over the duration of
	// workingStatusFrequency.
	HostWorkingStatusNotWorking = HostWorkingStatus("not working")

	// HostWorkingStatusWorking is returned from WorkingStatus() if the host has
	// received more than workingThreshold settings calls over the duration of
	// workingStatusFrequency.
	HostWorkingStatusWorking = HostWorkingStatus("working")
)
View Source
var (
	// SpecifierAppend is the specifier for the Append instruction.
	SpecifierAppend = InstructionSpecifier{'A', 'p', 'p', 'e', 'n', 'd'}

	// SpecifierDropSectors is the specifier for the DropSectors instruction.
	SpecifierDropSectors = InstructionSpecifier{'D', 'r', 'o', 'p', 'S', 'e', 'c', 't', 'o', 'r', 's'}

	// SpecifierHasSector is the specifier for the HasSector instruction.
	SpecifierHasSector = InstructionSpecifier{'H', 'a', 's', 'S', 'e', 'c', 't', 'o', 'r'}

	// SpecifierReadSector is the specifier for the ReadSector instruction.
	SpecifierReadSector = InstructionSpecifier{'R', 'e', 'a', 'd', 'S', 'e', 'c', 't', 'o', 'r'}

	// ErrMDMInsufficientBudget is the error returned if the remaining budget of
	// an MDM program is not sufficient to execute the next instruction.
	ErrMDMInsufficientBudget = errors.New("remaining budget is insufficient")
)
View Source
var (
	// ActionDelete is the specifier for a RevisionAction that deletes a
	// sector.
	ActionDelete = types.NewSpecifier("Delete")

	// ActionInsert is the specifier for a RevisionAction that inserts a
	// sector.
	ActionInsert = types.NewSpecifier("Insert")

	// ActionModify is the specifier for a RevisionAction that modifies sector
	// data.
	ActionModify = types.NewSpecifier("Modify")

	// ErrAnnNotAnnouncement indicates that the provided host announcement does
	// not use a recognized specifier, indicating that it's either not a host
	// announcement or it's not a recognized version of a host announcement.
	ErrAnnNotAnnouncement = errors.New("provided data does not form a recognized host announcement")

	// ErrAnnUnrecognizedSignature is returned when the signature in a host
	// announcement is not a type of signature that is recognized.
	ErrAnnUnrecognizedSignature = errors.New("the signature provided in the host announcement is not recognized")

	// ErrRevisionCoveredFields is returned if there is a covered fields object
	// in a transaction signature which has the 'WholeTransaction' field set to
	// true, meaning that miner fees cannot be added to the transaction without
	// invalidating the signature.
	ErrRevisionCoveredFields = errors.New("file contract revision transaction signature does not allow miner fees to be added")

	// ErrRevisionSigCount is returned when a file contract revision has the
	// wrong number of transaction signatures.
	ErrRevisionSigCount = errors.New("file contract revision has the wrong number of transaction signatures")

	// ErrStopResponse is the error returned by ReadNegotiationAcceptance when
	// it reads the StopResponse string.
	ErrStopResponse = errors.New("sender wishes to stop communicating")

	// PrefixHostAnnouncement is used to indicate that a transaction's
	// Arbitrary Data field contains a host announcement. The encoded
	// announcement will follow this prefix.
	PrefixHostAnnouncement = types.NewSpecifier("HostAnnouncement")

	// PrefixFileContractIdentifier is used to indicate that a transaction's
	// Arbitrary Data field contains a file contract identifier. The identifier
	// and its signature will follow this prefix.
	PrefixFileContractIdentifier = types.NewSpecifier("FCIdentifier")

	// RPCDownload is the specifier for downloading a file from a host.
	RPCDownload = types.NewSpecifier("Download" + string(2))

	// RPCFormContract is the specifier for forming a contract with a host.
	RPCFormContract = types.NewSpecifier("FormContract" + string(2))

	// RPCRenewContract is the specifier to renewing an existing contract.
	RPCRenewContract = types.NewSpecifier("RenewContract" + string(2))

	// RPCReviseContract is the specifier for revising an existing file
	// contract.
	RPCReviseContract = types.NewSpecifier("ReviseContract" + string(2))

	// RPCSettings is the specifier for requesting settings from the host.
	RPCSettings = types.NewSpecifier("Settings" + string(2))

	// SectorSize defines how large a sector should be in bytes. The sector
	// size needs to be a power of two to be compatible with package
	// merkletree. 4MB has been chosen for the live network because large
	// sectors significantly reduce the tracking overhead experienced by the
	// renter and the host.
	SectorSize = build.Select(build.Var{
		Dev:      SectorSizeDev,
		Standard: SectorSizeStandard,
		Testing:  SectorSizeTesting,
	}).(uint64)

	// SectorSizeDev defines how large a sector should be in Dev builds.
	SectorSizeDev = uint64(1 << SectorSizeScalingDev)
	// SectorSizeStandard defines how large a sector should be in Standard
	// builds.
	SectorSizeStandard = uint64(1 << SectorSizeScalingStandard)
	// SectorSizeTesting defines how large a sector should be in Testing builds.
	SectorSizeTesting = uint64(1 << SectorSizeScalingTesting)

	// SectorSizeScalingDev defines the power of 2 to which we scale sector
	// sizes in Dev builds.
	SectorSizeScalingDev = 18 // 256 KiB
	// SectorSizeScalingStandard defines the power of 2 to which we scale sector
	// sizes in Standard builds.
	SectorSizeScalingStandard = 22 // 4 MiB
	// SectorSizeScalingTesting defines the power of 2 to which we scale sector
	// sizes in Testing builds.
	SectorSizeScalingTesting = 12 // 4 KiB
)
View Source
var (
	RPCLoopEnter              = types.NewSpecifier("LoopEnter")
	RPCLoopExit               = types.NewSpecifier("LoopExit")
	RPCLoopFormContract       = types.NewSpecifier("LoopFormContract")
	RPCLoopLock               = types.NewSpecifier("LoopLock")
	RPCLoopRead               = types.NewSpecifier("LoopRead")
	RPCLoopRenewContract      = types.NewSpecifier("LoopRenew")
	RPCLoopRenewClearContract = types.NewSpecifier("LoopRenewClear")
	RPCLoopSectorRoots        = types.NewSpecifier("LoopSectorRoots")
	RPCLoopSettings           = types.NewSpecifier("LoopSettings")
	RPCLoopUnlock             = types.NewSpecifier("LoopUnlock")
	RPCLoopWrite              = types.NewSpecifier("LoopWrite")
)

New RPC IDs

View Source
var (
	CipherChaCha20Poly1305 = types.NewSpecifier("ChaCha20Poly1305")
	CipherNoOverlap        = types.NewSpecifier("NoOverlap")
)

RPC ciphers

View Source
var (
	WriteActionAppend = types.NewSpecifier("Append")
	WriteActionTrim   = types.NewSpecifier("Trim")
	WriteActionSwap   = types.NewSpecifier("Swap")
	WriteActionUpdate = types.NewSpecifier("Update")
)

Write actions

View Source
var (
	// ErrSizeTooLarge is returned for file sizes that exceed the sector size.
	ErrSizeTooLarge = errors.New("file size exceeds sector size")
	// ErrZeroSize is returned for zero-length files.
	ErrZeroSize = errors.New("file size of zero")
)
View Source
var (
	// ErrInsufficientPaymentForRPC is returned when the provided payment was
	// lower than the cost of the RPC.
	ErrInsufficientPaymentForRPC = errors.New("Insufficient payment, the provided payment did not cover the cost of the RPC.")

	// ErrExpiredRPCPriceTable is returned when the renter performs an RPC call
	// and the current block height exceeds the expiry block height of the RPC
	// price table.
	ErrExpiredRPCPriceTable = errors.New("Expired RPC price table, ensure you have the latest prices by calling the updatePriceTable RPC.")

	// ErrWithdrawalsInactive occurs when the host is not synced yet. If that is
	// the case the account manager does not allow trading money from the
	// ephemeral accounts.
	ErrWithdrawalsInactive = errors.New("ephemeral account withdrawals are inactive because the host is not synced")

	// ErrWithdrawalExpired occurs when the withdrawal message's expiry block
	// height is in the past.
	ErrWithdrawalExpired = errors.New("ephemeral account withdrawal message expired")

	// ErrWithdrawalExtremeFuture occurs when the withdrawal message's expiry
	// block height is too far into the future.
	ErrWithdrawalExtremeFuture = errors.New("ephemeral account withdrawal message expires too far into the future")

	// ErrWithdrawalInvalidSignature occurs when the signature provided with the
	// withdrawal message was invalid.
	ErrWithdrawalInvalidSignature = errors.New("ephemeral account withdrawal message signature is invalid")
)
View Source
var (
	// DefaultAllowance is the set of default allowance settings that will be
	// used when allowances are not set or not fully set
	DefaultAllowance = Allowance{
		Funds:       types.SiacoinPrecision.Mul64(500),
		Hosts:       uint64(PriceEstimationScope),
		Period:      3 * types.BlocksPerMonth,
		RenewWindow: types.BlocksPerMonth,

		ExpectedStorage:    1e12,
		ExpectedUpload:     uint64(200e9) / uint64(types.BlocksPerMonth),
		ExpectedDownload:   uint64(100e9) / uint64(types.BlocksPerMonth),
		ExpectedRedundancy: 3.0,
		MaxPeriodChurn:     uint64(250e9),
	}
	// ErrHostFault indicates if an error is the host's fault.
	ErrHostFault = errors.New("host has returned an error")

	// ErrDownloadCancelled is the error set when a download was cancelled
	// manually by the user.
	ErrDownloadCancelled = errors.New("download was cancelled")

	// PriceEstimationScope is the number of hosts that get queried by the
	// renter when providing price estimates. Especially for the 'Standard'
	// variable, there should be congruence with the number of contracts being
	// used in the renter allowance.
	PriceEstimationScope = build.Select(build.Var{
		Standard: int(50),
		Dev:      int(12),
		Testing:  int(4),
	}).(int)
	// BackupKeySpecifier is a specifier that is hashed with the wallet seed to
	// create a key for encrypting backups.
	BackupKeySpecifier = types.NewSpecifier("backupkey")
)
View Source
var (
	// GlobalRateLimits is the global object for regulating ratelimits
	// throughout siad. It is set using the gateway module.
	GlobalRateLimits = ratelimit.NewRateLimit(0, 0, 0)

	// ConfigName is the name of the config file on disk
	ConfigName = "siad.config"
)
View Source
var (
	// ErrEmptySiaPath is an error when SiaPath is empty
	ErrEmptySiaPath = errors.New("SiaPath must be a nonempty string")

	// SiaDirExtension is the extension for siadir metadata files on disk
	SiaDirExtension = ".siadir"

	// SiaFileExtension is the extension for siafiles on disk
	SiaFileExtension = ".sia"

	// PartialsSiaFileExtension is the extension for siafiles which contain
	// combined chunks.
	PartialsSiaFileExtension = ".csia"

	// CombinedChunkExtension is the extension for a combined chunk on disk.
	CombinedChunkExtension = ".cc"
	// UnfinishedChunkExtension is the extension for an unfinished combined chunk
	// and is appended to the file in addition to CombinedChunkExtension.
	UnfinishedChunkExtension = ".unfinished"
	// ChunkMetadataExtension is the extension of a metadata file for a combined
	// chunk.
	ChunkMetadataExtension = ".ccmd"
)
View Source
var (
	// SkyfileFormatNotSpecified is the default format for the endpoint when the
	// format isn't specified explicitly.
	SkyfileFormatNotSpecified = SkyfileFormat("")
	// SkyfileFormatConcat returns the skyfiles in a concatenated manner.
	SkyfileFormatConcat = SkyfileFormat("concat")
	// SkyfileFormatTar returns the skyfiles as a .tar.
	SkyfileFormatTar = SkyfileFormat("tar")
	// SkyfileFormatTarGz returns the skyfiles as a .tar.gz.
	SkyfileFormatTarGz = SkyfileFormat("targz")
)
View Source
var (
	// ErrDuplicateTransactionSet is the error that gets returned if a
	// duplicate transaction set is given to the transaction pool.
	ErrDuplicateTransactionSet = errors.New("transaction set contains only duplicate transactions")

	// ErrInvalidArbPrefix is the error that gets returned if a transaction is
	// submitted to the transaction pool which contains a prefix that is not
	// recognized. This helps prevent miners on old versions from mining
	// potentially illegal transactions in the event of a soft-fork.
	ErrInvalidArbPrefix = errors.New("transaction contains non-standard arbitrary data")

	// ErrLargeTransaction is the error that gets returned if a transaction
	// provided to the transaction pool is larger than what is allowed by the
	// IsStandard rules.
	ErrLargeTransaction = errors.New("transaction is too large for this transaction pool")

	// ErrLargeTransactionSet is the error that gets returned if a transaction
	// set given to the transaction pool is larger than the limit placed by the
	// IsStandard rules of the transaction pool.
	ErrLargeTransactionSet = errors.New("transaction set is too large for this transaction pool")

	// PrefixNonSia defines the prefix that should be appended to any
	// transactions that use the arbitrary data for reasons outside of the
	// standard Sia protocol. This will prevent these transactions from being
	// rejected by the IsStandard set of rules, but also means that the data
	// will never be used within the formal Sia protocol.
	PrefixNonSia = types.NewSpecifier("NonSia")

	// TransactionPoolDir is the name of the directory that is used to store
	// the transaction pool's persistent data.
	TransactionPoolDir = "transactionpool"
)
View Source
var (
	// ErrBadEncryptionKey is returned if the incorrect encryption key to a
	// file is provided.
	ErrBadEncryptionKey = errors.New("provided encryption key is incorrect")

	// ErrIncompleteTransactions is returned if the wallet has incomplete
	// transactions being built that are using all of the current outputs, and
	// therefore the wallet is unable to spend money despite it not technically
	// being 'unconfirmed' yet.
	ErrIncompleteTransactions = errors.New("wallet has coins spent in incomplete transactions - not enough remaining coins")

	// ErrLockedWallet is returned when an action cannot be performed due to
	// the wallet being locked.
	ErrLockedWallet = errors.New("wallet must be unlocked before it can be used")

	// ErrLowBalance is returned if the wallet does not have enough funds to
	// complete the desired action.
	ErrLowBalance = errors.New("insufficient balance")

	// ErrWalletShutdown is returned when a method can't continue execution due
	// to the wallet shutting down.
	ErrWalletShutdown = errors.New("wallet is shutting down")
)
View Source
var (
	// BootstrapPeers is a list of peers that can be used to find other peers -
	// when a client first connects to the network, the only options for
	// finding peers are either manual entry of peers or to use a hardcoded
	// bootstrap point. While the bootstrap point could be a central service,
	// it can also be a list of peers that are known to be stable. We have
	// chosen to hardcode known-stable peers.
	//
	// These peers have been verified to be v1.3.7 or higher
	BootstrapPeers = build.Select(build.Var{
		Standard: []NetAddress{
			"95.78.166.67:9981",
			"68.199.121.249:9981",
			"24.194.148.158:9981",
			"82.231.193.206:9981",
			"185.216.208.214:9981",
			"165.73.59.75:9981",
			"81.5.154.29:9981",
			"68.133.15.97:9981",
			"223.19.102.54:9981",
			"136.52.23.122:9981",
			"45.56.21.129:9981",
			"109.172.42.157:9981",
			"188.244.40.69:9985",
			"176.37.126.147:9981",
			"68.96.80.134:9981",
			"92.255.195.111:9981",
			"88.202.201.30:9981",
			"76.103.83.241:9981",
			"77.132.24.85:9981",
			"81.167.50.168:9981",
			"91.206.15.126:9981",
			"91.231.94.22:9981",
			"212.105.168.207:9981",
			"94.113.86.207:9981",
			"188.242.52.10:9981",
			"94.137.140.40:9981",
			"137.74.1.200:9981",
			"85.27.163.135:9981",
			"46.246.68.66:9981",
			"92.70.88.30:9981",
			"188.68.37.232:9981",
			"153.210.37.241:9981",
			"24.20.240.181:9981",
			"92.154.126.211:9981",
			"45.50.26.222:9981",
			"41.160.218.190:9981",
			"23.175.0.151:9981",
			"109.248.206.13:9981",
			"222.161.26.222:9981",
			"68.97.208.223:9981",
			"71.190.208.128:9981",
			"69.120.2.164:9981",
			"37.204.141.163:9981",
			"188.243.111.129:9981",
			"78.46.64.86:9981",
			"188.244.40.69:9981",
			"87.237.42.180:9981",
			"212.42.213.179:9981",
			"62.216.59.236:9981",
			"80.56.227.209:9981",
			"202.181.196.157:9981",
			"188.242.52.10:9986",
			"188.242.52.10:9988",
			"81.24.30.12:9981",
			"109.233.59.68:9981",
			"77.162.159.137:9981",
			"176.240.111.223:9981",
			"126.28.73.206:9981",
			"178.63.11.62:9981",
			"174.84.49.170:9981",
			"185.6.124.16:9981",
			"81.24.30.13:9981",
			"31.208.123.118:9981",
			"85.69.198.249:9981",
			"5.9.147.103:9981",
			"77.168.231.70:9981",
			"81.24.30.14:9981",
			"82.253.237.216:9981",
			"161.53.40.130:9981",
			"34.209.55.245:9981",
		},
		Dev:     []NetAddress(nil),
		Testing: []NetAddress(nil),
	}).([]NetAddress)
)
View Source
var (
	// NegotiateSettingsTime establishes the minimum amount of time that the
	// connection deadline is expected to be set to when settings are being
	// requested from the host. The deadline is long enough that the connection
	// should be successful even if both parties are on Tor.
	NegotiateSettingsTime = build.Select(build.Var{
		Dev:      120 * time.Second,
		Standard: 120 * time.Second,
		Testing:  3 * time.Second,
	}).(time.Duration)
)
View Source
var ProdDependencies = new(ProductionDependencies)

ProdDependencies acts as a global instance of the production dependencies to avoid having to instantiate new dependencies every time we want to pass production dependencies.

View Source
var (
	// RPCChallengePrefix is the prefix prepended to the challenge data
	// supplied by the host when proving ownership of a contract's secret key.
	RPCChallengePrefix = types.NewSpecifier("challenge")
)
View Source
var (
	RPCLoopReadStop = types.NewSpecifier("ReadStop")
)

Read interrupt

View Source
var (
	// RPCUpdatePriceTable specifier
	RPCUpdatePriceTable = types.NewSpecifier("UpdatePriceTable")
)
View Source
var (
	// SafeMutexDelay is the recommended timeout for the deadlock detecting
	// mutex. This value is DEPRECATED, as safe mutexes are no longer
	// recommended. Instead, the locking conventions should be followed and a
	// traditional mutex or a demote mutex should be used.
	SafeMutexDelay time.Duration
)
View Source
var (
	// SkynetFolder is the Sia folder where all of the skyfiles are stored by
	// default.
	SkynetFolder = NewGlobalSiaPath("/var/skynet")
)

Functions

func CalculateFee added in v1.0.0

func CalculateFee(ts []types.Transaction) types.Currency

CalculateFee returns the fee-per-byte of a transaction set.

func CreateAnnouncement added in v1.0.0

func CreateAnnouncement(addr NetAddress, pk types.SiaPublicKey, sk crypto.SecretKey) (signedAnnouncement []byte, err error)

CreateAnnouncement will take a host announcement and encode it, returning the exact []byte that should be added to the arbitrary data of a transaction.

func FilesizeUnits added in v1.4.2

func FilesizeUnits(size uint64) string

FilesizeUnits returns a string that displays a filesize in human-readable units.

func HealthPercentage added in v1.4.2

func HealthPercentage(health float64) float64

HealthPercentage returns the health as a more easily understood percentage out of 100%.

The percentage is out of 1.25; this accounts for the RepairThreshold of 0.25 and assumes that the worst health is 1.5. Since we do not repair until the health is worse than the RepairThreshold, a health of 0 - 0.25 is full health. Likewise, a health that is greater than 1.25 is essentially 0 health.

func IsConsensusConflict added in v1.4.2

func IsConsensusConflict(err error) bool

IsConsensusConflict returns true iff err is a ConsensusConflict.

func IsContractNotRecognizedErr added in v1.4.2

func IsContractNotRecognizedErr(err error) bool

IsContractNotRecognizedErr is a helper function to determine whether an error from a host indicates that the host does not recognize the contract that the renter is updating.

Note: To preserve compatibility, this function needs to be extended exclusively by adding more checks; the existing checks should not be altered or removed.

func IsHostsFault added in v1.3.3

func IsHostsFault(err error) bool

IsHostsFault indicates if a returned error is the host's fault.

func IsOOSErr added in v1.4.1

func IsOOSErr(err error) bool

IsOOSErr is a helper function to determine whether an error from a host indicates that the host is out of storage.

Note: To preserve compatibility, this function needs to be extended exclusively by adding more checks; the existing checks should not be altered or removed.
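A rough sketch of that pattern, using the V1420HostOutOfStorageErrString constant listed above; the real IsOOSErr may differ in detail, and the comment about a future string is purely hypothetical.

package main

import (
	"errors"
	"fmt"
	"strings"

	"gitlab.com/NebulousLabs/Sia/modules" // import path assumed for v1.4.5
)

// isOOSSketch illustrates the compatibility pattern: match every error string
// any supported host version has ever used, and only ever add new matches.
func isOOSSketch(err error) bool {
	if err == nil {
		return false
	}
	// A later host version would add another strings.Contains check here.
	return strings.Contains(err.Error(), modules.V1420HostOutOfStorageErrString)
}

func main() {
	hostErr := errors.New("host error: not enough storage remaining to accept sector")
	fmt.Println(isOOSSketch(hostErr)) // true
}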

func MDMAppendCost added in v1.4.4

func MDMAppendCost(pt RPCPriceTable) (types.Currency, types.Currency)

MDMAppendCost is the cost of executing an 'Append' instruction.

func MDMAppendMemory added in v1.4.4

func MDMAppendMemory() uint64

MDMAppendMemory returns the additional memory consumption of an 'Append' instruction.

func MDMCopyCost added in v1.4.4

func MDMCopyCost(pt RPCPriceTable, contractSize uint64) types.Currency

MDMCopyCost is the cost of executing a 'Copy' instruction.

func MDMDropSectorsCost added in v1.4.4

func MDMDropSectorsCost(pt RPCPriceTable, numSectorsDropped uint64) (types.Currency, types.Currency)

MDMDropSectorsCost is the cost of executing a 'DropSectors' instruction for a certain number of dropped sectors.

func MDMHasSectorCost added in v1.4.4

func MDMHasSectorCost(pt RPCPriceTable) (types.Currency, types.Currency)

MDMHasSectorCost is the cost of executing a 'HasSector' instruction.

func MDMHasSectorMemory added in v1.4.4

func MDMHasSectorMemory() uint64

MDMHasSectorMemory returns the additional memory consumption of a 'HasSector' instruction.

func MDMInitCost added in v1.4.4

func MDMInitCost(pt RPCPriceTable, programLen uint64) types.Currency

MDMInitCost is the cost of instantiating the MDM. It is defined as: 'InitBaseCost' + 'MemoryTimeCost' * 'programLen' * Time

func MDMMemoryCost added in v1.4.4

func MDMMemoryCost(pt RPCPriceTable, usedMemory, time uint64) types.Currency

MDMMemoryCost computes the memory cost given a price table, memory and time.

func MDMReadCost added in v1.4.4

func MDMReadCost(pt RPCPriceTable, readLength uint64) (types.Currency, types.Currency)

MDMReadCost is the cost of executing a 'Read' instruction. It is defined as: 'readBaseCost' + 'readLengthCost' * `readLength`

func MDMReadMemory added in v1.4.4

func MDMReadMemory() uint64

MDMReadMemory returns the additional memory consumption of a 'Read' instruction.

func MDMSwapCost added in v1.4.4

func MDMSwapCost(pt RPCPriceTable, contractSize uint64) types.Currency

MDMSwapCost is the cost of executing a 'Swap' instruction.

func MDMTruncateCost added in v1.4.4

func MDMTruncateCost(pt RPCPriceTable, contractSize uint64) types.Currency

MDMTruncateCost is the cost of executing a 'Truncate' instruction.

func MDMWriteCost added in v1.4.4

func MDMWriteCost(pt RPCPriceTable, writeLength uint64) (types.Currency, types.Currency)

MDMWriteCost is the cost of executing a 'Write' instruction of a certain length.

func NewRenterSession added in v1.4.0

func NewRenterSession(conn net.Conn, hostPublicKey types.SiaPublicKey) (*RenterHostSession, LoopChallengeRequest, error)

NewRenterSession returns a new renter-side session of the renter-host protocol.

func NewSiaMux added in v1.4.3

func NewSiaMux(siaMuxDir, siaDir, address string) (*siamux.SiaMux, error)

NewSiaMux returns a new SiaMux object

func PeekErr added in v1.4.2

func PeekErr(errChan <-chan error) (err error)

PeekErr checks if a chan error has an error waiting to be returned. If it does, it returns that error; otherwise it returns 'nil'.

func RPCRead added in v1.4.5

func RPCRead(stream siamux.Stream, obj interface{}) error

RPCRead tries to read the given object from the stream.

func RPCWrite added in v1.4.5

func RPCWrite(stream siamux.Stream, obj interface{}) error

RPCWrite writes the given object to the stream.

func RPCWriteAll added in v1.4.5

func RPCWriteAll(stream siamux.Stream, objs ...interface{}) error

RPCWriteAll writes the given objects to the stream.

func RPCWriteError added in v1.4.5

func RPCWriteError(stream siamux.Stream, err error) error

RPCWriteError writes the given error to the stream.

func ReadNegotiationAcceptance added in v1.0.0

func ReadNegotiationAcceptance(r io.Reader) error

ReadNegotiationAcceptance reads an accept/reject response from r (usually a net.Conn). If the response is not AcceptResponse, ReadNegotiationAcceptance returns the response as an error. If the response is StopResponse, ErrStopResponse is returned, allowing for direct error comparison.

Note that since errors returned by ReadNegotiationAcceptance are newly allocated, they cannot be compared to other errors in the traditional fashion.
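A minimal sketch of the accept/stop exchange using the negotiation helpers listed on this page, run over an in-memory connection. Error handling is trimmed and the import path is assumed.

package main

import (
	"fmt"
	"net"

	"gitlab.com/NebulousLabs/Sia/modules" // import path assumed for v1.4.5
)

func main() {
	renter, host := net.Pipe()
	defer renter.Close()
	defer host.Close()

	// The "host" side signals a graceful stop.
	go func() {
		_ = modules.WriteNegotiationStop(host)
	}()

	// The "renter" side reads the response; a stop comes back as
	// ErrStopResponse, which allows direct comparison.
	err := modules.ReadNegotiationAcceptance(renter)
	fmt.Println(err == modules.ErrStopResponse) // true
}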

func ReadRPCID added in v1.4.0

func ReadRPCID(r io.Reader, aead cipher.AEAD) (rpcID types.Specifier, err error)

ReadRPCID reads an RPC request ID using the new loop protocol.

func ReadRPCMessage added in v1.4.0

func ReadRPCMessage(r io.Reader, aead cipher.AEAD, obj interface{}, maxLen uint64) error

ReadRPCMessage reads an encrypted RPC message.

func ReadRPCRequest added in v1.4.0

func ReadRPCRequest(r io.Reader, aead cipher.AEAD, req interface{}, maxLen uint64) error

ReadRPCRequest reads an RPC request using the new loop protocol.

func ReadRPCResponse added in v1.4.0

func ReadRPCResponse(r io.Reader, aead cipher.AEAD, resp interface{}, maxLen uint64) error

ReadRPCResponse reads an RPC response using the new loop protocol.

func RenterPayoutsPreTax added in v1.3.5

func RenterPayoutsPreTax(host HostDBEntry, funding, txnFee, basePrice, baseCollateral types.Currency, period types.BlockHeight, expectedStorage uint64) (renterPayout, hostPayout, hostCollateral types.Currency, err error)

RenterPayoutsPreTax calculates the renterPayout before tax and the hostPayout given a host, the available renter funding, the expected txnFee for the transaction and an optional basePrice in case this helper is used for a renewal. It also returns the hostCollateral.

func SeedToString added in v1.0.0

func SeedToString(seed Seed, did mnemonics.DictionaryID) (string, error)

SeedToString converts a wallet seed to a human friendly string.

func SiaPKToMuxPK added in v1.4.5

func SiaPKToMuxPK(spk types.SiaPublicKey) (mk mux.ED25519PublicKey)

SiaPKToMuxPK turns a SiaPublicKey into a mux.ED25519PublicKey

func VerifyFileContractRevisionTransactionSignatures added in v1.0.0

func VerifyFileContractRevisionTransactionSignatures(fcr types.FileContractRevision, tsigs []types.TransactionSignature, height types.BlockHeight) error

VerifyFileContractRevisionTransactionSignatures checks that the signatures on a file contract revision are valid and cover the right fields.

func WriteNegotiationAcceptance added in v1.0.0

func WriteNegotiationAcceptance(w io.Writer) error

WriteNegotiationAcceptance writes the 'accept' response to w (usually a net.Conn).

func WriteNegotiationRejection added in v1.0.0

func WriteNegotiationRejection(w io.Writer, err error) error

WriteNegotiationRejection will write a rejection response to w (usually a net.Conn) and return the input error. If the write fails, the write error is joined with the input error.

func WriteNegotiationStop added in v1.0.0

func WriteNegotiationStop(w io.Writer) error

WriteNegotiationStop writes the 'stop' response to w (usually a net.Conn).

func WriteRPCMessage added in v1.4.0

func WriteRPCMessage(w io.Writer, aead cipher.AEAD, obj interface{}) error

WriteRPCMessage writes an encrypted RPC message.

func WriteRPCRequest added in v1.4.0

func WriteRPCRequest(w io.Writer, aead cipher.AEAD, rpcID types.Specifier, req interface{}) error

WriteRPCRequest writes an encrypted RPC request using the new loop protocol.

func WriteRPCResponse added in v1.4.0

func WriteRPCResponse(w io.Writer, aead cipher.AEAD, resp interface{}, err error) error

WriteRPCResponse writes an RPC response or error using the new loop protocol. Either resp or err must be nil. If err is an *RPCError, it is sent directly; otherwise, a generic RPCError is created from err's Error string.

Types

type Alert added in v1.4.2

type Alert struct {
	// Cause is the cause for the Alert.
	// e.g. "Wallet is locked"
	Cause string `json:"cause"`
	// Msg is the message the Alert is meant to convey to the user.
	// e.g. "Contractor can't form new contrats"
	Msg string `json:"msg"`
	// Module contains information about what module the alert originated from.
	Module string `json:"module"`
	// Severity categorizes the Alerts to allow for an easy way to filter them.
	Severity AlertSeverity `json:"severity"`
}

Alert is a type that contains essential information about an alert.

func (Alert) Equals added in v1.4.2

func (x Alert) Equals(y Alert) bool

Equals returns true if x and y are identical alerts

func (Alert) EqualsWithErrorCause added in v1.4.2

func (x Alert) EqualsWithErrorCause(y Alert, causeErr string) bool

EqualsWithErrorCause returns true if x and y have the same module, message, and severity and if the provided error is in both alerts' causes

type AlertID added in v1.4.2

type AlertID string

AlertID is a helper type for an Alert's ID.

func AlertIDSiafileLowRedundancy added in v1.4.2

func AlertIDSiafileLowRedundancy(uid string) AlertID

AlertIDSiafileLowRedundancy uses a Siafile's UID to create a unique AlertID for a low redundancy alert.

type AlertSeverity added in v1.4.2

type AlertSeverity uint64

AlertSeverity describes the severity of an alert.

func (AlertSeverity) MarshalJSON added in v1.4.2

func (a AlertSeverity) MarshalJSON() ([]byte, error)

MarshalJSON defines a JSON encoding for the AlertSeverity.

func (AlertSeverity) String added in v1.4.2

func (a AlertSeverity) String() string

String converts an alertSeverity to a string

func (*AlertSeverity) UnmarshalJSON added in v1.4.2

func (a *AlertSeverity) UnmarshalJSON(b []byte) error

UnmarshalJSON attempts to decode an AlertSeverity.

type Alerter added in v1.4.2

type Alerter interface {
	Alerts() (crit, err, warn []Alert)
}

Alerter is the interface implemented by all top-level modules. It's an interface that allows for asking a module about potential issues.

type Allowance added in v1.0.0

type Allowance struct {
	Funds       types.Currency    `json:"funds"`
	Hosts       uint64            `json:"hosts"`
	Period      types.BlockHeight `json:"period"`
	RenewWindow types.BlockHeight `json:"renewwindow"`

	// PaymentContractInitialFunding establishes the amount of money that the a
	// Skynet portal will put into a brand new payment contract. If this value
	// is set to zero, this node will not act as a Skynet portal. When this
	// value is non-zero, this node will act as a Skynet portal, and form
	// contracts with every reasonably priced host.
	PaymentContractInitialFunding types.Currency `json:"paymentcontractinitialfunding"`

	// ExpectedStorage is the amount of data that we expect to have in a contract.
	ExpectedStorage uint64 `json:"expectedstorage"`

	// ExpectedUpload is the expected amount of data uploaded through the API,
	// before redundancy, per block.
	ExpectedUpload uint64 `json:"expectedupload"`

	// ExpectedDownload is the expected amount of data downloaded through the
	// API per block.
	ExpectedDownload uint64 `json:"expecteddownload"`

	// ExpectedRedundancy is the average redundancy of files being uploaded.
	ExpectedRedundancy float64 `json:"expectedredundancy"`

	// MaxPeriodChurn is maximum amount of contract churn allowed in a single
	// period.
	MaxPeriodChurn uint64 `json:"maxperiodchurn"`

	// The following fields provide price gouging protection for the user. By
	// setting a particular maximum price for each mechanism that a host can use
	// to charge users, the workers know to avoid hosts that go outside of the
	// safety range.
	//
	// The intention is that if the fields are not set, a reasonable value will
	// be derived from the other allowance settings. The intention is that the
	// hostdb will pay attention to these limits when forming contracts,
	// understanding that a certain feature (such as storage) will not be used
	// if the host price is above the limit. If the hostdb believes that a host
	// is valuable for its other, more reasonably priced features, the hostdb
	// may choose to form a contract with the host anyway.
	//
	// NOTE: If the allowance max price fields are ever extended, all of the
	// price gouging checks throughout the worker code and contract formation
	// code also need to be extended.
	MaxRPCPrice               types.Currency `json:"maxrpcprice"`
	MaxContractPrice          types.Currency `json:"maxcontractprice"`
	MaxDownloadBandwidthPrice types.Currency `json:"maxdownloadbandwidthprice"`
	MaxSectorAccessPrice      types.Currency `json:"maxsectoraccessprice"`
	MaxStoragePrice           types.Currency `json:"maxstorageprice"`
	MaxUploadBandwidthPrice   types.Currency `json:"maxuploadbandwidthprice"`
}

An Allowance dictates how much the Renter is allowed to spend in a given period. Note that funds are spent on both storage and bandwidth.

NOTE: When changing the allowance struct, any new or adjusted fields are going to be loaded as blank when the contractor first starts up. The startup code either needs to set sane defaults, or the code which depends on the values needs to appropriately handle the values being empty.

func (Allowance) Active added in v1.4.2

func (a Allowance) Active() bool

Active returns true if and only if this allowance has been set in the contractor.

type BlockFacts added in v1.0.0

type BlockFacts struct {
	BlockID           types.BlockID     `json:"blockid"`
	Difficulty        types.Currency    `json:"difficulty"`
	EstimatedHashrate types.Currency    `json:"estimatedhashrate"`
	Height            types.BlockHeight `json:"height"`
	MaturityTimestamp types.Timestamp   `json:"maturitytimestamp"`
	Target            types.Target      `json:"target"`
	TotalCoins        types.Currency    `json:"totalcoins"`

	// Transaction type counts.
	MinerPayoutCount          uint64 `json:"minerpayoutcount"`
	TransactionCount          uint64 `json:"transactioncount"`
	SiacoinInputCount         uint64 `json:"siacoininputcount"`
	SiacoinOutputCount        uint64 `json:"siacoinoutputcount"`
	FileContractCount         uint64 `json:"filecontractcount"`
	FileContractRevisionCount uint64 `json:"filecontractrevisioncount"`
	StorageProofCount         uint64 `json:"storageproofcount"`
	SiafundInputCount         uint64 `json:"siafundinputcount"`
	SiafundOutputCount        uint64 `json:"siafundoutputcount"`
	MinerFeeCount             uint64 `json:"minerfeecount"`
	ArbitraryDataCount        uint64 `json:"arbitrarydatacount"`
	TransactionSignatureCount uint64 `json:"transactionsignaturecount"`

	// Factoids about file contracts.
	ActiveContractCost  types.Currency `json:"activecontractcost"`
	ActiveContractCount uint64         `json:"activecontractcount"`
	ActiveContractSize  types.Currency `json:"activecontractsize"`
	TotalContractCost   types.Currency `json:"totalcontractcost"`
	TotalContractSize   types.Currency `json:"totalcontractsize"`
	TotalRevisionVolume types.Currency `json:"totalrevisionvolume"`
}

BlockFacts returns a bunch of statistics about the consensus set as they were at a specific block.

type BlockManager added in v1.0.0

type BlockManager interface {
	// HeaderForWork returns a block header that can be used for nonce
	// grinding and then resubmitted to the miner. HeaderForWork() will
	// remember the block that corresponds to the header for 50 calls.
	HeaderForWork() (types.BlockHeader, types.Target, error)

	// SubmitBlock accepts a solved block.
	SubmitBlock(types.Block) error

	// SubmitHeader takes a block header that has been worked on and has a
	// valid target.
	SubmitHeader(types.BlockHeader) error

	// BlocksMined returns the number of blocks and stale blocks that have been
	// mined using this miner.
	BlocksMined() (goodBlocks, staleBlocks int)
}

BlockManager contains functions that can interface with external miners, providing and receiving blocks that have experienced nonce grinding.
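
A rough sketch of the external-miner flow described above; solveHeader is a hypothetical stand-in for off-process nonce grinding, and the import paths are assumed.

package example

import (
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// solveHeader stands in for an external miner that grinds the header nonce
// until the header's hash meets the target; its implementation is out of
// scope for this sketch.
func solveHeader(h types.BlockHeader, t types.Target) types.BlockHeader {
	return h // placeholder
}

// mineOnce pulls one header from the BlockManager, has it solved externally,
// and submits the solved header back. The BlockManager remembers the block
// behind each header it hands out for a limited number of calls.
func mineOnce(bm modules.BlockManager) error {
	header, target, err := bm.HeaderForWork()
	if err != nil {
		return err
	}
	return bm.SubmitHeader(solveHeader(header, target))
}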

type CPUMiner added in v1.0.0

type CPUMiner interface {
	// CPUHashrate returns the hashrate of the cpu miner in hashes per second.
	CPUHashrate() int

	// CPUMining returns true if the cpu miner is enabled, and false otherwise.
	CPUMining() bool

	// StartCPUMining turns on the miner, which will endlessly work for new
	// blocks.
	StartCPUMining()

	// StopCPUMining turns off the miner, but keeps the same number of threads.
	StopCPUMining()
}

CPUMiner provides access to a single-threaded cpu miner.

type CombinedChunkID added in v1.4.2

type CombinedChunkID string

CombinedChunkID is a unique identifier for a combined chunk which makes up part of its filename on disk.

type ConsensusChange added in v0.3.3

type ConsensusChange struct {
	// ID is a unique id for the consensus change derived from the reverted
	// and applied blocks.
	ID ConsensusChangeID

	// RevertedBlocks is the list of blocks that were reverted by the change.
	// The reverted blocks were always all reverted before the applied blocks
	// were applied. The reverted blocks are presented in the order that they
	// were reverted.
	RevertedBlocks []types.Block

	// AppliedBlocks is the list of blocks that were applied by the change. The
	// applied blocks are always all applied after all the reverted blocks were
	// reverted. The applied blocks are presented in the order that they were
	// applied.
	AppliedBlocks []types.Block

	// SiacoinOutputDiffs contains the set of siacoin diffs that were applied
	// to the consensus set in the recent change. The direction for the set of
	// diffs is 'DiffApply'.
	SiacoinOutputDiffs []SiacoinOutputDiff

	// FileContractDiffs contains the set of file contract diffs that were
	// applied to the consensus set in the recent change. The direction for the
	// set of diffs is 'DiffApply'.
	FileContractDiffs []FileContractDiff

	// SiafundOutputDiffs contains the set of siafund diffs that were applied
	// to the consensus set in the recent change. The direction for the set of
	// diffs is 'DiffApply'.
	SiafundOutputDiffs []SiafundOutputDiff

	// DelayedSiacoinOutputDiffs contains the set of delayed siacoin output
	// diffs that were applied to the consensus set in the recent change.
	DelayedSiacoinOutputDiffs []DelayedSiacoinOutputDiff

	// SiafundPoolDiffs are the siafund pool diffs that were applied to the
	// consensus set in the recent change.
	SiafundPoolDiffs []SiafundPoolDiff

	// ChildTarget defines the target of any block that would be the child
	// of the block most recently appended to the consensus set.
	ChildTarget types.Target

	// MinimumValidChildTimestamp defines the minimum allowed timestamp for
	// any block that is the child of the block most recently appended to
	// the consensus set.
	MinimumValidChildTimestamp types.Timestamp

	// Synced indicates whether or not the ConsensusSet is synced with its
	// peers.
	Synced bool

	// TryTransactionSet is an unlocked version of
	// ConsensusSet.TryTransactionSet. This allows the TryTransactionSet
	// function to be called by a subscriber during
	// ProcessConsensusChange.
	TryTransactionSet func([]types.Transaction) (ConsensusChange, error)
}

A ConsensusChange enumerates a set of changes that occurred to the consensus set.

func (ConsensusChange) Append added in v1.0.0

Append takes two ConsensusChange objects and adds all of their diffs together.

NOTE: It is possible for diffs to overlap or be inconsistent. This function should only be used with consecutive or disjoint consensus change objects.

type ConsensusChangeID added in v1.0.0

type ConsensusChangeID crypto.Hash

ConsensusChangeID is the id of a consensus change.

func (ConsensusChangeID) String added in v1.4.2

func (ccID ConsensusChangeID) String() string

String returns the ConsensusChangeID as a string.

type ConsensusConflict added in v1.0.0

type ConsensusConflict string

ConsensusConflict implements the error interface, and indicates that a transaction was rejected due to being incompatible with the current consensus set, meaning either a double spend or a consensus rule violation - it is unlikely that the transaction will ever be valid.

func NewConsensusConflict added in v1.0.0

func NewConsensusConflict(s string) ConsensusConflict

NewConsensusConflict returns a consensus conflict, which implements the error interface.

func (ConsensusConflict) Error added in v1.0.0

func (cc ConsensusConflict) Error() string

Error implements the error interface, turning the consensus conflict into an acceptable error type.
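
A sketch of how a caller might produce and detect this error class; acceptTransaction and isDoubleSpend are hypothetical helpers used only for illustration.

package example

import (
	"errors"

	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// acceptTransaction is a hypothetical validation hook that rejects a
// transaction with a ConsensusConflict when it conflicts with consensus.
func acceptTransaction(t types.Transaction, isDoubleSpend func(types.Transaction) bool) error {
	if isDoubleSpend(t) {
		return modules.NewConsensusConflict("transaction spends an already-spent output")
	}
	return nil
}

// isPermanentRejection reports whether err indicates a conflict with the
// consensus set, i.e. the transaction is unlikely to ever become valid.
func isPermanentRejection(err error) bool {
	var cc modules.ConsensusConflict
	return errors.As(err, &cc)
}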

type ConsensusSet added in v0.3.2

type ConsensusSet interface {
	Alerter

	// AcceptBlock adds a block to consensus. An error will be returned if the
	// block is invalid, has been seen before, is an orphan, or doesn't
	// contribute to the heaviest fork known to the consensus set. If the block
	// does not become the head of the heaviest known fork but is otherwise
	// valid, it will be remembered by the consensus set but an error will
	// still be returned.
	AcceptBlock(types.Block) error

	// BlockAtHeight returns the block found at the input height, with a
	// bool to indicate whether that block exists.
	BlockAtHeight(types.BlockHeight) (types.Block, bool)

	// BlockByID returns a block found for a given ID and its height, with
	// a bool to indicate whether that block exists.
	BlockByID(types.BlockID) (types.Block, types.BlockHeight, bool)

	// ChildTarget returns the target required to extend the current heaviest
	// fork. This function is typically used by miners looking to extend the
	// heaviest fork.
	ChildTarget(types.BlockID) (types.Target, bool)

	// Close will shut down the consensus set, giving the module enough time to
	// run any required closing routines.
	Close() error

	// ConsensusSetSubscribe adds a subscriber to the list of subscribers
	// and gives them every consensus change that has occurred since the
	// change with the provided id. There are a few special cases,
	// described by the ConsensusChangeX variables in this package.
	// A channel can be provided to abort the subscription process.
	ConsensusSetSubscribe(ConsensusSetSubscriber, ConsensusChangeID, <-chan struct{}) error

	// CurrentBlock returns the latest block in the heaviest known
	// blockchain.
	CurrentBlock() types.Block

	// Flush will cause the consensus set to finish all in-progress
	// routines.
	Flush() error

	// Height returns the current height of consensus.
	Height() types.BlockHeight

	// Synced returns true if the consensus set is synced with the network.
	Synced() bool

	// InCurrentPath returns true if the block id presented is found in the
	// current path, false otherwise.
	InCurrentPath(types.BlockID) bool

	// MinimumValidChildTimestamp returns the earliest timestamp that is
	// valid on the current longest fork according to the consensus set. This is
	// a required piece of information for the miner, who could otherwise be at
	// risk of mining invalid blocks.
	MinimumValidChildTimestamp(types.BlockID) (types.Timestamp, bool)

	// StorageProofSegment returns the segment to be used in the storage proof for
	// a given file contract.
	StorageProofSegment(types.FileContractID) (uint64, error)

	// TryTransactionSet checks whether the transaction set would be valid if
	// it were added in the next block. A consensus change is returned
	// detailing the diffs that would result from the application of the
	// transaction.
	TryTransactionSet([]types.Transaction) (ConsensusChange, error)

	// Unsubscribe removes a subscriber from the list of subscribers,
	// allowing for garbage collection and rescanning. If the subscriber is
	// not found in the subscriber database, no action is taken.
	Unsubscribe(ConsensusSetSubscriber)
}

A ConsensusSet accepts blocks and builds an understanding of network consensus.

type ConsensusSetSubscriber added in v0.3.2

type ConsensusSetSubscriber interface {
	// ProcessConsensusChange sends a consensus update to a module through
	// a function call. Updates will always be sent in the correct order.
	// There may not be any reverted blocks, but there will always be
	// applied blocks.
	ProcessConsensusChange(ConsensusChange)
}

A ConsensusSetSubscriber is an object that receives updates to the consensus set every time there is a change in consensus.
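
A sketch of the usual subscription pattern: implement ProcessConsensusChange and register with ConsensusSetSubscribe. The ConsensusChangeBeginning value used below is assumed to be one of the ConsensusChangeX variables mentioned above.

package example

import (
	"log"

	"gitlab.com/NebulousLabs/Sia/modules"
)

// blockCounter is a toy subscriber that counts applied and reverted blocks.
type blockCounter struct {
	applied  int
	reverted int
}

// ProcessConsensusChange is called by the consensus set, in order, for every
// change since the change ID supplied at subscription time.
func (bc *blockCounter) ProcessConsensusChange(cc modules.ConsensusChange) {
	bc.reverted += len(cc.RevertedBlocks)
	bc.applied += len(cc.AppliedBlocks)
	if cc.Synced {
		log.Printf("synced: %d applied, %d reverted", bc.applied, bc.reverted)
	}
}

// subscribe registers the counter for a replay of every change from the start
// of the blockchain. ConsensusChangeBeginning is assumed here.
func subscribe(cs modules.ConsensusSet, cancel <-chan struct{}) (*blockCounter, error) {
	bc := new(blockCounter)
	if err := cs.ConsensusSetSubscribe(bc, modules.ConsensusChangeBeginning, cancel); err != nil {
		return nil, err
	}
	return bc, nil
}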

type ContractUtility added in v1.3.1

type ContractUtility struct {
	GoodForUpload bool
	GoodForRenew  bool

	// BadContract will be set to true if there's good reason to believe that
	// the contract is unusable and will continue to be unusable. For example,
	// if the host is claiming that the contract does not exist, the contract
	// should be marked as bad.
	BadContract bool
	LastOOSErr  types.BlockHeight // OOS means Out Of Storage

	// If a contract is locked, the utility should not be updated. 'Locked' is a
	// value that gets persisted.
	Locked bool
}

ContractUtility contains metrics internal to the contractor that reflect the utility of a given contract.

type ContractWatchStatus added in v1.4.2

type ContractWatchStatus struct {
	Archived                  bool              `json:"archived"`
	FormationSweepHeight      types.BlockHeight `json:"formationsweepheight"`
	ContractFound             bool              `json:"contractfound"`
	LatestRevisionFound       uint64            `json:"latestrevisionfound"`
	StorageProofFoundAtHeight types.BlockHeight `json:"storageprooffoundatheight"`
	DoubleSpendHeight         types.BlockHeight `json:"doublespendheight"`
	WindowStart               types.BlockHeight `json:"windowstart"`
	WindowEnd                 types.BlockHeight `json:"windowend"`
}

ContractWatchStatus provides information about the status of a contract in the renter's watchdog.

type ContractorChurnStatus added in v1.4.2

type ContractorChurnStatus struct {
	// AggregateCurrentPeriodChurn is the total size of files from churned contracts in this
	// period.
	AggregateCurrentPeriodChurn uint64 `json:"aggregatecurrentperiodchurn"`
	// MaxPeriodChurn is the (adjustable) maximum churn allowed per period.
	MaxPeriodChurn uint64 `json:"maxperiodchurn"`
}

ContractorChurnStatus contains the current churn budgets for the Contractor's churnLimiter and the aggregate churn for the current period.

type ContractorSpending added in v1.3.1

type ContractorSpending struct {
	// ContractFees are the sum of all fees in the contract. This means it
	// includes the ContractFee, TxnFee and SiafundFee
	ContractFees types.Currency `json:"contractfees"`
	// DownloadSpending is the money currently spent on downloads.
	DownloadSpending types.Currency `json:"downloadspending"`
	// StorageSpending is the money currently spent on storage.
	StorageSpending types.Currency `json:"storagespending"`
	// TotalAllocated is the total amount of money that the renter has put
	// into contracts, whether it's locked and the renter gets that money
	// back or whether it's spent and the renter won't get the money back.
	TotalAllocated types.Currency `json:"totalallocated"`
	// UploadSpending is the money currently spent on uploads.
	UploadSpending types.Currency `json:"uploadspending"`
	// Unspent is locked-away, unspent money.
	Unspent types.Currency `json:"unspent"`
	// ContractSpendingDeprecated was renamed to TotalAllocated and always has the
	// same value as TotalAllocated.
	ContractSpendingDeprecated types.Currency `json:"contractspending,siamismatch"`
	// WithheldFunds are the funds from the previous period that are tied up
	// in contracts and have not been released yet
	WithheldFunds types.Currency `json:"withheldfunds"`
	// ReleaseBlock is the block at which the WithheldFunds should be
	// released to the renter, based on worst case.
	// Contract End Height + Host Window Size + Maturity Delay
	ReleaseBlock types.BlockHeight `json:"releaseblock"`
	// PreviousSpending is the total funds spent on old contracts that are
	// not included in the current period spending.
	PreviousSpending types.Currency `json:"previousspending"`
}

ContractorSpending contains the metrics about how much the Contractor has spent during the current billing period.

type DelayedSiacoinOutputDiff added in v0.3.3

type DelayedSiacoinOutputDiff struct {
	Direction      DiffDirection
	ID             types.SiacoinOutputID
	SiacoinOutput  types.SiacoinOutput
	MaturityHeight types.BlockHeight
}

A DelayedSiacoinOutputDiff indicates the introduction of a siacoin output that cannot be spent until after maturing for 144 blocks. When the output has matured, a SiacoinOutputDiff will be provided.

type Dependencies added in v1.3.2

type Dependencies interface {
	// AtLeastOne will return a value that is at least one. In production,
	// the value should always be one. This function is used to test the
	// idempotency of actions, so during testing sometimes the value
	// returned will be higher, causing an idempotent action to be
	// committed multiple times. If the action is truly idempotent,
	// committing it multiple times should not cause any problems or
	// changes.
	AtLeastOne() uint64

	// CreateFile gives the host the ability to create files on the
	// operating system.
	CreateFile(string) (File, error)

	// Destruct will clean up the dependencies, panicking if there are
	// unclosed resources.
	Destruct()

	// DialTimeout tries to create a tcp connection to the specified
	// address with a certain timeout.
	DialTimeout(NetAddress, time.Duration) (net.Conn, error)

	// Disrupt can be inserted in the code as a way to inject problems,
	// such as a network call that takes 10 minutes or a disk write that
	// never completes. Disrupt will return true if the disruption is
	// forcibly triggered. In production, Disrupt will always return false.
	Disrupt(string) bool

	// Listen gives the host the ability to receive incoming connections.
	Listen(string, string) (net.Listener, error)

	// LoadFile allows the host to load a persistence structure from disk.
	LoadFile(persist.Metadata, interface{}, string) error

	// LookupIP resolves a hostname to a number of IP addresses. If an IP
	// address is provided as an argument it will just return that IP.
	LookupIP(string) ([]net.IP, error)

	// MkdirAll gives the host the ability to create chains of folders
	// within the filesystem.
	MkdirAll(string, os.FileMode) error

	// NewLogger creates a logger that the host can use to log messages and
	// write critical statements.
	NewLogger(string) (*persist.Logger, error)

	// OpenDatabase creates a database that the host can use to interact
	// with large volumes of persistent data.
	OpenDatabase(persist.Metadata, string) (*persist.BoltDatabase, error)

	// Open opens a file readonly.
	Open(string) (File, error)

	// OpenFile opens a file with the specified mode.
	OpenFile(string, int, os.FileMode) (File, error)

	// Resolver returns a Resolver which can resolve hostnames to IPs.
	Resolver() Resolver

	// RandRead fills the input bytes with random data.
	RandRead([]byte) (int, error)

	// ReadFile reads a file in full from the filesystem.
	ReadFile(string) ([]byte, error)

	// RemoveFile removes a file from the filesystem.
	RemoveFile(string) error

	// RenameFile renames a file on disk to another name.
	RenameFile(string, string) error

	// SaveFileSync writes JSON encoded data to disk and syncs the file
	// afterwards.
	SaveFileSync(persist.Metadata, interface{}, string) error

	// Sleep blocks the calling thread for at least the specified duration.
	Sleep(time.Duration)

	// Symlink creates a symlink between a source and a destination.
	Symlink(s1, s2 string) error

	// WriteFile writes data to the filesystem using the provided filename.
	WriteFile(string, []byte, os.FileMode) error
}

Dependencies defines dependencies used by all of Sia's modules. Custom dependencies can be created to inject certain behavior during testing.
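
A sketch of the testing pattern this interface enables: wrap a real implementation by embedding the interface and override only Disrupt, so a test can force the failure path guarded by a particular disruption string.

package example

import "gitlab.com/NebulousLabs/Sia/modules"

// disruptDeps wraps a real Dependencies implementation, forwarding every
// method to it except Disrupt, which triggers for a single named point.
type disruptDeps struct {
	modules.Dependencies
	target string
}

// Disrupt returns true only for the configured disruption string, letting a
// test exercise the failure path guarded by that string.
func (d *disruptDeps) Disrupt(s string) bool {
	return s == d.target
}

// withDisrupt builds a test dependency set that fails at the given point.
func withDisrupt(base modules.Dependencies, target string) modules.Dependencies {
	return &disruptDeps{Dependencies: base, target: target}
}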

type DiffDirection added in v0.3.1

type DiffDirection bool

A DiffDirection indicates the "direction" of a diff, either applied or reverted. A bool is used to restrict the value to these two possibilities.

const (
	// ConsensusDir is the name of the directory used for all of the consensus
	// persistence files.
	ConsensusDir = "consensus"

	// DiffApply indicates that a diff is being applied to the consensus set.
	DiffApply DiffDirection = true

	// DiffRevert indicates that a diff is being reverted from the consensus
	// set.
	DiffRevert DiffDirection = false
)
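
A sketch of a subscriber folding delayed siacoin output diffs into its own state by branching on the diff direction; the pending map is purely illustrative, and maturity tracking is omitted.

package example

import (
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// trackDelayedOutputs folds the delayed siacoin output diffs of a consensus
// change into a toy pending-output map.
func trackDelayedOutputs(pending map[types.SiacoinOutputID]types.SiacoinOutput, cc modules.ConsensusChange) {
	for _, diff := range cc.DelayedSiacoinOutputDiffs {
		switch diff.Direction {
		case modules.DiffApply:
			// The change introduced a delayed output (e.g. a miner payout).
			pending[diff.ID] = diff.SiacoinOutput
		case modules.DiffRevert:
			// The change removed the delayed output again.
			delete(pending, diff.ID)
		}
	}
}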

type DirectoryInfo added in v1.4.0

type DirectoryInfo struct {
	// The following fields are aggregate values of the siadir. These values are
	// the totals of the siadir and any sub siadirs, or are calculated based on
	// all the values in the subtree
	AggregateHealth              float64   `json:"aggregatehealth"`
	AggregateLastHealthCheckTime time.Time `json:"aggregatelasthealthchecktime"`
	AggregateMaxHealth           float64   `json:"aggregatemaxhealth"`
	AggregateMaxHealthPercentage float64   `json:"aggregatemaxhealthpercentage"`
	AggregateMinRedundancy       float64   `json:"aggregateminredundancy"`
	AggregateMostRecentModTime   time.Time `json:"aggregatemostrecentmodtime"`
	AggregateNumFiles            uint64    `json:"aggregatenumfiles"`
	AggregateNumStuckChunks      uint64    `json:"aggregatenumstuckchunks"`
	AggregateNumSubDirs          uint64    `json:"aggregatenumsubdirs"`
	AggregateSize                uint64    `json:"aggregatesize"`
	AggregateStuckHealth         float64   `json:"aggregatestuckhealth"`

	// The following fields are information specific to the siadir that is not
	// an aggregate of the entire sub directory tree
	Health              float64     `json:"health"`
	LastHealthCheckTime time.Time   `json:"lasthealthchecktime"`
	MaxHealthPercentage float64     `json:"maxhealthpercentage"`
	MaxHealth           float64     `json:"maxhealth"`
	MinRedundancy       float64     `json:"minredundancy"`
	DirMode             os.FileMode `json:"mode,siamismatch"` // Field is called DirMode for fuse compatibility
	MostRecentModTime   time.Time   `json:"mostrecentmodtime"`
	NumFiles            uint64      `json:"numfiles"`
	NumStuckChunks      uint64      `json:"numstuckchunks"`
	NumSubDirs          uint64      `json:"numsubdirs"`
	SiaPath             SiaPath     `json:"siapath"`
	DirSize             uint64      `json:"size,siamismatch"` // Stays as 'size' in json for compatibility
	StuckHealth         float64     `json:"stuckhealth"`
	UID                 uint64      `json:"uid"`
}

DirectoryInfo provides information about a siadir

func (DirectoryInfo) IsDir added in v1.4.2

func (d DirectoryInfo) IsDir() bool

IsDir implements os.FileInfo.

func (DirectoryInfo) ModTime added in v1.4.2

func (d DirectoryInfo) ModTime() time.Time

ModTime implements os.FileInfo.

func (DirectoryInfo) Mode added in v1.4.2

func (d DirectoryInfo) Mode() os.FileMode

Mode implements os.FileInfo.

func (DirectoryInfo) Name added in v1.4.2

func (d DirectoryInfo) Name() string

Name implements os.FileInfo.

func (DirectoryInfo) Size added in v1.4.1

func (d DirectoryInfo) Size() int64

Size implements os.FileInfo.

func (DirectoryInfo) Sys added in v1.4.2

func (d DirectoryInfo) Sys() interface{}

Sys implements os.FileInfo.

type DownloadAction added in v1.0.0

type DownloadAction struct {
	MerkleRoot crypto.Hash
	Offset     uint64
	Length     uint64
}

A DownloadAction is a description of a download that the renter would like to make. The MerkleRoot indicates the root of the sector, the offset indicates what portion of the sector is being downloaded, and the length indicates how many bytes should be grabbed starting from the offset.

type DownloadID added in v1.4.2

type DownloadID string

DownloadID is a unique identifier used to identify downloads within the download history.

type DownloadInfo

type DownloadInfo struct {
	Destination     string  `json:"destination"`     // The destination of the download.
	DestinationType string  `json:"destinationtype"` // Can be "file", "memory buffer", or "http stream".
	Length          uint64  `json:"length"`          // The length requested for the download.
	Offset          uint64  `json:"offset"`          // The offset within the siafile requested for the download.
	SiaPath         SiaPath `json:"siapath"`         // The siapath of the file used for the download.

	Completed            bool      `json:"completed"`            // Whether or not the download has completed.
	EndTime              time.Time `json:"endtime"`              // The time when the download fully completed.
	Error                string    `json:"error"`                // Will be the empty string unless there was an error.
	Received             uint64    `json:"received"`             // Amount of data confirmed and decoded.
	StartTime            time.Time `json:"starttime"`            // The time when the download was started.
	StartTimeUnix        int64     `json:"starttimeunix"`        // The time when the download was started in unix format.
	TotalDataTransferred uint64    `json:"totaldatatransferred"` // Total amount of data transferred, including negotiation, etc.
}

DownloadInfo provides information about a file that has been requested for download.

type EncryptionManager added in v1.0.0

type EncryptionManager interface {
	// Encrypt will encrypt the wallet using the input key. Upon
	// encryption, a primary seed will be created for the wallet (no seed
	// exists prior to this point). If the key is blank, then the hash of
	// the seed that is generated will be used as the key.
	//
	// Encrypt can only be called once throughout the life of the wallet
	// and will return an error on subsequent calls (even after restarting
	// the wallet). To reset the wallet, the wallet files must be moved to
	// a different directory or deleted.
	Encrypt(masterKey crypto.CipherKey) (Seed, error)

	// Reset will reset the wallet, clearing the database and returning it to
	// the unencrypted state. Reset can only be called on a wallet that has
	// already been encrypted.
	Reset() error

	// Encrypted returns whether or not the wallet has been encrypted yet.
	// After being encrypted for the first time, the wallet can only be
	// unlocked using the encryption password.
	Encrypted() (bool, error)

	// InitFromSeed functions like Encrypt, but using a specified seed.
	// Unlike Encrypt, the blockchain will be scanned to determine the
	// seed's progress. For this reason, InitFromSeed should not be called
	// until the blockchain is fully synced.
	InitFromSeed(masterKey crypto.CipherKey, seed Seed) error

	// Lock deletes all keys in memory and prevents the wallet from being
	// used to spend coins or extract keys until 'Unlock' is called.
	Lock() error

	// Unlock must be called before the wallet is usable. All wallets and
	// wallet seeds are encrypted by default, and the wallet will not know
	// which addresses to watch for on the blockchain until unlock has been
	// called.
	//
	// All items in the wallet are encrypted using different keys which are
	// derived from the master key.
	Unlock(masterKey crypto.CipherKey) error

	// UnlockAsync must be called before the wallet is usable. All wallets and
	// wallet seeds are encrypted by default, and the wallet will not know
	// which addresses to watch for on the blockchain until unlock has been
	// called.
	// UnlockAsync will return a channel as soon as the wallet is unlocked but
	// before the wallet is caught up to consensus.
	//
	// All items in the wallet are encrypted using different keys which are
	// derived from the master key.
	UnlockAsync(masterKey crypto.CipherKey) <-chan error

	// ChangeKey changes the wallet's masterKey from masterKey to newKey,
	// re-encrypting the wallet with the provided key.
	ChangeKey(masterKey crypto.CipherKey, newKey crypto.CipherKey) error

	// IsMasterKey verifies that the masterKey is the key used to encrypt
	// the wallet.
	IsMasterKey(masterKey crypto.CipherKey) (bool, error)

	// ChangeKeyWithSeed is the same as ChangeKey but uses the primary seed
	// instead of the current masterKey.
	ChangeKeyWithSeed(seed Seed, newKey crypto.CipherKey) error

	// Unlocked returns true if the wallet is currently unlocked, false
	// otherwise.
	Unlocked() (bool, error)
}

EncryptionManager can encrypt, lock, unlock, and indicate the current status of the EncryptionManager.
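
A sketch of the first-run flow: encrypt the wallet if it has not been encrypted yet, then unlock it. Producing the crypto.CipherKey and backing up the returned seed are left to the caller.

package example

import (
	"gitlab.com/NebulousLabs/Sia/crypto"
	"gitlab.com/NebulousLabs/Sia/modules"
)

// initAndUnlock encrypts the wallet on first use and then unlocks it. The
// returned seed is only meaningful when the wallet was encrypted during this
// call; it should be backed up by the user.
func initAndUnlock(w modules.EncryptionManager, masterKey crypto.CipherKey) (modules.Seed, error) {
	var seed modules.Seed
	encrypted, err := w.Encrypted()
	if err != nil {
		return seed, err
	}
	if !encrypted {
		// Encrypt can only be called once for the life of the wallet.
		seed, err = w.Encrypt(masterKey)
		if err != nil {
			return seed, err
		}
	}
	// Unlock must be called before the wallet is usable.
	return seed, w.Unlock(masterKey)
}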

type ErasureCoder added in v1.0.0

type ErasureCoder interface {
	// NumPieces is the number of pieces returned by Encode.
	NumPieces() int

	// MinPieces is the minimum number of pieces that must be present to
	// recover the original data.
	MinPieces() int

	// Encode splits data into equal-length pieces, with some pieces
	// containing parity data.
	Encode(data []byte) ([][]byte, error)

	// Identifier returns the ErasureCoderIdentifier of the ErasureCoder.
	Identifier() ErasureCoderIdentifier

	// EncodeShards encodes the input data like Encode but accepts an already
	// sharded input.
	EncodeShards(data [][]byte) ([][]byte, error)

	// Reconstruct recovers the full set of encoded shards from the provided
	// pieces, of which at least MinPieces must be non-nil.
	Reconstruct(pieces [][]byte) error

	// Recover recovers the original data from pieces and writes it to w.
	// pieces should be identical to the slice returned by Encode (length and
	// order must be preserved), but with missing elements set to nil. n is
	// the number of bytes to be written to w; this is necessary because
	// pieces may have been padded with zeros during encoding.
	Recover(pieces [][]byte, n uint64, w io.Writer) error

	// SupportsPartialEncoding returns true if the ErasureCoder can be used
	// to encode/decode any crypto.SegmentSize bytes of an encoded piece or
	// false otherwise.
	SupportsPartialEncoding() bool

	// Type returns the type identifier of the ErasureCoder.
	Type() ErasureCoderType
}

An ErasureCoder is an error-correcting encoder and decoder.
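
To illustrate the recovery contract, the sketch below encodes a blob, drops every piece beyond MinPieces, and recovers the original bytes. Obtaining a concrete ErasureCoder is left to the caller.

package example

import (
	"bytes"
	"errors"

	"gitlab.com/NebulousLabs/Sia/modules"
)

// roundTrip encodes data, drops every piece beyond the minimum, and checks
// that Recover still reproduces the original bytes.
func roundTrip(ec modules.ErasureCoder, data []byte) error {
	pieces, err := ec.Encode(data)
	if err != nil {
		return err
	}
	// Missing pieces must be nil; the length and order of the slice must be
	// exactly what Encode returned.
	for i := ec.MinPieces(); i < ec.NumPieces(); i++ {
		pieces[i] = nil
	}
	var buf bytes.Buffer
	if err := ec.Recover(pieces, uint64(len(data)), &buf); err != nil {
		return err
	}
	if !bytes.Equal(buf.Bytes(), data) {
		return errors.New("recovered data does not match the original")
	}
	return nil
}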

type ErasureCoderIdentifier added in v1.4.2

type ErasureCoderIdentifier string

ErasureCoderIdentifier is an identifier that only matches another ErasureCoder's identifier if they both are of the same type and settings.

type ErasureCoderType added in v1.4.0

type ErasureCoderType [4]byte

ErasureCoderType is an identifier for the individual types of erasure coders.

type Explorer added in v1.0.0

type Explorer interface {
	Alerter

	// Block returns the block that matches the input block id. The bool
	// indicates whether the block appears in the blockchain.
	Block(types.BlockID) (types.Block, types.BlockHeight, bool)

	// BlockFacts returns a set of statistics about the blockchain as they
	// appeared at a given block.
	BlockFacts(types.BlockHeight) (BlockFacts, bool)

	// LatestBlockFacts returns the block facts of the last block
	// in the explorer's database.
	LatestBlockFacts() BlockFacts

	// Transaction returns the block that contains the input transaction
	// id. The transaction itself is either the block (indicating the miner
	// payouts are somehow involved), or it is a transaction inside of the
	// block. The bool indicates whether the transaction is found in the
	// consensus set.
	Transaction(types.TransactionID) (types.Block, types.BlockHeight, bool)

	// UnlockHash returns all of the transaction ids associated with the
	// provided unlock hash.
	UnlockHash(types.UnlockHash) []types.TransactionID

	// SiacoinOutput will return the siacoin output associated with the
	// input id.
	SiacoinOutput(types.SiacoinOutputID) (types.SiacoinOutput, bool)

	// SiacoinOutputID returns all of the transaction ids associated with
	// the provided siacoin output id.
	SiacoinOutputID(types.SiacoinOutputID) []types.TransactionID

	// FileContractHistory returns the history associated with a file
	// contract, which includes the file contract itself and all of the
	// revisions that have been submitted to the blockchain. The first bool
	// indicates whether the file contract exists, and the second bool
	// indicates whether a storage proof was successfully submitted for the
	// file contract.
	FileContractHistory(types.FileContractID) (fc types.FileContract, fcrs []types.FileContractRevision, fcExists bool, storageProofExists bool)

	// FileContractID returns all of the transaction ids associated with
	// the provided file contract id.
	FileContractID(types.FileContractID) []types.TransactionID

	// SiafundOutput will return the siafund output associated with the
	// input id.
	SiafundOutput(types.SiafundOutputID) (types.SiafundOutput, bool)

	// SiafundOutputID returns all of the transaction ids associated with
	// the provided siafund output id.
	SiafundOutputID(types.SiafundOutputID) []types.TransactionID

	Close() error
}

Explorer tracks the blockchain and provides tools for gathering statistics and finding objects or patterns within the blockchain.

type FeeManager added in v1.4.5

type FeeManager interface {
	// Close closes the FeeManager
	Close() error
}

FeeManager manages fees for applications

type File added in v1.3.2

type File interface {
	io.ReadWriteCloser
	Name() string
	ReadAt([]byte, int64) (int, error)
	Seek(int64, int) (int64, error)
	Sync() error
	Truncate(int64) error
	WriteAt([]byte, int64) (int, error)
}

File implements all of the methods that can be called on an os.File.

type FileContractDiff added in v0.3.1

type FileContractDiff struct {
	Direction    DiffDirection
	ID           types.FileContractID
	FileContract types.FileContract
}

A FileContractDiff indicates the addition or removal of a FileContract in the consensus set.

type FileInfo

type FileInfo struct {
	AccessTime       time.Time         `json:"accesstime"`
	Available        bool              `json:"available"`
	ChangeTime       time.Time         `json:"changetime"`
	CipherType       string            `json:"ciphertype"`
	CreateTime       time.Time         `json:"createtime"`
	Expiration       types.BlockHeight `json:"expiration"`
	Filesize         uint64            `json:"filesize"`
	Health           float64           `json:"health"`
	LocalPath        string            `json:"localpath"`
	MaxHealth        float64           `json:"maxhealth"`
	MaxHealthPercent float64           `json:"maxhealthpercent"`
	ModificationTime time.Time         `json:"modtime,siamismatch"` // Stays as 'modtime' in json for compatibility
	FileMode         os.FileMode       `json:"mode,siamismatch"`    // Field is called FileMode for fuse compatibility
	NumStuckChunks   uint64            `json:"numstuckchunks"`
	OnDisk           bool              `json:"ondisk"`
	Recoverable      bool              `json:"recoverable"`
	Redundancy       float64           `json:"redundancy"`
	Renewing         bool              `json:"renewing"`
	Skylinks         []string          `json:"skylinks"`
	SiaPath          SiaPath           `json:"siapath"`
	Stuck            bool              `json:"stuck"`
	StuckHealth      float64           `json:"stuckhealth"`
	UID              uint64            `json:"uid"`
	UploadedBytes    uint64            `json:"uploadedbytes"`
	UploadProgress   float64           `json:"uploadprogress"`
}

FileInfo provides information about a file.

func (FileInfo) IsDir added in v1.4.2

func (f FileInfo) IsDir() bool

IsDir implements os.FileInfo.

func (FileInfo) ModTime added in v1.4.0

func (f FileInfo) ModTime() time.Time

ModTime implements os.FileInfo.

func (FileInfo) Mode added in v1.4.2

func (f FileInfo) Mode() os.FileMode

Mode implements os.FileInfo.

func (FileInfo) Name added in v1.4.2

func (f FileInfo) Name() string

Name implements os.FileInfo.

func (FileInfo) Size added in v1.4.2

func (f FileInfo) Size() int64

Size implements os.FileInfo.

func (FileInfo) Sys added in v1.4.2

func (f FileInfo) Sys() interface{}

Sys implements os.FileInfo.

type FilePlacement added in v1.4.4

type FilePlacement struct {
	FileID       string
	Size         uint64
	SectorIndex  uint64
	SectorOffset uint64
}

FilePlacement contains the sector of a file and its offset in the sector.

func PackFiles added in v1.4.4

func PackFiles(files map[string]uint64) ([]FilePlacement, uint64, error)

PackFiles packs files, given as a map (id => size), into sectors in an efficient manner.

1. Sort the files by size in descending order.

2. Going from larger to smaller files, try to fit each file into an available bucket in a sector.

a. The first largest bucket should be chosen.

b. The first byte of the file must be aligned to a certain multiple of KiB,
based on its size.

  i. For a file size up to 32*2^n KiB, the file must align to 4*2^n KiB,
  for 0 <= n <= 7.

  ii. Alignment is based on the start of the sector, not the bucket.

  iii. Alignment may cause a file not to fit into an otherwise
  large-enough bucket.

c. If there are no suitable buckets, create a new sector and a new bucket
in that sector that fills the whole sector.

3. Pack the file into the bucket at the correct alignment.

a. Delete the bucket and make up to 2 new buckets. The new buckets, if any,
should stay ordered with regards to their positions in the sectors:

  i. If the file could not align to the start of the bucket, make a new
  bucket from the start of the old bucket to the start of the file.

  ii. If the file does not go to the end of the bucket, make a new bucket that
  goes from the end of the file to the end of the old bucket.

4. Return the array of file placements in the order that the files were packed (see the usage sketch below).
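
A minimal usage sketch; the file IDs and sizes are made up, and the second return value is printed without interpretation.

package example

import (
	"fmt"

	"gitlab.com/NebulousLabs/Sia/modules"
)

// packExample packs a few hypothetical files and reports where each landed.
func packExample() error {
	// Map of file ID => size in bytes; every file must be smaller than 4 MiB.
	files := map[string]uint64{
		"photo.jpg":  3 << 20,
		"notes.txt":  16 << 10,
		"backup.tar": 1 << 20,
	}
	placements, n, err := modules.PackFiles(files)
	if err != nil {
		return err
	}
	fmt.Println("second return value:", n)
	for _, p := range placements {
		fmt.Printf("%s: sector %d, offset %d, size %d\n",
			p.FileID, p.SectorIndex, p.SectorOffset, p.Size)
	}
	return nil
}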

type FileUploadParams added in v0.3.1

type FileUploadParams struct {
	Source              string
	SiaPath             SiaPath
	ErasureCode         ErasureCoder
	Force               bool
	DisablePartialChunk bool
	Repair              bool

	// CipherType was added later. If it is left blank, the renter will use the
	// default encryption method (as of writing, Threefish)
	CipherType crypto.CipherType
}

FileUploadParams contains the information used by the Renter to upload a file.

type FilterMode added in v1.4.0

type FilterMode int

FilterMode is the helper type for the enum constants for the HostDB filter mode

const (
	HostDBFilterError FilterMode = iota
	HostDBDisableFilter
	HostDBActivateBlacklist
	HostDBActiveWhitelist
)

HostDBFilterError, HostDBDisableFilter, HostDBActivateBlacklist, and HostDBActiveWhitelist are the constants used to enable and disable the filter mode of the renter's hostdb.

func (*FilterMode) FromString added in v1.4.0

func (fm *FilterMode) FromString(s string) error

FromString assigns the FilterMode from the provided string.

func (FilterMode) String added in v1.4.0

func (fm FilterMode) String() string

String returns the string value for the FilterMode

type Gateway

type Gateway interface {
	Alerter

	// BandwidthCounters returns the Gateway's upload and download bandwidth
	BandwidthCounters() (uint64, uint64, time.Time, error)

	// Connect establishes a persistent connection to a peer.
	Connect(NetAddress) error

	// ConnectManual is a Connect wrapper for a user-initiated Connect
	ConnectManual(NetAddress) error

	// Disconnect terminates a connection to a peer.
	Disconnect(NetAddress) error

	// DiscoverAddress discovers and returns the current public IP address
	// of the gateway. Unlike Address, DiscoverAddress is blocking and
	// might take multiple minutes to return. A channel to cancel the
	// discovery can be supplied optionally.
	DiscoverAddress(cancel <-chan struct{}) (net.IP, error)

	// ForwardPort adds a port mapping to the router. It will block until
	// the mapping is established or until it is interrupted by a shutdown.
	ForwardPort(port string) error

	// DisconnectManual is a Disconnect wrapper for a user-initiated
	// disconnect
	DisconnectManual(NetAddress) error

	// AddToBlacklist adds addresses to the blacklist of the gateway
	AddToBlacklist(addresses []string) error

	// Blacklist returns the current blacklist of the Gateway
	Blacklist() ([]string, error)

	// RemoveFromBlacklist removes addresses from the blacklist of the
	// gateway
	RemoveFromBlacklist(addresses []string) error

	// SetBlacklist sets the blacklist of the gateway
	SetBlacklist(addresses []string) error

	// Address returns the Gateway's address.
	Address() NetAddress

	// Peers returns the addresses that the Gateway is currently connected
	// to.
	Peers() []Peer

	// RegisterRPC registers a function to handle incoming connections that
	// supply the given RPC ID.
	RegisterRPC(string, RPCFunc)

	// RateLimits returns the currently set bandwidth limits of the gateway.
	RateLimits() (int64, int64)

	// SetRateLimits changes the rate limits for the peer-connections of the
	// gateway.
	SetRateLimits(downloadSpeed, uploadSpeed int64) error

	// UnregisterRPC unregisters an RPC and removes all references to the
	// RPCFunc supplied in the corresponding RegisterRPC call. References to
	// RPCFuncs registered with RegisterConnectCall are not removed and
	// should be removed with UnregisterConnectCall. If the RPC does not
	// exist no action is taken.
	UnregisterRPC(string)

	// RegisterConnectCall registers an RPC name and function to be called
	// upon connecting to a peer.
	RegisterConnectCall(string, RPCFunc)

	// UnregisterConnectCall unregisters an RPC and removes all references to the
	// RPCFunc supplied in the corresponding RegisterConnectCall call. References
	// to RPCFuncs registered with RegisterRPC are not removed and should be
	// removed with UnregisterRPC. If the RPC does not exist no action is taken.
	UnregisterConnectCall(string)

	// RPC calls an RPC on the given address. RPC cannot be called on an
	// address that the Gateway is not connected to.
	RPC(NetAddress, string, RPCFunc) error

	// Broadcast transmits obj, prefaced by the RPC name, to all of the
	// given peers in parallel.
	Broadcast(name string, obj interface{}, peers []Peer)

	// Online returns true if the gateway is connected to remote hosts
	Online() bool

	// Close safely stops the Gateway's listener process.
	Close() error
}

A Gateway facilitates the interactions between the local node and remote nodes (peers). It relays incoming blocks and transactions to local modules, and broadcasts outgoing blocks and transactions to peers. In a broad sense, it is responsible for ensuring that the local consensus set is consistent with the "network" consensus set.
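
A sketch of basic gateway use: connect to a peer and broadcast an object to all known peers. The address and the "RelayBlock" RPC name are placeholders; peers must have a handler registered for whatever name is actually used.

package example

import (
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// announceBlock connects to a peer at a placeholder address and broadcasts a
// block to every peer the gateway currently knows about.
func announceBlock(g modules.Gateway, b types.Block) error {
	// ConnectManual marks this as a user-initiated connection.
	if err := g.ConnectManual(modules.NetAddress("bootstrap.example.com:9981")); err != nil {
		return err
	}
	// "RelayBlock" is only an illustrative RPC name.
	g.Broadcast("RelayBlock", b, g.Peers())
	return nil
}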

type GenericAlerter added in v1.4.2

type GenericAlerter struct {
	// contains filtered or unexported fields
}

GenericAlerter implements the Alerter interface. It can be used as a helper type to implement the Alerter interface for modules and submodules.

func NewAlerter added in v1.4.2

func NewAlerter(module string) *GenericAlerter

NewAlerter creates a new alerter for the provided module.

func (*GenericAlerter) Alerts added in v1.4.2

func (a *GenericAlerter) Alerts() (crit, err, warn []Alert)

Alerts returns the current alerts tracked by the alerter.

func (*GenericAlerter) RegisterAlert added in v1.4.2

func (a *GenericAlerter) RegisterAlert(id AlertID, msg, cause string, severity AlertSeverity)

RegisterAlert adds an alert to the alerter.

func (*GenericAlerter) UnregisterAlert added in v1.4.2

func (a *GenericAlerter) UnregisterAlert(id AlertID)

UnregisterAlert removes an alert from the alerter by id.
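
A sketch of registering and clearing an alert from a submodule. The alert ID string is made up, and the SeverityWarning constant name is assumed to correspond to the Warning severity level described earlier.

package example

import "gitlab.com/NebulousLabs/Sia/modules"

// lowDiskSpaceAlert registers a warning when free space drops below a
// threshold and clears it once space is available again. The alert ID and the
// SeverityWarning constant are assumed names.
func lowDiskSpaceAlert(a *modules.GenericAlerter, freeBytes, minBytes uint64) {
	id := modules.AlertID("example-low-disk-space")
	if freeBytes < minBytes {
		a.RegisterAlert(id,
			"disk space is running low",           // message shown to the user
			"free space fell below the threshold", // cause, if known
			modules.SeverityWarning)
	} else {
		a.UnregisterAlert(id)
	}
}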

type Host

type Host interface {
	Alerter

	// AddSector will add a sector on the host. If the sector already
	// exists, a virtual sector will be added, meaning that the 'sectorData'
	// will be ignored and no new disk space will be consumed. The expiry
	// height is used to track what height the sector can be safely deleted
	// at, though typically the host will manually delete the sector before
	// the expiry height. The same sector can be added multiple times at
	// different expiry heights, and the host is expected to only store the
	// data once.
	AddSector(sectorRoot crypto.Hash, sectorData []byte) error

	// HasSector indicates whether the host stores a sector with a given
	// root or not.
	HasSector(crypto.Hash) bool

	// AddSectorBatch is a performance optimization over AddSector when
	// adding a large number of virtual sectors. It is necessary because otherwise
	// potentially thousands or even tens-of-thousands of fsync calls would
	// need to be made in serial, which would prevent renters from ever
	// successfully renewing.
	AddSectorBatch(sectorRoots []crypto.Hash) error

	// AddStorageFolder adds a storage folder to the host. The host may not
	// check that there is enough space available on-disk to support as much
	// storage as requested, though the manager should gracefully handle
	// running out of storage unexpectedly.
	AddStorageFolder(path string, size uint64) error

	// Announce submits a host announcement to the blockchain.
	Announce() error

	// AnnounceAddress submits an announcement using the given address.
	AnnounceAddress(NetAddress) error

	// The host needs to be able to shut down.
	Close() error

	// ConnectabilityStatus returns the connectability status of the host,
	// that is, if it can connect to itself on the configured NetAddress.
	ConnectabilityStatus() HostConnectabilityStatus

	// DeleteSector deletes a sector, meaning that the host will be
	// unable to upload that sector and be unable to provide a storage
	// proof on that sector. DeleteSector is for removing the data
	// entirely, and will remove instances of the sector appearing at all
	// heights. The primary purpose of DeleteSector is to comply with legal
	// requests to remove data.
	DeleteSector(sectorRoot crypto.Hash) error

	// ExternalSettings returns the settings of the host as seen by an
	// untrusted node querying the host for settings.
	ExternalSettings() HostExternalSettings

	// BandwidthCounters returns the Host's upload and download bandwidth
	BandwidthCounters() (uint64, uint64, time.Time, error)

	// FinancialMetrics returns the financial statistics of the host.
	FinancialMetrics() HostFinancialMetrics

	// InternalSettings returns the host's internal settings, including
	// potentially private or sensitive information.
	InternalSettings() HostInternalSettings

	// NetworkMetrics returns information on the types of RPC calls that
	// have been made to the host.
	NetworkMetrics() HostNetworkMetrics

	// PruneStaleStorageObligations will delete storage obligations from the
	// host that, for whatever reason, did not make it on the block chain.
	// As these stale storage obligations have an impact on the host
	// financial metrics, this method updates the host financial metrics to
	// show the correct values.
	PruneStaleStorageObligations() error

	// PublicKey returns the public key of the host.
	PublicKey() types.SiaPublicKey

	// ReadSector will read a sector from the host, returning the bytes that
	// match the input sector root.
	ReadSector(sectorRoot crypto.Hash) ([]byte, error)

	// ReadPartialSector will read a sector from the storage manager, returning the
	// 'length' bytes at offset 'offset' that match the input sector root.
	ReadPartialSector(sectorRoot crypto.Hash, offset, length uint64) ([]byte, error)

	// RemoveSector will remove a sector from the host. The height at which
	// the sector expires should be provided, so that the auto-expiry
	// information for that sector can be properly updated.
	RemoveSector(sectorRoot crypto.Hash) error

	// RemoveSectorBatch is a non-ACID performance optimization to remove a
	// large number of sectors from the host all at once. This is necessary when
	// clearing out an entire contract from the host.
	RemoveSectorBatch(sectorRoots []crypto.Hash) error

	// RemoveStorageFolder will remove a storage folder from the host. All
	// storage on the folder will be moved to other storage folders, meaning
	// that no data will be lost. If the host is unable to save data, an
	// error will be returned and the operation will be stopped. If the
	// force flag is set to true, errors will be ignored and the remove
	// operation will be completed, meaning that data will be lost.
	RemoveStorageFolder(index uint16, force bool) error

	// ResetStorageFolderHealth will reset the health statistics on a
	// storage folder.
	ResetStorageFolderHealth(index uint16) error

	// ResizeStorageFolder will grow or shrink a storage folder on the host.
	// The host may not check that there is enough space on-disk to support
	// growing the storage folder, but should gracefully handle running out
	// of space unexpectedly. When shrinking a storage folder, any data in
	// the folder that needs to be moved will be placed into other storage
	// folders, meaning that no data will be lost. If the manager is unable
	// to migrate the data, an error will be returned and the operation will
	// be stopped. If the force flag is set to true, errors will be ignored
	// and the resize operation completed, meaning that data will be lost.
	ResizeStorageFolder(index uint16, newSize uint64, force bool) error

	// SetInternalSettings sets the hosting parameters of the host.
	SetInternalSettings(HostInternalSettings) error

	// StorageObligations returns the set of storage obligations held by
	// the host.
	StorageObligations() []StorageObligation

	// StorageFolders will return a list of storage folders tracked by the
	// host.
	StorageFolders() []StorageFolderMetadata

	// WorkingStatus returns the working state of the host, determined by if
	// settings calls are increasing.
	WorkingStatus() HostWorkingStatus
}

A Host can take storage from disk and offer it to the network, managing things such as announcements, settings, and implementing all of the RPCs of the host protocol.
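
A sketch of bringing a host online: add a storage folder and announce the host. The folder path and size are placeholders.

package example

import "gitlab.com/NebulousLabs/Sia/modules"

// bringHostOnline adds a 64 GiB storage folder at a placeholder path and then
// announces the host on the blockchain so renters can discover it.
func bringHostOnline(h modules.Host) error {
	if err := h.AddStorageFolder("/var/sia/storage", 64<<30); err != nil {
		return err
	}
	return h.Announce()
}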

type HostAnnouncement

type HostAnnouncement struct {
	Specifier  types.Specifier
	NetAddress NetAddress
	PublicKey  types.SiaPublicKey
}

HostAnnouncement is an announcement by the host that appears in the blockchain. 'Specifier' is always 'PrefixHostAnnouncement'. The announcement is always followed by a signature from the public key of the whole announcement.

type HostConnectabilityStatus added in v1.2.0

type HostConnectabilityStatus string

HostConnectabilityStatus reports the connectability state of a host. Can be one of "checking", "connectable", or "not connectable"

type HostDB

type HostDB interface {
	Alerter

	// ActiveHosts returns the list of hosts that are actively being selected
	// from.
	ActiveHosts() ([]HostDBEntry, error)

	// AllHosts returns the full list of hosts known to the hostdb, sorted in
	// order of preference.
	AllHosts() ([]HostDBEntry, error)

	// CheckForIPViolations accepts a number of host public keys and returns the
	// ones that violate the rules of the addressFilter.
	CheckForIPViolations([]types.SiaPublicKey) ([]types.SiaPublicKey, error)

	// Close closes the hostdb.
	Close() error

	// EstimateHostScore returns the estimated score breakdown of a host with the
	// provided settings.
	EstimateHostScore(HostDBEntry, Allowance) (HostScoreBreakdown, error)

	// Filter returns the hostdb's filterMode and filteredHosts
	Filter() (FilterMode, map[string]types.SiaPublicKey, error)

	// SetFilterMode sets the renter's hostdb filter mode
	SetFilterMode(lm FilterMode, hosts []types.SiaPublicKey) error

	// Host returns the HostDBEntry for a given host.
	Host(pk types.SiaPublicKey) (HostDBEntry, bool, error)

	// IncrementSuccessfulInteractions increments the number of successful
	// interactions with a host for a given key
	IncrementSuccessfulInteractions(types.SiaPublicKey) error

	// IncrementFailedInteractions increments the number of failed interactions with
	// a host for a given key
	IncrementFailedInteractions(types.SiaPublicKey) error

	// InitialScanComplete returns a boolean indicating if the initial scan of the
	// hostdb is completed.
	InitialScanComplete() (bool, error)

	// IPViolationsCheck returns a boolean indicating if the IP violation check is
	// enabled or not.
	IPViolationsCheck() (bool, error)

	// RandomHosts returns a set of random hosts, weighted by their estimated
	// usefulness / attractiveness to the renter. RandomHosts will not return
	// any offline or inactive hosts.
	RandomHosts(int, []types.SiaPublicKey, []types.SiaPublicKey) ([]HostDBEntry, error)

	// RandomHostsWithAllowance is the same as RandomHosts but accepts an
	// allowance as an argument to be used instead of the allowance set in the
	// renter.
	RandomHostsWithAllowance(int, []types.SiaPublicKey, []types.SiaPublicKey, Allowance) ([]HostDBEntry, error)

	// ScoreBreakdown returns a detailed explanation of the various properties
	// of the host.
	ScoreBreakdown(HostDBEntry) (HostScoreBreakdown, error)

	// SetAllowance updates the allowance used by the hostdb for weighing hosts by
	// updating the host weight function. It will completely rebuild the hosttree so
	// it should be used with care.
	SetAllowance(Allowance) error

	// SetIPViolationCheck enables/disables the IP violation check within the
	// hostdb.
	SetIPViolationCheck(enabled bool) error

	// UpdateContracts rebuilds the knownContracts of the HostDB using the provided
	// contracts.
	UpdateContracts([]RenterContract) error
}

A HostDB is a database of hosts that the renter can use for figuring out who to upload to, and download from.
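
A sketch of two common hostdb calls: restricting the host set to a whitelist and then sampling hosts for contract formation. The two nil slice arguments to RandomHosts are assumed to be exclusion lists and are left empty here.

package example

import (
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// whitelistAndSample restricts the hostdb to the given hosts and then asks for
// up to n of them, weighted by score.
func whitelistAndSample(hdb modules.HostDB, whitelist []types.SiaPublicKey, n int) ([]modules.HostDBEntry, error) {
	if err := hdb.SetFilterMode(modules.HostDBActiveWhitelist, whitelist); err != nil {
		return nil, err
	}
	// The two slice arguments (assumed to be exclusion lists) are left nil.
	return hdb.RandomHosts(n, nil, nil)
}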

type HostDBEntry added in v1.0.0

type HostDBEntry struct {
	HostExternalSettings

	// FirstSeen is the last block height at which this host was announced.
	FirstSeen types.BlockHeight `json:"firstseen"`

	// Measurements that have been taken on the host. The most recent
	// measurements are kept in full detail, historic ones are compressed into
	// the historic values.
	HistoricDowntime time.Duration `json:"historicdowntime"`
	HistoricUptime   time.Duration `json:"historicuptime"`
	ScanHistory      HostDBScans   `json:"scanhistory"`

	// Measurements that are taken whenever we interact with a host.
	HistoricFailedInteractions     float64 `json:"historicfailedinteractions"`
	HistoricSuccessfulInteractions float64 `json:"historicsuccessfulinteractions"`
	RecentFailedInteractions       float64 `json:"recentfailedinteractions"`
	RecentSuccessfulInteractions   float64 `json:"recentsuccessfulinteractions"`

	LastHistoricUpdate types.BlockHeight `json:"lasthistoricupdate"`

	// Measurements related to the IP subnet mask.
	IPNets          []string  `json:"ipnets"`
	LastIPNetChange time.Time `json:"lastipnetchange"`

	// The public key of the host, stored separately to minimize risk of certain
	// MitM based vulnerabilities.
	PublicKey types.SiaPublicKey `json:"publickey"`

	// Filtered says whether or not a HostDBEntry is being filtered out of the
	// filtered hosttree due to the hosttree's filter mode.
	Filtered bool `json:"filtered"`
}

A HostDBEntry represents one host entry in the Renter's host DB. It aggregates the host's external settings and metrics with its public key.

type HostDBScan added in v1.1.0

type HostDBScan struct {
	Timestamp time.Time `json:"timestamp"`
	Success   bool      `json:"success"`
}

HostDBScan represents a single scan event.

type HostDBScans added in v1.1.0

type HostDBScans []HostDBScan

HostDBScans represents a sortable slice of scans.

func (HostDBScans) Len added in v1.1.0

func (s HostDBScans) Len() int

func (HostDBScans) Less added in v1.1.0

func (s HostDBScans) Less(i, j int) bool

func (HostDBScans) Swap added in v1.1.0

func (s HostDBScans) Swap(i, j int)
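
Because HostDBScans satisfies sort.Interface, the scan history can be sorted directly; the sketch below assumes Less orders scans by timestamp, oldest first.

package example

import (
	"sort"
	"time"

	"gitlab.com/NebulousLabs/Sia/modules"
)

// latestScan sorts the scan history and returns the time of the most recent
// scan, or the zero time if there is none. Less is assumed to order scans by
// timestamp, oldest first.
func latestScan(scans modules.HostDBScans) time.Time {
	if len(scans) == 0 {
		return time.Time{}
	}
	sort.Sort(scans)
	return scans[len(scans)-1].Timestamp
}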

type HostExternalSettings added in v1.0.0

type HostExternalSettings struct {
	// MaxBatchSize indicates the maximum size in bytes that a batch is
	// allowed to be. A batch is an array of revision actions; each
	// revision action can have a different number of bytes, depending on
	// the action, so the number of revision actions allowed depends on the
	// sizes of each.
	AcceptingContracts   bool              `json:"acceptingcontracts"`
	MaxDownloadBatchSize uint64            `json:"maxdownloadbatchsize"`
	MaxDuration          types.BlockHeight `json:"maxduration"`
	MaxReviseBatchSize   uint64            `json:"maxrevisebatchsize"`
	NetAddress           NetAddress        `json:"netaddress"`
	RemainingStorage     uint64            `json:"remainingstorage"`
	SectorSize           uint64            `json:"sectorsize"`
	TotalStorage         uint64            `json:"totalstorage"`
	UnlockHash           types.UnlockHash  `json:"unlockhash"`
	WindowSize           types.BlockHeight `json:"windowsize"`

	// Collateral is the amount of collateral that the host will put up for
	// storage in 'bytes per block', as an assurance to the renter that the
	// host really is committed to keeping the file. But, because the file
	// contract is created with no data available, this does leave the host
	// exposed to an attack by a wealthy renter whereby the renter causes
	// the host to lock up a large amount of funds in advance that the renter
	// then never uses, meaning the host will not have collateral for other
	// clients.
	//
	// MaxCollateral indicates the maximum number of coins that a host is
	// willing to put into a file contract.
	Collateral    types.Currency `json:"collateral"`
	MaxCollateral types.Currency `json:"maxcollateral"`

	// ContractPrice is the number of coins that the renter needs to pay to
	// the host just to open a file contract with them. Generally, the price
	// is only to cover the siacoin fees that the host will suffer when
	// submitting the file contract revision and storage proof to the
	// blockchain.
	//
	// BaseRPC price is a flat per-RPC fee charged by the host for any
	// non-free RPC.
	//
	// 'Download' bandwidth price is the cost per byte of downloading data
	// from the host. This includes metadata such as Merkle proofs.
	//
	// SectorAccessPrice is the cost per sector of data accessed when
	// downloading data.
	//
	// StoragePrice is the cost per-byte-per-block in hastings of storing
	// data on the host.
	//
	// 'Upload' bandwidth price is the cost per byte of uploading data to
	// the host.
	BaseRPCPrice           types.Currency `json:"baserpcprice"`
	ContractPrice          types.Currency `json:"contractprice"`
	DownloadBandwidthPrice types.Currency `json:"downloadbandwidthprice"`
	SectorAccessPrice      types.Currency `json:"sectoraccessprice"`
	StoragePrice           types.Currency `json:"storageprice"`
	UploadBandwidthPrice   types.Currency `json:"uploadbandwidthprice"`

	// Because the host has a public key, and settings are signed, and
	// because settings may be MITM'd, settings need a revision number so
	// that a renter can compare multiple sets of settings and determine
	// which is the most recent.
	RevisionNumber uint64 `json:"revisionnumber"`
	Version        string `json:"version"`

	SiaMuxPort string `json:"siamuxport"`
}

HostExternalSettings are the parameters advertised by the host. These are the values that the renter will request from the host in order to build its database.

NOTE: Anytime the pricing is extended for the HostExternalSettings, the Allowance also needs to be extended to support manually setting a maximum reasonable price.

type HostFinancialMetrics added in v1.0.0

type HostFinancialMetrics struct {
	// Every time a renter forms a contract with a host, a contract fee is
	// paid by the renter. These stats track the total contract fees.
	ContractCount                 uint64         `json:"contractcount"`
	ContractCompensation          types.Currency `json:"contractcompensation"`
	PotentialContractCompensation types.Currency `json:"potentialcontractcompensation"`

	// Metrics related to storage proofs, collateral, and submitting
	// transactions to the blockchain.
	LockedStorageCollateral types.Currency `json:"lockedstoragecollateral"`
	LostRevenue             types.Currency `json:"lostrevenue"`
	LostStorageCollateral   types.Currency `json:"loststoragecollateral"`
	PotentialStorageRevenue types.Currency `json:"potentialstoragerevenue"`
	RiskedStorageCollateral types.Currency `json:"riskedstoragecollateral"`
	StorageRevenue          types.Currency `json:"storagerevenue"`
	TransactionFeeExpenses  types.Currency `json:"transactionfeeexpenses"`

	// Bandwidth financial metrics.
	DownloadBandwidthRevenue          types.Currency `json:"downloadbandwidthrevenue"`
	PotentialDownloadBandwidthRevenue types.Currency `json:"potentialdownloadbandwidthrevenue"`
	PotentialUploadBandwidthRevenue   types.Currency `json:"potentialuploadbandwidthrevenue"`
	UploadBandwidthRevenue            types.Currency `json:"uploadbandwidthrevenue"`
}

HostFinancialMetrics provides financial statistics for the host, including money that is locked in contracts. Though verbose, these statistics should provide a clear picture of where the host's money is currently being used. The front end can consolidate stats where desired. Potential revenue refers to revenue that is available in a file contract for which the file contract window has not yet closed.

type HostInternalSettings added in v1.0.0

type HostInternalSettings struct {
	AcceptingContracts   bool              `json:"acceptingcontracts"`
	MaxDownloadBatchSize uint64            `json:"maxdownloadbatchsize"`
	MaxDuration          types.BlockHeight `json:"maxduration"`
	MaxReviseBatchSize   uint64            `json:"maxrevisebatchsize"`
	NetAddress           NetAddress        `json:"netaddress"`
	WindowSize           types.BlockHeight `json:"windowsize"`

	Collateral       types.Currency `json:"collateral"`
	CollateralBudget types.Currency `json:"collateralbudget"`
	MaxCollateral    types.Currency `json:"maxcollateral"`

	MinBaseRPCPrice           types.Currency `json:"minbaserpcprice"`
	MinContractPrice          types.Currency `json:"mincontractprice"`
	MinDownloadBandwidthPrice types.Currency `json:"mindownloadbandwidthprice"`
	MinSectorAccessPrice      types.Currency `json:"minsectoraccessprice"`
	MinStoragePrice           types.Currency `json:"minstorageprice"`
	MinUploadBandwidthPrice   types.Currency `json:"minuploadbandwidthprice"`

	EphemeralAccountExpiry     uint64         `json:"ephemeralaccountexpiry"`
	MaxEphemeralAccountBalance types.Currency `json:"maxephemeralaccountbalance"`
	MaxEphemeralAccountRisk    types.Currency `json:"maxephemeralaccountrisk"`
}

HostInternalSettings contains a list of settings that can be changed.
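
The sketch below shows how a host operator might populate a few of these settings before handing them to the host module; every numeric value is illustrative only, and 4320 blocks is used as a rough month (144 blocks per day).

package modulesdoc

import (
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// exampleInternalSettings returns illustrative host settings. The prices are
// expressed in hastings per byte (and per block for storage), derived from
// human-friendly per-TB figures.
func exampleInternalSettings() modules.HostInternalSettings {
	sc := types.SiacoinPrecision // one siacoin in hastings
	return modules.HostInternalSettings{
		AcceptingContracts: true,
		MaxDuration:        types.BlockHeight(4032), // ~4 weeks
		WindowSize:         types.BlockHeight(144),  // ~24 hours to submit storage proofs

		Collateral:       sc.Mul64(100).Div64(1e12).Div64(4320), // ~100 SC per TB per month
		CollateralBudget: sc.Mul64(100000),
		MaxCollateral:    sc.Mul64(5000),

		MinStoragePrice:           sc.Mul64(50).Div64(1e12).Div64(4320), // ~50 SC per TB per month
		MinDownloadBandwidthPrice: sc.Mul64(25).Div64(1e12),             // ~25 SC per TB
		MinUploadBandwidthPrice:   sc.Mul64(1).Div64(1e12),              // ~1 SC per TB
	}
}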

type HostNetworkMetrics added in v1.0.0

type HostNetworkMetrics struct {
	DownloadCalls     uint64 `json:"downloadcalls"`
	ErrorCalls        uint64 `json:"errorcalls"`
	FormContractCalls uint64 `json:"formcontractcalls"`
	RenewCalls        uint64 `json:"renewcalls"`
	ReviseCalls       uint64 `json:"revisecalls"`
	SettingsCalls     uint64 `json:"settingscalls"`
	UnrecognizedCalls uint64 `json:"unrecognizedcalls"`
}

HostNetworkMetrics reports the quantity of each type of RPC call that has been made to the host.

type HostOldExternalSettings added in v1.4.0

type HostOldExternalSettings struct {
	AcceptingContracts     bool              `json:"acceptingcontracts"`
	MaxDownloadBatchSize   uint64            `json:"maxdownloadbatchsize"`
	MaxDuration            types.BlockHeight `json:"maxduration"`
	MaxReviseBatchSize     uint64            `json:"maxrevisebatchsize"`
	NetAddress             NetAddress        `json:"netaddress"`
	RemainingStorage       uint64            `json:"remainingstorage"`
	SectorSize             uint64            `json:"sectorsize"`
	TotalStorage           uint64            `json:"totalstorage"`
	UnlockHash             types.UnlockHash  `json:"unlockhash"`
	WindowSize             types.BlockHeight `json:"windowsize"`
	Collateral             types.Currency    `json:"collateral"`
	MaxCollateral          types.Currency    `json:"maxcollateral"`
	ContractPrice          types.Currency    `json:"contractprice"`
	DownloadBandwidthPrice types.Currency    `json:"downloadbandwidthprice"`
	StoragePrice           types.Currency    `json:"storageprice"`
	UploadBandwidthPrice   types.Currency    `json:"uploadbandwidthprice"`
	RevisionNumber         uint64            `json:"revisionnumber"`
	Version                string            `json:"version"`
}

HostOldExternalSettings are the pre-v1.4.0 host settings.

type HostScoreBreakdown added in v1.1.1

type HostScoreBreakdown struct {
	Score          types.Currency `json:"score"`
	ConversionRate float64        `json:"conversionrate"`

	AgeAdjustment              float64 `json:"ageadjustment"`
	BurnAdjustment             float64 `json:"burnadjustment"`
	CollateralAdjustment       float64 `json:"collateraladjustment"`
	DurationAdjustment         float64 `json:"durationadjustment"`
	InteractionAdjustment      float64 `json:"interactionadjustment"`
	PriceAdjustment            float64 `json:"pricesmultiplier,siamismatch"`
	StorageRemainingAdjustment float64 `json:"storageremainingadjustment"`
	UptimeAdjustment           float64 `json:"uptimeadjustment"`
	VersionAdjustment          float64 `json:"versionadjustment"`
}

HostScoreBreakdown provides a piece-by-piece explanation of why a host has the score that they do.

NOTE: Renters are free to use whatever scoring they feel appropriate for hosts. Some renters will outright blacklist or whitelist sets of hosts. The results provided by this struct can only be used as a guide, and may vary significantly from machine to machine.
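
As a guide only, the sketch below collapses the individual adjustments into a single multiplier, under the assumption that the hostdb treats them as independent multiplicative factors; the helper is hypothetical.

package modulesdoc

import "gitlab.com/NebulousLabs/Sia/modules"

// compositeAdjustment multiplies the adjustment factors of a
// HostScoreBreakdown into one number. Treating the adjustments as
// independent multiplicative bonuses/penalties is an assumption; the actual
// Score field is computed by the hostdb's own weighting algorithm.
func compositeAdjustment(b modules.HostScoreBreakdown) float64 {
	return b.AgeAdjustment * b.BurnAdjustment * b.CollateralAdjustment *
		b.DurationAdjustment * b.InteractionAdjustment * b.PriceAdjustment *
		b.StorageRemainingAdjustment * b.UptimeAdjustment * b.VersionAdjustment
}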

type HostWorkingStatus added in v1.2.0

type HostWorkingStatus string

HostWorkingStatus reports the working state of a host. Can be one of "checking", "working", or "not working".

type Instruction added in v1.4.2

type Instruction struct {
	Specifier InstructionSpecifier
	Args      []byte
}

Instruction specifies a generic instruction used as an input to `mdm.ExecuteProgram`.

func RPCIReadSector added in v1.4.2

func RPCIReadSector(rootOff, offsetOff, lengthOff uint64, merkleProof bool) Instruction

RPCIReadSector is a convenience method to create an Instruction of type 'ReadSector'.

type InstructionSpecifier added in v1.4.2

type InstructionSpecifier types.Specifier

InstructionSpecifier specifies the type of the instruction.

type KeyManager added in v1.0.0

type KeyManager interface {
	// AllAddresses returns all addresses that the wallet is able to spend
	// from, including unseeded addresses. Addresses are returned sorted in
	// byte-order.
	AllAddresses() ([]types.UnlockHash, error)

	// AllSeeds returns all of the seeds that are being tracked by the
	// wallet, including the primary seed. Only the primary seed is used to
	// generate new addresses, but the wallet can spend funds sent to
	// public keys generated by any of the seeds returned.
	AllSeeds() ([]Seed, error)

	// CreateBackup will create a backup of the wallet at the provided
	// filepath. The backup will have all seeds and keys.
	CreateBackup(string) error

	// LastAddresses returns the last n addresses starting at the last seedProgress
	// for which an address was generated.
	LastAddresses(n uint64) ([]types.UnlockHash, error)

	// Load033xWallet will load a version 0.3.3.x wallet from disk and add all of
	// the keys in the wallet as unseeded keys.
	Load033xWallet(crypto.CipherKey, string) error

	// LoadSeed will recreate a wallet file using the recovery phrase.
	// LoadSeed only needs to be called if the original seed file or
	// encryption password was lost. The master key is used to encrypt the
	// recovery seed before saving it to disk.
	LoadSeed(crypto.CipherKey, Seed) error

	// LoadSiagKeys will take a set of filepaths that point to a siag key
	// and will have the siag keys loaded into the wallet so that they will
	// become spendable.
	LoadSiagKeys(crypto.CipherKey, []string) error

	// NextAddress returns a new coin addresses generated from the
	// primary seed.
	NextAddress() (types.UnlockConditions, error)

	// NextAddresses returns n new coin addresses generated from the primary
	// seed.
	NextAddresses(uint64) ([]types.UnlockConditions, error)

	// PrimarySeed returns the unencrypted primary seed of the wallet,
	// along with a uint64 indicating how many addresses may be safely
	// generated from the seed.
	PrimarySeed() (Seed, uint64, error)

	// SignTransaction signs txn using secret keys known to the wallet.
	// The transaction should be complete with the exception of the
	// Signature fields of each TransactionSignature referenced by toSign.
	SignTransaction(txn *types.Transaction, toSign []crypto.Hash) error

	// SweepSeed scans the blockchain for outputs generated from seed and
	// creates a transaction that transfers them to the wallet. Note that
	// this incurs a transaction fee. It returns the total value of the
	// outputs, minus the fee. If only siafunds were found, the fee is
	// deducted from the wallet.
	SweepSeed(seed Seed) (coins, funds types.Currency, err error)
}

KeyManager manages wallet keys, including the use of seeds, creating and loading backups, and providing a layer of compatibility for older wallet files.
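
A minimal sketch of a typical KeyManager round trip, assuming the caller has already built the transaction and knows which TransactionSignatures (identified by toSign) still need signing; the helper name is hypothetical.

package modulesdoc

import (
	"gitlab.com/NebulousLabs/Sia/crypto"
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// receiveAndSign derives a fresh address from the primary seed, then signs a
// transaction that spends from the wallet.
func receiveAndSign(km modules.KeyManager, txn *types.Transaction, toSign []crypto.Hash) (types.UnlockHash, error) {
	uc, err := km.NextAddress()
	if err != nil {
		return types.UnlockHash{}, err
	}
	if err := km.SignTransaction(txn, toSign); err != nil {
		return types.UnlockHash{}, err
	}
	return uc.UnlockHash(), nil
}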

type LoopChallengeRequest added in v1.4.0

type LoopChallengeRequest struct {
	// Entropy signed by the renter to prove that it controls the secret key
	// used to sign contract revisions. The actual data signed should be:
	//
	//    blake2b(RPCChallengePrefix | Challenge)
	Challenge [16]byte
}

LoopChallengeRequest contains a challenge for the renter to prove their identity. It is the host's first encrypted message, and immediately follows KeyExchangeResponse.
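
A sketch of how a renter could answer the challenge is shown below. It assumes crypto.HashAll yields the blake2b digest of the encoded prefix and challenge described in the comment above, and that RPCChallengePrefix is the specifier exported by this package; the helper itself is hypothetical.

package modulesdoc

import (
	"gitlab.com/NebulousLabs/Sia/crypto"
	"gitlab.com/NebulousLabs/Sia/modules"
)

// signChallenge hashes the challenge prefix together with the challenge
// entropy and signs the digest with the secret key that controls the
// contract being unlocked.
func signChallenge(req modules.LoopChallengeRequest, sk crypto.SecretKey) crypto.Signature {
	digest := crypto.HashAll(modules.RPCChallengePrefix, req.Challenge)
	return crypto.SignHash(digest, sk)
}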

type LoopContractAdditions added in v1.4.0

type LoopContractAdditions struct {
	Parents []types.Transaction
	Inputs  []types.SiacoinInput
	Outputs []types.SiacoinOutput
}

LoopContractAdditions contains the parent transaction, inputs, and outputs added by the host when negotiating a file contract.

type LoopContractSignatures added in v1.4.0

type LoopContractSignatures struct {
	ContractSignatures []types.TransactionSignature
	RevisionSignature  types.TransactionSignature
}

LoopContractSignatures contains the signatures for a contract transaction and initial revision. These signatures are sent by both the renter and host during contract formation and renewal.

type LoopFormContractRequest added in v1.4.0

type LoopFormContractRequest struct {
	Transactions []types.Transaction
	RenterKey    types.SiaPublicKey
}

LoopFormContractRequest contains the request parameters for RPCLoopFormContract.

type LoopKeyExchangeRequest added in v1.4.0

type LoopKeyExchangeRequest struct {
	// The renter's ephemeral X25519 public key.
	PublicKey crypto.X25519PublicKey

	// Encryption ciphers that the renter supports.
	Ciphers []types.Specifier
}

LoopKeyExchangeRequest is the first object sent when initializing the renter-host protocol.

type LoopKeyExchangeResponse added in v1.4.0

type LoopKeyExchangeResponse struct {
	// The host's ephemeral X25519 public key.
	PublicKey crypto.X25519PublicKey

	// Signature of (Host's Public Key | Renter's Public Key). Note that this
	// also serves to authenticate the host.
	Signature []byte

	// Cipher selected by the host. Must be one of the ciphers offered in
	// the key exchange request.
	Cipher types.Specifier
}

LoopKeyExchangeResponse contains the host's response to the KeyExchangeRequest.

type LoopLockRequest added in v1.4.0

type LoopLockRequest struct {
	// The contract to lock; implicitly referenced by subsequent RPCs.
	ContractID types.FileContractID

	// The host's challenge, signed by the renter's contract key.
	Signature []byte

	// Lock timeout, in milliseconds.
	Timeout uint64
}

LoopLockRequest contains the request parameters for RPCLoopLock.

type LoopLockResponse added in v1.4.0

type LoopLockResponse struct {
	Acquired     bool
	NewChallenge [16]byte
	Revision     types.FileContractRevision
	Signatures   []types.TransactionSignature
}

LoopLockResponse contains the response data for RPCLoopLock.

type LoopReadRequest added in v1.4.0

type LoopReadRequest struct {
	Sections    []LoopReadRequestSection
	MerkleProof bool

	NewRevisionNumber    uint64
	NewValidProofValues  []types.Currency
	NewMissedProofValues []types.Currency
	Signature            []byte
}

LoopReadRequest contains the request parameters for RPCLoopRead.

type LoopReadRequestSection added in v1.4.0

type LoopReadRequestSection struct {
	MerkleRoot [32]byte
	Offset     uint32
	Length     uint32
}

LoopReadRequestSection is a section requested in LoopReadRequest.

type LoopReadResponse added in v1.4.0

type LoopReadResponse struct {
	Signature   []byte
	Data        []byte
	MerkleProof []crypto.Hash
}

LoopReadResponse contains the response data for RPCLoopRead.

type LoopRenewAndClearContractRequest added in v1.4.4

type LoopRenewAndClearContractRequest struct {
	Transactions []types.Transaction
	RenterKey    types.SiaPublicKey

	FinalValidProofValues  []types.Currency
	FinalMissedProofValues []types.Currency
}

LoopRenewAndClearContractRequest contains the request parameters for RPCLoopRenewClearContract.

type LoopRenewAndClearContractSignatures added in v1.4.4

type LoopRenewAndClearContractSignatures struct {
	ContractSignatures []types.TransactionSignature
	RevisionSignature  types.TransactionSignature

	FinalRevisionSignature []byte
}

LoopRenewAndClearContractSignatures contains the signatures for a contract transaction, initial revision and final revision of the old contract. These signatures are sent by the renter during contract renewal.

type LoopRenewContractRequest added in v1.4.0

type LoopRenewContractRequest struct {
	Transactions []types.Transaction
	RenterKey    types.SiaPublicKey
}

LoopRenewContractRequest contains the request parameters for RPCLoopRenewContract.

type LoopSectorRootsRequest added in v1.4.0

type LoopSectorRootsRequest struct {
	RootOffset uint64
	NumRoots   uint64

	NewRevisionNumber    uint64
	NewValidProofValues  []types.Currency
	NewMissedProofValues []types.Currency
	Signature            []byte
}

LoopSectorRootsRequest contains the request parameters for RPCLoopSectorRoots.

type LoopSectorRootsResponse added in v1.4.0

type LoopSectorRootsResponse struct {
	Signature   []byte
	SectorRoots []crypto.Hash
	MerkleProof []crypto.Hash
}

LoopSectorRootsResponse contains the response data for RPCLoopSectorRoots.

type LoopSettingsResponse added in v1.4.0

type LoopSettingsResponse struct {
	Settings []byte // actually a JSON-encoded HostExternalSettings
}

LoopSettingsResponse contains the response data for RPCLoopSettings.

type LoopWriteAction added in v1.4.0

type LoopWriteAction struct {
	Type types.Specifier
	A, B uint64
	Data []byte
}

LoopWriteAction is a generic Write action. The meaning of each field depends on the Type of the action.

type LoopWriteMerkleProof added in v1.4.0

type LoopWriteMerkleProof struct {
	OldSubtreeHashes []crypto.Hash
	OldLeafHashes    []crypto.Hash
	NewMerkleRoot    crypto.Hash
}

LoopWriteMerkleProof contains the optional Merkle proof for response data for RPCLoopWrite.

type LoopWriteRequest added in v1.4.0

type LoopWriteRequest struct {
	Actions     []LoopWriteAction
	MerkleProof bool

	NewRevisionNumber    uint64
	NewValidProofValues  []types.Currency
	NewMissedProofValues []types.Currency
}

LoopWriteRequest contains the request parameters for RPCLoopWrite.

type LoopWriteResponse added in v1.4.0

type LoopWriteResponse struct {
	Signature []byte
}

LoopWriteResponse contains the response data for RPCLoopWrite.

type MerkleRootSet added in v1.1.1

type MerkleRootSet []crypto.Hash

MerkleRootSet is a set of Merkle roots that is marshaled to JSON more compactly than a plain []crypto.Hash.

func (MerkleRootSet) MarshalJSON added in v1.1.1

func (mrs MerkleRootSet) MarshalJSON() ([]byte, error)

MarshalJSON defines a JSON encoding for a MerkleRootSet.

func (*MerkleRootSet) UnmarshalJSON added in v1.1.1

func (mrs *MerkleRootSet) UnmarshalJSON(b []byte) error

UnmarshalJSON attempts to decode a MerkleRootSet, falling back on the legacy decoding of a []crypto.Hash if that fails.
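
The round trip below exercises the custom JSON encoding: the compact form produced by MarshalJSON decodes back into the same set, and a legacy []crypto.Hash encoding would be accepted by the fallback path.

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"gitlab.com/NebulousLabs/Sia/crypto"
	"gitlab.com/NebulousLabs/Sia/modules"
)

func main() {
	// Two placeholder roots stand in for real sector roots.
	mrs := modules.MerkleRootSet{
		crypto.HashBytes([]byte("sector one")),
		crypto.HashBytes([]byte("sector two")),
	}
	b, err := json.Marshal(mrs)
	if err != nil {
		log.Fatal(err)
	}
	var decoded modules.MerkleRootSet
	if err := json.Unmarshal(b, &decoded); err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(decoded), "roots decoded, equal:", decoded[0] == mrs[0] && decoded[1] == mrs[1])
}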

type Miner added in v0.3.1

type Miner interface {
	BlockManager
	CPUMiner
	io.Closer
}

The Miner interface provides access to mining features.

type MountInfo added in v1.4.2

type MountInfo struct {
	MountPoint string  `json:"mountpoint"`
	SiaPath    SiaPath `json:"siapath"`

	MountOptions MountOptions `json:"mountoptions"`
}

MountInfo contains information about a mounted FUSE filesystem.

type MountOptions added in v1.4.2

type MountOptions struct {
	AllowOther bool `json:"allowother"`
	ReadOnly   bool `json:"readonly"`
}

MountOptions specify various settings of a FUSE filesystem mount.

type NetAddress

type NetAddress string

A NetAddress contains the information needed to contact a peer.

func DecodeAnnouncement added in v1.0.0

func DecodeAnnouncement(fullAnnouncement []byte) (na NetAddress, spk types.SiaPublicKey, err error)

DecodeAnnouncement decodes announcement bytes into a host announcement, verifying the prefix and the signature.

func (NetAddress) Host

func (na NetAddress) Host() string

Host removes the port from a NetAddress, returning just the host. If the address is not of the form "host:port" the empty string is returned. The host will still be returned for invalid NetAddresses (e.g. "unqualified:0" will return "unqualified"), but in general you should only call Host on valid addresses.

func (NetAddress) IsLocal added in v1.0.3

func (na NetAddress) IsLocal() bool

IsLocal returns true if the NetAddress belongs to a local address range such as 192.168.x.x or 127.x.x.x.

func (NetAddress) IsLoopback added in v1.0.0

func (na NetAddress) IsLoopback() bool

IsLoopback returns true for IP addresses that are on the same machine.

func (NetAddress) IsStdValid added in v1.0.3

func (na NetAddress) IsStdValid() error

IsStdValid returns an error if the NetAddress is invalid. A valid NetAddress is of the form "host:port", such that "host" is either a valid IPv4/IPv6 address or a valid hostname, and "port" is an integer in the range [1,65535]. Valid IPv4 addresses, IPv6 addresses, and hostnames are detailed in RFCs 791, 2460, and 952, respectively.

func (NetAddress) IsValid added in v1.0.0

func (na NetAddress) IsValid() error

IsValid is an extension to IsStdValid that also forbids the loopback address. IsValid is being phased out in favor of allowing the loopback address but verifying through other means that the connection is not to yourself (which is the original reason that the loopback address was banned).

func (NetAddress) Port

func (na NetAddress) Port() string

Port returns the NetAddress object's port number. If the address is not of the form "host:port" the empty string is returned. The port will still be returned for invalid NetAddresses (e.g. "localhost:0" will return "0"), but in general you should only call Port on valid addresses.
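
A short usage sketch of the NetAddress helpers described above; the addresses are placeholders.

package main

import (
	"fmt"

	"gitlab.com/NebulousLabs/Sia/modules"
)

func main() {
	na := modules.NetAddress("example.com:9981")
	fmt.Println(na.Host(), na.Port()) // "example.com" "9981"
	if err := na.IsStdValid(); err != nil {
		fmt.Println("invalid address:", err)
	}

	// Host and Port return the empty string when the address is not of the
	// form "host:port" at all.
	bad := modules.NetAddress("no-port-here")
	fmt.Printf("%q %q\n", bad.Host(), bad.Port())
}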

type PartialChunk added in v1.4.2

type PartialChunk struct {
	ChunkID        CombinedChunkID // The ChunkID of the combined chunk the partial is in.
	InPartialsFile bool            // 'true' if the combined chunk is already in the partials siafile.
	Length         uint64          // length of the partial chunk within the combined chunk.
	Offset         uint64          // offset of the partial chunk within the combined chunk.
}

PartialChunk holds information about a partial chunk within a combined chunk.

type Peer added in v1.0.0

type Peer struct {
	Inbound    bool       `json:"inbound"`
	Local      bool       `json:"local"`
	NetAddress NetAddress `json:"netaddress"`
	Version    string     `json:"version"`
}

Peer contains all the info necessary to Broadcast to a peer.

type PeerConn added in v0.3.1

type PeerConn interface {
	net.Conn
	RPCAddr() NetAddress
}

A PeerConn is the connection type used when communicating with peers during an RPC. It is identical to a net.Conn with the additional RPCAddr method. This method acts as an identifier for peers and is the address that the peer can be dialed on. It is also the address that should be used when calling an RPC on the peer.

type ProcessedInput added in v1.0.0

type ProcessedInput struct {
	ParentID       types.OutputID   `json:"parentid"`
	FundType       types.Specifier  `json:"fundtype"`
	WalletAddress  bool             `json:"walletaddress"`
	RelatedAddress types.UnlockHash `json:"relatedaddress"`
	Value          types.Currency   `json:"value"`
}

A ProcessedInput represents funding to a transaction. The input is coming from an address and going to the outputs. The fund type is either 'SiacoinInput' or 'SiafundInput'.

type ProcessedOutput added in v1.0.0

type ProcessedOutput struct {
	ID             types.OutputID    `json:"id"`
	FundType       types.Specifier   `json:"fundtype"`
	MaturityHeight types.BlockHeight `json:"maturityheight"`
	WalletAddress  bool              `json:"walletaddress"`
	RelatedAddress types.UnlockHash  `json:"relatedaddress"`
	Value          types.Currency    `json:"value"`
}

A ProcessedOutput is a siacoin output that appears in a transaction. Some outputs mature immediately, some are delayed, and some may never mature at all (in the event of storage proofs).

Fund type can either be 'SiacoinOutput', 'SiafundOutput', 'ClaimOutput', 'MinerPayout', or 'MinerFee'. All outputs except the miner fee create outputs accessible to an address. Miner fees are not spendable, and instead contribute to the block subsidy.

MaturityHeight indicates at what block height the output becomes available. SiacoinInputs and SiafundInputs become available immediately. ClaimInputs and MinerPayouts become available after 144 confirmations.

type ProcessedTransaction added in v1.0.0

type ProcessedTransaction struct {
	Transaction           types.Transaction   `json:"transaction"`
	TransactionID         types.TransactionID `json:"transactionid"`
	ConfirmationHeight    types.BlockHeight   `json:"confirmationheight"`
	ConfirmationTimestamp types.Timestamp     `json:"confirmationtimestamp"`

	Inputs  []ProcessedInput  `json:"inputs"`
	Outputs []ProcessedOutput `json:"outputs"`
}

A ProcessedTransaction is a transaction that has been processed into explicit inputs and outputs and tagged with some header data such as confirmation height + timestamp.

Because of the block subsidy, a block is considered a transaction. Since there is technically no transaction id for the block subsidy, the block id is used instead.
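
The sketch below (a hypothetical helper) folds the explicit inputs and outputs of a ProcessedTransaction into the amounts received by and spent from the wallet's own addresses, ignoring fund types and maturity heights for brevity.

package modulesdoc

import (
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// walletDelta sums the values of outputs sent to wallet addresses and of
// inputs funded by wallet addresses.
func walletDelta(pt modules.ProcessedTransaction) (received, spent types.Currency) {
	received, spent = types.ZeroCurrency, types.ZeroCurrency
	for _, o := range pt.Outputs {
		if o.WalletAddress {
			received = received.Add(o.Value)
		}
	}
	for _, i := range pt.Inputs {
		if i.WalletAddress {
			spent = spent.Add(i.Value)
		}
	}
	return received, spent
}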

type ProductionDependencies added in v1.3.2

type ProductionDependencies struct {
	// contains filtered or unexported fields
}

ProductionDependencies are the dependencies used in a Release or Debug production build.

func (*ProductionDependencies) AtLeastOne added in v1.3.2

func (*ProductionDependencies) AtLeastOne() uint64

AtLeastOne will return a value that is equal to 1 if debugging is disabled. If debugging is enabled, a higher value may be returned.

func (*ProductionDependencies) CreateFile added in v1.3.2

func (pd *ProductionDependencies) CreateFile(s string) (File, error)

CreateFile gives the host the ability to create files on the operating system.

func (*ProductionDependencies) Destruct added in v1.3.2

func (pd *ProductionDependencies) Destruct()

Destruct checks that all resources have been cleaned up correctly.

func (*ProductionDependencies) DialTimeout added in v1.3.2

func (*ProductionDependencies) DialTimeout(addr NetAddress, timeout time.Duration) (net.Conn, error)

DialTimeout creates a TCP connection to a certain address with the specified timeout.

func (*ProductionDependencies) Disrupt added in v1.3.2

func (*ProductionDependencies) Disrupt(string) bool

Disrupt can be used to inject specific behavior into a module by overwriting it using a custom dependency.

func (*ProductionDependencies) Listen added in v1.3.2

func (*ProductionDependencies) Listen(s1, s2 string) (net.Listener, error)

Listen gives the host the ability to receive incoming connections.

func (*ProductionDependencies) LoadFile added in v1.3.2

func (*ProductionDependencies) LoadFile(meta persist.Metadata, data interface{}, filename string) error

LoadFile loads JSON encoded data from a file.

func (*ProductionDependencies) LookupIP added in v1.3.4

func (*ProductionDependencies) LookupIP(host string) ([]net.IP, error)

LookupIP resolves a hostname to a number of IP addresses. If an IP address is provided as an argument it will just return that IP.

func (*ProductionDependencies) MkdirAll added in v1.3.2

func (*ProductionDependencies) MkdirAll(s string, fm os.FileMode) error

MkdirAll gives the host the ability to create chains of folders within the filesystem.

func (*ProductionDependencies) NewLogger added in v1.3.2

func (*ProductionDependencies) NewLogger(s string) (*persist.Logger, error)

NewLogger creates a logger that the host can use to log messages and write critical statements.

func (*ProductionDependencies) Open added in v1.4.0

func (pd *ProductionDependencies) Open(s string) (File, error)

Open opens a file readonly.

func (*ProductionDependencies) OpenDatabase added in v1.3.2

func (*ProductionDependencies) OpenDatabase(md persist.Metadata, f string) (*persist.BoltDatabase, error)

OpenDatabase creates a database that the host can use to interact with large volumes of persistent data.

func (*ProductionDependencies) OpenFile added in v1.3.2

func (pd *ProductionDependencies) OpenFile(s string, i int, fm os.FileMode) (File, error)

OpenFile opens a file with the specified mode and permissions.

func (*ProductionDependencies) RandRead added in v1.3.2

func (*ProductionDependencies) RandRead(b []byte) (int, error)

RandRead fills the input bytes with random data.

func (*ProductionDependencies) ReadFile added in v1.3.2

func (*ProductionDependencies) ReadFile(s string) ([]byte, error)

ReadFile reads a file from the filesystem.

func (*ProductionDependencies) RemoveFile added in v1.3.2

func (pd *ProductionDependencies) RemoveFile(s string) error

RemoveFile will remove a file from disk.

func (*ProductionDependencies) RenameFile added in v1.3.2

func (pd *ProductionDependencies) RenameFile(s1 string, s2 string) error

RenameFile renames a file on disk.

func (*ProductionDependencies) Resolver added in v1.3.5

func (*ProductionDependencies) Resolver() Resolver

Resolver returns the ProductionResolver.

func (*ProductionDependencies) SaveFileSync added in v1.3.2

func (*ProductionDependencies) SaveFileSync(meta persist.Metadata, data interface{}, filename string) error

SaveFileSync writes JSON encoded data to a file and syncs the file to disk afterwards.

func (*ProductionDependencies) Sleep added in v1.3.2

func (*ProductionDependencies) Sleep(d time.Duration)

Sleep blocks the calling thread for a certain duration.

func (*ProductionDependencies) Symlink(s1, s2 string) error

Symlink creates a symlink between a source and a destination file.

func (*ProductionDependencies) WriteFile added in v1.3.2

func (*ProductionDependencies) WriteFile(s string, b []byte, fm os.FileMode) error

WriteFile writes a file to the filesystem.

type ProductionFile added in v1.3.2

type ProductionFile struct {
	*os.File
	// contains filtered or unexported fields
}

ProductionFile is the implementation of the File interface that is used in a Release or Debug production build.

func (*ProductionFile) Close added in v1.3.2

func (pf *ProductionFile) Close() error

Close will close a file, checking whether the file handle is open somewhere else before closing completely. This check is performed on Windows but not Linux, therefore a mock is used to ensure that Linux testing picks up potential problems that would be seen on Windows.

type ProductionResolver added in v1.3.5

type ProductionResolver struct{}

ProductionResolver is the hostname resolver used in production builds.

func (ProductionResolver) LookupIP added in v1.3.5

func (ProductionResolver) LookupIP(host string) ([]net.IP, error)

LookupIP is a passthrough function to net.LookupIP. In testing builds it returns a random IP.

type RPCError added in v1.4.0

type RPCError struct {
	Type        types.Specifier
	Data        []byte // structure depends on Type
	Description string // human-readable error string
}

An RPCError may be sent instead of a Response to any RPC.

func (*RPCError) Error added in v1.4.0

func (e *RPCError) Error() string

Error implements the error interface.

type RPCFunc

type RPCFunc func(PeerConn) error

RPCFunc is the type signature of functions that handle RPCs. It is used for both the caller and the callee. RPCFuncs may perform locking. RPCFuncs may close the connection early, and it is recommended that they do so to avoid keeping the connection open after all necessary I/O has been performed.

type RPCPriceTable added in v1.4.3

type RPCPriceTable struct {
	// UUID is a specifier that uniquely identifies this price table
	UUID types.Specifier

	// Expiry is a unix timestamp that specifies the time until which this
	// price table is valid.
	Expiry int64 `json:"expiry"`

	// UpdatePriceTableCost refers to the cost of fetching a new price table
	// from the host.
	UpdatePriceTableCost types.Currency `json:"updatepricetablecost"`

	// MDM related costs
	//
	// InitBaseCost is the amount of cost that is incurred when an MDM program
	// starts to run. This doesn't include the memory used by the program data.
	// The total cost to initialize a program is calculated as
	// InitCost = InitBaseCost + MemoryTimeCost * Time
	InitBaseCost types.Currency `json:"initbasecost"`

	// MemoryTimeCost is the amount of cost per byte per time that is incurred
	// by the memory consumption of the program.
	MemoryTimeCost types.Currency `json:"memorytimecost"`

	// Cost values specific to the DropSectors instruction.
	DropSectorsBaseCost   types.Currency `json:"dropsectorsbasecost"`
	DropSectorsLengthCost types.Currency `json:"dropsectorslengthcost"`

	// Cost values specific to the Read instruction.
	ReadBaseCost   types.Currency `json:"readbasecost"`
	ReadLengthCost types.Currency `json:"readlengthcost"`

	// Cost values specific to the Write instruction.
	WriteBaseCost   types.Currency `json:"writebasecost"`
	WriteLengthCost types.Currency `json:"writelengthcost"`
	WriteStoreCost  types.Currency `json:"writestorecost"`
}

RPCPriceTable contains the cost of executing an RPC on a host. Each host can set its own prices for the individual MDM instructions and RPC costs.
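
The sketch below applies the initialization formula quoted in the InitBaseCost comment, InitCost = InitBaseCost + MemoryTimeCost * Time; how the time estimate for a program is obtained is outside the scope of this example, and the helper is hypothetical.

package modulesdoc

import (
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// programInitCost computes InitBaseCost + MemoryTimeCost * time, where time
// is the estimated running time of the MDM program in the host's time units.
func programInitCost(pt modules.RPCPriceTable, time uint64) types.Currency {
	return pt.InitBaseCost.Add(pt.MemoryTimeCost.Mul64(time))
}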

type RPCUpdatePriceTableResponse added in v1.4.3

type RPCUpdatePriceTableResponse struct {
	PriceTableJSON []byte
}

RPCUpdatePriceTableResponse contains a JSON-encoded RPC price table.

type RecoverableContract added in v1.4.0

type RecoverableContract struct {
	types.FileContract
	// ID is the FileContract's ID.
	ID types.FileContractID `json:"id"`
	// HostPublicKey is the public key of the host we formed this contract
	// with.
	HostPublicKey types.SiaPublicKey `json:"hostpublickey"`
	// InputParentID is the ParentID of the first SiacoinInput of the
	// transaction that contains this contract.
	InputParentID types.SiacoinOutputID `json:"inputparentid"`
	// StartHeight is the estimated start height of a recoverable contract.
	StartHeight types.BlockHeight `json:"startheight"`
	// TxnFee is the fee of the transaction that contains the contract.
	TxnFee types.Currency `json:"txnfee"`
}

RecoverableContract is a types.FileContract as it appears on the blockchain with additional fields which contain the information required to recover its latest revision from a host.

type Renter

type Renter interface {
	Alerter

	// ActiveHosts provides the list of hosts that the renter is selecting,
	// sorted by preference.
	ActiveHosts() ([]HostDBEntry, error)

	// AllHosts returns the full list of hosts known to the renter.
	AllHosts() ([]HostDBEntry, error)

	// Close closes the Renter.
	Close() error

	// CancelContract cancels a specific contract of the renter.
	CancelContract(id types.FileContractID) error

	// Contracts returns the staticContracts of the renter's hostContractor.
	Contracts() []RenterContract

	// ContractStatus returns the status of the contract with the given ID in the
	// watchdog, and a bool indicating whether or not the watchdog is aware of it.
	ContractStatus(fcID types.FileContractID) (ContractWatchStatus, bool)

	// CreateBackup creates a backup of the renter's siafiles. If a secret is not
	// nil, the backup will be encrypted using the provided secret.
	CreateBackup(dst string, secret []byte) error

	// LoadBackup loads the siafiles of a previously created backup into the
	// renter. If the backup is encrypted, secret will be used to decrypt it.
	// Otherwise the argument is ignored.
	// If a file from the backup would have the same path as an already
	// existing file, a suffix of the form _[num] is appended to the siapath.
	// [num] is incremented until a siapath is found that is not already in
	// use.
	LoadBackup(src string, secret []byte) error

	// InitRecoveryScan starts scanning the whole blockchain for recoverable
	// contracts within a separate thread.
	InitRecoveryScan() error

	// OldContracts returns the oldContracts of the renter's hostContractor.
	OldContracts() []RenterContract

	// ContractorChurnStatus returns contract churn stats for the current period.
	ContractorChurnStatus() ContractorChurnStatus

	// ContractUtility provides the contract utility for a given host key.
	ContractUtility(pk types.SiaPublicKey) (ContractUtility, bool)

	// CurrentPeriod returns the height at which the current allowance period
	// began.
	CurrentPeriod() types.BlockHeight

	// Mount mounts a FUSE filesystem at mountPoint, making the contents of sp
	// available via the local filesystem.
	Mount(mountPoint string, sp SiaPath, opts MountOptions) error

	// MountInfo returns the list of currently mounted FUSE filesystems.
	MountInfo() []MountInfo

	// Unmount unmounts the FUSE filesystem currently mounted at mountPoint.
	Unmount(mountPoint string) error

	// PeriodSpending returns the amount spent on contracts in the current
	// billing period.
	PeriodSpending() (ContractorSpending, error)

	// RecoverableContracts returns the contracts that the contractor deems
	// recoverable. That means they are not expired yet and also not part of the
	// active contracts. Usually this should return an empty slice unless the host
	// isn't available for recovery or something went wrong.
	RecoverableContracts() []RecoverableContract

	// RecoveryScanStatus returns a bool indicating if a scan for recoverable
	// contracts is in progress and if it is, the current progress of the scan.
	RecoveryScanStatus() (bool, types.BlockHeight)

	// RefreshedContract checks if the contract was previously refreshed
	RefreshedContract(fcid types.FileContractID) bool

	// SetFileStuck sets the 'stuck' status of a file.
	SetFileStuck(siaPath SiaPath, stuck bool) error

	// UploadBackup uploads a backup to hosts, such that it can be retrieved
	// using only the seed.
	UploadBackup(src string, name string) error

	// DownloadBackup downloads a backup previously uploaded to hosts.
	DownloadBackup(dst string, name string) error

	// UploadedBackups returns a list of backups previously uploaded to hosts,
	// along with a list of which hosts are storing all known backups.
	UploadedBackups() ([]UploadedBackup, []types.SiaPublicKey, error)

	// BackupsOnHost returns the backups stored on the specified host.
	BackupsOnHost(hostKey types.SiaPublicKey) ([]UploadedBackup, error)

	// DeleteFile deletes a file entry from the renter.
	DeleteFile(siaPath SiaPath) error

	// Download creates a download according to the parameters passed, including
	// downloads of `offset` and `length` type. It returns a method to
	// start the download.
	Download(params RenterDownloadParameters) (DownloadID, func() error, error)

	// DownloadAsync creates a file download using the passed parameters without
	// blocking until the download is finished. The download needs to be started
	// using the method returned by DownloadAsync. DownloadAsync also accepts an
	// optional input function which will be registered to be called when the
	// download is finished.
	DownloadAsync(params RenterDownloadParameters, onComplete func(error) error) (uid DownloadID, start func() error, cancel func(), err error)

	// ClearDownloadHistory clears the download history of the renter
	// inclusive for before and after times.
	ClearDownloadHistory(after, before time.Time) error

	// DownloadByUID returns a download from the download history given its uid.
	DownloadByUID(uid DownloadID) (DownloadInfo, bool)

	// DownloadHistory lists all the files that have been scheduled for download.
	DownloadHistory() []DownloadInfo

	// File returns information on a specific file queried by the user.
	File(siaPath SiaPath) (FileInfo, error)

	// FileList returns information on all of the files stored by the renter at the
	// specified folder. The 'cached' argument specifies whether cached values
	// should be returned or not.
	FileList(siaPath SiaPath, recursive, cached bool) ([]FileInfo, error)

	// Filter returns the renter's hostdb's filterMode and filteredHosts
	Filter() (FilterMode, map[string]types.SiaPublicKey, error)

	// SetFilterMode sets the renter's hostdb filter mode
	SetFilterMode(fm FilterMode, hosts []types.SiaPublicKey) error

	// Host provides the DB entry and score breakdown for the requested host.
	Host(pk types.SiaPublicKey) (HostDBEntry, bool, error)

	// InitialScanComplete returns a boolean indicating if the initial scan of the
	// hostdb is completed.
	InitialScanComplete() (bool, error)

	// PriceEstimation estimates the cost in siacoins of performing various
	// storage and data operations.
	PriceEstimation(allowance Allowance) (RenterPriceEstimation, Allowance, error)

	// RenameFile changes the path of a file.
	RenameFile(siaPath, newSiaPath SiaPath) error

	// RenameDir changes the path of a dir.
	RenameDir(oldPath, newPath SiaPath) error

	// EstimateHostScore will return the score for a host with the provided
	// settings, assuming perfect age and uptime adjustments
	EstimateHostScore(entry HostDBEntry, allowance Allowance) (HostScoreBreakdown, error)

	// ScoreBreakdown will return the score for a host db entry using the
	// hostdb's weighting algorithm.
	ScoreBreakdown(entry HostDBEntry) (HostScoreBreakdown, error)

	// Settings returns the Renter's current settings.
	Settings() (RenterSettings, error)

	// SetSettings sets the Renter's settings.
	SetSettings(RenterSettings) error

	// SetFileTrackingPath sets the on-disk location of an uploaded file to a
	// new value. Useful if files need to be moved on disk.
	SetFileTrackingPath(siaPath SiaPath, newPath string) error

	// PauseRepairsAndUploads pauses the renter's repairs and uploads for a time
	// duration
	PauseRepairsAndUploads(duration time.Duration) error

	// ResumeRepairsAndUploads resumes the renter's repairs and uploads
	ResumeRepairsAndUploads() error

	// Streamer creates a io.ReadSeeker that can be used to stream downloads
	// from the Sia network and also returns the fileName of the streamed
	// resource.
	Streamer(siapath SiaPath, disableLocalFetch bool) (string, Streamer, error)

	// Upload uploads a file using the input parameters.
	Upload(FileUploadParams) error

	// UploadStreamFromReader reads from the provided reader until io.EOF is reached and
	// uploads the data to the Sia network.
	UploadStreamFromReader(up FileUploadParams, reader io.Reader) error

	// CreateDir creates a directory for the renter
	CreateDir(siaPath SiaPath, mode os.FileMode) error

	// DeleteDir deletes a directory from the renter
	DeleteDir(siaPath SiaPath) error

	// DirList lists the directories in a siadir
	DirList(siaPath SiaPath) ([]DirectoryInfo, error)

	// CreateSkylinkFromSiafile will create a skylink from a siafile. This will
	// result in some uploading - the base sector skyfile needs to be uploaded
	// separately, and if there is a fanout expansion that needs to be uploaded
	// separately as well.
	CreateSkylinkFromSiafile(SkyfileUploadParameters, SiaPath) (Skylink, error)

	// DownloadSkylink will fetch a file from the Sia network using the skylink.
	DownloadSkylink(Skylink, time.Duration) (SkyfileMetadata, Streamer, error)

	// UploadSkyfile will upload data to the Sia network from a reader and
	// create a skyfile, returning the skylink that can be used to access the
	// file.
	//
	// NOTE: A skyfile is a file that is tracked and repaired by the renter.  A
	// skyfile contains more than just the file data, it also contains metadata
	// about the file and other information which is useful in fetching the
	// file.
	UploadSkyfile(SkyfileUploadParameters) (Skylink, error)

	// Blacklist returns the merkleroots that are blacklisted
	Blacklist() ([]crypto.Hash, error)

	// UpdateSkynetBlacklist updates the list of skylinks that are blacklisted
	UpdateSkynetBlacklist(additions, removals []Skylink) error

	// PinSkylink re-uploads the data stored at the file under that skylink with
	// the given parameters.
	PinSkylink(Skylink, SkyfileUploadParameters, time.Duration) error
}

A Renter uploads, tracks, repairs, and downloads a set of files for the user.

type RenterContract added in v1.0.0

type RenterContract struct {
	ID            types.FileContractID
	HostPublicKey types.SiaPublicKey
	Transaction   types.Transaction

	StartHeight types.BlockHeight
	EndHeight   types.BlockHeight

	// RenterFunds is the amount remaining in the contract that the renter can
	// spend.
	RenterFunds types.Currency

	// The FileContract does not indicate what funds were spent on, so we have
	// to track the various costs manually.
	DownloadSpending types.Currency
	StorageSpending  types.Currency
	UploadSpending   types.Currency

	// Utility contains utility information about the contract.
	Utility ContractUtility

	// TotalCost indicates the amount of money that the renter spent and/or
	// locked up while forming a contract. This includes fees, and includes
	// funds which were allocated (but not necessarily committed) to spend on
	// uploads/downloads/storage.
	TotalCost types.Currency

	// ContractFee is the amount of money paid to the host to cover potential
	// future transaction fees that the host may incur, and to cover any other
	// overheads the host may have.
	//
	// TxnFee is the amount of money spent on the transaction fee when putting
	// the renter contract on the blockchain.
	//
	// SiafundFee is the amount of money spent on siafund fees when creating the
	// contract. The siafund fee that the renter pays covers both the renter and
	// the host portions of the contract, and therefore can be unexpectedly high
	// if the host collateral is high.
	ContractFee types.Currency
	TxnFee      types.Currency
	SiafundFee  types.Currency
}

A RenterContract contains metadata about a file contract. It is read-only; modifying a RenterContract does not modify the actual file contract.

type RenterDownloadParameters added in v1.3.0

type RenterDownloadParameters struct {
	Async            bool
	Httpwriter       io.Writer
	Length           uint64
	Offset           uint64
	SiaPath          SiaPath
	Destination      string
	DisableDiskFetch bool
}

RenterDownloadParameters defines the parameters passed to the Renter's Download method.

type RenterHostSession added in v1.4.0

type RenterHostSession struct {
	// contains filtered or unexported fields
}

A RenterHostSession is a session of the new renter-host protocol.

func (*RenterHostSession) ReadRPCID added in v1.4.0

func (s *RenterHostSession) ReadRPCID() (rpcID types.Specifier, err error)

ReadRPCID reads an RPC request ID using the new loop protocol.

func (*RenterHostSession) ReadRequest added in v1.4.0

func (s *RenterHostSession) ReadRequest(req interface{}, maxLen uint64) error

ReadRequest reads an RPC request using the new loop protocol.

func (*RenterHostSession) ReadResponse added in v1.4.0

func (s *RenterHostSession) ReadResponse(resp interface{}, maxLen uint64) error

ReadResponse reads an RPC response using the new loop protocol.

func (*RenterHostSession) WriteRequest added in v1.4.0

func (s *RenterHostSession) WriteRequest(rpcID types.Specifier, req interface{}) error

WriteRequest writes an encrypted RPC request using the new loop protocol.

func (*RenterHostSession) WriteResponse added in v1.4.0

func (s *RenterHostSession) WriteResponse(resp interface{}, err error) error

WriteResponse writes an RPC response or error using the new loop protocol. Either resp or err must be nil. If err is an *RPCError, it is sent directly; otherwise, a generic RPCError is created from err's Error string.
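
A sketch of a single RPC round trip over an already established session is shown below. The helper is hypothetical: rpcID stands in for one of the RPCLoop* specifiers exported by this package, and the settings RPC is assumed to carry no request body.

package modulesdoc

import (
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// requestSettings writes a request identified by rpcID and reads the host's
// settings response, bounding the response size with maxLen.
func requestSettings(s *modules.RenterHostSession, rpcID types.Specifier, maxLen uint64) (modules.LoopSettingsResponse, error) {
	var resp modules.LoopSettingsResponse
	if err := s.WriteRequest(rpcID, nil); err != nil {
		return resp, err
	}
	if err := s.ReadResponse(&resp, maxLen); err != nil {
		return resp, err
	}
	return resp, nil
}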

type RenterPriceEstimation added in v1.1.1

type RenterPriceEstimation struct {
	// The cost of downloading 1 TB of data.
	DownloadTerabyte types.Currency `json:"downloadterabyte"`

	// The cost of forming a set of contracts using the defaults.
	FormContracts types.Currency `json:"formcontracts"`

	// The cost of storing 1 TB for a month, including redundancy.
	StorageTerabyteMonth types.Currency `json:"storageterabytemonth"`

	// The cost of consuming 1 TB of upload bandwidth from the host, including
	// redundancy.
	UploadTerabyte types.Currency `json:"uploadterabyte"`
}

RenterPriceEstimation contains estimates of the costs of various operations on the network.

type RenterSettings added in v1.0.0

type RenterSettings struct {
	Allowance        Allowance     `json:"allowance"`
	IPViolationCheck bool          `json:"ipviolationcheck"`
	MaxUploadSpeed   int64         `json:"maxuploadspeed"`
	MaxDownloadSpeed int64         `json:"maxdownloadspeed"`
	UploadsStatus    UploadsStatus `json:"uploadsstatus"`
}

RenterSettings control the behavior of the Renter.

type Resolver added in v1.3.5

type Resolver interface {
	LookupIP(string) ([]net.IP, error)
}

Resolver is an interface that allows resolving a hostname into IP addresses.

type RevisionAction added in v1.0.0

type RevisionAction struct {
	Type        types.Specifier
	SectorIndex uint64
	Offset      uint64
	Data        []byte
}

A RevisionAction is a description of an edit to be performed on a file contract. Three types are allowed, 'ActionDelete', 'ActionInsert', and 'ActionModify'. ActionDelete just takes a sector index, indicating which sector is going to be deleted. ActionInsert takes a sector index, and a full sector of data, indicating that a sector at the index should be inserted with the provided data. 'Modify' revises the sector at the given index, rewriting it with the provided data starting from the 'offset' within the sector.

Modify could be simulated with an insert and a delete; however, an insert requires a full sector to be uploaded, while a modify can be just a few KB, which can be significantly faster.
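
The sketch below builds the small-modify case described above using the ActionModify specifier referenced in the description; only the patched bytes need to be transmitted rather than a full sector. The helper is hypothetical.

package modulesdoc

import "gitlab.com/NebulousLabs/Sia/modules"

// smallModify describes a revision that rewrites len(patch) bytes of the
// sector at sectorIndex, starting at offset within that sector.
func smallModify(sectorIndex, offset uint64, patch []byte) modules.RevisionAction {
	return modules.RevisionAction{
		Type:        modules.ActionModify,
		SectorIndex: sectorIndex,
		Offset:      offset,
		Data:        patch,
	}
}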

type Seed added in v1.0.0

type Seed [crypto.EntropySize]byte

Seed is cryptographic entropy that is used to derive spendable wallet addresses.

func StringToSeed added in v1.0.0

func StringToSeed(str string, did mnemonics.DictionaryID) (Seed, error)

StringToSeed converts a string to a wallet seed.

type SiaPath added in v1.4.0

type SiaPath struct {
	Path string `json:"path"`
}

SiaPath is the struct used to uniquely identify siafiles and siadirs across Sia

func CombinedSiaFilePath added in v1.4.2

func CombinedSiaFilePath(ec ErasureCoder) SiaPath

CombinedSiaFilePath returns the SiaPath to a hidden siafile which is used to store chunks that contain pieces of multiple siafiles.

func HomeSiaPath added in v1.4.2

func HomeSiaPath() SiaPath

HomeSiaPath returns a siapath to /home

func NewGlobalSiaPath added in v1.4.2

func NewGlobalSiaPath(s string) SiaPath

NewGlobalSiaPath can be used to create a global var which is a SiaPath. If there is an error creating the SiaPath, the function will panic, making this function unsuitable for typical use.

func NewSiaPath added in v1.4.0

func NewSiaPath(s string) (SiaPath, error)

NewSiaPath returns a new SiaPath with the path set

func RandomSiaPath added in v1.4.1

func RandomSiaPath() (sp SiaPath)

RandomSiaPath returns a random SiaPath created from 20 bytes of base32 encoded entropy.

func RootSiaPath added in v1.4.0

func RootSiaPath() SiaPath

RootSiaPath returns a SiaPath for the root siadir which has a blank path

func SnapshotsSiaPath added in v1.4.2

func SnapshotsSiaPath() SiaPath

SnapshotsSiaPath returns a siapath to /snapshots

func UserSiaPath added in v1.4.2

func UserSiaPath() SiaPath

UserSiaPath returns a siapath to /home/user

func (SiaPath) AddSuffix added in v1.4.1

func (sp SiaPath) AddSuffix(suffix uint) SiaPath

AddSuffix adds a numeric suffix to the end of the SiaPath.

func (SiaPath) Dir added in v1.4.0

func (sp SiaPath) Dir() (SiaPath, error)

Dir returns the directory of the SiaPath

func (SiaPath) Equals added in v1.4.0

func (sp SiaPath) Equals(siaPath SiaPath) bool

Equals compares two SiaPath types for equality

func (*SiaPath) FromSysPath added in v1.4.1

func (sp *SiaPath) FromSysPath(siaFilePath, dir string) (err error)

FromSysPath creates a SiaPath from a siaFilePath and corresponding root files dir.

func (SiaPath) IsEmpty added in v1.4.2

func (sp SiaPath) IsEmpty() bool

IsEmpty returns true if the siapath is equal to the zero value.

func (SiaPath) IsRoot added in v1.4.0

func (sp SiaPath) IsRoot() bool

IsRoot indicates whether or not the SiaPath path is a root directory siapath

func (SiaPath) Join added in v1.4.0

func (sp SiaPath) Join(s string) (SiaPath, error)

Join joins the string to the end of the SiaPath with a "/" and returns the new SiaPath.

func (*SiaPath) LoadString added in v1.4.0

func (sp *SiaPath) LoadString(s string) error

LoadString sets the path of the SiaPath to the provided string

func (*SiaPath) LoadSysPath added in v1.4.1

func (sp *SiaPath) LoadSysPath(dir, path string) error

LoadSysPath loads a SiaPath from a given system path by trimming the dir at the front of the path, the extension at the back and returning the remaining path as a SiaPath.

func (SiaPath) MarshalJSON added in v1.4.0

func (sp SiaPath) MarshalJSON() ([]byte, error)

MarshalJSON marshals a SiaPath as a string.

func (SiaPath) Name added in v1.4.1

func (sp SiaPath) Name() string

Name returns the name of the file.

func (SiaPath) Rebase added in v1.4.1

func (sp SiaPath) Rebase(oldBase, newBase SiaPath) (SiaPath, error)

Rebase changes the base of a siapath from oldBase to newBase and returns a new SiaPath. e.g. rebasing 'a/b/myfile' from oldBase 'a/b/' to 'a/' would result in 'a/myfile'
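
The sketch below walks through the Rebase example above in code: build a siapath, extend it with Join, then move it under a new base.

package main

import (
	"fmt"
	"log"

	"gitlab.com/NebulousLabs/Sia/modules"
)

func main() {
	oldBase, err := modules.NewSiaPath("a/b")
	if err != nil {
		log.Fatal(err)
	}
	file, err := oldBase.Join("myfile")
	if err != nil {
		log.Fatal(err)
	}
	newBase, err := modules.NewSiaPath("a")
	if err != nil {
		log.Fatal(err)
	}
	rebased, err := file.Rebase(oldBase, newBase)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(rebased.String()) // a/myfile
}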

func (SiaPath) SiaDirMetadataSysPath added in v1.4.0

func (sp SiaPath) SiaDirMetadataSysPath(dir string) string

SiaDirMetadataSysPath returns the system path needed to read the SiaDir metadata file from disk, the input dir is the root siadir directory on disk

func (SiaPath) SiaDirSysPath added in v1.4.0

func (sp SiaPath) SiaDirSysPath(dir string) string

SiaDirSysPath returns the system path needed to read a directory on disk, the input dir is the root siadir directory on disk

func (SiaPath) SiaFileSysPath added in v1.4.0

func (sp SiaPath) SiaFileSysPath(dir string) string

SiaFileSysPath returns the system path needed to read the SiaFile from disk, the input dir is the root siafile directory on disk

func (SiaPath) SiaPartialsFileSysPath added in v1.4.2

func (sp SiaPath) SiaPartialsFileSysPath(dir string) string

SiaPartialsFileSysPath returns the system path needed to read the PartialsSiaFile from disk, the input dir is the root siafile directory on disk

func (SiaPath) String added in v1.4.0

func (sp SiaPath) String() string

String returns the SiaPath's path

func (*SiaPath) UnmarshalJSON added in v1.4.0

func (sp *SiaPath) UnmarshalJSON(b []byte) error

UnmarshalJSON unmarshals a siapath into a SiaPath object.

func (SiaPath) Validate added in v1.4.1

func (sp SiaPath) Validate(isRoot bool) error

Validate checks that a Siapath is a legal filename. ../ is disallowed to prevent directory traversal, and paths must not begin with / or be empty.

type SiacoinOutputDiff added in v0.3.1

type SiacoinOutputDiff struct {
	Direction     DiffDirection
	ID            types.SiacoinOutputID
	SiacoinOutput types.SiacoinOutput
}

A SiacoinOutputDiff indicates the addition or removal of a SiacoinOutput in the consensus set.

type SiadConfig added in v1.4.1

type SiadConfig struct {
	// Ratelimit related fields
	ReadBPS            int64  `json:"readbps"`
	WriteBPSDeprecated int64  `json:"writeps,siamismatch"`
	WriteBPS           int64  `json:"writebps"`
	PacketSize         uint64 `json:"packetsize"`
	// contains filtered or unexported fields
}

SiadConfig is a helper type to manage the global siad config.

func NewConfig added in v1.4.1

func NewConfig(path string) (*SiadConfig, error)

NewConfig loads a config from disk or creates a new one if no config exists yet.

func (*SiadConfig) SetRatelimit added in v1.4.1

func (cfg *SiadConfig) SetRatelimit(readBPS, writeBPS int64) error

SetRatelimit sets the ratelimit related fields in the config and persists it to disk.

type SiafundOutputDiff added in v0.3.1

type SiafundOutputDiff struct {
	Direction     DiffDirection
	ID            types.SiafundOutputID
	SiafundOutput types.SiafundOutput
}

A SiafundOutputDiff indicates the addition or removal of a SiafundOutput in the consensus set.

type SiafundPoolDiff added in v0.3.1

type SiafundPoolDiff struct {
	Direction DiffDirection
	Previous  types.Currency
	Adjusted  types.Currency
}

A SiafundPoolDiff contains the value of the siafundPool before the block was applied, and after the block was applied. When applying the diff, set siafundPool to 'Adjusted'. When reverting the diff, set siafundPool to 'Previous'.

type SkyfileFormat added in v1.4.4

type SkyfileFormat string

SkyfileFormat is the file format the API uses to return a Skyfile as.

type SkyfileMetadata added in v1.4.3

type SkyfileMetadata struct {
	Mode     os.FileMode     `json:"mode,omitempty"`
	Filename string          `json:"filename,omitempty"`
	Subfiles SkyfileSubfiles `json:"subfiles,omitempty"`
}

SkyfileMetadata is all of the metadata that gets placed into the first 4096 bytes of the skyfile, and is used to set the metadata of the file when writing back to disk. The data is json-encoded when it is placed into the leading bytes of the skyfile, meaning that this struct can be extended without breaking compatibility.

func (SkyfileMetadata) ContentType added in v1.4.4

func (sm SkyfileMetadata) ContentType() string

ContentType returns the content type of the data. A content type is only returned if the metadata contains exactly one subfile, as that is the only case where it can be determined with certainty.

func (SkyfileMetadata) ForPath added in v1.4.4

func (sm SkyfileMetadata) ForPath(path string) (SkyfileMetadata, bool, uint64, uint64)

ForPath returns a subset of the SkyfileMetadata that contains all of the subfiles for the given path. The path can lead to both a directory or a file. Note that this method will return the subfiles with offsets relative to the given path, so if a directory is requested, the subfiles in that directory will start at offset 0, relative to the path.

type SkyfileMultipartUploadParameters added in v1.4.4

type SkyfileMultipartUploadParameters struct {
	SiaPath             SiaPath   `json:"siapath"`
	Force               bool      `json:"force"`
	Root                bool      `json:"root"`
	BaseChunkRedundancy uint8     `json:"basechunkredundancy"`
	Reader              io.Reader `json:"reader"`

	// Filename indicates the filename of the skyfile.
	Filename string `json:"filename"`

	// ContentType indicates the media type of the data supplied by the reader.
	ContentType string `json:"contenttype"`
}

SkyfileMultipartUploadParameters defines the parameters specific to multipart uploads. See SkyfileUploadParameters for a detailed description of the fields.

type SkyfilePinParameters added in v1.4.4

type SkyfilePinParameters struct {
	SiaPath             SiaPath `json:"siapath"`
	Force               bool    `json:"force"`
	Root                bool    `json:"root"`
	BaseChunkRedundancy uint8   `json:"basechunkredundancy"`
}

SkyfilePinParameters defines the parameters specific to pinning a skylink. See SkyfileUploadParameters for a detailed description of the fields.

type SkyfileSubfileMetadata added in v1.4.4

type SkyfileSubfileMetadata struct {
	FileMode    os.FileMode `json:"mode,omitempty,siamismatch"` // different json name for compat reasons
	Filename    string      `json:"filename,omitempty"`
	ContentType string      `json:"contenttype,omitempty"`
	Offset      uint64      `json:"offset,omitempty"`
	Len         uint64      `json:"len,omitempty"`
}

SkyfileSubfileMetadata is all of the metadata that belongs to a subfile in a skyfile. Most importantly it contains the offset at which the subfile is written and its length. Its filename can potentially include a '/' character, as nested files and directories are allowed within a single Skyfile.

func (SkyfileSubfileMetadata) IsDir added in v1.4.4

func (sm SkyfileSubfileMetadata) IsDir() bool

IsDir implements the os.FileInfo interface for SkyfileSubfileMetadata.

func (SkyfileSubfileMetadata) ModTime added in v1.4.4

func (sm SkyfileSubfileMetadata) ModTime() time.Time

ModTime implements the os.FileInfo interface for SkyfileSubfileMetadata.

func (SkyfileSubfileMetadata) Mode added in v1.4.4

func (sm SkyfileSubfileMetadata) Mode() os.FileMode

Mode implements the os.FileInfo interface for SkyfileSubfileMetadata.

func (SkyfileSubfileMetadata) Name added in v1.4.4

func (sm SkyfileSubfileMetadata) Name() string

Name implements the os.FileInfo interface for SkyfileSubfileMetadata.

func (SkyfileSubfileMetadata) Size added in v1.4.4

func (sm SkyfileSubfileMetadata) Size() int64

Size implements the os.FileInfo interface for SkyfileSubfileMetadata.

func (SkyfileSubfileMetadata) Sys added in v1.4.4

func (sm SkyfileSubfileMetadata) Sys() interface{}

Sys implements the os.FileInfo interface for SkyfileSubfileMetadata.

type SkyfileSubfiles added in v1.4.4

type SkyfileSubfiles map[string]SkyfileSubfileMetadata

SkyfileSubfiles contains the subfiles of a skyfile, indexed by their filename.

type SkyfileUploadParameters added in v1.4.3

type SkyfileUploadParameters struct {
	// SiaPath defines the siapath that the skyfile is going to be uploaded to.
	// Recommended that the skyfile is placed in /var/skynet
	SiaPath SiaPath `json:"siapath"`

	// Force determines whether the upload should overwrite an existing siafile
	// at 'SiaPath'. If set to false, an error will be returned if there is
	// already a file or folder at 'SiaPath'. If set to true, any existing file
	// or folder at 'SiaPath' will be deleted and overwritten.
	Force bool `json:"force"`

	// Root determines whether the upload should treat the filepath as a path
	// from system root, or if the path should be from /var/skynet.
	Root bool `json:"root"`

	// The base chunk is always uploaded with a 1-of-N erasure coding setting,
	// meaning that only the redundancy needs to be configured by the user.
	BaseChunkRedundancy uint8 `json:"basechunkredundancy"`

	// This metadata will be included in the base chunk, meaning that this
	// metadata is visible to the downloader before any of the file data is
	// visible.
	FileMetadata SkyfileMetadata `json:"filemetadata"`

	// Reader supplies the file data for the skyfile.
	Reader io.Reader `json:"reader"`
}

SkyfileUploadParameters establishes the parameters of a skyfile upload, such as the intra-root erasure coding.
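A minimal sketch of filling out these parameters for a single small upload. The modules.NewSiaPath helper and the gitlab.com/NebulousLabs/Sia module path are assumptions here; the resulting struct would then be handed to the renter's skyfile upload method, which is not shown in this section.

package main

import (
	"bytes"
	"log"

	"gitlab.com/NebulousLabs/Sia/modules"
)

func main() {
	data := []byte("hello, skynet")

	// NewSiaPath is assumed to be the usual constructor for a SiaPath.
	sp, err := modules.NewSiaPath("var/skynet/hello")
	if err != nil {
		log.Fatal(err)
	}

	sup := modules.SkyfileUploadParameters{
		SiaPath:             sp,
		Force:               false,
		Root:                false,
		BaseChunkRedundancy: 10, // 1-of-10 erasure coding for the base chunk
		Reader:              bytes.NewReader(data),
	}
	_ = sup // would be passed to the renter's skyfile upload call
}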

type Skylink struct {
	// contains filtered or unexported fields
}

Skylink contains all of the information that can be encoded into a skylink. This information consists of a 32 byte MerkleRoot and a 2 byte bitfield.

The first two bits of the bitfield (values 1 and 2 in decimal) determine the version of the skylink. The skylink version determines how the remaining bits are used. Not all values of the bitfield are legal.

func NewSkylinkV1 added in v1.4.3

func NewSkylinkV1(merkleRoot crypto.Hash, offset, length uint64) (Skylink, error)

NewSkylinkV1 will return a v1 Skylink object with the version set to 1 and the remaining fields set appropriately. Note that the offset needs to be aligned correctly. Check OffsetAndFetchSize for a full list of rules on legal offsets - the value of a legal offset depends on the provided length.

The input length will automatically be converted to the nearest fetch size.
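A short sketch of building a v1 skylink. The module path and the crypto.MerkleRoot helper are assumptions, and the sector data below is a placeholder purely for illustration:

package main

import (
	"fmt"
	"log"

	"gitlab.com/NebulousLabs/Sia/crypto"
	"gitlab.com/NebulousLabs/Sia/modules"
)

func main() {
	// Placeholder sector contents; a real root comes from uploaded sector data.
	root := crypto.MerkleRoot([]byte("example sector contents"))

	// Offset 0 with a 10 kib length; the length is rounded up to the nearest
	// legal fetch size (12 kib in the first mode).
	sl, err := modules.NewSkylinkV1(root, 0, 10<<10)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("skylink:", sl.String())

	offset, fetchSize, err := sl.OffsetAndFetchSize()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("offset: %d, fetch size: %d\n", offset, fetchSize)
}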

func (*Skylink) Bitfield added in v1.4.3

func (sl *Skylink) Bitfield() uint16

Bitfield returns the bitfield of a skylink.

func (*Skylink) LoadString added in v1.4.3

func (sl *Skylink) LoadString(s string) error

LoadString converts from a string and loads the result into sl.
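For example, a skylink received from elsewhere (taken from the command line here to avoid hard-coding one) can be parsed back into a Skylink and inspected using only the methods documented below; this is a sketch assuming the gitlab.com/NebulousLabs/Sia module path:

package main

import (
	"fmt"
	"log"
	"os"

	"gitlab.com/NebulousLabs/Sia/modules"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: parselink <skylink>")
	}
	var sl modules.Skylink
	if err := sl.LoadString(os.Args[1]); err != nil {
		log.Fatalf("not a valid skylink: %v", err)
	}
	fmt.Println("version:    ", sl.Version())
	fmt.Println("merkle root:", sl.MerkleRoot())
}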

func (Skylink) MerkleRoot added in v1.4.3

func (sl Skylink) MerkleRoot() crypto.Hash

MerkleRoot returns the merkle root of the Skylink.

func (Skylink) OffsetAndFetchSize added in v1.4.3

func (sl Skylink) OffsetAndFetchSize() (offset uint64, fetchSize uint64, err error)

OffsetAndFetchSize returns the offset and fetch size of a file that sits within a skylink sector. All skylinks point to one sector of data. If the file is large enough that more data is necessary, a "fanout" is used to point to more sectors.

NOTE: To fully understand the bitfield of the v1 Skylink, it is recommended that the following documentation be read alongside the code.

Sectors are 4 MiB large. To support efficiently storing and downloading smaller files, the skylink allows an offset and a fetch size to be specified for a file, which means many files can be stored within a single sector root, and each file can get a unique 46 byte skylink.

Existing content addressing systems use 46 bytes; to maximize compatibility, we have also chosen to adhere to a 46 byte link size. 46 bytes of base64 is 34 bytes of raw data, which means there are only 34 bytes to work with for storing extra information such as the version, offset, and fetch size of a file. The tight data constraints resulted in this compact format.

Skylinks are given 2 bits for a version. These bits are always the first 2 bits of the bitfield, which correspond to the values '1' and '2' when the bitfield is interpreted as a uint16. The version must be set to 1 to retrieve an offset and a fetch size.

That leaves 14 bits to determine the actual offset and fetch size for the file. The first 8 of those 14 bits are conditional bits, operating somewhat like varints. There are 8 total "modes" that can be triggered by these 8 bits. The first mode is triggered if the first of the 8 bits is a "0". That mode indicates that the remaining 13 bits should be used to compute the offset and fetch size using mode 1. If the first of the 8 bits is a "1", the next bit is checked. If that next bit is a "0", the second mode is triggered, meaning that the remaining 12 bits should be used to compute the offset and fetch size using mode 2.

Out of the 8 modes total, each mode has 1 fewer bit than the previous mode for computing the offset and fetch size. The first mode has 13 bits total, and the final mode has 6 bits total. The first three of these bits always indicate the fetch size. More on that later.

The modes themselves are fairly simple. The first mode indicates that the file is stored on an offset that is aligned to 4096 (1 << 12) bytes. With that alignment, there are 1024 possible offsets for the file to start at within a 4 MiB sector. That takes 10 bits to represent with perfect precision, and is conveniently the number of remaining bits to determine the offset after the fetch size has been parsed.

The second mode indicates that the file is stored on an offset that is aligned to 8192 (1 << 13) bytes, which means there are 512 possible offsets. Because a bit was consumed to switch modes, only 9 bits are available to indicate what the offset is. But as there are only 512 possible offsets, only 9 bits are needed.

This continues until the final mode, which indicates that the file is stored on an offset that is aligned to 512 kib (1 << 19). This is where it stops; larger offsets are unnecessary. Having 8 consecutive 1's in a v1 Skylink is invalid, which means there are 64 total unused states (all states where the first 8 of the 14 non-version bits are set to '1').

The fetch size is an upper bound that says 'the file is no more than this many bytes', and tells the client to download that many bytes to get the whole file. The actual length of the file is in the metadata that gets downloaded along with the file.

For every mode, there are 8 possible fetch sizes. For the first mode, the first possible fetch size is 4 kib, and each additional possible fetch size is another 4 kib. That means files in the first mode can be placed on any 4096 byte aligned offset within the sector and can be up to 32 kib large.

For the second mode, the fetch sizes also increase by 4 kib at a time, starting where the first mode left off. The smallest fetch size that a file in the second mode can have is 36 kib, and the largest fetch size that a file in the second mode can have is 64 kib.

For each mode after that, the increment of the fetch size doubles. So the third mode starts at a fetch size of 72 kib and goes up to a fetch size of 128 kib, and the fourth mode starts at a fetch size of 144 kib and goes up to a fetch size of 256 kib. The eighth and final mode extends up to a fetch size of 4 MiB, which is the full size of the sector.

A full table of fetch sizes, in kib, is presented here:

   4,    8,   12,   16,   20,   24,   28,   32,
  36,   40,   44,   48,   52,   56,   60,   64,
  72,   80,   88,   96,  104,  112,  120,  128,
 144,  160,  176,  192,  208,  224,  240,  256,
 288,  320,  352,  384,  416,  448,  480,  512,
 576,  640,  704,  768,  832,  896,  960, 1024,
1152, 1280, 1408, 1536, 1664, 1792, 1920, 2048,
2304, 2560, 2816, 3072, 3328, 3584, 3840, 4096,

Certain combinations of offset + fetch size are illegal. Specifically, it is illegal to indicate a fetch size that goes beyond the boundary of the file. The first mode has 28 illegal states, and each mode after that has 60 illegal states. Combined with the 64 illegal states that can be created by incorrectly set mode bits, there are 512 illegal states total for v1 of the Sia link.

It's possible that these states will be repurposed in the future, extending the functionality of the v1 skylink. More likely however, a transition to v2 will be made instead.

NOTE: If there is an error, OffsetAndFetchSize will return a signal to download the entire sector. This means that any code which is ignoring the error will still have mostly sane behavior.
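The fetch-size table above can be regenerated from these mode rules. The following standalone sketch (not part of the package) assumes only what the text states: eight fetch sizes per mode, 4 kib increments in the first two modes, and a doubling increment for every mode after that, with each mode continuing where the previous one left off. It prints the same values in kib.

package main

import "fmt"

func main() {
	const kib = 1 << 10
	base := uint64(0) // largest fetch size of the previous mode, in bytes
	for mode := 1; mode <= 8; mode++ {
		increment := uint64(4 * kib)
		if mode > 2 {
			increment = uint64(4*kib) << uint(mode-2)
		}
		for i := uint64(1); i <= 8; i++ {
			fmt.Printf("%5d", (base+i*increment)/kib)
		}
		fmt.Println()
		base += 8 * increment
	}
}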

func (Skylink) String added in v1.4.3

func (sl Skylink) String() string

String converts Skylink to a string.

func (Skylink) Version added in v1.4.3

func (sl Skylink) Version() uint16

Version will pull the version out of the bitfield and return it. The version is a 2 bit number, meaning there are 4 possible values. The bitwise values cover the range [0, 3]; however, we want to return a value in the range [1, 4], so we increment the bitwise result.

type StorageFolderMetadata added in v1.0.0

type StorageFolderMetadata struct {
	Capacity          uint64 `json:"capacity"`          // bytes
	CapacityRemaining uint64 `json:"capacityremaining"` // bytes
	Index             uint16 `json:"index"`
	Path              string `json:"path"`

	// Below are statistics about the filesystem. FailedReads and
	// FailedWrites are only incremented if the filesystem is returning
	// errors when operations are being performed. A large number of
	// FailedWrites can indicate that more space has been allocated on a
	// drive than is physically available. A high number of failures can
	// also indicate disk trouble.
	FailedReads      uint64 `json:"failedreads"`
	FailedWrites     uint64 `json:"failedwrites"`
	SuccessfulReads  uint64 `json:"successfulreads"`
	SuccessfulWrites uint64 `json:"successfulwrites"`

	// Certain operations on a storage folder can take a long time (Add,
	// Remove, and Resize). The fields below indicate the progress of any
	// long running operations that might be under way in the storage
	// folder. Progress is always reported in bytes.
	ProgressNumerator   uint64
	ProgressDenominator uint64
}

StorageFolderMetadata contains metadata about a storage folder that is tracked by the storage folder manager.
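As a small illustrative helper (a sketch, not part of the package, with hypothetical folder values), these fields can be summarized for display; the progress fields only apply while a long-running Add, Remove, or Resize is under way:

package main

import (
	"fmt"

	"gitlab.com/NebulousLabs/Sia/modules"
)

// summarize reports a folder's usage and, when one is running, the progress of
// a long-running operation, both derived from StorageFolderMetadata fields.
func summarize(sf modules.StorageFolderMetadata) string {
	used := sf.Capacity - sf.CapacityRemaining
	s := fmt.Sprintf("%s: %d/%d bytes used", sf.Path, used, sf.Capacity)
	if sf.ProgressDenominator > 0 {
		pct := 100 * float64(sf.ProgressNumerator) / float64(sf.ProgressDenominator)
		s += fmt.Sprintf(", operation %.1f%% complete", pct)
	}
	return s
}

func main() {
	// Hypothetical folder values, purely for illustration.
	fmt.Println(summarize(modules.StorageFolderMetadata{
		Capacity:          1 << 40,
		CapacityRemaining: 1 << 39,
		Path:              "/mnt/sia",
	}))
}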

type StorageManager added in v1.0.0

type StorageManager interface {
	Alerter

	// AddSector will add a sector to the storage manager. If the sector
	// already exists, a virtual sector will be added, meaning that the
	// 'sectorData' will be ignored and no new disk space will be consumed.
	// The expiry height is used to track what height the sector can be
	// safely deleted at, though typically the host will manually delete
	// the sector before the expiry height. The same sector can be added
	// multiple times at different expiry heights, and the storage manager
	// is expected to only store the data once.
	AddSector(sectorRoot crypto.Hash, sectorData []byte) error

	// HasSector indicates whether the contract manager stores a sector with
	// a given root or not.
	HasSector(crypto.Hash) bool

	// AddSectorBatch is a performance optimization over AddSector when
	// adding a bunch of virtual sectors. It is necessary because otherwise
	// potentially thousands or even tens-of-thousands of fsync calls would
	// need to be made in serial, which would prevent renters from ever
	// successfully renewing.
	AddSectorBatch(sectorRoots []crypto.Hash) error

	// AddStorageFolder adds a storage folder to the manager. The manager
	// may not check that there is enough space available on-disk to
	// support as much storage as requested, though the manager should
	// gracefully handle running out of storage unexpectedly.
	AddStorageFolder(path string, size uint64) error

	// The storage manager needs to be able to shut down.
	Close() error

	// DeleteSector deletes a sector, meaning that the manager will be
	// unable to upload that sector and be unable to provide a storage
	// proof on that sector. DeleteSector is for removing the data
	// entirely, and will remove instances of the sector appearing at all
	// heights. The primary purpose of DeleteSector is to comply with legal
	// requests to remove data.
	DeleteSector(sectorRoot crypto.Hash) error

	// ReadSector will read a sector from the storage manager, returning the
	// bytes that match the input sector root.
	ReadSector(sectorRoot crypto.Hash) ([]byte, error)

	// ReadPartialSector will read part of a sector from the storage manager,
	// returning the bytes in the range [offset, offset+length) of the sector
	// that matches the input sector root.
	ReadPartialSector(sectorRoot crypto.Hash, offset, length uint64) ([]byte, error)

	// RemoveSector will remove a sector from the storage manager. The
	// height at which the sector expires should be provided, so that the
	// auto-expiry information for that sector can be properly updated.
	RemoveSector(sectorRoot crypto.Hash) error

	// RemoveSectorBatch is a non-ACID performance optimization to remove a
	// ton of sectors from the storage manager all at once. This is
	// necessary when clearing out an entire contract from the host.
	RemoveSectorBatch(sectorRoots []crypto.Hash) error

	// RemoveStorageFolder will remove a storage folder from the manager.
	// All storage on the folder will be moved to other storage folders,
	// meaning that no data will be lost. If the manager is unable to save
	// data, an error will be returned and the operation will be stopped. If
	// the force flag is set to true, errors will be ignored and the remove
	// operation will be completed, meaning that data will be lost.
	RemoveStorageFolder(index uint16, force bool) error

	// ResetStorageFolderHealth will reset the health statistics on a
	// storage folder.
	ResetStorageFolderHealth(index uint16) error

	// ResizeStorageFolder will grow or shrink a storage folder in the
	// manager. The manager may not check that there is enough space
	// on-disk to support growing the storage folder, but should gracefully
	// handle running out of space unexpectedly. When shrinking a storage
	// folder, any data in the folder that needs to be moved will be placed
	// into other storage folders, meaning that no data will be lost. If
	// the manager is unable to migrate the data, an error will be returned
	// and the operation will be stopped. If the force flag is set to true,
	// errors will be ignored and the resize operation completed, meaning
	// that data will be lost.
	ResizeStorageFolder(index uint16, newSize uint64, force bool) error

	// StorageFolders will return a list of storage folders tracked by the
	// manager.
	StorageFolders() []StorageFolderMetadata
}

A StorageManager is responsible for managing storage folders and sectors. Sectors are the base unit of storage that gets moved between renters and hosts, and they are primarily stored on the hosts.
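A rough host-side usage sketch, written against the interface only. The crypto.MerkleRoot helper is an assumption here, and constructing an actual StorageManager implementation is out of scope:

package hostutil

import (
	"errors"

	"gitlab.com/NebulousLabs/Sia/crypto"
	"gitlab.com/NebulousLabs/Sia/modules"
)

// storeAndVerify adds a sector and reads it back, illustrating how sector data
// is always paired with its Merkle root when talking to the StorageManager.
func storeAndVerify(sm modules.StorageManager, sectorData []byte) error {
	root := crypto.MerkleRoot(sectorData)
	if err := sm.AddSector(root, sectorData); err != nil {
		return err
	}
	if !sm.HasSector(root) {
		return errors.New("sector was added but is not reported by HasSector")
	}
	_, err := sm.ReadSector(root)
	return err
}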

type StorageObligation added in v1.1.2

type StorageObligation struct {
	ContractCost             types.Currency       `json:"contractcost"`
	DataSize                 uint64               `json:"datasize"`
	LockedCollateral         types.Currency       `json:"lockedcollateral"`
	ObligationId             types.FileContractID `json:"obligationid"`
	PotentialDownloadRevenue types.Currency       `json:"potentialdownloadrevenue"`
	PotentialStorageRevenue  types.Currency       `json:"potentialstoragerevenue"`
	PotentialUploadRevenue   types.Currency       `json:"potentialuploadrevenue"`
	RiskedCollateral         types.Currency       `json:"riskedcollateral"`
	SectorRootsCount         uint64               `json:"sectorrootscount"`
	TransactionFeesAdded     types.Currency       `json:"transactionfeesadded"`
	TransactionID            types.TransactionID  `json:"transactionid"`

	// The negotiation height specifies the block height at which the file
	// contract was negotiated. The expiration height and the proof deadline
	// are equal to the window start and window end. Between the expiration height
	// and the proof deadline, the host must submit the storage proof.
	ExpirationHeight  types.BlockHeight `json:"expirationheight"`
	NegotiationHeight types.BlockHeight `json:"negotiationheight"`
	ProofDeadLine     types.BlockHeight `json:"proofdeadline"`

	// Variables indicating whether the critical transactions in a storage
	// obligation have been confirmed on the blockchain.
	ObligationStatus    string `json:"obligationstatus"`
	OriginConfirmed     bool   `json:"originconfirmed"`
	ProofConfirmed      bool   `json:"proofconfirmed"`
	ProofConstructed    bool   `json:"proofconstructed"`
	RevisionConfirmed   bool   `json:"revisionconfirmed"`
	RevisionConstructed bool   `json:"revisionconstructed"`
}

StorageObligation contains information about a storage obligation that the host has accepted.

type Streamer added in v1.4.0

type Streamer interface {
	io.ReadSeeker
	io.Closer
}

Streamer is the interface implemented by the Renter's streamer type, which allows for streaming files uploaded to the Sia network.
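Because a Streamer is just an io.ReadSeeker plus an io.Closer, serving a byte range from a streamed file reduces to standard library calls; a minimal sketch:

package streamutil

import (
	"io"

	"gitlab.com/NebulousLabs/Sia/modules"
)

// copyRange copies length bytes starting at offset from a Streamer to dst and
// closes the streamer when done.
func copyRange(dst io.Writer, s modules.Streamer, offset, length int64) error {
	defer s.Close()
	if _, err := s.Seek(offset, io.SeekStart); err != nil {
		return err
	}
	_, err := io.CopyN(dst, s, length)
	return err
}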

type TestMiner added in v1.0.0

type TestMiner interface {
	// AddBlock is an extension of FindBlock - AddBlock will submit the block
	// after finding it.
	AddBlock() (types.Block, error)

	// BlockForWork returns a block that is ready for nonce grinding. All
	// blocks returned by BlockForWork have a unique Merkle root, meaning that
	// each can safely start from nonce 0.
	BlockForWork() (types.Block, types.Target, error)

	// FindBlock will have the miner make 1 attempt to find a solved block that
	// builds on the current consensus set. It will give up after a few
	// seconds, returning the block and a bool indicating whether the block is
	// solved.
	FindBlock() (types.Block, error)

	// SolveBlock will have the miner make 1 attempt to solve the input block,
	// which amounts to trying a few thousand different nonces. SolveBlock is
	// primarily used for testing.
	SolveBlock(types.Block, types.Target) (types.Block, bool)

	// Needs to have all other miner functions in addition to shortcuts for
	// mining blocks.
	Miner
}

TestMiner provides direct access to block fetching, solving, and manipulation. The primary use of this interface is integration testing.

type TransactionBuilder added in v1.0.0

type TransactionBuilder interface {
	// FundSiacoins will add a siacoin input of exactly 'amount' to the
	// transaction. A parent transaction may be needed to achieve an input
	// with the correct value. The siacoin input will not be signed until
	// 'Sign' is called on the transaction builder. The expectation is that
	// the transaction will be completed and broadcast within a few hours.
	// Longer risks double-spends, as the wallet will assume that the
	// transaction failed.
	FundSiacoins(amount types.Currency) error

	// FundSiafunds will add a siafund input of exactly 'amount' to the
	// transaction. A parent transaction may be needed to achieve an input
	// with the correct value. The siafund input will not be signed until
	// 'Sign' is called on the transaction builder. Any siacoins that are
	// released by spending the siafund outputs will be sent to another
	// address owned by the wallet. The expectation is that the transaction
	// will be completed and broadcast within a few hours. Longer risks
	// double-spends, because the wallet will assume the transaction
	// failed.
	FundSiafunds(amount types.Currency) error

	// AddParents adds a set of parents to the transaction.
	AddParents([]types.Transaction)

	// AddMinerFee adds a miner fee to the transaction, returning the index
	// of the miner fee within the transaction.
	AddMinerFee(fee types.Currency) uint64

	// AddSiacoinInput adds a siacoin input to the transaction, returning
	// the index of the siacoin input within the transaction. When 'Sign'
	// gets called, this input will be left unsigned.
	AddSiacoinInput(types.SiacoinInput) uint64

	// AddSiacoinOutput adds a siacoin output to the transaction, returning
	// the index of the siacoin output within the transaction.
	AddSiacoinOutput(types.SiacoinOutput) uint64

	// ReplaceSiacoinOutput replaces the siacoin output in the transaction at the
	// given index.
	ReplaceSiacoinOutput(index uint64, output types.SiacoinOutput) error

	// AddFileContract adds a file contract to the transaction, returning
	// the index of the file contract within the transaction.
	AddFileContract(types.FileContract) uint64

	// AddFileContractRevision adds a file contract revision to the
	// transaction, returning the index of the file contract revision
	// within the transaction. When 'Sign' gets called, this revision will
	// be left unsigned.
	AddFileContractRevision(types.FileContractRevision) uint64

	// AddStorageProof adds a storage proof to the transaction, returning
	// the index of the storage proof within the transaction.
	AddStorageProof(types.StorageProof) uint64

	// AddSiafundInput adds a siafund input to the transaction, returning
	// the index of the siafund input within the transaction. When 'Sign'
	// is called, this input will be left unsigned.
	AddSiafundInput(types.SiafundInput) uint64

	// AddSiafundOutput adds a siafund output to the transaction, returning
	// the index of the siafund output within the transaction.
	AddSiafundOutput(types.SiafundOutput) uint64

	// AddArbitraryData adds arbitrary data to the transaction, returning
	// the index of the data within the transaction.
	AddArbitraryData(arb []byte) uint64

	// AddTransactionSignature adds a transaction signature to the
	// transaction, returning the index of the signature within the
	// transaction. The signature should already be valid, and shouldn't
	// sign any of the inputs that were added by calling 'FundSiacoins' or
	// 'FundSiafunds'.
	AddTransactionSignature(types.TransactionSignature) uint64

	// Copy creates a copy of the current transactionBuilder that can be used to
	// extend the transaction in an alternate way (i.e. create a double spend
	// transaction).
	Copy() TransactionBuilder

	// MarkWalletInputs updates internal TransactionBuilder state by inferring
	// which inputs belong to this wallet. This allows those inputs to be
	// signed. Returns true if and only if any inputs belonging to the wallet
	// are found.
	MarkWalletInputs() bool

	// Sign will sign any inputs added by 'FundSiacoins' or 'FundSiafunds'
	// and return a transaction set that contains all parents prepended to
	// the transaction. If more fields need to be added, a new transaction
	// builder will need to be created.
	//
	// If the whole transaction flag is set to true, then the whole
	// transaction flag will be set in the covered fields object. If the
	// whole transaction flag is set to false, then the covered fields
	// object will cover all fields that have already been added to the
	// transaction, but will also leave room for more fields to be added.
	//
	// An error will be returned if there are multiple calls to 'Sign',
	// sometimes even if the first call to Sign has failed. Sign should
	// only ever be called once, and if the first signing fails, the
	// transaction should be dropped.
	Sign(wholeTransaction bool) ([]types.Transaction, error)

	// UnconfirmedParents returns any unconfirmed parents the transaction set that
	// is being built by the transaction builder could have.
	UnconfirmedParents() ([]types.Transaction, error)

	// View returns the incomplete transaction along with all of its
	// parents.
	View() (txn types.Transaction, parents []types.Transaction)

	// ViewAdded returns all of the siacoin inputs, siafund inputs, and
	// parent transactions that have been automatically added by the
	// builder. Items are returned by index.
	ViewAdded() (newParents, siacoinInputs, siafundInputs, transactionSignatures []int)

	// Drop indicates that a transaction is no longer useful and will not be
	// broadcast, and that all of the outputs can be reclaimed. 'Drop'
	// should only be used before signatures are added.
	Drop()
}

TransactionBuilder is used to construct custom transactions. A transaction builder is initialized via 'RegisterTransaction' and then can be modified by adding funds or other fields. The transaction is completed by calling 'Sign', which will sign all inputs added via the 'FundSiacoins' or 'FundSiafunds' call. All modifications are additive.

Parents of the transaction are kept in the transaction builder. A parent is any unconfirmed transaction that is required for the child to be valid.

Transaction builders are not thread safe.
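Putting the builder methods together, a typical flow looks roughly like the sketch below. It assumes w is an unlocked Wallet and tpool is a TransactionPool, and fee handling is simplified; Wallet.SendSiacoins performs roughly similar work internally.

package txnutil

import (
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// sendCustom builds, signs, and broadcasts a simple payment by hand.
func sendCustom(w modules.Wallet, tpool modules.TransactionPool, dest types.UnlockHash, amount, fee types.Currency) error {
	tb, err := w.StartTransaction()
	if err != nil {
		return err
	}
	// Fund enough siacoins to cover both the payment and the miner fee.
	if err := tb.FundSiacoins(amount.Add(fee)); err != nil {
		tb.Drop() // reclaim the funded outputs; nothing was broadcast
		return err
	}
	tb.AddMinerFee(fee)
	tb.AddSiacoinOutput(types.SiacoinOutput{Value: amount, UnlockHash: dest})

	// Per the interface docs, a failed Sign means the transaction should be
	// dropped rather than signed again.
	txnSet, err := tb.Sign(true)
	if err != nil {
		tb.Drop()
		return err
	}
	return tpool.AcceptTransactionSet(txnSet)
}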

type TransactionPool

type TransactionPool interface {
	Alerter

	// AcceptTransactionSet accepts a set of potentially interdependent
	// transactions.
	AcceptTransactionSet([]types.Transaction) error

	// Broadcast broadcasts a transaction set to all of the transaction pool's
	// peers.
	Broadcast(ts []types.Transaction)

	// Close is necessary for clean shutdown (e.g. during testing).
	Close() error

	// FeeEstimation returns an estimation for how high the transaction fee
	// needs to be per byte. The minimum recommended targets getting accepted
	// in ~3 blocks, and the maximum recommended targets getting accepted
	// immediately. Taking the average has a moderate chance of being accepted
	// within one block. The minimum has a strong chance of getting accepted
	// within 10 blocks.
	FeeEstimation() (minimumRecommended, maximumRecommended types.Currency)

	// PurgeTransactionPool is a temporary function available to the miner. In
	// the event that a miner mines an unacceptable block, the transaction pool
	// will be purged to clear out the transaction pool and get rid of the
	// illegal transaction. This should never happen, however there are bugs
	// that make this condition necessary.
	PurgeTransactionPool()

	// Transaction returns the transaction and unconfirmed parents
	// corresponding to the provided transaction id.
	Transaction(id types.TransactionID) (txn types.Transaction, unconfirmedParents []types.Transaction, exists bool)

	// Transactions returns the transactions of the transaction pool
	Transactions() []types.Transaction

	// TransactionConfirmed returns true if the transaction has been seen on the
	// blockchain. Note, however, that the block containing the transaction may
	// later be invalidated by a reorg.
	TransactionConfirmed(id types.TransactionID) (bool, error)

	// TransactionList returns a list of all transactions in the transaction
	// pool. The transactions are provided in an order that can acceptably be
	// put into a block.
	TransactionList() []types.Transaction

	// TransactionPoolSubscribe adds a subscriber to the transaction pool.
	// Subscribers will receive all consensus set changes as well as
	// transaction pool changes, and should not subscribe to both.
	TransactionPoolSubscribe(TransactionPoolSubscriber)

	// TransactionSet returns the transaction set the provided object
	// appears in.
	TransactionSet(crypto.Hash) []types.Transaction

	// Unsubscribe removes a subscriber from the transaction pool.
	// This is necessary for clean shutdown of the miner.
	Unsubscribe(TransactionPoolSubscriber)
}

A TransactionPool manages unconfirmed transactions.
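For example, the FeeEstimation bounds described in the interface can be turned into a concrete fee for a transaction of known size. This is a sketch; the Currency arithmetic methods Add, Div64, and Mul64 are assumed:

package feeutil

import (
	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// moderateFee averages the recommended per-byte bounds, which the interface
// docs describe as having a moderate chance of acceptance within one block,
// and scales the result by the transaction's size in bytes.
func moderateFee(tp modules.TransactionPool, txnSizeBytes uint64) types.Currency {
	minFee, maxFee := tp.FeeEstimation()
	perByte := minFee.Add(maxFee).Div64(2)
	return perByte.Mul64(txnSizeBytes)
}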

type TransactionPoolDiff added in v1.3.0

type TransactionPoolDiff struct {
	AppliedTransactions  []*UnconfirmedTransactionSet
	RevertedTransactions []TransactionSetID
}

A TransactionPoolDiff indicates the addition or removal of a transaction set to or from the transaction pool. The transactions in the pool are not persisted, so at startup modules should assume an empty transaction pool.

type TransactionPoolSubscriber added in v0.3.1

type TransactionPoolSubscriber interface {
	// ReceiveTransactionPoolUpdate notifies subscribers of a change to the
	// consensus set and/or unconfirmed set, and includes the consensus change
	// that would result if all of the transactions made it into a block.
	ReceiveUpdatedUnconfirmedTransactions(*TransactionPoolDiff)
}

A TransactionPoolSubscriber receives updates about the confirmed and unconfirmed set from the transaction pool. Generally, there is no need to subscribe to both the consensus set and the transaction pool.
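A minimal subscriber, as a sketch, only needs the one method; it is then registered with TransactionPoolSubscribe:

package tpoolutil

import (
	"log"

	"gitlab.com/NebulousLabs/Sia/modules"
)

// diffLogger is a toy TransactionPoolSubscriber that logs how many transaction
// sets were applied and reverted in each update.
type diffLogger struct{}

func (diffLogger) ReceiveUpdatedUnconfirmedTransactions(diff *modules.TransactionPoolDiff) {
	log.Printf("tpool update: %d applied sets, %d reverted sets",
		len(diff.AppliedTransactions), len(diff.RevertedTransactions))
}

// subscribe registers the logger with the transaction pool.
func subscribe(tp modules.TransactionPool) {
	tp.TransactionPoolSubscribe(diffLogger{})
}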

type TransactionSetID added in v1.3.0

type TransactionSetID crypto.Hash

TransactionSetID is a type-safe wrapper for a crypto.Hash that represents the ID of an entire transaction set.

type UnconfirmedTransactionSet added in v1.3.0

type UnconfirmedTransactionSet struct {
	Change *ConsensusChange
	ID     TransactionSetID

	IDs          []types.TransactionID
	Sizes        []uint64
	Transactions []types.Transaction
}

UnconfirmedTransactionSet defines a new unconfirmed transaction set that has been added to the transaction pool. ID is the ID of the set, and IDs contains an ID for each transaction so that they do not need to be recomputed (an expensive operation).

type UnspentOutput added in v1.3.5

type UnspentOutput struct {
	ID                 types.OutputID    `json:"id"`
	FundType           types.Specifier   `json:"fundtype"`
	UnlockHash         types.UnlockHash  `json:"unlockhash"`
	Value              types.Currency    `json:"value"`
	ConfirmationHeight types.BlockHeight `json:"confirmationheight"`
	IsWatchOnly        bool              `json:"iswatchonly"`
}

An UnspentOutput is a SiacoinOutput or SiafundOutput that the wallet is tracking.

type UploadedBackup added in v1.4.1

type UploadedBackup struct {
	Name           string
	UID            [16]byte
	CreationDate   types.Timestamp
	Size           uint64 // size of snapshot .sia file
	UploadProgress float64
}

UploadedBackup contains metadata about an uploaded backup.

type UploadsStatus added in v1.4.2

type UploadsStatus struct {
	Paused       bool      `json:"paused"`
	PauseEndTime time.Time `json:"pauseendtime"`
}

UploadsStatus contains information about the Renter's uploads.

type ValuedTransaction added in v1.4.0

type ValuedTransaction struct {
	ProcessedTransaction

	ConfirmedIncomingValue types.Currency `json:"confirmedincomingvalue"`
	ConfirmedOutgoingValue types.Currency `json:"confirmedoutgoingvalue"`
}

ValuedTransaction is a transaction that has been given incoming and outgoing siacoin value fields.

type Wallet

type Wallet interface {
	Alerter
	EncryptionManager
	KeyManager

	// AddUnlockConditions adds a set of UnlockConditions to the wallet database.
	AddUnlockConditions(uc types.UnlockConditions) error

	// AddWatchAddresses instructs the wallet to begin tracking a set of
	// addresses, in addition to the addresses it was previously tracking.
	// If none of the addresses have appeared in the blockchain, the
	// unused flag may be set to true. Otherwise, the wallet must rescan
	// the blockchain to search for transactions containing the addresses.
	AddWatchAddresses(addrs []types.UnlockHash, unused bool) error

	// Close permits clean shutdown during testing and serving.
	Close() error

	// ConfirmedBalance returns the confirmed balance of the wallet, minus
	// any outgoing transactions. ConfirmedBalance will include unconfirmed
	// refund transactions.
	ConfirmedBalance() (siacoinBalance types.Currency, siafundBalance types.Currency, siacoinClaimBalance types.Currency, err error)

	// UnconfirmedBalance returns the unconfirmed balance of the wallet.
	// Outgoing funds and incoming funds are reported separately. Refund
	// outputs are included, meaning that sending a single coin to
	// someone could result in 'outgoing: 12, incoming: 11'. Siafunds are
	// not considered in the unconfirmed balance.
	UnconfirmedBalance() (outgoingSiacoins types.Currency, incomingSiacoins types.Currency, err error)

	// Height returns the wallet's internal processed consensus height
	Height() (types.BlockHeight, error)

	// AddressTransactions returns all of the transactions that are related
	// to a given address.
	AddressTransactions(types.UnlockHash) ([]ProcessedTransaction, error)

	// AddressUnconfirmedTransactions returns all of the unconfirmed
	// transactions related to a given address.
	AddressUnconfirmedTransactions(types.UnlockHash) ([]ProcessedTransaction, error)

	// Transaction returns the transaction with the given id. The bool
	// indicates whether the transaction is in the wallet database. The
	// wallet only stores transactions that are related to the wallet.
	Transaction(types.TransactionID) (ProcessedTransaction, bool, error)

	// Transactions returns all of the transactions that were confirmed at
	// heights [startHeight, endHeight]. Unconfirmed transactions are not
	// included.
	Transactions(startHeight types.BlockHeight, endHeight types.BlockHeight) ([]ProcessedTransaction, error)

	// UnconfirmedTransactions returns all unconfirmed transactions
	// relative to the wallet.
	UnconfirmedTransactions() ([]ProcessedTransaction, error)

	// RegisterTransaction takes a transaction and its parents and returns
	// a TransactionBuilder which can be used to expand the transaction.
	RegisterTransaction(t types.Transaction, parents []types.Transaction) (TransactionBuilder, error)

	// RemoveWatchAddresses instructs the wallet to stop tracking a set of
	// addresses and delete their associated transactions. If none of the
	// addresses have appeared in the blockchain, the unused flag may be
	// set to true. Otherwise, the wallet must rescan the blockchain to
	// rebuild its transaction history.
	RemoveWatchAddresses(addrs []types.UnlockHash, unused bool) error

	// Rescanning reports whether the wallet is currently rescanning the
	// blockchain.
	Rescanning() (bool, error)

	// Settings returns the Wallet's current settings.
	Settings() (WalletSettings, error)

	// SetSettings sets the Wallet's settings.
	SetSettings(WalletSettings) error

	// StartTransaction is a convenience method that calls
	// RegisterTransaction(types.Transaction{}, nil)
	StartTransaction() (TransactionBuilder, error)

	// SendSiacoins is a tool for sending siacoins from the wallet to an
	// address. Sending money usually results in multiple transactions. The
	// transactions are automatically given to the transaction pool, and are
	// also returned to the caller.
	SendSiacoins(amount types.Currency, dest types.UnlockHash) ([]types.Transaction, error)

	// SendSiacoinsFeeIncluded sends siacoins with fees included.
	SendSiacoinsFeeIncluded(amount types.Currency, dest types.UnlockHash) ([]types.Transaction, error)

	// SendSiacoinsMulti sends coins to multiple addresses.
	SendSiacoinsMulti(outputs []types.SiacoinOutput) ([]types.Transaction, error)

	// SendSiafunds is a tool for sending siafunds from the wallet to an
	// address. Sending money usually results in multiple transactions. The
	// transactions are automatically given to the transaction pool, and
	// are also returned to the caller.
	SendSiafunds(amount types.Currency, dest types.UnlockHash) ([]types.Transaction, error)

	// DustThreshold returns the quantity per byte below which a Currency is
	// considered to be Dust.
	DustThreshold() (types.Currency, error)

	// UnspentOutputs returns the unspent outputs tracked by the wallet.
	UnspentOutputs() ([]UnspentOutput, error)

	// UnlockConditions returns the UnlockConditions for the specified
	// address, if they are known to the wallet.
	UnlockConditions(addr types.UnlockHash) (types.UnlockConditions, error)

	// WatchAddresses returns the set of addresses that the wallet is
	// currently watching.
	WatchAddresses() ([]types.UnlockHash, error)
}

Wallet stores and manages siacoins and siafunds. The wallet file is encrypted using a user-specified password. Common addresses are all derived from a single address seed.
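A hedged usage sketch: it assumes the wallet has already been unlocked through the embedded EncryptionManager (not shown in this section) and only uses methods listed above, plus the Currency.Cmp comparison from the types package.

package walletutil

import (
	"fmt"

	"gitlab.com/NebulousLabs/Sia/modules"
	"gitlab.com/NebulousLabs/Sia/types"
)

// payIfAffordable checks the confirmed siacoin balance and sends a payment
// only when the wallet can cover the requested amount.
func payIfAffordable(w modules.Wallet, dest types.UnlockHash, amount types.Currency) error {
	siacoins, _, _, err := w.ConfirmedBalance()
	if err != nil {
		return err
	}
	if siacoins.Cmp(amount) < 0 {
		return fmt.Errorf("balance %v is below the requested %v", siacoins, amount)
	}
	_, err = w.SendSiacoins(amount, dest)
	return err
}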

type WalletSettings added in v1.3.2

type WalletSettings struct {
	NoDefrag bool `json:"nodefrag"`
}

WalletSettings control the behavior of the Wallet.

type WalletTransactionID added in v1.0.0

type WalletTransactionID crypto.Hash

WalletTransactionID is a unique identifier for a wallet transaction.

func CalculateWalletTransactionID added in v1.0.0

func CalculateWalletTransactionID(tid types.TransactionID, oid types.OutputID) WalletTransactionID

CalculateWalletTransactionID is a helper function for determining the id of a wallet transaction.

type WithdrawalMessage added in v1.4.2

type WithdrawalMessage struct {
	Account string
	Expiry  types.BlockHeight
	Amount  types.Currency
	Nonce   [WithdrawalNonceSize]byte
}

WithdrawalMessage contains all the details needed to spend from an ephemeral account.

func (*WithdrawalMessage) Validate added in v1.4.2

func (wm *WithdrawalMessage) Validate(blockHeight, expiry types.BlockHeight, hash crypto.Hash, sig crypto.Signature) error

Validate checks the WithdrawalMessage's expiry and signature. If the signature is invalid, if the WithdrawalMessage is already expired, or if it expires too far into the future, an error is returned.

func (*WithdrawalMessage) ValidateExpiry added in v1.4.2

func (wm *WithdrawalMessage) ValidateExpiry(blockHeight, expiry types.BlockHeight) error

ValidateExpiry returns an error if the withdrawal message is either already expired or expires too far into the future.

func (*WithdrawalMessage) ValidateSignature added in v1.4.2

func (wm *WithdrawalMessage) ValidateSignature(hash crypto.Hash, sig crypto.Signature) error

ValidateSignature returns an error if the provided signature is invalid.

Directories

Path Synopsis
explorer	Package explorer provides a glimpse into what the Sia network currently looks like.
gateway	Package gateway connects a Sia node to the Sia flood network.
host	Package host is an implementation of the host module, and is responsible for participating in the storage ecosystem, turning available disk space and internet bandwidth into profit for the user.
mdm
miner	Package miner is responsible for creating and submitting siacoin blocks.
renter	Package renter is responsible for uploading and downloading files on the Sia network.
hostdb	Package hostdb provides a HostDB object that implements the renter.hostDB interface.
