Documentation ¶
Index ¶
- func GetLogger(ctx context.Context) *log.Logger
- func JobWithLogPrefix(j jobs.Job, pfx string) jobs.Job
- func WithLogPrefix(ctx context.Context, pfx string) context.Context
- type Agent
- type Atom
- type Backup
- type Config
- type ConfigManager
- type Dataset
- type DatasetSpec
- type FindRequest
- type Handler
- type HandlerSpec
- type JobStatus
- type Manager
- type MetadataStore
- type Params
- type Repository
- type RepositorySpec
- type RunningJobStatus
- type RuntimeContext
- type Shell
- func (s *Shell) Output(ctx context.Context, arg string) ([]byte, error)
- func (s *Shell) Run(ctx context.Context, arg string) error
- func (s *Shell) RunWithStdoutCallback(ctx context.Context, arg string, stdoutCallback func([]byte)) error
- func (s *Shell) SetIOClass(n int)
- func (s *Shell) SetNiceLevel(n int)
- type SourceSpec
- type UpdateActiveJobStatusRequest
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
Types ¶
type Agent ¶
type Agent struct {
// contains filtered or unexported fields
}
Agent holds a Manager and a Scheduler together, and runs periodic backup jobs for all known sources.
func NewAgent ¶
func NewAgent(ctx context.Context, configMgr *ConfigManager, ms MetadataStore) (*Agent, error)
NewAgent creates a new Agent with the specified config.
func (*Agent) Handler ¶
Handler returns an HTTP handler implementing the debug HTTP server.
type Atom ¶
type Atom struct {
	// Name (path-like).
	Name string `json:"name"`

	// Special attribute for the 'file' handler (path relative to
	// source root path).
	Path string `json:"path,omitempty"`
}
An Atom is a bit of data that can be restored independently as part of a Dataset. Atoms are identified uniquely by their absolute path in the global atom namespace: this path is built by concatenating the source name, the dataset name, and the atom name.
type Backup ¶
type Backup struct {
	// Unique identifier.
	ID string `json:"id"`

	// Timestamp (backup start).
	Timestamp time.Time `json:"timestamp"`

	// Host.
	Host string `json:"host"`

	// Datasets.
	Datasets []*Dataset `json:"datasets"`
}
Backup is the over-arching entity describing a high level backup operation. Backups are initiated autonomously by individual hosts, so each Backup belongs to a single Host.
type Config ¶
type Config struct {
	Hostname             string                    `yaml:"hostname"`
	Queue                *jobs.QueueSpec           `yaml:"queue_config"`
	Repository           RepositorySpec            `yaml:"repository"`
	DryRun               bool                      `yaml:"dry_run"`
	DefaultNiceLevel     int                       `yaml:"default_nice_level"`
	DefaultIOClass       int                       `yaml:"default_io_class"`
	WorkDir              string                    `yaml:"work_dir"`
	RandomSeedFile       string                    `yaml:"random_seed_file"`
	MetadataStoreBackend *clientutil.BackendConfig `yaml:"metadb"`

	HandlerSpecs []*HandlerSpec
	SourceSpecs  []*SourceSpec
}
Config is the global configuration object. While the actual configuration is spread over multiple files and directories, this holds it all together.
func ReadConfig ¶
ReadConfig reads the configuration from the given path. Sources and handlers are read from the 'sources' and 'handlers' subdirectories of the directory containing the main configuration file.
Performs a first level of static validation.
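Based on the yaml tags above, a main configuration file might look like this sketch (all values, the repository type name, and the repository parameters are illustrative assumptions, not documented defaults; the metadb and queue_config sections are omitted because their backend fields are not shown here):

```yaml
# Main configuration file; 'sources' and 'handlers' are read from
# subdirectories of the directory containing this file.
hostname: host1.example.com
repository:
  name: main
  type: restic              # repository type name is an assumption
  params:
    uri: /backups/repo      # hypothetical repository parameter
dry_run: false
default_nice_level: 19
default_io_class: 3
work_dir: /var/tmp/backup
random_seed_file: /var/lib/backup/seed
```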
type ConfigManager ¶
type ConfigManager struct {
// contains filtered or unexported fields
}
ConfigManager holds all runtime data derived from the configuration itself, so it can be easily reloaded by calling Reload(). Listeners should register themselves with Notify() in order to be updated when the configuration changes (there is currently no way to unregister).
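The Notify/Reload contract described above can be sketched with a minimal stand-in (local stub types for illustration, not the real ConfigManager): every registered listener gets a non-blocking notification when the configuration is reloaded.

```go
package main

import "fmt"

// stubManager mimics ConfigManager's Notify/Reload contract:
// Reload broadcasts to every registered listener channel.
type stubManager struct {
	listeners []chan struct{}
}

// Notify registers a listener; there is no way to unregister.
func (m *stubManager) Notify() <-chan struct{} {
	ch := make(chan struct{}, 1)
	m.listeners = append(m.listeners, ch)
	return ch
}

// Reload notifies all listeners without blocking on slow consumers.
func (m *stubManager) Reload() {
	for _, ch := range m.listeners {
		select {
		case ch <- struct{}{}:
		default: // listener hasn't consumed the previous notification yet
		}
	}
}

func main() {
	m := &stubManager{}
	ch := m.Notify()
	m.Reload()
	<-ch
	fmt.Println("configuration reloaded")
}
```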
func NewConfigManager ¶
func NewConfigManager(config *Config) (*ConfigManager, error)
NewConfigManager creates a new ConfigManager.
func (*ConfigManager) Close ¶
func (m *ConfigManager) Close()
Close the ConfigManager and all associated resources.
func (*ConfigManager) NewRuntimeContext ¶
func (m *ConfigManager) NewRuntimeContext() RuntimeContext
NewRuntimeContext returns a new RuntimeContext, capturing current configuration and runtime assets.
func (*ConfigManager) Notify ¶
func (m *ConfigManager) Notify() <-chan struct{}
Notify the caller when the configuration is reloaded.
func (*ConfigManager) Reload ¶
func (m *ConfigManager) Reload(config *Config) error
Reload the configuration (at least, the parts of it that can be dynamically reloaded).
type Dataset ¶
type Dataset struct {
	// Unique identifier.
	ID string `json:"id"`

	// Source is the name of the source that created this Dataset,
	// stored so that the restore knows what to do.
	Source string `json:"source"`

	// Atoms that are part of this dataset.
	Atoms []Atom `json:"atoms"`

	// Snapshot ID (repository-specific).
	SnapshotID string `json:"snapshot_id"`

	// Number of files in this dataset.
	TotalFiles int64 `json:"total_files"`

	// Number of bytes in this dataset.
	TotalBytes int64 `json:"total_bytes"`

	// Number of bytes that were added / removed in this backup.
	BytesAdded int64 `json:"bytes_added"`

	// Duration in seconds.
	Duration int `json:"duration"`
}
A Dataset describes a data set as a high-level structure containing one or more atoms. The 1-to-many scenario is justified by the following use case: consider a SQL database server. We may want to back it up as a single operation, but it contains multiple databases (the atoms we're interested in) that we might want to restore independently.
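For the SQL use case above, a serialized Dataset with two atoms might look like this (all identifiers and counts are illustrative, following the json tags of the struct):

```json
{
  "id": "d1f2e3",
  "source": "sql",
  "atoms": [
    {"name": "maindb"},
    {"name": "otherdb"}
  ],
  "snapshot_id": "abc123",
  "total_files": 2,
  "total_bytes": 1048576,
  "bytes_added": 4096,
  "duration": 42
}
```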
type DatasetSpec ¶
DatasetSpec describes a dataset in the configuration.
func (*DatasetSpec) Check ¶
func (spec *DatasetSpec) Check() error
Check syntactical validity of the DatasetSpec.
func (*DatasetSpec) Parse ¶
func (spec *DatasetSpec) Parse(ctx context.Context, src *SourceSpec) (*Dataset, error)
Parse a DatasetSpec and return a Dataset.
type FindRequest ¶
type FindRequest struct {
	Pattern     string    `json:"pattern"`
	Host        string    `json:"host"`
	NumVersions int       `json:"num_versions"`
	OlderThan   time.Time `json:"older_than,omitempty"`
	// contains filtered or unexported fields
}
FindRequest specifies search criteria for atoms.
type Handler ¶
type Handler interface {
	BackupJob(RuntimeContext, *Backup, *Dataset) jobs.Job
	RestoreJob(RuntimeContext, *Backup, *Dataset, string) jobs.Job
}
Handler can backup and restore a specific class of datasets.
type HandlerSpec ¶
type HandlerSpec struct {
	// Handler name (unique global identifier).
	Name string `yaml:"name"`

	// Handler type, one of the known types.
	Type string `yaml:"type"`

	Params Params `yaml:"params"`
}
HandlerSpec defines the configuration for a handler.
func (*HandlerSpec) Parse ¶
func (spec *HandlerSpec) Parse(src *SourceSpec) (Handler, error)
Parse a HandlerSpec and return a Handler instance.
type JobStatus ¶
type JobStatus struct {
	Host          string            `json:"host"`
	JobID         string            `json:"job_id"`
	BackupID      string            `json:"backup_id"`
	DatasetID     string            `json:"dataset_id"`
	DatasetSource string            `json:"dataset_source"`
	Status        *RunningJobStatus `json:"status"`
}
JobStatus has contextual information about a backup job that is currently running.
type Manager ¶
type Manager interface {
	BackupJob(context.Context, *SourceSpec) (*Backup, jobs.Job, error)
	Backup(context.Context, *SourceSpec) (*Backup, error)
	RestoreJob(context.Context, *FindRequest, string) (jobs.Job, error)
	Restore(context.Context, *FindRequest, string) error
	Close() error

	// Debug interface.
	GetStatus() ([]jobs.Status, []jobs.Status, []jobs.Status)
}
Manager for backups and restores.
func NewManager ¶
func NewManager(ctx context.Context, configMgr *ConfigManager, ms MetadataStore) (Manager, error)
NewManager creates a new Manager.
type MetadataStore ¶
type MetadataStore interface {
	// Find the datasets that match specific criteria. Only
	// atoms matching the criteria will be included in the Dataset
	// objects in the response.
	FindAtoms(context.Context, *FindRequest) ([]*Backup, error)

	// Add a dataset entry (the Backup might already exist).
	AddDataset(context.Context, *Backup, *Dataset) error

	// StartUpdates spawns a goroutine that periodically sends
	// active job status updates to the metadata server.
	StartUpdates(context.Context, func() *UpdateActiveJobStatusRequest)
}
MetadataStore is the client interface to the global metadata store.
type Params ¶
type Params map[string]interface{}
Params are configurable parameters in a format friendly to YAML representation.
func (Params) GetBool ¶
GetBool returns a boolean value for a parameter (which may also be stored as a string). It returns the value and whether the parameter was present.
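The documented behavior can be sketched with a local re-implementation (the method name is lower-cased and the accepted string spellings are assumptions; the real GetBool may differ):

```go
package main

import "fmt"

// Params mirrors the package's map type.
type Params map[string]interface{}

// getBool is an illustrative stand-in for Params.GetBool: the value
// may be stored as a bool or a string; it returns the boolean value
// and whether the parameter was present at all.
func (p Params) getBool(key string) (bool, bool) {
	v, ok := p[key]
	if !ok {
		return false, false
	}
	switch t := v.(type) {
	case bool:
		return t, true
	case string:
		// Accepted spellings are an assumption for this sketch.
		return t == "true" || t == "yes" || t == "on", true
	}
	return false, false
}

func main() {
	p := Params{"compress": "yes", "dry_run": false}
	v, ok := p.getBool("compress")
	fmt.Println(v, ok)
}
```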
type Repository ¶
type Repository interface {
	Init(context.Context, RuntimeContext) error
	RunBackup(context.Context, *Shell, *Backup, *Dataset, string, []string) error
	RunStreamBackup(context.Context, *Shell, *Backup, *Dataset, string, string) error
	RunRestore(context.Context, *Shell, *Backup, *Dataset, []string, string) error
	RunStreamRestore(context.Context, *Shell, *Backup, *Dataset, string, string) error
	Close() error
}
Repository is the interface to a remote repository.
type RepositorySpec ¶
type RepositorySpec struct {
	Name   string `yaml:"name"`
	Type   string `yaml:"type"`
	Params Params `yaml:"params"`
}
RepositorySpec defines the configuration of a repository.
func (*RepositorySpec) Parse ¶
func (spec *RepositorySpec) Parse() (Repository, error)
Parse a RepositorySpec and return a Repository instance.
type RunningJobStatus ¶
type RunningJobStatus resticStatusMessage
RunningJobStatus has information about a backup job that is currently running.
type RuntimeContext ¶
type RuntimeContext interface {
	Shell() *Shell
	Repo() Repository
	QueueSpec() *jobs.QueueSpec
	Seed() int64
	WorkDir() string
	SourceSpecs() []*SourceSpec
	FindSource(string) *SourceSpec
	HandlerSpec(string) *HandlerSpec
	Close()
}
RuntimeContext provides access to runtime objects whose lifetime is ultimately tied to the configuration. The configuration can change during the lifetime of the process, but we want backup jobs to have a consistent view of it while they execute, so access to the current version of the configuration is mediated by the ConfigManager.
type Shell ¶
type Shell struct {
// contains filtered or unexported fields
}
Shell runs commands, with some options (a global dry-run flag preventing all execution, nice level, I/O class). As the name suggests, commands are run through the shell, so variable substitution and other shell features are available.
func (*Shell) Output ¶
func (s *Shell) Output(ctx context.Context, arg string) ([]byte, error)
Output runs a command and returns its standard output.
func (*Shell) Run ¶
func (s *Shell) Run(ctx context.Context, arg string) error
Run a command, logging its standard output and standard error.
func (*Shell) RunWithStdoutCallback ¶
func (s *Shell) RunWithStdoutCallback(ctx context.Context, arg string, stdoutCallback func([]byte)) error
RunWithStdoutCallback executes a command and invokes a callback on every line read from its standard output. Standard output and error are still logged normally, as in Run().
func (*Shell) SetIOClass ¶
func (s *Shell) SetIOClass(n int)
SetIOClass sets the ionice(1) I/O class.
type SourceSpec ¶
type SourceSpec struct {
	Name    string `yaml:"name"`
	Handler string `yaml:"handler"`

	// Schedule to run the backup on.
	Schedule string `yaml:"schedule"`

	// Define Datasets statically, or use a script to generate them
	// dynamically on every new backup.
	Datasets        []*DatasetSpec `yaml:"datasets"`
	DatasetsCommand string         `yaml:"datasets_command"`

	// Commands to run before and after operations on the source.
	PreBackupCommand   string `yaml:"pre_backup_command"`
	PostBackupCommand  string `yaml:"post_backup_command"`
	PreRestoreCommand  string `yaml:"pre_restore_command"`
	PostRestoreCommand string `yaml:"post_restore_command"`

	Params Params `yaml:"params"`

	// Timeout for execution of the entire backup operation.
	Timeout time.Duration `yaml:"timeout"`
}
SourceSpec defines the configuration for a data source. Data sources can dynamically or statically generate one or more Datasets, each containing one or more Atoms.
Handlers are launched once per Dataset, and they know how to deal with backing up / restoring individual Atoms.
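Following the yaml tags above, a source definition (one file under the 'sources' directory) might look like this sketch with dynamically generated Datasets; the handler name, schedule syntax, commands, and params are all illustrative assumptions:

```yaml
name: sql
handler: sql-dump                  # must match a HandlerSpec name
schedule: "@daily"                 # schedule syntax is an assumption
datasets_command: "list-databases" # hypothetical script emitting Datasets
pre_backup_command: "logger starting sql backup"
post_backup_command: "logger sql backup done"
timeout: 4h
params:
  compress: true
```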
func (*SourceSpec) Check ¶
func (spec *SourceSpec) Check(handlers map[string]*HandlerSpec) error
Check the syntactical validity of the SourceSpec. This is not a substitute for validation at usage time, but it provides an early warning to the user. The handler name is checked against the given set of handler names.
type UpdateActiveJobStatusRequest ¶
type UpdateActiveJobStatusRequest struct {
	Host       string       `json:"host"`
	ActiveJobs []*JobStatus `json:"active_jobs,omitempty"`
}
UpdateActiveJobStatusRequest is the periodic "ping" sent by agents (unique host names are assumed) containing information about currently running jobs.
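A serialized ping might look like this, following the json tags of UpdateActiveJobStatusRequest and JobStatus (all identifiers are illustrative; "status" is left null here because the fields of RunningJobStatus come from the restic status message and are not shown in this documentation):

```json
{
  "host": "host1.example.com",
  "active_jobs": [
    {
      "host": "host1.example.com",
      "job_id": "job-1234",
      "backup_id": "b-5678",
      "dataset_id": "d1f2e3",
      "dataset_source": "sql",
      "status": null
    }
  ]
}
```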