Documentation ¶
Index ¶
- Variables
- func LazyWrite(ctx context.Context, sink Sink, relativePath string, r io.Reader) error
- type Backup
- type Command
- type CreateCommand
- type CreateRequest
- type FilesystemSink
- type LegacyLocator
- func (l LegacyLocator) BeginFull(ctx context.Context, repo *gitalypb.Repository, backupID string) *Step
- func (l LegacyLocator) BeginIncremental(ctx context.Context, repo *gitalypb.Repository, backupID string) (*Step, error)
- func (l LegacyLocator) Commit(ctx context.Context, full *Step) error
- func (l LegacyLocator) FindLatest(ctx context.Context, repo *gitalypb.Repository) (*Backup, error)
- type Locator
- type LoggingPipeline
- type Manager
- type ParallelPipeline
- type Pipeline
- type PipelineError
- type PointerLocator
- func (l PointerLocator) BeginFull(ctx context.Context, repo *gitalypb.Repository, backupID string) *Step
- func (l PointerLocator) BeginIncremental(ctx context.Context, repo *gitalypb.Repository, fallbackBackupID string) (*Step, error)
- func (l PointerLocator) Commit(ctx context.Context, step *Step) error
- func (l PointerLocator) FindLatest(ctx context.Context, repo *gitalypb.Repository) (*Backup, error)
- type RestoreCommand
- type RestoreRequest
- type Sink
- type Step
- type StorageServiceSink
- type Strategy
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrSkipped means the repository was skipped because there was nothing to backup
	ErrSkipped = errors.New("repository skipped")
	// ErrDoesntExist means that the data was not found.
	ErrDoesntExist = errors.New("doesn't exist")
)
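Both values are sentinel errors, so callers can match them with errors.Is. A minimal sketch, written as if inside this package (imports elided); handleBackupError is a hypothetical helper, not part of the package:

// handleBackupError shows one way a caller might distinguish the
// sentinel errors returned by operations in this package
// (hypothetical helper).
func handleBackupError(err error) error {
	switch {
	case err == nil:
		return nil
	case errors.Is(err, ErrSkipped):
		// Nothing to back up for this repository; treat as a non-fatal outcome.
		return nil
	case errors.Is(err, ErrDoesntExist):
		// The requested backup data was not found.
		return fmt.Errorf("backup data missing: %w", err)
	default:
		return err
	}
}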
Functions ¶
func LazyWrite ¶
func LazyWrite(ctx context.Context, sink Sink, relativePath string, r io.Reader) error
Types ¶
type Backup ¶
type Backup struct {
	// Steps are the ordered list of steps required to restore this backup
	Steps []Step
}
Backup represents all the information needed to restore a backup for a repository
type Command ¶
type Command interface {
	Repository() *gitalypb.Repository
	Name() string
	Execute(context.Context) error
}
Command handles a specific backup operation
type CreateCommand ¶
type CreateCommand struct {
// contains filtered or unexported fields
}
CreateCommand creates a backup for a repository
func NewCreateCommand ¶
func NewCreateCommand(strategy Strategy, server storage.ServerInfo, repo *gitalypb.Repository, incremental bool) *CreateCommand
NewCreateCommand builds a CreateCommand
func (CreateCommand) Execute ¶
func (cmd CreateCommand) Execute(ctx context.Context) error
Execute performs the backup
func (CreateCommand) Repository ¶
func (cmd CreateCommand) Repository() *gitalypb.Repository
Repository is the repository that will be acted on
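A minimal sketch of building and running a CreateCommand directly, outside of any pipeline. It is written as if inside this package (imports elided); the Strategy, storage.ServerInfo, and *gitalypb.Repository values are assumed to be supplied by the caller, and runFullBackup is a hypothetical helper:

// runFullBackup builds a CreateCommand for a single repository and
// executes it immediately (hypothetical helper).
func runFullBackup(ctx context.Context, strategy Strategy, server storage.ServerInfo, repo *gitalypb.Repository) error {
	// incremental=false requests a full backup.
	cmd := NewCreateCommand(strategy, server, repo, false)
	return cmd.Execute(ctx)
}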
type CreateRequest ¶
type CreateRequest struct {
	Server      storage.ServerInfo
	Repository  *gitalypb.Repository
	Incremental bool
}
CreateRequest is the request to create a backup
type FilesystemSink ¶
type FilesystemSink struct {
// contains filtered or unexported fields
}
FilesystemSink is a sink for creating and restoring backups from the local filesystem.
func NewFilesystemSink ¶
func NewFilesystemSink(path string) *FilesystemSink
NewFilesystemSink returns a sink that uses a local filesystem to work with data.
func (*FilesystemSink) GetReader ¶
func (fs *FilesystemSink) GetReader(ctx context.Context, relativePath string) (io.ReadCloser, error)
GetReader returns a reader of the requested file path. It is the caller's responsibility to Close the returned reader once it is no longer needed. If relativePath doesn't exist, ErrDoesntExist is returned.
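A sketch of reading a previously written artefact back out of a FilesystemSink; the backup directory and relative path are hypothetical, and imports are elided:

// readArtefact opens a previously written file from the sink and
// returns its contents (hypothetical helper).
func readArtefact(ctx context.Context) ([]byte, error) {
	sink := NewFilesystemSink("/var/opt/gitlab-backups")

	r, err := sink.GetReader(ctx, "my-group/my-repo.git/000001.bundle")
	if errors.Is(err, ErrDoesntExist) {
		// No artefact exists at that relative path; decide how to proceed.
		return nil, err
	}
	if err != nil {
		return nil, err
	}
	defer r.Close()

	return io.ReadAll(r)
}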
type LegacyLocator ¶
type LegacyLocator struct{}
LegacyLocator locates backup paths for historic backups. This is the structure that GitLab used before incremental backups were introduced.
Existing backup files are expected to be overwritten by the latest backup files.
Structure:
<repo relative path>.bundle
<repo relative path>.refs
<repo relative path>/custom_hooks.tar
func (LegacyLocator) BeginFull ¶
func (l LegacyLocator) BeginFull(ctx context.Context, repo *gitalypb.Repository, backupID string) *Step
BeginFull returns the static paths for a legacy repository backup
func (LegacyLocator) BeginIncremental ¶
func (l LegacyLocator) BeginIncremental(ctx context.Context, repo *gitalypb.Repository, backupID string) (*Step, error)
BeginIncremental is not supported for legacy backups
func (LegacyLocator) Commit ¶
func (l LegacyLocator) Commit(ctx context.Context, full *Step) error
Commit is unused as the locations are static
func (LegacyLocator) FindLatest ¶
func (l LegacyLocator) FindLatest(ctx context.Context, repo *gitalypb.Repository) (*Backup, error)
FindLatest returns the static paths for a legacy repository backup
type Locator ¶
type Locator interface {
	// BeginFull returns a tentative first step needed to create a new full backup.
	BeginFull(ctx context.Context, repo *gitalypb.Repository, backupID string) *Step
	// BeginIncremental returns a tentative step needed to create a new incremental backup.
	BeginIncremental(ctx context.Context, repo *gitalypb.Repository, backupID string) (*Step, error)
	// Commit persists the step so that it can be looked up by FindLatest
	Commit(ctx context.Context, step *Step) error
	// FindLatest returns the latest backup that was written by Commit
	FindLatest(ctx context.Context, repo *gitalypb.Repository) (*Backup, error)
}
Locator finds sink backup paths for repositories
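The intended call sequence is begin, write, commit: a step is obtained from BeginFull or BeginIncremental, the backup data is written elsewhere, and only then is the step committed so FindLatest can discover it. A sketch under those assumptions, written as if inside this package, with writeStep standing in for whatever writes the actual data:

// createFullBackup resolves the paths for a new full backup, lets the
// caller write the data, and then commits the step so FindLatest can
// discover it (hypothetical helper).
func createFullBackup(ctx context.Context, l Locator, repo *gitalypb.Repository, backupID string, writeStep func(context.Context, *Step) error) error {
	step := l.BeginFull(ctx, repo, backupID)
	if err := writeStep(ctx, step); err != nil {
		return err
	}
	return l.Commit(ctx, step)
}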
type LoggingPipeline ¶
type LoggingPipeline struct {
// contains filtered or unexported fields
}
LoggingPipeline outputs logging for each command executed
func NewLoggingPipeline ¶
func NewLoggingPipeline(log logrus.FieldLogger) *LoggingPipeline
NewLoggingPipeline creates a new logging pipeline
func (*LoggingPipeline) Done ¶
func (p *LoggingPipeline) Done() error
Done indicates that the pipeline is complete and returns any accumulated errors
type Manager ¶
type Manager struct {
// contains filtered or unexported fields
}
Manager manages the process of creating/restoring backups.
func NewManager ¶
NewManager creates and returns an initialized *Manager instance.
type ParallelPipeline ¶
type ParallelPipeline struct {
// contains filtered or unexported fields
}
ParallelPipeline is a pipeline that executes commands in parallel
func NewParallelPipeline ¶
func NewParallelPipeline(next Pipeline, parallel, parallelStorage int) *ParallelPipeline
NewParallelPipeline creates a new ParallelPipeline where all commands are passed on to `next` to be processed. `parallel` is the maximum number of parallel backups that will run, and `parallelStorage` is the maximum number of parallel backups that will run per storage. Since the number of storages is unknown at initialisation, workers are created lazily as new storage names are encountered.
Note: when both `parallel` and `parallelStorage` are zero or less, no workers are created and the pipeline will block forever.
func (*ParallelPipeline) Done ¶
func (p *ParallelPipeline) Done() error
Done waits for any in-progress calls to `next` to complete, then reports any accumulated errors
type Pipeline ¶
Pipeline executes a series of commands and encapsulates error handling for the caller.
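A sketch of composing the two pipeline implementations in this package: a ParallelPipeline that fans work out to a LoggingPipeline. It assumes Pipeline exposes a Handle(ctx, Command) method for submitting commands, which is not shown in this listing; imports are elided:

// runAll wraps a LoggingPipeline in a ParallelPipeline and feeds every
// command through it (hypothetical helper). Handle is an assumed
// method on Pipeline.
func runAll(ctx context.Context, log logrus.FieldLogger, cmds []Command) error {
	// At most 4 backups in total, at most 2 per storage.
	p := NewParallelPipeline(NewLoggingPipeline(log), 4, 2)
	for _, cmd := range cmds {
		p.Handle(ctx, cmd)
	}
	return p.Done()
}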
type PipelineError ¶
type PipelineError []error
PipelineError represents a summary of errors by repository
func (*PipelineError) AddError ¶
func (e *PipelineError) AddError(repo *gitalypb.Repository, err error)
AddError adds an error associated with a repository to the summary.
func (PipelineError) Error ¶
func (e PipelineError) Error() string
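A sketch of how failures might be accumulated per repository; summarise is a hypothetical helper, not part of the package:

// summarise runs an operation for each repository and collects any
// failures into a PipelineError (hypothetical helper).
func summarise(repos []*gitalypb.Repository, run func(*gitalypb.Repository) error) error {
	var pErr PipelineError
	for _, repo := range repos {
		if err := run(repo); err != nil {
			pErr.AddError(repo, err)
		}
	}
	if len(pErr) > 0 {
		return pErr
	}
	return nil
}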
type PointerLocator ¶
PointerLocator locates backup paths where each full backup is put into a unique timestamp directory and the latest backup taken is pointed to by a file named LATEST.
Structure:
<repo relative path>/LATEST
<repo relative path>/<backup id>/LATEST
<repo relative path>/<backup id>/<nnn>.bundle
<repo relative path>/<backup id>/<nnn>.refs
<repo relative path>/<backup id>/<nnn>.custom_hooks.tar
func (PointerLocator) BeginFull ¶
func (l PointerLocator) BeginFull(ctx context.Context, repo *gitalypb.Repository, backupID string) *Step
BeginFull returns a tentative first step needed to create a new full backup.
func (PointerLocator) BeginIncremental ¶
func (l PointerLocator) BeginIncremental(ctx context.Context, repo *gitalypb.Repository, fallbackBackupID string) (*Step, error)
BeginIncremental returns a tentative step needed to create a new incremental backup. The incremental backup is always based off of the latest full backup. If there is no latest backup, a new full backup step is returned using fallbackBackupID
func (PointerLocator) Commit ¶
func (l PointerLocator) Commit(ctx context.Context, step *Step) error
Commit persists the step so that it can be looked up by FindLatest
func (PointerLocator) FindLatest ¶
func (l PointerLocator) FindLatest(ctx context.Context, repo *gitalypb.Repository) (*Backup, error)
FindLatest returns the paths committed by the latest call to Commit.
If there is no `LATEST` file, the result of the `Fallback` is used.
type RestoreCommand ¶
type RestoreCommand struct {
// contains filtered or unexported fields
}
RestoreCommand restores a backup for a repository
func NewRestoreCommand ¶
func NewRestoreCommand(strategy Strategy, server storage.ServerInfo, repo *gitalypb.Repository, alwaysCreate bool) *RestoreCommand
NewRestoreCommand builds a RestoreCommand
func (RestoreCommand) Execute ¶
func (cmd RestoreCommand) Execute(ctx context.Context) error
Execute performs the restore
func (RestoreCommand) Name ¶
func (cmd RestoreCommand) Name() string
Name is the name of the command
func (RestoreCommand) Repository ¶
func (cmd RestoreCommand) Repository() *gitalypb.Repository
Repository is the repository that will be acted on
type RestoreRequest ¶
type RestoreRequest struct {
	Server       storage.ServerInfo
	Repository   *gitalypb.Repository
	AlwaysCreate bool
}
RestoreRequest is the request to restore from a backup
type Sink ¶
type Sink interface {
	// Write saves all the data from r by relativePath.
	Write(ctx context.Context, relativePath string, r io.Reader) error
	// GetReader returns a reader that serves the data stored by relativePath.
	// If relativePath doesn't exist, ErrDoesntExist will be returned.
	GetReader(ctx context.Context, relativePath string) (io.ReadCloser, error)
}
Sink is an abstraction over the real storage used for storing/restoring backups.
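Any storage backend can be plugged in by implementing these two methods. A minimal in-memory sketch, useful only as a test double and not part of the package (imports elided):

// memorySink is a minimal, non-concurrent Sink sketch that keeps every
// written object in a map keyed by relative path (hypothetical,
// test-only type).
type memorySink struct {
	objects map[string][]byte
}

func (s *memorySink) Write(ctx context.Context, relativePath string, r io.Reader) error {
	data, err := io.ReadAll(r)
	if err != nil {
		return err
	}
	if s.objects == nil {
		s.objects = make(map[string][]byte)
	}
	s.objects[relativePath] = data
	return nil
}

func (s *memorySink) GetReader(ctx context.Context, relativePath string) (io.ReadCloser, error) {
	data, ok := s.objects[relativePath]
	if !ok {
		return nil, ErrDoesntExist
	}
	return io.NopCloser(bytes.NewReader(data)), nil
}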
type Step ¶
type Step struct {
	// BundlePath is the path of the bundle
	BundlePath string
	// SkippableOnNotFound defines if the bundle can be skipped when it does
	// not exist. This allows us to maintain legacy behaviour where we always
	// check a specific location for a bundle without knowing if it exists.
	SkippableOnNotFound bool
	// RefPath is the path of the ref file
	RefPath string
	// PreviousRefPath is the path of the previous ref file
	PreviousRefPath string
	// CustomHooksPath is the path of the custom hooks archive
	CustomHooksPath string
}
Step represents an incremental step that makes up a complete backup for a repository
type StorageServiceSink ¶
type StorageServiceSink struct {
// contains filtered or unexported fields
}
StorageServiceSink uses a storage engine that is defined by the URL provided on creation.
func NewStorageServiceSink ¶
func NewStorageServiceSink(ctx context.Context, url string) (*StorageServiceSink, error)
NewStorageServiceSink returns an initialized StorageServiceSink instance. The storage engine is chosen based on the provided url value and the set of pre-registered blank imports in that file. It is the caller's responsibility to provide all required environment variables in order to get a properly initialized storage engine driver.
func (*StorageServiceSink) Close ¶
func (s *StorageServiceSink) Close() error
Close releases resources associated with the bucket communication.
func (*StorageServiceSink) GetReader ¶
func (s *StorageServiceSink) GetReader(ctx context.Context, relativePath string) (io.ReadCloser, error)
GetReader returns a reader to consume the data from the configured bucket. It is the caller's responsibility to Close the reader after usage.
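A sketch of fetching an object through a cloud-backed sink; the s3:// URL and bucket name are hypothetical, and the scheme that actually works depends on which drivers were blank-imported and on the credentials available in the environment. Imports are elided:

// fetchFromBucket opens a cloud-backed sink, reads a single object, and
// releases the bucket resources (hypothetical helper).
func fetchFromBucket(ctx context.Context, relativePath string) ([]byte, error) {
	sink, err := NewStorageServiceSink(ctx, "s3://my-backup-bucket")
	if err != nil {
		return nil, err
	}
	defer sink.Close()

	r, err := sink.GetReader(ctx, relativePath)
	if err != nil {
		return nil, err
	}
	defer r.Close()

	return io.ReadAll(r)
}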
type Strategy ¶
type Strategy interface {
	Create(context.Context, *CreateRequest) error
	Restore(context.Context, *RestoreRequest) error
}
Strategy used to create/restore backups
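Any type with these two methods satisfies Strategy. A hypothetical no-op implementation, for example for wiring commands together in tests:

// nopStrategy satisfies Strategy without doing any work (hypothetical,
// test-only sketch; not part of the package).
type nopStrategy struct{}

func (nopStrategy) Create(context.Context, *CreateRequest) error   { return nil }
func (nopStrategy) Restore(context.Context, *RestoreRequest) error { return nil }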