Documentation ¶
Index ¶
- Variables
- func Decompress(r io.Reader, c pbm.CompressionType) (io.ReadCloser, error)
- func GetMetaFromStore(stg storage.Storage, bcpName string) (*pbm.BackupMeta, error)
- type Oplog
- type PhysRestore
- type Restore
- func (r *Restore) Close()
- func (r *Restore) Done() error
- func (r *Restore) MarkFailed(e error) error
- func (r *Restore) PITR(cmd pbm.PITRestoreCmd, opid pbm.OPID, l *log.Event) (err error)
- func (r *Restore) ReplayOplog(cmd pbm.ReplayCmd, opid pbm.OPID, l *log.Event) (err error)
- func (r *Restore) RunSnapshot(dump string, bcp *pbm.BackupMeta) (err error)
- func (r *Restore) Snapshot(cmd pbm.RestoreCmd, opid pbm.OPID, l *log.Event) (err error)
- func (r *Restore) SnapshotMeta(backupName string) (bcp *pbm.BackupMeta, err error)
Constants ¶
This section is empty.
Variables ¶
var ErrNoDataForShard = errors.New("no data for shard")
Functions ¶
func Decompress ¶ added in v1.2.0
func Decompress(r io.Reader, c pbm.CompressionType) (io.ReadCloser, error)
Decompress wraps the given reader with a decompressing io.ReadCloser.
func GetMetaFromStore ¶ added in v1.7.0
func GetMetaFromStore(stg storage.Storage, bcpName string) (*pbm.BackupMeta, error)
GetMetaFromStore reads the backup metadata for bcpName from the given storage.
Types ¶
type Oplog ¶
type Oplog struct {
// contains filtered or unexported fields
}
Oplog is the oplog applier
func NewOplog ¶
func NewOplog(dst *pbm.Node, sv *pbm.MongoVersion, unsafe, preserveUUID bool, ctxn chan pbm.RestoreTxn, txnErr chan error) (*Oplog, error)
NewOplog creates an object for applying the oplog
func (*Oplog) SetTimeframe ¶ added in v1.7.0
SetTimeframe sets the boundaries for the replayed operations. All operations that happened before `start` or after `end` are discarded. A zero `end` (primitive.Timestamp{T: 0}) means all chunks will be replayed until the end (no tail trim).
type PhysRestore ¶ added in v1.7.0
type PhysRestore struct {
// contains filtered or unexported fields
}
func NewPhysical ¶ added in v1.7.0
func (*PhysRestore) MarkFailed ¶ added in v1.7.0
func (r *PhysRestore) MarkFailed(e error) error
MarkFailed sets the restore and rs state as failed with the given message
func (*PhysRestore) Snapshot ¶ added in v1.7.0
func (r *PhysRestore) Snapshot(cmd pbm.RestoreCmd, opid pbm.OPID, l *log.Event) (err error)
Snapshot restores data from the physical snapshot.
Initial sync and coordination between nodes happen via the `admin.pbmRestore` metadata, as with logical restores. But later, once mongod is shut down, status sync goes via the storage (see `PhysRestore.toState`).
Unlike in a logical restore, _every_ node of the replicaset takes part in a physical restore. This way we avoid a logical resync between nodes after the restore. Each node in the cluster:
- Stores the current replicaset config and mongod port.
- Checks the backup and stops all routine writes to the db.
- Stops mongod and wipes out the datadir.
- Copies the backup data to the datadir.
- Starts a standalone mongod on an ephemeral (tmp) port, resets some internal data, sets up a one-node replicaset, and sets oplogTruncateAfterPoint to the backup's `last write`. `oplogTruncateAfterPoint` sets the time up to which the journals will be replayed.
- Starts a standalone mongod to recover the oplog from the journals.
- Cleans up the data and resets the replicaset config to the working state.
- Shuts down mongod and the agent (the leader also dumps the metadata to the storage).
type Restore ¶ added in v1.1.2
type Restore struct {
// contains filtered or unexported fields
}
func (*Restore) Close ¶ added in v1.3.0
func (r *Restore) Close()
Close releases object resources. Should be run to avoid leaks.
func (*Restore) Done ¶ added in v1.3.0
Done waits for the replicas to finish the job and marks the restore as done
func (*Restore) MarkFailed ¶ added in v1.1.2
MarkFailed sets the restore and rs state as failed with the given message
func (*Restore) ReplayOplog ¶ added in v1.7.0
func (*Restore) RunSnapshot ¶ added in v1.3.0
func (r *Restore) RunSnapshot(dump string, bcp *pbm.BackupMeta) (err error)
func (*Restore) SnapshotMeta ¶ added in v1.7.0
func (r *Restore) SnapshotMeta(backupName string) (bcp *pbm.BackupMeta, err error)