Documentation ¶
Index ¶
- Constants
- func ApplyRetentionPolicyByResolution(ctx context.Context, logger log.Logger, bkt objstore.Bucket, ...) error
- func BestEffortCleanAbortedPartialUploads(ctx context.Context, logger log.Logger, fetcher block.MetadataFetcher, ...)
- func GroupKey(meta metadata.Thanos) string
- func IsHaltError(err error) bool
- func IsIssue347Error(err error) bool
- func IsRetryError(err error) bool
- func RepairIssue347(ctx context.Context, logger log.Logger, bkt objstore.Bucket, issue347Err error) error
- func UntilNextDownsampling(m *metadata.Meta) (time.Duration, error)
- type BucketCompactor
- type Group
- type HaltError
- type Issue347Error
- type ResolutionLevel
- type RetryError
- type Syncer
Constants ¶
const (
	ResolutionLevelRaw = ResolutionLevel(downsample.ResLevel0)
	ResolutionLevel5m  = ResolutionLevel(downsample.ResLevel1)
	ResolutionLevel1h  = ResolutionLevel(downsample.ResLevel2)
)
const (
	// PartialUploadThresholdAge is the age after which a partial block is assumed aborted and ready to be cleaned.
	// Keep it long, as it is based on block creation time, not upload start time.
	PartialUploadThresholdAge = 2 * 24 * time.Hour
)
Variables ¶
This section is empty.
Functions ¶
func ApplyRetentionPolicyByResolution ¶
func ApplyRetentionPolicyByResolution(ctx context.Context, logger log.Logger, bkt objstore.Bucket, fetcher block.MetadataFetcher, retentionByResolution map[ResolutionLevel]time.Duration) error
ApplyRetentionPolicyByResolution removes blocks depending on the specified retentionByResolution, based on the blocks' MaxTime. A value of 0 disables retention for that resolution.
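For illustration, a minimal sketch of building the retention map from the ResolutionLevel constants above and applying it. The durations, bucket, fetcher, and logger are placeholders, and the import paths assume the usual Thanos module layout and may differ between releases:

package example

import (
	"context"
	"time"

	"github.com/go-kit/kit/log"
	"github.com/thanos-io/thanos/pkg/block"
	"github.com/thanos-io/thanos/pkg/compact"
	"github.com/thanos-io/thanos/pkg/objstore"
)

// applyRetention is a hypothetical helper showing how the per-resolution
// retention map can be wired up. The durations are illustrative only.
func applyRetention(ctx context.Context, logger log.Logger, bkt objstore.Bucket, fetcher block.MetadataFetcher) error {
	retentionByResolution := map[compact.ResolutionLevel]time.Duration{
		compact.ResolutionLevelRaw: 30 * 24 * time.Hour, // keep raw blocks for 30 days
		compact.ResolutionLevel5m:  90 * 24 * time.Hour, // keep 5m downsampled blocks for 90 days
		compact.ResolutionLevel1h:  0,                   // 0 disables retention for 1h-resolution blocks
	}
	return compact.ApplyRetentionPolicyByResolution(ctx, logger, bkt, fetcher, retentionByResolution)
}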
func BestEffortCleanAbortedPartialUploads ¶ added in v0.10.0
func BestEffortCleanAbortedPartialUploads(ctx context.Context, logger log.Logger, fetcher block.MetadataFetcher, bkt objstore.Bucket, deleteAttempts prometheus.Counter)
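A hedged sketch of wiring the cleanup call, reusing the ctx, logger, bkt, and fetcher values (and imports) from the sketch above plus github.com/prometheus/client_golang/prometheus. The metric name is a placeholder, not the one Thanos itself registers:

// deleteAttempts counts deletion attempts for aborted partial uploads.
deleteAttempts := prometheus.NewCounter(prometheus.CounterOpts{
	Name: "example_aborted_partial_uploads_deletion_attempts_total",
	Help: "Total number of attempts to delete aborted partial uploads.",
})
compact.BestEffortCleanAbortedPartialUploads(ctx, logger, fetcher, bkt, deleteAttempts)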
func GroupKey ¶
func GroupKey(meta metadata.Thanos) string
GroupKey returns a unique identifier for the group the block belongs to. It considers the downsampling resolution and the block's labels.
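For illustration, a sketch of bucketing fetched metas by group key. That each *metadata.Meta exposes its Thanos section as a Thanos field is an assumption not shown in this documentation; the metas map would come from a block.MetadataFetcher:

// Group fetched block metas by their compaction group key.
groups := map[string][]*metadata.Meta{}
for _, m := range metas {
	key := compact.GroupKey(m.Thanos) // assumed field: m.Thanos holds the metadata.Thanos section
	groups[key] = append(groups[key], m)
}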
func IsHaltError ¶
func IsHaltError(err error) bool
IsHaltError returns true if the base error is a HaltError. If a multierror is passed, any halt error will return true.
func IsIssue347Error ¶
func IsIssue347Error(err error) bool
IsIssue347Error returns true if the base error is an Issue347Error.
func IsRetryError ¶
func IsRetryError(err error) bool
IsRetryError returns true if the base error is a RetryError. If a multierror is passed, all errors must be retriable.
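These predicates are typically checked together after a failed compaction iteration. A hedged sketch, where runIteration is a hypothetical helper standing in for one pass of the compaction loop and level is github.com/go-kit/kit/log/level:

err := runIteration(ctx) // hypothetical: one pass of the compaction loop
switch {
case err == nil:
	// Iteration finished cleanly.
case compact.IsIssue347Error(err):
	// Broken block (issue 347): attempt the automated repair.
	if repairErr := compact.RepairIssue347(ctx, logger, bkt, err); repairErr != nil {
		level.Error(logger).Log("msg", "repair failed", "err", repairErr)
	}
case compact.IsHaltError(err):
	// Halt errors must stop any further compaction progress.
	level.Error(logger).Log("msg", "critical error, halting compaction", "err", err)
case compact.IsRetryError(err):
	// Retry errors only warn; the outer loop runs another iteration.
	level.Warn(logger).Log("msg", "retriable error", "err", err)
}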
Types ¶
type BucketCompactor ¶
type BucketCompactor struct {
// contains filtered or unexported fields
}
BucketCompactor compacts blocks in a bucket.
type Group ¶
type Group struct {
// contains filtered or unexported fields
}
Group captures a set of blocks that have the same origin labels and downsampling resolution. Those blocks generally contain the same series and can thus efficiently be compacted.
func (*Group) Compact ¶
func (cg *Group) Compact(ctx context.Context, dir string, comp tsdb.Compactor) (bool, ulid.ULID, error)
Compact plans and runs a single compaction against the group. The compacted result is uploaded into the bucket the blocks were retrieved from.
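A hedged sketch of driving a single pass over one group. The comp value is assumed to be a Prometheus TSDB compactor (tsdb.Compactor) constructed elsewhere, dir is a scratch directory, and reading the boolean result as a "run this group again" signal is an assumption rather than something stated above:

// compactGroup is a hypothetical wrapper around one compaction pass.
func compactGroup(ctx context.Context, logger log.Logger, cg *compact.Group, dir string, comp tsdb.Compactor) error {
	again, compID, err := cg.Compact(ctx, dir, comp)
	if err != nil {
		return err
	}
	if again {
		// Assumed meaning: a block (compID) was produced and the group may
		// still contain compactable blocks, so plan another pass.
		level.Info(logger).Log("msg", "group compacted", "block", compID.String(), "rerun", true)
	}
	return nil
}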
func (*Group) Resolution ¶
Resolution returns the common downsampling resolution of blocks in the group.
type HaltError ¶
type HaltError struct {
// contains filtered or unexported fields
}
HaltError is a type wrapper for errors that should halt any further progress on compactions.
type Issue347Error ¶
type Issue347Error struct {
// contains filtered or unexported fields
}
Issue347Error is a type wrapper for errors that should invoke the repair process for a broken block.
func (Issue347Error) Error ¶
func (e Issue347Error) Error() string
type ResolutionLevel ¶
type ResolutionLevel int64
type RetryError ¶
type RetryError struct {
// contains filtered or unexported fields
}
RetryError is a type wrapper for errors that should trigger a warning log and a retry of the whole compaction loop, while aborting further progress on the current compaction.
func (RetryError) Error ¶
func (e RetryError) Error() string
type Syncer ¶
type Syncer struct {
// contains filtered or unexported fields
}
Syncer synchronizes block metas from a bucket into a local directory. It sorts them into compaction groups based on equal label sets.
func NewSyncer ¶
func NewSyncer(logger log.Logger, reg prometheus.Registerer, bkt objstore.Bucket, fetcher block.MetadataFetcher, blockSyncConcurrency int, acceptMalformedIndex bool, enableVerticalCompaction bool) (*Syncer, error)
NewSyncer returns a new Syncer for the given Bucket. Blocks must be at least as old as the sync delay to be considered.
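A minimal construction sketch; the concurrency value and the two feature flags are illustrative, and reg is a fresh Prometheus registry:

reg := prometheus.NewRegistry()
sy, err := compact.NewSyncer(
	logger, reg, bkt, fetcher,
	20,    // blockSyncConcurrency: illustrative value
	false, // acceptMalformedIndex
	false, // enableVerticalCompaction
)
if err != nil {
	// Handle construction failure before using sy.
}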
func (*Syncer) GarbageBlocks ¶
func (*Syncer) GarbageCollect ¶
GarbageCollect deletes blocks from the bucket if their data is available as part of a block with a higher compaction level.
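The method's signature is omitted above; a sketch assuming GarbageCollect takes a context and returns an error (an assumption worth verifying against the source), run after compaction passes using the sy value from the NewSyncer sketch:

// Assumed signature: (*Syncer).GarbageCollect(ctx context.Context) error.
if err := sy.GarbageCollect(ctx); err != nil {
	level.Error(logger).Log("msg", "garbage collection failed", "err", err)
}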