Documentation ¶
Index ¶
- Constants
- func ApplyRetentionPolicyByResolution(ctx context.Context, logger log.Logger, bkt objstore.Bucket, ...) error
- func BestEffortCleanAbortedPartialUploads(ctx context.Context, logger log.Logger, partial map[ulid.ULID]error, ...)
- func DefaultGroupKey(meta metadata.Thanos) string
- func IsHaltError(err error) bool
- func IsIssue347Error(err error) bool
- func IsRetryError(err error) bool
- func RepairIssue347(ctx context.Context, logger log.Logger, bkt objstore.Bucket, ...) error
- func UntilNextDownsampling(m *metadata.Meta) (time.Duration, error)
- type BlocksCleaner
- type BucketCompactor
- type DefaultGrouper
- type Group
- func (cg *Group) Add(meta *metadata.Meta) error
- func (cg *Group) Compact(ctx context.Context, dir string, comp tsdb.Compactor) (shouldRerun bool, compID ulid.ULID, rerr error)
- func (cg *Group) IDs() (ids []ulid.ULID)
- func (cg *Group) Key() string
- func (cg *Group) Labels() labels.Labels
- func (cg *Group) MaxTime() int64
- func (cg *Group) MinTime() int64
- func (cg *Group) Resolution() int64
- type Grouper
- type HaltError
- type Issue347Error
- type ResolutionLevel
- type RetryError
- type Syncer
Constants ¶
const (
	ResolutionLevelRaw = ResolutionLevel(downsample.ResLevel0)
	ResolutionLevel5m  = ResolutionLevel(downsample.ResLevel1)
	ResolutionLevel1h  = ResolutionLevel(downsample.ResLevel2)
)
const (
	// PartialUploadThresholdAge is the age after which a partial block is
	// assumed aborted and ready to be cleaned. Keep it long, as it is based
	// on block creation time, not upload start time.
	PartialUploadThresholdAge = 2 * 24 * time.Hour
)
Variables ¶
This section is empty.
Functions ¶
func ApplyRetentionPolicyByResolution ¶
func ApplyRetentionPolicyByResolution(
	ctx context.Context,
	logger log.Logger,
	bkt objstore.Bucket,
	metas map[ulid.ULID]*metadata.Meta,
	retentionByResolution map[ResolutionLevel]time.Duration,
	blocksMarkedForDeletion prometheus.Counter,
) error
ApplyRetentionPolicyByResolution removes blocks depending on the specified retentionByResolution, based on the blocks' MaxTime. A value of 0 disables retention for its resolution.
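A minimal sketch of applying per-resolution retention using the constants above. The helper name, counter name, and import paths are illustrative assumptions, not part of this package:

import (
	"context"
	"time"

	"github.com/go-kit/kit/log"
	"github.com/oklog/ulid"
	"github.com/prometheus/client_golang/prometheus"

	"github.com/thanos-io/thanos/pkg/block/metadata"
	"github.com/thanos-io/thanos/pkg/compact"
	"github.com/thanos-io/thanos/pkg/objstore"
)

// applyRetention (hypothetical) marks blocks exceeding their per-resolution retention.
func applyRetention(ctx context.Context, logger log.Logger, bkt objstore.Bucket, metas map[ulid.ULID]*metadata.Meta) error {
	// Keep raw data 30d and 5m-downsampled data 120d; 0 disables retention for 1h data.
	retention := map[compact.ResolutionLevel]time.Duration{
		compact.ResolutionLevelRaw: 30 * 24 * time.Hour,
		compact.ResolutionLevel5m:  120 * 24 * time.Hour,
		compact.ResolutionLevel1h:  0,
	}
	marked := prometheus.NewCounter(prometheus.CounterOpts{Name: "blocks_marked_for_deletion_total"})
	return compact.ApplyRetentionPolicyByResolution(ctx, logger, bkt, metas, retention, marked)
}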
func BestEffortCleanAbortedPartialUploads ¶ added in v0.10.0
func BestEffortCleanAbortedPartialUploads(
	ctx context.Context,
	logger log.Logger,
	partial map[ulid.ULID]error,
	bkt objstore.Bucket,
	deleteAttempts prometheus.Counter,
	blockCleanups prometheus.Counter,
	blockCleanupFailures prometheus.Counter,
)
func DefaultGroupKey ¶ added in v0.14.0
func DefaultGroupKey(meta metadata.Thanos) string

DefaultGroupKey returns a unique identifier for the group the block belongs to, based on the DefaultGrouper logic. It considers the downsampling resolution and the block's labels.
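A short sketch of computing a group key. The metadata.Thanos field names used below (Labels, Downsample.Resolution) are assumptions based on the Thanos metadata package, and the exact key format is an implementation detail:

meta := metadata.Thanos{
	Labels:     map[string]string{"cluster": "eu-1", "replica": "a"},
	Downsample: metadata.ThanosDownsample{Resolution: 0}, // raw resolution
}
key := compact.DefaultGroupKey(meta)
// key now uniquely identifies the (resolution, labels) group, e.g. a string
// combining resolution 0 with {cluster="eu-1", replica="a"}.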
func IsHaltError ¶
func IsHaltError(err error) bool

IsHaltError returns true if the base error is a HaltError. If a multierror is passed, it returns true if any of the wrapped errors is a HaltError.
func IsIssue347Error ¶
func IsIssue347Error(err error) bool

IsIssue347Error returns true if the base error is an Issue347Error.
func IsRetryError ¶
func IsRetryError(err error) bool

IsRetryError returns true if the base error is a RetryError. If a multierror is passed, all of the wrapped errors must be retriable.
func RepairIssue347 ¶
func RepairIssue347(ctx context.Context, logger log.Logger, bkt objstore.Bucket, blocksMarkedForDeletion prometheus.Counter, issue347Err error) error
RepairIssue347 repairs the issue described in https://github.com/prometheus/tsdb/issues/347 when an Issue347Error is encountered.
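The three predicates above and RepairIssue347 are typically combined when reacting to a failed compaction run. A hedged sketch, with a hypothetical handler name and imports as in the earlier retention sketch plus github.com/go-kit/kit/log/level:

func handleCompactionErr(ctx context.Context, logger log.Logger, bkt objstore.Bucket, marked prometheus.Counter, err error) error {
	switch {
	case err == nil:
		return nil
	case compact.IsHaltError(err):
		// Unrecoverable: stop all further compaction progress.
		return err
	case compact.IsIssue347Error(err):
		// Broken block per https://github.com/prometheus/tsdb/issues/347: attempt repair.
		return compact.RepairIssue347(ctx, logger, bkt, marked, err)
	case compact.IsRetryError(err):
		// Transient: warn and let the caller retry the whole compaction loop.
		level.Warn(logger).Log("msg", "retriable compaction error", "err", err)
		return nil
	default:
		return err
	}
}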
Types ¶
type BlocksCleaner ¶ added in v0.12.0
type BlocksCleaner struct {
// contains filtered or unexported fields
}
BlocksCleaner deletes blocks from the bucket that are marked for deletion.
func NewBlocksCleaner ¶ added in v0.12.0
func NewBlocksCleaner(logger log.Logger, bkt objstore.Bucket, ignoreDeletionMarkFilter *block.IgnoreDeletionMarkFilter, deleteDelay time.Duration, blocksCleaned prometheus.Counter, blockCleanupFailures prometheus.Counter) *BlocksCleaner
NewBlocksCleaner creates a new BlocksCleaner.
func (*BlocksCleaner) DeleteMarkedBlocks ¶ added in v0.12.0
func (s *BlocksCleaner) DeleteMarkedBlocks(ctx context.Context) error
DeleteMarkedBlocks uses ignoreDeletionMarkFilter to gather the blocks that are marked for deletion and deletes those that are older than the given deleteDelay.
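A minimal usage sketch. The IgnoreDeletionMarkFilter is assumed to be constructed elsewhere (its constructor signature has varied across Thanos versions); the counter names and delay are illustrative, and the Thanos block package is imported in addition to the earlier imports:

func cleanMarkedBlocks(ctx context.Context, logger log.Logger, bkt objstore.Bucket, filter *block.IgnoreDeletionMarkFilter) error {
	cleaned := prometheus.NewCounter(prometheus.CounterOpts{Name: "blocks_cleaned_total"})
	failures := prometheus.NewCounter(prometheus.CounterOpts{Name: "block_cleanup_failures_total"})

	// Blocks marked for deletion less than 48h ago are left untouched.
	cleaner := compact.NewBlocksCleaner(logger, bkt, filter, 48*time.Hour, cleaned, failures)
	return cleaner.DeleteMarkedBlocks(ctx)
}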
type BucketCompactor ¶
type BucketCompactor struct {
// contains filtered or unexported fields
}
BucketCompactor compacts blocks in a bucket.
type DefaultGrouper ¶ added in v0.14.0
type DefaultGrouper struct {
// contains filtered or unexported fields
}
DefaultGrouper is the Thanos built-in grouper. It groups blocks based on the downsample resolution and the blocks' labels.
func NewDefaultGrouper ¶ added in v0.14.0
func NewDefaultGrouper(
	logger log.Logger,
	bkt objstore.Bucket,
	acceptMalformedIndex bool,
	enableVerticalCompaction bool,
	reg prometheus.Registerer,
	blocksMarkedForDeletion prometheus.Counter,
	garbageCollectedBlocks prometheus.Counter,
) *DefaultGrouper
NewDefaultGrouper makes a new DefaultGrouper.
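A sketch of building the default grouper and listing its groups. The metas map would typically come from a Syncer (see Syncer below); the counters are passed in and the helper name is hypothetical:

func listGroups(logger log.Logger, bkt objstore.Bucket, metas map[ulid.ULID]*metadata.Meta, marked, gced prometheus.Counter) error {
	grouper := compact.NewDefaultGrouper(
		logger, bkt,
		false, // acceptMalformedIndex
		false, // enableVerticalCompaction
		prometheus.NewRegistry(),
		marked, gced,
	)
	groups, err := grouper.Groups(metas)
	if err != nil {
		return err
	}
	for _, g := range groups {
		level.Info(logger).Log("group", g.Key(), "blocks", len(g.IDs()), "resolution", g.Resolution())
	}
	return nil
}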
type Group ¶
type Group struct {
// contains filtered or unexported fields
}
Group captures a set of blocks that have the same origin labels and downsampling resolution. Those blocks generally contain the same series and can thus efficiently be compacted.
func NewGroup ¶ added in v0.14.0
func NewGroup(
	logger log.Logger,
	bkt objstore.Bucket,
	key string,
	lset labels.Labels,
	resolution int64,
	acceptMalformedIndex bool,
	enableVerticalCompaction bool,
	compactions prometheus.Counter,
	compactionRunsStarted prometheus.Counter,
	compactionRunsCompleted prometheus.Counter,
	compactionFailures prometheus.Counter,
	verticalCompactions prometheus.Counter,
	groupGarbageCollectedBlocks prometheus.Counter,
	blocksMarkedForDeletion prometheus.Counter,
) (*Group, error)
NewGroup returns a new compaction group.
func (*Group) Compact ¶
func (cg *Group) Compact(ctx context.Context, dir string, comp tsdb.Compactor) (shouldRerun bool, compID ulid.ULID, rerr error)
Compact plans and runs a single compaction against the group. The compacted result is uploaded into the bucket the blocks were retrieved from.
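A sketch of driving a single group compaction until no rerun is requested. The tsdb.NewLeveledCompactor call assumes the prometheus/tsdb signature contemporary with this API (it has changed across TSDB versions), and the working directory is illustrative:

func compactGroup(ctx context.Context, logger log.Logger, group *compact.Group) error {
	// Ranges define the target block durations for leveled compaction.
	comp, err := tsdb.NewLeveledCompactor(ctx, nil, logger,
		[]int64{(2 * time.Hour).Milliseconds(), (8 * time.Hour).Milliseconds()}, nil)
	if err != nil {
		return err
	}
	// Compact may ask to be rerun, e.g. after garbage was removed from the group.
	for {
		shouldRerun, compID, err := group.Compact(ctx, "/tmp/compact-workdir", comp)
		if err != nil {
			return err // classify via IsHaltError / IsRetryError / IsIssue347Error.
		}
		if !shouldRerun {
			return nil
		}
		level.Info(logger).Log("msg", "uploaded compacted block, rerunning", "block", compID)
	}
}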
func (*Group) Resolution ¶
func (cg *Group) Resolution() int64

Resolution returns the common downsampling resolution of blocks in the group.
type Grouper ¶ added in v0.14.0
type Grouper interface {
	// Groups returns the compaction groups for all blocks currently known to the syncer.
	// It creates all groups from scratch on every call.
	Groups(blocks map[ulid.ULID]*metadata.Meta) (res []*Group, err error)
}
Grouper is responsible for grouping all known blocks into subgroups that are safe to compact concurrently. A custom implementation sketch follows below.
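A purely illustrative custom Grouper that delegates to another implementation and keeps only groups containing at least minBlocks blocks; the type and field names are hypothetical:

type minSizeGrouper struct {
	inner     compact.Grouper
	minBlocks int
}

func (g *minSizeGrouper) Groups(blocks map[ulid.ULID]*metadata.Meta) ([]*compact.Group, error) {
	all, err := g.inner.Groups(blocks)
	if err != nil {
		return nil, err
	}
	var kept []*compact.Group
	for _, grp := range all {
		if len(grp.IDs()) >= g.minBlocks {
			kept = append(kept, grp)
		}
	}
	return kept, nil
}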
type HaltError ¶
type HaltError struct {
// contains filtered or unexported fields
}
HaltError is a type wrapper for errors that should halt any further progress on compactions.
type Issue347Error ¶
type Issue347Error struct {
// contains filtered or unexported fields
}
Issue347Error is a type wrapper for errors that should invoke the repair process for a broken block.
func (Issue347Error) Error ¶
func (e Issue347Error) Error() string
type ResolutionLevel ¶
type ResolutionLevel int64
type RetryError ¶
type RetryError struct {
// contains filtered or unexported fields
}
RetryError is a type wrapper for errors that should trigger a warning log and a retry of the whole compaction loop, while aborting further progress of the current compaction.
func (RetryError) Error ¶
func (e RetryError) Error() string
type Syncer ¶
type Syncer struct {
// contains filtered or unexported fields
}
Syncer synchronizes block metas from a bucket into a local directory. It sorts them into compaction groups based on equal label sets.
func NewSyncer ¶
func NewSyncer(logger log.Logger, reg prometheus.Registerer, bkt objstore.Bucket, fetcher block.MetadataFetcher, duplicateBlocksFilter *block.DeduplicateFilter, ignoreDeletionMarkFilter *block.IgnoreDeletionMarkFilter, blocksMarkedForDeletion prometheus.Counter, garbageCollectedBlocks prometheus.Counter, blockSyncConcurrency int) (*Syncer, error)
NewSyncer returns a new Syncer for the given Bucket and directory. Blocks must be at least as old as the sync delay to be considered.
func (*Syncer) GarbageCollect ¶
func (s *Syncer) GarbageCollect(ctx context.Context) error

GarbageCollect marks blocks for deletion from the bucket if their data is available as part of a block with a higher compaction level. A call to the SyncMetas function is required beforehand to populate duplicateIDs in duplicateBlocksFilter.
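A sketch of the typical sync-then-garbage-collect flow. The fetcher and filters are assumed to be constructed elsewhere (their constructors vary by version), and SyncMetas is shown with a (ctx) error signature, an assumption based on its mention above:

func syncAndGC(ctx context.Context, logger log.Logger, bkt objstore.Bucket,
	fetcher block.MetadataFetcher, dupFilter *block.DeduplicateFilter,
	markFilter *block.IgnoreDeletionMarkFilter, marked, gced prometheus.Counter) error {
	sy, err := compact.NewSyncer(logger, prometheus.NewRegistry(), bkt, fetcher,
		dupFilter, markFilter, marked, gced, 20 /* blockSyncConcurrency */)
	if err != nil {
		return err
	}
	// SyncMetas populates duplicateIDs in dupFilter, which GarbageCollect requires.
	if err := sy.SyncMetas(ctx); err != nil {
		return err
	}
	return sy.GarbageCollect(ctx)
}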