Documentation ¶
Index ¶
- Constants
- func ApplyRetentionPolicyByResolution(ctx context.Context, logger log.Logger, bkt objstore.Bucket, ...) error
- func BestEffortCleanAbortedPartialUploads(ctx context.Context, logger log.Logger, partial map[ulid.ULID]error, ...)
- func IsHaltError(err error) bool
- func IsIssue347Error(err error) bool
- func IsOutOfOrderChunkError(err error) bool
- func IsRetryError(err error) bool
- func NewPlanner(logger log.Logger, ranges []int64, noCompBlocks *GatherNoCompactionMarkFilter) *tsdbBasedPlanner
- func NewRetryError(err error) error
- func NewTSDBBasedPlanner(logger log.Logger, ranges []int64) *tsdbBasedPlanner
- func RepairIssue347(ctx context.Context, logger log.Logger, bkt objstore.Bucket, ...) error
- func UntilNextDownsampling(m *metadata.Meta) (time.Duration, error)
- func WithLargeTotalIndexSizeFilter(with *tsdbBasedPlanner, bkt objstore.Bucket, totalMaxIndexSizeBytes int64, ...) *largeTotalIndexSizeFilter
- type BlockDeletableChecker
- type BlocksCleaner
- type BucketCompactor
- type CompactProgressMetrics
- type CompactionLifecycleCallback
- type CompactionProgressCalculator
- type Compactor
- type DefaultBlockDeletableChecker
- type DefaultCompactionLifecycleCallback
- func (c DefaultCompactionLifecycleCallback) GetBlockPopulator(_ context.Context, _ log.Logger, _ *Group) (tsdb.BlockPopulator, error)
- func (c DefaultCompactionLifecycleCallback) PostCompactionCallback(_ context.Context, _ log.Logger, _ *Group, _ ulid.ULID) error
- func (c DefaultCompactionLifecycleCallback) PreCompactionCallback(_ context.Context, logger log.Logger, cg *Group, ...) error
- type DefaultGrouper
- type DownsampleProgressCalculator
- type DownsampleProgressMetrics
- type GatherNoCompactionMarkFilter
- type Group
- func (cg *Group) AppendMeta(meta *metadata.Meta) error
- func (cg *Group) Compact(ctx context.Context, dir string, planner Planner, comp Compactor, ...) (shouldRerun bool, compIDs []ulid.ULID, rerr error)
- func (cg *Group) Extensions() any
- func (cg *Group) IDs() (ids []ulid.ULID)
- func (cg *Group) Key() string
- func (cg *Group) Labels() labels.Labels
- func (cg *Group) MaxTime() int64
- func (cg *Group) MinTime() int64
- func (cg *Group) Resolution() int64
- func (cg *Group) SetExtensions(extensions any)
- type Grouper
- type HaltError
- type Issue347Error
- type OutOfOrderChunksError
- type Planner
- type ProgressCalculator
- type ResolutionLevel
- type RetentionProgressCalculator
- type RetentionProgressMetrics
- type RetryError
- type Syncer
- type SyncerMetrics
Constants ¶
const (
    ResolutionLevelRaw = ResolutionLevel(downsample.ResLevel0)
    ResolutionLevel5m  = ResolutionLevel(downsample.ResLevel1)
    ResolutionLevel1h  = ResolutionLevel(downsample.ResLevel2)
)
const (
    // DedupAlgorithmPenalty is the penalty based compactor series merge algorithm.
    // This is the same as the online deduplication of querier except counter reset handling.
    DedupAlgorithmPenalty = "penalty"
)
const (
    // PartialUploadThresholdAge is a time after partial block is assumed aborted and ready to be cleaned.
    // Keep it long as it is based on block creation time not upload start time.
    PartialUploadThresholdAge = 2 * 24 * time.Hour
)
Variables ¶
This section is empty.
Functions ¶
func ApplyRetentionPolicyByResolution ¶
func ApplyRetentionPolicyByResolution(
    ctx context.Context,
    logger log.Logger,
    bkt objstore.Bucket,
    metas map[ulid.ULID]*metadata.Meta,
    retentionByResolution map[ResolutionLevel]time.Duration,
    blocksMarkedForDeletion prometheus.Counter,
) error
ApplyRetentionPolicyByResolution removes blocks depending on the specified retentionByResolution, based on the blocks' MaxTime. A value of 0 disables retention for that resolution.
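For illustration, a minimal sketch of a retention configuration and call. The surrounding ctx, logger, bkt, metas, and blocksMarkedForDeletion counter are assumed to exist; the durations are arbitrary:

retentionByResolution := map[compact.ResolutionLevel]time.Duration{
    compact.ResolutionLevelRaw: 30 * 24 * time.Hour, // keep raw blocks for 30 days
    compact.ResolutionLevel5m:  90 * 24 * time.Hour, // keep 5m-downsampled blocks for 90 days
    compact.ResolutionLevel1h:  0,                   // 0 disables retention: keep 1h blocks forever
}
if err := compact.ApplyRetentionPolicyByResolution(ctx, logger, bkt, metas, retentionByResolution, blocksMarkedForDeletion); err != nil {
    // Handle the error; blocks past retention were marked for deletion up to this point.
}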
func BestEffortCleanAbortedPartialUploads ¶ added in v0.10.0
func BestEffortCleanAbortedPartialUploads(
    ctx context.Context,
    logger log.Logger,
    partial map[ulid.ULID]error,
    bkt objstore.Bucket,
    deleteAttempts prometheus.Counter,
    blockCleanups prometheus.Counter,
    blockCleanupFailures prometheus.Counter,
)
func IsHaltError ¶
func IsHaltError(err error) bool

IsHaltError returns true if the base error is a HaltError. If a multierror is passed, any halt error will return true.
func IsIssue347Error ¶
func IsIssue347Error(err error) bool

IsIssue347Error returns true if the base error is an Issue347Error.
func IsOutOfOrderChunkError ¶ added in v0.23.0
func IsOutOfOrderChunkError(err error) bool

IsOutOfOrderChunkError returns true if the base error is an OutOfOrderChunksError.
func IsRetryError ¶
func IsRetryError(err error) bool

IsRetryError returns true if the base error is a RetryError. If a multierror is passed, all errors must be retriable.
func NewPlanner ¶ added in v0.17.0
func NewPlanner(logger log.Logger, ranges []int64, noCompBlocks *GatherNoCompactionMarkFilter) *tsdbBasedPlanner
NewPlanner returns the default Thanos planner. It provides the same functionality as Prometheus' TSDB planner, but without accessing the filesystem and with special handling of excluded blocks.
func NewRetryError ¶ added in v0.35.0

func NewRetryError(err error) error
func NewTSDBBasedPlanner ¶ added in v0.17.0
func NewTSDBBasedPlanner(logger log.Logger, ranges []int64) *tsdbBasedPlanner

NewTSDBBasedPlanner is a planner with the same functionality as Prometheus' TSDB planner, just without accessing the filesystem. TODO(bwplotka): Consider upstreaming this to Prometheus.
func RepairIssue347 ¶
func RepairIssue347(ctx context.Context, logger log.Logger, bkt objstore.Bucket, blocksMarkedForDeletion prometheus.Counter, issue347Err error) error
RepairIssue347 repairs the https://github.com/prometheus/tsdb/issues/347 issue when having issue347Error.
func UntilNextDownsampling ¶ added in v0.8.0
func UntilNextDownsampling(m *metadata.Meta) (time.Duration, error)

UntilNextDownsampling calculates how long it will take until the next downsampling operation. Returns an error if there will be no downsampling.
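A small usage sketch, where m is an assumed *metadata.Meta loaded elsewhere:

wait, err := compact.UntilNextDownsampling(m)
if err != nil {
    // The block will not be downsampled further (e.g. already at the highest resolution level).
    return
}
// Schedule the next downsampling pass after `wait`.
time.Sleep(wait)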
func WithLargeTotalIndexSizeFilter ¶ added in v0.17.0
func WithLargeTotalIndexSizeFilter(with *tsdbBasedPlanner, bkt objstore.Bucket, totalMaxIndexSizeBytes int64, markedForNoCompact prometheus.Counter) *largeTotalIndexSizeFilter
WithLargeTotalIndexSizeFilter wraps the Planner with largeTotalIndexSizeFilter that checks the given plans and estimates the total index size. When the limit is exceeded, it marks the block for no compaction by placing a no-compact-mark.json and updating the cache. NOTE: The estimation is very rough, as it assumes the extreme case of indexes sharing no bytes and thus sums all source index sizes. Adjust the limit accordingly, reducing it to some percentage of the actual limit you want to enforce. TODO(bwplotka): This is a short-term fix for https://github.com/thanos-io/thanos/issues/1424; replace with vertical block sharding https://github.com/thanos-io/thanos/pull/3390.
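A hedged sketch of wrapping the default planner. The 64 GiB cap is an arbitrary choice, and logger, ranges, bkt, noCompactFilter, and markedForNoCompact are assumed to exist:

planner := compact.WithLargeTotalIndexSizeFilter(
    compact.NewPlanner(logger, ranges, noCompactFilter),
    bkt,
    64<<30, // assumed 64 GiB cap; keep below the real limit, since the estimate pessimistically sums source index sizes
    markedForNoCompact,
)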
Types ¶
type BlockDeletableChecker ¶ added in v0.32.0

type BlockDeletableChecker interface {
    CanDelete(group *Group, blockID ulid.ULID) bool
}
type BlocksCleaner ¶ added in v0.12.0
type BlocksCleaner struct {
// contains filtered or unexported fields
}
BlocksCleaner deletes blocks from the bucket that are marked for deletion.
func NewBlocksCleaner ¶ added in v0.12.0
func NewBlocksCleaner(logger log.Logger, bkt objstore.Bucket, ignoreDeletionMarkFilter *block.IgnoreDeletionMarkFilter, deleteDelay time.Duration, blocksCleaned, blockCleanupFailures prometheus.Counter) *BlocksCleaner
NewBlocksCleaner creates a new BlocksCleaner.
func (*BlocksCleaner) DeleteMarkedBlocks ¶ added in v0.12.0
func (s *BlocksCleaner) DeleteMarkedBlocks(ctx context.Context) error
DeleteMarkedBlocks uses ignoreDeletionMarkFilter to gather the blocks that are marked for deletion and deletes those older than the given deleteDelay.
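For example, a sketch that deletes marked blocks older than 48 hours. The filter and the two counters are assumed to be registered elsewhere:

cleaner := compact.NewBlocksCleaner(logger, bkt, ignoreDeletionMarkFilter, 48*time.Hour, blocksCleaned, blockCleanupFailures)
if err := cleaner.DeleteMarkedBlocks(ctx); err != nil {
    // Handle the error; already-deleted blocks are counted in blocksCleaned.
}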
type BucketCompactor ¶
type BucketCompactor struct {
// contains filtered or unexported fields
}
BucketCompactor compacts blocks in a bucket.
func NewBucketCompactor ¶
func NewBucketCompactor(
    logger log.Logger,
    sy *Syncer,
    grouper Grouper,
    planner Planner,
    comp Compactor,
    compactDir string,
    bkt objstore.Bucket,
    concurrency int,
    skipBlocksWithOutOfOrderChunks bool,
) (*BucketCompactor, error)
NewBucketCompactor creates a new bucket compactor.
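A minimal wiring sketch, assuming the syncer, grouper, planner, and compactor were constructed as described elsewhere in this package. The Compact method on BucketCompactor comes from the Thanos source (its methods are not listed above); the concurrency of 4 and out-of-order skipping are arbitrary choices:

bc, err := compact.NewBucketCompactor(logger, sy, grouper, planner, comp, "/tmp/compact", bkt, 4, true)
if err != nil {
    return err
}
// Runs the compaction loop over all groups until no group reports it should rerun.
if err := bc.Compact(ctx); err != nil {
    return err
}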
func NewBucketCompactorWithCheckerAndCallback ¶ added in v0.32.0
func NewBucketCompactorWithCheckerAndCallback(
    logger log.Logger,
    sy *Syncer,
    grouper Grouper,
    planner Planner,
    comp Compactor,
    blockDeletableChecker BlockDeletableChecker,
    compactionLifecycleCallback CompactionLifecycleCallback,
    compactDir string,
    bkt objstore.Bucket,
    concurrency int,
    skipBlocksWithOutOfOrderChunks bool,
) (*BucketCompactor, error)
type CompactProgressMetrics ¶ added in v0.24.0
type CompactProgressMetrics struct {
    NumberOfCompactionRuns   prometheus.Gauge
    NumberOfCompactionBlocks prometheus.Gauge
}
CompactProgressMetrics contains Prometheus metrics related to compaction progress.
type CompactionLifecycleCallback ¶ added in v0.32.0
type CompactionLifecycleCallback interface {
    PreCompactionCallback(ctx context.Context, logger log.Logger, group *Group, toCompactBlocks []*metadata.Meta) error
    PostCompactionCallback(ctx context.Context, logger log.Logger, group *Group, blockID ulid.ULID) error
    GetBlockPopulator(ctx context.Context, logger log.Logger, group *Group) (tsdb.BlockPopulator, error)
}
type CompactionProgressCalculator ¶ added in v0.24.0
type CompactionProgressCalculator struct {
    *CompactProgressMetrics
    // contains filtered or unexported fields
}
CompactionProgressCalculator contains a planner and ProgressMetrics, which are updated during the compaction simulation process.
func NewCompactionProgressCalculator ¶ added in v0.24.0
func NewCompactionProgressCalculator(reg prometheus.Registerer, planner *tsdbBasedPlanner) *CompactionProgressCalculator
NewCompactionProgressCalculator creates a new CompactionProgressCalculator.
func (*CompactionProgressCalculator) ProgressCalculate ¶ added in v0.24.0
func (ps *CompactionProgressCalculator) ProgressCalculate(ctx context.Context, groups []*Group) error
ProgressCalculate calculates the number of blocks and compaction runs in the planning process of the given groups.
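For example, a sketch that wires the calculator to a planner and updates the progress gauges. The reg, logger, ranges, and groups variables are assumed; the same ProgressCalculate pattern applies to the downsample and retention calculators below:

planner := compact.NewTSDBBasedPlanner(logger, ranges)
pc := compact.NewCompactionProgressCalculator(reg, planner)
if err := pc.ProgressCalculate(ctx, groups); err != nil {
    return err
}
// The NumberOfCompactionRuns and NumberOfCompactionBlocks gauges now reflect the simulated plan.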
type Compactor ¶ added in v0.17.0
type Compactor interface {
    // Compact runs compaction against the provided directories. Must
    // only be called concurrently with results of Plan().
    // Can optionally pass a list of already open blocks,
    // to avoid having to reopen them.
    // Prometheus always returns one or no block. The interface allows returning more than one
    // block for downstream users to experiment with compactor.
    // When one resulting block has 0 samples:
    //  * No block is written.
    //  * The source dirs are marked Deletable.
    //  * Block is not included in the result.
    Compact(dest string, dirs []string, open []*tsdb.Block) ([]ulid.ULID, error)
    CompactWithBlockPopulator(dest string, dirs []string, open []*tsdb.Block, blockPopulator tsdb.BlockPopulator) ([]ulid.ULID, error)
}
Compactor provides compaction against an underlying storage of time series data. It is similar to tsdb.Compactor but only relevant methods are kept. Plan and Write are removed. TODO(bwplotka): Split the Planner from Compactor on upstream as well, so we can import it.
type DefaultBlockDeletableChecker ¶ added in v0.32.0
type DefaultBlockDeletableChecker struct{}
type DefaultCompactionLifecycleCallback ¶ added in v0.32.0
type DefaultCompactionLifecycleCallback struct{}
func (DefaultCompactionLifecycleCallback) GetBlockPopulator ¶ added in v0.32.0
func (c DefaultCompactionLifecycleCallback) GetBlockPopulator(_ context.Context, _ log.Logger, _ *Group) (tsdb.BlockPopulator, error)
func (DefaultCompactionLifecycleCallback) PostCompactionCallback ¶ added in v0.32.0

func (c DefaultCompactionLifecycleCallback) PostCompactionCallback(_ context.Context, _ log.Logger, _ *Group, _ ulid.ULID) error
type DefaultGrouper ¶ added in v0.14.0
type DefaultGrouper struct {
// contains filtered or unexported fields
}
DefaultGrouper is the Thanos built-in grouper. It groups blocks based on downsample resolution and blocks' labels.
func NewDefaultGrouper ¶ added in v0.14.0
func NewDefaultGrouper(
    logger log.Logger,
    bkt objstore.Bucket,
    acceptMalformedIndex bool,
    enableVerticalCompaction bool,
    reg prometheus.Registerer,
    blocksMarkedForDeletion prometheus.Counter,
    garbageCollectedBlocks prometheus.Counter,
    blocksMarkedForNoCompact prometheus.Counter,
    hashFunc metadata.HashFunc,
    blockFilesConcurrency int,
    compactBlocksFetchConcurrency int,
) *DefaultGrouper
NewDefaultGrouper makes a new DefaultGrouper.
func NewDefaultGrouperWithMetrics ¶ added in v0.33.0
func NewDefaultGrouperWithMetrics(
    logger log.Logger,
    bkt objstore.Bucket,
    acceptMalformedIndex bool,
    enableVerticalCompaction bool,
    compactions *prometheus.CounterVec,
    compactionRunsStarted *prometheus.CounterVec,
    compactionRunsCompleted *prometheus.CounterVec,
    compactionFailures *prometheus.CounterVec,
    verticalCompactions *prometheus.CounterVec,
    blocksMarkedForDeletion prometheus.Counter,
    garbageCollectedBlocks prometheus.Counter,
    blocksMarkedForNoCompact prometheus.Counter,
    hashFunc metadata.HashFunc,
    blockFilesConcurrency int,
    compactBlocksFetchConcurrency int,
) *DefaultGrouper
NewDefaultGrouperWithMetrics makes a new DefaultGrouper.
type DownsampleProgressCalculator ¶ added in v0.24.0
type DownsampleProgressCalculator struct {
*DownsampleProgressMetrics
}
DownsampleProgressCalculator contains DownsampleProgressMetrics, which are updated during the downsampling simulation process.
func NewDownsampleProgressCalculator ¶ added in v0.24.0
func NewDownsampleProgressCalculator(reg prometheus.Registerer) *DownsampleProgressCalculator
NewDownsampleProgressCalculator creates a new DownsampleProgressCalculator.
func (*DownsampleProgressCalculator) ProgressCalculate ¶ added in v0.24.0
func (ds *DownsampleProgressCalculator) ProgressCalculate(ctx context.Context, groups []*Group) error
ProgressCalculate calculates the number of blocks to be downsampled for the given groups.
type DownsampleProgressMetrics ¶ added in v0.24.0
type DownsampleProgressMetrics struct {
NumberOfBlocksDownsampled prometheus.Gauge
}
DownsampleProgressMetrics contains Prometheus metrics related to downsampling progress.
type GatherNoCompactionMarkFilter ¶ added in v0.17.0
type GatherNoCompactionMarkFilter struct {
// contains filtered or unexported fields
}
GatherNoCompactionMarkFilter is a block.Fetcher filter that passes all metas. While doing so, it gathers all no-compact-mark.json markers. Not goroutine safe. TODO(bwplotka): Add unit test.
func NewGatherNoCompactionMarkFilter ¶ added in v0.17.0
func NewGatherNoCompactionMarkFilter(logger log.Logger, bkt objstore.InstrumentedBucketReader, concurrency int) *GatherNoCompactionMarkFilter
NewGatherNoCompactionMarkFilter creates GatherNoCompactionMarkFilter.
func (*GatherNoCompactionMarkFilter) Filter ¶ added in v0.17.0
func (f *GatherNoCompactionMarkFilter) Filter(ctx context.Context, metas map[ulid.ULID]*metadata.Meta, synced block.GaugeVec, modified block.GaugeVec) error
Filter passes all metas, while gathering no compact markers.
func (*GatherNoCompactionMarkFilter) NoCompactMarkedBlocks ¶ added in v0.17.0
func (f *GatherNoCompactionMarkFilter) NoCompactMarkedBlocks() map[ulid.ULID]*metadata.NoCompactMark
NoCompactMarkedBlocks returns block ids that were marked for no compaction.
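A usage sketch: the filter is typically registered with a block metadata fetcher, and the gathered marks feed the planner. Here insBkt is an assumed objstore.InstrumentedBucketReader, and the concurrency of 32 is arbitrary:

noCompactFilter := compact.NewGatherNoCompactionMarkFilter(logger, insBkt, 32)
// Register noCompactFilter.Filter with the block metadata fetcher, run a sync, then:
planner := compact.NewPlanner(logger, ranges, noCompactFilter)
marked := noCompactFilter.NoCompactMarkedBlocks() // ULIDs excluded from compaction
_, _ = planner, marked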
type Group ¶
type Group struct {
// contains filtered or unexported fields
}
Group captures a set of blocks that have the same origin labels and downsampling resolution. Those blocks generally contain the same series and can thus efficiently be compacted.
func NewGroup ¶ added in v0.14.0
func NewGroup(
    logger log.Logger,
    bkt objstore.Bucket,
    key string,
    lset labels.Labels,
    resolution int64,
    acceptMalformedIndex bool,
    enableVerticalCompaction bool,
    compactions prometheus.Counter,
    compactionRunsStarted prometheus.Counter,
    compactionRunsCompleted prometheus.Counter,
    compactionFailures prometheus.Counter,
    verticalCompactions prometheus.Counter,
    groupGarbageCollectedBlocks prometheus.Counter,
    blocksMarkedForDeletion prometheus.Counter,
    blocksMarkedForNoCompact prometheus.Counter,
    hashFunc metadata.HashFunc,
    blockFilesConcurrency int,
    compactBlocksFetchConcurrency int,
) (*Group, error)
NewGroup returns a new compaction group.
func (*Group) AppendMeta ¶ added in v0.20.0
func (cg *Group) AppendMeta(meta *metadata.Meta) error

AppendMeta appends the block with the given meta to the group.
func (*Group) Compact ¶
func (cg *Group) Compact(ctx context.Context, dir string, planner Planner, comp Compactor, blockDeletableChecker BlockDeletableChecker, compactionLifecycleCallback CompactionLifecycleCallback) (shouldRerun bool, compIDs []ulid.ULID, rerr error)
Compact plans and runs a single compaction against the group. The compacted result is uploaded into the bucket the blocks were retrieved from.
func (*Group) Extensions ¶ added in v0.32.0

func (cg *Group) Extensions() any
func (*Group) Resolution ¶
func (cg *Group) Resolution() int64

Resolution returns the common downsampling resolution of blocks in the group.
func (*Group) SetExtensions ¶ added in v0.32.0

func (cg *Group) SetExtensions(extensions any)
type Grouper ¶ added in v0.14.0
type Grouper interface {
    // Groups returns the compaction groups for all blocks currently known to the syncer.
    // It creates all groups from scratch on every call.
    Groups(blocks map[ulid.ULID]*metadata.Meta) (res []*Group, err error)
}
Grouper is responsible for grouping all known blocks into subgroups that are safe to compact concurrently.
type HaltError ¶
type HaltError struct {
// contains filtered or unexported fields
}
HaltError is a type wrapper for errors that should halt any further progress on compactions.
type Issue347Error ¶
type Issue347Error struct {
// contains filtered or unexported fields
}
Issue347Error is a type wrapper for errors that should invoke the repair process for a broken block.
func (Issue347Error) Error ¶
func (e Issue347Error) Error() string
type OutOfOrderChunksError ¶ added in v0.23.0
type OutOfOrderChunksError struct {
// contains filtered or unexported fields
}
OutOfOrderChunksError is a type wrapper for an out-of-order chunk error found when validating a block index.
func (OutOfOrderChunksError) Error ¶ added in v0.23.0
func (e OutOfOrderChunksError) Error() string
type Planner ¶ added in v0.17.0
type Planner interface {
    // Plan returns a list of blocks that should be compacted into a single one.
    // The blocks can be overlapping. The provided metadata has to be ordered by minTime.
    Plan(ctx context.Context, metasByMinTime []*metadata.Meta, errChan chan error, extensions any) ([]*metadata.Meta, error)
}
Planner returns blocks to compact.
func WithVerticalCompactionDownsampleFilter ¶ added in v0.36.0
func WithVerticalCompactionDownsampleFilter(with *largeTotalIndexSizeFilter, bkt objstore.Bucket, markedForNoCompact prometheus.Counter) Planner
type ProgressCalculator ¶ added in v0.24.0
type ProgressCalculator interface {
    ProgressCalculate(ctx context.Context, groups []*Group) error
}

ProgressCalculator calculates the progress of the compaction process for a given slice of Groups.
type ResolutionLevel ¶
type ResolutionLevel int64
type RetentionProgressCalculator ¶ added in v0.24.0
type RetentionProgressCalculator struct {
    *RetentionProgressMetrics
    // contains filtered or unexported fields
}
RetentionProgressCalculator contains RetentionProgressMetrics, which are updated during the retention simulation process.
func NewRetentionProgressCalculator ¶ added in v0.24.0
func NewRetentionProgressCalculator(reg prometheus.Registerer, retentionByResolution map[ResolutionLevel]time.Duration) *RetentionProgressCalculator
NewRetentionProgressCalculator creates a new RetentionProgressCalculator.
func (*RetentionProgressCalculator) ProgressCalculate ¶ added in v0.24.0
func (rs *RetentionProgressCalculator) ProgressCalculate(ctx context.Context, groups []*Group) error
ProgressCalculate calculates the number of blocks to be retained for the given groups.
type RetentionProgressMetrics ¶ added in v0.24.0
type RetentionProgressMetrics struct {
NumberOfBlocksToDelete prometheus.Gauge
}
RetentionProgressMetrics contains Prometheus metrics related to retention progress.
type RetryError ¶
type RetryError struct {
// contains filtered or unexported fields
}
RetryError is a type wrapper for errors that should trigger a warning log and a retry of the whole compaction loop, while aborting further progress of the current compaction.
func (RetryError) Error ¶
func (e RetryError) Error() string
func (RetryError) Unwrap ¶ added in v0.35.0
func (e RetryError) Unwrap() error
type Syncer ¶
type Syncer struct {
// contains filtered or unexported fields
}
Syncer synchronizes block metas from a bucket into a local directory. It sorts them into compaction groups based on equal label sets.
func NewMetaSyncer ¶ added in v0.20.0
func NewMetaSyncer(logger log.Logger, reg prometheus.Registerer, bkt objstore.Bucket, fetcher block.MetadataFetcher, duplicateBlocksFilter block.DeduplicateFilter, ignoreDeletionMarkFilter *block.IgnoreDeletionMarkFilter, blocksMarkedForDeletion, garbageCollectedBlocks prometheus.Counter) (*Syncer, error)
NewMetaSyncer returns a new Syncer for the given Bucket and directory. Blocks must be at least as old as the sync delay to be considered.
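A sketch of the usual sync-then-collect sequence. The fetcher and filters are assumed to be constructed via the block package; SyncMetas is part of Syncer in the Thanos source even though it is not listed above:

sy, err := compact.NewMetaSyncer(logger, reg, bkt, fetcher, duplicateBlocksFilter, ignoreDeletionMarkFilter, blocksMarkedForDeletion, garbageCollectedBlocks)
if err != nil {
    return err
}
if err := sy.SyncMetas(ctx); err != nil { // populates metas and the duplicate filter
    return err
}
if err := sy.GarbageCollect(ctx); err != nil { // marks superseded blocks for deletion
    return err
}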
func NewMetaSyncerWithMetrics ¶ added in v0.33.0
func NewMetaSyncerWithMetrics(logger log.Logger, metrics *SyncerMetrics, bkt objstore.Bucket, fetcher block.MetadataFetcher, duplicateBlocksFilter block.DeduplicateFilter, ignoreDeletionMarkFilter *block.IgnoreDeletionMarkFilter) (*Syncer, error)
func (*Syncer) GarbageCollect ¶
func (s *Syncer) GarbageCollect(ctx context.Context) error

GarbageCollect marks blocks for deletion from the bucket if their data is available as part of a block with a higher compaction level. A call to SyncMetas is required beforehand to populate duplicateIDs in duplicateBlocksFilter.
type SyncerMetrics ¶ added in v0.33.0
type SyncerMetrics struct {
    GarbageCollectedBlocks    prometheus.Counter
    GarbageCollections        prometheus.Counter
    GarbageCollectionFailures prometheus.Counter
    GarbageCollectionDuration prometheus.Observer
    BlocksMarkedForDeletion   prometheus.Counter
}
SyncerMetrics holds metrics tracked by the syncer. This struct and its fields are exported to allow depending projects (e.g. Cortex) to implement their own custom syncer while tracking compatible metrics.
func NewSyncerMetrics ¶ added in v0.33.0
func NewSyncerMetrics(reg prometheus.Registerer, blocksMarkedForDeletion, garbageCollectedBlocks prometheus.Counter) *SyncerMetrics