package compact

v0.20.0-rc.0
Published: Apr 23, 2021 License: Apache-2.0 Imports: 25 Imported by: 4

Documentation

Index

Constants

const (
	ResolutionLevelRaw = ResolutionLevel(downsample.ResLevel0)
	ResolutionLevel5m  = ResolutionLevel(downsample.ResLevel1)
	ResolutionLevel1h  = ResolutionLevel(downsample.ResLevel2)
)
const (
	// PartialUploadThresholdAge is the age after which a partial block is assumed aborted and ready to be cleaned.
	// Keep it long, as it is based on the block creation time, not the upload start time.
	PartialUploadThresholdAge = 2 * 24 * time.Hour
)

Variables

This section is empty.

Functions

func ApplyRetentionPolicyByResolution

func ApplyRetentionPolicyByResolution(
	ctx context.Context,
	logger log.Logger,
	bkt objstore.Bucket,
	metas map[ulid.ULID]*metadata.Meta,
	retentionByResolution map[ResolutionLevel]time.Duration,
	blocksMarkedForDeletion prometheus.Counter,
) error

ApplyRetentionPolicyByResolution removes blocks, depending on the specified retentionByResolution, based on the blocks' MaxTime. A value of 0 disables retention for that resolution.
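
For example, a minimal sketch of a retention configuration (ctx, logger, bkt, metas, and the blocksMarkedForDeletion counter are assumed to come from the surrounding compactor setup):

retention := map[compact.ResolutionLevel]time.Duration{
	compact.ResolutionLevelRaw: 30 * 24 * time.Hour, // keep raw blocks for 30 days
	compact.ResolutionLevel5m:  90 * 24 * time.Hour, // keep 5m-downsampled blocks for 90 days
	compact.ResolutionLevel1h:  0,                   // 0 disables retention at this resolution
}
if err := compact.ApplyRetentionPolicyByResolution(ctx, logger, bkt, metas, retention, blocksMarkedForDeletion); err != nil {
	return err
}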

func BestEffortCleanAbortedPartialUploads added in v0.10.0

func BestEffortCleanAbortedPartialUploads(
	ctx context.Context,
	logger log.Logger,
	partial map[ulid.ULID]error,
	bkt objstore.Bucket,
	deleteAttempts prometheus.Counter,
	blockCleanups prometheus.Counter,
	blockCleanupFailures prometheus.Counter,
)
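
A hedged sketch of how this is typically wired after a metadata sync (sy is an assumed *Syncer, and the three counters are assumed to be registered elsewhere):

if err := sy.SyncMetas(ctx); err != nil {
	return err
}
// Best effort only: failures are counted but do not abort the caller.
compact.BestEffortCleanAbortedPartialUploads(ctx, logger, sy.Partial(), bkt,
	deleteAttempts, blockCleanups, blockCleanupFailures)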

func DefaultGroupKey added in v0.14.0

func DefaultGroupKey(meta metadata.Thanos) string

DefaultGroupKey returns a unique identifier for the group the block belongs to, based on the DefaultGrouper logic. It considers the downsampling resolution and the block's labels.

func IsHaltError

func IsHaltError(err error) bool

IsHaltError returns true if the base error is a HaltError. If a multierror is passed, any halt error will return true.

func IsIssue347Error

func IsIssue347Error(err error) bool

IsIssue347Error returns true if the base error is an Issue347Error.

func IsRetryError

func IsRetryError(err error) bool

IsRetryError returns true if the base error is a RetryError. If a multierror is passed, all errors must be retriable.
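
Together, the three predicates drive the compactor's error handling. A hedged sketch of the dispatch around a compaction run (bucketCompactor, logger, bkt, and the counter are assumptions):

err := bucketCompactor.Compact(ctx)
if err == nil {
	return nil
}
if compact.IsHaltError(err) {
	// Unrecoverable; stop the compactor instead of looping.
	return err
}
if compact.IsIssue347Error(err) {
	// A broken block from the old tsdb issue 347; attempt the automated repair.
	if repairErr := compact.RepairIssue347(ctx, logger, bkt, blocksMarkedForDeletion, err); repairErr != nil {
		return repairErr
	}
}
if compact.IsRetryError(err) {
	// Warn and let the outer loop retry the whole compaction.
	level.Warn(logger).Log("msg", "retriable compaction error", "err", err)
}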

func NewPlanner added in v0.17.0

func NewPlanner(logger log.Logger, ranges []int64, noCompBlocks *GatherNoCompactionMarkFilter) *tsdbBasedPlanner

NewPlanner returns the default Thanos planner. It has the same functionality as Prometheus' TSDB planner, but without accessing the filesystem and with special handling of excluded blocks.

func NewTSDBBasedPlanner added in v0.17.0

func NewTSDBBasedPlanner(logger log.Logger, ranges []int64) *tsdbBasedPlanner

NewTSDBBasedPlanner returns a planner with the same functionality as Prometheus' TSDB planner, just without accessing the filesystem. TODO(bwplotka): Consider upstreaming this to Prometheus.

func RepairIssue347

func RepairIssue347(ctx context.Context, logger log.Logger, bkt objstore.Bucket, blocksMarkedForDeletion prometheus.Counter, issue347Err error) error

RepairIssue347 repairs the https://github.com/prometheus/tsdb/issues/347 issue when an Issue347Error is encountered.

func UntilNextDownsampling added in v0.8.0

func UntilNextDownsampling(m *metadata.Meta) (time.Duration, error)

UntilNextDownsampling calculates how long it will take until the next downsampling operation. Returns an error if there will be no downsampling.
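
For instance (meta is an assumed *metadata.Meta for a block in the bucket):

d, err := compact.UntilNextDownsampling(meta)
if err != nil {
	// The block will not be downsampled further (e.g. it is already at the 1h resolution).
	return err
}
level.Info(logger).Log("msg", "block eligible for downsampling", "in", d)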

func WithLargeTotalIndexSizeFilter added in v0.17.0

func WithLargeTotalIndexSizeFilter(with *tsdbBasedPlanner, bkt objstore.Bucket, totalMaxIndexSizeBytes int64, markedForNoCompact prometheus.Counter) *largeTotalIndexSizeFilter

WithLargeTotalIndexSizeFilter wraps the planner with largeTotalIndexSizeFilter, which checks the given plans and estimates the total index size. When the estimate exceeds totalMaxIndexSizeBytes, it marks the block for no compaction by placing a no-compact-mark.json file and updating the cache. NOTE: The estimation is very rough, as it assumes the extreme case of indexes sharing no bytes and thus sums all source index sizes. Adjust the limit accordingly, reducing it to some percentage of the actual limit you want to enforce. TODO(bwplotka): This is a short-term fix for https://github.com/thanos-io/thanos/issues/1424; replace it with vertical block sharding https://github.com/thanos-io/thanos/pull/3390.
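
A hedged construction sketch (insBkt, ranges, and the markedForNoCompact counter are assumptions; the 64 GiB cap is an arbitrary example, deliberately left below whatever real limit applies):

noCompactFilter := compact.NewGatherNoCompactionMarkFilter(logger, insBkt, 32)
planner := compact.WithLargeTotalIndexSizeFilter(
	compact.NewPlanner(logger, ranges, noCompactFilter),
	bkt,
	64*1024*1024*1024, // rough cap; the summed-source estimate overshoots, so leave headroom
	markedForNoCompact,
)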

Types

type BlocksCleaner added in v0.12.0

type BlocksCleaner struct {
	// contains filtered or unexported fields
}

BlocksCleaner deletes blocks from the bucket that are marked for deletion.

func NewBlocksCleaner added in v0.12.0

func NewBlocksCleaner(logger log.Logger, bkt objstore.Bucket, ignoreDeletionMarkFilter *block.IgnoreDeletionMarkFilter, deleteDelay time.Duration, blocksCleaned prometheus.Counter, blockCleanupFailures prometheus.Counter) *BlocksCleaner

NewBlocksCleaner creates a new BlocksCleaner.

func (*BlocksCleaner) DeleteMarkedBlocks added in v0.12.0

func (s *BlocksCleaner) DeleteMarkedBlocks(ctx context.Context) error

DeleteMarkedBlocks uses ignoreDeletionMarkFilter to gather the blocks that are marked for deletion and deletes those older than the given deleteDelay.
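
A minimal usage sketch (ignoreDeletionMarkFilter is an assumed *block.IgnoreDeletionMarkFilter shared with the metadata fetcher; the 48h delay is an example value):

cleaner := compact.NewBlocksCleaner(logger, bkt, ignoreDeletionMarkFilter, 48*time.Hour,
	blocksCleaned, blockCleanupFailures)
if err := cleaner.DeleteMarkedBlocks(ctx); err != nil {
	return err
}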

type BucketCompactor

type BucketCompactor struct {
	// contains filtered or unexported fields
}

BucketCompactor compacts blocks in a bucket.

func NewBucketCompactor

func NewBucketCompactor(
	logger log.Logger,
	sy *Syncer,
	grouper Grouper,
	planner Planner,
	comp Compactor,
	compactDir string,
	bkt objstore.Bucket,
	concurrency int,
) (*BucketCompactor, error)

NewBucketCompactor creates a new bucket compactor.

func (*BucketCompactor) Compact

func (c *BucketCompactor) Compact(ctx context.Context) (rerr error)

Compact runs compaction over bucket.
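
Pulling the pieces together, a hedged end-to-end sketch using only constructors from this package (fetcher, the two filters, the counters, ranges, compactDir, metadata.NoneFunc for the hash function, and tsdbCompactor — e.g. a *tsdb.LeveledCompactor, which fits the Compactor interface below — are all assumed to be built elsewhere; the concurrency values are arbitrary):

sy, err := compact.NewMetaSyncer(logger, reg, bkt, fetcher, duplicateBlocksFilter,
	ignoreDeletionMarkFilter, blocksMarkedForDeletion, garbageCollectedBlocks, 20)
if err != nil {
	return err
}
grouper := compact.NewDefaultGrouper(logger, bkt, false, false, reg,
	blocksMarkedForDeletion, garbageCollectedBlocks, metadata.NoneFunc)
planner := compact.NewPlanner(logger, ranges, noCompactFilter)
bucketCompactor, err := compact.NewBucketCompactor(logger, sy, grouper, planner,
	tsdbCompactor, compactDir, bkt, 4)
if err != nil {
	return err
}
return bucketCompactor.Compact(ctx)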

type Compactor added in v0.17.0

type Compactor interface {
	// Write persists a Block into a directory.
	// No Block is written when the resulting Block has 0 samples; an empty ulid.ULID{} is returned instead.
	Write(dest string, b tsdb.BlockReader, mint, maxt int64, parent *tsdb.BlockMeta) (ulid.ULID, error)

	// Compact runs compaction against the provided directories. Must
	// only be called concurrently with results of Plan().
	// Can optionally pass a list of already open blocks,
	// to avoid having to reopen them.
	// When the resulting Block has 0 samples:
	//  * No block is written.
	//  * The source dirs are marked Deletable.
	//  * Returns empty ulid.ULID{}.
	Compact(dest string, dirs []string, open []*tsdb.Block) (ulid.ULID, error)
}

Compactor provides compaction against an underlying storage of time series data. It is similar to tsdb.Compactor, just without the Plan method. TODO(bwplotka): Split the Planner from the Compactor upstream as well, so we can import it.

type DefaultGrouper added in v0.14.0

type DefaultGrouper struct {
	// contains filtered or unexported fields
}

DefaultGrouper is the Thanos built-in grouper. It groups blocks based on downsampling resolution and block labels.

func NewDefaultGrouper added in v0.14.0

func NewDefaultGrouper(
	logger log.Logger,
	bkt objstore.Bucket,
	acceptMalformedIndex bool,
	enableVerticalCompaction bool,
	reg prometheus.Registerer,
	blocksMarkedForDeletion prometheus.Counter,
	garbageCollectedBlocks prometheus.Counter,
	hashFunc metadata.HashFunc,
) *DefaultGrouper

NewDefaultGrouper makes a new DefaultGrouper.

func (*DefaultGrouper) Groups added in v0.14.0

func (g *DefaultGrouper) Groups(blocks map[ulid.ULID]*metadata.Meta) (res []*Group, err error)

Groups returns the compaction groups for all blocks currently known to the syncer. It creates all groups from scratch on every call.

type GatherNoCompactionMarkFilter added in v0.17.0

type GatherNoCompactionMarkFilter struct {
	// contains filtered or unexported fields
}

GatherNoCompactionMarkFilter is a block.Fetcher filter that passes through all metas while gathering all no-compact-mark.json markers. Not goroutine-safe. TODO(bwplotka): Add unit test.

func NewGatherNoCompactionMarkFilter added in v0.17.0

func NewGatherNoCompactionMarkFilter(logger log.Logger, bkt objstore.InstrumentedBucketReader, concurrency int) *GatherNoCompactionMarkFilter

NewGatherNoCompactionMarkFilter creates GatherNoCompactionMarkFilter.

func (*GatherNoCompactionMarkFilter) Filter added in v0.17.0

Filter passes through all metas while gathering no-compact markers.

func (*GatherNoCompactionMarkFilter) NoCompactMarkedBlocks added in v0.17.0

func (f *GatherNoCompactionMarkFilter) NoCompactMarkedBlocks() map[ulid.ULID]*metadata.NoCompactMark

NoCompactMarkedBlocks returns block ids that were marked for no compaction.
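
The gathered marks are typically fed back into the planner; a small sketch (insBkt is an assumed objstore.InstrumentedBucketReader, and the filter is assumed to run as part of the metadata fetcher's filter chain so that Filter gets called during sync):

noCompactFilter := compact.NewGatherNoCompactionMarkFilter(logger, insBkt, 32)
// ... after a sync has run the filter ...
for id := range noCompactFilter.NoCompactMarkedBlocks() {
	level.Debug(logger).Log("msg", "block excluded from compaction", "block", id)
}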

type Group

type Group struct {
	// contains filtered or unexported fields
}

Group captures a set of blocks that have the same origin labels and downsampling resolution. Those blocks generally contain the same series and can thus efficiently be compacted.

func NewGroup added in v0.14.0

func NewGroup(
	logger log.Logger,
	bkt objstore.Bucket,
	key string,
	lset labels.Labels,
	resolution int64,
	acceptMalformedIndex bool,
	enableVerticalCompaction bool,
	compactions prometheus.Counter,
	compactionRunsStarted prometheus.Counter,
	compactionRunsCompleted prometheus.Counter,
	compactionFailures prometheus.Counter,
	verticalCompactions prometheus.Counter,
	groupGarbageCollectedBlocks prometheus.Counter,
	blocksMarkedForDeletion prometheus.Counter,
	hashFunc metadata.HashFunc,
) (*Group, error)

NewGroup returns a new compaction group.

func (*Group) AppendMeta added in v0.20.0

func (cg *Group) AppendMeta(meta *metadata.Meta) error

AppendMeta appends the block with the given meta to the group.

func (*Group) Compact

func (cg *Group) Compact(ctx context.Context, dir string, planner Planner, comp Compactor) (shouldRerun bool, compID ulid.ULID, rerr error)

Compact plans and runs a single compaction against the group. The compacted result is uploaded into the bucket the blocks were retrieved from.

func (*Group) IDs

func (cg *Group) IDs() (ids []ulid.ULID)

IDs returns all sorted IDs of blocks in the group.

func (*Group) Key

func (cg *Group) Key() string

Key returns an identifier for the group.

func (*Group) Labels

func (cg *Group) Labels() labels.Labels

Labels returns the labels that all blocks in the group share.

func (*Group) MaxTime added in v0.14.0

func (cg *Group) MaxTime() int64

MaxTime returns the max time across all group's blocks.

func (*Group) MinTime added in v0.14.0

func (cg *Group) MinTime() int64

MinTime returns the min time across all group's blocks.

func (*Group) Resolution

func (cg *Group) Resolution() int64

Resolution returns the common downsampling resolution of blocks in the group.

type Grouper added in v0.14.0

type Grouper interface {
	// Groups returns the compaction groups for all blocks currently known to the syncer.
	// It creates all groups from scratch on every call.
	Groups(blocks map[ulid.ULID]*metadata.Meta) (res []*Group, err error)
}

Grouper is responsible for grouping all known blocks into subgroups that are safe to compact concurrently.

type HaltError

type HaltError struct {
	// contains filtered or unexported fields
}

HaltError is a type wrapper for errors that should halt any further progress on compactions.

func (HaltError) Error

func (e HaltError) Error() string

type Issue347Error

type Issue347Error struct {
	// contains filtered or unexported fields
}

Issue347Error is a type wrapper for errors that should invoke the repair process for a broken block.

func (Issue347Error) Error

func (e Issue347Error) Error() string

type Planner added in v0.17.0

type Planner interface {
	// Plan returns a list of blocks that should be compacted into a single one.
	// The blocks can be overlapping. The provided metadata has to be ordered by minTime.
	Plan(ctx context.Context, metasByMinTime []*metadata.Meta) ([]*metadata.Meta, error)
}

Planner returns blocks to compact.
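
Any type with this Plan method can be plugged into NewBucketCompactor. A hypothetical no-op implementation, e.g. for a dry verification pass:

type noopPlanner struct{}

func (noopPlanner) Plan(_ context.Context, _ []*metadata.Meta) ([]*metadata.Meta, error) {
	return nil, nil // an empty plan means there is nothing to compact
}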

type ResolutionLevel

type ResolutionLevel int64

type RetryError

type RetryError struct {
	// contains filtered or unexported fields
}

RetryError is a type wrapper for errors that should trigger a warning log and a retry of the whole compaction loop, while aborting further progress of the current compaction.

func (RetryError) Error

func (e RetryError) Error() string

type Syncer

type Syncer struct {
	// contains filtered or unexported fields
}

Syncer synchronizes block metas from a bucket into a local directory. It sorts them into compaction groups based on equal label sets.

func NewMetaSyncer added in v0.20.0

func NewMetaSyncer(logger log.Logger, reg prometheus.Registerer, bkt objstore.Bucket, fetcher block.MetadataFetcher, duplicateBlocksFilter *block.DeduplicateFilter, ignoreDeletionMarkFilter *block.IgnoreDeletionMarkFilter, blocksMarkedForDeletion prometheus.Counter, garbageCollectedBlocks prometheus.Counter, blockSyncConcurrency int) (*Syncer, error)

NewMetaSyncer returns a new Syncer for the given Bucket and directory. Blocks must be at least as old as the sync delay to be considered.

func (*Syncer) GarbageCollect

func (s *Syncer) GarbageCollect(ctx context.Context) error

GarbageCollect marks blocks for deletion from the bucket if their data is available as part of a block with a higher compaction level. A call to SyncMetas is required beforehand to populate duplicateIDs in duplicateBlocksFilter.
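
A sketch of the required ordering (sy is an assumed *Syncer):

if err := sy.SyncMetas(ctx); err != nil {
	return err
}
// SyncMetas has populated duplicateBlocksFilter; deletion marking can proceed.
if err := sy.GarbageCollect(ctx); err != nil {
	return err
}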

func (*Syncer) Metas added in v0.12.0

func (s *Syncer) Metas() map[ulid.ULID]*metadata.Meta

Metas returns the block metas loaded by the last sync.

func (*Syncer) Partial added in v0.12.0

func (s *Syncer) Partial() map[ulid.ULID]error

Partial returns the partial blocks found during the last sync.

func (*Syncer) SyncMetas

func (s *Syncer) SyncMetas(ctx context.Context) error

SyncMetas synchronizes local state of block metas with what we have in the bucket.

