Documentation ¶
Overview ¶
Package block contains common functionality for interacting with TSDB blocks in the context of Thanos.
Index ¶
- Constants
- Variables
- func DefaultModifiedLabelValues() [][]string
- func DefaultSyncedStateLabelValues() [][]string
- func Delete(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID) error
- func Download(ctx context.Context, logger log.Logger, bucket objstore.Bucket, id ulid.ULID, ...) error
- func DownloadMeta(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID) (metadata.Meta, error)
- func GatherFileStats(blockDir string, hf metadata.HashFunc, logger log.Logger) (res []metadata.File, _ error)
- func GetSegmentFiles(blockDir string) []string
- func IgnoreCompleteOutsideChunk(mint, maxt int64, _, curr *chunks.Meta) (bool, error)
- func IgnoreDuplicateOutsideChunk(_, _ int64, last, curr *chunks.Meta) (bool, error)
- func IgnoreIssue347OutsideChunk(_, maxt int64, _, curr *chunks.Meta) (bool, error)
- func IsBlockDir(path string) (id ulid.ULID, ok bool)
- func IsBlockMetaFile(path string) bool
- func MarkForDeletion(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID, ...) error
- func MarkForNoCompact(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID, ...) error
- func MarkForNoDownsample(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID, ...) error
- func ParseRelabelConfig(contentYaml []byte, supportedActions map[relabel.Action]struct{}) ([]*relabel.Config, error)
- func RemoveMark(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID, ...) error
- func Repair(ctx context.Context, logger log.Logger, dir string, id ulid.ULID, ...) (resid ulid.ULID, err error)
- func Upload(ctx context.Context, logger log.Logger, bkt objstore.Bucket, bdir string, ...) error
- func UploadPromBlock(ctx context.Context, logger log.Logger, bkt objstore.Bucket, bdir string, ...) error
- func VerifyIndex(ctx context.Context, logger log.Logger, fn string, minTime, maxTime int64) error
- type BaseFetcher
- type BaseFetcherMetrics
- type ConcurrentLister
- type ConsistencyDelayMetaFilter
- type DeduplicateFilter
- type DefaultDeduplicateFilter
- type DiskWriter
- func (s *DiskWriter) AddSeries(ref storage.SeriesRef, l labels.Labels, chks ...chunks.Meta) error
- func (s *DiskWriter) AddSymbol(sym string) error
- func (s *DiskWriter) Close() error
- func (d *DiskWriter) Flush() (_ tsdb.BlockStats, err error)
- func (s *DiskWriter) WriteChunks(chks ...chunks.Meta) error
- type FetcherMetrics
- type GaugeVec
- type HealthStats
- type IgnoreDeletionMarkFilter
- type LabelShardedMetaFilter
- type Lister
- type MetaFetcher
- func NewMetaFetcher(logger log.Logger, concurrency int, bkt objstore.InstrumentedBucketReader, ...) (*MetaFetcher, error)
- func NewMetaFetcherWithMetrics(logger log.Logger, concurrency int, bkt objstore.InstrumentedBucketReader, ...) (*MetaFetcher, error)
- func NewRawMetaFetcher(logger log.Logger, bkt objstore.InstrumentedBucketReader, ...) (*MetaFetcher, error)
- type MetadataFetcher
- type MetadataFilter
- type Reader
- type RecursiveLister
- type ReplicaLabelRemover
- type SeriesWriter
- type TimePartitionMetaFilter
- type Writer
Constants ¶
const (
	// MetaFilename is the known JSON filename for meta information.
	MetaFilename = "meta.json"
	// IndexFilename is the known index file for block index.
	IndexFilename = "index"
	// IndexHeaderFilename is the canonical name for the binary index-header file that stores essential information.
	IndexHeaderFilename = "index-header"
	// ChunksDirname is the known dir name for chunks with compressed samples.
	ChunksDirname = "chunks"

	// DebugMetas is a directory for debug meta files kept from the past. Useful for debugging.
	DebugMetas = "debug/metas"
)
const (
	FetcherSubSys = "blocks_meta"

	CorruptedMeta = "corrupted-meta-json"
	NoMeta        = "no-meta-json"
	LoadedMeta    = "loaded"
	FailedMeta    = "failed"

	// Blocks that are marked for deletion can be loaded as well. This is done to make sure that we load blocks
	// that are meant to be deleted but don't have a replacement block yet.
	MarkedForDeletionMeta = "marked-for-deletion"

	// MarkedForNoCompactionMeta is the label for blocks which are loaded but also marked for no compaction. This label is also counted in the `loaded` label metric.
	MarkedForNoCompactionMeta = "marked-for-no-compact"

	// MarkedForNoDownsampleMeta is the label for blocks which are loaded but also marked for no downsampling. This label is also counted in the `loaded` label metric.
	MarkedForNoDownsampleMeta = "marked-for-no-downsample"
)
const BlockIDLabel = "__block_id"
BlockIDLabel is a special label that holds the ULID of the meta.json it references.
const FetcherConcurrency = 32
Variables ¶
var (
	ErrorSyncMetaNotFound  = errors.New("meta.json not found")
	ErrorSyncMetaCorrupted = errors.New("meta.json corrupted")
)
Functions ¶
func DefaultModifiedLabelValues ¶ added in v0.33.0
func DefaultModifiedLabelValues() [][]string
func DefaultSyncedStateLabelValues ¶ added in v0.33.0
func DefaultSyncedStateLabelValues() [][]string
func Delete ¶
Delete removes the directory that is meant to be a block directory. NOTE: Always prefer this method for deleting blocks; a usage sketch follows the notes below.
- We have to delete the block's files in a certain order (meta.json first and deletion-mark.json last) to ensure we don't end up with malformed partial blocks. Thanos handles partial blocks well only if they don't have a meta.json; if meta.json is present, Thanos assumes the block is valid.
- This also avoids deleting an empty directory (and hence the whole bucket) by mistake.
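For illustration, a minimal sketch of a deletion, assuming a local filesystem-backed bucket and a made-up block ULID (both are stand-ins chosen for the example):

package main

import (
	"context"

	"github.com/go-kit/log"
	"github.com/oklog/ulid"
	"github.com/thanos-io/objstore/providers/filesystem"
	"github.com/thanos-io/thanos/pkg/block"
)

func main() {
	logger := log.NewNopLogger()

	// Any objstore.Bucket works; a local filesystem bucket keeps the sketch self-contained.
	bkt, err := filesystem.NewBucket("/tmp/bucket") // hypothetical path
	if err != nil {
		panic(err)
	}

	id := ulid.MustParse("01ARZ3NDEKTSV4RRFFQ69G5FAV") // hypothetical block ID

	// Delete removes meta.json first and deletion-mark.json last, so an interrupted
	// delete leaves a block without meta.json, which Thanos treats as partial.
	if err := block.Delete(context.Background(), logger, bkt, id); err != nil {
		panic(err)
	}
}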
func Download ¶
func Download(ctx context.Context, logger log.Logger, bucket objstore.Bucket, id ulid.ULID, dst string, options ...objstore.DownloadOption) error
Download downloads the directory that is meant to be a block directory. If any of the files have a hash recorded in the meta file and it matches the file already at the destination path, that file is not downloaded again. The meta file is always re-downloaded.
func DownloadMeta ¶
func DownloadMeta(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID) (metadata.Meta, error)
DownloadMeta downloads only the meta file from the bucket by block ID. TODO(bwplotka): Differentiate between network error & partial upload.
func GatherFileStats ¶ added in v0.27.0
func GatherFileStats(blockDir string, hf metadata.HashFunc, logger log.Logger) (res []metadata.File, _ error)
GatherFileStats returns metadata.File entries for the files inside a TSDB block (index, chunks, meta.json).
func GetSegmentFiles ¶ added in v0.17.0
GetSegmentFiles returns the list of segment files for the given block. Paths are relative to the chunks directory. In case of errors, nil is returned.
func IsBlockMetaFile ¶ added in v0.32.0
func MarkForDeletion ¶ added in v0.12.0
func MarkForDeletion(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID, details string, markedForDeletion prometheus.Counter) error
MarkForDeletion creates a file which stores information about when the block was marked for deletion.
func MarkForNoCompact ¶ added in v0.17.0
func MarkForNoCompact(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID, reason metadata.NoCompactReason, details string, markedForNoCompact prometheus.Counter) error
MarkForNoCompact creates a file which marks the block as excluded from compaction.
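A sketch of marking a block so compaction skips it; the metric name and details string are example choices, and metadata.ManualNoCompactReason is one of the reasons defined in the metadata package:

package main

import (
	"context"

	"github.com/go-kit/log"
	"github.com/oklog/ulid"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/thanos-io/objstore/providers/filesystem"
	"github.com/thanos-io/thanos/pkg/block"
	"github.com/thanos-io/thanos/pkg/block/metadata"
)

func main() {
	logger := log.NewNopLogger()
	bkt, err := filesystem.NewBucket("/tmp/bucket") // hypothetical path
	if err != nil {
		panic(err)
	}
	id := ulid.MustParse("01ARZ3NDEKTSV4RRFFQ69G5FAV") // hypothetical block ID

	marked := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "example_marked_for_no_compact_total", // example metric name
	})

	// Creates the no-compact mark file next to the block's other objects in the bucket.
	err = block.MarkForNoCompact(context.Background(), logger, bkt, id,
		metadata.ManualNoCompactReason, "marked manually for this example", marked)
	if err != nil {
		panic(err)
	}
}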
func MarkForNoDownsample ¶ added in v0.30.0
func MarkForNoDownsample(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID, reason metadata.NoDownsampleReason, details string, markedForNoDownsample prometheus.Counter) error
MarkForNoDownsample creates a file which marks the block as excluded from downsampling.
func ParseRelabelConfig ¶ added in v0.15.0
func ParseRelabelConfig(contentYaml []byte, supportedActions map[relabel.Action]struct{}) ([]*relabel.Config, error)
ParseRelabelConfig parses relabel configuration. If supportedActions is not specified, all relabel actions are valid.
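A sketch of parsing a small relabel configuration while restricting the allowed actions to keep/drop; the cluster label and regex are made up for the example:

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/relabel"
	"github.com/thanos-io/thanos/pkg/block"
)

func main() {
	// Example relabel config; drops blocks whose external labels match.
	contentYaml := []byte(`
- action: drop
  regex: "test"
  source_labels: ["cluster"]
`)

	// Restrict to the actions this example cares about; pass nil to allow all actions.
	supported := map[relabel.Action]struct{}{
		relabel.Keep: {},
		relabel.Drop: {},
	}

	cfgs, err := block.ParseRelabelConfig(contentYaml, supported)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(cfgs), "relabel config(s) parsed")
}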
func RemoveMark ¶ added in v0.30.0
func RemoveMark(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID, removeMark prometheus.Counter, markedFilename string) error
RemoveMark removes the file which marked the block for deletion, no-downsample or no-compact.
func Repair ¶
func Repair(ctx context.Context, logger log.Logger, dir string, id ulid.ULID, source metadata.SourceType, ignoreChkFns ...ignoreFnType) (resid ulid.ULID, err error)
Repair opens the block with the given id in dir and creates a new one with fixed data. It:
- removes out-of-order duplicates
- removes all "complete" outsiders (they will not be accessed anyway)
- removes all near-"complete" outside chunks introduced by https://github.com/prometheus/tsdb/issues/347.
Fixable inconsistencies are resolved in the new block. TODO(bplotka): https://github.com/thanos-io/thanos/issues/378.
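A sketch of repairing a local copy of a block with the exported ignore functions listed in the index; the directory, block ID, and source type are example choices:

package main

import (
	"context"
	"fmt"

	"github.com/go-kit/log"
	"github.com/oklog/ulid"
	"github.com/thanos-io/thanos/pkg/block"
	"github.com/thanos-io/thanos/pkg/block/metadata"
)

func main() {
	logger := log.NewNopLogger()
	id := ulid.MustParse("01ARZ3NDEKTSV4RRFFQ69G5FAV") // hypothetical block ID

	// The block must already exist locally under /tmp/blocks/<id> (e.g. via Download).
	// Repair writes a new block with a fresh ULID into the same directory.
	resid, err := block.Repair(context.Background(), logger, "/tmp/blocks", id,
		metadata.BucketRepairSource,
		block.IgnoreCompleteOutsideChunk,
		block.IgnoreDuplicateOutsideChunk,
		block.IgnoreIssue347OutsideChunk,
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("repaired block:", resid)
}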
func Upload ¶
func Upload(ctx context.Context, logger log.Logger, bkt objstore.Bucket, bdir string, hf metadata.HashFunc, options ...objstore.UploadOption) error
Upload uploads a TSDB block to the object storage. It verifies basic features of a Thanos block.
func UploadPromBlock ¶ added in v0.20.0
func UploadPromBlock(ctx context.Context, logger log.Logger, bkt objstore.Bucket, bdir string, hf metadata.HashFunc, options ...objstore.UploadOption) error
UploadPromBlock uploads a TSDB block to the object storage. It assumes the block is used in Prometheus so it doesn't check Thanos external labels.
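A sketch of uploading a local block directory; the paths are made up, and SHA256Func is chosen so per-file hashes land in meta.json (which later lets Download skip unchanged files). UploadPromBlock is called the same way for blocks produced by Prometheus itself:

package main

import (
	"context"
	"path/filepath"

	"github.com/go-kit/log"
	"github.com/thanos-io/objstore/providers/filesystem"
	"github.com/thanos-io/thanos/pkg/block"
	"github.com/thanos-io/thanos/pkg/block/metadata"
)

func main() {
	logger := log.NewNopLogger()
	bkt, err := filesystem.NewBucket("/tmp/bucket") // hypothetical destination bucket
	if err != nil {
		panic(err)
	}

	// bdir is the local block directory, named after the block's ULID (hypothetical path).
	bdir := filepath.Join("/tmp/blocks", "01ARZ3NDEKTSV4RRFFQ69G5FAV")

	// metadata.NoneFunc would skip per-file hashing instead.
	if err := block.Upload(context.Background(), logger, bkt, bdir, metadata.SHA256Func); err != nil {
		panic(err)
	}
}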
Types ¶
type BaseFetcher ¶ added in v0.12.0
type BaseFetcher struct {
// contains filtered or unexported fields
}
BaseFetcher is a struct that synchronizes filtered metadata of all blocks in the object storage with the local state. Go-routine safe.
func NewBaseFetcher ¶ added in v0.12.0
func NewBaseFetcher(logger log.Logger, concurrency int, bkt objstore.InstrumentedBucketReader, blockIDsFetcher Lister, dir string, reg prometheus.Registerer) (*BaseFetcher, error)
NewBaseFetcher constructs BaseFetcher.
func NewBaseFetcherWithMetrics ¶ added in v0.33.0
func NewBaseFetcherWithMetrics(logger log.Logger, concurrency int, bkt objstore.InstrumentedBucketReader, blockIDsLister Lister, dir string, metrics *BaseFetcherMetrics) (*BaseFetcher, error)
NewBaseFetcherWithMetrics constructs BaseFetcher.
func (*BaseFetcher) NewMetaFetcher ¶ added in v0.12.0
func (f *BaseFetcher) NewMetaFetcher(reg prometheus.Registerer, filters []MetadataFilter, logTags ...interface{}) *MetaFetcher
NewMetaFetcher transforms a BaseFetcher into an actually usable *MetaFetcher.
func (*BaseFetcher) NewMetaFetcherWithMetrics ¶ added in v0.33.0
func (f *BaseFetcher) NewMetaFetcherWithMetrics(fetcherMetrics *FetcherMetrics, filters []MetadataFilter, logTags ...interface{}) *MetaFetcher
NewMetaFetcherWithMetrics transforms a BaseFetcher into an actually usable *MetaFetcher.
type BaseFetcherMetrics ¶ added in v0.33.0
type BaseFetcherMetrics struct {
Syncs prometheus.Counter
}
BaseFetcherMetrics holds metrics tracked by the base fetcher. This struct and its fields are exported to allow depending projects (e.g. Cortex) to implement their own custom metadata fetcher while tracking compatible metrics.
func NewBaseFetcherMetrics ¶ added in v0.33.0
func NewBaseFetcherMetrics(reg prometheus.Registerer) *BaseFetcherMetrics
type ConcurrentLister ¶ added in v0.35.0
type ConcurrentLister struct {
// contains filtered or unexported fields
}
ConcurrentLister lists block IDs by doing a top level iteration of the bucket followed by one Exists call for each discovered block to detect partial blocks.
func NewConcurrentLister ¶ added in v0.35.0
func NewConcurrentLister(logger log.Logger, bkt objstore.InstrumentedBucketReader) *ConcurrentLister
type ConsistencyDelayMetaFilter ¶ added in v0.11.0
type ConsistencyDelayMetaFilter struct {
// contains filtered or unexported fields
}
ConsistencyDelayMetaFilter is a BaseFetcher filter that filters out blocks that are created before a specified consistency delay. Not go-routine safe.
func NewConsistencyDelayMetaFilter ¶ added in v0.11.0
func NewConsistencyDelayMetaFilter(logger log.Logger, consistencyDelay time.Duration, reg prometheus.Registerer) *ConsistencyDelayMetaFilter
NewConsistencyDelayMetaFilter creates ConsistencyDelayMetaFilter.
func NewConsistencyDelayMetaFilterWithoutMetrics ¶ added in v0.33.0
func NewConsistencyDelayMetaFilterWithoutMetrics(logger log.Logger, consistencyDelay time.Duration) *ConsistencyDelayMetaFilter
NewConsistencyDelayMetaFilterWithoutMetrics creates ConsistencyDelayMetaFilter.
func (*ConsistencyDelayMetaFilter) Filter ¶ added in v0.11.0
func (f *ConsistencyDelayMetaFilter) Filter(_ context.Context, metas map[ulid.ULID]*metadata.Meta, synced GaugeVec, modified GaugeVec) error
Filter filters out blocks that are created before a specified consistency delay.
type DeduplicateFilter ¶ added in v0.11.0
type DefaultDeduplicateFilter ¶ added in v0.32.0
type DefaultDeduplicateFilter struct {
// contains filtered or unexported fields
}
DefaultDeduplicateFilter is a BaseFetcher filter that filters out older blocks that have exactly the same data. Not go-routine safe.
func NewDeduplicateFilter ¶ added in v0.11.0
func NewDeduplicateFilter(concurrency int) *DefaultDeduplicateFilter
NewDeduplicateFilter creates DefaultDeduplicateFilter.
func (*DefaultDeduplicateFilter) DuplicateIDs ¶ added in v0.32.0
func (f *DefaultDeduplicateFilter) DuplicateIDs() []ulid.ULID
DuplicateIDs returns a slice of block IDs that are filtered out by DefaultDeduplicateFilter.
func (*DefaultDeduplicateFilter) Filter ¶ added in v0.32.0
func (f *DefaultDeduplicateFilter) Filter(_ context.Context, metas map[ulid.ULID]*metadata.Meta, synced GaugeVec, modified GaugeVec) error
Filter filters out duplicate blocks that can be formed from two or more overlapping blocks that fully match the source blocks of the older blocks.
type DiskWriter ¶ added in v0.18.0
type DiskWriter struct {
// contains filtered or unexported fields
}
func NewDiskWriter ¶ added in v0.18.0
NewDiskWriter allows writing a single TSDB block to disk and returns statistics. The destination block directory has to exist.
func (*DiskWriter) Flush ¶ added in v0.18.0
func (d *DiskWriter) Flush() (_ tsdb.BlockStats, err error)
func (*DiskWriter) WriteChunks ¶ added in v0.18.0
type FetcherMetrics ¶ added in v0.19.0
type FetcherMetrics struct {
	Syncs        prometheus.Counter
	SyncFailures prometheus.Counter
	SyncDuration prometheus.Observer

	Synced   *extprom.TxGaugeVec
	Modified *extprom.TxGaugeVec
}
FetcherMetrics holds metrics tracked by the metadata fetcher. This struct and its fields are exported to allow depending projects (e.g. Cortex) to implement their own custom metadata fetcher while tracking compatible metrics.
func NewFetcherMetrics ¶ added in v0.19.0
func NewFetcherMetrics(reg prometheus.Registerer, syncedExtraLabels, modifiedExtraLabels [][]string) *FetcherMetrics
func (*FetcherMetrics) ResetTx ¶ added in v0.19.0
func (s *FetcherMetrics) ResetTx()
ResetTx starts new transaction for metrics tracked by transaction GaugeVec.
func (*FetcherMetrics) Submit ¶ added in v0.19.0
func (s *FetcherMetrics) Submit()
Submit applies new values for metrics tracked by transaction GaugeVec.
type GaugeVec ¶ added in v0.29.0
type GaugeVec interface {
WithLabelValues(lvs ...string) prometheus.Gauge
}
GaugeVec hides something like a Prometheus GaugeVec or an extprom.TxGaugeVec.
type HealthStats ¶ added in v0.18.0
type HealthStats struct {
	// TotalSeries represents the total number of series in the block.
	TotalSeries int64
	// OutOfOrderSeries represents the number of series that have out-of-order chunks.
	OutOfOrderSeries int

	// OutOfOrderChunks represents the number of chunks that are out of order (older time range is after younger one).
	OutOfOrderChunks int
	// DuplicatedChunks represents the number of chunks with the same time ranges within the same series, potential duplicates.
	DuplicatedChunks int
	// OutsideChunks represents the number of all chunks that are before or after the time range specified in block meta.
	OutsideChunks int
	// CompleteOutsideChunks is the subset of OutsideChunks that will never be accessed. They are completely out of the time range specified in block meta.
	CompleteOutsideChunks int
	// Issue347OutsideChunks represents the subset of OutsideChunks that are outsiders caused by https://github.com/prometheus/tsdb/issues/347
	// and is something that Thanos handles.
	//
	// Specifically we mean here chunks with minTime == block.maxTime and maxTime > block.MaxTime. These
	// are segregated into separate counters. These chunks are safe to be deleted, since they are duplicated across 2 blocks.
	Issue347OutsideChunks int
	// OutOfOrderLabels represents the number of postings that contained out
	// of order labels, a bug present in Prometheus 2.8.0 and below.
	OutOfOrderLabels int

	// Debug Statistics.
	SeriesMinLifeDuration time.Duration
	SeriesAvgLifeDuration time.Duration
	SeriesMaxLifeDuration time.Duration

	SeriesMinLifeDurationWithoutSingleSampleSeries time.Duration
	SeriesAvgLifeDurationWithoutSingleSampleSeries time.Duration
	SeriesMaxLifeDurationWithoutSingleSampleSeries time.Duration

	SeriesMinChunks int64
	SeriesAvgChunks int64
	SeriesMaxChunks int64

	TotalChunks int64

	ChunkMinDuration time.Duration
	ChunkAvgDuration time.Duration
	ChunkMaxDuration time.Duration

	ChunkMinSize int64
	ChunkAvgSize int64
	ChunkMaxSize int64

	SeriesMinSize int64
	SeriesAvgSize int64
	SeriesMaxSize int64

	SingleSampleSeries int64
	SingleSampleChunks int64

	LabelNamesCount        int64
	MetricLabelValuesCount int64
}
func GatherIndexHealthStats ¶ added in v0.18.0
func GatherIndexHealthStats(ctx context.Context, logger log.Logger, fn string, minTime, maxTime int64) (stats HealthStats, err error)
GatherIndexHealthStats returns useful counters as well as outsider chunks (chunks outside of the block time range) that help assess index health. It considers https://github.com/prometheus/tsdb/issues/347 as something that Thanos can handle. See HealthStats.Issue347OutsideChunks for details.
func (HealthStats) AnyErr ¶ added in v0.18.0
func (i HealthStats) AnyErr() error
AnyErr returns an error if the stats indicate any block issue.
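Tying these together, a sketch that gathers stats for a block's index file and surfaces any issue; the path is made up, and passing extreme min/max times (instead of the block meta's real MinTime/MaxTime, which you would normally use) effectively disables the outside-chunk checks:

package main

import (
	"context"
	"fmt"
	"math"

	"github.com/go-kit/log"
	"github.com/thanos-io/thanos/pkg/block"
)

func main() {
	logger := log.NewNopLogger()

	// Hypothetical path to a block's index file.
	stats, err := block.GatherIndexHealthStats(context.Background(), logger,
		"/tmp/blocks/01ARZ3NDEKTSV4RRFFQ69G5FAV/index", math.MinInt64, math.MaxInt64)
	if err != nil {
		panic(err)
	}

	fmt.Println("total series:", stats.TotalSeries)
	if err := stats.AnyErr(); err != nil {
		fmt.Println("block has issues:", err)
	}
}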
func (HealthStats) CriticalErr ¶ added in v0.18.0
func (i HealthStats) CriticalErr() error
CriticalErr returns an error if the stats indicate a critical block issue that might be solved only by a manual repair procedure.
func (HealthStats) Issue347OutsideChunksErr ¶ added in v0.18.0
func (i HealthStats) Issue347OutsideChunksErr() error
Issue347OutsideChunksErr returns an error if the stats indicate the issue347 block issue, which is repaired explicitly before compaction (on the planned block).
func (HealthStats) OutOfOrderChunksErr ¶ added in v0.23.0
func (i HealthStats) OutOfOrderChunksErr() error
func (HealthStats) OutOfOrderLabelsErr ¶ added in v0.28.0
func (i HealthStats) OutOfOrderLabelsErr() error
OutOfOrderLabelsErr returns an error if the HealthStats object indicates postings with out of order labels. This is corrected by Prometheus Issue #5372 and affects Prometheus versions 2.8.0 and below.
type IgnoreDeletionMarkFilter ¶ added in v0.12.0
type IgnoreDeletionMarkFilter struct {
// contains filtered or unexported fields
}
IgnoreDeletionMarkFilter is a filter that filters out the blocks that are marked for deletion after a given delay. The delay duration is to make sure that the replacement block can be fetched before we filter out the old block. Delay is not considered when computing DeletionMarkBlocks map. Not go-routine safe.
func NewIgnoreDeletionMarkFilter ¶ added in v0.12.0
func NewIgnoreDeletionMarkFilter(logger log.Logger, bkt objstore.InstrumentedBucketReader, delay time.Duration, concurrency int) *IgnoreDeletionMarkFilter
NewIgnoreDeletionMarkFilter creates IgnoreDeletionMarkFilter.
func (*IgnoreDeletionMarkFilter) DeletionMarkBlocks ¶ added in v0.12.0
func (f *IgnoreDeletionMarkFilter) DeletionMarkBlocks() map[ulid.ULID]*metadata.DeletionMark
DeletionMarkBlocks returns block ids that were marked for deletion.
func (*IgnoreDeletionMarkFilter) Filter ¶ added in v0.12.0
func (f *IgnoreDeletionMarkFilter) Filter(ctx context.Context, metas map[ulid.ULID]*metadata.Meta, synced GaugeVec, modified GaugeVec) error
Filter filters out blocks that are marked for deletion after a given delay. It also returns the blocks that can be deleted since they were uploaded delay duration before current time.
type LabelShardedMetaFilter ¶ added in v0.10.0
type LabelShardedMetaFilter struct {
// contains filtered or unexported fields
}
LabelShardedMetaFilter is a BaseFetcher filter that allows sharding blocks via a relabel configuration. Not go-routine safe.
func NewLabelShardedMetaFilter ¶ added in v0.10.0
func NewLabelShardedMetaFilter(relabelConfig []*relabel.Config) *LabelShardedMetaFilter
NewLabelShardedMetaFilter creates LabelShardedMetaFilter.
type Lister ¶ added in v0.35.0
type Lister interface {
	// GetActiveAndPartialBlockIDs returns the active block IDs via the channel (streaming) and the partial blocks via the response.
	// Active blocks are blocks which contain meta.json, while partial blocks are blocks without meta.json.
	GetActiveAndPartialBlockIDs(ctx context.Context, ch chan<- ulid.ULID) (partialBlocks map[ulid.ULID]bool, err error)
}
Lister lists block IDs from a bucket.
type MetaFetcher ¶ added in v0.10.0
type MetaFetcher struct {
// contains filtered or unexported fields
}
func NewMetaFetcher ¶ added in v0.10.0
func NewMetaFetcher(logger log.Logger, concurrency int, bkt objstore.InstrumentedBucketReader, blockIDsFetcher Lister, dir string, reg prometheus.Registerer, filters []MetadataFilter) (*MetaFetcher, error)
NewMetaFetcher returns meta fetcher.
func NewMetaFetcherWithMetrics ¶ added in v0.33.0
func NewMetaFetcherWithMetrics(logger log.Logger, concurrency int, bkt objstore.InstrumentedBucketReader, blockIDsFetcher Lister, dir string, baseFetcherMetrics *BaseFetcherMetrics, fetcherMetrics *FetcherMetrics, filters []MetadataFilter) (*MetaFetcher, error)
NewMetaFetcherWithMetrics returns meta fetcher.
func NewRawMetaFetcher ¶ added in v0.19.0
func NewRawMetaFetcher(logger log.Logger, bkt objstore.InstrumentedBucketReader, blockIDsFetcher Lister) (*MetaFetcher, error)
NewRawMetaFetcher returns a basic meta fetcher without proper handling for eventually consistent backends or partial uploads. NOTE: Not suitable for use in production.
func (*MetaFetcher) Fetch ¶ added in v0.10.0
func (f *MetaFetcher) Fetch(ctx context.Context) (metas map[ulid.ULID]*metadata.Meta, partial map[ulid.ULID]error, err error)
Fetch returns all block metas as well as partial blocks (blocks without or with a corrupted meta file) from the bucket. It is the caller's responsibility not to change the returned metadata files; the maps themselves can be modified.
The returned error indicates a failure in fetching metadata. The returned metas can be assumed correct, with some blocks possibly missing.
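Putting the pieces together, a sketch that assembles a MetaFetcher from a lister and a few of the filters documented on this page, then fetches metas; the bucket, cache directory, and delay durations are example values:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/thanos-io/objstore"
	"github.com/thanos-io/objstore/providers/filesystem"
	"github.com/thanos-io/thanos/pkg/block"
)

func main() {
	logger := log.NewNopLogger()
	reg := prometheus.NewRegistry()

	fsBkt, err := filesystem.NewBucket("/tmp/bucket") // hypothetical bucket
	if err != nil {
		panic(err)
	}
	// The fetcher needs an instrumented bucket reader.
	bkt := objstore.WrapWithMetrics(fsBkt, reg, "example")

	lister := block.NewConcurrentLister(logger, bkt)
	filters := []block.MetadataFilter{
		block.NewConsistencyDelayMetaFilter(logger, 30*time.Minute, reg),
		block.NewIgnoreDeletionMarkFilter(logger, bkt, 2*time.Hour, block.FetcherConcurrency),
		block.NewDeduplicateFilter(block.FetcherConcurrency),
	}

	fetcher, err := block.NewMetaFetcher(logger, block.FetcherConcurrency, bkt,
		lister, "/tmp/fetcher-cache", reg, filters)
	if err != nil {
		panic(err)
	}

	metas, partial, err := fetcher.Fetch(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(len(metas), "blocks loaded,", len(partial), "partial blocks")
}

For throwaway tooling where eventual consistency and partial uploads don't matter, NewRawMetaFetcher (below) skips the filtering machinery entirely.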
func (*MetaFetcher) UpdateOnChange ¶ added in v0.12.0
func (f *MetaFetcher) UpdateOnChange(listener func([]metadata.Meta, error))
UpdateOnChange allows adding a listener that will be updated on every change.
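The listener shape, as a sketch; constructing the fetcher is covered by the sketch under Fetch above, and what to do inside the callback is up to the caller:

package example

import (
	"github.com/thanos-io/thanos/pkg/block"
	"github.com/thanos-io/thanos/pkg/block/metadata"
)

// registerListener shows the listener signature expected by UpdateOnChange.
func registerListener(fetcher *block.MetaFetcher) {
	fetcher.UpdateOnChange(func(blocks []metadata.Meta, err error) {
		// Called on every change with the freshly synced metadata, or with err on failure.
	})
}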
type MetadataFetcher ¶ added in v0.10.0
type MetadataFilter ¶ added in v0.12.0
type MetadataFilter interface {
Filter(ctx context.Context, metas map[ulid.ULID]*metadata.Meta, synced GaugeVec, modified GaugeVec) error
}
MetadataFilter allows filtering or modifying metas from the provided map, or returns an error.
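As a sketch of this extension point, a hypothetical filter that hides blocks spanning less than a minimum time range; the "too-small" state label is an example value (custom synced label values are what the syncedExtraLabels argument of NewFetcherMetrics is for):

package example

import (
	"context"

	"github.com/oklog/ulid"
	"github.com/thanos-io/thanos/pkg/block"
	"github.com/thanos-io/thanos/pkg/block/metadata"
)

// minDurationFilter is a hypothetical filter that drops blocks covering less
// than minRangeMillis of time (TSDB timestamps are in milliseconds).
type minDurationFilter struct {
	minRangeMillis int64
}

func (f *minDurationFilter) Filter(_ context.Context, metas map[ulid.ULID]*metadata.Meta,
	synced block.GaugeVec, modified block.GaugeVec) error {
	for id, meta := range metas {
		if meta.MaxTime-meta.MinTime < f.minRangeMillis {
			// Count the drop under an example state label and hide the block.
			synced.WithLabelValues("too-small").Inc()
			delete(metas, id)
		}
	}
	return nil
}

var _ block.MetadataFilter = &minDurationFilter{}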
type Reader ¶ added in v0.18.0
type Reader interface {
	// Index returns an IndexReader over the block's data.
	Index() (tsdb.IndexReader, error)

	// Chunks returns a ChunkReader over the block's data.
	Chunks() (tsdb.ChunkReader, error)

	// Meta returns block metadata file.
	Meta() tsdb.BlockMeta
}
Reader is like tsdb.BlockReader but without tombstones and size methods.
type RecursiveLister ¶ added in v0.35.0
type RecursiveLister struct {
// contains filtered or unexported fields
}
RecursiveLister lists block IDs by recursively iterating through a bucket.
func NewRecursiveLister ¶ added in v0.35.0
func NewRecursiveLister(logger log.Logger, bkt objstore.InstrumentedBucketReader) *RecursiveLister
type ReplicaLabelRemover ¶ added in v0.12.0
type ReplicaLabelRemover struct {
// contains filtered or unexported fields
}
ReplicaLabelRemover is a BaseFetcher filter that modifies the external labels of existing blocks; it removes the given replica labels from the metadata of blocks that have them.
func NewReplicaLabelRemover ¶ added in v0.12.0
func NewReplicaLabelRemover(logger log.Logger, replicaLabels []string) *ReplicaLabelRemover
NewReplicaLabelRemover creates a ReplicaLabelRemover.
func (*ReplicaLabelRemover) Filter ¶ added in v0.25.0
func (r *ReplicaLabelRemover) Filter(_ context.Context, metas map[ulid.ULID]*metadata.Meta, synced GaugeVec, modified GaugeVec) error
Filter modifies the external labels of existing blocks; it removes the given replica labels from the metadata of blocks that have them.
type SeriesWriter ¶ added in v0.18.0
type SeriesWriter interface {
	tsdb.IndexWriter
	tsdb.ChunkWriter
}
SeriesWriter is an interface for writing series into one or multiple blocks. Statistics have to be counted by the implementation.
type TimePartitionMetaFilter ¶ added in v0.10.0
type TimePartitionMetaFilter struct {
// contains filtered or unexported fields
}
TimePartitionMetaFilter is a BaseFetcher filter that filters out blocks that are outside of specified time range. Not go-routine safe.
func NewTimePartitionMetaFilter ¶ added in v0.10.0
func NewTimePartitionMetaFilter(MinTime, MaxTime model.TimeOrDurationValue) *TimePartitionMetaFilter
NewTimePartitionMetaFilter creates TimePartitionMetaFilter.
type Writer ¶ added in v0.18.0
type Writer interface {
	SeriesWriter

	Flush() (tsdb.BlockStats, error)
}
Writer is an interface for creating block(s).
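To illustrate the Writer flow end to end, a sketch that writes one tiny block with DiskWriter; the NewDiskWriter call shape is an assumption based on this page, and the symbol/series bookkeeping is simplified to a single series:

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/go-kit/log"
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/tsdb/chunkenc"
	"github.com/prometheus/prometheus/tsdb/chunks"
	"github.com/thanos-io/thanos/pkg/block"
)

func main() {
	logger := log.NewNopLogger()

	bdir := "/tmp/new-block" // hypothetical destination; has to exist already
	if err := os.MkdirAll(bdir, 0o755); err != nil {
		panic(err)
	}

	// Assumed constructor shape; see NewDiskWriter above.
	w, err := block.NewDiskWriter(context.Background(), logger, bdir)
	if err != nil {
		panic(err)
	}

	// One XOR chunk with a single sample.
	chk := chunkenc.NewXORChunk()
	app, err := chk.Appender()
	if err != nil {
		panic(err)
	}
	app.Append(1000, 42.0)

	// Symbols (all label names and values) must be added in sorted order before series.
	for _, sym := range []string{"__name__", "up"} {
		if err := w.AddSymbol(sym); err != nil {
			panic(err)
		}
	}

	chks := []chunks.Meta{{MinTime: 1000, MaxTime: 1000, Chunk: chk}}
	// WriteChunks persists the chunk data and fills in each Meta's chunk reference.
	if err := w.WriteChunks(chks...); err != nil {
		panic(err)
	}
	if err := w.AddSeries(0, labels.FromStrings("__name__", "up"), chks...); err != nil {
		panic(err)
	}

	stats, err := w.Flush()
	if err != nil {
		panic(err)
	}
	fmt.Println("wrote block with", stats.NumSeries, "series")
}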