Documentation ¶
Index ¶
- Constants
- Variables
- func Downsample(logger log.Logger, origMeta *metadata.Meta, b tsdb.BlockReader, dir string, ...) (id ulid.ULID, err error)
- func DownsampleRaw(data []sample, resolution int64) []chunks.Meta
- func NewPool() chunkenc.Pool
- func NewStreamedBlockWriter(blockDir string, indexReader tsdb.IndexReader, logger log.Logger, ...) (w *streamedBlockWriter, err error)
- func SamplesFromTSDBSamples(samples []tsdbutil.Sample) []sample
- type AggrChunk
- func (c AggrChunk) Appender() (chunkenc.Appender, error)
- func (c AggrChunk) Bytes() []byte
- func (c AggrChunk) Compact()
- func (c AggrChunk) Encoding() chunkenc.Encoding
- func (c AggrChunk) Get(t AggrType) (chunkenc.Chunk, error)
- func (c AggrChunk) Iterator(_ chunkenc.Iterator) chunkenc.Iterator
- func (c AggrChunk) NumSamples() int
- type AggrType
- type ApplyCounterResetsSeriesIterator
- type AverageChunkIterator
Constants ¶
const (
	ResLevel0 = int64(0)              // Raw data.
	ResLevel1 = int64(5 * 60 * 1000)  // 5 minutes in milliseconds.
	ResLevel2 = int64(60 * 60 * 1000) // 1 hour in milliseconds.
)
Standard downsampling resolution levels in Thanos.
const (
	ResLevel1DownsampleRange = 40 * 60 * 60 * 1000      // 40 hours.
	ResLevel2DownsampleRange = 10 * 24 * 60 * 60 * 1000 // 10 days.
)
Downsampling ranges, i.e. the minimum block time range after which we start to downsample blocks (in milliseconds).
const ChunkEncAggr = chunkenc.Encoding(0xff)
ChunkEncAggr is the top level encoding byte for the AggrChunk. It picks the highest number possible to prevent future collisions with wrapped encodings.
Variables ¶
var ErrAggrNotExist = errors.New("aggregate does not exist")
ErrAggrNotExist is returned if a requested aggregation is not present in an AggrChunk.
Functions ¶
func Downsample ¶
func Downsample(
	logger log.Logger,
	origMeta *metadata.Meta,
	b tsdb.BlockReader,
	dir string,
	resolution int64,
) (id ulid.ULID, err error)
Downsample downsamples the given block. It writes a new block into dir and returns its ID.
func DownsampleRaw ¶ added in v0.26.0
DownsampleRaw creates a series of aggregation chunks for the given sample data.
func NewStreamedBlockWriter ¶ added in v0.3.0
func NewStreamedBlockWriter(
	blockDir string,
	indexReader tsdb.IndexReader,
	logger log.Logger,
	originMeta metadata.Meta,
) (w *streamedBlockWriter, err error)
NewStreamedBlockWriter returns a streamedBlockWriter instance; it is not concurrency safe. The caller is responsible for closing all io.Closers by calling Close when downsampling is done. If an error occurs outside of the StreamedBlockWriter during processing, the index and meta files are written anyway, so the caller is always responsible for removing the block directory, along with any garbage in it, on error. This approach simplifies the StreamedBlockWriter interface, which is a reasonable trade-off given that errors are the exception, not the general case.
func SamplesFromTSDBSamples ¶ added in v0.26.0
SamplesFromTSDBSamples converts a tsdbutil.Sample slice to a slice of the package's internal sample type.
Types ¶
type AggrChunk ¶
type AggrChunk []byte
AggrChunk is a chunk that is composed of a set of aggregates for the same underlying data. Not all aggregates must be present.
func EncodeAggrChunk ¶
EncodeAggrChunk encodes a new aggregate chunk from the array of chunks for each aggregate. Each array entry corresponds to the respective AggrType number.
func (AggrChunk) NumSamples ¶
type ApplyCounterResetsSeriesIterator ¶ added in v0.26.0
type ApplyCounterResetsSeriesIterator struct {
// contains filtered or unexported fields
}
ApplyCounterResetsSeriesIterator generates monotonically increasing values by iterating over an ordered sequence of chunks, which should be raw or aggregated chunks of counter values. The generated samples can be used by PromQL functions like 'rate' that calculate differences between counter values. Stale markers are removed as well.
Counter aggregation chunks must have the first and last values from their original raw series: the first raw value should be the first value encoded in the chunk, and the last raw value is encoded by the duplication of the previous sample's timestamp. As iteration occurs between chunks, the comparison between the last raw value of the earlier chunk and the first raw value of the later chunk ensures that counter resets between chunks are recognized and that the correct value delta is calculated.
It handles overlapping chunks by removing the overlaps. NOTE: Deduplicate with care to ensure you don't hit issue https://github.com/thanos-io/thanos/issues/2401#issuecomment-621958839. NOTE(bwplotka): This hides resets from the PromQL engine, which means it will not work with the PromQL resets function.
func NewApplyCounterResetsIterator ¶ added in v0.26.0
func NewApplyCounterResetsIterator(chks ...chunkenc.Iterator) *ApplyCounterResetsSeriesIterator
func (*ApplyCounterResetsSeriesIterator) At ¶ added in v0.26.0
func (it *ApplyCounterResetsSeriesIterator) At() (t int64, v float64)
func (*ApplyCounterResetsSeriesIterator) Err ¶ added in v0.26.0
func (it *ApplyCounterResetsSeriesIterator) Err() error
func (*ApplyCounterResetsSeriesIterator) Next ¶ added in v0.26.0
func (it *ApplyCounterResetsSeriesIterator) Next() bool
func (*ApplyCounterResetsSeriesIterator) Seek ¶ added in v0.26.0
func (it *ApplyCounterResetsSeriesIterator) Seek(x int64) bool
type AverageChunkIterator ¶
type AverageChunkIterator struct {
// contains filtered or unexported fields
}
AverageChunkIterator emits an artificial series of average samples based on aggregate chunks with sum and count aggregates.
func NewAverageChunkIterator ¶
func NewAverageChunkIterator(cnt, sum chunkenc.Iterator) *AverageChunkIterator
func (*AverageChunkIterator) At ¶
func (it *AverageChunkIterator) At() (int64, float64)
func (*AverageChunkIterator) Err ¶
func (it *AverageChunkIterator) Err() error
func (*AverageChunkIterator) Next ¶
func (it *AverageChunkIterator) Next() bool
func (*AverageChunkIterator) Seek ¶ added in v0.26.0
func (it *AverageChunkIterator) Seek(t int64) bool