Documentation ¶
Index ¶
- Constants
- Variables
- func Downsample(logger log.Logger, origMeta *metadata.Meta, b tsdb.BlockReader, dir string, ...) (id ulid.ULID, err error)
- func NewPool() chunkenc.Pool
- func NewStreamedBlockWriter(blockDir string, indexReader tsdb.IndexReader, logger log.Logger, ...) (w *streamedBlockWriter, err error)
- type AggrChunk
- type AggrType
- type AverageChunkIterator
- type CounterSeriesIterator
Constants ¶
const (
	ResLevel0 = int64(0)              // raw data
	ResLevel1 = int64(5 * 60 * 1000)  // 5 minutes in milliseconds
	ResLevel2 = int64(60 * 60 * 1000) // 1 hour in milliseconds
)
Standard downsampling resolution levels in Thanos.
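To make the ladder concrete, a caller might walk these levels in order, raw to 5m to 1h. The helper below is a hypothetical sketch, not part of this package:

	// nextResolution is a hypothetical helper: given a block's current
	// resolution, it returns the next coarser level, or false if the
	// block is already at the final level (ResLevel2).
	func nextResolution(current int64) (int64, bool) {
		switch current {
		case ResLevel0:
			return ResLevel1, true // raw -> 5m
		case ResLevel1:
			return ResLevel2, true // 5m -> 1h
		default:
			return 0, false
		}
	}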
const ChunkEncAggr = chunkenc.Encoding(0xff)
ChunkEncAggr is the top level encoding byte for the AggrChunk. It picks the highest number possible to prevent future collisions with wrapped encodings.
Variables ¶
var ErrAggrNotExist = errors.New("aggregate does not exist")
ErrAggrNotExist is returned if a requested aggregation is not present in an AggrChunk.
Functions ¶
func Downsample ¶
func Downsample(
	logger log.Logger,
	origMeta *metadata.Meta,
	b tsdb.BlockReader,
	dir string,
	resolution int64,
) (id ulid.ULID, err error)
Downsample downsamples the given block. It writes a new block into dir and returns its ID.
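A minimal sketch of a call site, assuming the caller has already opened the source block b and parsed its metadata; downsampleOnce is a hypothetical wrapper, not part of this package:

	// downsampleOnce is a hypothetical wrapper showing the call shape:
	// it downsamples a block into dir at the 5-minute resolution level.
	func downsampleOnce(logger log.Logger, origMeta *metadata.Meta, b tsdb.BlockReader, dir string) (ulid.ULID, error) {
		return Downsample(logger, origMeta, b, dir, ResLevel1)
	}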
func NewStreamedBlockWriter ¶ added in v0.3.0
func NewStreamedBlockWriter(
	blockDir string,
	indexReader tsdb.IndexReader,
	logger log.Logger,
	originMeta metadata.Meta,
) (w *streamedBlockWriter, err error)
NewStreamedBlockWriter returns a streamedBlockWriter instance; it is not safe for concurrent use. The caller is responsible for closing all io.Closers by calling Close once downsampling is done. If an error occurs outside of the StreamedBlockWriter during processing, the index and meta files will have been written anyway, so the caller is always responsible for removing the block directory, with any garbage it contains, on error. This approach simplifies the StreamedBlockWriter interface, which is a reasonable trade-off given that errors are the exception, not the general case.
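Because index and meta files may exist even on failure, a caller typically pairs the writer with directory cleanup. A minimal sketch of that contract, assuming the os package is imported; writeBlock is a hypothetical in-package helper:

	// writeBlock is a hypothetical helper illustrating the cleanup
	// contract: on any error, remove the block directory, since partial
	// index and meta files may already exist on disk.
	func writeBlock(blockDir string, indexReader tsdb.IndexReader, logger log.Logger, originMeta metadata.Meta) (err error) {
		w, err := NewStreamedBlockWriter(blockDir, indexReader, logger, originMeta)
		if err != nil {
			return err
		}
		defer func() {
			if err != nil {
				_ = os.RemoveAll(blockDir) // best-effort cleanup
			}
		}()
		// ... write downsampled series through w here ...
		return w.Close()
	}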
Types ¶
type AggrChunk ¶
type AggrChunk []byte
AggrChunk is a chunk that is composed of a set of aggregates for the same underlying data. Not all aggregates must be present.
func EncodeAggrChunk ¶
func EncodeAggrChunk(chks [5]chunkenc.Chunk) AggrChunk
EncodeAggrChunk encodes a new aggregate chunk from the array of chunks for each aggregate. Each array entry corresponds to the respective AggrType number.
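A sketch of encoding a chunk that carries only a count aggregate, assuming the count/sum/min/max/counter ordering of AggrType and that nil entries mark absent aggregates; encodeCountOnly is a hypothetical helper:

	// encodeCountOnly is a hypothetical helper: it appends one sample to
	// a count chunk and encodes it with the remaining aggregates unset.
	func encodeCountOnly() (AggrChunk, error) {
		cnt := chunkenc.NewXORChunk()
		app, err := cnt.Appender()
		if err != nil {
			return nil, err
		}
		app.Append(1000, 120) // window timestamp and sample count

		var chks [5]chunkenc.Chunk // indexed by AggrType number
		chks[0] = cnt              // count; nil slots mean "not present"
		return EncodeAggrChunk(chks), nil
	}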
func (AggrChunk) NumSamples ¶
func (c AggrChunk) NumSamples() int
type AverageChunkIterator ¶
type AverageChunkIterator struct {
// contains filtered or unexported fields
}
AverageChunkIterator emits an artificial series of average samples based on aggregate chunks with sum and count aggregates.
func NewAverageChunkIterator ¶
func NewAverageChunkIterator(cnt, sum chunkenc.Iterator) *AverageChunkIterator
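A sketch of consuming the iterator, assuming cnt and sum are chunkenc.Iterators over window-aligned count and sum aggregates of the same series; printAverages and the fmt import are illustrative:

	// printAverages is a hypothetical helper that prints one average
	// sample (sum / count) per downsampled window.
	func printAverages(cnt, sum chunkenc.Iterator) error {
		it := NewAverageChunkIterator(cnt, sum)
		for it.Next() {
			t, avg := it.At()
			fmt.Println(t, avg)
		}
		return it.Err()
	}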
func (*AverageChunkIterator) At ¶
func (it *AverageChunkIterator) At() (int64, float64)
func (*AverageChunkIterator) Err ¶
func (it *AverageChunkIterator) Err() error
func (*AverageChunkIterator) Next ¶
func (it *AverageChunkIterator) Next() bool
type CounterSeriesIterator ¶
type CounterSeriesIterator struct {
// contains filtered or unexported fields
}
CounterSeriesIterator iterates over an ordered sequence of chunks and treats decreasing values as counter resets. Additionally, it can deal with downsampled counter chunks, which set the last value of a chunk to the original last value. The last value can be detected by checking whether the timestamp did not increase with respect to the previous sample.
func NewCounterSeriesIterator ¶
func NewCounterSeriesIterator(chks ...chunkenc.Iterator) *CounterSeriesIterator
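A sketch of reading reset-adjusted counter values from ordered chunk iterators, starting at a lower time bound; dumpCounter is a hypothetical helper:

	// dumpCounter is a hypothetical helper: chks must be ordered counter
	// iterators for a single series. Seek positions the iterator at the
	// first sample at or after minTime.
	func dumpCounter(minTime int64, chks ...chunkenc.Iterator) error {
		it := NewCounterSeriesIterator(chks...)
		if !it.Seek(minTime) {
			return it.Err()
		}
		t, v := it.At()
		fmt.Println(t, v)
		for it.Next() {
			t, v = it.At()
			fmt.Println(t, v)
		}
		return it.Err()
	}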
func (*CounterSeriesIterator) At ¶
func (it *CounterSeriesIterator) At() (t int64, v float64)
func (*CounterSeriesIterator) Err ¶
func (it *CounterSeriesIterator) Err() error
func (*CounterSeriesIterator) Next ¶
func (it *CounterSeriesIterator) Next() bool
func (*CounterSeriesIterator) Seek ¶
func (it *CounterSeriesIterator) Seek(x int64) bool