downsample

package
v0.10.0
Published: Jan 13, 2020 License: Apache-2.0 Imports: 25 Imported by: 15

Documentation

Index

Constants

const (
	ResLevel0 = int64(0)              // Raw data.
	ResLevel1 = int64(5 * 60 * 1000)  // 5 minutes in milliseconds.
	ResLevel2 = int64(60 * 60 * 1000) // 1 hour in milliseconds.
)

Standard downsampling resolution levels in Thanos.

const (
	DownsampleRange0 = 40 * 60 * 60 * 1000      // 40 hours.
	DownsampleRange1 = 10 * 24 * 60 * 60 * 1000 // 10 days.
)

Downsampling ranges, i.e. the minimum time range a block must cover before we start to downsample it (in milliseconds).
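
For illustration, a minimal sketch of how these thresholds and the resolution levels above might combine when planning the next downsampling step for a block; nextResolution is a hypothetical helper, not part of this package (the actual planning logic lives in the Thanos compactor):

func nextResolution(minTime, maxTime, resolution int64) (int64, bool) {
	blockRange := maxTime - minTime // Time range covered by the block, in milliseconds.
	switch resolution {
	case downsample.ResLevel0:
		if blockRange >= downsample.DownsampleRange0 {
			return downsample.ResLevel1, true // Raw -> 5m after 40 hours.
		}
	case downsample.ResLevel1:
		if blockRange >= downsample.DownsampleRange1 {
			return downsample.ResLevel2, true // 5m -> 1h after 10 days.
		}
	}
	return resolution, false // No further downsampling applies.
}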

const ChunkEncAggr = chunkenc.Encoding(0xff)

ChunkEncAggr is the top level encoding byte for the AggrChunk. It picks the highest number possible to prevent future collisions with wrapped encodings.
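
As an illustration, a hedged sketch of telling aggregate chunks apart from raw ones by this encoding byte; asAggrChunk is a hypothetical helper, not part of the package:

// asAggrChunk reinterprets a chunk read from a downsampled block as an
// AggrChunk when its top-level encoding byte marks it as aggregated.
func asAggrChunk(chk chunkenc.Chunk) (downsample.AggrChunk, bool) {
	if chk.Encoding() != downsample.ChunkEncAggr {
		return nil, false // A raw chunk, e.g. plain XOR encoding.
	}
	return downsample.AggrChunk(chk.Bytes()), true
}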

Variables

var ErrAggrNotExist = errors.New("aggregate does not exist")

ErrAggrNotExist is returned if a requested aggregation is not present in an AggrChunk.

Functions

func Downsample

func Downsample(
	logger log.Logger,
	origMeta *metadata.Meta,
	b tsdb.BlockReader,
	dir string,
	resolution int64,
) (id ulid.ULID, err error)

Downsample downsamples the given block. It writes a new block into dir and returns its ID.
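
A minimal call-site sketch, assuming an already open tsdb.BlockReader and its Thanos metadata; logger, origMeta, blockReader and outDir are placeholders:

// Downsample a raw block to the 5-minute resolution level.
id, err := downsample.Downsample(logger, origMeta, blockReader, outDir, downsample.ResLevel1)
if err != nil {
	// A partially written block may remain under outDir; the caller
	// should remove it.
	return err
}
// id is the ULID of the new block written into outDir.
_ = id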

func NewPool

func NewPool() chunkenc.Pool

TODO(bwplotka): Add reasonable limits to our sync pools so we can detect OOMs early.

func NewStreamedBlockWriter added in v0.3.0

func NewStreamedBlockWriter(
	blockDir string,
	indexReader tsdb.IndexReader,
	logger log.Logger,
	originMeta metadata.Meta,
) (w *streamedBlockWriter, err error)

NewStreamedBlockWriter returns a streamedBlockWriter instance. It is not safe for concurrent use, and the caller is responsible for closing all io.Closers by calling Close once downsampling is done. If an error happens outside of the StreamedBlockWriter during processing, the index and meta files may already have been written, so on error the caller is always responsible for removing the block directory together with its garbage. This keeps the StreamedBlockWriter interface simple, which is a reasonable trade-off given that errors are the exception rather than the general case.
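
A sketch of that contract in use, assuming an enclosing function with a named error return; blockDir, indexReader, logger and meta are placeholders, and the unexported per-series write methods are elided:

w, err := downsample.NewStreamedBlockWriter(blockDir, indexReader, logger, meta)
if err != nil {
	return err
}
// On any error past this point, remove the whole block directory:
// partial index and meta files may already be on disk.
defer func() {
	if err != nil {
		_ = os.RemoveAll(blockDir)
	}
}()
// ... stream downsampled series through w here ...
err = w.Close()
return err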

Types

type AggrChunk

type AggrChunk []byte

AggrChunk is a chunk that is composed of a set of aggregates for the same underlying data. Not all aggregates must be present.

func EncodeAggrChunk

func EncodeAggrChunk(chks [5]chunkenc.Chunk) *AggrChunk

EncodeAggrChunk encodes a new aggregate chunk from an array of per-aggregate chunks. Each array index corresponds to the respective AggrType value.
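
For example, a hedged sketch that builds five per-aggregate XOR chunks, indexed by their AggrType value, and encodes them; the sample values are placeholders:

var parts [5]chunkenc.Chunk
for i := range parts {
	c := chunkenc.NewXORChunk()
	app, err := c.Appender()
	if err != nil {
		return err
	}
	app.Append(1000, float64(i)) // One pre-aggregated sample per chunk.
	parts[i] = c
}
// parts[downsample.AggrCount] holds the count chunk,
// parts[downsample.AggrSum] the sum chunk, and so on.
aggr := downsample.EncodeAggrChunk(parts)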

func (AggrChunk) Appender

func (c AggrChunk) Appender() (chunkenc.Appender, error)

func (AggrChunk) Bytes

func (c AggrChunk) Bytes() []byte

func (AggrChunk) Encoding

func (c AggrChunk) Encoding() chunkenc.Encoding

func (AggrChunk) Get

func (c AggrChunk) Get(t AggrType) (chunkenc.Chunk, error)

Get returns the sub-chunk for the given aggregate type if it exists.
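
A usage sketch, assuming aggr is an AggrChunk read from a downsampled block; chunks that were written without the requested aggregate yield ErrAggrNotExist:

sum, err := aggr.Get(downsample.AggrSum)
if err == downsample.ErrAggrNotExist {
	return nil // This chunk carries no sum aggregate; skip it.
}
if err != nil {
	return err
}
it := sum.Iterator(nil)
for it.Next() {
	t, v := it.At()
	fmt.Println(t, v)
}
return it.Err()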

func (AggrChunk) Iterator

func (c AggrChunk) Iterator(_ chunkenc.Iterator) chunkenc.Iterator

func (AggrChunk) NumSamples

func (c AggrChunk) NumSamples() int

type AggrType

type AggrType uint8

AggrType represents an aggregation type.

const (
	AggrCount AggrType = iota
	AggrSum
	AggrMin
	AggrMax
	AggrCounter
)

Valid aggregations.

func (AggrType) String

func (t AggrType) String() string

type AverageChunkIterator

type AverageChunkIterator struct {
	// contains filtered or unexported fields
}

AverageChunkIterator emits an artificial series of average samples based on aggregate chunks that carry sum and count aggregates.

func NewAverageChunkIterator

func NewAverageChunkIterator(cnt, sum chunkenc.Iterator) *AverageChunkIterator
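
A sketch of deriving averages, assuming aggr is an AggrChunk that carries both count and sum aggregates:

cnt, err := aggr.Get(downsample.AggrCount)
if err != nil {
	return err
}
sum, err := aggr.Get(downsample.AggrSum)
if err != nil {
	return err
}
avg := downsample.NewAverageChunkIterator(cnt.Iterator(nil), sum.Iterator(nil))
for avg.Next() {
	t, v := avg.At()
	fmt.Println(t, v) // v is sum/count at each aligned timestamp.
}
return avg.Err()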

func (*AverageChunkIterator) At

func (it *AverageChunkIterator) At() (int64, float64)

func (*AverageChunkIterator) Err

func (it *AverageChunkIterator) Err() error

func (*AverageChunkIterator) Next

func (it *AverageChunkIterator) Next() bool

type CounterSeriesIterator

type CounterSeriesIterator struct {
	// contains filtered or unexported fields
}

CounterSeriesIterator generates monotonically increasing values by iterating over an ordered sequence of chunks, which should be raw or aggregated chunks of counter values. The generated samples can be used by PromQL functions like 'rate' that calculate differences between counter values.

Counter aggregation chunks must contain the first and last values of their original raw series: the first raw value should be the first value encoded in the chunk, and the last raw value is encoded by duplicating the previous sample's timestamp. As iteration moves between chunks, comparing the last raw value of the earlier chunk with the first raw value of the later chunk ensures that counter resets between chunks are recognized and that the correct value delta is calculated.
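
A sketch of stitching counter sub-chunks from consecutive aggregate chunks into one reset-adjusted series; counterChks is assumed to be a time-ordered []chunkenc.Chunk of AggrCounter data:

its := make([]chunkenc.Iterator, 0, len(counterChks))
for _, c := range counterChks {
	its = append(its, c.Iterator(nil))
}
it := downsample.NewCounterSeriesIterator(its...)
for it.Next() {
	t, v := it.At()
	fmt.Println(t, v) // Monotonically increasing, reset-adjusted values.
}
return it.Err()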

func NewCounterSeriesIterator

func NewCounterSeriesIterator(chks ...chunkenc.Iterator) *CounterSeriesIterator

func (*CounterSeriesIterator) At

func (it *CounterSeriesIterator) At() (t int64, v float64)

func (*CounterSeriesIterator) Err

func (it *CounterSeriesIterator) Err() error

func (*CounterSeriesIterator) Next

func (it *CounterSeriesIterator) Next() bool

func (*CounterSeriesIterator) Seek

func (it *CounterSeriesIterator) Seek(x int64) bool
