storage

package v0.37.8
Published: May 5, 2023 License: Apache-2.0 Imports: 17 Imported by: 1,620

Documentation


Constants

This section is empty.

Variables

var (
	ErrNotFound                    = errors.New("not found")
	ErrOutOfOrderSample            = errors.New("out of order sample")
	ErrDuplicateSampleForTimestamp = errors.New("duplicate sample for timestamp")
	ErrOutOfBounds                 = errors.New("out of bounds")
	ErrOutOfOrderExemplar          = errors.New("out of order exemplar")
	ErrDuplicateExemplar           = errors.New("duplicate exemplar")
	ErrExemplarLabelLength         = fmt.Errorf("label length for exemplar exceeds maximum of %d UTF-8 characters", exemplar.ExemplarMaxLabelSetLength)
	ErrExemplarsDisabled           = fmt.Errorf("exemplar storage is disabled or max exemplars is less than or equal to 0")
)

The errors exposed.

Functions

func ExpandChunks

func ExpandChunks(iter chunks.Iterator) ([]chunks.Meta, error)

ExpandChunks iterates over all chunks in the iterator, buffering all of them in a slice.

func ExpandSamples

func ExpandSamples(iter chunkenc.Iterator, newSampleFn func(t int64, v float64) tsdbutil.Sample) ([]tsdbutil.Sample, error)

ExpandSamples iterates over all samples in the iterator, buffering all of them in a slice. Optionally it takes a sample constructor, which is useful when you want to compare sample slices built with different sample implementations. If nil, the sample type from this package is used.
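As a minimal sketch (assuming the MockSeries label set is given as name/value pairs and that the default, package-internal sample type is acceptable), the samples of a small in-memory series can be buffered like this:

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/storage"
)

func main() {
	series := storage.MockSeries(
		[]int64{1000, 2000, 3000},    // timestamps in milliseconds
		[]float64{1, 2, 3},           // values
		[]string{"__name__", "demo"}, // label name/value pairs (assumed layout)
	)

	// Passing nil uses the sample type from this package.
	samples, err := storage.ExpandSamples(series.Iterator(), nil)
	if err != nil {
		panic(err)
	}
	for _, s := range samples {
		fmt.Println(s.T(), s.V())
	}
}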

func NewChainSampleIterator

func NewChainSampleIterator(iterators []chunkenc.Iterator) chunkenc.Iterator

NewChainSampleIterator returns a single iterator that iterates over the samples from the given iterators in sorted order. If samples overlap, one of the overlapping samples is kept (chosen arbitrarily) and all others with the same timestamp are dropped.

func NewListChunkSeriesIterator

func NewListChunkSeriesIterator(chks ...chunks.Meta) chunks.Iterator

NewListChunkSeriesIterator returns a listChunkSeriesIterator that iterates over the provided chunks.

func NewListSeriesIterator

func NewListSeriesIterator(samples Samples) chunkenc.Iterator

NewListSeriesIterator returns a listSeriesIterator that iterates over the provided samples.

Types

type Appendable

type Appendable interface {
	// Appender returns a new appender for the storage. The implementation
	// can choose whether or not to use the context, for deadlines or to check
	// for errors.
	Appender(ctx context.Context) Appender
}

Appendable allows creating appenders.

type Appender

type Appender interface {
	// Append adds a sample pair for the given series.
	// An optional series reference can be provided to accelerate calls.
	// A series reference number is returned which can be used to add further
	// samples to the given series in the same or later transactions.
	// Returned reference numbers are ephemeral and may be rejected in calls
	// to Append() at any point. Adding the sample via Append() returns a new
	// reference number.
	// If the reference is 0 it must not be used for caching.
	Append(ref SeriesRef, l labels.Labels, t int64, v float64) (SeriesRef, error)

	// Commit submits the collected samples and purges the batch. If Commit
	// returns a non-nil error, it also rolls back all modifications made in
	// the appender so far, as Rollback would do. In any case, an Appender
	// must not be used anymore after Commit has been called.
	Commit() error

	// Rollback rolls back all modifications made in the appender so far.
	// Appender has to be discarded after rollback.
	Rollback() error
	ExemplarAppender
}

Appender provides batched appends against a storage. It must be completed with a call to Commit or Rollback and must not be reused afterwards.

Operations on the Appender interface are not goroutine-safe.
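A hedged sketch of the intended lifecycle, assuming the Appendable is supplied by the caller (e.g. a TSDB instance); writeSamples and the label values are illustrative names:

package example

import (
	"context"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

func writeSamples(ctx context.Context, db storage.Appendable, ts int64, v float64) error {
	app := db.Appender(ctx)
	lbls := labels.FromStrings("__name__", "demo_metric", "job", "example")

	// First append: ref 0 means "no cached reference yet".
	ref, err := app.Append(0, lbls, ts, v)
	if err != nil {
		_ = app.Rollback() // discard the whole batch on error
		return err
	}
	// Further appends to the same series can reuse the returned reference.
	if _, err := app.Append(ref, lbls, ts+15000, v+1); err != nil {
		_ = app.Rollback()
		return err
	}
	// Commit makes the batch visible; the appender must not be reused afterwards.
	return app.Commit()
}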

type BufferedSeriesIterator

type BufferedSeriesIterator struct {
	// contains filtered or unexported fields
}

BufferedSeriesIterator wraps an iterator with a look-back buffer.

func NewBuffer

func NewBuffer(delta int64) *BufferedSeriesIterator

NewBuffer returns a new iterator that buffers the values within the time range of the current element and the duration of delta before, initialized with an empty iterator. Use Reset() to set an actual iterator to be buffered.

func NewBufferIterator

func NewBufferIterator(it chunkenc.Iterator, delta int64) *BufferedSeriesIterator

NewBufferIterator returns a new iterator that buffers the values within the time range of the current element and the duration of delta before.

func (*BufferedSeriesIterator) At

At returns the current element of the iterator.

func (*BufferedSeriesIterator) Buffer

Buffer returns an iterator over the buffered data. Invalidates previously returned iterators.

func (*BufferedSeriesIterator) Err

func (b *BufferedSeriesIterator) Err() error

Err returns the last encountered error.

func (*BufferedSeriesIterator) Next

func (b *BufferedSeriesIterator) Next() bool

Next advances the iterator to the next element.

func (*BufferedSeriesIterator) PeekBack

func (b *BufferedSeriesIterator) PeekBack(n int) (t int64, v float64, ok bool)

PeekBack returns the nth previous element of the iterator. If there is none buffered, ok is false.
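A small sketch of the look-back buffer, using only the documented NewBufferIterator, Next, PeekBack and Err signatures (the lookBack helper is illustrative):

package example

import (
	"fmt"
	"time"

	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/tsdb/chunkenc"
)

func lookBack(it chunkenc.Iterator) error {
	// Keep five minutes of samples behind the current position.
	buf := storage.NewBufferIterator(it, (5 * time.Minute).Milliseconds())

	for buf.Next() {
		// Inspect the sample immediately before the current one, if any.
		if t, v, ok := buf.PeekBack(1); ok {
			fmt.Println("previous sample:", t, v)
		}
	}
	return buf.Err()
}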

func (*BufferedSeriesIterator) ReduceDelta

func (b *BufferedSeriesIterator) ReduceDelta(delta int64) bool

ReduceDelta lowers the buffered time delta, for the current SeriesIterator only.

func (*BufferedSeriesIterator) Reset

Reset re-uses the buffer with a new iterator, resetting the buffered time delta to its original value.

func (*BufferedSeriesIterator) Seek

func (b *BufferedSeriesIterator) Seek(t int64) bool

Seek advances the iterator to the element at time t or greater.

type ChunkIterable

type ChunkIterable interface {
	// Iterator returns a new, independent iterator that iterates over potentially overlapping
	// chunks of the series, sorted by min time.
	Iterator() chunks.Iterator
}

type ChunkQuerier

type ChunkQuerier interface {
	LabelQuerier

	// Select returns a set of series that matches the given label matchers.
	// Caller can specify if it requires returned series to be sorted. Prefer not requiring sorting for better performance.
	// It allows passing hints that can help in optimising select, but it's up to implementation how this is used if used at all.
	Select(sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) ChunkSeriesSet
}

ChunkQuerier provides querying access over time series data of a fixed time range.

func NewMergeChunkQuerier

func NewMergeChunkQuerier(primaries, secondaries []ChunkQuerier, mergeFn VerticalChunkSeriesMergeFunc) ChunkQuerier

NewMergeChunkQuerier returns a new Chunk Querier that merges results of given primary and secondary chunk queriers. See NewFanout commentary to learn more about primary vs secondary differences.

In case of overlaps between the data given by primaries' and secondaries' Selects, the merge function will be used. TODO(bwplotka): Currently merge will compact overlapping chunks into a bigger chunk without limit. Split it: https://github.com/prometheus/tsdb/issues/670

func NoopChunkedQuerier

func NoopChunkedQuerier() ChunkQuerier

NoopChunkedQuerier is a ChunkQuerier that does nothing.

type ChunkQueryable

type ChunkQueryable interface {
	// ChunkQuerier returns a new ChunkQuerier on the storage.
	ChunkQuerier(ctx context.Context, mint, maxt int64) (ChunkQuerier, error)
}

A ChunkQueryable handles queries against a storage. Use it when you need access to samples in their encoded (chunk) form.

type ChunkSeries

type ChunkSeries interface {
	Labels
	ChunkIterable
}

ChunkSeries exposes a single time series and allows iterating over chunks.

func NewSeriesToChunkEncoder

func NewSeriesToChunkEncoder(series Series) ChunkSeries

NewSeriesToChunkEncoder encodes samples into chunks with a limit of 120 samples per chunk.
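A short sketch combining the encoder with ExpandChunks (the toChunks helper name is illustrative):

package example

import (
	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/tsdb/chunks"
)

func toChunks(s storage.Series) ([]chunks.Meta, error) {
	// Expose the raw series as encoded chunks, then buffer all chunk metas in a slice.
	return storage.ExpandChunks(storage.NewSeriesToChunkEncoder(s).Iterator())
}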

type ChunkSeriesEntry

type ChunkSeriesEntry struct {
	Lset            labels.Labels
	ChunkIteratorFn func() chunks.Iterator
}

func NewListChunkSeriesFromSamples

func NewListChunkSeriesFromSamples(lset labels.Labels, samples ...[]tsdbutil.Sample) *ChunkSeriesEntry

NewListChunkSeriesFromSamples returns a chunk series entry that iterates over the provided samples. NOTE: It uses an inefficient chunk encoding implementation that does not care about chunk size.

func (*ChunkSeriesEntry) Iterator

func (s *ChunkSeriesEntry) Iterator() chunks.Iterator

func (*ChunkSeriesEntry) Labels

func (s *ChunkSeriesEntry) Labels() labels.Labels

type ChunkSeriesSet

type ChunkSeriesSet interface {
	Next() bool
	// At returns full chunk series. Returned series should be iterable even after Next is called.
	At() ChunkSeries
	// The error that iteration has failed with.
	// When an error occurs, the set cannot continue to iterate.
	Err() error
	// A collection of warnings for the whole set.
	// Warnings could be returned even if iteration has not failed with an error.
	Warnings() Warnings
}

ChunkSeriesSet contains a set of chunked series.

func EmptyChunkSeriesSet

func EmptyChunkSeriesSet() ChunkSeriesSet

EmptyChunkSeriesSet returns a chunk series set that's always empty.

func ErrChunkSeriesSet

func ErrChunkSeriesSet(err error) ChunkSeriesSet

ErrChunkSeriesSet returns a chunk series set that wraps an error.

func NewMergeChunkSeriesSet

func NewMergeChunkSeriesSet(sets []ChunkSeriesSet, mergeFunc VerticalChunkSeriesMergeFunc) ChunkSeriesSet

NewMergeChunkSeriesSet returns a new ChunkSeriesSet that merges many ChunkSeriesSets together.

func NewSeriesSetToChunkSet

func NewSeriesSetToChunkSet(chk SeriesSet) ChunkSeriesSet

NewSeriesSetToChunkSet converts SeriesSet to ChunkSeriesSet by encoding chunks from samples.

func NoopChunkedSeriesSet

func NoopChunkedSeriesSet() ChunkSeriesSet

NoopChunkedSeriesSet is a ChunkSeriesSet that does nothing.

type ExemplarAppender

type ExemplarAppender interface {
	// AppendExemplar adds an exemplar for the given series labels.
	// An optional reference number can be provided to accelerate calls.
	// A reference number is returned which can be used to add further
	// exemplars in the same or later transactions.
	// Returned reference numbers are ephemeral and may be rejected in calls
	// to Append() at any point. Adding the sample via Append() returns a new
	// reference number.
	// If the reference is 0 it must not be used for caching.
	// Note that in our current implementation of Prometheus' exemplar storage,
	// calls to Append should generate the reference numbers; AppendExemplar
	// generating a new reference number should be considered possibly erroneous behaviour and should be logged.
	AppendExemplar(ref SeriesRef, l labels.Labels, e exemplar.Exemplar) (SeriesRef, error)
}

ExemplarAppender provides an interface for adding samples to exemplar storage, which within Prometheus is in-memory only.
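A hedged sketch of attaching an exemplar to an already-appended series. The exemplar.Exemplar field names (Labels, Value, Ts, HasTs) are assumed from the model/exemplar package of this module version, and the trace_id label is illustrative:

package example

import (
	"github.com/prometheus/prometheus/model/exemplar"
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

func appendExemplar(app storage.Appender, ref storage.SeriesRef, seriesLabels labels.Labels) error {
	e := exemplar.Exemplar{
		Labels: labels.FromStrings("trace_id", "abc123"), // the exemplar's own labels
		Value:  0.42,
		Ts:     1683000000000, // milliseconds
		HasTs:  true,
	}
	// ref may be the reference returned by a previous Append for the same series, or 0.
	_, err := app.AppendExemplar(ref, seriesLabels, e)
	return err
}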

type ExemplarQuerier

type ExemplarQuerier interface {
	// Select all the exemplars that match the matchers.
	// Within a single slice of matchers, it is an intersection. Between the slices, it is a union.
	Select(start, end int64, matchers ...[]*labels.Matcher) ([]exemplar.QueryResult, error)
}

ExemplarQuerier provides reading access to time series data.

type ExemplarQueryable

type ExemplarQueryable interface {
	// ExemplarQuerier returns a new ExemplarQuerier on the storage.
	ExemplarQuerier(ctx context.Context) (ExemplarQuerier, error)
}

type ExemplarStorage

type ExemplarStorage interface {
	ExemplarQueryable
	ExemplarAppender
}

ExemplarStorage ingests and manages exemplars, along with various indexes. All methods are goroutine-safe. ExemplarStorage implements storage.ExemplarAppender and storage.ExemplarQuerier.

type GetRef

type GetRef interface {
	// GetRef returns a reference number that can be passed to Appender.Append(),
	// and a set of labels that will not cause another copy when passed to Appender.Append().
	// A returned reference of 0 means the appender does not have a reference to this series.
	GetRef(lset labels.Labels) (SeriesRef, labels.Labels)
}

GetRef is an extra interface on Appenders used by downstream projects (e.g. Cortex) to avoid maintaining a parallel set of references.
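A sketch of how a downstream appender might probe for this optional interface before appending (appendWithRef is an illustrative helper):

package example

import (
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

func appendWithRef(app storage.Appender, lset labels.Labels, t int64, v float64) error {
	var ref storage.SeriesRef
	if g, ok := app.(storage.GetRef); ok {
		// Reuse the cached reference and canonical labels when available.
		if r, cached := g.GetRef(lset); r != 0 {
			ref, lset = r, cached
		}
	}
	_, err := app.Append(ref, lset, t, v)
	return err
}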

type LabelQuerier

type LabelQuerier interface {
	// LabelValues returns all potential values for a label name.
	// It is not safe to use the strings beyond the lifetime of the querier.
	// If matchers are specified the returned result set is reduced
	// to label values of metrics matching the matchers.
	LabelValues(name string, matchers ...*labels.Matcher) ([]string, Warnings, error)

	// LabelNames returns all the unique label names present in the block in sorted order.
	// If matchers are specified the returned result set is reduced
	// to label names of metrics matching the matchers.
	LabelNames(matchers ...*labels.Matcher) ([]string, Warnings, error)

	// Close releases the resources of the Querier.
	Close() error
}

LabelQuerier provides querying access over labels.
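For example, listing the job label values of metrics named up might look like this sketch (the metric and label names are illustrative):

package example

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

func jobValues(q storage.LabelQuerier) ([]string, error) {
	m := labels.MustNewMatcher(labels.MatchEqual, "__name__", "up")
	vals, warnings, err := q.LabelValues("job", m)
	for _, w := range warnings {
		fmt.Println("warning:", w)
	}
	return vals, err
}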

type Labels

type Labels interface {
	// Labels returns the complete set of labels. For series it means all labels identifying the series.
	Labels() labels.Labels
}

Labels represents an item that has labels, e.g. a time series.

type MemoizedSeriesIterator

type MemoizedSeriesIterator struct {
	// contains filtered or unexported fields
}

MemoizedSeriesIterator wraps an iterator with a buffer to look back the previous element.

func NewMemoizedEmptyIterator

func NewMemoizedEmptyIterator(delta int64) *MemoizedSeriesIterator

NewMemoizedEmptyIterator is like NewMemoizedIterator but it's initialised with an empty iterator.

func NewMemoizedIterator

func NewMemoizedIterator(it chunkenc.Iterator, delta int64) *MemoizedSeriesIterator

NewMemoizedIterator returns a new iterator that buffers the values within the time range of the current element and the duration of delta before.

func (*MemoizedSeriesIterator) At

At returns the current element of the iterator.

func (*MemoizedSeriesIterator) Err

func (b *MemoizedSeriesIterator) Err() error

Err returns the last encountered error.

func (*MemoizedSeriesIterator) Next

func (b *MemoizedSeriesIterator) Next() bool

Next advances the iterator to the next element.

func (*MemoizedSeriesIterator) PeekPrev

func (b *MemoizedSeriesIterator) PeekPrev() (t int64, v float64, ok bool)

PeekPrev returns the previous element of the iterator. If there is none buffered, ok is false.

func (*MemoizedSeriesIterator) Reset

Reset the internal state to reuse the wrapper with the provided iterator.

func (*MemoizedSeriesIterator) Seek

func (b *MemoizedSeriesIterator) Seek(t int64) bool

Seek advances the iterator to the element at time t or greater.

type MockQuerier

type MockQuerier struct {
	SelectMockFunction func(sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) SeriesSet
}

MockQuerier is used for test purposes to mock the selected series that are returned.
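A sketch of wiring the mock in a test, returning one fixed series regardless of the matchers (the label values are illustrative):

package example

import (
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

func newMockQuerier() storage.Querier {
	return &storage.MockQuerier{
		SelectMockFunction: func(sortSeries bool, hints *storage.SelectHints, matchers ...*labels.Matcher) storage.SeriesSet {
			return storage.TestSeriesSet(storage.MockSeries(
				[]int64{1000, 2000},
				[]float64{1, 2},
				[]string{"__name__", "mocked"}, // assumed name/value pair layout
			))
		},
	}
}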

func (*MockQuerier) Close

func (q *MockQuerier) Close() error

func (*MockQuerier) LabelNames

func (q *MockQuerier) LabelNames(matchers ...*labels.Matcher) ([]string, Warnings, error)

func (*MockQuerier) LabelValues

func (q *MockQuerier) LabelValues(name string, matchers ...*labels.Matcher) ([]string, Warnings, error)

func (*MockQuerier) Select

func (q *MockQuerier) Select(sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) SeriesSet

type MockQueryable

type MockQueryable struct {
	MockQuerier Querier
}

A MockQueryable is used for testing purposes so that a mock Querier can be used.

func (*MockQueryable) Querier

func (q *MockQueryable) Querier(ctx context.Context, mint, maxt int64) (Querier, error)

type Querier

type Querier interface {
	LabelQuerier

	// Select returns a set of series that matches the given label matchers.
	// Caller can specify if it requires returned series to be sorted. Prefer not requiring sorting for better performance.
	// It allows passing hints that can help in optimising select, but it's up to implementation how this is used if used at all.
	Select(sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) SeriesSet
}

Querier provides querying access over time series data of a fixed time range.
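A hedged end-to-end sketch: open a Querier from a Queryable, select series by matcher, and walk the returned SeriesSet. It assumes this module version's sample iterator API, where At returns a timestamp/value pair; the metric name is illustrative:

package example

import (
	"context"
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

func printSeries(ctx context.Context, db storage.Queryable, mint, maxt int64) error {
	q, err := db.Querier(ctx, mint, maxt)
	if err != nil {
		return err
	}
	defer q.Close()

	matcher := labels.MustNewMatcher(labels.MatchEqual, "__name__", "up")
	ss := q.Select(false, nil, matcher) // unsorted is cheaper when order does not matter

	for ss.Next() {
		s := ss.At()
		fmt.Println(s.Labels())
		it := s.Iterator()
		for it.Next() {
			t, v := it.At()
			fmt.Println(t, v)
		}
		if err := it.Err(); err != nil {
			return err
		}
	}
	for _, w := range ss.Warnings() {
		fmt.Println("warning:", w)
	}
	return ss.Err()
}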

func NewMergeQuerier

func NewMergeQuerier(primaries, secondaries []Querier, mergeFn VerticalSeriesMergeFunc) Querier

NewMergeQuerier returns a new Querier that merges results of given primary and secondary queriers. See NewFanout commentary to learn more about primary vs secondary differences.

In case of overlaps between the data given by primaries' and secondaries' Selects, the merge function will be used.
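A short sketch of composing a merge querier from a local querier and remote-read queriers, using ChainedSeriesMerge to deduplicate replicated samples (the variable names are illustrative):

package example

import "github.com/prometheus/prometheus/storage"

func mergeQueriers(local storage.Querier, remotes []storage.Querier) storage.Querier {
	// Errors from the primary fail the query; errors from secondaries become warnings.
	return storage.NewMergeQuerier([]storage.Querier{local}, remotes, storage.ChainedSeriesMerge)
}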

func NoopQuerier

func NoopQuerier() Querier

NoopQuerier is a Querier that does nothing.

type Queryable

type Queryable interface {
	// Querier returns a new Querier on the storage.
	Querier(ctx context.Context, mint, maxt int64) (Querier, error)
}

A Queryable handles queries against a storage. Use it when you need access to all samples without the chunk encoding abstraction, e.g. in PromQL.

type QueryableFunc

type QueryableFunc func(ctx context.Context, mint, maxt int64) (Querier, error)

TODO(bwplotka): Move to promql/engine_test.go? QueryableFunc is an adapter to allow the use of ordinary functions as Queryables. It follows the idea of http.HandlerFunc.

func (QueryableFunc) Querier

func (f QueryableFunc) Querier(ctx context.Context, mint, maxt int64) (Querier, error)

Querier calls f() with the given parameters.
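A minimal sketch of the adapter pattern, turning an ordinary function into a Queryable (here it simply hands back the no-op querier):

package example

import (
	"context"

	"github.com/prometheus/prometheus/storage"
)

var noopQueryable storage.Queryable = storage.QueryableFunc(
	func(ctx context.Context, mint, maxt int64) (storage.Querier, error) {
		return storage.NoopQuerier(), nil
	},
)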

type SampleAndChunkQueryable

type SampleAndChunkQueryable interface {
	Queryable
	ChunkQueryable
}

SampleAndChunkQueryable allows retrieving samples as well as encoded samples in form of chunks.

type SampleIterable

type SampleIterable interface {
	// Iterator returns a new, independent iterator of the data of the series.
	Iterator() chunkenc.Iterator
}

type Samples

type Samples interface {
	Get(i int) tsdbutil.Sample
	Len() int
}

The Samples interface allows working with slices of types that are compatible with tsdbutil.Sample.

type SelectHints

type SelectHints struct {
	Start int64 // Start time in milliseconds for this select.
	End   int64 // End time in milliseconds for this select.

	Step int64  // Query step size in milliseconds.
	Func string // String representation of surrounding function or aggregation.

	Grouping []string // List of label names used in aggregation.
	By       bool     // Indicates whether the grouping is "by" (true) or "without" (false).
	Range    int64    // Range vector selector range in milliseconds.

	// DisableTrimming allows disabling the trimming of matching series chunks based on the query Start and End time.
	// When disabled, the result may contain samples outside the queried time range, but Select()
	// performance may be improved.
	DisableTrimming bool
}

SelectHints specifies hints passed for data selections. These hints are optional; it is up to the implementation whether and how to use them.
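As a hedged example, the hints a caller might build for an expression like sum by (job) (rate(http_requests_total[5m])) evaluated at a 15s step could look as follows (the querier and matcher are assumed to come from elsewhere):

package example

import (
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/storage"
)

func selectWithHints(q storage.Querier, matcher *labels.Matcher, startMs, endMs int64) storage.SeriesSet {
	hints := &storage.SelectHints{
		Start:    startMs,
		End:      endMs,
		Step:     15 * 1000,       // query resolution step in milliseconds
		Func:     "rate",          // innermost surrounding function
		Grouping: []string{"job"}, // sum by (job)
		By:       true,            // "by" grouping rather than "without"
		Range:    5 * 60 * 1000,   // [5m] range selector in milliseconds
	}
	return q.Select(false, hints, matcher)
}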

type Series

type Series interface {
	Labels
	SampleIterable
}

Series exposes a single time series and allows iterating over samples.

func ChainedSeriesMerge

func ChainedSeriesMerge(series ...Series) Series

ChainedSeriesMerge returns a single series built from many instances of the same, potentially overlapping series by chaining samples together. If one or more samples overlap, one of the overlapping samples is kept (chosen arbitrarily) and all others with the same timestamp are dropped.

This works best with replicated series, where the data from both series is exactly the same. It does not work well with data that is only "almost" the same, e.g. from two Prometheus HA replicas. This is fine, since from the Prometheus perspective this never happens.

It's optimized for non-overlap cases as well.
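A small sketch with two replicas of the same series, where the duplicated sample at t=2000 is deduplicated (the label values are illustrative):

package example

import (
	"github.com/prometheus/prometheus/storage"
	"github.com/prometheus/prometheus/tsdb/tsdbutil"
)

func mergeReplicas() ([]tsdbutil.Sample, error) {
	lbls := []string{"__name__", "replicated"} // assumed name/value pair layout
	a := storage.MockSeries([]int64{1000, 2000}, []float64{1, 2}, lbls)
	b := storage.MockSeries([]int64{2000, 3000}, []float64{2, 3}, lbls)

	merged := storage.ChainedSeriesMerge(a, b)
	// Expected result: one sample each at 1000, 2000 and 3000.
	return storage.ExpandSamples(merged.Iterator(), nil)
}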

func MockSeries

func MockSeries(timestamps []int64, values []float64, labelSet []string) Series

MockSeries returns a series with custom timestamps, values and labelSet.

type SeriesEntry

type SeriesEntry struct {
	Lset             labels.Labels
	SampleIteratorFn func() chunkenc.Iterator
}

func NewListSeries

func NewListSeries(lset labels.Labels, s []tsdbutil.Sample) *SeriesEntry

NewListSeries returns a series entry with an iterator over the provided samples.

func (*SeriesEntry) Iterator

func (s *SeriesEntry) Iterator() chunkenc.Iterator

func (*SeriesEntry) Labels

func (s *SeriesEntry) Labels() labels.Labels

type SeriesRef

type SeriesRef uint64

SeriesRef is a generic series reference. In Prometheus it is either a HeadSeriesRef or a BlockSeriesRef, though other implementations may have their own reference types.

type SeriesSet

type SeriesSet interface {
	Next() bool
	// At returns full series. Returned series should be iterable even after Next is called.
	At() Series
	// The error that iteration has failed with.
	// When an error occurs, the set cannot continue to iterate.
	Err() error
	// A collection of warnings for the whole set.
	// Warnings could be returned even if iteration has not failed with an error.
	Warnings() Warnings
}

SeriesSet contains a set of series.

func EmptySeriesSet

func EmptySeriesSet() SeriesSet

EmptySeriesSet returns a series set that's always empty.

func ErrSeriesSet

func ErrSeriesSet(err error) SeriesSet

ErrSeriesSet returns a series set that wraps an error.

func NewMergeSeriesSet

func NewMergeSeriesSet(sets []SeriesSet, mergeFunc VerticalSeriesMergeFunc) SeriesSet

NewMergeSeriesSet returns a new SeriesSet that merges many SeriesSets together.

func NewSeriesSetFromChunkSeriesSet

func NewSeriesSetFromChunkSeriesSet(chk ChunkSeriesSet) SeriesSet

NewSeriesSetFromChunkSeriesSet converts ChunkSeriesSet to SeriesSet by decoding chunks one by one.

func NoopSeriesSet

func NoopSeriesSet() SeriesSet

NoopSeriesSet is a SeriesSet that does nothing.

func TestSeriesSet

func TestSeriesSet(series Series) SeriesSet

TestSeriesSet returns a mock series set.

type Storage

type Storage interface {
	SampleAndChunkQueryable
	Appendable

	// StartTime returns the oldest timestamp stored in the storage.
	StartTime() (int64, error)

	// Close closes the storage and all its underlying resources.
	Close() error
}

Storage ingests and manages samples, along with various indexes. All methods are goroutine-safe. Storage implements storage.Appender.

func NewFanout

func NewFanout(logger log.Logger, primary Storage, secondaries ...Storage) Storage

NewFanout returns a new fanout Storage, which proxies reads and writes through to multiple underlying storages.

The difference between the primary and secondary Storage is only in the read (Querier) path, and it goes as follows:

* If the primary querier returns an error, then all of the Querier operations will fail.
* If any secondary querier returns an error, the result from that querier is discarded. The overall operation will succeed, and the error from the secondary querier will be returned as a warning.

NOTE: In the case of Prometheus, it treats all remote storages as secondary / best effort.
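A sketch of wiring a fanout storage; the primary would typically be the local TSDB and the secondaries remote storages, and log.NewNopLogger is used here only to satisfy the logger argument:

package example

import (
	"github.com/go-kit/log"

	"github.com/prometheus/prometheus/storage"
)

func newFanout(primary storage.Storage, secondaries ...storage.Storage) storage.Storage {
	// Writes go to all storages; reads merge the primary with the secondaries.
	return storage.NewFanout(log.NewNopLogger(), primary, secondaries...)
}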

type VerticalChunkSeriesMergeFunc

type VerticalChunkSeriesMergeFunc func(...ChunkSeries) ChunkSeries

VerticalChunkSeriesMergeFunc returns a merged chunk series implementation that merges potentially time-overlapping chunk series with the same labels into a single ChunkSeries.

NOTE: It's up to the implementation how series are vertically merged (whether chunks are sorted, re-encoded, etc.).

func NewCompactingChunkSeriesMerger

func NewCompactingChunkSeriesMerger(mergeFunc VerticalSeriesMergeFunc) VerticalChunkSeriesMergeFunc

NewCompactingChunkSeriesMerger returns a VerticalChunkSeriesMergeFunc that merges the same chunk series into a single chunk series. In case of chunk overlaps, it compacts them into one or more time-ordered, non-overlapping chunks with merged data. Samples from overlapping chunks are merged using the series vertical merge func. It expects the same labels for each given series.

NOTE: Use the returned merge function only when you see potentially overlapping series, as this introduces a small overhead to handle overlaps between series.
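A sketch of composing the compacting chunk merger with ChainedSeriesMerge (which satisfies VerticalSeriesMergeFunc) to build a merging chunk querier (the primaries and secondaries are assumed to exist):

package example

import "github.com/prometheus/prometheus/storage"

func mergeChunkQueriers(primaries, secondaries []storage.ChunkQuerier) storage.ChunkQuerier {
	mergeFn := storage.NewCompactingChunkSeriesMerger(storage.ChainedSeriesMerge)
	return storage.NewMergeChunkQuerier(primaries, secondaries, mergeFn)
}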

type VerticalSeriesMergeFunc

type VerticalSeriesMergeFunc func(...Series) Series

VerticalSeriesMergeFunc returns a merged series implementation that merges series with the same labels together. It has to handle time-overlapping series as well.

type Warnings

type Warnings []error
