Documentation ¶
Index ¶
- Constants
- func ExtractMetricNameFromMatchers(matchers []*metric.LabelMatcher) (model.LabelValue, []*metric.LabelMatcher, error)
- func ExtractMetricNameFromMetric(m model.Metric) (model.LabelValue, error)
- func FromMetricsForLabelMatchersRequest(req *cortex.MetricsForLabelMatchersRequest) (model.Time, model.Time, []metric.LabelMatchers, error)
- func FromMetricsForLabelMatchersResponse(resp *cortex.MetricsForLabelMatchersResponse) []model.Metric
- func FromQueryRequest(req *cortex.QueryRequest) (model.Time, model.Time, []*metric.LabelMatcher, error)
- func FromQueryResponse(resp *cortex.QueryResponse) model.Matrix
- func FromWriteRequest(req *cortex.WriteRequest) []model.Sample
- func Max64(a, b int64) int64
- func MergeSamples(a, b []model.SamplePair) []model.SamplePair
- func Min(a, b int) int
- func Min64(a, b int64) int64
- func RegisterFlags(rs ...Registerer)
- func SplitFiltersAndMatchers(allMatchers []*metric.LabelMatcher) (filters, matchers []*metric.LabelMatcher)
- func ToMetricsForLabelMatchersRequest(from, to model.Time, matchersSet []metric.LabelMatchers) (*cortex.MetricsForLabelMatchersRequest, error)
- func ToMetricsForLabelMatchersResponse(metrics []model.Metric) *cortex.MetricsForLabelMatchersResponse
- func ToQueryRequest(from, to model.Time, matchers []*metric.LabelMatcher) (*cortex.QueryRequest, error)
- func ToQueryResponse(matrix model.Matrix) *cortex.QueryResponse
- func ToWriteRequest(samples []model.Sample) *cortex.WriteRequest
- func ValidateSample(s *model.Sample) error
- type DayValue
- type Error
- type HashBucketHistogram
- type HashBucketHistogramOpts
- type Op
- type PriorityQueue
- type Registerer
- type URLValue
Constants ¶
const (
	ErrMissingMetricName         = Error("sample missing metric name")
	ErrInvalidMetricName         = Error("sample invalid metric name")
	ErrInvalidLabel              = Error("sample invalid label")
	ErrUserSeriesLimitExceeded   = Error("per-user series limit exceeded")
	ErrMetricSeriesLimitExceeded = Error("per-metric series limit exceeded")
)
Errors returned by Cortex components.
Variables ¶
This section is empty.
Functions ¶
func ExtractMetricNameFromMatchers ¶
func ExtractMetricNameFromMatchers(matchers []*metric.LabelMatcher) (model.LabelValue, []*metric.LabelMatcher, error)
ExtractMetricNameFromMatchers extracts the metric name from a set of matchers.
func ExtractMetricNameFromMetric ¶
func ExtractMetricNameFromMetric(m model.Metric) (model.LabelValue, error)
ExtractMetricNameFromMetric extracts the metric name from a model.Metric.
func FromMetricsForLabelMatchersRequest ¶
func FromMetricsForLabelMatchersRequest(req *cortex.MetricsForLabelMatchersRequest) (model.Time, model.Time, []metric.LabelMatchers, error)
FromMetricsForLabelMatchersRequest unpacks a MetricsForLabelMatchersRequest proto.
func FromMetricsForLabelMatchersResponse ¶
func FromMetricsForLabelMatchersResponse(resp *cortex.MetricsForLabelMatchersResponse) []model.Metric
FromMetricsForLabelMatchersResponse unpacks a MetricsForLabelMatchersResponse proto.
func FromQueryRequest ¶
func FromQueryRequest(req *cortex.QueryRequest) (model.Time, model.Time, []*metric.LabelMatcher, error)
FromQueryRequest unpacks a QueryRequest proto.
func FromQueryResponse ¶
func FromQueryResponse(resp *cortex.QueryResponse) model.Matrix
FromQueryResponse unpacks a QueryResponse proto.
func FromWriteRequest ¶
func FromWriteRequest(req *cortex.WriteRequest) []model.Sample
FromWriteRequest converts a WriteRequest proto into an array of samples.
func MergeSamples ¶
func MergeSamples(a, b []model.SamplePair) []model.SamplePair
MergeSamples merges and dedupes two sets of already sorted sample pairs.
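A minimal usage sketch, assuming this package is imported as util and model is github.com/prometheus/common/model; both inputs must already be sorted by timestamp:

	// a and b are each sorted by timestamp; the pair at t=1000 appears in both.
	a := []model.SamplePair{{Timestamp: 1000, Value: 1}, {Timestamp: 3000, Value: 3}}
	b := []model.SamplePair{{Timestamp: 1000, Value: 1}, {Timestamp: 2000, Value: 2}}
	merged := util.MergeSamples(a, b)
	fmt.Println(len(merged)) // 3: timestamps 1000, 2000, 3000, with the duplicate kept once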
func RegisterFlags ¶
func RegisterFlags(rs ...Registerer)
RegisterFlags registers flags with the provided Registerers.
func SplitFiltersAndMatchers ¶
func SplitFiltersAndMatchers(allMatchers []*metric.LabelMatcher) (filters, matchers []*metric.LabelMatcher)
SplitFiltersAndMatchers splits off empty matchers, which are treated as filters; see #220.
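A hedged usage sketch; it assumes metric is github.com/prometheus/prometheus/storage/metric and that matchers are built with metric.NewLabelMatcher:

	jobMatcher, _ := metric.NewLabelMatcher(metric.Equal, "job", "demo")
	emptyMatcher, _ := metric.NewLabelMatcher(metric.Equal, "instance", "") // empty value
	filters, matchers := util.SplitFiltersAndMatchers([]*metric.LabelMatcher{jobMatcher, emptyMatcher})
	fmt.Println(len(filters), len(matchers)) // 1 1: the empty-value matcher becomes a filter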
func ToMetricsForLabelMatchersRequest ¶
func ToMetricsForLabelMatchersRequest(from, to model.Time, matchersSet []metric.LabelMatchers) (*cortex.MetricsForLabelMatchersRequest, error)
ToMetricsForLabelMatchersRequest builds a MetricsForLabelMatchersRequest proto.
func ToMetricsForLabelMatchersResponse ¶
func ToMetricsForLabelMatchersResponse(metrics []model.Metric) *cortex.MetricsForLabelMatchersResponse
ToMetricsForLabelMatchersResponse builds a MetricsForLabelMatchersResponse proto.
func ToQueryRequest ¶
func ToQueryRequest(from, to model.Time, matchers []*metric.LabelMatcher) (*cortex.QueryRequest, error)
ToQueryRequest builds a QueryRequest proto.
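A sketch of building and unpacking a query request, under the same assumptions about the metric and model packages as above:

	nameMatcher, _ := metric.NewLabelMatcher(metric.Equal, model.MetricNameLabel, "up")
	req, err := util.ToQueryRequest(model.Time(0), model.Now(), []*metric.LabelMatcher{nameMatcher})
	if err != nil {
		// handle err
	}
	from, to, matchers, err := util.FromQueryRequest(req) // round-trips the original values
	fmt.Println(from, to, matchers, err)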
func ToQueryResponse ¶
func ToQueryResponse(matrix model.Matrix) *cortex.QueryResponse
ToQueryResponse builds a QueryResponse proto.
func ToWriteRequest ¶
func ToWriteRequest(samples []model.Sample) *cortex.WriteRequest
ToWriteRequest converts an array of samples into a WriteRequest proto.
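A round-trip sketch with FromWriteRequest, again assuming model is github.com/prometheus/common/model:

	samples := []model.Sample{{
		Metric:    model.Metric{model.MetricNameLabel: "up", "job": "demo"},
		Value:     1,
		Timestamp: model.Now(),
	}}
	req := util.ToWriteRequest(samples)
	back := util.FromWriteRequest(req)
	fmt.Println(len(back)) // 1: equivalent to the original samples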
func ValidateSample ¶
func ValidateSample(s *model.Sample) error
ValidateSample returns an error if the sample is invalid.
Types ¶
type DayValue ¶
DayValue is a model.Time that can be used as a flag. NB it only parses days!
func NewDayValue ¶
NewDayValue makes a new DayValue; it rounds t down to the nearest midnight.
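A hypothetical flag-registration sketch; it assumes DayValue satisfies flag.Value (as "can be used as a flag" suggests), and the flag name is illustrative only:

	var since util.DayValue
	flag.Var(&since, "since", "start day (parsed as a day, rounded down to midnight)")
	flag.Parse()
	// since now holds the parsed day as a model.Time.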
type HashBucketHistogram ¶
type HashBucketHistogram interface {
	prometheus.Metric
	prometheus.Collector
	Observe(string, uint32)
	Stop()
}
HashBucketHistogram is used to track a histogram of per-bucket rates.
For instance, I want to know that 50% of rows are getting X QPS or lower and 99% are getting Y QPS or lower. At first glance, this would involve tracking write rate per row, and periodically sticking those numbers in a histogram. To make this fit in memory, instead of tracking per row we keep N buckets of counters and hash the key to a bucket. Then every second we update a histogram with the bucket values (and zero the buckets).
Note, we want this metric to be relatively independent of the number of hash buckets and QPS of the service - we're trying to measure how well load balanced the write load is. So we normalise the values in the hash buckets such that if all buckets are '1', then we have even load. We do this by multiplying the number of ops per bucket by the number of buckets, and dividing by the number of ops.
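As a concrete illustration of that normalisation (not the package's internal code): with 4 buckets and 100 ops observed in a second, a bucket that saw 25 ops scores 25 * 4 / 100 = 1, i.e. perfectly even load, while a bucket that saw 50 ops scores 2:

	// Hypothetical sketch of the normalisation described above.
	buckets := []uint32{50, 25, 15, 10} // ops per bucket this second
	var total uint32
	for _, n := range buckets {
		total += n
	}
	for i, n := range buckets {
		normalised := float64(n) * float64(len(buckets)) / float64(total)
		fmt.Printf("bucket %d: %.1f\n", i, normalised) // 2.0, 1.0, 0.6, 0.4
	}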
func NewHashBucketHistogram ¶
func NewHashBucketHistogram(opts HashBucketHistogramOpts) HashBucketHistogram
NewHashBucketHistogram makes a new HashBucketHistogram.
type HashBucketHistogramOpts ¶
type HashBucketHistogramOpts struct {
	prometheus.HistogramOpts
	HashBuckets int
}
HashBucketHistogramOpts are the options for making a HashBucketHistogram.
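A construction sketch; the metric name, bucket count, and the interpretation of Observe's second argument are assumptions, and registration relies on the embedded prometheus.Collector:

	h := util.NewHashBucketHistogram(util.HashBucketHistogramOpts{
		HistogramOpts: prometheus.HistogramOpts{
			Name: "row_write_rate", // illustrative name
			Help: "Normalised per-row write rate.",
		},
		HashBuckets: 1024,
	})
	prometheus.MustRegister(h)
	h.Observe("some-row-key", 1) // presumably: hash the key to a bucket and add 1 op
	defer h.Stop()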
type Op ¶
type Op interface {
	Key() string
	Priority() int64 // The larger the number, the higher the priority.
}
Op is an operation on the priority queue.
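A hypothetical Op implementation for illustration; the type and field names are not part of this package:

	type flushOp struct {
		key      string
		priority int64
	}

	func (o flushOp) Key() string     { return o.key }
	func (o flushOp) Priority() int64 { return o.priority }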
type PriorityQueue ¶
type PriorityQueue struct {
	// contains filtered or unexported fields
}
PriorityQueue is a priority queue.
func NewPriorityQueue ¶
func NewPriorityQueue() *PriorityQueue
NewPriorityQueue makes a new priority queue.
func (*PriorityQueue) Close ¶
func (pq *PriorityQueue) Close()
Close signals that the queue is closed. A closed queue will not accept new items.
func (*PriorityQueue) Dequeue ¶
func (pq *PriorityQueue) Dequeue() Op
Dequeue returns the op with the highest priority, blocking if the queue is empty; it returns nil if the queue is closed.
func (*PriorityQueue) Enqueue ¶
func (pq *PriorityQueue) Enqueue(op Op)
Enqueue adds an operation to the queue in priority order. If the operation is already on the queue, it will be ignored.
func (*PriorityQueue) Length ¶
func (pq *PriorityQueue) Length() int
Length returns the length of the queue.
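A usage sketch building on the hypothetical flushOp above; note that Dequeue blocks, so a real consumer typically runs in its own goroutine:

	pq := util.NewPriorityQueue()
	pq.Enqueue(flushOp{key: "series-1", priority: 10})
	pq.Enqueue(flushOp{key: "series-2", priority: 99})
	pq.Enqueue(flushOp{key: "series-1", priority: 10}) // presumably ignored as a duplicate (same Key)
	first := pq.Dequeue()                              // the priority-99 op comes out first
	fmt.Println(first.Key(), pq.Length())              // "series-2", 1
	pq.Close()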
type Registerer ¶
Registerer is a thing that can RegisterFlags.