conf

package v0.13.1

Published: Nov 28, 2019 License: AGPL-3.0

Documentation

Overview

Package conf reads config data from two of carbon's config files:

  * storage-schemas.conf (old and new retention format), see https://graphite.readthedocs.io/en/0.9.9/config-carbon.html#storage-schemas-conf
  * storage-aggregation.conf, see http://graphite.readthedocs.io/en/latest/config-carbon.html#storage-aggregation-conf

as well as our own file, index-rules.conf.

It also adds defaults (the same ones as graphite), so that even if nothing is matched in the user-provided schemas or aggregations, a setting is *always* found. The package uses some modified snippets from github.com/lomik/go-carbon and github.com/lomik/go-whisper.
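For illustration, here is a minimal sketch of loading all three files at startup. The import path and the file locations are assumptions for the example, not something this documentation specifies:

package main

import (
	"fmt"
	"log"

	"github.com/grafana/metrictank/conf" // assumed import path
)

func main() {
	// each Read* function parses the file and appends the built-in default,
	// so the Match calls below can always find a setting
	schemas, err := conf.ReadSchemas("/etc/metrictank/storage-schemas.conf")
	if err != nil {
		log.Fatalf("failed to read storage-schemas.conf: %s", err)
	}
	aggs, err := conf.ReadAggregations("/etc/metrictank/storage-aggregation.conf")
	if err != nil {
		log.Fatalf("failed to read storage-aggregation.conf: %s", err)
	}
	rules, err := conf.ReadIndexRules("/etc/metrictank/index-rules.conf")
	if err != nil {
		log.Fatalf("failed to read index-rules.conf: %s", err)
	}

	// 60 is the interval (in seconds) the metric is currently stored at
	_, schema := schemas.Match("apps.tomcat.requests", 60)
	_, agg := aggs.Match("apps.tomcat.requests")
	_, rule := rules.Match("apps.tomcat.requests")
	fmt.Println(schema.Name, agg.Name, rule.Name)
}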

Index

Constants

const Month_sec = 60 * 60 * 24 * 28 // seconds in a "month" of four weeks (28 days)

Variables

This section is empty.

Functions

This section is empty.

Types

type Aggregation

type Aggregation struct {
	Name              string
	Pattern           *regexp.Regexp
	XFilesFactor      float64
	AggregationMethod []Method
}

type Aggregations

type Aggregations struct {
	Data               []Aggregation
	DefaultAggregation Aggregation
}

Aggregations holds the aggregation definitions

func NewAggregations

func NewAggregations() Aggregations

NewAggregations creates an instance of Aggregations

func ReadAggregations

func ReadAggregations(file string) (Aggregations, error)

ReadAggregations returns the defined aggregations from a storage-aggregation.conf file and adds the default

func (Aggregations) Get

func (a Aggregations) Get(i uint16) Aggregation

Get returns the aggregation setting corresponding to the given index

func (Aggregations) Match

func (a Aggregations) Match(metric string) (uint16, Aggregation)

Match returns the correct aggregation setting for the given metric. It can always find a valid setting, because there's a default catch-all. It also returns the index of the setting, to efficiently reference it.
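A short sketch of the lookup flow (the metric name is made up, and NewAggregations supplies only the default):

package main

import (
	"fmt"

	"github.com/grafana/metrictank/conf" // assumed import path
)

func main() {
	aggs := conf.NewAggregations() // only the graphite-style default

	// Match always succeeds thanks to the default catch-all
	idx, agg := aggs.Match("apps.tomcat.requests")
	fmt.Println(agg.Name, agg.XFilesFactor, agg.AggregationMethod)

	// the returned index can be stored and used for cheap lookups later
	fmt.Println(aggs.Get(idx).Name)
}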

type IndexRule

type IndexRule struct {
	Name     string
	Pattern  *regexp.Regexp
	MaxStale time.Duration
}

type IndexRules

type IndexRules struct {
	Rules   []IndexRule
	Default IndexRule
}

IndexRules holds the index rule definitions

func NewIndexRules

func NewIndexRules() IndexRules

NewIndexRules creates an instance of IndexRules. It has a default catch-all that doesn't prune.

func ReadIndexRules

func ReadIndexRules(file string) (IndexRules, error)

ReadIndexRules returns the defined index rules from an index-rules.conf file and adds the default

func (IndexRules) Cutoffs

func (a IndexRules) Cutoffs(now time.Time) []int64

Cutoffs returns a set of cutoffs corresponding to a given timestamp and the set of all rules

func (IndexRules) Get

func (a IndexRules) Get(i uint16) IndexRule

Get returns the index rule setting corresponding to the given index

func (IndexRules) Match

func (a IndexRules) Match(metric string) (uint16, IndexRule)

Match returns the correct index rule setting for the given metric. It can always find a valid setting, because there's a default catch-all. It also returns the index of the setting, to efficiently reference it.

func (IndexRules) Prunable

func (a IndexRules) Prunable() bool

Prunable returns whether there are any entries that require pruning
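A sketch tying the IndexRules methods together (the metric name is invented for the example):

package main

import (
	"fmt"
	"time"

	"github.com/grafana/metrictank/conf" // assumed import path
)

func main() {
	rules := conf.NewIndexRules() // only the default catch-all, which never prunes

	idx, rule := rules.Match("apps.tomcat.requests")
	fmt.Println(idx, rule.Name, rule.MaxStale)

	// only compute cutoffs if at least one rule actually prunes
	if rules.Prunable() {
		cutoffs := rules.Cutoffs(time.Now()) // one cutoff timestamp per rule
		fmt.Println(cutoffs)
	}
}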

type Method

type Method int

const (
	Avg Method = iota + 1 // average (mean)
	Sum
	Lst // last
	Max
	Min
)

type Retention

type Retention struct {
	SecondsPerPoint int    // interval in seconds
	NumberOfPoints  int    // ~ttl
	ChunkSpan       uint32 // duration of chunk of aggregated metric for storage, controls how many aggregated points go into 1 chunk
	NumChunks       uint32 // number of chunks to keep in memory. remember, for a query from now until 3 months ago, we will end up querying the memory server as well.
	Ready           uint32 // ready for reads for data as of this timestamp (or as of now-TTL, whichever is highest)
}

A retention level.

Retention levels describe a given archive in the database: how detailed it is and how far back it records.

func NewRetention

func NewRetention(secondsPerPoint, numberOfPoints int) Retention

func NewRetentionMT

func NewRetentionMT(secondsPerPoint int, ttl, chunkSpan, numChunks, ready uint32) Retention

func ParseRetentionNew

func ParseRetentionNew(def string) (Retention, error)
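For example, a sketch parsing a single definition in the new frequency:history format (the expected field values are inferred from the format, not quoted from this documentation):

package main

import (
	"fmt"
	"log"

	"github.com/grafana/metrictank/conf" // assumed import path
)

func main() {
	// "1s:1d": one point per second, kept for one day
	ret, err := conf.ParseRetentionNew("1s:1d")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ret.SecondsPerPoint) // expected: 1
	fmt.Println(ret.NumberOfPoints)  // expected: 86400
	fmt.Println(ret.MaxRetention())  // expected: 86400 (seconds)
}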

func (Retention) MaxRetention

func (r Retention) MaxRetention() int

func (Retention) String added in v0.13.1

func (r Retention) String() string

type Retentions

type Retentions struct {
	Orig string
	Rets []Retention
}

func BuildFromRetentions added in v0.13.1

func BuildFromRetentions(rets ...Retention) Retentions

func MustParseRetentions added in v0.13.1

func MustParseRetentions(defs string) Retentions

func ParseRetentions

func ParseRetentions(defs string) (Retentions, error)

ParseRetentions parses retention definitions into a Retentions structure

func (Retentions) Sub added in v0.13.1

func (r Retentions) Sub(pos int) Retentions

Sub returns a "subslice" of Retentions starting at the given pos.

func (Retentions) Validate

func (r Retentions) Validate() error

Validate assures the retentions are sane. As the whisper source code says, an ArchiveList must:

  1. Have at least one archive config. Example: (60, 86400)
  2. No archive may be a duplicate of another.
  3. Higher precision archives' precision must evenly divide all lower precision archives' precision.
  4. Lower precision archives must cover larger time intervals than higher precision archives.
  5. Each archive must have at least enough points to consolidate to the next archive.
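A sketch of parsing and validating (the failing definition is constructed to violate rule 3 above; the exact error behavior is an assumption):

package main

import (
	"fmt"
	"log"

	"github.com/grafana/metrictank/conf" // assumed import path
)

func main() {
	rets, err := conf.ParseRetentions("1s:1d,1m:7d,1h:1y")
	if err != nil {
		log.Fatalf("parse error: %s", err)
	}
	if err := rets.Validate(); err != nil {
		log.Fatalf("invalid retentions: %s", err)
	}
	fmt.Println(rets.Orig, len(rets.Rets)) // the original definition and its 3 levels

	// 7m (420s) does not evenly divide 1h (3600s), so per rule 3
	// this definition should fail validation
	bad := conf.MustParseRetentions("7m:7d,1h:1y")
	fmt.Println(bad.Validate() != nil) // expected: true
}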

type Schema

type Schema struct {
	Name               string
	Pattern            *regexp.Regexp
	Retentions         Retentions
	Priority           int64
	ReorderWindow      uint32
	ReorderAllowUpdate bool
}

Schema represents one schema setting

type SchemaSlice

type SchemaSlice []Schema

func (SchemaSlice) Len

func (s SchemaSlice) Len() int

func (SchemaSlice) Less

func (s SchemaSlice) Less(i, j int) bool

func (SchemaSlice) Swap

func (s SchemaSlice) Swap(i, j int)

type Schemas

type Schemas struct {
	DefaultSchema Schema
	// contains filtered or unexported fields
}

Schemas contains schema settings

func NewSchemas

func NewSchemas(schemas []Schema) Schemas

func ReadSchemas

func ReadSchemas(file string) (Schemas, error)

ReadSchemas reads and parses a storage-schemas.conf file and returns a sorted schemas structure. See https://graphite.readthedocs.io/en/0.9.9/config-carbon.html#storage-schemas-conf

func (*Schemas) BuildIndex

func (s *Schemas) BuildIndex()

func (Schemas) Get

func (s Schemas) Get(i uint16) Schema

Get returns the schema setting corresponding to the given index

func (Schemas) List

func (s Schemas) List() ([]Schema, Schema)

func (Schemas) Match

func (s Schemas) Match(metric string, interval int) (uint16, Schema)

Match returns the correct schema setting for the given metric. It can always find a valid setting, because there's a default catch-all. It also returns the index of the setting, to efficiently reference it.

A schema is just a pattern + retention policy. A retention policy is just a list of retentions. The s.index slice contains a schema for each sublist of an original schema's retentions. So if an original schema had a retention policy of 1s:1d,1m:7d,1h:1y, then 3 schemas would be added to the index with the same pattern as the original but retention policies of "1s:1d,1m:7d,1h:1y", "1m:7d,1h:1y" and "1h:1y".

|---------------------------------------------------------------------|
|     pattern 1     |          pattern 2          |     pattern 3     |
|---------------------------------------------------------------------|
| ret0    | ret1    | ret0    | ret1    | ret2    | ret0    | ret1    |
|---------------------------------------------------------------------|
| schema0 | schema1 | schema2 | schema3 | schema4 | schema5 | schema6 |
|---------------------------------------------------------------------|

When evaluating a match we start with the first schema in the index and compare the regex pattern.

  • If it matches, we then just find the retention set with the best fit. The best fit is when the interval is >= the rawInterval (first retention) and less than the interval of the next rollup.
  • If the pattern doesn't match, we skip ahead to the next pattern.

E.g. from the above diagram: we would compare the pattern for schema0 (pattern 1); if it doesn't match, we then compare the pattern of schema2 (pattern 2), and if that doesn't match, we try schema5 (pattern 3).
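Putting that together, a sketch of matching against a single user-defined schema (assuming NewSchemas appends the default catch-all and builds the index described above):

package main

import (
	"fmt"
	"regexp"

	"github.com/grafana/metrictank/conf" // assumed import path
)

func main() {
	schemas := conf.NewSchemas([]conf.Schema{
		{
			Name:       "apps",
			Pattern:    regexp.MustCompile(`^apps\.`),
			Retentions: conf.MustParseRetentions("1s:1d,1m:7d,1h:1y"),
		},
	})

	// a metric already stored at 60s resolution: 60 >= 60 (the 1m rollup)
	// and 60 < 3600 (the 1h rollup), so the "1m:7d,1h:1y" subslice wins
	id, schema := schemas.Match("apps.tomcat.requests", 60)
	fmt.Println(schema.Name, len(schema.Retentions.Rets)) // expected: apps 2

	// the index id can later be resolved again without re-matching
	fmt.Println(schemas.Get(id).Name)
}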

func (Schemas) MaxChunkSpan

func (schemas Schemas) MaxChunkSpan() uint32

MaxChunkSpan returns the largest chunkspan seen amongst all archives of all schemas

func (Schemas) TTLs

func (schemas Schemas) TTLs() []uint32

TTLs returns a slice of all TTLs seen amongst all archives of all schemas
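Both helpers in one small sketch (passing nil to NewSchemas, to get just the default schema, is an assumption):

package main

import (
	"fmt"

	"github.com/grafana/metrictank/conf" // assumed import path
)

func main() {
	schemas := conf.NewSchemas(nil)

	// useful for sizing chunk caches and write buffers
	fmt.Println(schemas.MaxChunkSpan())

	// e.g. one storage table (or bucket) is typically needed per distinct TTL
	for _, ttl := range schemas.TTLs() {
		fmt.Println(ttl)
	}
}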
