memory

package
v0.12.0
Published: May 14, 2019 License: AGPL-3.0 Imports: 25 Imported by: 0

Documentation

Index

Constants

View Source
const (
	EQUAL      match = iota // =
	NOT_EQUAL               // !=
	MATCH                   // =~        regular expression
	MATCH_TAG               // __tag=~   relies on special key __tag. non-standard, required for `/metrics/tags` requests with "filter"
	NOT_MATCH               // !=~
	PREFIX                  // ^=        exact prefix, not regex. non-standard, required for auto complete of tag values
	PREFIX_TAG              // __tag^=   exact prefix with tag. non-standard, required for auto complete of tag keys
)

Variables

View Source
var (
	Enabled bool

	TagSupport      bool
	TagQueryWorkers int // number of workers to spin up when evaluating tag expressions

	IndexRules  conf.IndexRules
	Partitioned bool
)

Functions

func ConfigProcess

func ConfigProcess()

func ConfigSetup

func ConfigSetup()

Types

type FindCache added in v0.12.0

type FindCache struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

FindCache is a caching layer for the in-memory index. The cache provides per-org LRU caches of patterns and the resulting []*Node slices from searches on the index. Users should call `InvalidateFor(orgId, path)` when entries are added to, or removed from, the index to invalidate any cached patterns that match the path. `invalidateQueueSize` sets the maximum number of invalidations for a specific orgId that can be running at any time. If this number is exceeded, the cache for that orgId is immediately purged and disabled for `backoffTime`. This mechanism protects the instance from excessive resource usage when a large number of new series are added at once.
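
A minimal usage sketch (assuming the package is imported as memory; the sizes, durations, org id, patterns and paths below are illustrative, not recommended settings):

	fc := memory.NewFindCache(1000, 100, 10000, 5*time.Second, time.Minute)

	// cache the nodes returned by a pattern search for org 1
	fc.Add(1, "collectd.*.cpu.user", []*memory.Node{{Path: "collectd.host1.cpu.user"}})

	// later searches for the same org and pattern can be served from the cache
	if nodes, ok := fc.Get(1, "collectd.*.cpu.user"); ok {
		log.Println(len(nodes), "cached nodes")
	}

	// when a series is added or removed, invalidate any cached patterns matching its path
	fc.InvalidateFor(1, "collectd.host2.cpu.user")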

func NewFindCache added in v0.12.0

func NewFindCache(size, invalidateQueueSize, invalidateMaxSize int, invalidateMaxWait, backoffTime time.Duration) *FindCache

func (*FindCache) Add added in v0.12.0

func (c *FindCache) Add(orgId uint32, pattern string, nodes []*Node)

func (*FindCache) Get added in v0.12.0

func (c *FindCache) Get(orgId uint32, pattern string) ([]*Node, bool)

func (*FindCache) InvalidateFor added in v0.12.0

func (c *FindCache) InvalidateFor(orgId uint32, path string)

InvalidateFor removes entries from the cache for 'orgId' that match the provided path. If lots of InvalidateFor calls are made at once and we end up with `invalidateQueueSize` concurrent goroutines processing the invalidations, we purge the cache and disable it for `backoffTime`. Future InvalidateFor calls made during the backoff time will then return immediately.

func (*FindCache) Purge added in v0.12.0

func (c *FindCache) Purge(orgId uint32)

Purge clears the cache for the specified orgId

func (*FindCache) PurgeAll added in v0.12.0

func (c *FindCache) PurgeAll()

PurgeAll clears the caches for all orgIds

func (*FindCache) Shutdown added in v0.12.0

func (c *FindCache) Shutdown()

type IdSet

type IdSet map[schema.MKey]struct{} // set of ids

func (IdSet) String

func (ids IdSet) String() string

type KvByCost

type KvByCost []kv

func (KvByCost) Len

func (a KvByCost) Len() int

func (KvByCost) Less

func (a KvByCost) Less(i, j int) bool

func (KvByCost) Swap

func (a KvByCost) Swap(i, j int)

type KvReByCost

type KvReByCost []kvRe

func (KvReByCost) Len

func (a KvReByCost) Len() int

func (KvReByCost) Less

func (a KvReByCost) Less(i, j int) bool

func (KvReByCost) Swap

func (a KvReByCost) Swap(i, j int)

type MemoryIndex added in v0.12.0

type MemoryIndex interface {
	idx.MetricIndex
	LoadPartition(int32, []schema.MetricDefinition) int
	UpdateArchive(idx.Archive)

	PurgeFindCache()
	ForceInvalidationFindCache()
	// contains filtered or unexported methods
}

MemoryIndex is the interface implemented by both UnpartitionedMemoryIdx and PartitionedMemoryIdx. It is needed to support unit tests.

func New

func New() MemoryIndex
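
A minimal lifecycle sketch, assuming Init and Stop are part of the embedded idx.MetricIndex interface (both concrete index types implement them) and that configuration is handled via ConfigSetup/ConfigProcess before the index is created:

	memory.ConfigSetup()   // set up the package's configuration
	memory.ConfigProcess() // validate the resulting settings

	index := memory.New() // partitioned or unpartitioned, depending on the Partitioned setting
	if err := index.Init(); err != nil {
		log.Fatal(err)
	}
	defer index.Stop()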

type Node

type Node struct {
	Path     string // branch or NameWithTags for leafs
	Children []string
	Defs     []schema.MKey
}

func (*Node) HasChildren

func (n *Node) HasChildren() bool

func (*Node) Leaf

func (n *Node) Leaf() bool

func (*Node) String

func (n *Node) String() string

type PartitionedMemoryIdx added in v0.12.0

type PartitionedMemoryIdx struct {
	Partition map[int32]*UnpartitionedMemoryIdx
}

PartitionedMemoryIdx implements the "MetricIndex" interface.

func NewPartitionedMemoryIdx added in v0.12.0

func NewPartitionedMemoryIdx() *PartitionedMemoryIdx

func (*PartitionedMemoryIdx) AddOrUpdate added in v0.12.0

func (p *PartitionedMemoryIdx) AddOrUpdate(mkey schema.MKey, data *schema.MetricData, partition int32) (idx.Archive, int32, bool)

AddOrUpdate makes sure a metric is known in the index, and should be called for every received metric.

func (*PartitionedMemoryIdx) Delete added in v0.12.0

func (p *PartitionedMemoryIdx) Delete(orgId uint32, pattern string) ([]idx.Archive, error)

Delete deletes items from the index. If the pattern matches a branch node, then all leaf nodes on that branch are deleted. So if the pattern is "*", all items in the index are deleted. It returns a copy of all of the Archives deleted.
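
A hedged sketch, assuming p is an initialized *PartitionedMemoryIdx and the org id and pattern are illustrative:

	deleted, err := p.Delete(1, "collectd.host1.cpu.*")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("removed %d archives from the index", len(deleted))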

func (*PartitionedMemoryIdx) DeleteTagged added in v0.12.0

func (p *PartitionedMemoryIdx) DeleteTagged(orgId uint32, paths []string) ([]idx.Archive, error)

DeleteTagged deletes the specified series from the tag index and also the DefById index.

func (*PartitionedMemoryIdx) Find added in v0.12.0

func (p *PartitionedMemoryIdx) Find(orgId uint32, pattern string, from int64) ([]idx.Node, error)

Find searches the index for matching nodes.

  • orgId describes the org to search in (public data in orgIdPublic is automatically included)
  • pattern is handled like graphite does. see https://graphite.readthedocs.io/en/latest/render_api.html#paths-and-wildcards
  • from is a unix timestamp. series not updated since then are excluded.
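
A minimal sketch, assuming p is an initialized index; from=0 disables the LastUpdate filter, the pattern is illustrative, and idx.Node is assumed to expose the matched Path:

	nodes, err := p.Find(1, "collectd.*.cpu.user", 0)
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes {
		log.Println(n.Path) // Path field assumed from the sibling idx package
	}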

func (*PartitionedMemoryIdx) FindByTag added in v0.12.0

func (p *PartitionedMemoryIdx) FindByTag(orgId uint32, expressions []string, from int64) ([]idx.Node, error)

FindByTag takes a list of expressions in the format key<operator>value. The allowed operators are: =, !=, =~, !=~. It returns a slice of Node structs that match the given conditions; the conditions are logically AND-ed. If the third argument is > 0 then the results will be filtered and only those where the LastUpdate time is >= from will be returned. The returned results are not deduplicated and in certain cases it is possible that duplicate entries will be returned.
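
A hedged sketch, assuming p is an initialized index; the expressions are illustrative and are AND-ed together as described above:

	// series whose name starts with "cpu." and whose datacenter tag equals "us-east"
	nodes, err := p.FindByTag(1, []string{"name=~cpu\\..*", "datacenter=us-east"}, 0)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%d matching series", len(nodes))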

func (*PartitionedMemoryIdx) FindTagValues added in v0.12.0

func (p *PartitionedMemoryIdx) FindTagValues(orgId uint32, tag string, prefix string, expressions []string, from int64, limit uint) ([]string, error)

FindTagValues generates a list of possible values that could complete a given value prefix. It requires a tag to be specified and only values of the given tag will be returned. It also accepts additional conditions to further narrow down the result set in the format of graphite's tag queries

func (*PartitionedMemoryIdx) FindTags added in v0.12.0

func (p *PartitionedMemoryIdx) FindTags(orgId uint32, prefix string, expressions []string, from int64, limit uint) ([]string, error)

FindTags generates a list of possible tags that could complete a given prefix. It also accepts additional tag conditions to further narrow down the result set in the format of graphite's tag queries

func (*PartitionedMemoryIdx) ForceInvalidationFindCache added in v0.12.0

func (p *PartitionedMemoryIdx) ForceInvalidationFindCache()

ForceInvalidationFindCache forces a full invalidation cycle of the find cache

func (*PartitionedMemoryIdx) Get added in v0.12.0

Get returns the archive for the requested id.

func (*PartitionedMemoryIdx) GetPath added in v0.12.0

func (p *PartitionedMemoryIdx) GetPath(orgId uint32, path string) []idx.Archive

GetPath returns the archives under the given path.

func (*PartitionedMemoryIdx) Init added in v0.12.0

func (p *PartitionedMemoryIdx) Init() error

Init initializes the index at startup and blocks until the index is ready for use.

func (*PartitionedMemoryIdx) List added in v0.12.0

func (p *PartitionedMemoryIdx) List(orgId uint32) []idx.Archive

List returns all Archives for the passed OrgId and the public orgId

func (*PartitionedMemoryIdx) LoadPartition added in v0.12.0

func (p *PartitionedMemoryIdx) LoadPartition(partition int32, defs []schema.MetricDefinition) int

Used to rebuild the index from an existing set of metricDefinitions.

func (*PartitionedMemoryIdx) MetaTagRecordList added in v0.12.0

func (p *PartitionedMemoryIdx) MetaTagRecordList(orgId uint32) []idx.MetaTagRecord

func (*PartitionedMemoryIdx) MetaTagRecordUpsert added in v0.12.0

func (p *PartitionedMemoryIdx) MetaTagRecordUpsert(orgId uint32, rawRecord idx.MetaTagRecord) (idx.MetaTagRecord, bool, error)

func (*PartitionedMemoryIdx) Prune added in v0.12.0

func (p *PartitionedMemoryIdx) Prune(oldest time.Time) ([]idx.Archive, error)

Prune deletes all metrics that haven't been seen since the given timestamp. It returns all Archives deleted and any error encountered.

func (*PartitionedMemoryIdx) PurgeFindCache added in v0.12.0

func (p *PartitionedMemoryIdx) PurgeFindCache()

PurgeFindCache purges the findCaches for all orgIds across all partitions

func (*PartitionedMemoryIdx) Stop added in v0.12.0

func (p *PartitionedMemoryIdx) Stop()

Stop shuts down the index.

func (*PartitionedMemoryIdx) TagDetails added in v0.12.0

func (p *PartitionedMemoryIdx) TagDetails(orgId uint32, key string, filter string, from int64) (map[string]uint64, error)

TagDetails returns a list of all values associated with a given tag key in the given org. The number of occurrences of each value is counted, and the returned map maps each value to its count. If the third parameter is not "" it will be used as a regular expression to filter the values before counting them. If the fourth parameter is > 0 then only metrics whose LastUpdate time is >= the from timestamp will be included.
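
A minimal sketch, assuming p is an initialized index; the tag key "datacenter" is illustrative and the empty filter disables regex filtering:

	counts, err := p.TagDetails(1, "datacenter", "", 0)
	if err != nil {
		log.Fatal(err)
	}
	for value, n := range counts {
		log.Printf("datacenter=%s is used by %d series", value, n)
	}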

func (*PartitionedMemoryIdx) Tags added in v0.12.0

func (p *PartitionedMemoryIdx) Tags(orgId uint32, filter string, from int64) ([]string, error)

Tags returns a list of all tag keys associated with the metrics of a given organization. The return values are filtered by the regex in the second parameter. If the third parameter is > 0 then only metrics whose LastUpdate time is >= the given value are taken into account.

func (*PartitionedMemoryIdx) Update added in v0.12.0

func (p *PartitionedMemoryIdx) Update(point schema.MetricPoint, partition int32) (idx.Archive, int32, bool)

Update updates an existing archive, if found. It returns whether it was found, and - if so - the (updated) existing archive and its old partition

func (*PartitionedMemoryIdx) UpdateArchive added in v0.12.0

func (p *PartitionedMemoryIdx) UpdateArchive(archive idx.Archive)

UpdateArchive updates the archive information

type TagIndex

type TagIndex map[string]TagValue // key -> list of values

type TagQuery

type TagQuery struct {
	// contains filtered or unexported fields
}

TagQuery runs a set of pattern or string matches on tag keys and values against the index. It is executed via Run(), which returns a set of matching MetricIDs, or RunGetTags(), which returns a list of tags of the matching metrics.

func NewTagQuery

func NewTagQuery(expressions []string, from int64) (TagQuery, error)

func (*TagQuery) Run

func (q *TagQuery) Run(index TagIndex, byId map[schema.MKey]*idx.Archive) IdSet

Run executes the tag query on the given index and returns a list of ids
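
A minimal sketch, assuming the expressions are illustrative and that the TagIndex and byId map (normally maintained internally by UnpartitionedMemoryIdx) are built elsewhere; empty ones are used here just to show the call shape:

	q, err := memory.NewTagQuery([]string{"name=~cpu\\..*", "datacenter=us-east"}, 0)
	if err != nil {
		log.Fatal(err)
	}

	tagIdx := memory.TagIndex{}            // key -> value -> set of ids
	byId := map[schema.MKey]*idx.Archive{} // id -> archive
	ids := q.Run(tagIdx, byId)
	log.Println("matching ids:", ids.String())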

func (*TagQuery) RunGetTags

func (q *TagQuery) RunGetTags(index TagIndex, byId map[schema.MKey]*idx.Archive) map[string]struct{}

RunGetTags executes the tag query and returns all the tags of the resulting metrics

type TagValue

type TagValue map[string]IdSet // value -> set of ids

type TimeLimiter

type TimeLimiter struct {
	// contains filtered or unexported fields
}

TimeLimiter limits the rate of a set of serial operations. It does this by tracking how much time has been spent (updated via Add()) and comparing this to the window size and the limit, slowing down further operations as soon as an Add() call reports that the per-window budget has been exceeded.

Limitations:

  • the last operation is allowed to exceed the budget (but the next call will be delayed to compensate)
  • concurrency is not supported

For correctness, you should always follow up an Add() with a Wait()
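
A minimal sketch of the intended Add/Wait pattern, assuming a budget of 100ms of work per 1s window; the sleep stands in for a real serial operation:

	limiter := memory.NewTimeLimiter(time.Second, 100*time.Millisecond, time.Now())
	for i := 0; i < 10; i++ {
		start := time.Now()
		time.Sleep(30 * time.Millisecond) // stand-in for one unit of real work
		limiter.Add(time.Since(start))    // record the time spent
		limiter.Wait()                    // sleeps once the per-window budget is exceeded
	}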

func NewTimeLimiter

func NewTimeLimiter(window, limit time.Duration, now time.Time) *TimeLimiter

NewTimeLimiter creates a new TimeLimiter. limit must be <= window.

func (*TimeLimiter) Add

func (l *TimeLimiter) Add(d time.Duration)

Add increments the "time spent" counter by "d"

func (*TimeLimiter) Wait

func (l *TimeLimiter) Wait()

Wait returns when we are not rate limited

  • if we passed the window, we reset everything (this is only safe for callers that behave correctly, i.e. that wait the instructed time after each add)
  • if limit is not reached, no sleep is needed
  • if limit has been exceeded, sleep until the next period plus an extra multiple to compensate. This is perhaps best explained with an example: if the window is 1s and the limit 100ms, but we spent 250ms, then we effectively spent 2.5 seconds worth of allowed work. Say we are 800ms into the 1s window; we should then sleep 2500-800 = 1700ms in order to maximize work while honoring the imposed limit.
  • if limit has been met exactly, sleep until next period (this is a special case of the above)

type Tree

type Tree struct {
	Items map[string]*Node // key is the full path of the node.
}

type UnpartitionedMemoryIdx added in v0.12.0

type UnpartitionedMemoryIdx struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

func NewUnpartitionedMemoryIdx added in v0.12.0

func NewUnpartitionedMemoryIdx() *UnpartitionedMemoryIdx

func (*UnpartitionedMemoryIdx) AddOrUpdate added in v0.12.0

func (m *UnpartitionedMemoryIdx) AddOrUpdate(mkey schema.MKey, data *schema.MetricData, partition int32) (idx.Archive, int32, bool)

AddOrUpdate returns the corresponding Archive for the MetricData. If the metric already exists, it updates lastUpdate (based on .Time) and the partition. If it is new, it adds a new MetricDefinition to the index.
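
A hedged ingestion sketch; the ingest helper and its parameters are illustrative, and it assumes the MKey, MetricData and partition are produced earlier in the ingestion pipeline:

	// ingest records one received metric in the index m.
	func ingest(m *memory.UnpartitionedMemoryIdx, mkey schema.MKey, md *schema.MetricData, partition int32) {
		arch, oldPartition, created := m.AddOrUpdate(mkey, md, partition)
		_ = arch // the stored (or updated) idx.Archive for this metric
		if created {
			log.Println("new series added to the index")
		} else {
			log.Printf("existing series updated, previously in partition %d", oldPartition)
		}
	}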

func (*UnpartitionedMemoryIdx) Delete added in v0.12.0

func (m *UnpartitionedMemoryIdx) Delete(orgId uint32, pattern string) ([]idx.Archive, error)

func (*UnpartitionedMemoryIdx) DeleteTagged added in v0.12.0

func (m *UnpartitionedMemoryIdx) DeleteTagged(orgId uint32, paths []string) ([]idx.Archive, error)

func (*UnpartitionedMemoryIdx) Find added in v0.12.0

func (m *UnpartitionedMemoryIdx) Find(orgId uint32, pattern string, from int64) ([]idx.Node, error)

func (*UnpartitionedMemoryIdx) FindByTag added in v0.12.0

func (m *UnpartitionedMemoryIdx) FindByTag(orgId uint32, expressions []string, from int64) ([]idx.Node, error)

func (*UnpartitionedMemoryIdx) FindTagValues added in v0.12.0

func (m *UnpartitionedMemoryIdx) FindTagValues(orgId uint32, tag, prefix string, expressions []string, from int64, limit uint) ([]string, error)

FindTagValues returns tag values matching the specified conditions.

  tag:         tag key match
  prefix:      value prefix match
  expressions: tagdb expressions in the same format as graphite
  from:        tags must have at least one metric with LastUpdate >= from
  limit:       the maximum number of results to return

The results will always be sorted alphabetically for consistency.
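
A minimal autocomplete sketch, assuming m is an initialized index; the tag key, prefix and expression are illustrative:

	values, err := m.FindTagValues(1, "datacenter", "us-", []string{"name=~cpu\\..*"}, 0, 25)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("completions:", values) // datacenter values starting with "us-", sorted alphabetically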

func (*UnpartitionedMemoryIdx) FindTags added in v0.12.0

func (m *UnpartitionedMemoryIdx) FindTags(orgId uint32, prefix string, expressions []string, from int64, limit uint) ([]string, error)

FindTags returns tags matching the specified conditions.

  prefix:      prefix match
  expressions: tagdb expressions in the same format as graphite
  from:        tags must have at least one metric with LastUpdate >= from
  limit:       the maximum number of results to return

The results will always be sorted alphabetically for consistency.

func (*UnpartitionedMemoryIdx) ForceInvalidationFindCache added in v0.12.0

func (m *UnpartitionedMemoryIdx) ForceInvalidationFindCache()

ForceInvalidationFindCache forces a full invalidation cycle of the find cache

func (*UnpartitionedMemoryIdx) Get added in v0.12.0

func (*UnpartitionedMemoryIdx) GetPath added in v0.12.0

func (m *UnpartitionedMemoryIdx) GetPath(orgId uint32, path string) []idx.Archive

GetPath returns the archives under the given org and path. This is an alternative to Find for when you have a path, not a pattern, and want to look up in a specific org tree only.

func (*UnpartitionedMemoryIdx) Init added in v0.12.0

func (m *UnpartitionedMemoryIdx) Init() error

func (*UnpartitionedMemoryIdx) List added in v0.12.0

func (m *UnpartitionedMemoryIdx) List(orgId uint32) []idx.Archive

func (*UnpartitionedMemoryIdx) Load added in v0.12.0

Used to rebuild the index from an existing set of metricDefinitions.

func (*UnpartitionedMemoryIdx) LoadPartition added in v0.12.0

func (m *UnpartitionedMemoryIdx) LoadPartition(partition int32, defs []schema.MetricDefinition) int

Used to rebuild the index from an existing set of metricDefinitions for a specific partition.

func (*UnpartitionedMemoryIdx) MetaTagRecordList added in v0.12.0

func (m *UnpartitionedMemoryIdx) MetaTagRecordList(orgId uint32) []idx.MetaTagRecord

func (*UnpartitionedMemoryIdx) MetaTagRecordUpsert added in v0.12.0

func (m *UnpartitionedMemoryIdx) MetaTagRecordUpsert(orgId uint32, rawRecord idx.MetaTagRecord) (idx.MetaTagRecord, bool, error)

MetaTagRecordUpsert inserts or updates a meta record, depending on whether it already exists or is new. The identity of a record is determined by its queries; if the set of queries in the given record already exists in another record, then the existing record will be updated, otherwise a new one gets created. The return values are:

  1) the relevant meta record as it is after this operation
  2) a bool that is true if the record has been created, or false if it was updated
  3) an error which is nil if no error has occurred

func (*UnpartitionedMemoryIdx) Prune added in v0.12.0

func (m *UnpartitionedMemoryIdx) Prune(now time.Time) ([]idx.Archive, error)

Prune prunes series from the index if they have become stale per their index-rule

func (*UnpartitionedMemoryIdx) PurgeFindCache added in v0.12.0

func (m *UnpartitionedMemoryIdx) PurgeFindCache()

PurgeFindCache purges the findCaches for all orgIds

func (*UnpartitionedMemoryIdx) Stop added in v0.12.0

func (m *UnpartitionedMemoryIdx) Stop()

func (*UnpartitionedMemoryIdx) TagDetails added in v0.12.0

func (m *UnpartitionedMemoryIdx) TagDetails(orgId uint32, key, filter string, from int64) (map[string]uint64, error)

func (*UnpartitionedMemoryIdx) Tags added in v0.12.0

func (m *UnpartitionedMemoryIdx) Tags(orgId uint32, filter string, from int64) ([]string, error)

Tags returns a list of all tag keys associated with the metrics of a given organization. The return values are filtered by the regex in the second parameter. If the third parameter is > 0 then only metrics whose LastUpdate time is >= the given value are taken into account.

func (*UnpartitionedMemoryIdx) Update added in v0.12.0

func (m *UnpartitionedMemoryIdx) Update(point schema.MetricPoint, partition int32) (idx.Archive, int32, bool)

Update updates an existing archive, if found. It returns whether it was found, and - if so - the (updated) existing archive and its old partition

func (*UnpartitionedMemoryIdx) UpdateArchive added in v0.12.0

func (m *UnpartitionedMemoryIdx) UpdateArchive(archive idx.Archive)

UpdateArchive updates the archive information
