metricsext

package
v0.0.0-...-0b529bf
Published: Sep 11, 2019 License: Apache-2.0 Imports: 6 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Counter

func Counter(a metrics.BaseRegistry, metricName string, dimensions map[string]string) metrics.Observer

Counter returns an observer set with the metric type counter
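
For example, a minimal sketch of counting events against a registry. The import paths are placeholders, reg is assumed to be any metrics.BaseRegistry obtained elsewhere, and the Observer is assumed to expose an Observe(float64) method (its method set lives in the companion metrics package, not on this page):

package example

import (
	"example.com/placeholder/metrics"    // placeholder import path
	"example.com/placeholder/metricsext" // placeholder import path
)

// recordRequest sketches counting one request against an existing registry.
func recordRequest(reg metrics.BaseRegistry) {
	// Counter attaches counter-type metadata to the created time series.
	requests := metricsext.Counter(reg, "http.requests", map[string]string{
		"handler": "index",
	})
	// Observe(float64) is assumed to be the Observer's reporting method.
	requests.Observe(1)
}

Float and Gauge below follow the same shape; only the metric-type metadata attached to the time series differs.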

func CustomAggregation

CustomAggregation creates an aggregation source from an object that can collect metrics on demand

func Float

func Float(a metrics.BaseRegistry, metricName string, dimensions map[string]string) metrics.Observer

Float simply returns an observer for a time series with no special metadata

func Gauge

func Gauge(a metrics.BaseRegistry, metricName string, dimensions map[string]string) metrics.Observer

Gauge returns an observer set with the metric type Gauge

func GetRollups

func GetRollups(tsm metrics.TimeSeriesMetadata) [][]string

GetRollups returns existing rollups from time series metadata

func RegistryWithRollup

func RegistryWithRollup(a metrics.BaseRegistry, rollup []string) metrics.BaseRegistry

RegistryWithRollup wraps a registry with a metric constructor that adds rollup to all time series created from it
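
As a sketch (placeholder import paths; reg is any metrics.BaseRegistry), wrapping a registry so every time series it creates carries a host rollup, and reading rollups back from metadata with GetRollups:

package example

import (
	"example.com/placeholder/metrics"    // placeholder import path
	"example.com/placeholder/metricsext" // placeholder import path
)

// rollupByHost wraps reg so every time series created from the wrapped
// registry also carries a ["host"] rollup.
func rollupByHost(reg metrics.BaseRegistry) metrics.BaseRegistry {
	return metricsext.RegistryWithRollup(reg, []string{"host"})
}

// rollupsOf reads the rollups previously attached to a time series' metadata.
func rollupsOf(tsm metrics.TimeSeriesMetadata) [][]string {
	return metricsext.GetRollups(tsm)
}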

func SingleValue

func SingleValue(value float64) metrics.TimeWindowAggregation

SingleValue helps create a TimeWindowAggregation of a single value at the current timestamp

func WithDimensions

func WithDimensions(a metrics.BaseRegistry, dimensions map[string]string) metrics.BaseRegistry

WithDimensions wraps a registry with a registry that adds default dimensions to created time series
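
A short sketch (placeholder import paths) of setting default dimensions once so every time series created from the wrapped registry carries them:

package example

import (
	"example.com/placeholder/metrics"    // placeholder import path
	"example.com/placeholder/metricsext" // placeholder import path
)

// withServiceDimensions adds default dimensions to everything created from
// the returned registry.
func withServiceDimensions(reg metrics.BaseRegistry) metrics.BaseRegistry {
	return metricsext.WithDimensions(reg, map[string]string{
		"service": "checkout",
		"env":     "prod",
	})
}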

func WithMetadata

WithMetadata wraps a registry with a metadata constructor for all time series

func WithRollup

func WithRollup(rollup []string) metrics.MetadataConstructor

WithRollup creates a metadata constructor that appends rollups to a time series' metadata

Types

type AggregationFlusher

type AggregationFlusher struct {
	Source metrics.AggregationSource
	Sink   metrics.AggregationSink
}

AggregationFlusher implements the Flushable interface to send metrics from a source to a sink

func (*AggregationFlusher) Flush

func (f *AggregationFlusher) Flush(ctx context.Context) error

Flush sends all metrics from Source into Sink
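
A sketch (placeholder import paths) of pushing everything a source currently holds into a sink with a single Flush call:

package example

import (
	"context"

	"example.com/placeholder/metrics"    // placeholder import path
	"example.com/placeholder/metricsext" // placeholder import path
)

// flushOnce sends whatever src currently reports into sink.
func flushOnce(ctx context.Context, src metrics.AggregationSource, sink metrics.AggregationSink) error {
	f := &metricsext.AggregationFlusher{
		Source: src,
		Sink:   sink,
	}
	return f.Flush(ctx)
}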

type BufferConfig

type BufferConfig struct {
	BufferSize           int
	OnDroppedAggregation func(error, []metrics.TimeSeriesAggregation)
	FlushTimeout         time.Duration
	BlockOnAggregate     bool
}

BufferConfig configures a Buffer

type BufferedSink

type BufferedSink struct {
	Destination metrics.AggregationSink
	Config      BufferConfig
	// contains filtered or unexported fields
}

BufferedSink buffers aggregations before they are sent to a sink

func (*BufferedSink) Aggregate

func (b *BufferedSink) Aggregate(ctx context.Context, aggs []metrics.TimeSeriesAggregation) error

Aggregate adds aggregations to a channel and then returns. By default it does not block on the context.

func (*BufferedSink) Close

func (b *BufferedSink) Close() error

Close stops the goroutine that drains aggregations from the buffer

func (*BufferedSink) Start

func (b *BufferedSink) Start() error

Start blocks until Close is called, draining aggregations from the buffer into the Destination sink
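
A sketch of the Start/Aggregate/Close lifecycle (placeholder import paths). Start blocks, so it runs in its own goroutine here; whether the zero value needs any further setup before Start is not stated on this page:

package example

import (
	"context"
	"time"

	"example.com/placeholder/metrics"    // placeholder import path
	"example.com/placeholder/metricsext" // placeholder import path
)

// bufferedForward enqueues aggregations for asynchronous delivery to dest.
func bufferedForward(ctx context.Context, dest metrics.AggregationSink, aggs []metrics.TimeSeriesAggregation) error {
	b := &metricsext.BufferedSink{
		Destination: dest,
		Config: metricsext.BufferConfig{
			BufferSize:   1024,
			FlushTimeout: time.Second,
			OnDroppedAggregation: func(err error, dropped []metrics.TimeSeriesAggregation) {
				// dropped aggregations could be counted or logged here
			},
		},
	}
	go func() {
		// Start drains the buffer into Destination until Close is called.
		_ = b.Start()
	}()
	defer b.Close()

	// Aggregate enqueues onto the buffer and returns without blocking by default.
	return b.Aggregate(ctx, aggs)
}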

type CompressionSink

type CompressionSink struct {
	Sink metrics.AggregationSink
}

CompressionSink can combine two aggregations into a single aggregation if they match on both time series and time window.

func (*CompressionSink) Aggregate

Aggregate combines multiple TimeSeriesAggregation values into a single one if they match on both time series and time window

type DurationObserver

type DurationObserver struct {
	// contains filtered or unexported fields
}

DurationObserver wraps an observer to allow reporting durations, rather than raw float64 values

func Duration

func Duration(a metrics.BaseRegistry, metricName string, dimensions map[string]string) *DurationObserver

Duration is similar to Float, but attaches unit metadata of "seconds" to the time series

func (*DurationObserver) Observe

func (t *DurationObserver) Observe(d time.Duration)

Observe reports the duration to the wrapped observer as a value in seconds
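
A sketch (placeholder import paths) of timing a piece of work and reporting the elapsed time through a DurationObserver:

package example

import (
	"time"

	"example.com/placeholder/metrics"    // placeholder import path
	"example.com/placeholder/metricsext" // placeholder import path
)

// timeQuery reports how long work took; Observe converts the duration to
// seconds before passing it to the wrapped observer.
func timeQuery(reg metrics.BaseRegistry, work func()) {
	queryTime := metricsext.Duration(reg, "db.query.duration", map[string]string{
		"query": "select_user",
	})
	start := time.Now()
	work()
	queryTime.Observe(time.Since(start))
}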

type Flushable

type Flushable interface {
	Flush(ctx context.Context) error
}

Flushable is anything that can be "flushed" in some periodic fashion. Used by PeriodicFlusher

type LocklessValueAggregator

type LocklessValueAggregator struct {
	Bucketer metrics.Bucketer
	// contains filtered or unexported fields
}

LocklessValueAggregator observes values without any locking (it is not thread safe)

func (*LocklessValueAggregator) Aggregate

Aggregate returns an aggregation of all the observed values

func (*LocklessValueAggregator) Observe

func (a *LocklessValueAggregator) Observe(value float64)

Observe adds a value to this aggregator (and any bucketer it has)
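
A sketch (placeholder import paths) of feeding values into a LocklessValueAggregator from a single goroutine, since it is not thread safe. The exact signature of its Aggregate method is not shown on this page, so it is only referenced in a comment here:

package example

import (
	"example.com/placeholder/metrics"    // placeholder import path
	"example.com/placeholder/metricsext" // placeholder import path
)

// observeBatch records a batch of values; all calls come from one goroutine.
func observeBatch(bucketer metrics.Bucketer, values []float64) *metricsext.LocklessValueAggregator {
	a := &metricsext.LocklessValueAggregator{
		Bucketer: bucketer,
	}
	for _, v := range values {
		a.Observe(v) // also feeds the bucketer, if one is set
	}
	// a.Aggregate would then return an aggregation of the observed values.
	return a
}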

type Logger

type Logger interface {
	Log(kvs ...interface{})
}

Logger is used to log generic key/value pairs of information

type MultiSink

type MultiSink struct {
	// contains filtered or unexported fields
}

MultiSink sends aggregations to multiple sinks. It is thread safe.

func (*MultiSink) AddSink

func (m *MultiSink) AddSink(s metrics.AggregationSink)

AddSink includes a sink for aggregation

func (*MultiSink) Aggregate

func (m *MultiSink) Aggregate(ctx context.Context, aggs []metrics.TimeSeriesAggregation) error

Aggregate sends metrics to each added sink

func (*MultiSink) RemoveSink

func (m *MultiSink) RemoveSink(s metrics.AggregationSink)

RemoveSink removes a sink from aggregation
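
A sketch (placeholder import paths) of fanning the same aggregations out to several sinks. It assumes the zero value of MultiSink is ready to use, which this page does not state:

package example

import (
	"context"

	"example.com/placeholder/metrics"    // placeholder import path
	"example.com/placeholder/metricsext" // placeholder import path
)

// fanOut sends aggs to every added sink.
func fanOut(ctx context.Context, primary, audit metrics.AggregationSink, aggs []metrics.TimeSeriesAggregation) error {
	var m metricsext.MultiSink
	m.AddSink(primary)
	m.AddSink(audit)
	return m.Aggregate(ctx, aggs)
}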

type MultiSource

type MultiSource struct {
	// contains filtered or unexported fields
}

MultiSource is an aggregation source that pulls metrics from multiple places. It is thread safe

func (*MultiSource) AddSource

func (m *MultiSource) AddSource(s metrics.AggregationSource)

AddSource includes a source for flushing

func (*MultiSource) FlushMetrics

func (m *MultiSource) FlushMetrics() []metrics.TimeSeriesAggregation

FlushMetrics returns metrics from all sources

func (*MultiSource) RemoveSource

func (m *MultiSource) RemoveSource(s metrics.AggregationSource)

RemoveSource removes a source, if it exists, from the flushable set

type PeriodicFlusher

type PeriodicFlusher struct {
	Flushable Flushable

	// Optional
	FlushTimeout time.Duration
	Interval     time.Duration
	Logger       Logger
	// TimeTicker returns a channel that receives the time on each interval tick; the returned func() is the
	// cleanup function for that ticker.  Uses time.Ticker by default
	TimeTicker func(time.Duration) (<-chan time.Time, func())
	// contains filtered or unexported fields
}

PeriodicFlusher calls the wrapped Flushable's Flush method at a regular interval

func (*PeriodicFlusher) Close

func (f *PeriodicFlusher) Close() error

Close stops the flushing calls

func (*PeriodicFlusher) Start

func (f *PeriodicFlusher) Start() error

Start blocks until Close is called, invoking the Flushable's Flush every Interval
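
A sketch (placeholder import path) of flushing something, for example an AggregationFlusher, every ten seconds. Start blocks, so it runs in its own goroutine; the returned function stops it:

package example

import (
	"time"

	"example.com/placeholder/metricsext" // placeholder import path
)

// flushEvery10s starts periodic flushing and returns the stop function.
func flushEvery10s(f metricsext.Flushable) func() error {
	p := &metricsext.PeriodicFlusher{
		Flushable: f,
		Interval:  10 * time.Second,
	}
	go func() {
		_ = p.Start()
	}()
	return p.Close
}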

type RollingAggregation

type RollingAggregation struct {
	// Default does not use buckets
	AggregatorFactory func() metrics.ValueAggregator

	// Default is 1 minute
	BucketSize time.Duration
	// Default is time.Now
	Now func() time.Time
	// contains filtered or unexported fields
}

RollingAggregation aggregates data into rolling time buckets

func (*RollingAggregation) CollectMetrics

func (t *RollingAggregation) CollectMetrics() []metrics.TimeWindowAggregation

CollectMetrics returns an aggregation for data in any previous time window bucket

func (*RollingAggregation) Observe

func (t *RollingAggregation) Observe(value float64)

Observe puts this value in a bucket for the current time
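
A sketch (placeholder import paths) of bucketing observations into 30 second windows and later collecting aggregations for windows that have already closed:

package example

import (
	"time"

	"example.com/placeholder/metrics"    // placeholder import path
	"example.com/placeholder/metricsext" // placeholder import path
)

// rollingWindow records values into the current time bucket and returns
// aggregations for any previous buckets.
func rollingWindow(values []float64) []metrics.TimeWindowAggregation {
	r := &metricsext.RollingAggregation{
		BucketSize: 30 * time.Second, // default would be one minute
	}
	for _, v := range values {
		r.Observe(v)
	}
	return r.CollectMetrics()
}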

type RollupSink

type RollupSink struct {
	Sink metrics.AggregationSink
	// Note: You probably want metrics.Registry explicitly here.  You probably don't want a wrapped registry, or you
	//       may *add back* the dimensions you're trying to remove when you make the new time series.
	//       I actually thought about making this *metrics.Registry instead ...
	TSSource metrics.TimeSeriesSource
}

RollupSink understands "rollup" concepts on time series and expands TimeSeriesAggregation values into their rollups

func (*RollupSink) Aggregate

func (r *RollupSink) Aggregate(ctx context.Context, aggs []metrics.TimeSeriesAggregation) error

Aggregate expands every aggregation into its rollups and sends the results to the wrapped Sink
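
A sketch (placeholder import paths) of wrapping a sink so aggregations are expanded into their rollups before being forwarded. Per the field note above, TSSource should usually be a plain metrics.Registry rather than a wrapped registry:

package example

import (
	"example.com/placeholder/metrics"    // placeholder import path
	"example.com/placeholder/metricsext" // placeholder import path
)

// expandRollups wraps sink so every aggregation is expanded into its rollups.
func expandRollups(sink metrics.AggregationSink, ts metrics.TimeSeriesSource) *metricsext.RollupSink {
	return &metricsext.RollupSink{
		Sink:     sink,
		TSSource: ts, // usually a plain registry, per the field note above
	}
}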
