stats

A Go package for abstracting stats collection.

Installation

go get github.com/segmentio/stats/v5

Migration to v5

Version 4 of the stats package introduced a new way of producing metrics: the program defines struct types with tags on certain fields that describe how to interpret the values. This approach makes metric production much more efficient, since the program only needs to assign or increment struct fields to set the values to be reported and can then submit them all with a single call to the stats engine, which is orders of magnitude faster. Here's an example:

type funcMetrics struct {
    calls struct {
        count int           `metric:"count" type:"counter"`
        time  time.Duration `metric:"time"  type:"histogram"`
    } `metric:"func.calls"`
}

t := time.Now()
f()
callTime := time.Since(t)

m := &funcMetrics{}
m.calls.count = 1
m.calls.time = callTime

// Equivalent to:
//
//   stats.Incr("func.calls.count")
//   stats.Observe("func.calls.time", callTime)
//
stats.Report(m)

To avoid greatly increasing the complexity of the codebase, some old APIs were removed in favor of this new approach; others were transformed to provide more flexibility and take advantage of new features.

The stats package used to support only float values. Metrics can now be of various numeric types (see stats.MakeMeasures for a detailed description), so functions like stats.Add now accept an interface{} value instead of a float64. stats.ObserveDuration was also removed, since the new approach makes it obsolete: durations can be passed to stats.Observe directly.
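
For illustration, a short sketch of the current API (the metric names here are made up, and a client is assumed to be registered as shown in the Quick Start below):

stats.Add("bytes.in", 512)                              // counters accept integer values
stats.Observe("request.latency", 120*time.Millisecond)  // durations go straight to Observe
stats.Set("queue.depth", 42)                            // gauges take any supported numeric type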

The stats.Engine type used to be configured through a configuration object passed to its constructor function, and a few methods (like Register) were exposed to mutate engine instances. This required synchronization to make it safe to modify an engine from multiple goroutines. We haven't had a use case for modifying an engine after creating it, so the thread-safety constraint was lifted and the fields are now exposed directly on the stats.Engine struct type to communicate that they are unsafe to modify concurrently. The helper methods remain, though, to make migration of existing code smoother.

Histogram buckets (mostly used by the prometheus client) are now defined by default on the stats.Buckets global variable instead of within the engine. This decoupling avoids paying the cost of histogram bucket lookups when producing metrics for backends that don't use them (like datadog or influxdb, for example).
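
For example, a package can register its buckets in an init function. A hedged sketch, assuming the key is the dotted "measure.field" name and using illustrative bucket values:

func init() {
    stats.Buckets.Set("func.calls.time",
        1*time.Millisecond,
        10*time.Millisecond,
        100*time.Millisecond,
        1*time.Second,
    )
}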

The data model also changed a little. Handlers for metrics produced by an engine now accept a list of measures instead of single metrics, each measure being made of a name, a set of fields, and tags applied to each of those fields. This allows a more generic and more efficient approach to metric production, better fits the influxdb data model, and remains compatible with the other clients (datadog, prometheus, ...). A single timeseries is usually identified by the combination of the measure name, a field name and value, and the set of tags set on that measure. Refer to each client for details about how measures are translated to individual metrics.
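
To illustrate the model, a handler might receive a measure roughly like the following (a hedged sketch; the name, fields, and tag are made up):

stats.Measure{
    Name: "myapp.func.calls",
    Fields: []stats.Field{
        stats.MakeField("count", 1, stats.Counter),
        stats.MakeField("time", 120*time.Millisecond, stats.Histogram),
    },
    Tags: []stats.Tag{stats.T("service", "api")},
}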

Note that no changes were made to the end metrics being produced by each sub-package (httpstats, procstats, ...). This was important as we must keep the behavior backward compatible since making changes here would implicitly break dashboards or monitors set on the various metric collection systems that this package supports, potentially causing production issues.

If you find a bug, or an API is not available anymore but deserves to be ported, feel free to open an issue.

Quick Start

Engine

A core concept of the stats package is the Engine. Every program importing the package gets a default engine where all metrics produced are aggregated. The program then has to instantiate clients that will consume from the engine at regular time intervals and report the state of the engine to metrics collection platforms.

package main

import (
    "github.com/segmentio/stats/v5"
    "github.com/segmentio/stats/v5/datadog"
)

func main() {
    // Creates a new datadog client publishing metrics to localhost:8125
    dd := datadog.NewClient("localhost:8125")

    // Register the client so it receives metrics from the default engine.
    stats.Register(dd)

    // Flush the default stats engine on return to ensure all buffered
    // metrics are sent to the dogstatsd server.
    defer stats.Flush()

    // That's it! Metrics produced by the application will now be reported!
    // ...
}

Metrics

package main

import (
    "github.com/segmentio/stats/v5"
    "github.com/segmentio/stats/v5/datadog"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // Increment counters.
    stats.Incr("user.login")
    defer stats.Incr("user.logout")

    // Set a tag on a counter increment.
    stats.Incr("user.login", stats.Tag{"user", "luke"})

    // ...
}

Flushing Metrics

Metrics are stored in a buffer, which will be flushed when it reaches its capacity. For most use-cases, you do not need to explicitly send out metrics.

If you're producing metrics only very infrequently, you may have metrics that stay in the buffer and never get sent out. In that case, you can manually trigger stats flushes like so:

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // Force a metrics flush every second
    go func() {
      for range time.Tick(time.Second) {
        stats.Flush()
      }
    }()

    // ...
}

Monitoring

Processes

🚧 Go metrics reported with the procstats package were previously tagged with a version label that reported the Go runtime version. This label was renamed to go_version in v5.6.0.

The github.com/segmentio/stats/procstats package exposes an API for creating a statistics collector on local processes. Statistics are collected for the current process and metrics including Goroutine count and memory usage are reported.

Here's an example of how to use the collector:

package main

import (
    "github.com/segmentio/stats/v5"
    "github.com/segmentio/stats/v5/datadog"
    "github.com/segmentio/stats/v5/procstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // Start a new collector for the current process, reporting Go metrics.
    c := procstats.StartCollector(procstats.NewGoMetrics())

    // Gracefully stops stats collection.
    defer c.Close()

    // ...
}

One can also collect additional statistics on resource delays, such as CPU delays, block I/O delays, and paging/swapping delays. This capability is currently only available on Linux, and can be optionally enabled as follows:

func main() {
    // As above...

    // Start a new collector for the current process, reporting Go metrics.
    c := procstats.StartCollector(procstats.NewDelayMetrics())
    defer c.Close()
}

HTTP Servers

The github.com/segmentio/stats/httpstats package exposes a decorator of http.Handler that automatically adds metric collection to an HTTP handler, reporting things like request processing time, error counters, and header and body sizes.

Here's an example of how to use the decorator:

package main

import (
    "net/http"

    "github.com/segmentio/stats/v5"
    "github.com/segmentio/stats/v5/datadog"
    "github.com/segmentio/stats/v5/httpstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // ...

    http.ListenAndServe(":8080", httpstats.NewHandler(
        http.HandlerFunc(func(res http.ResponseWriter, req *http.Request) {
            // This HTTP handler is automatically reporting metrics for all
            // requests it handles.
            // ...
        }),
    ))
}

HTTP Clients

The github.com/segmentio/stats/httpstats package exposes a decorator of http.RoundTripper which collects and reports metrics for client requests the same way it's done on the server side.

Here's an example of how to use the decorator:

package main

import (
    "net/http"

    "github.com/segmentio/stats/v5"
    "github.com/segmentio/stats/v5/datadog"
    "github.com/segmentio/stats/v5/httpstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // Make a new HTTP client with a transport that will report HTTP metrics
    // to the default engine.
    httpc := &http.Client{
        Transport: httpstats.NewTransport(
            &http.Transport{},
        ),
    }

    // ...
}

You can also modify the default HTTP client to automatically get metrics for all packages using it, which is very convenient for getting insights into dependencies.

package main

import (
    "net/http"

    "github.com/segmentio/stats/v5"
    "github.com/segmentio/stats/v5/datadog"
    "github.com/segmentio/stats/v5/httpstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    // Wraps the default HTTP client's transport.
    http.DefaultClient.Transport = httpstats.NewTransport(http.DefaultClient.Transport)

    // ...
}

Redis

The github.com/segmentio/stats/redisstats package exposes decorators for the github.com/segmentio/redis-go client transport and server handler, which collect and report metrics for Redis requests.

Here's an example of how to use the decorator on the client side:

package main

import (
    "github.com/segmentio/redis-go"
    "github.com/segmentio/stats/v5"
    "github.com/segmentio/stats/v5/datadog"
    "github.com/segmentio/stats/v5/redisstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    client := redis.Client{
        Addr:      "127.0.0.1:6379",
        Transport: redisstats.NewTransport(&redis.Transport{}),
    }

    // ...
}

And on the server side:

package main

import (
    "github.com/segmentio/redis-go"
    "github.com/segmentio/stats/v5"
    "github.com/segmentio/stats/v5/datadog"
    "github.com/segmentio/stats/v5/redisstats"
)

func main() {
    stats.Register(datadog.NewClient("localhost:8125"))
    defer stats.Flush()

    handler := redis.HandlerFunc(func(res redis.ResponseWriter, req *redis.Request) {
      // Implement handler function here
    })

    server := redis.Server{
        Handler: redisstats.NewHandler(&handler),
    }

    server.ListenAndServe()

    // ...
}

Documentation

Overview

Package stats exposes tools for producing application performance metrics to various metric collection backends.

Constants

This section is empty.

Variables

var Buckets = HistogramBuckets{}

Buckets is a registry where histogram buckets are placed. Some metric collection backends need to have histogram buckets defined by the program (like Prometheus), a common pattern is to use the init function of a package to register buckets for the various histograms that it produces.

var DefaultEngine = NewEngine(progname(), Discard)

DefaultEngine is the engine used by global helper functions.

var Discard = &discard{}

Discard is a handler that doesn't do anything with the measures it receives.

Functions

func Add

func Add(name string, value interface{}, tags ...Tag)

Add increments by value the counter identified by name and tags.

func AddAt

func AddAt(time time.Time, name string, value interface{}, tags ...Tag)

AddAt increments by value the counter identified by name and tags.

func ContextAddTags

func ContextAddTags(ctx context.Context, tags ...Tag) bool

ContextAddTags adds the given tags to the given context, if the tags have been set on any of the ancestor contexts. ContextAddTags returns true if tags were successfully appended to the context, and false otherwise.

The proper way to set tags on a context if you don't know whether or not tags already exist on the context is to first call ContextAddTags, and if that returns false, then call ContextWithTags instead.
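
A minimal sketch of that pattern (the tagContext function and the endpoint tag are hypothetical):

func tagContext(ctx context.Context) context.Context {
    tags := []stats.Tag{stats.T("endpoint", "/users")}
    if !stats.ContextAddTags(ctx, tags...) {
        // No ancestor context carried tags yet; create a child context that does.
        ctx = stats.ContextWithTags(ctx, tags...)
    }
    return ctx
}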

func ContextWithTags

func ContextWithTags(ctx context.Context, tags ...Tag) context.Context

ContextWithTags returns a new child context with the given tags. If the parent context already has tags set on it, they are _not_ propagated into the child context.

func Flush

func Flush()

Flush flushes the default engine.

func Incr

func Incr(name string, tags ...Tag)

Incr increments by one the counter identified by name and tags.

func IncrAt

func IncrAt(time time.Time, name string, tags ...Tag)

IncrAt increments by one the counter identified by name and tags.

func Observe

func Observe(name string, value interface{}, tags ...Tag)

Observe reports value for the histogram identified by name and tags.

func ObserveAt

func ObserveAt(time time.Time, name string, value interface{}, tags ...Tag)

ObserveAt reports value for the histogram identified by name and tags.

func Register

func Register(handler Handler)

Register adds handler to the default engine.

func Report

func Report(metrics interface{}, tags ...Tag)

Report is a helper function that delegates to DefaultEngine.

func ReportAt

func ReportAt(time time.Time, metrics interface{}, tags ...Tag)

ReportAt is a helper function that delegates to DefaultEngine.

func Set

func Set(name string, value interface{}, tags ...Tag)

Set sets to value the gauge identified by name and tags.

func SetAt

func SetAt(time time.Time, name string, value interface{}, tags ...Tag)

SetAt sets to value the gauge identified by name and tags.

func TagsAreSorted

func TagsAreSorted(tags []Tag) bool

TagsAreSorted returns true if the given list of tags is sorted by tag name, false otherwise.

Types

type Buffer

type Buffer struct {
	// Target size of the memory buffer where metrics are serialized.
	//
	// If left to zero, a size of 1024 bytes is used as default (this is low,
	// you should set this value).
	//
	// Note that if the buffer size is small, the program may generate metrics
	// that don't fit into the configured buffer size. In that case the buffer
	// will still pass the serialized byte slice to its Serializer to leave the
	// decision of accepting or rejecting the metrics.
	BufferSize int

	// Size of the internal buffer pool, this controls how well the buffer
	// performs in highly concurrent environments. If unset, 2 x GOMAXPROCS
	// is used as a default value.
	BufferPoolSize int

	// The Serializer used to write the measures.
	//
	// This field cannot be nil.
	Serializer Serializer
	// contains filtered or unexported fields
}

Buffer is the implementation of a measure handler which uses a Serializer to serialize metrics into a memory buffer and writes them once the buffer has reached a target size.
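
A minimal sketch of wiring a Buffer to a custom Serializer (the textSerializer type and the "myapp" prefix are made up for illustration):

package main

import (
    "fmt"
    "os"
    "time"

    "github.com/segmentio/stats/v5"
)

// textSerializer renders each measure as one line of text and writes
// buffered output to stdout.
type textSerializer struct{}

func (textSerializer) Write(b []byte) (int, error) { return os.Stdout.Write(b) }

func (textSerializer) AppendMeasures(b []byte, t time.Time, measures ...stats.Measure) []byte {
    for _, m := range measures {
        b = append(b, fmt.Sprintf("%s %s\n", t.Format(time.RFC3339), m)...)
    }
    return b
}

func main() {
    buf := &stats.Buffer{
        BufferSize: 8192, // flush once ~8 KiB of serialized measures accumulate
        Serializer: textSerializer{},
    }

    eng := stats.NewEngine("myapp", buf)
    eng.Incr("boot")
    buf.Flush() // force out anything still buffered
}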

func (*Buffer) Flush

func (b *Buffer) Flush()

Flush satisfies the Flusher interface.

func (*Buffer) HandleMeasures

func (b *Buffer) HandleMeasures(time time.Time, measures ...Measure)

HandleMeasures satisfies the Handler interface.

type Clock

type Clock struct {
	// contains filtered or unexported fields
}

The Clock type can be used to report statistics on durations.

Clocks are useful to measure the duration taken by sequential execution steps and therefore aren't safe to be used concurrently by multiple goroutines.

Note: Clock times are reported to datadog in seconds. See `stats/datadog/measure.go`.
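
A minimal sketch of timing sequential steps with a Clock (step1 and step2 are hypothetical functions):

c := stats.DefaultEngine.Clock("job.run")
step1()
c.Stamp("step1") // time since the clock was created, tagged stamp=step1
step2()
c.Stamp("step2") // time since the previous Stamp, tagged stamp=step2
c.Stop()         // total time since creation, tagged stamp=total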

func (*Clock) Stamp

func (c *Clock) Stamp(name string)

Stamp reports the time difference between now and the last time the method was called (or since the clock was created).

The metric produced by this method call will have a "stamp" tag set to name.

func (*Clock) StampAt

func (c *Clock) StampAt(name string, now time.Time)

StampAt reports the time difference between now and the last time the method was called (or since the clock was created).

The metric produced by this method call will have a "stamp" tag set to name.

func (*Clock) Stop

func (c *Clock) Stop()

Stop reports the time difference between now and the time the clock was created at.

The metric produced by this method call will have a "stamp" tag set to "total".

func (*Clock) StopAt

func (c *Clock) StopAt(now time.Time)

StopAt reports the time difference between now and the time the clock was created at.

The metric produced by this method call will have a "stamp" tag set to "total".

type Engine

type Engine struct {
	// The measure handler that the engine forwards measures to.
	Handler Handler

	// A prefix set on all metric names produced by the engine.
	Prefix string

	// A list of tags set on all metrics produced by the engine.
	//
	// The list of tags has to be sorted. This is automatically managed by the
	// helper methods WithPrefix, WithTags and the NewEngine function. A program
	// that manipulates this field directly has to respect this requirement.
	Tags []Tag

	// Indicates whether to allow duplicated tags from the tags list before sending.
	// This option is turned off by default, ensuring that duplicate tags are removed.
	// Turn it on if you need to send the same tag multiple times with different values,
	// which is a special use case.
	AllowDuplicateTags bool
	// contains filtered or unexported fields
}

An Engine carries the context for producing metrics. It is configured by setting the exported fields or by using the helper methods to create sub-engines that inherit the configuration of the base they were created from.

The program must not modify the engine's handler, prefix, or tags after it has started using it. If changes need to be made, new engines must be created by calls to WithPrefix or WithTags.
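
A small sketch of deriving a sub-engine instead of mutating an existing one (the prefix and tag are illustrative):

dbStats := stats.WithPrefix("db", stats.T("driver", "postgres"))
dbStats.Incr("connections.opened") // carries the "db" prefix and the driver tag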

func NewEngine

func NewEngine(prefix string, handler Handler, tags ...Tag) *Engine

NewEngine creates and returns a new engine configured with prefix, handler, and tags.

func WithPrefix

func WithPrefix(prefix string, tags ...Tag) *Engine

WithPrefix returns a copy of the default engine with prefix appended to the default engine's current prefix and tags set to the merge of the default engine's current tags and those passed as arguments. Both the default engine and the returned engine share the same handler.

func WithTags

func WithTags(tags ...Tag) *Engine

WithTags returns a copy of the engine with tags set to the merge of the default engine's current tags and those passed as arguments. Both the default engine and the returned engine share the same handler.

func (*Engine) Add

func (eng *Engine) Add(name string, value interface{}, tags ...Tag)

Add increments by value the counter identified by name and tags.

func (*Engine) AddAt

func (eng *Engine) AddAt(t time.Time, name string, value interface{}, tags ...Tag)

AddAt increments by value the counter identified by name and tags.

func (*Engine) Clock

func (eng *Engine) Clock(name string, tags ...Tag) *Clock

Clock returns a new clock identified by name and tags.

func (*Engine) ClockAt

func (eng *Engine) ClockAt(name string, start time.Time, tags ...Tag) *Clock

ClockAt returns a new clock identified by name and tags with a specified start time.

func (*Engine) Flush

func (eng *Engine) Flush()

Flush flushes eng's handler (if it implements the Flusher interface).

func (*Engine) Incr

func (eng *Engine) Incr(name string, tags ...Tag)

Incr increments by one the counter identified by name and tags.

func (*Engine) IncrAt

func (eng *Engine) IncrAt(time time.Time, name string, tags ...Tag)

IncrAt increments by one the counter identified by name and tags.

func (*Engine) Observe

func (eng *Engine) Observe(name string, value interface{}, tags ...Tag)

Observe reports value for the histogram identified by name and tags.

func (*Engine) ObserveAt

func (eng *Engine) ObserveAt(t time.Time, name string, value interface{}, tags ...Tag)

ObserveAt reports value for the histogram identified by name and tags.

func (*Engine) Register

func (eng *Engine) Register(handler Handler)

Register adds handler to eng.

func (*Engine) Report

func (eng *Engine) Report(metrics interface{}, tags ...Tag)

Report calls ReportAt with time.Now() as first argument.

func (*Engine) ReportAt

func (eng *Engine) ReportAt(time time.Time, metrics interface{}, tags ...Tag)

ReportAt reports a set of metrics for a given time. The metrics must be a struct, a pointer to a struct, or a slice or array of one of those. See MakeMeasures for details about how to make struct types exposing metrics.

func (*Engine) Set

func (eng *Engine) Set(name string, value interface{}, tags ...Tag)

Set sets to value the gauge identified by name and tags.

func (*Engine) SetAt

func (eng *Engine) SetAt(t time.Time, name string, value interface{}, tags ...Tag)

SetAt sets to value the gauge identified by name and tags.

func (*Engine) WithPrefix

func (eng *Engine) WithPrefix(prefix string, tags ...Tag) *Engine

WithPrefix returns a copy of the engine with prefix appended to eng's current prefix and tags set to the merge of eng's current tags and those passed as argument. Both eng and the returned engine share the same handler.

func (*Engine) WithTags

func (eng *Engine) WithTags(tags ...Tag) *Engine

WithTags returns a copy of the engine with tags set to the merge of eng's current tags and those passed as arguments. Both eng and the returned engine share the same handler.

type Field

type Field struct {
	Name  string
	Value Value
}

A Field is a key/value type that represents a single metric in a Measure.

func MakeField

func MakeField(name string, value interface{}, ftype FieldType) Field

MakeField constructs and returns a new Field from name, value, and ftype.

func (Field) String

func (f Field) String() string

func (Field) Type

func (f Field) Type() FieldType

Type returns the type of f.

type FieldType

type FieldType int32

FieldType is an enumeration of the different metric types that may be set on a Field value.

const (
	// Counter represents incrementing counter metrics.
	Counter FieldType = iota

	// Gauge represents metrics that snapshot a value that may increase and
	// decrease.
	Gauge

	// Histogram represents metrics to observe the distribution of values.
	Histogram
)

func (FieldType) GoString

func (t FieldType) GoString() string

GoString returns a string representation of the FieldType.

func (FieldType) String

func (t FieldType) String() string

type Flusher

type Flusher interface {
	Flush()
}

Flusher is an interface implemented by measure handlers in order to flush any buffered data.

type Handler

type Handler interface {
	// HandleMeasures is called by the Engine on which the handler was set
	// whenever new measures are produced by the program. The first argument
	// is the time at which the measures were taken.
	//
	// The method must treat the list of measures as read-only values, and
	// must not retain pointers to any of the measures or their sub-fields
	// after returning.
	HandleMeasures(time time.Time, measures ...Measure)
}

The Handler interface is implemented by types that produce measures to various metric collection backends.

func FilteredHandler

func FilteredHandler(h Handler, filter func([]Measure) []Measure) Handler

FilteredHandler constructs a Handler that processes Measures with `filter` before forwarding to `h`.

func MultiHandler

func MultiHandler(handlers ...Handler) Handler

MultiHandler constructs a handler which dispatches measures to all given handlers.

type HandlerFunc

type HandlerFunc func(time.Time, ...Measure)

HandlerFunc is a type alias making it possible to use simple functions as measure handlers.

func (HandlerFunc) HandleMeasures

func (f HandlerFunc) HandleMeasures(time time.Time, measures ...Measure)

HandleMeasures calls f, satisfies the Handler interface.
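
For example, a HandlerFunc can be used to log every measure it receives (a hedged sketch; it assumes the log and time packages are imported):

logHandler := stats.HandlerFunc(func(t time.Time, measures ...stats.Measure) {
    for _, m := range measures {
        log.Printf("%s %s", t.Format(time.RFC3339Nano), m)
    }
})
stats.Register(logHandler)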

type HistogramBuckets

type HistogramBuckets map[Key][]Value

HistogramBuckets is a map type storing histogram buckets.

func (HistogramBuckets) Set

func (b HistogramBuckets) Set(key string, buckets ...interface{})

Set sets a set of buckets to the given list of sorted values.

type Key

type Key struct {
	Measure string
	Field   string
}

Key is a type used to uniquely identify metrics.

type Measure

type Measure struct {
	Name   string
	Fields []Field
	Tags   []Tag
}

Measure is a type that represents a single measure made by the application. Measures are identified by a name, a set of fields that define what has been instrumented, and a set of tags representing different dimensions of the measure.

Implementations of the Handler interface receive lists of measures produced by the application, and assume the tags will be sorted.

func MakeMeasures

func MakeMeasures(prefix string, value interface{}, tags ...Tag) []Measure

MakeMeasures takes a struct value or a pointer to a struct value as argument, and extracts and returns the list of measures that it represents.

The rules for converting values to measures are:

  1. All fields exposing a 'metric' tag are expected to be of type bool, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64, uintptr, float32, float64, or time.Duration, and represent fields of the measures. The struct fields may also define a 'type' tag with a value of "counter", "gauge" or "histogram" to tune the behavior of the measure handlers.

  2. All fields exposing a 'tag' tag are expected to be of type string and represent tags of the measures.

  3. All struct fields are searched recursively for fields matching rule (1) and (2). Tags found within a struct are inherited by measures generated from sub-fields, they may also be overwritten.
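
A hedged sketch of these rules in action (the struct, prefix, and values are illustrative):

type queryMetrics struct {
    engine string `tag:"engine"` // tag inherited by the measures below

    queries struct {
        count int           `metric:"count" type:"counter"`
        time  time.Duration `metric:"time"  type:"histogram"`
    } `metric:"db.queries"`
}

m := queryMetrics{engine: "postgres"}
m.queries.count = 1
m.queries.time = 3 * time.Millisecond

measures := stats.MakeMeasures("myapp", m)
// Roughly: one measure named after the prefix and "db.queries", with fields
// count=1 and time=3ms, tagged engine=postgres.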

func (Measure) Clone

func (m Measure) Clone() Measure

Clone creates and returns a deep copy of m. The original and returned values do not share any pointers to mutable types (but may share string values, for example).

func (Measure) String

func (m Measure) String() string

type Serializer

type Serializer interface {
	io.Writer

	// Appends the serialized representation of the given measures into b.
	//
	// The method must not retain any of the arguments.
	AppendMeasures(b []byte, time time.Time, measures ...Measure) []byte
}

The Serializer interface is used to abstract the logic of serializing measures.

type Tag

type Tag struct {
	Name  string
	Value string
}

A Tag is a pair of a string key and value set on measures to define the dimensions of the metrics.

func ContextTags

func ContextTags(ctx context.Context) []Tag

ContextTags returns a copy of the tags on the context if they exist and nil if they don't exist.

func M

func M(m map[string]string) []Tag

M allows for creating a tag list from a map.

func SortTags

func SortTags(tags []Tag) []Tag

SortTags sorts and deduplicates tags in-place, favoring later elements whenever a tag name duplicate occurs. The returned slice may be shorter than the input due to the elimination of duplicates.

func T

func T(k, v string) Tag

T is shorthand for stats.Tag{Name: k, Value: v}; it returns the tag with Name k and Value v.

func (Tag) String

func (t Tag) String() string

type Type

type Type int32

Type is an int32 type used to denote a value's underlying type.

const (
	Null Type = iota
	Bool
	Int
	Uint
	Float
	Duration
	Invalid
)

Underlying Types.

func (Type) GoString

func (t Type) GoString() string

GoString implements the GoStringer interface.

func (Type) String

func (t Type) String() string

String returns the string representation of a type.

type Value

type Value struct {
	// contains filtered or unexported fields
}

Value is a wrapper type which is used to encapsulate underlying types (nil, bool, int, uintptr, float) in a single pseudo-generic type.

func MustValueOf

func MustValueOf(v Value) Value

MustValueOf asserts that v's underlying Type is valid, otherwise it panics.

func ValueOf

func ValueOf(v interface{}) Value

ValueOf inspects v's underlying type and returns a Value which encapsulates this type. If the underlying type of v is not supported by Value's encapsulation its Type() will return stats.Invalid.
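
For example (a small sketch of the expected behavior):

v := stats.ValueOf(3 * time.Second)
// v.Type() == stats.Duration, v.Duration() == 3*time.Second

x := stats.ValueOf(struct{}{})
// x.Type() == stats.Invalid, since struct{} is not a supported underlying type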

func (Value) Bool

func (v Value) Bool() bool

Bool returns a new bool representation of this Value.

func (Value) Duration

func (v Value) Duration() time.Duration

Duration returns a new time.Duration representation of this Value.

func (Value) Float

func (v Value) Float() float64

Float returns a new float64 representation of this Value.

func (Value) Int

func (v Value) Int() int64

Int returns a new int64 representation of this Value.

func (Value) Interface

func (v Value) Interface() interface{}

Interface returns a new interface{} representation of this value. It panics if the underlying Type is unsupported.

func (Value) String

func (v Value) String() string

String returns a string representation of the underlying value.

func (Value) Type

func (v Value) Type() Type

Type returns the Type of this value.

func (Value) Uint

func (v Value) Uint() uint64

Uint returns a uint64 representation of this Value.
