Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
Types ¶
type Aggregation ¶
type Aggregation struct {
	Env        string
	Resource   string
	Service    string
	Type       string
	Hostname   string
	StatusCode uint32
	Version    string
	Synthetics bool
}
Aggregation contains all the dimensions on which we aggregate statistics. When adding or removing fields to Aggregation, the methods ToTagSet, KeyLen and WriteKey should always be updated accordingly.
func NewAggregationFromSpan ¶
func NewAggregationFromSpan(s *pb.Span, env string, agentHostname string) Aggregation
NewAggregationFromSpan creates a new aggregation from the provided span and env.
type Concentrator ¶
type Concentrator struct {
	In  chan []Input
	Out chan pb.StatsPayload
	// contains filtered or unexported fields
}
Concentrator produces time-bucketed statistics from a stream of raw traces. https://en.wikipedia.org/wiki/Knelson_concentrator It ingests a very large volume of traces and outputs pre-computed data structures that make it possible to find the gold (stats) amongst the traces.
func NewConcentrator ¶
func NewConcentrator(conf *config.AgentConfig, out chan pb.StatsPayload, now time.Time) *Concentrator
NewConcentrator initializes a new concentrator ready to be started.
func (*Concentrator) Add ¶
func (c *Concentrator) Add(inputs []Input)
Add applies the given input to the concentrator.
func (*Concentrator) Flush ¶
func (c *Concentrator) Flush() pb.StatsPayload
Flush deletes and returns complete statistic buckets.
func (*Concentrator) Run ¶
func (c *Concentrator) Run()
Run runs the main loop of the concentrator goroutine. Traces are received through `Add`; this loop only deals with flushing.
type Input ¶
type Input struct {
	Trace WeightedTrace
	Env   string
}
Input contains input for the concentrator.
type PayloadKey ¶
type PayloadKey struct {
// contains filtered or unexported fields
}
PayloadKey uniquely identifies a ClientStatsPayload inside a StatsPayload.
type RawBucket ¶
type RawBucket struct {
// contains filtered or unexported fields
}
RawBucket is used to compute span data and aggregate it within a time-framed bucket. It should not be used outside the agent; use ClientStatsBucket instead.
func NewRawBucket ¶
NewRawBucket opens a new calculation bucket for time ts and initializes it properly.
func (*RawBucket) Export ¶
func (sb *RawBucket) Export() map[PayloadKey]pb.ClientStatsBucket
Export transforms a RawBucket into a ClientStatsBucket, typically used before communicating data to the API, as RawBucket is the internal type while ClientStatsBucket is the public, shared one.
func (*RawBucket) HandleSpan ¶
func (sb *RawBucket) HandleSpan(s *WeightedSpan, env string, agentHostname string)
HandleSpan adds the span to this bucket's stats, aggregated with the finest grain matching the given aggregators.
type WeightedSpan ¶
type WeightedSpan struct {
	Weight   float64 // Span weight. Similar to the trace root.Weight().
	TopLevel bool    // Is this span a service top-level or not. Similar to span.TopLevel().
	Measured bool    // Is this span marked for metrics computation.
	*pb.Span
}
WeightedSpan extends Span to contain weights required by the Concentrator.
type WeightedTrace ¶
type WeightedTrace []*WeightedSpan
WeightedTrace is a slice of WeightedSpan pointers.
func NewWeightedTrace ¶
func NewWeightedTrace(trace pb.Trace, root *pb.Span) WeightedTrace
NewWeightedTrace returns a weighted trace, with the coefficients required by the concentrator.