Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Aggregation ¶
type Aggregation struct {
	BucketsAggregationKey
	PayloadAggregationKey
}
Aggregation contains all the dimensions on which we aggregate statistics.
func NewAggregationFromGroup ¶
func NewAggregationFromGroup(g pb.ClientGroupedStats) Aggregation
NewAggregationFromGroup gets the Aggregation key of grouped stats.
func NewAggregationFromSpan ¶
func NewAggregationFromSpan(s *pb.Span, origin, env, hostname, containerID string) Aggregation
NewAggregationFromSpan creates a new aggregation from the provided span and its metadata (origin, env, hostname, container ID).
type BucketsAggregationKey ¶
type BucketsAggregationKey struct {
	Service    string
	Name       string
	Resource   string
	Type       string
	StatusCode uint32
	Synthetics bool
}
BucketsAggregationKey specifies the key by which a bucket is aggregated.
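Because every field of BucketsAggregationKey is comparable, the struct can be used directly as a Go map key, which is what makes bucket aggregation by key straightforward. A minimal sketch of that idea, where the `countHits` helper and the sample data are hypothetical and not part of this package:

```go
package main

import "fmt"

// Mirrors the exported fields shown above; all comparable, so the
// struct itself can serve as a map key.
type BucketsAggregationKey struct {
	Service    string
	Name       string
	Resource   string
	Type       string
	StatusCode uint32
	Synthetics bool
}

// countHits aggregates hit counts per key (hypothetical helper):
// identical keys collapse into a single bucket entry.
func countHits(keys []BucketsAggregationKey) map[BucketsAggregationKey]int {
	counts := make(map[BucketsAggregationKey]int)
	for _, k := range keys {
		counts[k]++
	}
	return counts
}

func main() {
	k := BucketsAggregationKey{Service: "web", Name: "http.request", StatusCode: 200}
	counts := countHits([]BucketsAggregationKey{k, k})
	fmt.Println(counts[k]) // prints 2: both spans land in the same bucket
}
```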
type ClientStatsAggregator ¶
type ClientStatsAggregator struct {
	In chan pb.ClientStatsPayload
	// contains filtered or unexported fields
}
ClientStatsAggregator aggregates client stats payloads on buckets of bucketDuration. If a single payload is received on a bucket, this aggregator is a passthrough. If two or more payloads collide, their counts are aggregated into one bucket and multiple payloads are sent:
- The original payloads, with their distributions, are sent with their counts zeroed.
- A single payload carrying the aggregated bucket counts is sent.
Together with the aggregator's timestamp alignment, this ensures that all counts have at most one point per second per agent for a given granularity, while distributions are not tied to the agent.
func NewClientStatsAggregator ¶
func NewClientStatsAggregator(conf *config.AgentConfig, out chan pb.StatsPayload) *ClientStatsAggregator
NewClientStatsAggregator initializes a new aggregator, ready to be started.
func (*ClientStatsAggregator) Start ¶
func (a *ClientStatsAggregator) Start()
Start starts the aggregator.
func (*ClientStatsAggregator) Stop ¶
func (a *ClientStatsAggregator) Stop()
Stop stops the aggregator. Calling Stop twice will panic.
type Concentrator ¶
type Concentrator struct {
	In  chan Input
	Out chan pb.StatsPayload
	// contains filtered or unexported fields
}
Concentrator produces time-bucketed statistics from a stream of raw traces. https://en.wikipedia.org/wiki/Knelson_concentrator It ingests a very large volume of traces and outputs pre-computed data structures that allow finding the gold (stats) amongst the traces.
func NewConcentrator ¶
func NewConcentrator(conf *config.AgentConfig, out chan pb.StatsPayload, now time.Time) *Concentrator
NewConcentrator initializes a new concentrator, ready to be started.
func (*Concentrator) Add ¶
func (c *Concentrator) Add(t Input)
Add applies the given input to the concentrator.
func (*Concentrator) Flush ¶
func (c *Concentrator) Flush() pb.StatsPayload
Flush deletes and returns complete statistics buckets.
func (*Concentrator) Run ¶
func (c *Concentrator) Run()
Run runs the main loop of the concentrator goroutine. Traces are received through `Add`; this loop only deals with flushing.
type EnvTrace ¶
type EnvTrace struct {
	Trace WeightedTrace
	Env   string
}
EnvTrace contains input for the concentrator.
type PayloadAggregationKey ¶
PayloadAggregationKey specifies the key by which a payload is aggregated.
type RawBucket ¶
type RawBucket struct {
// contains filtered or unexported fields
}
RawBucket is used to compute span data and aggregate it within a time-framed bucket. It should not be used outside the agent; use ClientStatsBucket instead.
func NewRawBucket ¶
NewRawBucket opens a new calculation bucket for time ts and initializes it properly.
func (*RawBucket) Export ¶
func (sb *RawBucket) Export() map[PayloadAggregationKey]pb.ClientStatsBucket
Export transforms a RawBucket into a ClientStatsBucket, typically used before communicating data to the API, as RawBucket is the internal type while ClientStatsBucket is the public, shared one.
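The shape change Export performs, regrouping an internal flat map into per-payload groups, can be sketched as follows. All types here are simplified stand-ins for PayloadAggregationKey, pb.ClientGroupedStats, and the internal bucket state, not the package's real definitions:

```go
package main

import "fmt"

// payloadKey and groupedStats are simplified stand-ins for
// PayloadAggregationKey and pb.ClientGroupedStats.
type payloadKey struct{ Env, Hostname string }

type spanKey struct {
	payload payloadKey
	name    string
}

type groupedStats struct {
	Name string
	Hits int
}

// export regroups a flat internal map by payload key, mirroring the
// internal-to-public shape change RawBucket.Export performs.
func export(internal map[spanKey]int) map[payloadKey][]groupedStats {
	out := make(map[payloadKey][]groupedStats)
	for k, hits := range internal {
		out[k.payload] = append(out[k.payload], groupedStats{Name: k.name, Hits: hits})
	}
	return out
}

func main() {
	internal := map[spanKey]int{
		{payloadKey{"prod", "h1"}, "http.request"}: 3,
		{payloadKey{"prod", "h1"}, "sql.query"}:    1,
	}
	out := export(internal)
	fmt.Println(len(out[payloadKey{"prod", "h1"}])) // prints 2: both span groups under one payload
}
```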
func (*RawBucket) HandleSpan ¶
func (sb *RawBucket) HandleSpan(s *WeightedSpan, origin, env, hostname, containerID string)
HandleSpan adds the span to this bucket's stats, aggregated at the finest grain matching the given aggregators.
type WeightedSpan ¶
type WeightedSpan struct {
	Weight   float64 // Span weight. Similar to the trace root.Weight().
	TopLevel bool    // Is this span a service top-level or not. Similar to span.TopLevel().
	Measured bool    // Is this span marked for metrics computation.
	*pb.Span
}
WeightedSpan extends Span to contain weights required by the Concentrator.
type WeightedTrace ¶
type WeightedTrace struct {
	TracerHostname string
	Origin         string
	Spans          []*WeightedSpan
}
WeightedTrace wraps a trace's WeightedSpans together with its tracer hostname and origin.
func NewWeightedTrace ¶
func NewWeightedTrace(trace *pb.TraceChunk, root *pb.Span, tracerHostname string) WeightedTrace
NewWeightedTrace returns a weighted trace, with the coefficient required by the concentrator.