Documentation ¶
Overview ¶
Package dynsampler contains several sampling algorithms to help you select a representative set of events instead of a full stream.
This package is intended to help sample a stream of tracking events, where events are typically created in response to a stream of traffic (for the purposes of logging or debugging). In general, sampling is used to reduce the total volume of events necessary to represent the stream of traffic in a meaningful way.
For the purposes of these examples, the "traffic" will be a set of HTTP requests being handled by a server, and "event" will be a blob of metadata about a given HTTP request that might be useful to keep track of later. A "sample rate" of 100 means that for every 100 requests, we capture a single event and indicate that it represents 100 similar requests.
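To make the arithmetic concrete: at a sample rate of N, each event should be kept with probability 1/N. Here's a minimal sketch of that decision; the `shouldKeep` helper is illustrative only, not part of this package's API:

```go
package main

import (
	"fmt"
	"math/rand"
)

// shouldKeep decides whether to keep a single event given a sample rate.
// A rate of N means one kept event represents N similar events, so we keep
// each event with probability 1/N.
func shouldKeep(rate int) bool {
	if rate <= 1 {
		return true // a rate of 1 means keep everything
	}
	return rand.Intn(rate) == 0
}

func main() {
	kept := 0
	for i := 0; i < 10000; i++ {
		if shouldKeep(100) {
			kept++
		}
	}
	fmt.Println("kept:", kept) // roughly 100 of 10,000 events
}
```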
Use ¶
Use the `Sampler` interface in your code; each sampling algorithm in this package implements it.
The following guidelines can help you choose a sampler. Depending on the shape of your traffic, one may serve better than another, or you may need to write a new one! Please consider contributing it back to this package if you do.
* If your system has a completely homogeneous stream of requests: use `Static` to use a constant sample rate.
* If your system has a steady stream of requests and a well-known, low-cardinality partition key (e.g. HTTP status): use `Static` and override sample rates on a per-key basis (e.g. if you want to sample `HTTP 200/OK` events at a different rate from `HTTP 503/Server Error`).
* If your logging system has a strict cap on the rate it can receive events: use `TotalThroughput`, which will calculate sample rates based on keeping *the entire system's* representative event throughput right around (or under) a particular cap.
* If your system has a rough cap on the rate it can receive events and your partitioned keyspace is fairly steady: use `PerKeyThroughput`, which will calculate sample rates based on keeping the event throughput roughly constant *per key/partition* (e.g. per user ID).
* If your system has a large key space and a large disparity between the highest-volume and lowest-volume keys: use `AvgSampleWithMin`, which increases the sample rate of higher-volume traffic proportionally to the logarithm of each key's volume. If total traffic falls below a configured minimum, it stops sampling entirely rather than oversample traffic too scarce to warrant it.
* `EMASampleRate` works like `AvgSampleRate`, but calculates sample rates based on a moving average (Exponential Moving Average) of many measurement intervals rather than a single isolated interval. In addition, it can detect large bursts in traffic and will trigger a recalculation of sample rates before the regular interval.
Each sampler implementation below has additional configuration parameters and a detailed description of how it chooses a sample rate.
Some samplers implement `SaveState` and `LoadState`, enabling you to serialize the sampler's internal state and load it back later. This is useful, for example, if you want to avoid losing calculated sample rates between process restarts. A usage sketch follows.
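Putting the pieces together, here is a minimal sketch of wiring a sampler into an HTTP handler. The import path `github.com/honeycombio/dynsampler-go` and the logging details are illustrative assumptions:

```go
package main

import (
	"fmt"
	"log"
	"math/rand"
	"net/http"

	dynsampler "github.com/honeycombio/dynsampler-go"
)

func main() {
	// Aim for an average 1-in-10 sample rate, recalculated every 30s.
	sampler := &dynsampler.AvgSampleRate{
		ClearFrequencySec: 30,
		GoalSampleRate:    10,
	}
	if err := sampler.Start(); err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)

		// Partition traffic by whatever key you like; here, status code.
		rate := sampler.GetSampleRate(fmt.Sprint(http.StatusOK))
		if rand.Intn(rate) == 0 {
			// Keep this event; it stands in for `rate` similar requests.
			log.Printf("sampled event: path=%s rate=%d", r.URL.Path, rate)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```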
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AvgSampleRate ¶
```go
type AvgSampleRate struct {
	// ClearFrequencySec is how often the counters reset, in seconds; default 30.
	ClearFrequencySec int

	// GoalSampleRate is the average sample rate we're aiming for, across all
	// events. Default 10.
	GoalSampleRate int

	// MaxKeys, if greater than 0, limits the number of distinct keys used to build
	// the sample rate map within the interval defined by ClearFrequencySec. Once
	// MaxKeys is reached, new keys will not be included in the sample rate map, but
	// existing keys will continue to be counted.
	MaxKeys int
	// contains filtered or unexported fields
}
```
AvgSampleRate implements Sampler and attempts to average a given sample rate, weighting rare traffic and frequent traffic differently so as to end up with the correct average. This method breaks down when total traffic is low because it will be excessively sampled.
Keys that occur only once within ClearFrequencySec will always have a sample rate of 1. Keys that occur more frequently will be sampled on a logarithmic curve. In other words, every key will be represented at least once per ClearFrequencySec and more frequent keys will have their sample rate increased proportionally to wind up with the goal sample rate.
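A configuration sketch using the fields above (the values are illustrative, not recommendations):

```go
package example

import (
	"log"

	dynsampler "github.com/honeycombio/dynsampler-go"
)

func newAvgSampler() *dynsampler.AvgSampleRate {
	s := &dynsampler.AvgSampleRate{
		ClearFrequencySec: 30,  // recalculate rates every 30s (the default)
		GoalSampleRate:    20,  // aim for 1-in-20 on average across all traffic
		MaxKeys:           500, // ignore new keys once 500 are tracked in an interval
	}
	if err := s.Start(); err != nil {
		log.Fatal(err)
	}
	return s
}
```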
func (*AvgSampleRate) GetSampleRate ¶
func (a *AvgSampleRate) GetSampleRate(key string) int
GetSampleRate takes a key and returns the appropriate sample rate for that key. Will never return zero.
func (*AvgSampleRate) LoadState ¶ added in v0.2.0
func (a *AvgSampleRate) LoadState(state []byte) error
LoadState accepts a byte array with a JSON representation of a previous instance's state
func (*AvgSampleRate) SaveState ¶ added in v0.2.0
func (a *AvgSampleRate) SaveState() ([]byte, error)
SaveState returns a byte array with a JSON representation of the sampler state
func (*AvgSampleRate) Start ¶
func (a *AvgSampleRate) Start() error
type AvgSampleWithMin ¶
```go
type AvgSampleWithMin struct {
	// ClearFrequencySec is how often the counters reset, in seconds; default 30.
	ClearFrequencySec int

	// GoalSampleRate is the average sample rate we're aiming for, across all
	// events. Default 10.
	GoalSampleRate int

	// MaxKeys, if greater than 0, limits the number of distinct keys used to build
	// the sample rate map within the interval defined by ClearFrequencySec. Once
	// MaxKeys is reached, new keys will not be included in the sample rate map, but
	// existing keys will continue to be counted.
	MaxKeys int

	// MinEventsPerSec - when the total number of events drops below this
	// threshold, sampling will cease. Default 50.
	MinEventsPerSec int
	// contains filtered or unexported fields
}
```
AvgSampleWithMin implements Sampler and attempts to average a given sample rate, with a minimum number of events per second (i.e. it will reduce sampling if it would otherwise send fewer than the minimum number of events). This method attempts to keep the benefits of the normal average sample rate method while avoiding its failings at the low end of total traffic throughput.
Keys that occur only once within ClearFrequencySec will always have a sample rate of 1. Keys that occur more frequently will be sampled on a logarithmic curve. In other words, every key will be represented at least once per ClearFrequencySec and more frequent keys will have their sample rate increased proportionally to wind up with the goal sample rate.
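A sketch of the low-traffic guard in practice; with the settings below, sampling shuts off entirely whenever fewer than 50 events/sec are flowing (values chosen for illustration):

```go
package example

import (
	"log"

	dynsampler "github.com/honeycombio/dynsampler-go"
)

func newMinSampler() *dynsampler.AvgSampleWithMin {
	s := &dynsampler.AvgSampleWithMin{
		GoalSampleRate:  10, // average 1-in-10 when traffic is healthy
		MinEventsPerSec: 50, // below 50 events/sec, stop sampling entirely
	}
	if err := s.Start(); err != nil {
		log.Fatal(err)
	}
	return s
}
```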
func (*AvgSampleWithMin) GetSampleRate ¶
func (a *AvgSampleWithMin) GetSampleRate(key string) int
GetSampleRate takes a key and returns the appropriate sample rate for that key
func (*AvgSampleWithMin) LoadState ¶ added in v0.2.0
func (a *AvgSampleWithMin) LoadState(state []byte) error
LoadState is not implemented
func (*AvgSampleWithMin) SaveState ¶ added in v0.2.0
func (a *AvgSampleWithMin) SaveState() ([]byte, error)
SaveState is not implemented
func (*AvgSampleWithMin) Start ¶
func (a *AvgSampleWithMin) Start() error
type EMASampleRate ¶ added in v0.2.0
```go
type EMASampleRate struct {
	// AdjustmentInterval defines how often (in seconds) we adjust the moving
	// average from recent observations. Default 15s.
	AdjustmentInterval int

	// Weight is a value between (0, 1) indicating the weighting factor used to
	// adjust the EMA. With larger values, newer data will influence the average
	// more, and older values will be factored out more quickly. In mathematical
	// literature concerning EMA, this is referred to as the `alpha` constant.
	// Default is 0.5.
	Weight float64

	// GoalSampleRate is the average sample rate we're aiming for, across all
	// events. Default 10.
	GoalSampleRate int

	// MaxKeys, if greater than 0, limits the number of distinct keys tracked in
	// the EMA. Once MaxKeys is reached, new keys will not be included in the
	// sample rate map, but existing keys will continue to be counted.
	MaxKeys int

	// AgeOutValue indicates the threshold for removing keys from the EMA. The
	// EMA of any key will approach 0 if it is not repeatedly observed, but will
	// never truly reach it, so we have to decide what constitutes "zero". Keys
	// with averages below this threshold will be removed from the EMA. Default
	// is the same as Weight, as this prevents a key with the smallest integer
	// value (1) from being aged out immediately. This value should generally
	// be <= Weight, unless you have very specific reasons to set it higher.
	AgeOutValue float64

	// BurstMultiple, if set, is multiplied by the sum of the running average of
	// counts to define the burst detection threshold. If total counts observed
	// for a given interval exceed the threshold, the EMA is updated immediately,
	// rather than waiting on the AdjustmentInterval. Defaults to 2; a negative
	// value disables burst detection. With the default of 2, if your traffic
	// suddenly doubles, burst detection will kick in.
	BurstMultiple float64

	// BurstDetectionDelay indicates the number of intervals to run after Start
	// is called before burst detection kicks in. Defaults to 3.
	BurstDetectionDelay uint
	// contains filtered or unexported fields
}
```
EMASampleRate implements Sampler and attempts to average a given sample rate, weighting rare traffic and frequent traffic differently so as to end up with the correct average. This method breaks down when total traffic is low because it will be excessively sampled.
Based on the AvgSampleRate implementation, EMASampleRate differs in that rather than computing rates from a single periodic snapshot of traffic, it maintains an Exponential Moving Average of counts seen per key and adjusts this average at regular intervals. The weight applied to more recent intervals is defined by `Weight`, a number between (0, 1); larger values weight the average more toward recent observations. In other words, a larger weight will cause sample rates to adapt to traffic patterns more quickly, while a smaller weight will make sample rates less sensitive to bursts or drops in traffic and thus more consistent over time.
Keys that are not found in the EMA will always have a sample rate of 1. Keys that occur more frequently will be sampled on a logarithmic curve. In other words, every key will be represented at least once in any given window and more frequent keys will have their sample rate increased proportionally to wind up with the goal sample rate.
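A configuration sketch; the values here are illustrative, not recommendations:

```go
package example

import (
	"log"

	dynsampler "github.com/honeycombio/dynsampler-go"
)

func newEMASampler() *dynsampler.EMASampleRate {
	s := &dynsampler.EMASampleRate{
		GoalSampleRate:     10,  // average 1-in-10 across all traffic
		AdjustmentInterval: 15,  // fold new counts into the EMA every 15s
		Weight:             0.5, // weight recent intervals and history equally
		BurstMultiple:      2,   // recalculate immediately if traffic doubles
	}
	if err := s.Start(); err != nil {
		log.Fatal(err)
	}
	return s
}
```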
func (*EMASampleRate) GetSampleRate ¶ added in v0.2.0
func (e *EMASampleRate) GetSampleRate(key string) int
GetSampleRate takes a key and returns the appropriate sample rate for that key. Will never return zero.
func (*EMASampleRate) LoadState ¶ added in v0.2.0
func (e *EMASampleRate) LoadState(state []byte) error
LoadState accepts a byte array with a JSON representation of a previous instance's state
func (*EMASampleRate) SaveState ¶ added in v0.2.0
func (e *EMASampleRate) SaveState() ([]byte, error)
SaveState returns a byte array with a JSON representation of the sampler state
func (*EMASampleRate) Start ¶ added in v0.2.0
func (e *EMASampleRate) Start() error
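Because EMASampleRate supports SaveState and LoadState, calculated rates can survive a restart. A sketch, assuming state is kept in a local file (`sampler-state.json` is an arbitrary name for this example):

```go
package main

import (
	"log"
	"os"

	dynsampler "github.com/honeycombio/dynsampler-go"
)

func main() {
	sampler := &dynsampler.EMASampleRate{GoalSampleRate: 10}

	// Restore state from a previous run, if any. LoadState should be
	// called before Start.
	if state, err := os.ReadFile("sampler-state.json"); err == nil {
		if err := sampler.LoadState(state); err != nil {
			log.Printf("could not load sampler state: %v", err)
		}
	}
	if err := sampler.Start(); err != nil {
		log.Fatal(err)
	}

	// ... at shutdown, persist the current state for the next process.
	state, err := sampler.SaveState()
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("sampler-state.json", state, 0o644); err != nil {
		log.Fatal(err)
	}
}
```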
type OnlyOnce ¶
```go
type OnlyOnce struct {
	// ClearFrequencySec is how often the counters reset, in seconds; default 30.
	ClearFrequencySec int
	// contains filtered or unexported fields
}
```
OnlyOnce implements Sampler and returns a sample rate of 1 the first time a key is seen and 1,000,000,000 every subsequent time. Essentially, this means that every key will be reported the first time it's seen during each ClearFrequencySec and never again. Set ClearFrequencySec to -1 to report each key only once for the life of the process.
(Note that it's not guaranteed that each key will be reported exactly once, just that the first seen event will be reported and subsequent events are unlikely to be reported. It is probable that an additional event will be reported for every billion times the key appears.)
This emulates what you might expect from something catching stack traces - the first one is important but every subsequent one just repeats the same information.
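A sketch of that stack-trace use case; `reportPanic` is a hypothetical hook, and treating a returned rate of 1 as "keep" is a simplification that fits OnlyOnce's first-seen/never-again behavior:

```go
package example

import (
	"log"

	dynsampler "github.com/honeycombio/dynsampler-go"
)

var onceSampler = &dynsampler.OnlyOnce{
	ClearFrequencySec: -1, // report each distinct trace once per process lifetime
}

func init() {
	if err := onceSampler.Start(); err != nil {
		log.Fatal(err)
	}
}

func reportPanic(stackTrace string) {
	// The first occurrence of each distinct trace gets rate 1 (always kept);
	// repeats get a rate of 1,000,000,000 (effectively never kept).
	if onceSampler.GetSampleRate(stackTrace) == 1 {
		log.Printf("new panic signature:\n%s", stackTrace)
	}
}
```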
func (*OnlyOnce) GetSampleRate ¶
func (o *OnlyOnce) GetSampleRate(key string) int

GetSampleRate takes a key and returns the appropriate sample rate for that key
type PerKeyThroughput ¶
```go
type PerKeyThroughput struct {
	// ClearFrequencySec is how often the counters reset, in seconds; default 30.
	ClearFrequencySec int

	// PerKeyThroughputPerSec is the target number of events to send per second
	// per key. Sample rates are generated on a per-key basis to squash the
	// throughput down to match the goal throughput. Default 10.
	PerKeyThroughputPerSec int

	// MaxKeys, if greater than 0, limits the number of distinct keys used to build
	// the sample rate map within the interval defined by ClearFrequencySec. Once
	// MaxKeys is reached, new keys will not be included in the sample rate map, but
	// existing keys will continue to be counted.
	MaxKeys int
	// contains filtered or unexported fields
}
```
PerKeyThroughput implements Sampler and attempts to meet a goal of a fixed number of events per key per second sent to Honeycomb.
This method guarantees that at most a certain number of events per key are transmitted, no matter how many keys you have or how much traffic comes through. In other words, if capturing a minimum amount of traffic per key is important, but event volume beyond that matters little, this is the best method.
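A sketch targeting a per-user event budget; partitioning by user ID and the specific budget are illustrative choices:

```go
package example

import (
	"log"

	dynsampler "github.com/honeycombio/dynsampler-go"
)

func newPerKeySampler() *dynsampler.PerKeyThroughput {
	s := &dynsampler.PerKeyThroughput{
		PerKeyThroughputPerSec: 5, // aim for ~5 events/sec for each key (e.g. user ID)
	}
	if err := s.Start(); err != nil {
		log.Fatal(err)
	}
	return s
}
```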
func (*PerKeyThroughput) GetSampleRate ¶
func (p *PerKeyThroughput) GetSampleRate(key string) int
GetSampleRate takes a key and returns the appropriate sample rate for that key
func (*PerKeyThroughput) LoadState ¶ added in v0.2.0
func (p *PerKeyThroughput) LoadState(state []byte) error
LoadState is not implemented
func (*PerKeyThroughput) SaveState ¶ added in v0.2.0
func (p *PerKeyThroughput) SaveState() ([]byte, error)
SaveState is not implemented
func (*PerKeyThroughput) Start ¶
func (p *PerKeyThroughput) Start() error
type Sampler ¶
```go
type Sampler interface {
	// Start initializes the sampler. You should call Start() before using the
	// sampler.
	Start() error

	// GetSampleRate will return the sample rate to use for the string given. You
	// should call it with whatever key you choose to use to partition traffic
	// into different sample rates.
	GetSampleRate(string) int

	// SaveState returns a byte array containing the state of the Sampler
	// implementation. It can be used to persist state between process restarts.
	SaveState() ([]byte, error)

	// LoadState accepts a byte array containing the serialized, previous state
	// of the sampler implementation. It should be called before Start.
	LoadState([]byte) error
}
```
Sampler is the interface to samplers using different methods to determine sample rate. You should instantiate one of the actual samplers in this package, depending on the sample method you'd like to use. Each sampling method has its own set of struct variables you should set before Start()ing the sampler.
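Because the interface is small, writing a new sampler (as suggested in the Use section) takes only four methods. A toy constant-rate sketch to show the shape of an implementation; the package's own `Static` already covers this case for real use:

```go
package example

import dynsampler "github.com/honeycombio/dynsampler-go"

// fixedRate is a toy Sampler that always returns the same rate.
type fixedRate struct {
	Rate int
}

func (f *fixedRate) Start() error                 { return nil }
func (f *fixedRate) GetSampleRate(key string) int { return f.Rate }
func (f *fixedRate) SaveState() ([]byte, error)   { return nil, nil }
func (f *fixedRate) LoadState(state []byte) error { return nil }

// Compile-time check that fixedRate satisfies the interface.
var _ dynsampler.Sampler = (*fixedRate)(nil)
```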
type Static ¶
```go
type Static struct {
	// Rates is the set of sample rates to use.
	Rates map[string]int

	// Default is the value to use if the key is not whitelisted in Rates.
	Default int
}
```
Static implements Sampler with a static mapping for sample rates. This is useful if you have a known set of keys that you want to sample at specific rates and apply a default to everything else.
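A sketch of per-status-code rates, following the HTTP example from the Overview; the specific rates are illustrative:

```go
package example

import dynsampler "github.com/honeycombio/dynsampler-go"

var statusSampler = &dynsampler.Static{
	Rates: map[string]int{
		"200": 100, // keep 1 in 100 successes
		"404": 10,  // keep 1 in 10 not-founds
		"503": 1,   // keep every server error
	},
	Default: 50, // everything else: 1 in 50
}

func init() {
	// Per the Sampler interface, Start should be called before use.
	if err := statusSampler.Start(); err != nil {
		panic(err)
	}
}
```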
func (*Static) GetSampleRate ¶
func (s *Static) GetSampleRate(key string) int

GetSampleRate takes a key and returns the appropriate sample rate for that key
type TotalThroughput ¶
```go
type TotalThroughput struct {
	// ClearFrequencySec is how often the counters reset, in seconds; default 30.
	ClearFrequencySec int

	// GoalThroughputPerSec is the target number of events to send per second.
	// Sample rates are generated to squash the total throughput down to match the
	// goal throughput. Actual throughput may exceed goal throughput. Default 100.
	GoalThroughputPerSec int

	// MaxKeys, if greater than 0, limits the number of distinct keys used to build
	// the sample rate map within the interval defined by ClearFrequencySec. Once
	// MaxKeys is reached, new keys will not be included in the sample rate map, but
	// existing keys will continue to be counted.
	MaxKeys int
	// contains filtered or unexported fields
}
```
TotalThroughput implements Sampler and attempts to meet a goal of a fixed number of events per second sent to Honeycomb.
If your key space is sharded across different servers, this is a good method for making sure each server sends roughly the same volume of content to Honeycomb. It performs poorly when the active keyspace is very large.
GoalThroughputPerSec * ClearFrequencySec defines the upper limit on the number of keys that can be reported while staying under the goal, but with that many keys you'll only get one event per key per ClearFrequencySec, which is very coarse. You should aim for between 1 event per key per second and 1 event per key per 10 seconds to get reasonable data. In other words, the number of active keys should be less than 10*GoalThroughputPerSec.
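To make that arithmetic concrete with the defaults: 100 events/sec * 30 sec = 3,000 keys is the point at which each key gets only one event per interval, so you'd want well under 1,000 active keys (10*GoalThroughputPerSec). A configuration sketch:

```go
package example

import (
	"log"

	dynsampler "github.com/honeycombio/dynsampler-go"
)

func newThroughputSampler() *dynsampler.TotalThroughput {
	s := &dynsampler.TotalThroughput{
		GoalThroughputPerSec: 100, // cap the whole system near 100 events/sec
		ClearFrequencySec:    30,  // recalculate rates every 30s
	}
	if err := s.Start(); err != nil {
		log.Fatal(err)
	}
	return s
}
```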
func (*TotalThroughput) GetSampleRate ¶
func (t *TotalThroughput) GetSampleRate(key string) int
GetSampleRate takes a key and returns the appropriate sample rate for that key
func (*TotalThroughput) LoadState ¶ added in v0.2.0
func (t *TotalThroughput) LoadState(state []byte) error
LoadState is not implemented
func (*TotalThroughput) SaveState ¶ added in v0.2.0
func (t *TotalThroughput) SaveState() ([]byte, error)
SaveState is not implemented
func (*TotalThroughput) Start ¶
func (t *TotalThroughput) Start() error