Documentation ¶
Index ¶
- type Doer
- type Loadshed
- type Option
- func Aggregator(a rolling.Aggregator) Option
- func AverageLatency(lower float64, upper float64, bucketSize time.Duration, buckets int, ...) Option
- func CPU(lower float64, upper float64, pollingInterval time.Duration, windowSize int) Option
- func Concurrency(lower int, upper int, wg *WaitGroup) Option
- func ErrorRate(lower float64, upper float64, bucketSize time.Duration, buckets int, ...) Option
- func PercentileLatency(lower float64, upper float64, bucketSize time.Duration, buckets int, ...) Option
- type Rejected
- type WaitGroup
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Loadshed ¶
type Loadshed struct {
// contains filtered or unexported fields
}
Loadshed contains the aggregators used to reject a percentage of requests based on an aggregation of system load data.
type Option ¶
Option is a partial initializer for Loadshed
func Aggregator ¶
func Aggregator(a rolling.Aggregator) Option
Aggregator adds an arbitrary Aggregator to the evaluation for load shedding. The result of the aggregator will be interpreted as a percentage value between 0.0 and 1.0. This value will be used as the percentage of requests to reject.
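As a rough standalone sketch of a custom aggregator (assuming rolling.Aggregator amounts to a single Aggregate() float64 method, which may not match the actual rolling API), a fixed-rate implementation could look like:

```go
package main

import "fmt"

// aggregator mirrors the assumed shape of rolling.Aggregator:
// a single method returning the current aggregate value.
type aggregator interface {
	Aggregate() float64
}

// fixedRate always reports the same value. Because Loadshed
// interprets the result as a percentage between 0.0 and 1.0,
// this aggregator would shed a constant fraction of traffic.
type fixedRate struct{ rate float64 }

func (f fixedRate) Aggregate() float64 { return f.rate }

func main() {
	var a aggregator = fixedRate{rate: 0.25}
	fmt.Println(a.Aggregate()) // 0.25 -> reject 25% of requests
}
```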
func AverageLatency ¶
func AverageLatency(lower float64, upper float64, bucketSize time.Duration, buckets int, preallocHint int, requiredPoints int) Option
AverageLatency generates an option that adds average request latency within a rolling time window to the load shedding calculation. If the average value, in seconds, falls between lower and upper then a percentage of new requests will be rejected. The rolling window is configured by defining a bucket size and number of buckets. The preallocHint is an optimisation for keeping the number of alloc calls low. If the hint is zero then a default value is used.
func CPU ¶
func CPU(lower float64, upper float64, pollingInterval time.Duration, windowSize int) Option
CPU generates an option that adds a rolling average of CPU usage to the load shedding calculation. It will configure the Decorator to reject a percentage of traffic once the average CPU usage is between lower and upper.
func Concurrency ¶
func Concurrency(lower int, upper int, wg *WaitGroup) Option
Concurrency generates an option that adds the total number of concurrent requests to the load shedding calculation. Once the number of requests in flight reaches a value between lower and upper, the Decorator will begin rejecting new requests based on the distance between the threshold values.
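"Based on the distance between the threshold values" suggests a linear ramp: below lower nothing is rejected, above upper everything is. A hedged sketch of that interpolation (assumed behaviour, not taken from the package source):

```go
package main

import "fmt"

// rejectionRate linearly interpolates the fraction of requests to
// reject as the measured value moves from lower to upper.
func rejectionRate(value, lower, upper float64) float64 {
	switch {
	case value <= lower:
		return 0.0
	case value >= upper:
		return 1.0
	default:
		return (value - lower) / (upper - lower)
	}
}

func main() {
	// With lower=50 and upper=100 concurrent requests:
	fmt.Println(rejectionRate(50, 50, 100))  // 0
	fmt.Println(rejectionRate(75, 50, 100))  // 0.5
	fmt.Println(rejectionRate(100, 50, 100)) // 1
}
```

The same ramp applies to the other threshold-based options (CPU, AverageLatency, ErrorRate) under this reading.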
func ErrorRate ¶
func ErrorRate(lower float64, upper float64, bucketSize time.Duration, buckets int, preallocHint int, requiredPoints int) Option
ErrorRate generates an option that adds the rate of failed requests within a rolling time window to the load shedding calculation. If the error rate falls between lower and upper then a percentage of new requests will be rejected. The rolling window is configured by defining a bucket size and number of buckets. The preallocHint is an optimisation for keeping the number of alloc calls low. If the hint is zero then a default value is used.
func PercentileLatency ¶
func PercentileLatency(lower float64, upper float64, bucketSize time.Duration, buckets int, preallocHint int, requiredPoints int, percentile float64) Option
PercentileLatency generates an option much like AverageLatency except the aggregation is computed as a percentile of the data recorded rather than an average. The percentile should be given as N%. For example, 95.0 or 99.0. Fractional percentiles, like 99.9, are also valid.
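For illustration, a self-contained nearest-rank percentile over recorded latencies (the package may use a different percentile method):

```go
package main

import (
	"fmt"
	"sort"
)

// percentileOf returns the value at the given percentile (e.g. 95.0)
// using the nearest-rank method on a sorted copy of the input.
func percentileOf(values []float64, p float64) float64 {
	sorted := append([]float64(nil), values...)
	sort.Float64s(sorted)
	rank := int(p/100*float64(len(sorted))+0.5) - 1
	if rank < 0 {
		rank = 0
	}
	if rank >= len(sorted) {
		rank = len(sorted) - 1
	}
	return sorted[rank]
}

func main() {
	latencies := []float64{0.1, 0.2, 0.3, 0.4, 1.5}
	fmt.Println(percentileOf(latencies, 99.0)) // 1.5
}
```

Note how the 99th percentile surfaces the outlier (1.5s) that an average would dilute, which is why percentile latency is often a better shedding signal.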
type WaitGroup ¶
WaitGroup wraps a sync.WaitGroup to make it usable as a load shedding tool.
func NewWaitGroup ¶
func NewWaitGroup() *WaitGroup
NewWaitGroup generates a specialised WaitGroup that tracks the number of concurrent operations. This implementation also satisfies the Aggregator interface from bitbucket.org/atlassian/rolling so that this can be fed into a calculation of system health.
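As a rough illustration of the idea (assuming the Aggregator interface from the rolling package amounts to a single Aggregate() float64 method; the real type's API may differ), a counting wrapper around sync.WaitGroup could look like:

```go
package main

import (
	"fmt"
	"sync"
)

// waitGroup tracks the number of in-flight operations alongside a
// sync.WaitGroup and reports the current count as a float64 so it
// can feed into a load calculation.
type waitGroup struct {
	mu    sync.Mutex
	wg    sync.WaitGroup
	count int
}

func (w *waitGroup) Add(delta int) {
	w.mu.Lock()
	w.count += delta
	w.mu.Unlock()
	w.wg.Add(delta)
}

func (w *waitGroup) Done() { w.Add(-1) }

// Aggregate reports the number of concurrent operations.
func (w *waitGroup) Aggregate() float64 {
	w.mu.Lock()
	defer w.mu.Unlock()
	return float64(w.count)
}

func main() {
	w := &waitGroup{}
	w.Add(2)
	fmt.Println(w.Aggregate()) // 2
	w.Done()
	fmt.Println(w.Aggregate()) // 1
}
```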