Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func QPSInterval ¶
QPSInterval returns the interval between events corresponding to the given queries/second rate.
This is a helper to be used when populating Limiter.RefillInterval.
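A minimal sketch of the intended use, assuming the signature QPSInterval(qps float64) time.Duration (the full signature is not shown on this page) and that this package is imported under the name limiter:

// Convert a target rate of 5 queries/second into a refill interval.
// The resulting 200ms value is what Limiter.RefillInterval expects.
// Assumed signature: QPSInterval(qps float64) time.Duration.
refill := limiter.QPSInterval(5) // 200ms between tokens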
Types ¶
type Limiter ¶
type Limiter[K comparable] struct {
    // Size is the number of keys to track. Only the Size most
    // recently seen keys have their limits enforced precisely, older
    // keys are assumed to not be querying frequently enough to bother
    // tracking.
    Size int

    // Max is the number of tokens available for a key to consume
    // before time-based rate limiting kicks in. An unused limiter
    // regains available tokens over time, up to Max tokens. A newly
    // tracked key initially receives Max tokens.
    Max int64

    // RefillInterval is the interval at which a key regains tokens for
    // use, up to Max tokens.
    RefillInterval time.Duration

    // Overdraft is the amount of additional tokens a key can be
    // charged for when it exceeds its rate limit. Each additional
    // request issued for the key charges one unit of overdraft, up to
    // this limit. Overdraft tokens are refilled at the normal rate,
    // and must be fully repaid before any tokens become available for
    // requests.
    //
    // A non-zero Overdraft results in "cooldown" behavior: with a
    // normal token bucket that bottoms out at zero tokens, an abusive
    // key can still consume one token every RefillInterval. With a
    // non-zero overdraft, a throttled key must stop requesting tokens
    // entirely for a cooldown period, otherwise they remain
    // perpetually in debt and cannot proceed at all.
    Overdraft int64
    // contains filtered or unexported fields
}
Limiter is a keyed token bucket rate limiter.
Each key gets its own separate token bucket to pull from, enabling enforcement on things like "requests per IP address". To avoid unbounded memory growth, Limiter actually only tracks limits precisely for the N most recently seen keys, and assumes that untracked keys are well-behaved. This trades off absolute precision for bounded memory use, while still enforcing well for outlier keys.
As such, Limiter should only be used where rough enforcement against outliers is sufficient, such as throttling egregious outlier keys (e.g. a key sending 100 queries per second while everyone else sends at most 5).
Each key's token bucket behaves like a regular token bucket, with the added feature that a bucket's token count can optionally go negative. This implements a form of "cooldown" for keys that exceed the rate limit: once a key starts getting denied, it must stop requesting tokens long enough for the bucket to return to a positive balance. If the key keeps hammering the limiter in excess of the rate limit, the token count will remain negative, and the key will not be allowed to proceed at all. This is in contrast to the classic token bucket, where a key trying to use more than the rate limit will get capped at the limit, but can still occasionally consume a token as one becomes available.
The zero value is a valid limiter that rejects all requests. A useful limiter must specify a Size, Max and RefillInterval.
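As an illustration, a sketch of a per-IP limiter built as a struct literal, which the zero-value note above suggests is the intended way to configure one (the field values are arbitrary; time and net/netip imports are assumed):

// A limiter keyed by client IP. Values are illustrative: track the
// 10,000 most recently seen IPs, allow a burst of 10 requests, refill
// one token per second, and require a throttled IP to back off for up
// to 10 extra intervals before it can proceed again.
var perIP = &limiter.Limiter[netip.Addr]{
    Size:           10_000,
    Max:            10,
    RefillInterval: time.Second,
    Overdraft:      10,
}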
func (*Limiter[K]) Allow ¶
Allow charges the key one token (up to the overdraft limit), and reports whether the key can perform an action.
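For example, a hypothetical HTTP middleware could gate each request on Allow, using the perIP limiter sketched above. Allow is assumed here to take only the key and report a boolean; remoteIP is a hypothetical helper that extracts the client address from the request:

// Sketch of per-IP throttling around an http.Handler.
func throttled(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if !perIP.Allow(remoteIP(r)) { // charge one token for this client
            http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
            return
        }
        next.ServeHTTP(w, r)
    })
}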
func (*Limiter[K]) DumpHTML ¶
DumpHTML writes the state of the limiter to the given writer, formatted as an HTML table. If onlyLimited is true, the output only lists keys that are currently being limited.
DumpHTML blocks other callers of the limiter while it collects the state for dumping. It should not be called on large limiters involved in hot codepaths.
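Given that caveat, one plausible use is a debug-only endpoint, sketched below. The parameters are assumed from the description above: an io.Writer destination and the onlyLimited flag.

// Sketch: expose limiter state on a debug endpoint, showing only
// currently limited keys unless ?all is supplied.
http.HandleFunc("/debug/limiter", func(w http.ResponseWriter, r *http.Request) {
    onlyLimited := r.URL.Query().Get("all") == ""
    perIP.DumpHTML(w, onlyLimited)
})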