Documentation
Types

type DistributedRateLimiting

type DistributedRateLimiting struct {
	// How often to sync internal state to storage. Default: 5s
	WriteInterval caddy.Duration `json:"write_interval,omitempty"`

	// How often to sync other instances' states from storage.
	// Default: 5s
	ReadInterval caddy.Duration `json:"read_interval,omitempty"`
	// contains filtered or unexported fields
}
DistributedRateLimiting enables and customizes distributed rate limiting. It works by writing out the state of all internal rate limiters to storage, and reading in the state of all other rate limiters in the cluster, every so often.
Distributed rate limiting is not exact like the standard internal rate limiting, but it is eventually consistent. Shorter (more frequent) sync intervals result in higher consistency and precision, at the cost of more I/O and CPU overhead.
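As a sketch, distributed mode might be enabled in a handler's JSON config like this (the interval values are illustrative, not recommendations):

```json
{
	"handler": "rate_limit",
	"distributed": {
		"write_interval": "3s",
		"read_interval": "3s"
	}
}
```

Setting `"distributed": {}` with no fields uses the 5s defaults for both intervals.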
type Handler

type Handler struct {
	// RateLimits contains the definitions of the rate limit zones, keyed by name.
	// The name **MUST** be globally unique across all other instances of this handler.
	RateLimits map[string]*RateLimit `json:"rate_limits,omitempty"`

	// Percentage jitter on expiration times (example: 0.2 means 20% jitter)
	Jitter float64 `json:"jitter,omitempty"`

	// How often to scan for expired rate limit states. Default: 1m.
	SweepInterval caddy.Duration `json:"sweep_interval,omitempty"`

	// Enables distributed rate limiting. For this to work properly, rate limit
	// zones must have the same configuration for all instances in the cluster
	// because an instance's own configuration is used to calculate whether a
	// rate limit is exceeded. As usual, a cluster is defined to be all instances
	// sharing the same storage configuration.
	Distributed *DistributedRateLimiting `json:"distributed,omitempty"`

	// Storage backend through which rate limit state is synced. If not set,
	// the global or default storage configuration will be used.
	StorageRaw json.RawMessage `json:"storage,omitempty" caddy:"namespace=caddy.storage inline_key=module"`
	// contains filtered or unexported fields
}
Handler implements rate limiting functionality.
If a rate limit is exceeded, an HTTP error with status 429 will be returned. This error can be handled using the conventional error handling routes in your config. An additional placeholder, `{http.rate_limit.exceeded.name}`, is made available for logging or handling; it contains the name of the rate limit zone whose limit was exceeded.
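For instance, a Caddyfile error route might use the placeholder like this (a minimal sketch; the matcher name `@ratelimited` and the response text are illustrative):

```caddyfile
handle_errors {
	@ratelimited expression {http.error.status_code} == 429
	handle @ratelimited {
		respond "Limit for zone {http.rate_limit.exceeded.name} exceeded" 429
	}
}
```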
func (Handler) CaddyModule
func (Handler) CaddyModule() caddy.ModuleInfo
CaddyModule returns the Caddy module information.
func (*Handler) UnmarshalCaddyfile

UnmarshalCaddyfile implements caddyfile.Unmarshaler. Syntax:

rate_limit {
	zone <name> {
		key    <string>
		window <duration>
		events <max_events>
	}
	distributed {
		read_interval  <duration>
		write_interval <duration>
	}
	storage <module...>
	jitter  <percent>
	sweep_interval <duration>
}
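Following that syntax, a minimal Caddyfile usage might look like this (the zone name `website` and the limits are illustrative assumptions, not defaults):

```caddyfile
example.com {
	rate_limit {
		zone website {
			key    {remote_host}
			events 10
			window 1m
		}
	}
	respond "Hello"
}
```

This sketch allows each client host at most 10 requests per sliding 1-minute window.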
type RateLimit

type RateLimit struct {
	// Request matchers, which define the class of requests that are in the RL zone.
	MatcherSetsRaw caddyhttp.RawMatcherSets `json:"match,omitempty" caddy:"namespace=http.matchers"`

	// The key which uniquely differentiates rate limits within this zone. It could
	// be a static string (no placeholders), resulting in one and only one rate limiter
	// for the whole zone. Or, placeholders could be used to dynamically allocate
	// rate limiters. For example, a key of "foo" will create exactly one rate limiter
	// for all clients. But a key of "{http.request.remote.host}" will create one rate
	// limiter for each different client IP address.
	Key string `json:"key,omitempty"`

	// Number of events allowed within the window.
	MaxEvents int `json:"max_events,omitempty"`

	// Duration of the sliding window.
	Window caddy.Duration `json:"window,omitempty"`
	// contains filtered or unexported fields
}
RateLimit describes an HTTP rate limit zone.
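Putting the fields together, a zone keyed per client IP might be declared in the handler's JSON config like this (a sketch; the zone name `api_zone`, the path matcher, and the limits are illustrative assumptions):

```json
{
	"handler": "rate_limit",
	"rate_limits": {
		"api_zone": {
			"match": [{"path": ["/api/*"]}],
			"key": "{http.request.remote.host}",
			"max_events": 100,
			"window": "1m"
		}
	}
}
```

Because the key contains a placeholder, a separate rate limiter is allocated for each distinct client IP; a static key would instead share one limiter across all matching requests.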