collection package

v1.2.14-prerelease11
Published: Oct 18, 2024 License: MIT Imports: 16 Imported by: 0

Documentation

Overview

Package collection contains the limiting-host ratelimit usage-tracking and enforcement logic, which acts as a quotas.Collection.

At a very high level, this wraps quotas.Limiter values to do a few additional things in the context of the github.com/uber/cadence/common/quotas/global ratelimiter system:

  • keep track of usage per key (quotas.Limiter does not support this natively)
  • periodically report usage to each key's "aggregator" host (batched and fanned out in parallel)
  • apply the aggregator's returned per-key RPS limits to future requests
  • fall back to the wrapped limiter in case of failures (handled internally in internal.FallbackLimiter)
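
As a rough sketch of how a caller uses the behaviors listed above (the key format, the error value, and the Allow method on quotas.Limiter are illustrative assumptions here; constructing and starting the Collection is not shown):

	package main

	import (
		"errors"

		"github.com/uber/cadence/common/quotas/global/collection"
	)

	// errBusy is a stand-in for whatever "busy" error the caller normally returns.
	var errBusy = errors.New("ratelimit exceeded")

	// serveWithLimit sketches per-request use of an already-constructed Collection:
	// For returns the per-key limiter, the request is counted against that key, and
	// the accumulated usage is reported to the key's aggregator host in the background.
	func serveWithLimit(c *collection.Collection, key string) error {
		if !c.For(key).Allow() {
			return errBusy
		}
		// ... handle the request ...
		return nil
	}

	func main() {}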

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Collection

type Collection struct {
	// contains filtered or unexported fields
}

Collection wraps three kinds of ratelimiter collections, and allows choosing/shadowing which one is used per key:

  1. a "global" collection, which tracks usage, sends data to aggregators, and adjusts to match request patterns between hosts.
  2. a "local" collection, which tracks usage, but all decisions stay local (no requests are sent anywhere to share load info).
  3. a "disabled" collection, which does NOT track usage, and is used to bypass as much of this Collection as possible

1 is the reason this Collection exists: limiter usage is tracked and submitted to aggregating hosts to drive the whole "global load-balanced ratelimiter" system. Internally, this will fall back to a local collection if there is insufficient data or too many errors.

2 is essentially just a pass-through of the "local" collection, but with added allow/reject metrics. Currently, all of these are our "target RPS / num hosts in ring" ratelimiters. This is a lower-cost and MUCH less complex system, and it SHOULD be used if your Cadence cluster receives requests in a roughly random way (e.g. any client-side request goes to a roughly-fair roughly-random Frontend host).
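
For a concrete sense of the "target RPS / num hosts in ring" math, here is a toy calculation using golang.org/x/time/rate as a stand-in limiter (the numbers are made up, and this is not this package's implementation):

	package main

	import (
		"fmt"

		"golang.org/x/time/rate"
	)

	func main() {
		// A cluster-wide target of 1000 RPS split across 5 frontend hosts leaves
		// each host a purely local budget of 200 RPS, with no cross-host traffic.
		const targetRPS, hostsInRing = 1000.0, 5
		perHost := targetRPS / hostsInRing

		limiter := rate.NewLimiter(rate.Limit(perHost), int(perHost)) // burst chosen arbitrarily
		fmt.Println("per-host RPS:", perHost, "allowed now:", limiter.Allow())
	}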

3 is a *complete* pass-through of the "local" collection (no metrics, no monitoring, nothing), and is intended to be temporary. It is meant to be a maximum-safety fallback mode during initial rollout, and should be removed once 2 is demonstrated to be safe enough to use in all cases.

And last but not least: 1's local fallback and 2 MUST NOT share ratelimiter instances, or the local instances will be double-counted when shadowing. They should likely be configured to behave identically, but they need to be separate instances.
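
To make the double-counting concrete, here is a minimal illustration using golang.org/x/time/rate as a stand-in for the real limiter instances (not this package's code, just the failure mode described above):

	package main

	import (
		"fmt"

		"golang.org/x/time/rate"
	)

	func main() {
		// WRONG: if 1's local fallback and 2 wrap the same instance, a single
		// shadowed request consumes two tokens from one bucket.
		shared := rate.NewLimiter(10, 10)
		shared.Allow() // counted by the global collection's fallback
		shared.Allow() // counted again by the shadowing local collection

		// RIGHT: identically configured but separate instances each count it once.
		globalFallback := rate.NewLimiter(10, 10)
		localShadow := rate.NewLimiter(10, 10)
		globalFallback.Allow()
		localShadow.Allow()

		fmt.Printf("shared: %.0f tokens left; separate: %.0f and %.0f\n",
			shared.Tokens(), globalFallback.Tokens(), localShadow.Tokens())
	}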

func (*Collection) For

func (c *Collection) For(key string) quotas.Limiter

func (*Collection) OnStart

func (c *Collection) OnStart(ctx context.Context) error

OnStart follows fx's OnStart hook semantics.

func (*Collection) OnStop

func (c *Collection) OnStop(ctx context.Context) error

OnStop follows fx's OnStop hook semantics.
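
A sketch of wiring both hooks into an fx lifecycle (generic fx usage, not necessarily how cadence registers this collection; the method values already match fx.Hook's func(context.Context) error shape):

	package main

	import (
		"go.uber.org/fx"

		"github.com/uber/cadence/common/quotas/global/collection"
	)

	// registerLifecycle attaches the collection's hooks to an fx lifecycle, so
	// background usage reporting starts with the application and stops on shutdown.
	func registerLifecycle(lc fx.Lifecycle, c *collection.Collection) {
		lc.Append(fx.Hook{
			OnStart: c.OnStart,
			OnStop:  c.OnStop,
		})
	}

	func main() {}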

func (*Collection) TestOverrides

func (c *Collection) TestOverrides(t *testing.T, timesource *clock.MockedTimeSource, km *shared.KeyMapper)

Directories

Path        Synopsis
internal    Package internal protects these types' concurrency primitives and other internals from accidental misuse.
