gubernator

README

Distributed Rate Limiting Service

Gubernator

Gubernator is a distributed, high-performance, cloud-native, and stateless rate-limiting service.

Features
  • Gubernator evenly distributes rate limit requests across the entire cluster, which means you can scale the system by simply adding more nodes.
  • Gubernator doesn't rely on external caches like memcached or redis, so there is no deployment synchronization with a dependent service. This makes dynamically growing or shrinking the cluster in an orchestration system like Kubernetes or Nomad trivial.
  • Gubernator holds no state on disk; its configuration is passed to it by the client on a per-request basis.
  • Gubernator provides both GRPC and HTTP access to the API.
  • It can be run as a sidecar to services that need rate limiting, or as a separate service.
  • It can be used as a library to implement a domain-specific rate limiting service.
  • Supports optional eventually consistent rate limit distribution for extremely high throughput environments. (See GLOBAL behavior in architecture.md.)
  • Gubernator is the English pronunciation of governor in Russian; also, it sounds cool.
Stateless configuration

Gubernator is stateless in that it doesn't require disk space to operate. No configuration or cache data is ever synced to disk. This is because every request to gubernator includes the config for the rate limit. At first this might seem like unnecessary overhead on each request. In reality, however, a rate limit config is made up of only four 64-bit integers.

Quick Start
# Download the docker-compose file
$ curl -O https://raw.githubusercontent.com/gubernator-io/gubernator/master/docker-compose.yaml
# Run the docker container
$ docker-compose up -d

Now you can make rate limit requests via curl

# Hit the HTTP API at localhost:1050 (GRPC is at 1051)
$ curl http://localhost:1050/v1/HealthCheck

# Make a rate limit request
$ curl http://localhost:1050/v1/GetRateLimits \
  --header 'Content-Type: application/json' \
  --data '{
    "requests": [
        {
            "name": "requests_per_sec",
            "uniqueKey": "account:12345",
            "hits": "1",
            "limit": "10",
            "duration": "1000"
        }
    ]
}'
ProtoBuf Structure

An example rate limit request sent via GRPC might look like the following

rate_limits:
    # Scopes the request to a specific rate limit
  - name: requests_per_sec
    # A unique_key that identifies this instance of a rate limit request
    unique_key: account_id=123|source_ip=172.0.0.1
    # The number of hits we are requesting
    hits: 1
    # The total number of requests allowed for this rate limit
    limit: 100
    # The duration of the rate limit in milliseconds
    duration: 1000
    # The algorithm used to calculate the rate limit
    # 0 = Token Bucket
    # 1 = Leaky Bucket
    algorithm: 0
    # The behavior of the rate limit in gubernator.
    # 0 = BATCHING (Enables batching of requests to peers)
    # 1 = NO_BATCHING (Disables batching)
    # 2 = GLOBAL (Enable global caching for this rate limit)
    behavior: 0

An example response would be

rate_limits:
    # The status of the rate limit.  OK = 0, OVER_LIMIT = 1
  - status: 0
    # The current configured limit
    limit: 10
    # The number of requests remaining
    remaining: 7
    # A unix timestamp in milliseconds of when the bucket will reset, or if
    # OVER_LIMIT is set it is the time at which the rate limit will no
    # longer return OVER_LIMIT.
    reset_time: 1551309219226
    # Additional metadata about the request the client might find useful
    metadata:
      # This is the name of the coordinator that rate limited this request
      "owner": "api-n03.staging.us-east-1.mailgun.org:9041"
Rate Limit Algorithms

Gubernator currently supports 2 rate limit algorithms.

  1. Token Bucket implementation starts with an empty bucket, then each Hit adds a token to the bucket until the bucket is full. Once the bucket is full, requests will return OVER_LIMIT until the reset_time is reached, at which point the bucket is emptied and requests will return UNDER_LIMIT. This algorithm is useful for enforcing very bursty limits. (I.e., applications where a single request can add more than 1 hit to the bucket, or non-network-based queuing systems.) The downside to this implementation is that once you have hit the limit, no more requests are allowed until the configured rate limit duration resets the bucket to zero.

  2. Leaky Bucket is implemented similarly to Token Bucket, where OVER_LIMIT is returned when the bucket is full. However, tokens leak from the bucket at a consistent rate, calculated as duration / limit. This algorithm is useful for metering: because the bucket leaks, traffic can continue without waiting for the configured rate limit duration to reset the bucket to zero (see the sketch below).
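
The following is a minimal, illustrative Go sketch of the leaky bucket refill math described above; it is not gubernator's internal implementation, and the function name and variables are invented for the example.

package main

import "fmt"

// leakyBucketCheck illustrates the leaky bucket math only: tokens leak out of
// the bucket at a constant rate of limit/duration, freeing capacity for new hits.
func leakyBucketCheck(limit, durationMs, remaining, elapsedMs, hits int64) (int64, bool) {
	// Capacity freed since the bucket was last updated.
	leaked := elapsedMs * limit / durationMs
	remaining += leaked
	if remaining > limit {
		remaining = limit
	}
	if hits > remaining {
		return remaining, false // OVER_LIMIT
	}
	return remaining - hits, true // UNDER_LIMIT
}

func main() {
	// limit=100 per 1000ms; after 500ms, 50 tokens worth of capacity has leaked back out.
	remaining, ok := leakyBucketCheck(100, 1000, 10, 500, 5)
	fmt.Println(remaining, ok) // 55 true
}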

Performance

In our production environment, for every request to our API we send two rate limit requests to gubernator for evaluation: one to rate limit the HTTP request itself and the other to limit the number of recipients a user can send an email to within a specific duration. Under this setup, a single gubernator node fields over 2,000 requests a second, with most batched responses returned in under 1 millisecond.

requests graph

Peer requests forwarded to owning nodes typically respond in under 30 microseconds.

peer requests graph

NOTE: The above graphs only report the slowest request within the 1-second sample time, so you are seeing the slowest requests gubernator serves to clients.

Gubernator allows users to choose non-batching behavior, which would further reduce latency for client rate limit requests. However, because of throughput requirements, our production environment uses Behavior=BATCHING with the default 500 microsecond window. In production we have observed batch sizes of 1,000 during peak API usage. Other users who don't have the same high traffic demands could disable batching and would see lower latencies, but at the cost of throughput.

Gregorian Behavior

Users may choose a behavior called DURATION_IS_GREGORIAN which changes the behavior of the Duration field. When Behavior is set to DURATION_IS_GREGORIAN, the rate limit is reset whenever the end of the selected Gregorian calendar interval is reached.

This is useful when you want to impose daily or monthly limits on a resource. With this behavior, the limit on the resource is reset at the end of the day or month, regardless of when Gubernator received the first rate limit request.

Given the following Duration values

  • 0 = Minutes
  • 1 = Hours
  • 2 = Days
  • 3 = Weeks
  • 4 = Months
  • 5 = Years

Examples when using Behavior = DURATION_IS_GREGORIAN (a request sketch follows the list):

  • If Duration = 2 (Days) then the rate limit will reset to Current = 0 at the end of the day on which the rate limit was created.
  • If Duration = 0 (Minutes) then the rate limit will reset to Current = 0 at the end of the minute in which the rate limit was created.
  • If Duration = 4 (Months) then the rate limit will reset to Current = 0 at the end of the month in which the rate limit was created.
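
As a concrete illustration, here is a hedged Go sketch of a monthly limit built with the library types documented later on this page; the RateLimitReq field names are assumed from the generated protobuf message, and the import path is assumed from the module path.

import gubernator "github.com/gubernator-io/gubernator/v2"

// monthlyLimit builds a request whose bucket resets at the end of the calendar
// month, regardless of when the first hit arrived.
func monthlyLimit(account string) *gubernator.RateLimitReq {
	return &gubernator.RateLimitReq{
		Name:      "emails_per_month",
		UniqueKey: "account:" + account,
		Hits:      1,
		Limit:     10000,
		Duration:  gubernator.GregorianMonths, // 4 = Months
		Algorithm: gubernator.Algorithm_TOKEN_BUCKET,
		Behavior:  gubernator.Behavior_DURATION_IS_GREGORIAN,
	}
}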

Reset Remaining Behavior

Users may add the behavior Behavior_RESET_REMAINING to the rate check request. This resets the rate limit as if it were newly created on first use.

When using Reset Remaining, the Hits field should be 0, as in the sketch below.
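
Continuing the request sketch from the Gregorian section above, a hedged example of resetting a bucket with the documented SetBehavior helper:

// Add the RESET_REMAINING flag to an existing request and send zero hits,
// per the note above; this resets the bucket as if it were newly created.
req := monthlyLimit("12345")
gubernator.SetBehavior(&req.Behavior, gubernator.Behavior_RESET_REMAINING, true)
req.Hits = 0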

Drain Over Limit Behavior

Users may add the behavior Behavior_DRAIN_OVER_LIMIT to the rate check request. A GetRateLimits call drains the remaining counter on the first over-limit event. Successive GetRateLimits calls will then return a zero remaining counter rather than any residual value. This behavior works best with the token bucket algorithm, because the Remaining counter stays at zero after an over-limit event until the reset time, whereas the leaky bucket algorithm will immediately update Remaining to a non-zero value.

This facilitates scenarios that require an over-limit event to stay over limit until the rate limit resets. This approach is necessary when a caller must make two rate checks, one before and one after a piece of work, and the Hits amount is not known until the work completes.

  • Before process: Call GetRateLimits with Hits=0 to check the value of Remaining counter. If Remaining is zero, it's known that the rate limit is depleted and the process can be aborted.
  • After process: Call GetRateLimits with a user specified Hits value. If the call returns over limit, the process cannot be aborted because it has already completed. With DRAIN_OVER_LIMIT behavior, the Remaining count is drained to zero.

Once an over limit occurs in the "After" step, successive processes will detect the over limit state in the "Before" step.
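
The before/after pattern might look like the following hedged sketch, assuming a gubernator.V1Client in client (for example from Daemon.MustClient()), a context in ctx, and a hypothetical actualHits value learned after the work; the Remaining field name follows the generated RateLimitResp message.

// check sends a single rate limit request with the given number of hits.
// DRAIN_OVER_LIMIT keeps Remaining at zero once the limit has been exceeded.
check := func(hits int64) (*gubernator.RateLimitResp, error) {
	resp, err := client.GetRateLimits(ctx, &gubernator.GetRateLimitsReq{
		Requests: []*gubernator.RateLimitReq{{
			Name:      "expensive_job",
			UniqueKey: "account:12345",
			Hits:      hits,
			Limit:     100,
			Duration:  gubernator.Minute,
			Algorithm: gubernator.Algorithm_TOKEN_BUCKET,
			Behavior:  gubernator.Behavior_DRAIN_OVER_LIMIT,
		}},
	})
	if err != nil {
		return nil, err
	}
	return resp.Responses[0], nil
}

// Before: peek with Hits=0 and abort if the limit is already depleted.
if r, err := check(0); err == nil && r.Remaining == 0 {
	return // over limit; do not start the job
}

// ... run the job, learning its actual cost (actualHits) ...

// After: report the real cost. If this pushes the limit over, DRAIN_OVER_LIMIT
// drains Remaining to zero so the next "before" check sees the over-limit state.
_, _ = check(actualHits)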

Gubernator as a library

If you are using Go, you can use Gubernator as a library. This is useful if you wish to implement a rate limit service with your own company-specific model on top. We do this internally at Mailgun with a service we creatively called ratelimits, which keeps track of the limits imposed on a per-account basis. In this way you can utilize the power and speed of Gubernator while still layering business logic and integrating domain-specific problems into your rate limiting service.

When you use the library, your service becomes a full member of the cluster, participating in the same consistent hashing and caching as a standalone Gubernator server would. All you need to do is provide the GRPC server instance and tell Gubernator where the peers in your cluster are located. cmd/gubernator/main.go is a great example of how to use Gubernator as a library.
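
A minimal embedding sketch using the documented SpawnDaemon and Daemon APIs follows; the listen addresses are example values, and in practice peers are usually populated by etcd, Kubernetes, or member-list discovery rather than by calling SetPeers directly.

// Start a full gubernator instance inside this process.
daemon, err := gubernator.SpawnDaemon(context.Background(), gubernator.DaemonConfig{
	GRPCListenAddress: "127.0.0.1:1051",
	HTTPListenAddress: "127.0.0.1:1050",
})
if err != nil {
	log.Fatal(err)
}
defer daemon.Close()

// Tell this instance about its peers; normally peer discovery does this for you.
daemon.SetPeers([]gubernator.PeerInfo{{GRPCAddress: "127.0.0.1:1051", IsOwner: true}})

// MustClient returns a V1Client that can be used for rate limit checks.
client := daemon.MustClient()
_ = client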

Optional Disk Persistence

While the Gubernator server currently doesn't directly support disk persistence, the Gubernator library does provide interfaces through which library users can implement persistence. The Gubernator library has two interfaces available for disk persistence. Depending on the use case, an implementor can implement the Loader interface and only support persistence of rate limits at startup and shutdown, or implement the Store interface, in which case Gubernator will continuously call OnChange() and Get() to keep the in-memory cache and the persistent store up to date with the latest rate limit data. Both interfaces can be implemented simultaneously to ensure data is always saved to persistent storage.

For those who choose to implement the Store interface, it is not required to store ALL the rate limits received via OnChange(). For instance, if you wish to persist only rate limits with durations longer than a minute, a day, or a month, calls to OnChange() can check the duration of a rate limit and decide to persist only those rate limits whose durations exceed a self-determined threshold.
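
For illustration, here is a hedged sketch of a Loader that persists only long-lived rate limits; the Loader interface and CacheItem fields are documented in the Types section below, the file handling is elided, and the one-hour threshold is an arbitrary example.

// fileLoader saves and restores rate limits that have more than an hour left
// before they expire; everything shorter-lived is simply dropped.
type fileLoader struct{ path string }

func (f *fileLoader) Load() (chan *gubernator.CacheItem, error) {
	ch := make(chan *gubernator.CacheItem)
	go func() {
		defer close(ch) // closing the channel tells gubernator loading is done
		// ... read previously saved items from f.path and send them on ch ...
	}()
	return ch, nil
}

func (f *fileLoader) Save(in chan *gubernator.CacheItem) error {
	for item := range in { // read until gubernator closes the channel
		if item.ExpireAt-gubernator.MillisecondNow() < int64(time.Hour/time.Millisecond) {
			continue // skip short-lived rate limits
		}
		// ... append item to the file at f.path ...
	}
	return nil
}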

API

All methods are accessed via GRPC but are also exposed via HTTP using the GRPC Gateway.

Health Check

Health check returns unhealthy in the event a peer is reported by etcd or Kubernetes as up but the server instance is unable to contact that peer via its advertised address.

GRPC
rpc HealthCheck (HealthCheckReq) returns (HealthCheckResp)
HTTP
GET /v1/HealthCheck

Example response:

{
  "status": "healthy",
  "peer_count": 3
}
Get Rate Limit

Rate limits can be applied or retrieved using this interface. If the client makes a request to the server with hits: 0, then the current state of the rate limit is retrieved but not incremented.

GRPC
rpc GetRateLimits (GetRateLimitsReq) returns (GetRateLimitsResp)
HTTP
POST /v1/GetRateLimits

Example Payload

{
  "requests": [
    {
      "name": "requests_per_sec",
      "uniqueKey": "account:12345",
      "hits": "1",
      "limit": "10",
      "duration": "1000"
    }
  ]
}

Example response:

{
  "responses": [
    {
      "status": "UNDER_LIMIT",
      "limit": "10",
      "remaining": "9",
      "reset_time": "1690855128786",
      "error": "",
      "metadata": {
        "owner": "gubernator:1051"
      }
    }
  ]
}
Deployment

NOTE: Gubernator uses etcd, Kubernetes, or round-robin DNS to discover peers and establish a cluster. If you don't have any of these, the docker-compose method is the simplest way to try gubernator out.

Docker with existing etcd cluster
$ docker run -p 1051:1051 -p 1050:1050 -e GUBER_ETCD_ENDPOINTS=etcd1:2379,etcd2:2379 \
   ghcr.io/gubernator-io/gubernator:latest

# Hit the HTTP API at localhost:1050
$ curl http://localhost:1050/v1/HealthCheck
Kubernetes
# Download the kubernetes deployment spec
$ curl -O https://raw.githubusercontent.com/gubernator-io/gubernator/master/k8s-deployment.yaml

# Edit the deployment file to change the environment config variables
$ vi k8s-deployment.yaml

# Create the deployment (includes headless service spec)
$ kubectl create -f k8s-deployment.yaml
Round-robin DNS

If your DNS service supports auto-registration, for example AWS Route53 service discovery, you can use the same fully-qualified domain name both to let your business logic containers or instances find gubernator and to let gubernator containers/instances find each other.

TLS

Gubernator supports TLS for both HTTP and GRPC connections. You can see an example with self-signed certs by running docker-compose-tls.yaml:

# Run docker compose
$ docker-compose -f docker-compose-tls.yaml up -d

# Hit the HTTP API at localhost:1050 (GRPC is at 1051)
$ curl --cacert certs/ca.cert --cert certs/gubernator.pem --key certs/gubernator.key  https://localhost:1050/v1/HealthCheck
Configuration

Gubernator is configured via environment variables, with an optional --config flag which takes a file of key/value pairs and places them into the local environment before startup.

See the example.conf for all available config options and their descriptions.

Architecture

See architecture.md for a full description of the architecture and the inner workings of gubernator.

Monitoring

Gubernator publishes Prometheus metrics for realtime monitoring. See prometheus.md for details.

OpenTelemetry Tracing (OTEL)

Gubernator supports OpenTelemetry. See tracing.md for details.

Documentation

Overview

Package gubernator is a reverse proxy.

It translates gRPC into RESTful JSON APIs.

Constants

const (
	Millisecond = 1
	Second      = 1000 * Millisecond
	Minute      = 60 * Second
)
const (
	Healthy   = "healthy"
	UnHealthy = "unhealthy"
)
const (
	V1_GetRateLimits_FullMethodName = "/pb.gubernator.V1/GetRateLimits"
	V1_HealthCheck_FullMethodName   = "/pb.gubernator.V1/HealthCheck"
)
const (
	GregorianMinutes int64 = iota
	GregorianHours
	GregorianDays
	GregorianWeeks
	GregorianMonths
	GregorianYears
)
const (
	PeersV1_GetPeerRateLimits_FullMethodName = "/pb.gubernator.PeersV1/GetPeerRateLimits"
	PeersV1_UpdatePeerGlobals_FullMethodName = "/pb.gubernator.PeersV1/UpdatePeerGlobals"
)

Variables

var (
	Algorithm_name = map[int32]string{
		0: "TOKEN_BUCKET",
		1: "LEAKY_BUCKET",
	}
	Algorithm_value = map[string]int32{
		"TOKEN_BUCKET": 0,
		"LEAKY_BUCKET": 1,
	}
)

Enum value maps for Algorithm.

var (
	Behavior_name = map[int32]string{
		0:  "BATCHING",
		1:  "NO_BATCHING",
		2:  "GLOBAL",
		4:  "DURATION_IS_GREGORIAN",
		8:  "RESET_REMAINING",
		16: "MULTI_REGION",
		32: "DRAIN_OVER_LIMIT",
	}
	Behavior_value = map[string]int32{
		"BATCHING":              0,
		"NO_BATCHING":           1,
		"GLOBAL":                2,
		"DURATION_IS_GREGORIAN": 4,
		"RESET_REMAINING":       8,
		"MULTI_REGION":          16,
		"DRAIN_OVER_LIMIT":      32,
	}
)

Enum value maps for Behavior.

var (
	Status_name = map[int32]string{
		0: "UNDER_LIMIT",
		1: "OVER_LIMIT",
	}
	Status_value = map[string]int32{
		"UNDER_LIMIT": 0,
		"OVER_LIMIT":  1,
	}
)

Enum value maps for Status.

var DebugEnabled = false

var File_gubernator_proto protoreflect.FileDescriptor

var File_peers_proto protoreflect.FileDescriptor

var PeersV1_ServiceDesc = grpc.ServiceDesc{
	ServiceName: "pb.gubernator.PeersV1",
	HandlerType: (*PeersV1Server)(nil),
	Methods: []grpc.MethodDesc{
		{
			MethodName: "GetPeerRateLimits",
			Handler:    _PeersV1_GetPeerRateLimits_Handler,
		},
		{
			MethodName: "UpdatePeerGlobals",
			Handler:    _PeersV1_UpdatePeerGlobals_Handler,
		},
	},
	Streams:  []grpc.StreamDesc{},
	Metadata: "peers.proto",
}

PeersV1_ServiceDesc is the grpc.ServiceDesc for PeersV1 service. It's only intended for direct use with grpc.RegisterService, and not to be introspected or modified (even as a copy)

var V1_ServiceDesc = grpc.ServiceDesc{
	ServiceName: "pb.gubernator.V1",
	HandlerType: (*V1Server)(nil),
	Methods: []grpc.MethodDesc{
		{
			MethodName: "GetRateLimits",
			Handler:    _V1_GetRateLimits_Handler,
		},
		{
			MethodName: "HealthCheck",
			Handler:    _V1_HealthCheck_Handler,
		},
	},
	Streams:  []grpc.StreamDesc{},
	Metadata: "gubernator.proto",
}

V1_ServiceDesc is the grpc.ServiceDesc for V1 service. It's only intended for direct use with grpc.RegisterService, and not to be introspected or modified (even as a copy)

Functions

func ContextWithStats

func ContextWithStats(ctx context.Context, stats *GRPCStats) context.Context

Returns a new `context.Context` that holds a reference to `GRPCStats`.

func FromTimeStamp

func FromTimeStamp(ts int64) time.Duration

FromTimeStamp is a convenience function to convert a unix millisecond timestamp to a time.Duration. Useful when working with gubernator request and response duration and reset_time fields.

func FromUnixMilliseconds

func FromUnixMilliseconds(ts int64) time.Time

FromUnixMilliseconds is a convenience function to convert a unix millisecond timestamp to a time.Time. Useful when working with gubernator request and response duration and reset_time fields.

func GetInstanceID

func GetInstanceID() string

GetInstanceID attempts to source a unique id from the environment; if none is found, it will generate a random instance id.

func GetTracingLevel

func GetTracingLevel() tracing.Level

func GregorianDuration

func GregorianDuration(now clock.Time, d int64) (int64, error)

GregorianDuration returns the entire duration of the Gregorian interval

func GregorianExpiration

func GregorianExpiration(now clock.Time, d int64) (int64, error)

GregorianExpiration returns a Gregorian interval as defined by the `DURATION_IS_GREGORIAN` Behavior. It returns the expiration time as the end of the Gregorian interval in milliseconds from `now`.

Example: If `now` is 2019-01-01 11:20:10 and `d` = GregorianMinutes then the returned expire time would be 2019-01-01 11:20:59 in milliseconds since epoch.

func HasBehavior

func HasBehavior(b Behavior, flag Behavior) bool

HasBehavior returns true if the provided behavior is set
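
Since Behavior values are bit flags, they can be combined with SetBehavior and tested with HasBehavior; a brief hedged sketch:

var b gubernator.Behavior
gubernator.SetBehavior(&b, gubernator.Behavior_GLOBAL, true)
gubernator.SetBehavior(&b, gubernator.Behavior_DURATION_IS_GREGORIAN, true)

if gubernator.HasBehavior(b, gubernator.Behavior_GLOBAL) {
	// GLOBAL caching is enabled for this rate limit
}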

func LocalHost

func LocalHost() string

LocalHost returns the local IP interface which Gubernator should bind to by default. There are a few reasons why the interface will be different on different platforms.

### Linux ### Gubernator should bind to either localhost IPV6 or IPV4 on Linux. In most cases using the DNS name "localhost" will result in binding to the correct interface depending on the Linux network configuration.

### GitHub Actions ### GHA does not support IPV6, as such trying to bind to `[::1]` will result in an error while running in GHA even if the linux runner defaults `localhost` to IPV6. As such we explicitly bind to 127.0.0.1 for GHA.

### Darwin (Apple OSX) ### Later versions of OSX bind to the publicly addressable interface if we use the DNS name "localhost" which is not the loop back interface. As a result OSX will warn the user with the message "Do you want the application to accept incoming network connections?" every time Gubernator is run, including when running unit tests. So for OSX we return 127.0.0.1.

func MillisecondNow

func MillisecondNow() int64

MillisecondNow returns unix epoch in milliseconds

func NewStaticBuilder

func NewStaticBuilder() resolver.Builder

NewStaticBuilder returns a builder which returns a staticResolver that tells GRPC to connect to a specific peer in the cluster.

func RandomString

func RandomString(n int) string

RandomString returns a random alpha string of 'n' length

func RegisterPeersV1Handler

func RegisterPeersV1Handler(ctx context.Context, mux *runtime.ServeMux, conn *grpc.ClientConn) error

RegisterPeersV1Handler registers the http handlers for service PeersV1 to "mux". The handlers forward requests to the grpc endpoint over "conn".

func RegisterPeersV1HandlerClient

func RegisterPeersV1HandlerClient(ctx context.Context, mux *runtime.ServeMux, client PeersV1Client) error

RegisterPeersV1HandlerClient registers the http handlers for service PeersV1 to "mux". The handlers forward requests to the grpc endpoint over the given implementation of "PeersV1Client". Note: the gRPC framework executes interceptors within the gRPC handler. If the passed in "PeersV1Client" doesn't go through the normal gRPC flow (creating a gRPC client etc.) then it will be up to the passed in "PeersV1Client" to call the correct interceptors.

func RegisterPeersV1HandlerFromEndpoint

func RegisterPeersV1HandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) (err error)

RegisterPeersV1HandlerFromEndpoint is same as RegisterPeersV1Handler but automatically dials to "endpoint" and closes the connection when "ctx" gets done.

func RegisterPeersV1HandlerServer

func RegisterPeersV1HandlerServer(ctx context.Context, mux *runtime.ServeMux, server PeersV1Server) error

RegisterPeersV1HandlerServer registers the http handlers for service PeersV1 to "mux". UnaryRPC :call PeersV1Server directly. StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906. Note that using this registration option will cause many gRPC library features to stop working. Consider using RegisterPeersV1HandlerFromEndpoint instead.

func RegisterPeersV1Server

func RegisterPeersV1Server(s grpc.ServiceRegistrar, srv PeersV1Server)

func RegisterV1Handler

func RegisterV1Handler(ctx context.Context, mux *runtime.ServeMux, conn *grpc.ClientConn) error

RegisterV1Handler registers the http handlers for service V1 to "mux". The handlers forward requests to the grpc endpoint over "conn".

func RegisterV1HandlerClient

func RegisterV1HandlerClient(ctx context.Context, mux *runtime.ServeMux, client V1Client) error

RegisterV1HandlerClient registers the http handlers for service V1 to "mux". The handlers forward requests to the grpc endpoint over the given implementation of "V1Client". Note: the gRPC framework executes interceptors within the gRPC handler. If the passed in "V1Client" doesn't go through the normal gRPC flow (creating a gRPC client etc.) then it will be up to the passed in "V1Client" to call the correct interceptors.

func RegisterV1HandlerFromEndpoint

func RegisterV1HandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) (err error)

RegisterV1HandlerFromEndpoint is same as RegisterV1Handler but automatically dials to "endpoint" and closes the connection when "ctx" gets done.

func RegisterV1HandlerServer

func RegisterV1HandlerServer(ctx context.Context, mux *runtime.ServeMux, server V1Server) error

RegisterV1HandlerServer registers the http handlers for service V1 to "mux". UnaryRPC :call V1Server directly. StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906. Note that using this registration option will cause many gRPC library features to stop working. Consider using RegisterV1HandlerFromEndpoint instead.

func RegisterV1Server

func RegisterV1Server(s grpc.ServiceRegistrar, srv V1Server)

func ResolveHostIP

func ResolveHostIP(addr string) (string, error)

ResolveHostIP attempts to discover the actual ip address of the host if the passed address is "0.0.0.0" or "::"

func RestConfig

func RestConfig() (*rest.Config, error)

func SetBehavior

func SetBehavior(b *Behavior, flag Behavior, set bool)

SetBehavior sets or clears the behavior depending on the boolean `set`

func SetupTLS

func SetupTLS(conf *TLSConfig) error

func ToTimeStamp

func ToTimeStamp(duration time.Duration) int64

ToTimeStamp is a convenience function to convert a time.Duration to a unix millisecond timestamp. Useful when working with gubernator request and response duration and reset_time fields.
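
A short hedged sketch tying the timestamp helpers to the millisecond fields they are meant for; resp is an assumed *RateLimitResp and the ResetTime field name follows the generated message.

durationMs := gubernator.ToTimeStamp(30 * time.Second)     // 30000, suitable for the Duration field
resetAt := gubernator.FromUnixMilliseconds(resp.ResetTime) // time.Time at which the bucket resets
_, _ = durationMs, resetAt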

func WaitForConnect

func WaitForConnect(ctx context.Context, addresses []string) error

WaitForConnect returns nil if the list of addresses is listening for connections; will block until context is cancelled.

Types

type Algorithm

type Algorithm int32
const (
	// Token bucket algorithm https://en.wikipedia.org/wiki/Token_bucket
	Algorithm_TOKEN_BUCKET Algorithm = 0
	// Leaky bucket algorithm https://en.wikipedia.org/wiki/Leaky_bucket
	Algorithm_LEAKY_BUCKET Algorithm = 1
)

func (Algorithm) Descriptor

func (Algorithm) Descriptor() protoreflect.EnumDescriptor

func (Algorithm) Enum

func (x Algorithm) Enum() *Algorithm

func (Algorithm) EnumDescriptor deprecated

func (Algorithm) EnumDescriptor() ([]byte, []int)

Deprecated: Use Algorithm.Descriptor instead.

func (Algorithm) Number

func (x Algorithm) Number() protoreflect.EnumNumber

func (Algorithm) String

func (x Algorithm) String() string

func (Algorithm) Type

type AsyncReq

type AsyncReq struct {
	WG      *sync.WaitGroup
	AsyncCh chan AsyncResp
	Req     *RateLimitReq
	Peer    *PeerClient
	Key     string
	Idx     int
}

type AsyncResp

type AsyncResp struct {
	Idx  int
	Resp *RateLimitResp
}

type Behavior

type Behavior int32

A set of int32 flags used to control the behavior of a rate limit in gubernator

const (
	// BATCHING is the default behavior. This enables batching requests which protects the
	// service from thundering herd. IE: When a service experiences spikes of unexpected high
	// volume requests.
	//
	// Using this option introduces a small amount of latency depending on
	// the `batchWait` setting. Defaults to around 500 Microseconds of additional
	// latency in low throughput situations. For high volume loads, batching can reduce
	// the overall load on the system substantially.
	Behavior_BATCHING Behavior = 0 // <-- this is here because proto requires it, but has no effect if used
	// Disables batching. Use this for super low latency rate limit requests when
	// thundering herd is not a concern but latency of requests is of paramount importance.
	Behavior_NO_BATCHING Behavior = 1
	// Enables Global caching of the rate limit. Use this if the rate limit applies globally to
	// all ingress requests. (IE: Throttle hundreds of thousands of requests to an entire
	// datacenter or cluster of http servers)
	//
	// Using this option gubernator will continue to use a single peer as the rate limit coordinator
	// to increment and manage the state of the rate limit, however the result of the rate limit is
	// distributed to each peer and cached locally. A rate limit request received from any peer in the
	// cluster will first check the local cache for a rate limit answer, if it exists the peer will
	// immediately return the answer to the client and asynchronously forward the aggregate hits to
	// the owner peer. Because of GLOBALS async nature we lose some accuracy in rate limit
	// reporting, which may result in allowing some requests beyond the chosen rate limit. However we
	// gain massive performance as every request coming into the system does not have to wait for a
	// single peer to decide if the rate limit has been reached.
	Behavior_GLOBAL Behavior = 2
	// Changes the behavior of the `Duration` field. When `Behavior` is set to `DURATION_IS_GREGORIAN`
	// the `Duration` of the rate limit is reset whenever the end of selected GREGORIAN calendar
	// interval is reached.
	//
	// Given the following `Duration` values
	//
	//	0 = Minutes
	//	1 = Hours
	//	2 = Days
	//	3 = Weeks
	//	4 = Months
	//	5 = Years
	//
	// Examples when using `Behavior = DURATION_IS_GREGORIAN`
	//
	// If  `Duration = 2` (Days) then the rate limit will expire at the end of the current day the
	// rate limit was created.
	//
	// If `Duration = 0` (Minutes) then the rate limit will expire at the end of the current minute
	// the rate limit was created.
	//
	// If `Duration = 4` (Months) then the rate limit will expire at the end of the current month
	// the rate limit was created.
	Behavior_DURATION_IS_GREGORIAN Behavior = 4
	// If this flag is set it causes the rate limit to reset any accrued hits stored in the cache, and will
	// ignore any `Hit` values provided in the current request. The effect this has is dependent on
	// algorithm chosen. For instance, if used with `TOKEN_BUCKET` it will immediately expire the
	// cache value. For `LEAKY_BUCKET` it sets the `Remaining` to `Limit`.
	Behavior_RESET_REMAINING Behavior = 8
	// Enables rate limits to be pushed to other regions. Currently this is only implemented when using
	// 'member-list' peer discovery. Also requires GUBER_DATA_CENTER to be set to different values on at
	// least 2 instances of Gubernator.
	Behavior_MULTI_REGION Behavior = 16
	// A GetRateLimits call drains the remaining counter on first over limit
	// event. Then, successive GetRateLimits calls will return zero remaining
	// counter and not any residual value.
	Behavior_DRAIN_OVER_LIMIT Behavior = 32
)

func (Behavior) Descriptor

func (Behavior) Descriptor() protoreflect.EnumDescriptor

func (Behavior) Enum

func (x Behavior) Enum() *Behavior

func (Behavior) EnumDescriptor deprecated

func (Behavior) EnumDescriptor() ([]byte, []int)

Deprecated: Use Behavior.Descriptor instead.

func (Behavior) Number

func (x Behavior) Number() protoreflect.EnumNumber

func (Behavior) String

func (x Behavior) String() string

func (Behavior) Type

type BehaviorConfig

type BehaviorConfig struct {
	// How long we should wait for a batched response from a peer
	BatchTimeout time.Duration
	// How long we should wait before sending a batched request
	BatchWait time.Duration
	// The max number of requests we can batch into a single peer request
	BatchLimit int
	// DisableBatching disables batching behavior for all ratelimits.
	DisableBatching bool

	// How long a non-owning peer should wait before syncing hits to the owning peer
	GlobalSyncWait time.Duration
	// How long we should wait for global sync responses from peers
	GlobalTimeout time.Duration
	// The max number of global updates we can batch into a single peer request
	GlobalBatchLimit int
	// ForceGlobal forces global behavior on all rate limit checks.
	ForceGlobal bool

	// Number of concurrent requests that will be made to peers. Defaults to 100
	GlobalPeerRequestsConcurrency int
}

BehaviorConfig controls the handling of rate limits in the cluster

type Cache

type Cache interface {
	Add(item *CacheItem) bool
	UpdateExpiration(key string, expireAt int64) bool
	GetItem(key string) (value *CacheItem, ok bool)
	Each() chan *CacheItem
	Remove(key string)
	Size() int64
	Close() error
}

type CacheItem

type CacheItem struct {
	Algorithm Algorithm
	Key       string
	Value     interface{}

	// Timestamp when rate limit expires in epoch milliseconds.
	ExpireAt int64
	// Timestamp when the cache should invalidate this rate limit. This is useful when used in conjunction with
	// a persistent store to ensure our node has the most up to date info from the store. Ignored if set to `0`
	// It is set by the persistent store implementation to indicate when the node should query the persistent store
	// for the latest rate limit data.
	InvalidAt int64
}

func (*CacheItem) IsExpired

func (item *CacheItem) IsExpired() bool

type Config

type Config struct {
	InstanceID string

	// (Required) A list of GRPC servers to register our instance with
	GRPCServers []*grpc.Server

	// (Optional) Adjust how gubernator behaviors are configured
	Behaviors BehaviorConfig

	// (Optional) The cache implementation
	CacheFactory func(maxSize int) Cache

	// (Optional) A persistent store implementation. Allows the implementor the ability to store the rate limits this
	// instance of gubernator owns. It's up to the implementor to decide what rate limits to persist.
	// For instance an implementor might only persist rate limits that have an expiration of
	// longer than 1 hour.
	Store Store

	// (Optional) A loader from a persistent store. Allows the implementor the ability to load and save
	// the contents of the cache when the gubernator instance is started and stopped
	Loader Loader

	// (Optional) This is the peer picker algorithm the server will use to decide which peer in the local cluster
	// will own the rate limit
	LocalPicker PeerPicker

	// (Optional) This is the peer picker algorithm the server will use when deciding which remote peer to forward
	// rate limits to when a `Config.DataCenter` is set to something other than empty string.
	RegionPicker RegionPeerPicker

	// (Optional) This is the name of our local data center. This value will be used by LocalPicker when
	// deciding who we should immediately connect to for our local picker. Should remain empty if not
	// using multi data center support.
	DataCenter string

	// (Optional) A Logger which implements the declared logger interface (typically *logrus.Entry)
	Logger FieldLogger

	// (Optional) The TLS config used when connecting to gubernator peers
	PeerTLS *tls.Config

	// (Optional) If true, will emit traces for GRPC client requests to other peers
	PeerTraceGRPC bool

	// (Optional) The number of go routine workers used to process concurrent rate limit requests
	// Default is set to number of CPUs.
	Workers int

	// (Optional) The total size of the cache used to store rate limits. Defaults to 50,000
	CacheSize int

	// (Optional) EventChannel receives hit events
	EventChannel chan<- HitEvent
}

Config for a gubernator instance

func (*Config) SetDefaults

func (c *Config) SetDefaults() error

type DNSPool

type DNSPool struct {
	// contains filtered or unexported fields
}

func NewDNSPool

func NewDNSPool(conf DNSPoolConfig) (*DNSPool, error)

func (*DNSPool) Close

func (p *DNSPool) Close()

type DNSPoolConfig

type DNSPoolConfig struct {
	// (Required) The FQDN that should resolve to gubernator instance ip addresses
	FQDN string

	// (Required) Filesystem path to "/etc/resolv.conf", override for testing
	ResolvConf string

	// (Required) Own GRPC address
	OwnAddress string

	// (Required) Called when the list of gubernators in the pool updates
	OnUpdate UpdateFunc

	Logger FieldLogger
}

type DNSResolver

type DNSResolver struct {
	Servers []string
	// contains filtered or unexported fields
}

Adapted from TimothyYe/godns. DNSResolver represents a DNS resolver.

func NewFromResolvConf

func NewFromResolvConf(path string) (*DNSResolver, error)

NewFromResolvConf initializes a DNSResolver from a resolv.conf-like file.

type Daemon

type Daemon struct {
	GRPCListeners []net.Listener
	HTTPListener  net.Listener
	V1Server      *V1Instance
	InstanceID    string
	PeerInfo      PeerInfo
	// contains filtered or unexported fields
}

func SpawnDaemon

func SpawnDaemon(ctx context.Context, conf DaemonConfig) (*Daemon, error)

SpawnDaemon starts a new gubernator daemon according to the provided DaemonConfig. This function will block until the daemon responds to connections as specified by GRPCListenAddress and HTTPListenAddress

func (*Daemon) Client

func (s *Daemon) Client() (V1Client, error)

func (*Daemon) Close

func (s *Daemon) Close()

Close gracefully closes all server connections and listening sockets

func (*Daemon) Config

func (s *Daemon) Config() DaemonConfig

Config returns the current config for this Daemon

func (*Daemon) MustClient

func (s *Daemon) MustClient() V1Client

func (*Daemon) Peers

func (s *Daemon) Peers() []PeerInfo

Peers returns the peers this daemon knows about

func (*Daemon) SetPeers

func (s *Daemon) SetPeers(in []PeerInfo)

SetPeers sets the peers for this daemon

func (*Daemon) Start

func (s *Daemon) Start(ctx context.Context) error

type DaemonConfig

type DaemonConfig struct {
	// (Required) The `address:port` that will accept GRPC requests
	GRPCListenAddress string

	// (Required) The `address:port` that will accept HTTP requests
	HTTPListenAddress string

	// (Optional) The `address:port` that will accept HTTP requests for /v1/HealthCheck
	// without verifying client certificates. Only starts listener when TLS config is provided.
	// TLS config is identical to what is applied on HTTPListenAddress, except that server
	// does not attempt to verify client certificate. Useful when your health probes cannot
	// provide client certificate but you want to enforce mTLS in other RPCs (like in K8s)
	HTTPStatusListenAddress string

	// (Optional) Defines the max age connection from client in seconds.
	// Default is infinity
	GRPCMaxConnectionAgeSeconds int

	// (Optional) The `address:port` that is advertised to other Gubernator peers.
	// Defaults to `GRPCListenAddress`
	AdvertiseAddress string

	// (Optional) The number of items in the cache. Defaults to 50,000
	CacheSize int

	// (Optional) The number of go routine workers used to process concurrent rate limit requests
	// Defaults to the number of CPUs returned by runtime.NumCPU()
	Workers int

	// (Optional) Configure how behaviours behave
	Behaviors BehaviorConfig

	// (Optional) Identifies the datacenter this instance is running in. For
	// use with multi-region support
	DataCenter string

	// (Optional) Which pool to use when discovering other Gubernator peers
	//  Valid options are [etcd, k8s, member-list] (Defaults to 'member-list')
	PeerDiscoveryType string

	// (Optional) Etcd configuration used for peer discovery
	EtcdPoolConf EtcdPoolConfig

	// (Optional) K8s configuration used for peer discovery
	K8PoolConf K8sPoolConfig

	// (Optional) DNS Configuration used for peer discovery
	DNSPoolConf DNSPoolConfig

	// (Optional) Member list configuration used for peer discovery
	MemberListPoolConf MemberListPoolConfig

	// (Optional) The PeerPicker as selected by `GUBER_PEER_PICKER`
	Picker PeerPicker

	// (Optional) A Logger which implements the declared logger interface (typically *logrus.Entry)
	Logger FieldLogger

	// (Optional) TLS Configuration; SpawnDaemon() will modify the passed TLS config in an
	// attempt to build a complete TLS config if one is not provided.
	TLS *TLSConfig

	// (Optional) Metrics Flags which enable or disable a collection of some metric types
	MetricFlags MetricFlags

	// (Optional) Instance ID which is a unique id that identifies this instance of gubernator
	InstanceID string

	// (Optional) TraceLevel sets the tracing level, this controls the number of spans included in a single trace.
	//  Valid options are (tracing.InfoLevel, tracing.DebugLevel) Defaults to tracing.InfoLevel
	TraceLevel tracing.Level

	// (Optional) EventChannel receives hit events
	EventChannel chan<- HitEvent
}

func SetupDaemonConfig

func SetupDaemonConfig(logger *logrus.Logger, configFile io.Reader) (DaemonConfig, error)

SetupDaemonConfig returns a DaemonConfig object that is the result of merging the lines in the provided configFile and the environment variables. See `example.conf` for all available config options and their descriptions.
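
A hedged sketch of loading example.conf and then spawning a daemon with the result; error handling is minimal and the logger is simply the logrus standard logger.

// Merge example.conf with the environment, then start the daemon.
f, err := os.Open("example.conf")
if err != nil {
	log.Fatal(err)
}
conf, err := gubernator.SetupDaemonConfig(logrus.StandardLogger(), f)
f.Close()
if err != nil {
	log.Fatal(err)
}

daemon, err := gubernator.SpawnDaemon(context.Background(), conf)
if err != nil {
	log.Fatal(err)
}
defer daemon.Close()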

func (*DaemonConfig) ClientTLS

func (d *DaemonConfig) ClientTLS() *tls.Config

func (*DaemonConfig) ServerTLS

func (d *DaemonConfig) ServerTLS() *tls.Config

type EtcdPool

type EtcdPool struct {
	// contains filtered or unexported fields
}

func NewEtcdPool

func NewEtcdPool(conf EtcdPoolConfig) (*EtcdPool, error)

func (*EtcdPool) Close

func (e *EtcdPool) Close()

func (*EtcdPool) GetPeers

func (e *EtcdPool) GetPeers(ctx context.Context) ([]PeerInfo, error)

Get peers list from etcd.

type EtcdPoolConfig

type EtcdPoolConfig struct {
	// (Required) This is the peer information that will be advertised to other gubernator instances
	Advertise PeerInfo

	// (Required) An etcd client currently connected to an etcd cluster
	Client *etcd.Client

	// (Required) Called when the list of gubernators in the pool updates
	OnUpdate UpdateFunc

	// (Optional) The etcd key prefix used when discovering other peers. Defaults to `/gubernator/peers/`
	KeyPrefix string

	// (Optional) The etcd config used to connect to the etcd cluster
	EtcdConfig *etcd.Config

	// (Optional) An interface through which logging will occur (Usually *logrus.Entry)
	Logger FieldLogger
}

type FieldLogger

type FieldLogger interface {
	WithField(key string, value interface{}) *logrus.Entry
	WithFields(fields logrus.Fields) *logrus.Entry
	WithError(err error) *logrus.Entry
	WithContext(ctx context.Context) *logrus.Entry
	WithTime(t time.Time) *logrus.Entry

	Tracef(format string, args ...interface{})
	Debugf(format string, args ...interface{})
	Infof(format string, args ...interface{})
	Printf(format string, args ...interface{})
	Warnf(format string, args ...interface{})
	Warningf(format string, args ...interface{})
	Errorf(format string, args ...interface{})
	Fatalf(format string, args ...interface{})

	Log(level logrus.Level, args ...interface{})
	Debug(args ...interface{})
	Info(args ...interface{})
	Print(args ...interface{})
	Warn(args ...interface{})
	Warning(args ...interface{})
	Error(args ...interface{})
}

The FieldLogger interface generalizes the Entry and Logger types

type GRPCStats

type GRPCStats struct {
	Duration clock.Duration
	Method   string
	Failed   float64
	Success  float64
}

func StatsFromContext

func StatsFromContext(ctx context.Context) *GRPCStats

Returns the `GRPCStats` previously associated with `ctx`.

type GRPCStatsHandler

type GRPCStatsHandler struct {
	// contains filtered or unexported fields
}

Implements the Prometheus collector interface. Such that when the /metrics handler is called this collector pulls all the stats from

func NewGRPCStatsHandler

func NewGRPCStatsHandler() *GRPCStatsHandler

func (*GRPCStatsHandler) Close

func (c *GRPCStatsHandler) Close()

func (*GRPCStatsHandler) Collect

func (c *GRPCStatsHandler) Collect(ch chan<- prometheus.Metric)

func (*GRPCStatsHandler) Describe

func (c *GRPCStatsHandler) Describe(ch chan<- *prometheus.Desc)

func (*GRPCStatsHandler) HandleConn

func (c *GRPCStatsHandler) HandleConn(ctx context.Context, s stats.ConnStats)

func (*GRPCStatsHandler) HandleRPC

func (c *GRPCStatsHandler) HandleRPC(ctx context.Context, s stats.RPCStats)

func (*GRPCStatsHandler) TagConn

func (*GRPCStatsHandler) TagRPC

func (c *GRPCStatsHandler) TagRPC(ctx context.Context, tagInfo *stats.RPCTagInfo) context.Context

type GetPeerRateLimitsReq

type GetPeerRateLimitsReq struct {

	// Must specify at least one RateLimit. The peer that receives this request MUST be authoritative for
	// each rate_limit[x].unique_key provided, as the peer will not forward the request to any other peers
	Requests []*RateLimitReq `protobuf:"bytes,1,rep,name=requests,proto3" json:"requests,omitempty"`
	// contains filtered or unexported fields
}

func (*GetPeerRateLimitsReq) Descriptor deprecated

func (*GetPeerRateLimitsReq) Descriptor() ([]byte, []int)

Deprecated: Use GetPeerRateLimitsReq.ProtoReflect.Descriptor instead.

func (*GetPeerRateLimitsReq) GetRequests

func (x *GetPeerRateLimitsReq) GetRequests() []*RateLimitReq

func (*GetPeerRateLimitsReq) ProtoMessage

func (*GetPeerRateLimitsReq) ProtoMessage()

func (*GetPeerRateLimitsReq) ProtoReflect

func (x *GetPeerRateLimitsReq) ProtoReflect() protoreflect.Message

func (*GetPeerRateLimitsReq) Reset

func (x *GetPeerRateLimitsReq) Reset()

func (*GetPeerRateLimitsReq) String

func (x *GetPeerRateLimitsReq) String() string

type GetPeerRateLimitsResp

type GetPeerRateLimitsResp struct {

	// Responses are in the same order as they appeared in the PeerRateLimitRequests
	RateLimits []*RateLimitResp `protobuf:"bytes,1,rep,name=rate_limits,json=rateLimits,proto3" json:"rate_limits,omitempty"`
	// contains filtered or unexported fields
}

func (*GetPeerRateLimitsResp) Descriptor deprecated

func (*GetPeerRateLimitsResp) Descriptor() ([]byte, []int)

Deprecated: Use GetPeerRateLimitsResp.ProtoReflect.Descriptor instead.

func (*GetPeerRateLimitsResp) GetRateLimits

func (x *GetPeerRateLimitsResp) GetRateLimits() []*RateLimitResp

func (*GetPeerRateLimitsResp) ProtoMessage

func (*GetPeerRateLimitsResp) ProtoMessage()

func (*GetPeerRateLimitsResp) ProtoReflect

func (x *GetPeerRateLimitsResp) ProtoReflect() protoreflect.Message

func (*GetPeerRateLimitsResp) Reset

func (x *GetPeerRateLimitsResp) Reset()

func (*GetPeerRateLimitsResp) String

func (x *GetPeerRateLimitsResp) String() string

type GetRateLimitsReq

type GetRateLimitsReq struct {
	Requests []*RateLimitReq `protobuf:"bytes,1,rep,name=requests,proto3" json:"requests,omitempty"`
	// contains filtered or unexported fields
}

Must specify at least one Request

func (*GetRateLimitsReq) Descriptor deprecated

func (*GetRateLimitsReq) Descriptor() ([]byte, []int)

Deprecated: Use GetRateLimitsReq.ProtoReflect.Descriptor instead.

func (*GetRateLimitsReq) GetRequests

func (x *GetRateLimitsReq) GetRequests() []*RateLimitReq

func (*GetRateLimitsReq) ProtoMessage

func (*GetRateLimitsReq) ProtoMessage()

func (*GetRateLimitsReq) ProtoReflect

func (x *GetRateLimitsReq) ProtoReflect() protoreflect.Message

func (*GetRateLimitsReq) Reset

func (x *GetRateLimitsReq) Reset()

func (*GetRateLimitsReq) String

func (x *GetRateLimitsReq) String() string

type GetRateLimitsResp

type GetRateLimitsResp struct {
	Responses []*RateLimitResp `protobuf:"bytes,1,rep,name=responses,proto3" json:"responses,omitempty"`
	// contains filtered or unexported fields
}

RateLimits returned are in the same order as the Requests

func (*GetRateLimitsResp) Descriptor deprecated

func (*GetRateLimitsResp) Descriptor() ([]byte, []int)

Deprecated: Use GetRateLimitsResp.ProtoReflect.Descriptor instead.

func (*GetRateLimitsResp) GetResponses

func (x *GetRateLimitsResp) GetResponses() []*RateLimitResp

func (*GetRateLimitsResp) ProtoMessage

func (*GetRateLimitsResp) ProtoMessage()

func (*GetRateLimitsResp) ProtoReflect

func (x *GetRateLimitsResp) ProtoReflect() protoreflect.Message

func (*GetRateLimitsResp) Reset

func (x *GetRateLimitsResp) Reset()

func (*GetRateLimitsResp) String

func (x *GetRateLimitsResp) String() string

type HashString64

type HashString64 func(data string) uint64

type HealthCheckReq

type HealthCheckReq struct {
	// contains filtered or unexported fields
}

func (*HealthCheckReq) Descriptor deprecated

func (*HealthCheckReq) Descriptor() ([]byte, []int)

Deprecated: Use HealthCheckReq.ProtoReflect.Descriptor instead.

func (*HealthCheckReq) ProtoMessage

func (*HealthCheckReq) ProtoMessage()

func (*HealthCheckReq) ProtoReflect

func (x *HealthCheckReq) ProtoReflect() protoreflect.Message

func (*HealthCheckReq) Reset

func (x *HealthCheckReq) Reset()

func (*HealthCheckReq) String

func (x *HealthCheckReq) String() string

type HealthCheckResp

type HealthCheckResp struct {

	// Valid entries are 'healthy' or 'unhealthy'
	Status string `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"`
	// If 'unhealthy', message indicates the problem
	Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
	// The number of peers we know about
	PeerCount int32 `protobuf:"varint,3,opt,name=peer_count,json=peerCount,proto3" json:"peer_count,omitempty"`
	// contains filtered or unexported fields
}

func (*HealthCheckResp) Descriptor deprecated

func (*HealthCheckResp) Descriptor() ([]byte, []int)

Deprecated: Use HealthCheckResp.ProtoReflect.Descriptor instead.

func (*HealthCheckResp) GetMessage

func (x *HealthCheckResp) GetMessage() string

func (*HealthCheckResp) GetPeerCount

func (x *HealthCheckResp) GetPeerCount() int32

func (*HealthCheckResp) GetStatus

func (x *HealthCheckResp) GetStatus() string

func (*HealthCheckResp) ProtoMessage

func (*HealthCheckResp) ProtoMessage()

func (*HealthCheckResp) ProtoReflect

func (x *HealthCheckResp) ProtoReflect() protoreflect.Message

func (*HealthCheckResp) Reset

func (x *HealthCheckResp) Reset()

func (*HealthCheckResp) String

func (x *HealthCheckResp) String() string

type HitEvent added in v2.8.1

type HitEvent struct {
	Request  *RateLimitReq
	Response *RateLimitResp
}

type Interval

type Interval struct {
	C chan struct{}
	// contains filtered or unexported fields
}

Interval is a one-shot ticker. Call `Next()` to trigger the start of an interval. Read the `C` channel for tick event.

func NewInterval

func NewInterval(d clock.Duration) *Interval

NewInterval creates a new ticker-like object; however, the `C` channel does not return the current time, and the `C` channel will only get a tick after `Next()` has been called.

func (*Interval) Next

func (i *Interval) Next()

Next queues the next interval to run. If multiple calls to Next() are made before previous intervals have completed, they are ignored.

func (*Interval) Stop

func (i *Interval) Stop()

type K8sPool

type K8sPool struct {
	// contains filtered or unexported fields
}

func NewK8sPool

func NewK8sPool(conf K8sPoolConfig) (*K8sPool, error)

func (*K8sPool) Close

func (e *K8sPool) Close()

type K8sPoolConfig

type K8sPoolConfig struct {
	Logger    FieldLogger
	Mechanism WatchMechanism
	OnUpdate  UpdateFunc
	Namespace string
	Selector  string
	PodIP     string
	PodPort   string
}

type LRUCache

type LRUCache struct {
	// contains filtered or unexported fields
}

LRUCache is an LRU cache that supports expiration and is not thread-safe. Be sure to use a mutex to prevent concurrent method calls.

func NewLRUCache

func NewLRUCache(maxSize int) *LRUCache

NewLRUCache creates a new Cache with a maximum size.
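
A brief hedged usage sketch; the cache is not thread-safe, so the mutex below follows the warning above.

var mu sync.Mutex
cache := gubernator.NewLRUCache(50000)

mu.Lock()
cache.Add(&gubernator.CacheItem{
	Key:       "requests_per_sec_account:12345",
	Algorithm: gubernator.Algorithm_TOKEN_BUCKET,
	ExpireAt:  gubernator.MillisecondNow() + gubernator.Second,
})
item, ok := cache.GetItem("requests_per_sec_account:12345")
mu.Unlock()
_, _ = item, ok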

func (*LRUCache) Add

func (c *LRUCache) Add(item *CacheItem) bool

Add adds a value to the cache.

func (*LRUCache) Close

func (c *LRUCache) Close() error

func (*LRUCache) Each

func (c *LRUCache) Each() chan *CacheItem

Each is not thread-safe. Each() maintains a goroutine that iterates. Other goroutines cannot safely access the Cache while iterating. It would be safer if this were done using an iterator or delegate pattern that doesn't require a goroutine. May need to reassess functional requirements.

func (*LRUCache) GetItem

func (c *LRUCache) GetItem(key string) (item *CacheItem, ok bool)

GetItem returns the item stored in the cache

func (*LRUCache) Remove

func (c *LRUCache) Remove(key string)

Remove removes the provided key from the cache.

func (*LRUCache) Size

func (c *LRUCache) Size() int64

Size returns the number of items in the cache.

func (*LRUCache) UpdateExpiration

func (c *LRUCache) UpdateExpiration(key string, expireAt int64) bool

UpdateExpiration updates the expiration time for the key

type LRUCacheCollector

type LRUCacheCollector struct {
	// contains filtered or unexported fields
}

LRUCacheCollector provides prometheus metrics collector for LRUCache. Register only one collector, add one or more caches to this collector.

func NewLRUCacheCollector

func NewLRUCacheCollector() *LRUCacheCollector

func (*LRUCacheCollector) AddCache

func (collector *LRUCacheCollector) AddCache(cache Cache)

AddCache adds a Cache object to be tracked by the collector.

func (*LRUCacheCollector) Collect

func (collector *LRUCacheCollector) Collect(ch chan<- prometheus.Metric)

Collect fetches metric counts and gauges from the cache

func (*LRUCacheCollector) Describe

func (collector *LRUCacheCollector) Describe(ch chan<- *prometheus.Desc)

Describe fetches prometheus metrics to be registered

type LeakyBucketItem

type LeakyBucketItem struct {
	Limit     int64
	Duration  int64
	Remaining float64
	UpdatedAt int64
	Burst     int64
}

type Loader

type Loader interface {
	// Load is called by gubernator just before the instance is ready to accept requests. The implementation
	// should return a channel gubernator can read to load all rate limits that should be loaded into the
	// instance cache. The implementation should close the channel to indicate no more rate limits left to load.
	Load() (chan *CacheItem, error)

	// Save is called by gubernator just before the instance is shutdown. The passed channel should be
	// read until the channel is closed.
	Save(chan *CacheItem) error
}

Loader interface allows implementors to store all or a subset of rate limits into a persistent store during startup and shutdown of the gubernator instance.

type LogLevelJSON

type LogLevelJSON struct {
	Level logrus.Level
}

func (LogLevelJSON) MarshalJSON

func (ll LogLevelJSON) MarshalJSON() ([]byte, error)

func (LogLevelJSON) String

func (ll LogLevelJSON) String() string

func (*LogLevelJSON) UnmarshalJSON

func (ll *LogLevelJSON) UnmarshalJSON(b []byte) error

type MemberListEncryptionConfig added in v2.7.0

type MemberListEncryptionConfig struct {
	// (Required) A list of base64 encoded keys. Each key should be either 16, 24, or 32 bytes
	// when decoded to select AES-128, AES-192, or AES-256 respectively.
	// The first key in the list will be used for encrypting outbound messages. All keys are
	// attempted when decrypting gossip, which allows for rotations.
	SecretKeys []string `json:"secret-keys"`
	// (Optional) Defaults to true. Controls whether to enforce encryption for incoming
	// gossip. It is used for upshifting from unencrypted to encrypted gossip on
	// a running cluster.
	GossipVerifyIncoming bool `json:"gossip-verify-incoming"`
	// (Optional) Defaults to true. Controls whether to enforce encryption for outgoing
	// gossip. It is used for upshifting from unencrypted to encrypted gossip on
	// a running cluster.
	GossipVerifyOutgoing bool `json:"gossip-verify-outgoing"`
}

type MemberListPool

type MemberListPool struct {
	// contains filtered or unexported fields
}

func NewMemberListPool

func NewMemberListPool(ctx context.Context, conf MemberListPoolConfig) (*MemberListPool, error)

func (*MemberListPool) Close

func (m *MemberListPool) Close()

type MemberListPoolConfig

type MemberListPoolConfig struct {
	// (Required) This is the peer information that will be advertised to other members
	Advertise PeerInfo

	// (Required) This is the address:port the member list protocol listen for other members on
	MemberListAddress string

	// (Required) This is the address:port the member list will advertise to other members it finds
	AdvertiseAddress string

	// (Required) A list of nodes this member list instance can contact to find other members.
	KnownNodes []string

	// (Required) A callback function which is called when the member list changes
	OnUpdate UpdateFunc

	// (Optional) The name of the node this member list identifies itself as.
	NodeName string

	// (Optional) An interface through which logging will occur (Usually *logrus.Entry)
	Logger FieldLogger

	// (Optional) The encryption settings used for memberlist.
	EncryptionConfig MemberListEncryptionConfig
}

type MetadataCarrier

type MetadataCarrier struct {
	Map map[string]string
}

func (MetadataCarrier) Get

func (c MetadataCarrier) Get(key string) string

Get returns the value associated with the passed key.

func (MetadataCarrier) Keys

func (c MetadataCarrier) Keys() []string

Keys lists the keys stored in this carrier.

func (MetadataCarrier) Set

func (c MetadataCarrier) Set(key, value string)

Set stores the key-value pair.

type MetricFlags

type MetricFlags int64
const (
	FlagOSMetrics MetricFlags = 1 << iota
	FlagGolangMetrics
)

func (*MetricFlags) Has

func (f *MetricFlags) Has(flag MetricFlags) bool

func (*MetricFlags) Set

func (f *MetricFlags) Set(flag MetricFlags, set bool)

type MockLoader

type MockLoader struct {
	Called     map[string]int
	CacheItems []*CacheItem
}

func NewMockLoader

func NewMockLoader() *MockLoader

func (*MockLoader) Load

func (ml *MockLoader) Load() (chan *CacheItem, error)

func (*MockLoader) Save

func (ml *MockLoader) Save(in chan *CacheItem) error

type MockStore

type MockStore struct {
	Called     map[string]int
	CacheItems map[string]*CacheItem
}

func NewMockStore

func NewMockStore() *MockStore

func (*MockStore) Get

func (ms *MockStore) Get(ctx context.Context, r *RateLimitReq) (*CacheItem, bool)

func (*MockStore) OnChange

func (ms *MockStore) OnChange(ctx context.Context, r *RateLimitReq, item *CacheItem)

func (*MockStore) Remove

func (ms *MockStore) Remove(ctx context.Context, key string)

type PeerClient

type PeerClient struct {
	// contains filtered or unexported fields
}

func NewPeerClient

func NewPeerClient(conf PeerConfig) (*PeerClient, error)

NewPeerClient tries to establish a connection to a peer in a non-blocking fashion. If batching is enabled, it also starts a goroutine where batches will be processed.

func (*PeerClient) GetLastErr

func (c *PeerClient) GetLastErr() []string

func (*PeerClient) GetPeerRateLimit

func (c *PeerClient) GetPeerRateLimit(ctx context.Context, r *RateLimitReq) (resp *RateLimitResp, err error)

GetPeerRateLimit forwards a rate limit request to a peer. If the rate limit has `behavior == BATCHING` configured, this method will attempt to batch the request with other rate limits bound for the same peer.

func (*PeerClient) GetPeerRateLimits

func (c *PeerClient) GetPeerRateLimits(ctx context.Context, r *GetPeerRateLimitsReq) (resp *GetPeerRateLimitsResp, err error)

GetPeerRateLimits requests a list of rate limit statuses from a peer

func (*PeerClient) Info

func (c *PeerClient) Info() PeerInfo

Info returns PeerInfo struct that describes this PeerClient

func (*PeerClient) Shutdown

func (c *PeerClient) Shutdown(ctx context.Context) error

Shutdown waits until all outstanding requests have finished or the context is cancelled. Then it closes the grpc connection.

func (*PeerClient) UpdatePeerGlobals

func (c *PeerClient) UpdatePeerGlobals(ctx context.Context, r *UpdatePeerGlobalsReq) (resp *UpdatePeerGlobalsResp, err error)

UpdatePeerGlobals sends global rate limit status updates to a peer

type PeerConfig

type PeerConfig struct {
	TLS       *tls.Config
	Behavior  BehaviorConfig
	Info      PeerInfo
	Log       FieldLogger
	TraceGRPC bool
}
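
A rough, illustrative sketch of creating a PeerClient by hand and forwarding a single rate limit to it. In normal operation V1Instance manages peer clients itself and supplies a fully populated BehaviorConfig from the daemon configuration; the address below is a placeholder.

func ExampleNewPeerClient() {
	ctx := context.Background()

	client, err := NewPeerClient(PeerConfig{
		Info: PeerInfo{GRPCAddress: "10.0.0.6:1051"},
		// TLS, Behavior, Log and TraceGRPC are optional and omitted here.
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Shutdown(ctx)

	resp, err := client.GetPeerRateLimit(ctx, &RateLimitReq{
		Name:      "gets_per_minute",
		UniqueKey: "account:12345",
		Hits:      1,
		Limit:     100,
		Duration:  60000, // one minute, in milliseconds
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("remaining: %d", resp.Remaining)
}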

type PeerInfo

type PeerInfo struct {
	// (Optional) The name of the data center this peer is in. Leave blank if not using multi data center support.
	DataCenter string `json:"data-center"`
	// (Optional) The http address:port of the peer
	HTTPAddress string `json:"http-address"`
	// (Required) The grpc address:port of the peer
	GRPCAddress string `json:"grpc-address"`
	// (Optional) Is true if PeerInfo is for this instance of gubernator
	IsOwner bool `json:"is-owner,omitempty"`
}

func RandomPeer

func RandomPeer(peers []PeerInfo) PeerInfo

RandomPeer returns a random peer from the list of peers provided

func (PeerInfo) HashKey

func (p PeerInfo) HashKey() string

HashKey returns the hash key used to identify this peer in the Picker.
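
A trivial sketch tying these two helpers together: pick a random peer from a static list and compute the hash key the Picker would use for it. The addresses are placeholders.

peers := []PeerInfo{
	{GRPCAddress: "10.0.0.6:1051", DataCenter: "us-east-1"},
	{GRPCAddress: "10.0.0.7:1051", DataCenter: "us-east-1"},
}

p := RandomPeer(peers)
log.Printf("picked %s (hash key %q)", p.GRPCAddress, p.HashKey())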

type PeerPicker

type PeerPicker interface {
	GetByPeerInfo(PeerInfo) *PeerClient
	Peers() []*PeerClient
	Get(string) (*PeerClient, error)
	New() PeerPicker
	Add(*PeerClient)
}

type PeersV1Client

type PeersV1Client interface {
	// Used by peers to relay batches of requests to an owner peer
	GetPeerRateLimits(ctx context.Context, in *GetPeerRateLimitsReq, opts ...grpc.CallOption) (*GetPeerRateLimitsResp, error)
	// Used by owner peers to send global rate limit updates to non-owner peers
	UpdatePeerGlobals(ctx context.Context, in *UpdatePeerGlobalsReq, opts ...grpc.CallOption) (*UpdatePeerGlobalsResp, error)
}

PeersV1Client is the client API for PeersV1 service.

For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.

func NewPeersV1Client

func NewPeersV1Client(cc grpc.ClientConnInterface) PeersV1Client

type PeersV1Server

type PeersV1Server interface {
	// Used by peers to relay batches of requests to an owner peer
	GetPeerRateLimits(context.Context, *GetPeerRateLimitsReq) (*GetPeerRateLimitsResp, error)
	// Used by owner peers to send global rate limit updates to non-owner peers
	UpdatePeerGlobals(context.Context, *UpdatePeerGlobalsReq) (*UpdatePeerGlobalsResp, error)
}

PeersV1Server is the server API for PeersV1 service. All implementations should embed UnimplementedPeersV1Server for forward compatibility

type PoolInterface

type PoolInterface interface {
	Close()
}

type RateLimitReq

type RateLimitReq struct {

	// The name of the rate limit IE: 'requests_per_second', 'gets_per_minute'
	Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
	// Uniquely identifies this rate limit IE: 'ip:10.2.10.7' or 'account:123445'
	UniqueKey string `protobuf:"bytes,2,opt,name=unique_key,json=uniqueKey,proto3" json:"unique_key,omitempty"`
	// Rate limit requests optionally specify the number of hits a request adds to the matched limit. If Hits
	// is zero, the request returns the current limit, but does not increment the hit count.
	Hits int64 `protobuf:"varint,3,opt,name=hits,proto3" json:"hits,omitempty"`
	// The number of requests that can occur for the duration of the rate limit
	Limit int64 `protobuf:"varint,4,opt,name=limit,proto3" json:"limit,omitempty"`
	// The duration of the rate limit in milliseconds
	// Second = 1000 Milliseconds
	// Minute = 60000 Milliseconds
	// Hour = 3600000 Milliseconds
	Duration int64 `protobuf:"varint,5,opt,name=duration,proto3" json:"duration,omitempty"`
	// The algorithm used to calculate the rate limit. The algorithm may change on
	// subsequent requests, when this occurs any previous rate limit hit counts are reset.
	Algorithm Algorithm `protobuf:"varint,6,opt,name=algorithm,proto3,enum=pb.gubernator.Algorithm" json:"algorithm,omitempty"`
	// Behavior is a set of int32 flags that control the behavior of the rate limit in gubernator
	Behavior Behavior `protobuf:"varint,7,opt,name=behavior,proto3,enum=pb.gubernator.Behavior" json:"behavior,omitempty"`
	// Maximum burst size that the limit can accept.
	Burst int64 `protobuf:"varint,8,opt,name=burst,proto3" json:"burst,omitempty"`
	// This is metadata that is associated with this rate limit. Peer to Peer communication will use
	// this to pass trace context to other peers. Might be useful for future clients to pass along
	// trace information to gubernator.
	Metadata map[string]string `` /* 157-byte string literal not displayed */
	// The exact time this request was created in Epoch milliseconds.  Due to
	// time drift between systems, it may be advantageous for a client to set the
	// exact time the request was created. It is possible the system clock for the
	// client has drifted from the system clock where the gubernator daemon is
	// running.
	//
	// The created time is used by gubernator to calculate the reset time for
	// both token and leaky algorithms. If it is not set by the client,
	// gubernator will set the created time when it receives the rate limit
	// request.
	CreatedAt *int64 `protobuf:"varint,10,opt,name=created_at,json=createdAt,proto3,oneof" json:"created_at,omitempty"`
	// contains filtered or unexported fields
}
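
For orientation, a sketch of a request allowing 100 hits per minute for a single account might be populated like this; the name and key are placeholders.

req := &RateLimitReq{
	Name:      "gets_per_minute",
	UniqueKey: "account:12345",
	Hits:      1,
	Limit:     100,
	Duration:  60000, // one minute, in milliseconds
	// Algorithm and Behavior are left at their zero values, which select the
	// token bucket algorithm and the default batching behavior.
}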

func (*RateLimitReq) Descriptor deprecated

func (*RateLimitReq) Descriptor() ([]byte, []int)

Deprecated: Use RateLimitReq.ProtoReflect.Descriptor instead.

func (*RateLimitReq) GetAlgorithm

func (x *RateLimitReq) GetAlgorithm() Algorithm

func (*RateLimitReq) GetBehavior

func (x *RateLimitReq) GetBehavior() Behavior

func (*RateLimitReq) GetBurst

func (x *RateLimitReq) GetBurst() int64

func (*RateLimitReq) GetCreatedAt

func (x *RateLimitReq) GetCreatedAt() int64

func (*RateLimitReq) GetDuration

func (x *RateLimitReq) GetDuration() int64

func (*RateLimitReq) GetHits

func (x *RateLimitReq) GetHits() int64

func (*RateLimitReq) GetLimit

func (x *RateLimitReq) GetLimit() int64

func (*RateLimitReq) GetMetadata

func (x *RateLimitReq) GetMetadata() map[string]string

func (*RateLimitReq) GetName

func (x *RateLimitReq) GetName() string

func (*RateLimitReq) GetUniqueKey

func (x *RateLimitReq) GetUniqueKey() string

func (*RateLimitReq) HashKey

func (m *RateLimitReq) HashKey() string

func (*RateLimitReq) ProtoMessage

func (*RateLimitReq) ProtoMessage()

func (*RateLimitReq) ProtoReflect

func (x *RateLimitReq) ProtoReflect() protoreflect.Message

func (*RateLimitReq) Reset

func (x *RateLimitReq) Reset()

func (*RateLimitReq) String

func (x *RateLimitReq) String() string

type RateLimitReqState

type RateLimitReqState struct {
	IsOwner bool
}

type RateLimitResp

type RateLimitResp struct {

	// The status of the rate limit.
	Status Status `protobuf:"varint,1,opt,name=status,proto3,enum=pb.gubernator.Status" json:"status,omitempty"`
	// The currently configured request limit (Identical to [[RateLimitReq.limit]]).
	Limit int64 `protobuf:"varint,2,opt,name=limit,proto3" json:"limit,omitempty"`
	// This is the number of requests remaining before the rate limit is reached, after subtracting the hits from the current request
	Remaining int64 `protobuf:"varint,3,opt,name=remaining,proto3" json:"remaining,omitempty"`
	// This is the time when the rate limit span will be reset, provided as a unix timestamp in milliseconds.
	ResetTime int64 `protobuf:"varint,4,opt,name=reset_time,json=resetTime,proto3" json:"reset_time,omitempty"`
	// Contains the error; If set all other values should be ignored
	Error string `protobuf:"bytes,5,opt,name=error,proto3" json:"error,omitempty"`
	// This is additional metadata that a client might find useful. (IE: Additional headers, coordinator ownership, etc..)
	Metadata map[string]string `` /* 157-byte string literal not displayed */
	// contains filtered or unexported fields
}
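
A sketch of how a caller might interpret a response, checking Error first as the field comment instructs. checkResponse is a hypothetical helper name, and resp would come from GetRateLimits or GetPeerRateLimit.

func checkResponse(resp *RateLimitResp) error {
	if resp.Error != "" {
		// If Error is set, all other values should be ignored.
		return fmt.Errorf("rate limit error: %s", resp.Error)
	}
	if resp.Status == Status_OVER_LIMIT {
		resetAt := time.UnixMilli(resp.ResetTime) // ResetTime is a unix timestamp in milliseconds
		return fmt.Errorf("over limit, retry in %s", time.Until(resetAt))
	}
	log.Printf("allowed: %d of %d remaining", resp.Remaining, resp.Limit)
	return nil
}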

func (*RateLimitResp) Descriptor deprecated

func (*RateLimitResp) Descriptor() ([]byte, []int)

Deprecated: Use RateLimitResp.ProtoReflect.Descriptor instead.

func (*RateLimitResp) GetError

func (x *RateLimitResp) GetError() string

func (*RateLimitResp) GetLimit

func (x *RateLimitResp) GetLimit() int64

func (*RateLimitResp) GetMetadata

func (x *RateLimitResp) GetMetadata() map[string]string

func (*RateLimitResp) GetRemaining

func (x *RateLimitResp) GetRemaining() int64

func (*RateLimitResp) GetResetTime

func (x *RateLimitResp) GetResetTime() int64

func (*RateLimitResp) GetStatus

func (x *RateLimitResp) GetStatus() Status

func (*RateLimitResp) ProtoMessage

func (*RateLimitResp) ProtoMessage()

func (*RateLimitResp) ProtoReflect

func (x *RateLimitResp) ProtoReflect() protoreflect.Message

func (*RateLimitResp) Reset

func (x *RateLimitResp) Reset()

func (*RateLimitResp) String

func (x *RateLimitResp) String() string

type RegionPeerPicker

type RegionPeerPicker interface {
	GetClients(string) ([]*PeerClient, error)
	GetByPeerInfo(PeerInfo) *PeerClient
	Pickers() map[string]PeerPicker
	Peers() []*PeerClient
	Add(*PeerClient)
	New() RegionPeerPicker
}

type RegionPicker

type RegionPicker struct {
	*ReplicatedConsistentHash
	// contains filtered or unexported fields
}

RegionPicker encapsulates pickers for a set of regions

func NewRegionPicker

func NewRegionPicker(fn HashString64) *RegionPicker

func (*RegionPicker) Add

func (rp *RegionPicker) Add(peer *PeerClient)

func (*RegionPicker) GetByPeerInfo

func (rp *RegionPicker) GetByPeerInfo(info PeerInfo) *PeerClient

GetByPeerInfo returns the first PeerClient whose PeerInfo.HashKey() matches the provided PeerInfo

func (*RegionPicker) GetClients

func (rp *RegionPicker) GetClients(key string) ([]*PeerClient, error)

GetClients returns all the PeerClients that match this key in all regions

func (*RegionPicker) New

func (rp *RegionPicker) New() RegionPeerPicker

func (*RegionPicker) Peers

func (rp *RegionPicker) Peers() []*PeerClient

func (*RegionPicker) Pickers

func (rp *RegionPicker) Pickers() map[string]PeerPicker

Pickers return a map of each region and its respective PeerPicker

type ReplicatedConsistentHash

type ReplicatedConsistentHash struct {
	// contains filtered or unexported fields
}

Implements PeerPicker

func NewReplicatedConsistentHash

func NewReplicatedConsistentHash(fn HashString64, replicas int) *ReplicatedConsistentHash

func (*ReplicatedConsistentHash) Add

func (ch *ReplicatedConsistentHash) Add(peer *PeerClient)

Adds a peer to the hash

func (*ReplicatedConsistentHash) Get

func (ch *ReplicatedConsistentHash) Get(key string) (*PeerClient, error)

Given a key, return the peer that key is assigned to

func (*ReplicatedConsistentHash) GetByPeerInfo

func (ch *ReplicatedConsistentHash) GetByPeerInfo(peer PeerInfo) *PeerClient

Returns the peer matching the provided PeerInfo

func (*ReplicatedConsistentHash) New

func (ch *ReplicatedConsistentHash) New() PeerPicker

func (*ReplicatedConsistentHash) Peers

func (ch *ReplicatedConsistentHash) Peers() []*PeerClient

func (*ReplicatedConsistentHash) Size

func (ch *ReplicatedConsistentHash) Size() int

Returns number of peers in the picker
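
A sketch of using ReplicatedConsistentHash as the PeerPicker. It assumes HashString64 is a plain func(string) uint64 (as the name suggests), so an FNV-1a adapter is passed in; the replica count and peer addresses are placeholders.

func ExampleNewReplicatedConsistentHash() {
	// Adapter assumed to satisfy HashString64 (func(string) uint64).
	hash := func(s string) uint64 {
		h := fnv.New64a()
		h.Write([]byte(s))
		return h.Sum64()
	}

	picker := NewReplicatedConsistentHash(hash, 512) // 512 virtual replicas per peer

	for _, info := range []PeerInfo{
		{GRPCAddress: "10.0.0.6:1051"},
		{GRPCAddress: "10.0.0.7:1051"},
	} {
		peer, err := NewPeerClient(PeerConfig{Info: info})
		if err != nil {
			log.Fatal(err)
		}
		picker.Add(peer)
	}

	// Route a rate limit key to the peer that owns it.
	key := (&RateLimitReq{Name: "gets_per_minute", UniqueKey: "account:12345"}).HashKey()
	owner, err := picker.Get(key)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s owns %q (cluster size %d)", owner.Info().GRPCAddress, key, picker.Size())
}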

type Status

type Status int32
const (
	Status_UNDER_LIMIT Status = 0
	Status_OVER_LIMIT  Status = 1
)

func (Status) Descriptor

func (Status) Descriptor() protoreflect.EnumDescriptor

func (Status) Enum

func (x Status) Enum() *Status

func (Status) EnumDescriptor deprecated

func (Status) EnumDescriptor() ([]byte, []int)

Deprecated: Use Status.Descriptor instead.

func (Status) Number

func (x Status) Number() protoreflect.EnumNumber

func (Status) String

func (x Status) String() string

func (Status) Type

func (Status) Type() protoreflect.EnumType

type Store

type Store interface {
	// Called by gubernator *after* a rate limit item is updated. It's up to the store to
	// decide if this rate limit item should be persisted in the store. It's up to the
	// store to expire old rate limit items. The CacheItem represents the current state of
	// the rate limit item *after* the RateLimitReq has been applied.
	OnChange(ctx context.Context, r *RateLimitReq, item *CacheItem)

	// Called by gubernator when a rate limit is missing from the cache. It's up to the store
	// to decide if this request is fulfilled. Should return true if the request is fulfilled
	// and false if the request is not fulfilled or doesn't exist in the store.
	Get(ctx context.Context, r *RateLimitReq) (*CacheItem, bool)

	// Called by gubernator when an existing rate limit should be removed from the store.
	// NOTE: This is NOT called when a rate limit expires from the cache; store implementors
	// must expire rate limits in the store.
	Remove(ctx context.Context, key string)
}

Store interface allows implementors to offload storage of all or a subset of rate limits to some persistent store. Methods OnChange() and Remove() should avoid blocking where possible to maximize performance of gubernator. Implementations MUST be thread safe.
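
A minimal in-memory sketch of the interface, under two stated assumptions: the key handed to Remove() matches RateLimitReq.HashKey(), and expiry of old items (which the interface requires of real stores) is omitted for brevity.

type memStore struct {
	mu    sync.Mutex
	items map[string]*CacheItem
}

func newMemStore() *memStore {
	return &memStore{items: make(map[string]*CacheItem)}
}

// OnChange persists the current state of a rate limit after gubernator updates it.
func (s *memStore) OnChange(ctx context.Context, r *RateLimitReq, item *CacheItem) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.items[r.HashKey()] = item
}

// Get fulfils a cache miss from the store, if the item is present.
func (s *memStore) Get(ctx context.Context, r *RateLimitReq) (*CacheItem, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	item, ok := s.items[r.HashKey()]
	return item, ok
}

// Remove deletes a rate limit gubernator no longer wants persisted
// (assumes the key is the same value returned by RateLimitReq.HashKey()).
func (s *memStore) Remove(ctx context.Context, key string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.items, key)
}

// Compile-time check that memStore satisfies Store.
var _ Store = (*memStore)(nil)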

type TLSConfig

type TLSConfig struct {
	// (Optional) The path to the Trusted Certificate Authority.
	CaFile string

	// (Optional) The path to the Trusted Certificate Authority private key.
	CaKeyFile string

	// (Optional) The path to the un-encrypted key for the server certificate.
	KeyFile string

	// (Optional) The path to the server certificate.
	CertFile string

	// (Optional) If true, gubernator will generate self-signed certificates. If CaFile and CaKeyFile
	// are set but no KeyFile or CertFile is set, then gubernator will generate a self-signed key using
	// the CaFile provided.
	AutoTLS bool

	// (Optional) Configures the MinVersion for ServerTLS. If not set, defaults to TLS 1.0
	MinVersion uint16

	// (Optional) Sets the Client Authentication type as defined in the 'tls' package.
	// Defaults to tls.NoClientCert. See the standard library tls.ClientAuthType for valid values.
	// If set to anything but tls.NoClientCert then SetupTLS() attempts to load ClientAuthCaFile,
	// ClientAuthKeyFile and ClientAuthCertFile and sets those certs into the ClientTLS struct. If
	// none of the ClientAuth*File fields are set, KeyFile and CertFile are used for client authentication.
	ClientAuth tls.ClientAuthType

	// (Optional) The path to the Trusted Certificate Authority used for client auth. If ClientAuth is
	// set and this field is empty, then CaFile is used to auth clients.
	ClientAuthCaFile string

	// (Optional) The path to the client private key, which is used to create the ClientTLS config. If
	// ClientAuth is set and this field is empty then KeyFile is used to create the ClientTLS.
	ClientAuthKeyFile string

	// (Optional) The path to the client cert key, which is used to create the ClientTLS config. If
	// ClientAuth is set and this field is empty then KeyFile is used to create the ClientTLS.
	ClientAuthCertFile string

	// (Optional) If InsecureSkipVerify is true, TLS clients will accept any certificate
	// presented by the server and any host name in that certificate.
	InsecureSkipVerify bool

	// (Optional) A Logger which implements the declared logger interface (typically *logrus.Entry)
	Logger FieldLogger

	// (Optional) The CA Certificate in PEM format. Used if CaFile is unset
	CaPEM *bytes.Buffer

	// (Optional) The CA Private Key in PEM format. Used if CaKeyFile is unset
	CaKeyPEM *bytes.Buffer

	// (Optional) The Certificate Key in PEM format. Used if KeyFile is unset.
	KeyPEM *bytes.Buffer

	// (Optional) The Certificate in PEM format. Used if CertFile is unset.
	CertPEM *bytes.Buffer

	// (Optional) The client auth CA Certificate in PEM format. Used if ClientAuthCaFile is unset.
	ClientAuthCaPEM *bytes.Buffer

	// (Optional) The client auth private key in PEM format. Used if ClientAuthKeyFile is unset.
	ClientAuthKeyPEM *bytes.Buffer

	// (Optional) The client auth Certificate in PEM format. Used if ClientAuthCertFile is unset.
	ClientAuthCertPEM *bytes.Buffer

	// (Optional) the server name to check when validating the provided certificate
	ClientAuthServerName string

	// (Optional) The config created for use by the gubernator server. If set, all other
	// fields in this struct are ignored and this config is used. If unset, gubernator.SetupTLS()
	// will create a config using the above fields.
	ServerTLS *tls.Config

	// (Optional) The config created for use by gubernator clients and peer communication. If set, all other
	// fields in this struct are ignored and this config is used. If unset, gubernator.SetupTLS()
	// will create a config using the above fields.
	ClientTLS *tls.Config
}
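
Two common shapes of this struct, sketched for orientation; the file paths are placeholders. The struct is normally handed to the server configuration, and SetupTLS() builds ServerTLS and ClientTLS from these fields.

// Auto-generated, self-signed certificates (handy for local testing):
autoTLS := &TLSConfig{
	AutoTLS: true,
}

// Certificates loaded from disk, requiring clients to present a certificate
// signed by the same CA, and enforcing at least TLS 1.2:
mutualTLS := &TLSConfig{
	CaFile:     "/etc/gubernator/ca.pem",
	CertFile:   "/etc/gubernator/server.pem",
	KeyFile:    "/etc/gubernator/server-key.pem",
	ClientAuth: tls.RequireAndVerifyClientCert,
	MinVersion: tls.VersionTLS12,
}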

type TokenBucketItem

type TokenBucketItem struct {
	Status    Status
	Limit     int64
	Duration  int64
	Remaining int64
	CreatedAt int64
}

type UnimplementedPeersV1Server

type UnimplementedPeersV1Server struct {
}

UnimplementedPeersV1Server should be embedded to have forward compatible implementations.

func (UnimplementedPeersV1Server) GetPeerRateLimits

func (UnimplementedPeersV1Server) UpdatePeerGlobals

type UnimplementedV1Server

type UnimplementedV1Server struct {
}

UnimplementedV1Server should be embedded to have forward compatible implementations.

func (UnimplementedV1Server) GetRateLimits

func (UnimplementedV1Server) HealthCheck

type UnsafePeersV1Server

type UnsafePeersV1Server interface {
	// contains filtered or unexported methods
}

UnsafePeersV1Server may be embedded to opt out of forward compatibility for this service. Use of this interface is not recommended, as added methods to PeersV1Server will result in compilation errors.

type UnsafeV1Server

type UnsafeV1Server interface {
	// contains filtered or unexported methods
}

UnsafeV1Server may be embedded to opt out of forward compatibility for this service. Use of this interface is not recommended, as added methods to V1Server will result in compilation errors.

type UpdateFunc

type UpdateFunc func([]PeerInfo)

type UpdatePeerGlobal

type UpdatePeerGlobal struct {

	// Uniquely identifies this rate limit IE: 'ip:10.2.10.7' or 'account:123445'
	Key    string         `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
	Status *RateLimitResp `protobuf:"bytes,2,opt,name=status,proto3" json:"status,omitempty"`
	// The algorithm used to calculate the rate limit. The algorithm may change on
	// subsequent requests, when this occurs any previous rate limit hit counts are reset.
	Algorithm Algorithm `protobuf:"varint,3,opt,name=algorithm,proto3,enum=pb.gubernator.Algorithm" json:"algorithm,omitempty"`
	// The duration of the rate limit in milliseconds
	Duration int64 `protobuf:"varint,4,opt,name=duration,proto3" json:"duration,omitempty"`
	// The exact time the original request was created in Epoch milliseconds.
	// Due to time drift between systems, it may be advantageous for a client to
	// set the exact time the request was created. It is possible the system clock
	// for the client has drifted from the system clock where the gubernator daemon
	// is running.
	//
	// The created time is used by gubernator to calculate the reset time for
	// both token and leaky algorithms. If it is not set by the client,
	// gubernator will set the created time when it receives the rate limit
	// request.
	CreatedAt int64 `protobuf:"varint,5,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
	// contains filtered or unexported fields
}

func (*UpdatePeerGlobal) Descriptor deprecated

func (*UpdatePeerGlobal) Descriptor() ([]byte, []int)

Deprecated: Use UpdatePeerGlobal.ProtoReflect.Descriptor instead.

func (*UpdatePeerGlobal) GetAlgorithm

func (x *UpdatePeerGlobal) GetAlgorithm() Algorithm

func (*UpdatePeerGlobal) GetCreatedAt

func (x *UpdatePeerGlobal) GetCreatedAt() int64

func (*UpdatePeerGlobal) GetDuration

func (x *UpdatePeerGlobal) GetDuration() int64

func (*UpdatePeerGlobal) GetKey

func (x *UpdatePeerGlobal) GetKey() string

func (*UpdatePeerGlobal) GetStatus

func (x *UpdatePeerGlobal) GetStatus() *RateLimitResp

func (*UpdatePeerGlobal) ProtoMessage

func (*UpdatePeerGlobal) ProtoMessage()

func (*UpdatePeerGlobal) ProtoReflect

func (x *UpdatePeerGlobal) ProtoReflect() protoreflect.Message

func (*UpdatePeerGlobal) Reset

func (x *UpdatePeerGlobal) Reset()

func (*UpdatePeerGlobal) String

func (x *UpdatePeerGlobal) String() string

type UpdatePeerGlobalsReq

type UpdatePeerGlobalsReq struct {

	// Must specify at least one RateLimit
	Globals []*UpdatePeerGlobal `protobuf:"bytes,1,rep,name=globals,proto3" json:"globals,omitempty"`
	// contains filtered or unexported fields
}

func (*UpdatePeerGlobalsReq) Descriptor deprecated

func (*UpdatePeerGlobalsReq) Descriptor() ([]byte, []int)

Deprecated: Use UpdatePeerGlobalsReq.ProtoReflect.Descriptor instead.

func (*UpdatePeerGlobalsReq) GetGlobals

func (x *UpdatePeerGlobalsReq) GetGlobals() []*UpdatePeerGlobal

func (*UpdatePeerGlobalsReq) ProtoMessage

func (*UpdatePeerGlobalsReq) ProtoMessage()

func (*UpdatePeerGlobalsReq) ProtoReflect

func (x *UpdatePeerGlobalsReq) ProtoReflect() protoreflect.Message

func (*UpdatePeerGlobalsReq) Reset

func (x *UpdatePeerGlobalsReq) Reset()

func (*UpdatePeerGlobalsReq) String

func (x *UpdatePeerGlobalsReq) String() string

type UpdatePeerGlobalsResp

type UpdatePeerGlobalsResp struct {
	// contains filtered or unexported fields
}

func (*UpdatePeerGlobalsResp) Descriptor deprecated

func (*UpdatePeerGlobalsResp) Descriptor() ([]byte, []int)

Deprecated: Use UpdatePeerGlobalsResp.ProtoReflect.Descriptor instead.

func (*UpdatePeerGlobalsResp) ProtoMessage

func (*UpdatePeerGlobalsResp) ProtoMessage()

func (*UpdatePeerGlobalsResp) ProtoReflect

func (x *UpdatePeerGlobalsResp) ProtoReflect() protoreflect.Message

func (*UpdatePeerGlobalsResp) Reset

func (x *UpdatePeerGlobalsResp) Reset()

func (*UpdatePeerGlobalsResp) String

func (x *UpdatePeerGlobalsResp) String() string

type V1Client

type V1Client interface {
	// Given a list of rate limit requests, return the rate limits of each.
	GetRateLimits(ctx context.Context, in *GetRateLimitsReq, opts ...grpc.CallOption) (*GetRateLimitsResp, error)
	// This method is for round trip benchmarking and can be used by
	// the client to determine connectivity to the server
	HealthCheck(ctx context.Context, in *HealthCheckReq, opts ...grpc.CallOption) (*HealthCheckResp, error)
}

V1Client is the client API for V1 service.

For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.

func DialV1Server

func DialV1Server(server string, tls *tls.Config) (V1Client, error)

DialV1Server is a convenience function for dialing gubernator instances

func NewV1Client

func NewV1Client(cc grpc.ClientConnInterface) V1Client
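
A sketch of a plain-text client round trip using DialV1Server. The address is a placeholder, and the Requests field name on GetRateLimitsReq is assumed from the JSON shape of the HTTP API.

func ExampleDialV1Server() {
	client, err := DialV1Server("127.0.0.1:1051", nil) // nil *tls.Config means plain-text gRPC
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Verify connectivity first.
	if _, err := client.HealthCheck(ctx, &HealthCheckReq{}); err != nil {
		log.Fatal(err)
	}

	resp, err := client.GetRateLimits(ctx, &GetRateLimitsReq{
		Requests: []*RateLimitReq{{
			Name:      "gets_per_minute",
			UniqueKey: "account:12345",
			Hits:      1,
			Limit:     100,
			Duration:  60000, // one minute, in milliseconds
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("responses: %v", resp)
}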

type V1Instance

type V1Instance struct {
	UnimplementedV1Server
	UnimplementedPeersV1Server
	// contains filtered or unexported fields
}

func NewV1Instance

func NewV1Instance(conf Config) (s *V1Instance, err error)

NewV1Instance instantiates a single instance of a gubernator peer and registers this instance with the provided GRPCServer.

func (*V1Instance) Close

func (s *V1Instance) Close() (err error)

func (*V1Instance) Collect

func (s *V1Instance) Collect(ch chan<- prometheus.Metric)

Collect fetches metrics from the server for use by prometheus

func (*V1Instance) Describe

func (s *V1Instance) Describe(ch chan<- *prometheus.Desc)

Describe fetches prometheus metrics to be registered

func (*V1Instance) GetPeer

func (s *V1Instance) GetPeer(ctx context.Context, key string) (p *PeerClient, err error)

GetPeer returns a peer client for the hash key provided

func (*V1Instance) GetPeerList

func (s *V1Instance) GetPeerList() []*PeerClient

func (*V1Instance) GetPeerRateLimits

func (s *V1Instance) GetPeerRateLimits(ctx context.Context, r *GetPeerRateLimitsReq) (resp *GetPeerRateLimitsResp, err error)

GetPeerRateLimits is called by other peers to get the rate limits owned by this peer.

func (*V1Instance) GetRateLimits

func (s *V1Instance) GetRateLimits(ctx context.Context, r *GetRateLimitsReq) (*GetRateLimitsResp, error)

GetRateLimits is the public interface used by clients to request rate limits from the system. If the rate limit identified by `Name` and `UniqueKey` is not owned by this instance, the request is forwarded to the peer that owns it.

func (*V1Instance) GetRegionPickers

func (s *V1Instance) GetRegionPickers() map[string]PeerPicker

func (*V1Instance) HealthCheck

func (s *V1Instance) HealthCheck(ctx context.Context, r *HealthCheckReq) (health *HealthCheckResp, err error)

HealthCheck returns the health of our instance.

func (*V1Instance) SetPeers

func (s *V1Instance) SetPeers(peerInfo []PeerInfo)

SetPeers replaces the peers and shuts down all the previous peers. TODO this should return an error if we failed to connect to any of the new peers

func (*V1Instance) UpdatePeerGlobals

func (s *V1Instance) UpdatePeerGlobals(ctx context.Context, r *UpdatePeerGlobalsReq) (*UpdatePeerGlobalsResp, error)

UpdatePeerGlobals updates the local cache with a list of global rate limits. This method should only be called by a peer who is the owner of a global rate limit.

type V1Server

type V1Server interface {
	// Given a list of rate limit requests, return the rate limits of each.
	GetRateLimits(context.Context, *GetRateLimitsReq) (*GetRateLimitsResp, error)
	// This method is for round trip benchmarking and can be used by
	// the client to determine connectivity to the server
	HealthCheck(context.Context, *HealthCheckReq) (*HealthCheckResp, error)
}

V1Server is the server API for V1 service. All implementations should embed UnimplementedV1Server for forward compatibility

type WatchMechanism

type WatchMechanism string
const (
	WatchEndpoints WatchMechanism = "endpoints"
	WatchPods      WatchMechanism = "pods"
)

func WatchMechanismFromString

func WatchMechanismFromString(mechanism string) (WatchMechanism, error)

type Worker

type Worker struct {
	// contains filtered or unexported fields
}

type WorkerPool

type WorkerPool struct {
	// contains filtered or unexported fields
}

func NewWorkerPool

func NewWorkerPool(conf *Config) *WorkerPool

func (*WorkerPool) AddCacheItem

func (p *WorkerPool) AddCacheItem(ctx context.Context, key string, item *CacheItem) (err error)

AddCacheItem adds an item to the worker's cache.

func (*WorkerPool) Close

func (p *WorkerPool) Close() error

func (*WorkerPool) GetCacheItem

func (p *WorkerPool) GetCacheItem(ctx context.Context, key string) (item *CacheItem, found bool, err error)

GetCacheItem gets an item from the worker's cache.

func (*WorkerPool) GetRateLimit

func (p *WorkerPool) GetRateLimit(ctx context.Context, rlRequest *RateLimitReq, reqState RateLimitReqState) (*RateLimitResp, error)

GetRateLimit sends a GetRateLimit request to worker pool.

func (*WorkerPool) Load

func (p *WorkerPool) Load(ctx context.Context) (err error)

Load atomically loads the cache from persistent storage: it reads items from persistent storage and loads them into each appropriate worker's cache. Workers are locked during this load operation to prevent race conditions.

func (*WorkerPool) Store

func (p *WorkerPool) Store(ctx context.Context) (err error)

Store atomically stores the cache to persistent storage: it saves all workers' caches to persistent storage. Workers are locked during this store operation to prevent race conditions.
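
A sketch of driving the worker pool directly; it assumes a *Config named conf has been prepared elsewhere (including a loader, if Load and Store are to do anything) and that this node owns the rate limit, hence RateLimitReqState{IsOwner: true}.

// conf is a *gubernator.Config prepared elsewhere (cache size, loader, etc.).
pool := NewWorkerPool(conf)
defer pool.Close()

ctx := context.Background()

// Warm the workers' caches from persistent storage, if a loader is configured.
if err := pool.Load(ctx); err != nil {
	log.Fatal(err)
}

resp, err := pool.GetRateLimit(ctx, &RateLimitReq{
	Name:      "gets_per_minute",
	UniqueKey: "account:12345",
	Hits:      1,
	Limit:     100,
	Duration:  60000,
}, RateLimitReqState{IsOwner: true})
if err != nil {
	log.Fatal(err)
}
log.Printf("remaining: %d", resp.Remaining)

// Persist the workers' caches before shutdown.
if err := pool.Store(ctx); err != nil {
	log.Fatal(err)
}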

Directories

Path Synopsis
cmd
