README ¶
Gubernator
Gubernator is a distributed, high performance, cloud native and stateless rate limiting service.
Features of Gubernator
- Gubernator evenly distributes rate limit requests across the entire cluster, which means you can scale the system by simply adding more nodes.
- Gubernator doesn’t rely on external caches like memcached or redis, so there is no deployment synchronization with a dependent service. This makes dynamically growing or shrinking the cluster in an orchestration system like Kubernetes or Nomad trivial.
- Gubernator holds no state on disk; its configuration is passed to it by the client on a per-request basis.
- Gubernator provides both GRPC and HTTP access to its API.
- Can be run as a sidecar to services that need rate limiting or as a separate service.
- Can be used as a library to implement a domain specific rate limiting service.
- Supports optional eventually consistent rate limit distribution for extremely high throughput environments. (See GLOBAL behavior in architecture.md)
- Gubernator is the English pronunciation of "governor" in Russian; also, it sounds cool.
Stateless configuration
Gubernator is stateless in that it doesn’t require disk space to operate. No configuration or cache data is ever synced to disk. This is because every request to gubernator includes the config for the rate limit. At first this may seem like unnecessary overhead on each request. In reality, however, a rate limit config is made up of only four 64-bit integers.
An example rate limit request sent via GRPC might look like the following
rate_limits:
  # Scopes the request to a specific rate limit
  - name: requests_per_sec
    # A unique_key that identifies this instance of a rate limit request
    unique_key: account_id=123|source_ip=172.0.0.1
    # The number of hits we are requesting
    hits: 1
    # The total number of requests allowed for this rate limit
    limit: 100
    # The duration of the rate limit in milliseconds
    duration: 1000
    # The algorithm used to calculate the rate limit
    # 0 = Token Bucket
    # 1 = Leaky Bucket
    algorithm: 0
    # The behavior of the rate limit in gubernator.
    # 0 = BATCHING (Enables batching of requests to peers)
    # 1 = NO_BATCHING (Disables batching)
    # 2 = GLOBAL (Enable global caching for this rate limit)
    behavior: 0
An example response would be
rate_limits:
  # The status of the rate limit. OK = 0, OVER_LIMIT = 1
  - status: 0,
    # The current configured limit
    limit: 10,
    # The number of requests remaining
    remaining: 7,
    # A unix timestamp in milliseconds of when the bucket will reset, or if
    # OVER_LIMIT is set it is the time at which the rate limit will no
    # longer return OVER_LIMIT.
    reset_time: 1551309219226,
    # Additional metadata about the request the client might find useful
    metadata:
      # This is the name of the coordinator that rate limited this request
      "owner": "api-n03.staging.us-east-1.mailgun.org:9041"
Rate Limit Algorithms
Gubernator currently supports two rate limit algorithms.
- Token Bucket implementation starts with an empty bucket, then each Hit adds a token to the bucket until the bucket is full. Once the bucket is full, requests will return OVER_LIMIT until the reset_time is reached, at which point the bucket is emptied and requests will return UNDER_LIMIT. This algorithm is useful for enforcing very bursty limits. (IE: Applications where a single request can add more than 1 hit to the bucket, or non network based queuing systems.) The downside to this implementation is that once you have hit the limit, no more requests are allowed until the configured rate limit duration resets the bucket to zero.
- Leaky Bucket is implemented similarly to Token Bucket, where OVER_LIMIT is returned when the bucket is full. However, tokens leak from the bucket at a consistent rate calculated as duration / limit (for example, with limit: 100 and duration: 1000 one token leaks every 10 milliseconds). This algorithm is useful for metering, as the bucket leaks, allowing traffic to continue without the need to wait for the configured rate limit duration to reset the bucket to zero.
Performance
In our production environment, for every request to our API we send two rate limit requests to gubernator for evaluation: one to rate limit the HTTP request itself, and the other to rate limit the number of recipients a user can send an email to within a specific duration. Under this setup a single gubernator node fields over 2,000 requests a second, with most batched responses returned in under 1 millisecond.
Peer requests forwarded to owning nodes typically respond in under 30 microseconds.
NOTE: The above graphs only report the slowest request within the 1 second sample time, so you are seeing the slowest requests gubernator fields to clients.
Gubernator allows users to choose non-batching behavior, which would further reduce latency for client rate limit requests. However, because of throughput requirements our production environment uses Behavior = BATCHING with the default 500 microsecond window. In production we have observed batch sizes of 1,000 during peak API usage. Other users who don’t have the same high traffic demands could disable batching and would see lower latencies, but at the cost of throughput.
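If batching is not desirable for a particular request, the behavior flags can be flipped with the package's SetBehavior and HasBehavior helpers; a minimal sketch (import path assumed as above):

```go
package main

import (
    "fmt"

    guber "github.com/mailgun/gubernator" // assumed import path
)

func main() {
    req := &guber.RateLimitReq{
        Name:      "requests_per_sec",
        UniqueKey: "account_id=123",
        Hits:      1,
        Limit:     100,
        Duration:  1000, // milliseconds
    }

    // Flip the NO_BATCHING flag on; the request is then forwarded to the
    // owning peer immediately instead of waiting for the batch window.
    guber.SetBehavior(&req.Behavior, guber.Behavior_NO_BATCHING, true)

    fmt.Println(guber.HasBehavior(req.Behavior, guber.Behavior_NO_BATCHING)) // true
}
```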
Gregorian Behavior
Users may choose a behavior called DURATION_IS_GREGORIAN which changes the behavior of the Duration field. When Behavior is set to DURATION_IS_GREGORIAN, the Duration of the rate limit is reset whenever the end of the selected Gregorian calendar interval is reached.
This is useful when you want to impose daily or monthly limits on a resource. With this behavior, the limit on the resource is reset at the end of the day or month regardless of when the first rate limit request was received by Gubernator.
Given the following Duration values:
- 0 = Minutes
- 1 = Hours
- 2 = Days
- 3 = Weeks
- 4 = Months
- 5 = Years
Examples when using Behavior = DURATION_IS_GREGORIAN
- If Duration = 2 (Days) then the rate limit will reset to Current = 0 at the end of the current day the rate limit was created.
- If Duration = 0 (Minutes) then the rate limit will reset to Current = 0 at the end of the current minute the rate limit was created.
- If Duration = 4 (Months) then the rate limit will reset to Current = 0 at the end of the current month the rate limit was created.
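For example, a daily limit that resets at the end of each calendar day might be expressed as in the following sketch (the rate limit name and import path are illustrative assumptions):

```go
package main

import (
    "fmt"

    guber "github.com/mailgun/gubernator" // assumed import path
)

func main() {
    // A daily limit that resets at the end of the current calendar day,
    // regardless of when the first hit arrived.
    req := &guber.RateLimitReq{
        Name:      "requests_per_day", // illustrative name
        UniqueKey: "account_id=123",
        Hits:      1,
        Limit:     10000,
        Duration:  guber.GregorianDays, // interpreted as an interval, not milliseconds
        Behavior:  guber.Behavior_DURATION_IS_GREGORIAN,
    }
    fmt.Println(req.Behavior)
}
```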
Gubernator as a library
If you are using golang, you can use Gubernator as a library. This is useful if you wish to implement a rate limit service with your own company specific model on top. We do this internally here at mailgun with a service we creatively called ratelimits, which keeps track of the limits imposed on a per-account basis. In this way you can utilize the power and speed of Gubernator but still layer business logic and integrate domain specific problems into your rate limiting service.
When you use the library, your service becomes a full member of the cluster, participating in the same consistent hashing and caching as a stand-alone Gubernator server would. All you need to do is provide the GRPC server instance and tell Gubernator where the peers in your cluster are located. cmd/gubernator/main.go is a great example of how to use Gubernator as a library.
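One way to embed gubernator is via the documented SpawnDaemon and DaemonConfig; a minimal sketch (the listen addresses are placeholders and the import path is assumed from the repository location; cmd/gubernator/main.go remains the authoritative example):

```go
package main

import (
    "context"
    "log"

    guber "github.com/mailgun/gubernator" // assumed import path
)

func main() {
    // Spawn an embedded gubernator daemon; SpawnDaemon blocks until the
    // GRPC and HTTP listeners are accepting connections.
    daemon, err := guber.SpawnDaemon(context.Background(), guber.DaemonConfig{
        GRPCListenAddress: "localhost:9081",
        HTTPListenAddress: "localhost:9080",
    })
    if err != nil {
        log.Fatal(err)
    }

    // ... layer your domain specific rate limit model on top here.
    // See cmd/gubernator/main.go for a complete example.

    daemon.Close()
}
```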
Optional Disk Persistence
While the Gubernator server currently doesn't directly support disk
persistence, the Gubernator library does provide interfaces through which
library users can implement persistence. The Gubernator library has two
interfaces available for disk persistence. Depending on the use case an
implementor can implement the Loader interface and only support persistence
of rate limits at startup and shutdown, or users can implement the Store
interface and Gubernator will continuously call OnChange()
and Get()
to
keep the in memory cache and persistent store up to date with the latest rate
limit data. Both interfaces can be implemented simultaneously to ensure data
is always saved to persistent storage.
For those who choose to implement the Store interface, it is not required to store ALL the rate limits received via OnChange(). For instance, if you wish to support rate limit durations longer than a minute, day or month, calls to OnChange() can check the duration of a rate limit and decide to only persist those rate limits that have durations over a self-determined limit.
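As an illustration of the Loader contract described above, here is a minimal in-memory sketch (not part of gubernator; a real implementation would read and write a database or file, and would be wired in via Config.Loader):

```go
package persist

import guber "github.com/mailgun/gubernator" // assumed import path

// memLoader is an illustrative Loader that keeps a snapshot in memory.
type memLoader struct {
    snapshot []*guber.CacheItem
}

// Load is called just before the instance is ready to accept requests.
func (l *memLoader) Load() (chan *guber.CacheItem, error) {
    out := make(chan *guber.CacheItem)
    go func() {
        defer close(out) // closing signals there are no more rate limits to load
        for _, item := range l.snapshot {
            out <- item
        }
    }()
    return out, nil
}

// Save is called just before the instance is shut down; read until the channel closes.
func (l *memLoader) Save(in chan *guber.CacheItem) error {
    l.snapshot = nil
    for item := range in {
        l.snapshot = append(l.snapshot, item)
    }
    return nil
}
```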
API
All methods are accessed via GRPC but are also exposed via HTTP using the GRPC Gateway
Health Check
Health check returns unhealthy in the event a peer is reported by etcd or kubernetes as up, but the server instance is unable to contact that peer via its advertised address.
rpc HealthCheck (HealthCheckReq) returns (HealthCheckResp)
GET /v1/HealthCheck
Example response:
{
"status": "healthy",
"peer_count": 3
}
Get Rate Limit
Rate limits can be applied or retrieved using this interface. If the client makes a request to the server with hits: 0, then the current state of the rate limit is retrieved but not incremented.
rpc GetRateLimits (GetRateLimitsReq) returns (GetRateLimitsResp)
POST /v1/GetRateLimits
Example Payload
{
"requests":[
{
"name": "requests_per_sec",
"unique_key": "account.id=1234",
"hits": 1,
"duration": 60000,
"limit": 10
}
]
}
Example response:
{
"responses":[
{
"status": 0,
"limit": "10",
"remaining": "7",
"reset_time": "1551309219226"
}
]
}
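The same call can also be made from Go through the HTTP gateway. A minimal sketch, assuming a local gubernator instance with its HTTP listener on port 9080 as in the deployment examples below:

```go
package main

import (
    "bytes"
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // JSON body mirroring the example payload above.
    body := []byte(`{
      "requests": [{
        "name": "requests_per_sec",
        "unique_key": "account.id=1234",
        "hits": 1,
        "duration": 60000,
        "limit": 10
      }]
    }`)

    // Assumes a local gubernator with its HTTP listener on 9080.
    resp, err := http.Post("http://localhost:9080/v1/GetRateLimits",
        "application/json", bytes.NewReader(body))
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    out, _ := io.ReadAll(resp.Body)
    fmt.Println(string(out))
}
```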
Deployment
NOTE: Gubernator uses etcd or kubernetes to discover peers and establish a cluster. If you don't have either, the docker-compose method is the simplest way to try gubernator out.
$ docker run -p 8081:81 -p 9080:80 -e GUBER_ETCD_ENDPOINTS=etcd1:2379,etcd2:2379 \
thrawn01/gubernator:latest
# Hit the HTTP API at localhost:9080
$ curl http://localhost:9080/v1/HealthCheck
The docker compose file uses member-list for peer discovery
# Download the docker-compose file
$ curl -O https://raw.githubusercontent.com/mailgun/gubernator/master/docker-compose.yaml
# Edit the compose file to change the environment config variables
$ vi docker-compose.yaml
# Run the docker container
$ docker-compose up -d
# Hit the HTTP API at localhost:9080 (GRPC is at 9081)
$ curl http://localhost:9080/v1/HealthCheck
# Download the kubernetes deployment spec
$ curl -O https://raw.githubusercontent.com/mailgun/gubernator/master/k8s-deployment.yaml
# Edit the deployment file to change the environment config variables
$ vi k8s-deployment.yaml
# Create the deployment (includes headless service spec)
$ kubectl create -f k8s-deployment.yaml
Gubernator supports TLS for both HTTP and GRPC connections. You can see an example with self-signed certs by running docker-compose-tls.yaml
# Run docker compose
$ docker-compose -f docker-compose-tls.yaml up -d
# Hit the HTTP API at localhost:9080 (GRPC is at 9081)
$ curl --cacert certs/ca.pem --cert certs/gubernator.pem --key certs/gubernator.key https://localhost:9080/v1/HealthCheck
Configuration
Gubernator is configured via environment variables with an optional --config flag which takes a file of key/values and places them into the local environment before startup.
See the example.conf for all available config options and their descriptions.
Architecture
See architecture.md for a full description of the architecture and the inner workings of gubernator.
Documentation ¶
Overview ¶
Package gubernator is a reverse proxy.
It translates gRPC into RESTful JSON APIs.
Index ¶
- Constants
- Variables
- func ContextWithStats(ctx context.Context, stats *GRPCStats) context.Context
- func FromTimeStamp(ts int64) time.Duration
- func FromUnixMilliseconds(ts int64) time.Time
- func GregorianDuration(now clock.Time, d int64) (int64, error)
- func GregorianExpiration(now clock.Time, d int64) (int64, error)
- func HasBehavior(b Behavior, flag Behavior) bool
- func IsNotReady(err error) bool
- func MillisecondNow() int64
- func RandomString(n int) string
- func RegisterPeersV1Server(s *grpc.Server, srv PeersV1Server)
- func RegisterV1Handler(ctx context.Context, mux *runtime.ServeMux, conn *grpc.ClientConn) error
- func RegisterV1HandlerClient(ctx context.Context, mux *runtime.ServeMux, client V1Client) error
- func RegisterV1HandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, ...) (err error)
- func RegisterV1HandlerServer(ctx context.Context, mux *runtime.ServeMux, server V1Server) error
- func RegisterV1Server(s *grpc.Server, srv V1Server)
- func ResolveHostIP(addr string) (string, error)
- func RestConfig() (*rest.Config, error)
- func SetBehavior(b *Behavior, flag Behavior, set bool)
- func SetupTLS(conf *TLSConfig) error
- func ToTimeStamp(duration time.Duration) int64
- func WaitForConnect(ctx context.Context, addresses []string) error
- type Algorithm
- type Behavior
- type BehaviorConfig
- type Cache
- type CacheItem
- type Config
- type ConsistentHash
- func (ch *ConsistentHash) Add(peer *PeerClient)
- func (ch *ConsistentHash) Get(key string) (*PeerClient, error)
- func (ch *ConsistentHash) GetByPeerInfo(peer PeerInfo) *PeerClient
- func (ch *ConsistentHash) New() PeerPicker
- func (ch *ConsistentHash) Peers() []*PeerClient
- func (ch *ConsistentHash) Size() int
- type Daemon
- type DaemonConfig
- type EtcdPool
- type EtcdPoolConfig
- type GRPCStats
- type GRPCStatsHandler
- func (c *GRPCStatsHandler) Close()
- func (c *GRPCStatsHandler) Collect(ch chan<- prometheus.Metric)
- func (c *GRPCStatsHandler) Describe(ch chan<- *prometheus.Desc)
- func (c *GRPCStatsHandler) HandleConn(ctx context.Context, s stats.ConnStats)
- func (c *GRPCStatsHandler) HandleRPC(ctx context.Context, s stats.RPCStats)
- func (c *GRPCStatsHandler) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context
- func (c *GRPCStatsHandler) TagRPC(ctx context.Context, tagInfo *stats.RPCTagInfo) context.Context
- type GetPeerRateLimitsReq
- func (*GetPeerRateLimitsReq) Descriptor() ([]byte, []int)
- func (m *GetPeerRateLimitsReq) GetRequests() []*RateLimitReq
- func (*GetPeerRateLimitsReq) ProtoMessage()
- func (m *GetPeerRateLimitsReq) Reset()
- func (m *GetPeerRateLimitsReq) String() string
- func (m *GetPeerRateLimitsReq) XXX_DiscardUnknown()
- func (m *GetPeerRateLimitsReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *GetPeerRateLimitsReq) XXX_Merge(src proto.Message)
- func (m *GetPeerRateLimitsReq) XXX_Size() int
- func (m *GetPeerRateLimitsReq) XXX_Unmarshal(b []byte) error
- type GetPeerRateLimitsResp
- func (*GetPeerRateLimitsResp) Descriptor() ([]byte, []int)
- func (m *GetPeerRateLimitsResp) GetRateLimits() []*RateLimitResp
- func (*GetPeerRateLimitsResp) ProtoMessage()
- func (m *GetPeerRateLimitsResp) Reset()
- func (m *GetPeerRateLimitsResp) String() string
- func (m *GetPeerRateLimitsResp) XXX_DiscardUnknown()
- func (m *GetPeerRateLimitsResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *GetPeerRateLimitsResp) XXX_Merge(src proto.Message)
- func (m *GetPeerRateLimitsResp) XXX_Size() int
- func (m *GetPeerRateLimitsResp) XXX_Unmarshal(b []byte) error
- type GetRateLimitsReq
- func (*GetRateLimitsReq) Descriptor() ([]byte, []int)
- func (m *GetRateLimitsReq) GetRequests() []*RateLimitReq
- func (*GetRateLimitsReq) ProtoMessage()
- func (m *GetRateLimitsReq) Reset()
- func (m *GetRateLimitsReq) String() string
- func (m *GetRateLimitsReq) XXX_DiscardUnknown()
- func (m *GetRateLimitsReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *GetRateLimitsReq) XXX_Merge(src proto.Message)
- func (m *GetRateLimitsReq) XXX_Size() int
- func (m *GetRateLimitsReq) XXX_Unmarshal(b []byte) error
- type GetRateLimitsResp
- func (*GetRateLimitsResp) Descriptor() ([]byte, []int)
- func (m *GetRateLimitsResp) GetResponses() []*RateLimitResp
- func (*GetRateLimitsResp) ProtoMessage()
- func (m *GetRateLimitsResp) Reset()
- func (m *GetRateLimitsResp) String() string
- func (m *GetRateLimitsResp) XXX_DiscardUnknown()
- func (m *GetRateLimitsResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *GetRateLimitsResp) XXX_Merge(src proto.Message)
- func (m *GetRateLimitsResp) XXX_Size() int
- func (m *GetRateLimitsResp) XXX_Unmarshal(b []byte) error
- type HashFunc
- type HashFunc64
- type HealthCheckReq
- func (*HealthCheckReq) Descriptor() ([]byte, []int)
- func (*HealthCheckReq) ProtoMessage()
- func (m *HealthCheckReq) Reset()
- func (m *HealthCheckReq) String() string
- func (m *HealthCheckReq) XXX_DiscardUnknown()
- func (m *HealthCheckReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *HealthCheckReq) XXX_Merge(src proto.Message)
- func (m *HealthCheckReq) XXX_Size() int
- func (m *HealthCheckReq) XXX_Unmarshal(b []byte) error
- type HealthCheckResp
- func (*HealthCheckResp) Descriptor() ([]byte, []int)
- func (m *HealthCheckResp) GetMessage() string
- func (m *HealthCheckResp) GetPeerCount() int32
- func (m *HealthCheckResp) GetStatus() string
- func (*HealthCheckResp) ProtoMessage()
- func (m *HealthCheckResp) Reset()
- func (m *HealthCheckResp) String() string
- func (m *HealthCheckResp) XXX_DiscardUnknown()
- func (m *HealthCheckResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *HealthCheckResp) XXX_Merge(src proto.Message)
- func (m *HealthCheckResp) XXX_Size() int
- func (m *HealthCheckResp) XXX_Unmarshal(b []byte) error
- type Interval
- type K8sPool
- type K8sPoolConfig
- type LRUCache
- func (c *LRUCache) Add(record *CacheItem) bool
- func (c *LRUCache) Collect(ch chan<- prometheus.Metric)
- func (c *LRUCache) Describe(ch chan<- *prometheus.Desc)
- func (c *LRUCache) Each() chan *CacheItem
- func (c *LRUCache) GetItem(key interface{}) (item *CacheItem, ok bool)
- func (c *LRUCache) Lock()
- func (c *LRUCache) Remove(key interface{})
- func (c *LRUCache) Size() int
- func (c *LRUCache) Stats(_ bool) cachStats
- func (c *LRUCache) Unlock()
- func (c *LRUCache) UpdateExpiration(key interface{}, expireAt int64) bool
- type LeakyBucketItem
- type Loader
- type MemberListPool
- type MemberListPoolConfig
- type MockLoader
- type MockStore
- type PeerClient
- func (c *PeerClient) GetLastErr() []string
- func (c *PeerClient) GetPeerRateLimit(ctx context.Context, r *RateLimitReq) (*RateLimitResp, error)
- func (c *PeerClient) GetPeerRateLimits(ctx context.Context, r *GetPeerRateLimitsReq) (*GetPeerRateLimitsResp, error)
- func (c *PeerClient) Info() PeerInfo
- func (c *PeerClient) Shutdown(ctx context.Context) error
- func (c *PeerClient) UpdatePeerGlobals(ctx context.Context, r *UpdatePeerGlobalsReq) (*UpdatePeerGlobalsResp, error)
- type PeerConfig
- type PeerErr
- type PeerInfo
- type PeerPicker
- type PeersV1Client
- type PeersV1Server
- type PoolInterface
- type RateLimitReq
- func (*RateLimitReq) Descriptor() ([]byte, []int)
- func (m *RateLimitReq) GetAlgorithm() Algorithm
- func (m *RateLimitReq) GetBehavior() Behavior
- func (m *RateLimitReq) GetDuration() int64
- func (m *RateLimitReq) GetHits() int64
- func (m *RateLimitReq) GetLimit() int64
- func (m *RateLimitReq) GetName() string
- func (m *RateLimitReq) GetUniqueKey() string
- func (m *RateLimitReq) HashKey() string
- func (*RateLimitReq) ProtoMessage()
- func (m *RateLimitReq) Reset()
- func (m *RateLimitReq) String() string
- func (m *RateLimitReq) XXX_DiscardUnknown()
- func (m *RateLimitReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *RateLimitReq) XXX_Merge(src proto.Message)
- func (m *RateLimitReq) XXX_Size() int
- func (m *RateLimitReq) XXX_Unmarshal(b []byte) error
- type RateLimitResp
- func (*RateLimitResp) Descriptor() ([]byte, []int)
- func (m *RateLimitResp) GetError() string
- func (m *RateLimitResp) GetLimit() int64
- func (m *RateLimitResp) GetMetadata() map[string]string
- func (m *RateLimitResp) GetRemaining() int64
- func (m *RateLimitResp) GetResetTime() int64
- func (m *RateLimitResp) GetStatus() Status
- func (*RateLimitResp) ProtoMessage()
- func (m *RateLimitResp) Reset()
- func (m *RateLimitResp) String() string
- func (m *RateLimitResp) XXX_DiscardUnknown()
- func (m *RateLimitResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *RateLimitResp) XXX_Merge(src proto.Message)
- func (m *RateLimitResp) XXX_Size() int
- func (m *RateLimitResp) XXX_Unmarshal(b []byte) error
- type RegionPeerPicker
- type RegionPicker
- func (rp *RegionPicker) Add(peer *PeerClient)
- func (rp *RegionPicker) GetByPeerInfo(info PeerInfo) *PeerClient
- func (rp *RegionPicker) GetClients(key string) ([]*PeerClient, error)
- func (rp *RegionPicker) New() RegionPeerPicker
- func (rp *RegionPicker) Peers() []*PeerClient
- func (rp *RegionPicker) Pickers() map[string]PeerPicker
- type ReplicatedConsistentHash
- func (ch *ReplicatedConsistentHash) Add(peer *PeerClient)
- func (ch *ReplicatedConsistentHash) Get(key string) (*PeerClient, error)
- func (ch *ReplicatedConsistentHash) GetByPeerInfo(peer PeerInfo) *PeerClient
- func (ch *ReplicatedConsistentHash) New() PeerPicker
- func (ch *ReplicatedConsistentHash) Peers() []*PeerClient
- func (ch *ReplicatedConsistentHash) Size() int
- type Status
- type Store
- type TLSConfig
- type TokenBucketItem
- type UnimplementedPeersV1Server
- type UnimplementedV1Server
- type UpdateFunc
- type UpdatePeerGlobal
- func (*UpdatePeerGlobal) Descriptor() ([]byte, []int)
- func (m *UpdatePeerGlobal) GetAlgorithm() Algorithm
- func (m *UpdatePeerGlobal) GetKey() string
- func (m *UpdatePeerGlobal) GetStatus() *RateLimitResp
- func (*UpdatePeerGlobal) ProtoMessage()
- func (m *UpdatePeerGlobal) Reset()
- func (m *UpdatePeerGlobal) String() string
- func (m *UpdatePeerGlobal) XXX_DiscardUnknown()
- func (m *UpdatePeerGlobal) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *UpdatePeerGlobal) XXX_Merge(src proto.Message)
- func (m *UpdatePeerGlobal) XXX_Size() int
- func (m *UpdatePeerGlobal) XXX_Unmarshal(b []byte) error
- type UpdatePeerGlobalsReq
- func (*UpdatePeerGlobalsReq) Descriptor() ([]byte, []int)
- func (m *UpdatePeerGlobalsReq) GetGlobals() []*UpdatePeerGlobal
- func (*UpdatePeerGlobalsReq) ProtoMessage()
- func (m *UpdatePeerGlobalsReq) Reset()
- func (m *UpdatePeerGlobalsReq) String() string
- func (m *UpdatePeerGlobalsReq) XXX_DiscardUnknown()
- func (m *UpdatePeerGlobalsReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *UpdatePeerGlobalsReq) XXX_Merge(src proto.Message)
- func (m *UpdatePeerGlobalsReq) XXX_Size() int
- func (m *UpdatePeerGlobalsReq) XXX_Unmarshal(b []byte) error
- type UpdatePeerGlobalsResp
- func (*UpdatePeerGlobalsResp) Descriptor() ([]byte, []int)
- func (*UpdatePeerGlobalsResp) ProtoMessage()
- func (m *UpdatePeerGlobalsResp) Reset()
- func (m *UpdatePeerGlobalsResp) String() string
- func (m *UpdatePeerGlobalsResp) XXX_DiscardUnknown()
- func (m *UpdatePeerGlobalsResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *UpdatePeerGlobalsResp) XXX_Merge(src proto.Message)
- func (m *UpdatePeerGlobalsResp) XXX_Size() int
- func (m *UpdatePeerGlobalsResp) XXX_Unmarshal(b []byte) error
- type V1Client
- type V1Instance
- func (s *V1Instance) Close() error
- func (s *V1Instance) Collect(ch chan<- prometheus.Metric)
- func (s *V1Instance) Describe(ch chan<- *prometheus.Desc)
- func (s *V1Instance) GetPeer(key string) (*PeerClient, error)
- func (s *V1Instance) GetPeerList() []*PeerClient
- func (s *V1Instance) GetPeerRateLimits(ctx context.Context, r *GetPeerRateLimitsReq) (*GetPeerRateLimitsResp, error)
- func (s *V1Instance) GetRateLimits(ctx context.Context, r *GetRateLimitsReq) (*GetRateLimitsResp, error)
- func (s *V1Instance) GetRegionPickers() map[string]PeerPicker
- func (s *V1Instance) HealthCheck(ctx context.Context, r *HealthCheckReq) (*HealthCheckResp, error)
- func (s *V1Instance) SetPeers(peerInfo []PeerInfo)
- func (s *V1Instance) UpdatePeerGlobals(ctx context.Context, r *UpdatePeerGlobalsReq) (*UpdatePeerGlobalsResp, error)
- type V1Server
- type WatchMechanism
Constants ¶
const (
    Millisecond = 1
    Second      = 1000 * Millisecond
    Minute      = 60 * Second
)
const (
    Healthy   = "healthy"
    UnHealthy = "unhealthy"
)
const (
    GregorianMinutes int64 = iota
    GregorianHours
    GregorianDays
    GregorianWeeks
    GregorianMonths
    GregorianYears
)
const DefaultReplicas = 512
Variables ¶
var Algorithm_name = map[int32]string{
0: "TOKEN_BUCKET",
1: "LEAKY_BUCKET",
}
var Algorithm_value = map[string]int32{
"TOKEN_BUCKET": 0,
"LEAKY_BUCKET": 1,
}
var Behavior_name = map[int32]string{
0: "BATCHING",
1: "NO_BATCHING",
2: "GLOBAL",
4: "DURATION_IS_GREGORIAN",
8: "RESET_REMAINING",
16: "MULTI_REGION",
}
var Behavior_value = map[string]int32{
"BATCHING": 0,
"NO_BATCHING": 1,
"GLOBAL": 2,
"DURATION_IS_GREGORIAN": 4,
"RESET_REMAINING": 8,
"MULTI_REGION": 16,
}
var DebugEnabled = false
var Status_name = map[int32]string{
0: "UNDER_LIMIT",
1: "OVER_LIMIT",
}
var Status_value = map[string]int32{
"UNDER_LIMIT": 0,
"OVER_LIMIT": 1,
}
Functions ¶
func ContextWithStats ¶
Returns a new `context.Context` that holds a reference to `GRPCStats`.
func FromTimeStamp ¶
FromTimeStamp is a convenience function to convert a unix millisecond timestamp to a time.Duration. Useful when working with gubernator request and response duration and reset_time fields.
func FromUnixMilliseconds ¶
FromUnixMilliseconds is a convenience function to convert a unix millisecond timestamp to a time.Time. Useful when working with gubernator request and response duration and reset_time fields.
func GregorianDuration ¶ added in v0.7.1
GregorianDuration returns the entire duration of the Gregorian interval
func GregorianExpiration ¶ added in v0.7.1
GregorianExpiration returns a Gregorian interval as defined by the `DURATION_IS_GREGORIAN` Behavior. It returns the expiration time as the end of the Gregorian interval in milliseconds from `now`.
Example: If `now` is 2019-01-01 11:20:10 and `d` = GregorianMinutes then the returned expire time would be 2019-01-01 11:20:59 in milliseconds since epoch
func HasBehavior ¶ added in v0.8.0
HasBehavior returns true if the provided behavior is set
func IsNotReady ¶ added in v0.9.0
IsNotReady returns true if the err is because the peer is not connected or in a closing state
func RandomString ¶
RandomString returns a random alpha string of 'n' length
func RegisterPeersV1Server ¶
func RegisterPeersV1Server(s *grpc.Server, srv PeersV1Server)
func RegisterV1Handler ¶
RegisterV1Handler registers the http handlers for service V1 to "mux". The handlers forward requests to the grpc endpoint over "conn".
func RegisterV1HandlerClient ¶
RegisterV1HandlerClient registers the http handlers for service V1 to "mux". The handlers forward requests to the grpc endpoint over the given implementation of "V1Client". Note: the gRPC framework executes interceptors within the gRPC handler. If the passed in "V1Client" doesn't go through the normal gRPC flow (creating a gRPC client etc.) then it will be up to the passed in "V1Client" to call the correct interceptors.
func RegisterV1HandlerFromEndpoint ¶
func RegisterV1HandlerFromEndpoint(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) (err error)
RegisterV1HandlerFromEndpoint is same as RegisterV1Handler but automatically dials to "endpoint" and closes the connection when "ctx" gets done.
func RegisterV1HandlerServer ¶ added in v0.9.0
RegisterV1HandlerServer registers the http handlers for service V1 to "mux". UnaryRPC :call V1Server directly. StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906.
func RegisterV1Server ¶
func ResolveHostIP ¶
If the passed address is "0.0.0.0" or "::", ResolveHostIP attempts to discover the actual IP address of the host
func RestConfig ¶
func SetBehavior ¶ added in v0.8.0
SetBehavior sets or clears the behavior depending on the boolean `set`
func ToTimeStamp ¶
ToTimeStamp is a convenience function to convert a time.Duration to a unix millisecond timestamp. Useful when working with gubernator request and response duration and reset_time fields.
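A hedged usage sketch for these timestamp helpers (not an example shipped with the package; the import path is assumed from the repository location):

```go
package main

import (
    "fmt"
    "time"

    guber "github.com/mailgun/gubernator" // assumed import path
)

func main() {
    // Fill a request's Duration field from a time.Duration.
    req := &guber.RateLimitReq{
        Name:      "requests_per_sec",
        UniqueKey: "account_id=123",
        Hits:      1,
        Limit:     100,
        Duration:  guber.ToTimeStamp(time.Second), // 1000 milliseconds
    }
    fmt.Println(req.Duration)

    // Convert a response's reset_time (unix milliseconds) back into a time.Time.
    resp := &guber.RateLimitResp{ResetTime: guber.MillisecondNow()}
    fmt.Println(guber.FromUnixMilliseconds(resp.GetResetTime()))
}
```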
Types ¶
type Algorithm ¶
type Algorithm int32
const (
    // Token bucket algorithm https://en.wikipedia.org/wiki/Token_bucket
    Algorithm_TOKEN_BUCKET Algorithm = 0
    // Leaky bucket algorithm https://en.wikipedia.org/wiki/Leaky_bucket
    Algorithm_LEAKY_BUCKET Algorithm = 1
)
func (Algorithm) EnumDescriptor ¶
type Behavior ¶
type Behavior int32
A set of int32 flags used to control the behavior of a rate limit in gubernator
const (
    // BATCHING is the default behavior. This enables batching requests which protects the
    // service from thundering herd. IE: When a service experiences spikes of unexpected high
    // volume requests.
    //
    // Using this option introduces a small amount of latency depending on
    // the `batchWait` setting. Defaults to around 500 Microseconds of additional
    // latency in low throughput situations. For high volume loads, batching can reduce
    // the overall load on the system substantially.
    Behavior_BATCHING Behavior = 0
    // Disables batching. Use this for super low latency rate limit requests when
    // thundering herd is not a concern but latency of requests is of paramount importance.
    Behavior_NO_BATCHING Behavior = 1
    // Enables Global caching of the rate limit. Use this if the rate limit applies globally to
    // all ingress requests. (IE: Throttle hundreds of thousands of requests to an entire
    // datacenter or cluster of http servers)
    //
    // Using this option gubernator will continue to use a single peer as the rate limit coordinator
    // to increment and manage the state of the rate limit, however the result of the rate limit is
    // distributed to each peer and cached locally. A rate limit request received from any peer in the
    // cluster will first check the local cache for a rate limit answer, if it exists the peer will
    // immediately return the answer to the client and asynchronously forward the aggregate hits to
    // the peer coordinator. Because of GLOBALS async nature we lose some accuracy in rate limit
    // reporting, which may result in allowing some requests beyond the chosen rate limit. However we
    // gain massive performance as every request coming into the system does not have to wait for a
    // single peer to decide if the rate limit has been reached.
    Behavior_GLOBAL Behavior = 2
    // Changes the behavior of the `Duration` field. When `Behavior` is set to `DURATION_IS_GREGORIAN`
    // the `Duration` of the rate limit is reset whenever the end of selected GREGORIAN calendar
    // interval is reached.
    //
    // Given the following `Duration` values
    //   0 = Minutes
    //   1 = Hours
    //   2 = Days
    //   3 = Weeks
    //   4 = Months
    //   5 = Years
    //
    // Examples when using `Behavior = DURATION_IS_GREGORIAN`
    //
    // If `Duration = 2` (Days) then the rate limit will expire at the end of the current day the
    // rate limit was created.
    //
    // If `Duration = 0` (Minutes) then the rate limit will expire at the end of the current minute
    // the rate limit was created.
    //
    // If `Duration = 4` (Months) then the rate limit will expire at the end of the current month
    // the rate limit was created.
    Behavior_DURATION_IS_GREGORIAN Behavior = 4
    // If this flag is set causes the rate limit to reset any accrued hits stored in the cache, and will
    // ignore any `Hit` values provided in the current request. The effect this has is dependent on
    // algorithm chosen. For instance, if used with `TOKEN_BUCKET` it will immediately expire the
    // cache value. For `LEAKY_BUCKET` it sets the `Remaining` to `Limit`.
    Behavior_RESET_REMAINING Behavior = 8
    // Pushes rate limits to other regions
    Behavior_MULTI_REGION Behavior = 16
)
func (Behavior) EnumDescriptor ¶
type BehaviorConfig ¶
type BehaviorConfig struct {
    // How long we should wait for a batched response from a peer
    BatchTimeout time.Duration
    // How long we should wait before sending a batched request
    BatchWait time.Duration
    // The max number of requests we can batch into a single peer request
    BatchLimit int
    // How long a non-owning peer should wait before syncing hits to the owning peer
    GlobalSyncWait time.Duration
    // How long we should wait for global sync responses from peers
    GlobalTimeout time.Duration
    // The max number of global updates we can batch into a single peer request
    GlobalBatchLimit int
    // How long the current region will collect request before pushing them to other regions
    MultiRegionSyncWait time.Duration
    // How long the current region will wait for responses from other regions
    MultiRegionTimeout time.Duration
    // The max number of requests the current region will collect
    MultiRegionBatchLimit int
}
type Cache ¶ added in v0.7.1
type Cache interface {
    // Access methods
    Add(*CacheItem) bool
    UpdateExpiration(key interface{}, expireAt int64) bool
    GetItem(key interface{}) (value *CacheItem, ok bool)
    Each() chan *CacheItem
    Remove(key interface{})
    // If the cache is exclusive, this will control access to the cache
    Unlock()
    Lock()
}
So algorithms can interface with different cache implementations
type CacheItem ¶ added in v0.7.1
type CacheItem struct {
    Algorithm Algorithm
    Key       string
    Value     interface{}
    // Timestamp when rate limit expires
    ExpireAt int64
    // Timestamp when the cache should invalidate this rate limit. This is useful when used in conjunction with
    // a persistent store to ensure our node has the most up to date info from the store. Ignored if set to `0`
    // It is set by the persistent store implementation to indicate when the node should query the persistent store
    // for the latest rate limit data.
    InvalidAt int64
}
type Config ¶
type Config struct {
    // (Required) A list of GRPC servers to register our instance with
    GRPCServers []*grpc.Server
    // (Optional) Adjust how gubernator behaviors are configured
    Behaviors BehaviorConfig
    // (Optional) The cache implementation
    Cache Cache
    // (Optional) A persistent store implementation. Allows the implementor the ability to store the rate limits this
    // instance of gubernator owns. It's up to the implementor to decide what rate limits to persist.
    // For instance an implementor might only persist rate limits that have an expiration of
    // longer than 1 hour.
    Store Store
    // (Optional) A loader from a persistent store. Allows the implementor the ability to load and save
    // the contents of the cache when the gubernator instance is started and stopped
    Loader Loader
    // (Optional) This is the peer picker algorithm the server will use decide which peer in the local cluster
    // will own the rate limit
    LocalPicker PeerPicker
    // (Optional) This is the peer picker algorithm the server will use when deciding which remote peer to forward
    // rate limits too when a `Config.DataCenter` is set to something other than empty string.
    RegionPicker RegionPeerPicker
    // (Optional) This is the name of our local data center. This value will be used by LocalPicker when
    // deciding who we should immediately connect too for our local picker. Should remain empty if not
    // using multi data center support.
    DataCenter string
    // (Optional) A Logger which implements the declared logger interface (typically *logrus.Entry)
    Logger logrus.FieldLogger
    // (Optional) The TLS config used when connecting to gubernator peers
    PeerTLS *tls.Config
}
config for a gubernator instance
func (*Config) SetDefaults ¶
type ConsistentHash ¶
type ConsistentHash struct {
// contains filtered or unexported fields
}
Implements PeerPicker deprecated
func (*ConsistentHash) Add ¶
func (ch *ConsistentHash) Add(peer *PeerClient)
Adds a peer to the hash
func (*ConsistentHash) Get ¶
func (ch *ConsistentHash) Get(key string) (*PeerClient, error)
Given a key, return the peer that key is assigned too
func (*ConsistentHash) GetByPeerInfo ¶
func (ch *ConsistentHash) GetByPeerInfo(peer PeerInfo) *PeerClient
Returns the peer by peer info
func (*ConsistentHash) New ¶
func (ch *ConsistentHash) New() PeerPicker
func (*ConsistentHash) Peers ¶
func (ch *ConsistentHash) Peers() []*PeerClient
func (*ConsistentHash) Size ¶
func (ch *ConsistentHash) Size() int
Returns number of peers in the picker
type Daemon ¶
type Daemon struct { GRPCListeners []net.Listener HTTPListener net.Listener V1Server *V1Instance // contains filtered or unexported fields }
func SpawnDaemon ¶
func SpawnDaemon(ctx context.Context, conf DaemonConfig) (*Daemon, error)
SpawnDaemon starts a new gubernator daemon according to the provided DaemonConfig. This function will block until the daemon responds to connections as specified by GRPCListenAddress and HTTPListenAddress
func (*Daemon) Close ¶
func (s *Daemon) Close()
Close gracefully closes all server connections and listening sockets
func (*Daemon) Config ¶
func (s *Daemon) Config() DaemonConfig
Config returns the current config for this Daemon
type DaemonConfig ¶
type DaemonConfig struct {
    // (Required) The `address:port` that will accept GRPC requests
    GRPCListenAddress string
    // (Required) The `address:port` that will accept HTTP requests
    HTTPListenAddress string
    // (Optional) The `address:port` that is advertised to other Gubernator peers.
    // Defaults to `GRPCListenAddress`
    AdvertiseAddress string
    // (Optional) The number of items in the cache. Defaults to 50,000
    CacheSize int
    // (Optional) Configure how behaviours behave
    Behaviors BehaviorConfig
    // (Optional) Identifies the datacenter this instance is running in. For
    // use with multi-region support
    DataCenter string
    // (Optional) Which pool to use when discovering other Gubernator peers
    // Valid options are [etcd, k8s, member-list] (Defaults to 'member-list')
    PeerDiscoveryType string
    // (Optional) Etcd configuration used for peer discovery
    EtcdPoolConf EtcdPoolConfig
    // (Optional) K8s configuration used for peer discovery
    K8PoolConf K8sPoolConfig
    // (Optional) Member list configuration used for peer discovery
    MemberListPoolConf MemberListPoolConfig
    // (Optional) The PeerPicker as selected by `GUBER_PEER_PICKER`
    Picker PeerPicker
    // (Optional) A Logger which implements the declared logger interface (typically *logrus.Entry)
    Logger logrus.FieldLogger
    // (Optional) TLS Configuration; SpawnDaemon() will modify the passed TLS config in an
    // attempt to build a complete TLS config if one is not provided.
    TLS *TLSConfig
}
func SetupDaemonConfig ¶
func SetupDaemonConfig(logger *logrus.Logger, configFile string) (DaemonConfig, error)
SetupDaemonConfig returns a DaemonConfig object as configured by reading the provided config file and environment.
func (*DaemonConfig) ClientTLS ¶
func (d *DaemonConfig) ClientTLS() *tls.Config
func (*DaemonConfig) ServerTLS ¶
func (d *DaemonConfig) ServerTLS() *tls.Config
type EtcdPool ¶
type EtcdPool struct {
// contains filtered or unexported fields
}
func NewEtcdPool ¶
func NewEtcdPool(conf EtcdPoolConfig) (*EtcdPool, error)
type EtcdPoolConfig ¶
type EtcdPoolConfig struct { // (Required) This is the peer information that will be advertised to other gubernator instances Advertise PeerInfo // (Required) An etcd client currently connected to an etcd cluster Client *etcd.Client // (Required) Called when the list of gubernators in the pool updates OnUpdate UpdateFunc // (Optional) The etcd key prefix used when discovering other peers. Defaults to `/gubernator/peers/` KeyPrefix string // (Optional) The etcd config used to connect to the etcd cluster EtcdConfig *etcd.Config // (Optional) An interface through which logging will occur (Usually *logrus.Entry) Logger logrus.FieldLogger }
type GRPCStats ¶
func StatsFromContext ¶
Returns the `GRPCStats` previously associated with `ctx`.
type GRPCStatsHandler ¶
type GRPCStatsHandler struct {
// contains filtered or unexported fields
}
Implements the Prometheus collector interface. Such that when the /metrics handler is called this collector pulls all the stats from
func NewGRPCStatsHandler ¶
func NewGRPCStatsHandler() *GRPCStatsHandler
func (*GRPCStatsHandler) Close ¶
func (c *GRPCStatsHandler) Close()
func (*GRPCStatsHandler) Collect ¶
func (c *GRPCStatsHandler) Collect(ch chan<- prometheus.Metric)
func (*GRPCStatsHandler) Describe ¶
func (c *GRPCStatsHandler) Describe(ch chan<- *prometheus.Desc)
func (*GRPCStatsHandler) HandleConn ¶
func (c *GRPCStatsHandler) HandleConn(ctx context.Context, s stats.ConnStats)
func (*GRPCStatsHandler) HandleRPC ¶
func (c *GRPCStatsHandler) HandleRPC(ctx context.Context, s stats.RPCStats)
func (*GRPCStatsHandler) TagConn ¶
func (c *GRPCStatsHandler) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context
func (*GRPCStatsHandler) TagRPC ¶
func (c *GRPCStatsHandler) TagRPC(ctx context.Context, tagInfo *stats.RPCTagInfo) context.Context
type GetPeerRateLimitsReq ¶
type GetPeerRateLimitsReq struct { // Must specify at least one RateLimit. The peer that recives this request MUST be authoritative for // each rate_limit[x].unique_key provided, as the peer will not forward the request to any other peers Requests []*RateLimitReq `protobuf:"bytes,1,rep,name=requests,proto3" json:"requests,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` }
func (*GetPeerRateLimitsReq) Descriptor ¶
func (*GetPeerRateLimitsReq) Descriptor() ([]byte, []int)
func (*GetPeerRateLimitsReq) GetRequests ¶
func (m *GetPeerRateLimitsReq) GetRequests() []*RateLimitReq
func (*GetPeerRateLimitsReq) ProtoMessage ¶
func (*GetPeerRateLimitsReq) ProtoMessage()
func (*GetPeerRateLimitsReq) Reset ¶
func (m *GetPeerRateLimitsReq) Reset()
func (*GetPeerRateLimitsReq) String ¶
func (m *GetPeerRateLimitsReq) String() string
func (*GetPeerRateLimitsReq) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *GetPeerRateLimitsReq) XXX_DiscardUnknown()
func (*GetPeerRateLimitsReq) XXX_Marshal ¶ added in v0.9.0
func (m *GetPeerRateLimitsReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*GetPeerRateLimitsReq) XXX_Merge ¶ added in v0.9.0
func (m *GetPeerRateLimitsReq) XXX_Merge(src proto.Message)
func (*GetPeerRateLimitsReq) XXX_Size ¶ added in v0.9.0
func (m *GetPeerRateLimitsReq) XXX_Size() int
func (*GetPeerRateLimitsReq) XXX_Unmarshal ¶ added in v0.9.0
func (m *GetPeerRateLimitsReq) XXX_Unmarshal(b []byte) error
type GetPeerRateLimitsResp ¶
type GetPeerRateLimitsResp struct { // Responses are in the same order as they appeared in the PeerRateLimitRequests RateLimits []*RateLimitResp `protobuf:"bytes,1,rep,name=rate_limits,json=rateLimits,proto3" json:"rate_limits,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` }
func (*GetPeerRateLimitsResp) Descriptor ¶
func (*GetPeerRateLimitsResp) Descriptor() ([]byte, []int)
func (*GetPeerRateLimitsResp) GetRateLimits ¶
func (m *GetPeerRateLimitsResp) GetRateLimits() []*RateLimitResp
func (*GetPeerRateLimitsResp) ProtoMessage ¶
func (*GetPeerRateLimitsResp) ProtoMessage()
func (*GetPeerRateLimitsResp) Reset ¶
func (m *GetPeerRateLimitsResp) Reset()
func (*GetPeerRateLimitsResp) String ¶
func (m *GetPeerRateLimitsResp) String() string
func (*GetPeerRateLimitsResp) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *GetPeerRateLimitsResp) XXX_DiscardUnknown()
func (*GetPeerRateLimitsResp) XXX_Marshal ¶ added in v0.9.0
func (m *GetPeerRateLimitsResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*GetPeerRateLimitsResp) XXX_Merge ¶ added in v0.9.0
func (m *GetPeerRateLimitsResp) XXX_Merge(src proto.Message)
func (*GetPeerRateLimitsResp) XXX_Size ¶ added in v0.9.0
func (m *GetPeerRateLimitsResp) XXX_Size() int
func (*GetPeerRateLimitsResp) XXX_Unmarshal ¶ added in v0.9.0
func (m *GetPeerRateLimitsResp) XXX_Unmarshal(b []byte) error
type GetRateLimitsReq ¶
type GetRateLimitsReq struct { Requests []*RateLimitReq `protobuf:"bytes,1,rep,name=requests,proto3" json:"requests,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` }
Must specify at least one Request
func (*GetRateLimitsReq) Descriptor ¶
func (*GetRateLimitsReq) Descriptor() ([]byte, []int)
func (*GetRateLimitsReq) GetRequests ¶
func (m *GetRateLimitsReq) GetRequests() []*RateLimitReq
func (*GetRateLimitsReq) ProtoMessage ¶
func (*GetRateLimitsReq) ProtoMessage()
func (*GetRateLimitsReq) Reset ¶
func (m *GetRateLimitsReq) Reset()
func (*GetRateLimitsReq) String ¶
func (m *GetRateLimitsReq) String() string
func (*GetRateLimitsReq) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *GetRateLimitsReq) XXX_DiscardUnknown()
func (*GetRateLimitsReq) XXX_Marshal ¶ added in v0.9.0
func (m *GetRateLimitsReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*GetRateLimitsReq) XXX_Merge ¶ added in v0.9.0
func (m *GetRateLimitsReq) XXX_Merge(src proto.Message)
func (*GetRateLimitsReq) XXX_Size ¶ added in v0.9.0
func (m *GetRateLimitsReq) XXX_Size() int
func (*GetRateLimitsReq) XXX_Unmarshal ¶ added in v0.9.0
func (m *GetRateLimitsReq) XXX_Unmarshal(b []byte) error
type GetRateLimitsResp ¶
type GetRateLimitsResp struct { Responses []*RateLimitResp `protobuf:"bytes,1,rep,name=responses,proto3" json:"responses,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` }
RateLimits returned are in the same order as the Requests
func (*GetRateLimitsResp) Descriptor ¶
func (*GetRateLimitsResp) Descriptor() ([]byte, []int)
func (*GetRateLimitsResp) GetResponses ¶
func (m *GetRateLimitsResp) GetResponses() []*RateLimitResp
func (*GetRateLimitsResp) ProtoMessage ¶
func (*GetRateLimitsResp) ProtoMessage()
func (*GetRateLimitsResp) Reset ¶
func (m *GetRateLimitsResp) Reset()
func (*GetRateLimitsResp) String ¶
func (m *GetRateLimitsResp) String() string
func (*GetRateLimitsResp) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *GetRateLimitsResp) XXX_DiscardUnknown()
func (*GetRateLimitsResp) XXX_Marshal ¶ added in v0.9.0
func (m *GetRateLimitsResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*GetRateLimitsResp) XXX_Merge ¶ added in v0.9.0
func (m *GetRateLimitsResp) XXX_Merge(src proto.Message)
func (*GetRateLimitsResp) XXX_Size ¶ added in v0.9.0
func (m *GetRateLimitsResp) XXX_Size() int
func (*GetRateLimitsResp) XXX_Unmarshal ¶ added in v0.9.0
func (m *GetRateLimitsResp) XXX_Unmarshal(b []byte) error
type HashFunc64 ¶ added in v0.9.0
var DefaultHash64 HashFunc64 = fnv1.HashBytes64
type HealthCheckReq ¶
type HealthCheckReq struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` }
func (*HealthCheckReq) Descriptor ¶
func (*HealthCheckReq) Descriptor() ([]byte, []int)
func (*HealthCheckReq) ProtoMessage ¶
func (*HealthCheckReq) ProtoMessage()
func (*HealthCheckReq) Reset ¶
func (m *HealthCheckReq) Reset()
func (*HealthCheckReq) String ¶
func (m *HealthCheckReq) String() string
func (*HealthCheckReq) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *HealthCheckReq) XXX_DiscardUnknown()
func (*HealthCheckReq) XXX_Marshal ¶ added in v0.9.0
func (m *HealthCheckReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*HealthCheckReq) XXX_Merge ¶ added in v0.9.0
func (m *HealthCheckReq) XXX_Merge(src proto.Message)
func (*HealthCheckReq) XXX_Size ¶ added in v0.9.0
func (m *HealthCheckReq) XXX_Size() int
func (*HealthCheckReq) XXX_Unmarshal ¶ added in v0.9.0
func (m *HealthCheckReq) XXX_Unmarshal(b []byte) error
type HealthCheckResp ¶
type HealthCheckResp struct { // Valid entries are 'healthy' or 'unhealthy' Status string `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"` // If 'unhealthy', message indicates the problem Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"` // The number of peers we know about PeerCount int32 `protobuf:"varint,3,opt,name=peer_count,json=peerCount,proto3" json:"peer_count,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` }
func (*HealthCheckResp) Descriptor ¶
func (*HealthCheckResp) Descriptor() ([]byte, []int)
func (*HealthCheckResp) GetMessage ¶
func (m *HealthCheckResp) GetMessage() string
func (*HealthCheckResp) GetPeerCount ¶
func (m *HealthCheckResp) GetPeerCount() int32
func (*HealthCheckResp) GetStatus ¶
func (m *HealthCheckResp) GetStatus() string
func (*HealthCheckResp) ProtoMessage ¶
func (*HealthCheckResp) ProtoMessage()
func (*HealthCheckResp) Reset ¶
func (m *HealthCheckResp) Reset()
func (*HealthCheckResp) String ¶
func (m *HealthCheckResp) String() string
func (*HealthCheckResp) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *HealthCheckResp) XXX_DiscardUnknown()
func (*HealthCheckResp) XXX_Marshal ¶ added in v0.9.0
func (m *HealthCheckResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*HealthCheckResp) XXX_Merge ¶ added in v0.9.0
func (m *HealthCheckResp) XXX_Merge(src proto.Message)
func (*HealthCheckResp) XXX_Size ¶ added in v0.9.0
func (m *HealthCheckResp) XXX_Size() int
func (*HealthCheckResp) XXX_Unmarshal ¶ added in v0.9.0
func (m *HealthCheckResp) XXX_Unmarshal(b []byte) error
type Interval ¶
type Interval struct {
    C chan struct{}
    // contains filtered or unexported fields
}
func NewInterval ¶
NewInterval creates a new ticker like object, however the `C` channel does not return the current time and `C` channel will only get a tick after `Next()` has been called.
type K8sPool ¶
type K8sPool struct {
// contains filtered or unexported fields
}
func NewK8sPool ¶
func NewK8sPool(conf K8sPoolConfig) (*K8sPool, error)
type K8sPoolConfig ¶
type K8sPoolConfig struct { Logger logrus.FieldLogger Mechanism WatchMechanism OnUpdate UpdateFunc Namespace string Selector string PodIP string PodPort string }
type LRUCache ¶ added in v0.7.1
type LRUCache struct {
// contains filtered or unexported fields
}
Cache is a thread unsafe LRU cache that supports expiration
func NewLRUCache ¶ added in v0.7.1
New creates a new Cache with a maximum size
func (*LRUCache) Collect ¶ added in v0.7.1
func (c *LRUCache) Collect(ch chan<- prometheus.Metric)
Collect fetches metric counts and gauges from the cache
func (*LRUCache) Describe ¶ added in v0.7.1
func (c *LRUCache) Describe(ch chan<- *prometheus.Desc)
Describe fetches prometheus metrics to be registered
func (*LRUCache) Remove ¶ added in v0.7.1
func (c *LRUCache) Remove(key interface{})
Remove removes the provided key from the cache.
func (*LRUCache) UpdateExpiration ¶ added in v0.7.1
Update the expiration time for the key
type LeakyBucketItem ¶ added in v0.7.1
type Loader ¶ added in v0.7.1
type Loader interface {
    // Load is called by gubernator just before the instance is ready to accept requests. The implementation
    // should return a channel gubernator can read to load all rate limits that should be loaded into the
    // instance cache. The implementation should close the channel to indicate no more rate limits left to load.
    Load() (chan *CacheItem, error)
    // Save is called by gubernator just before the instance is shutdown. The passed channel should be
    // read until the channel is closed.
    Save(chan *CacheItem) error
}
Loader interface allows implementors to store all or a subset of ratelimits into a persistent store during startup and shutdown of the gubernator instance.
type MemberListPool ¶
type MemberListPool struct {
// contains filtered or unexported fields
}
func NewMemberListPool ¶
func NewMemberListPool(ctx context.Context, conf MemberListPoolConfig) (*MemberListPool, error)
func (*MemberListPool) Close ¶
func (m *MemberListPool) Close()
type MemberListPoolConfig ¶
type MemberListPoolConfig struct { // (Required) This is the peer information that will be advertised to other members Advertise PeerInfo // (Required) This is the address:port the member list protocol listen for other members on MemberListAddress string // (Required) This is the address:port the member list will advertise to other members it finds AdvertiseAddress string // (Required) A list of nodes this member list instance can contact to find other members. KnownNodes []string // (Required) A callback function which is called when the member list changes OnUpdate UpdateFunc // (Optional) The name of the node this member list identifies itself as. NodeName string // (Optional) An interface through which logging will occur (Usually *logrus.Entry) Logger logrus.FieldLogger }
type MockLoader ¶ added in v0.7.1
func NewMockLoader ¶ added in v0.7.1
func NewMockLoader() *MockLoader
func (*MockLoader) Load ¶ added in v0.7.1
func (ml *MockLoader) Load() (chan *CacheItem, error)
func (*MockLoader) Save ¶ added in v0.7.1
func (ml *MockLoader) Save(in chan *CacheItem) error
type MockStore ¶ added in v0.7.1
func NewMockStore ¶ added in v0.7.1
func NewMockStore() *MockStore
func (*MockStore) Get ¶ added in v0.7.1
func (ms *MockStore) Get(r *RateLimitReq) (*CacheItem, bool)
func (*MockStore) OnChange ¶ added in v0.7.1
func (ms *MockStore) OnChange(r *RateLimitReq, item *CacheItem)
type PeerClient ¶
type PeerClient struct {
// contains filtered or unexported fields
}
func NewPeerClient ¶
func NewPeerClient(conf PeerConfig) *PeerClient
func (*PeerClient) GetLastErr ¶ added in v0.9.0
func (c *PeerClient) GetLastErr() []string
func (*PeerClient) GetPeerRateLimit ¶
func (c *PeerClient) GetPeerRateLimit(ctx context.Context, r *RateLimitReq) (*RateLimitResp, error)
GetPeerRateLimit forwards a rate limit request to a peer. If the rate limit has `behavior == BATCHING` configured this method will attempt to batch the rate limits
func (*PeerClient) GetPeerRateLimits ¶
func (c *PeerClient) GetPeerRateLimits(ctx context.Context, r *GetPeerRateLimitsReq) (*GetPeerRateLimitsResp, error)
GetPeerRateLimits requests a list of rate limit statuses from a peer
func (*PeerClient) Info ¶
func (c *PeerClient) Info() PeerInfo
Info returns PeerInfo struct that describes this PeerClient
func (*PeerClient) Shutdown ¶ added in v0.7.1
func (c *PeerClient) Shutdown(ctx context.Context) error
Shutdown will gracefully shutdown the client connection, until the context is cancelled
func (*PeerClient) UpdatePeerGlobals ¶
func (c *PeerClient) UpdatePeerGlobals(ctx context.Context, r *UpdatePeerGlobalsReq) (*UpdatePeerGlobalsResp, error)
UpdatePeerGlobals sends global rate limit status updates to a peer
type PeerConfig ¶
type PeerConfig struct {
    TLS      *tls.Config
    Behavior BehaviorConfig
    Info     PeerInfo
}
type PeerErr ¶ added in v0.9.0
type PeerErr struct {
// contains filtered or unexported fields
}
PeerErr is returned if the peer is not connected or is in a closing state
type PeerInfo ¶
type PeerInfo struct {
    // (Optional) The name of the data center this peer is in. Leave blank if not using multi data center support.
    DataCenter string `json:"data-center"`
    // (Optional) The http address:port of the peer
    HTTPAddress string `json:"http-address"`
    // (Required) The grpc address:port of the peer
    GRPCAddress string `json:"grpc-address"`
    // (Optional) Is true if PeerInfo is for this instance of gubernator
    IsOwner bool `json:"is-owner,omitempty"`
}
func RandomPeer ¶
RandomPeer returns a random peer from the list of peers provided
type PeerPicker ¶
type PeerPicker interface {
    GetByPeerInfo(PeerInfo) *PeerClient
    Peers() []*PeerClient
    Get(string) (*PeerClient, error)
    New() PeerPicker
    Add(*PeerClient)
    Size() int // TODO: Might not be useful?
}
type PeersV1Client ¶
type PeersV1Client interface { // Used by peers to relay batches of requests to an authoritative peer GetPeerRateLimits(ctx context.Context, in *GetPeerRateLimitsReq, opts ...grpc.CallOption) (*GetPeerRateLimitsResp, error) // Used by peers send global rate limit updates to other peers UpdatePeerGlobals(ctx context.Context, in *UpdatePeerGlobalsReq, opts ...grpc.CallOption) (*UpdatePeerGlobalsResp, error) }
PeersV1Client is the client API for PeersV1 service.
For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
func NewPeersV1Client ¶
func NewPeersV1Client(cc *grpc.ClientConn) PeersV1Client
type PeersV1Server ¶
type PeersV1Server interface { // Used by peers to relay batches of requests to an authoritative peer GetPeerRateLimits(context.Context, *GetPeerRateLimitsReq) (*GetPeerRateLimitsResp, error) // Used by peers send global rate limit updates to other peers UpdatePeerGlobals(context.Context, *UpdatePeerGlobalsReq) (*UpdatePeerGlobalsResp, error) }
PeersV1Server is the server API for PeersV1 service.
type PoolInterface ¶
type PoolInterface interface {
Close()
}
type RateLimitReq ¶
type RateLimitReq struct {
    // The name of the rate limit IE: 'requests_per_second', 'gets_per_minute`
    Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
    // Uniquely identifies this rate limit IE: 'ip:10.2.10.7' or 'account:123445'
    UniqueKey string `protobuf:"bytes,2,opt,name=unique_key,json=uniqueKey,proto3" json:"unique_key,omitempty"`
    // Rate limit requests optionally specify the number of hits a request adds to the matched limit. If Hit
    // is zero, the request returns the current limit, but does not increment the hit count.
    Hits int64 `protobuf:"varint,3,opt,name=hits,proto3" json:"hits,omitempty"`
    // The number of requests that can occur for the duration of the rate limit
    Limit int64 `protobuf:"varint,4,opt,name=limit,proto3" json:"limit,omitempty"`
    // The duration of the rate limit in milliseconds
    // Second = 1000 Milliseconds
    // Minute = 60000 Milliseconds
    // Hour = 3600000 Milliseconds
    Duration int64 `protobuf:"varint,5,opt,name=duration,proto3" json:"duration,omitempty"`
    // The algorithm used to calculate the rate limit. The algorithm may change on
    // subsequent requests, when this occurs any previous rate limit hit counts are reset.
    Algorithm Algorithm `protobuf:"varint,6,opt,name=algorithm,proto3,enum=pb.gubernator.Algorithm" json:"algorithm,omitempty"`
    // Behavior is a set of int32 flags that control the behavior of the rate limit in gubernator
    Behavior Behavior `protobuf:"varint,7,opt,name=behavior,proto3,enum=pb.gubernator.Behavior" json:"behavior,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}
func (*RateLimitReq) Descriptor ¶
func (*RateLimitReq) Descriptor() ([]byte, []int)
func (*RateLimitReq) GetAlgorithm ¶
func (m *RateLimitReq) GetAlgorithm() Algorithm
func (*RateLimitReq) GetBehavior ¶
func (m *RateLimitReq) GetBehavior() Behavior
func (*RateLimitReq) GetDuration ¶
func (m *RateLimitReq) GetDuration() int64
func (*RateLimitReq) GetHits ¶
func (m *RateLimitReq) GetHits() int64
func (*RateLimitReq) GetLimit ¶
func (m *RateLimitReq) GetLimit() int64
func (*RateLimitReq) GetName ¶
func (m *RateLimitReq) GetName() string
func (*RateLimitReq) GetUniqueKey ¶
func (m *RateLimitReq) GetUniqueKey() string
func (*RateLimitReq) HashKey ¶
func (m *RateLimitReq) HashKey() string
func (*RateLimitReq) ProtoMessage ¶
func (*RateLimitReq) ProtoMessage()
func (*RateLimitReq) Reset ¶
func (m *RateLimitReq) Reset()
func (*RateLimitReq) String ¶
func (m *RateLimitReq) String() string
func (*RateLimitReq) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *RateLimitReq) XXX_DiscardUnknown()
func (*RateLimitReq) XXX_Marshal ¶ added in v0.9.0
func (m *RateLimitReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*RateLimitReq) XXX_Merge ¶ added in v0.9.0
func (m *RateLimitReq) XXX_Merge(src proto.Message)
func (*RateLimitReq) XXX_Size ¶ added in v0.9.0
func (m *RateLimitReq) XXX_Size() int
func (*RateLimitReq) XXX_Unmarshal ¶ added in v0.9.0
func (m *RateLimitReq) XXX_Unmarshal(b []byte) error
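As a quick illustration of how these fields fit together, the sketch below builds a single RateLimitReq. The enum constants (Algorithm_TOKEN_BUCKET, Behavior_BATCHING) and the github.com/mailgun/gubernator import path follow the usual protobuf code-generation conventions and are assumptions that may differ between versions.

    package main

    import (
        "fmt"

        guber "github.com/mailgun/gubernator"
    )

    func main() {
        // A rate limit of 100 hits per second, scoped to a single account.
        req := &guber.RateLimitReq{
            Name:      "requests_per_sec",
            UniqueKey: "account_id=123",
            Hits:      1,    // set Hits to 0 to read the current limit without consuming it
            Limit:     100,
            Duration:  1000, // milliseconds
            Algorithm: guber.Algorithm_TOKEN_BUCKET, // assumed generated enum constant
            Behavior:  guber.Behavior_BATCHING,      // assumed generated enum constant
        }

        // HashKey() is derived from Name and UniqueKey and is used to pick the owning peer.
        fmt.Println(req.HashKey())
    }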
type RateLimitResp ¶
type RateLimitResp struct {
    // The status of the rate limit.
    Status Status `protobuf:"varint,1,opt,name=status,proto3,enum=pb.gubernator.Status" json:"status,omitempty"`
    // The currently configured request limit (Identical to RateLimitRequest.rate_limit_config.limit).
    Limit int64 `protobuf:"varint,2,opt,name=limit,proto3" json:"limit,omitempty"`
    // This is the number of requests remaining before the limit is hit.
    Remaining int64 `protobuf:"varint,3,opt,name=remaining,proto3" json:"remaining,omitempty"`
    // This is the time when the rate limit span will be reset, provided as a unix timestamp in milliseconds.
    ResetTime int64 `protobuf:"varint,4,opt,name=reset_time,json=resetTime,proto3" json:"reset_time,omitempty"`
    // Contains the error; If set all other values should be ignored
    Error string `protobuf:"bytes,5,opt,name=error,proto3" json:"error,omitempty"`
    // This is additional metadata that a client might find useful. (IE: Additional headers, coordinator ownership, etc..)
    Metadata map[string]string `` /* 157-byte string literal not displayed */

    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}
func (*RateLimitResp) Descriptor ¶
func (*RateLimitResp) Descriptor() ([]byte, []int)
func (*RateLimitResp) GetError ¶
func (m *RateLimitResp) GetError() string
func (*RateLimitResp) GetLimit ¶
func (m *RateLimitResp) GetLimit() int64
func (*RateLimitResp) GetMetadata ¶
func (m *RateLimitResp) GetMetadata() map[string]string
func (*RateLimitResp) GetRemaining ¶
func (m *RateLimitResp) GetRemaining() int64
func (*RateLimitResp) GetResetTime ¶
func (m *RateLimitResp) GetResetTime() int64
func (*RateLimitResp) GetStatus ¶
func (m *RateLimitResp) GetStatus() Status
func (*RateLimitResp) ProtoMessage ¶
func (*RateLimitResp) ProtoMessage()
func (*RateLimitResp) Reset ¶
func (m *RateLimitResp) Reset()
func (*RateLimitResp) String ¶
func (m *RateLimitResp) String() string
func (*RateLimitResp) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *RateLimitResp) XXX_DiscardUnknown()
func (*RateLimitResp) XXX_Marshal ¶ added in v0.9.0
func (m *RateLimitResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*RateLimitResp) XXX_Merge ¶ added in v0.9.0
func (m *RateLimitResp) XXX_Merge(src proto.Message)
func (*RateLimitResp) XXX_Size ¶ added in v0.9.0
func (m *RateLimitResp) XXX_Size() int
func (*RateLimitResp) XXX_Unmarshal ¶ added in v0.9.0
func (m *RateLimitResp) XXX_Unmarshal(b []byte) error
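Clients typically consume the response fields above in this order: check Error first, then Status, and use ResetTime (a unix timestamp in milliseconds) to compute a retry delay. A minimal sketch, assuming the generated Status_OVER_LIMIT enum constant:

    package main

    import (
        "fmt"
        "time"

        guber "github.com/mailgun/gubernator"
    )

    // retryAfter returns how long a caller should wait before retrying,
    // or zero if the request was not rate limited.
    func retryAfter(resp *guber.RateLimitResp) time.Duration {
        // If Error is set, all other values should be ignored.
        if resp.GetError() != "" {
            return 0
        }
        if resp.GetStatus() == guber.Status_OVER_LIMIT { // assumed generated enum constant
            // ResetTime is a unix timestamp in milliseconds.
            return time.Until(time.UnixMilli(resp.GetResetTime()))
        }
        return 0
    }

    func main() {
        resp := &guber.RateLimitResp{
            Status:    guber.Status_OVER_LIMIT,
            ResetTime: time.Now().Add(time.Second).UnixMilli(),
        }
        fmt.Println(retryAfter(resp).Round(time.Millisecond))
    }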
type RegionPeerPicker ¶ added in v0.9.0
type RegionPeerPicker interface {
    GetClients(string) ([]*PeerClient, error)
    GetByPeerInfo(PeerInfo) *PeerClient
    Pickers() map[string]PeerPicker
    Peers() []*PeerClient
    Add(*PeerClient)
    New() RegionPeerPicker
}
type RegionPicker ¶ added in v0.9.0
type RegionPicker struct {
    *ReplicatedConsistentHash
    // contains filtered or unexported fields
}
RegionPicker encapsulates pickers for a set of regions
func NewRegionPicker ¶ added in v0.9.0
func NewRegionPicker(fn HashFunc64) *RegionPicker
func (*RegionPicker) Add ¶ added in v0.9.0
func (rp *RegionPicker) Add(peer *PeerClient)
func (*RegionPicker) GetByPeerInfo ¶ added in v0.9.0
func (rp *RegionPicker) GetByPeerInfo(info PeerInfo) *PeerClient
GetByPeerInfo returns the first PeerClient whose PeerInfo.HashKey() matches the provided PeerInfo
func (*RegionPicker) GetClients ¶ added in v0.9.0
func (rp *RegionPicker) GetClients(key string) ([]*PeerClient, error)
GetClients returns all the PeerClients that match this key in all regions
func (*RegionPicker) New ¶ added in v0.9.0
func (rp *RegionPicker) New() RegionPeerPicker
func (*RegionPicker) Peers ¶ added in v0.9.0
func (rp *RegionPicker) Peers() []*PeerClient
func (*RegionPicker) Pickers ¶ added in v0.9.0
func (rp *RegionPicker) Pickers() map[string]PeerPicker
Pickers returns a map of each region and its respective PeerPicker
type ReplicatedConsistentHash ¶
type ReplicatedConsistentHash struct {
// contains filtered or unexported fields
}
Implements PeerPicker
func NewReplicatedConsistentHash ¶
func NewReplicatedConsistentHash(fn HashFunc64, replicas int) *ReplicatedConsistentHash
func (*ReplicatedConsistentHash) Add ¶
func (ch *ReplicatedConsistentHash) Add(peer *PeerClient)
Adds a peer to the hash
func (*ReplicatedConsistentHash) Get ¶
func (ch *ReplicatedConsistentHash) Get(key string) (*PeerClient, error)
Given a key, return the peer that key is assigned to
func (*ReplicatedConsistentHash) GetByPeerInfo ¶
func (ch *ReplicatedConsistentHash) GetByPeerInfo(peer PeerInfo) *PeerClient
Returns the peer by hostname
func (*ReplicatedConsistentHash) New ¶
func (ch *ReplicatedConsistentHash) New() PeerPicker
func (*ReplicatedConsistentHash) Peers ¶
func (ch *ReplicatedConsistentHash) Peers() []*PeerClient
func (*ReplicatedConsistentHash) Size ¶
func (ch *ReplicatedConsistentHash) Size() int
Returns the number of peers in the picker
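For illustration, the sketch below builds a ReplicatedConsistentHash backed by a standard-library FNV-1a hash. The replica count is arbitrary, and the HashFunc64 signature is assumed to be func([]byte) uint64; peers would normally be added via Add(*PeerClient) as cluster membership changes.

    package main

    import (
        "fmt"
        "hash/fnv"

        guber "github.com/mailgun/gubernator"
    )

    func main() {
        // Any 64-bit hash function can back the picker; FNV-1a is used here.
        hashFn := func(b []byte) uint64 {
            h := fnv.New64a()
            h.Write(b)
            return h.Sum64()
        }

        // 512 virtual replicas per peer is an arbitrary choice for this sketch.
        picker := guber.NewReplicatedConsistentHash(hashFn, 512)
        fmt.Println("peers in picker:", picker.Size())

        // With no peers added, Get is expected to fail.
        if _, err := picker.Get("account_id=123"); err != nil {
            fmt.Println("no peer for key:", err)
        }
    }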
type Store ¶ added in v0.7.1
type Store interface {
    // Called by gubernator *after* a rate limit item is updated. It's up to the store to
    // decide if this rate limit item should be persisted in the store. It's up to the
    // store to expire old rate limit items. The CacheItem represents the current state of
    // the rate limit item *after* the RateLimitReq has been applied.
    OnChange(r *RateLimitReq, item *CacheItem)

    // Called by gubernator when a rate limit is missing from the cache. It's up to the store
    // to decide if this request is fulfilled. Should return true if the request is fulfilled
    // and false if the request is not fulfilled or doesn't exist in the store.
    Get(r *RateLimitReq) (*CacheItem, bool)

    // Called by gubernator when an existing rate limit should be removed from the store.
    // NOTE: This is NOT called when a rate limit expires from the cache; store implementors
    // must expire rate limits in the store.
    Remove(key string)
}
The Store interface allows implementors to offload storage of all or a subset of rate limits to a persistent store. The OnChange() and Get() methods should avoid blocking as much as possible, as they are called on every rate limit request and will affect the performance of gubernator.
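A minimal in-memory Store sketch is shown below, keyed by RateLimitReq.HashKey(). It is only meant to illustrate the call pattern; a real implementation would persist items durably and expire old entries itself, as the interface requires, and wiring the store into the server Config is not shown here.

    package store

    import (
        "sync"

        guber "github.com/mailgun/gubernator"
    )

    // MemoryStore is a sketch of the Store interface backed by a map.
    // It does NOT expire items.
    type MemoryStore struct {
        mu    sync.RWMutex
        items map[string]*guber.CacheItem
    }

    // Compile-time check that MemoryStore satisfies the Store interface.
    var _ guber.Store = (*MemoryStore)(nil)

    func NewMemoryStore() *MemoryStore {
        return &MemoryStore{items: make(map[string]*guber.CacheItem)}
    }

    // OnChange is called after a rate limit item is updated; we simply overwrite our copy.
    func (s *MemoryStore) OnChange(r *guber.RateLimitReq, item *guber.CacheItem) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.items[r.HashKey()] = item
    }

    // Get is called when a rate limit is missing from the cache.
    func (s *MemoryStore) Get(r *guber.RateLimitReq) (*guber.CacheItem, bool) {
        s.mu.RLock()
        defer s.mu.RUnlock()
        item, ok := s.items[r.HashKey()]
        return item, ok
    }

    // Remove is called when an existing rate limit should be removed from the store.
    func (s *MemoryStore) Remove(key string) {
        s.mu.Lock()
        defer s.mu.Unlock()
        delete(s.items, key)
    }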
type TLSConfig ¶
type TLSConfig struct {
    // (Optional) The path to the Trusted Certificate Authority.
    CaFile string

    // (Optional) The path to the Trusted Certificate Authority private key.
    CaKeyFile string

    // (Optional) The path to the un-encrypted key for the server certificate.
    KeyFile string

    // (Optional) The path to the server certificate.
    CertFile string

    // (Optional) If true gubernator will generate self-signed certificates. If CaFile and CaKeyFile
    // are set but no KeyFile or CertFile is set then gubernator will generate a self-signed key using
    // the CaFile provided.
    AutoTLS bool

    // (Optional) Sets the Client Authentication type as defined in the 'tls' package.
    // Defaults to tls.NoClientCert. See the standard library tls.ClientAuthType for valid values.
    // If set to anything but tls.NoClientCert then SetupTLS() attempts to load ClientAuthCaFile,
    // ClientAuthKeyFile and ClientAuthCertFile and sets those certs into the ClientTLS struct. If
    // none of the ClientAuth*File fields are set, uses KeyFile and CertFile for client authentication.
    ClientAuth tls.ClientAuthType

    // (Optional) The path to the Trusted Certificate Authority used for client auth. If ClientAuth is
    // set and this field is empty, then CaFile is used to auth clients.
    ClientAuthCaFile string

    // (Optional) The path to the client private key, which is used to create the ClientTLS config. If
    // ClientAuth is set and this field is empty then KeyFile is used to create the ClientTLS.
    ClientAuthKeyFile string

    // (Optional) The path to the client cert key, which is used to create the ClientTLS config. If
    // ClientAuth is set and this field is empty then KeyFile is used to create the ClientTLS.
    ClientAuthCertFile string

    // (Optional) If InsecureSkipVerify is true, TLS clients will accept any certificate
    // presented by the server and any host name in that certificate.
    InsecureSkipVerify bool

    // (Optional) A Logger which implements the declared logger interface (typically *logrus.Entry)
    Logger logrus.FieldLogger

    // (Optional) The CA Certificate in PEM format. Used if CaFile is unset.
    CaPEM *bytes.Buffer

    // (Optional) The CA Private Key in PEM format. Used if CaKeyFile is unset.
    CaKeyPEM *bytes.Buffer

    // (Optional) The Certificate Key in PEM format. Used if KeyFile is unset.
    KeyPEM *bytes.Buffer

    // (Optional) The Certificate in PEM format. Used if CertFile is unset.
    CertPEM *bytes.Buffer

    // (Optional) The client auth CA Certificate in PEM format. Used if ClientAuthCaFile is unset.
    ClientAuthCaPEM *bytes.Buffer

    // (Optional) The client auth private key in PEM format. Used if ClientAuthKeyFile is unset.
    ClientAuthKeyPEM *bytes.Buffer

    // (Optional) The client auth Certificate in PEM format. Used if ClientAuthCertFile is unset.
    ClientAuthCertPEM *bytes.Buffer

    // (Optional) The config created for use by the gubernator server. If set, all other
    // fields in this struct are ignored and this config is used. If unset, gubernator.SetupTLS()
    // will create a config using the above fields.
    ServerTLS *tls.Config

    // (Optional) The config created for use by gubernator clients and peer communication. If set, all other
    // fields in this struct are ignored and this config is used. If unset, gubernator.SetupTLS()
    // will create a config using the above fields.
    ClientTLS *tls.Config
}
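As a sketch of how these fields are typically used, the example below asks SetupTLS() to generate self-signed certificates via AutoTLS. The exact SetupTLS signature (taking *TLSConfig and returning an error) is an assumption.

    package main

    import (
        "log"

        guber "github.com/mailgun/gubernator"
    )

    func main() {
        // AutoTLS generates a self-signed certificate when no key/cert files are given.
        conf := &guber.TLSConfig{AutoTLS: true}

        // SetupTLS is expected to populate conf.ServerTLS and conf.ClientTLS
        // from the fields above (signature assumed).
        if err := guber.SetupTLS(conf); err != nil {
            log.Fatal(err)
        }

        // conf.ServerTLS can now be handed to the gRPC/HTTP servers and
        // conf.ClientTLS to peer clients.
        log.Printf("server TLS configured: %v", conf.ServerTLS != nil)
    }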
type TokenBucketItem ¶ added in v0.8.0
type UnimplementedPeersV1Server ¶ added in v0.9.0
type UnimplementedPeersV1Server struct { }
UnimplementedPeersV1Server can be embedded to have forward compatible implementations.
func (*UnimplementedPeersV1Server) GetPeerRateLimits ¶ added in v0.9.0
func (*UnimplementedPeersV1Server) GetPeerRateLimits(ctx context.Context, req *GetPeerRateLimitsReq) (*GetPeerRateLimitsResp, error)
func (*UnimplementedPeersV1Server) UpdatePeerGlobals ¶ added in v0.9.0
func (*UnimplementedPeersV1Server) UpdatePeerGlobals(ctx context.Context, req *UpdatePeerGlobalsReq) (*UpdatePeerGlobalsResp, error)
type UnimplementedV1Server ¶ added in v0.9.0
type UnimplementedV1Server struct { }
UnimplementedV1Server can be embedded to have forward compatible implementations.
func (*UnimplementedV1Server) GetRateLimits ¶ added in v0.9.0
func (*UnimplementedV1Server) GetRateLimits(ctx context.Context, req *GetRateLimitsReq) (*GetRateLimitsResp, error)
func (*UnimplementedV1Server) HealthCheck ¶ added in v0.9.0
func (*UnimplementedV1Server) HealthCheck(ctx context.Context, req *HealthCheckReq) (*HealthCheckResp, error)
type UpdateFunc ¶
type UpdateFunc func([]PeerInfo)
type UpdatePeerGlobal ¶
type UpdatePeerGlobal struct {
    Key       string         `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
    Status    *RateLimitResp `protobuf:"bytes,2,opt,name=status,proto3" json:"status,omitempty"`
    Algorithm Algorithm      `protobuf:"varint,3,opt,name=algorithm,proto3,enum=pb.gubernator.Algorithm" json:"algorithm,omitempty"`

    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}
func (*UpdatePeerGlobal) Descriptor ¶
func (*UpdatePeerGlobal) Descriptor() ([]byte, []int)
func (*UpdatePeerGlobal) GetAlgorithm ¶ added in v0.7.1
func (m *UpdatePeerGlobal) GetAlgorithm() Algorithm
func (*UpdatePeerGlobal) GetKey ¶
func (m *UpdatePeerGlobal) GetKey() string
func (*UpdatePeerGlobal) GetStatus ¶
func (m *UpdatePeerGlobal) GetStatus() *RateLimitResp
func (*UpdatePeerGlobal) ProtoMessage ¶
func (*UpdatePeerGlobal) ProtoMessage()
func (*UpdatePeerGlobal) Reset ¶
func (m *UpdatePeerGlobal) Reset()
func (*UpdatePeerGlobal) String ¶
func (m *UpdatePeerGlobal) String() string
func (*UpdatePeerGlobal) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *UpdatePeerGlobal) XXX_DiscardUnknown()
func (*UpdatePeerGlobal) XXX_Marshal ¶ added in v0.9.0
func (m *UpdatePeerGlobal) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*UpdatePeerGlobal) XXX_Merge ¶ added in v0.9.0
func (m *UpdatePeerGlobal) XXX_Merge(src proto.Message)
func (*UpdatePeerGlobal) XXX_Size ¶ added in v0.9.0
func (m *UpdatePeerGlobal) XXX_Size() int
func (*UpdatePeerGlobal) XXX_Unmarshal ¶ added in v0.9.0
func (m *UpdatePeerGlobal) XXX_Unmarshal(b []byte) error
type UpdatePeerGlobalsReq ¶
type UpdatePeerGlobalsReq struct {
    // Must specify at least one RateLimit
    Globals []*UpdatePeerGlobal `protobuf:"bytes,1,rep,name=globals,proto3" json:"globals,omitempty"`

    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}
func (*UpdatePeerGlobalsReq) Descriptor ¶
func (*UpdatePeerGlobalsReq) Descriptor() ([]byte, []int)
func (*UpdatePeerGlobalsReq) GetGlobals ¶
func (m *UpdatePeerGlobalsReq) GetGlobals() []*UpdatePeerGlobal
func (*UpdatePeerGlobalsReq) ProtoMessage ¶
func (*UpdatePeerGlobalsReq) ProtoMessage()
func (*UpdatePeerGlobalsReq) Reset ¶
func (m *UpdatePeerGlobalsReq) Reset()
func (*UpdatePeerGlobalsReq) String ¶
func (m *UpdatePeerGlobalsReq) String() string
func (*UpdatePeerGlobalsReq) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *UpdatePeerGlobalsReq) XXX_DiscardUnknown()
func (*UpdatePeerGlobalsReq) XXX_Marshal ¶ added in v0.9.0
func (m *UpdatePeerGlobalsReq) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*UpdatePeerGlobalsReq) XXX_Merge ¶ added in v0.9.0
func (m *UpdatePeerGlobalsReq) XXX_Merge(src proto.Message)
func (*UpdatePeerGlobalsReq) XXX_Size ¶ added in v0.9.0
func (m *UpdatePeerGlobalsReq) XXX_Size() int
func (*UpdatePeerGlobalsReq) XXX_Unmarshal ¶ added in v0.9.0
func (m *UpdatePeerGlobalsReq) XXX_Unmarshal(b []byte) error
type UpdatePeerGlobalsResp ¶
type UpdatePeerGlobalsResp struct {
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}
func (*UpdatePeerGlobalsResp) Descriptor ¶
func (*UpdatePeerGlobalsResp) Descriptor() ([]byte, []int)
func (*UpdatePeerGlobalsResp) ProtoMessage ¶
func (*UpdatePeerGlobalsResp) ProtoMessage()
func (*UpdatePeerGlobalsResp) Reset ¶
func (m *UpdatePeerGlobalsResp) Reset()
func (*UpdatePeerGlobalsResp) String ¶
func (m *UpdatePeerGlobalsResp) String() string
func (*UpdatePeerGlobalsResp) XXX_DiscardUnknown ¶ added in v0.9.0
func (m *UpdatePeerGlobalsResp) XXX_DiscardUnknown()
func (*UpdatePeerGlobalsResp) XXX_Marshal ¶ added in v0.9.0
func (m *UpdatePeerGlobalsResp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*UpdatePeerGlobalsResp) XXX_Merge ¶ added in v0.9.0
func (m *UpdatePeerGlobalsResp) XXX_Merge(src proto.Message)
func (*UpdatePeerGlobalsResp) XXX_Size ¶ added in v0.9.0
func (m *UpdatePeerGlobalsResp) XXX_Size() int
func (*UpdatePeerGlobalsResp) XXX_Unmarshal ¶ added in v0.9.0
func (m *UpdatePeerGlobalsResp) XXX_Unmarshal(b []byte) error
type V1Client ¶
type V1Client interface {
    // Given a list of rate limit requests, return the rate limits of each.
    GetRateLimits(ctx context.Context, in *GetRateLimitsReq, opts ...grpc.CallOption) (*GetRateLimitsResp, error)
    // This method is for round trip benchmarking and can be used by
    // the client to determine connectivity to the server
    HealthCheck(ctx context.Context, in *HealthCheckReq, opts ...grpc.CallOption) (*HealthCheckResp, error)
}
V1Client is the client API for V1 service.
For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
func DialV1Server ¶
DialV1Server is a convenience function for dialing gubernator instances
func NewV1Client ¶
func NewV1Client(cc *grpc.ClientConn) V1Client
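A minimal client sketch using NewV1Client follows. The server address, the insecure dial option, and the GetRateLimitsReq.Requests field name are assumptions, not taken from this reference.

    package main

    import (
        "context"
        "log"
        "time"

        guber "github.com/mailgun/gubernator"
        "google.golang.org/grpc"
    )

    func main() {
        // Dial a gubernator instance (address and insecure transport are assumptions).
        conn, err := grpc.Dial("127.0.0.1:81", grpc.WithInsecure())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := guber.NewV1Client(conn)
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        resp, err := client.GetRateLimits(ctx, &guber.GetRateLimitsReq{
            // Field name assumed from the service definition.
            Requests: []*guber.RateLimitReq{{
                Name:      "requests_per_sec",
                UniqueKey: "account_id=123",
                Hits:      1,
                Limit:     100,
                Duration:  1000, // milliseconds
            }},
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("%+v", resp)
    }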
type V1Instance ¶
type V1Instance struct {
// contains filtered or unexported fields
}
func NewV1Instance ¶
func NewV1Instance(conf Config) (*V1Instance, error)
NewV1Instance instantiates a single instance of a gubernator peer and registers this instance with the provided GRPCServer.
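When embedding gubernator as a library, the flow is roughly: build a Config, create the instance, then feed it the current peer list via SetPeers whenever membership changes. The sketch below leaves the Config empty as a placeholder only; a real Config needs the gRPC server registration and cache settings described by the Config type earlier in this document.

    package main

    import (
        "log"

        guber "github.com/mailgun/gubernator"
    )

    func main() {
        // Placeholder only: a usable Config must be populated as described by the
        // Config type (gRPC servers, cache, behaviors, etc.).
        var conf guber.Config

        instance, err := guber.NewV1Instance(conf)
        if err != nil {
            log.Fatal(err)
        }
        defer instance.Close()

        // Tell the instance who its peers are; in practice this is driven by the
        // etcd, memberlist, or kubernetes pool implementations. The entry with
        // IsOwner set describes this instance.
        instance.SetPeers([]guber.PeerInfo{
            {GRPCAddress: "10.0.0.1:81", HTTPAddress: "10.0.0.1:80", IsOwner: true},
            {GRPCAddress: "10.0.0.2:81", HTTPAddress: "10.0.0.2:80"},
        })
    }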
func (*V1Instance) Close ¶
func (s *V1Instance) Close() error
func (*V1Instance) Collect ¶
func (s *V1Instance) Collect(ch chan<- prometheus.Metric)
Collect fetches metrics from the server for use by prometheus
func (*V1Instance) Describe ¶
func (s *V1Instance) Describe(ch chan<- *prometheus.Desc)
Describe fetches prometheus metrics to be registered
func (*V1Instance) GetPeer ¶
func (s *V1Instance) GetPeer(key string) (*PeerClient, error)
GetPeer returns a peer client for the hash key provided
func (*V1Instance) GetPeerList ¶
func (s *V1Instance) GetPeerList() []*PeerClient
func (*V1Instance) GetPeerRateLimits ¶
func (s *V1Instance) GetPeerRateLimits(ctx context.Context, r *GetPeerRateLimitsReq) (*GetPeerRateLimitsResp, error)
GetPeerRateLimits is called by other peers to get the rate limits owned by this peer.
func (*V1Instance) GetRateLimits ¶
func (s *V1Instance) GetRateLimits(ctx context.Context, r *GetRateLimitsReq) (*GetRateLimitsResp, error)
GetRateLimits is the public interface used by clients to request rate limits from the system. If the rate limit `Name` and `UniqueKey` are not owned by this instance, the request is forwarded to the peer that owns them.
func (*V1Instance) GetRegionPickers ¶
func (s *V1Instance) GetRegionPickers() map[string]PeerPicker
func (*V1Instance) HealthCheck ¶
func (s *V1Instance) HealthCheck(ctx context.Context, r *HealthCheckReq) (*HealthCheckResp, error)
HealthCheck returns the health of our instance.
func (*V1Instance) SetPeers ¶
func (s *V1Instance) SetPeers(peerInfo []PeerInfo)
SetPeers is called by the implementor to indicate the pool of peers has changed
func (*V1Instance) UpdatePeerGlobals ¶
func (s *V1Instance) UpdatePeerGlobals(ctx context.Context, r *UpdatePeerGlobalsReq) (*UpdatePeerGlobalsResp, error)
UpdatePeerGlobals updates the local cache with a list of global rate limits. This method should only be called by a peer who is the owner of a global rate limit.
type V1Server ¶
type V1Server interface {
    // Given a list of rate limit requests, return the rate limits of each.
    GetRateLimits(context.Context, *GetRateLimitsReq) (*GetRateLimitsResp, error)
    // This method is for round trip benchmarking and can be used by
    // the client to determine connectivity to the server
    HealthCheck(context.Context, *HealthCheckReq) (*HealthCheckResp, error)
}
V1Server is the server API for V1 service.
type WatchMechanism ¶
type WatchMechanism string
const (
    WatchEndpoints WatchMechanism = "endpoints"
    WatchPods      WatchMechanism = "pods"
)
func WatchMechanismFromString ¶
func WatchMechanismFromString(mechanism string) (WatchMechanism, error)
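WatchMechanismFromString maps a configuration string onto one of the WatchMechanism constants above. A small usage sketch, assuming the accepted inputs are the constant values ("endpoints", "pods"):

    package main

    import (
        "log"

        guber "github.com/mailgun/gubernator"
    )

    func main() {
        // "pods" is expected to map to the WatchPods constant above.
        mech, err := guber.WatchMechanismFromString("pods")
        if err != nil {
            log.Fatal(err)
        }
        log.Println(mech == guber.WatchPods)
    }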
Source Files ¶
- algorithms.go
- cache.go
- client.go
- config.go
- daemon.go
- etcd.go
- global.go
- grpc_stats.go
- gubernator.go
- gubernator.pb.go
- gubernator.pb.gw.go
- hash.go
- interval.go
- kubernetes.go
- kubernetesconfig.go
- memberlist.go
- multiregion.go
- net.go
- peer_client.go
- peers.pb.go
- region_picker.go
- replicated_hash.go
- store.go
- tls.go