Documentation ¶
Index ¶
- Variables
- func DbToUserSettings(fsettings []*edgeproto.FlowRateLimitSettings, ...) []*edgeproto.RateLimitSettings
- func GetDmeStreamRateLimiterInterceptor(limiter Limiter) grpc.StreamServerInterceptor
- func GetDmeUnaryRateLimiterInterceptor(limiter Limiter) grpc.UnaryServerInterceptor
- func UserToDbSettings(settings []*edgeproto.RateLimitSettings) ([]*edgeproto.FlowRateLimitSettings, []*edgeproto.MaxReqsRateLimitSettings)
- type CallerInfo
- type CompositeLimiter
- type IntervalLimiter
- type LeakyBucketLimiter
- type Limiter
- type LimiterStreamWrapper
- type RateLimitManager
- func (r *RateLimitManager) CreateApiEndpointLimiter(api string, allRequestsRateLimitSettings *edgeproto.RateLimitSettings, ...)
- func (r *RateLimitManager) Limit(ctx context.Context, info *CallerInfo) error
- func (r *RateLimitManager) PruneFlowRateLimitSettings(keys map[edgeproto.FlowRateLimitSettingsKey]struct{})
- func (r *RateLimitManager) PruneMaxReqsRateLimitSettings(keys map[edgeproto.MaxReqsRateLimitSettingsKey]struct{})
- func (r *RateLimitManager) RemoveFlowRateLimitSettings(key edgeproto.FlowRateLimitSettingsKey)
- func (r *RateLimitManager) RemoveMaxReqsRateLimitSettings(key edgeproto.MaxReqsRateLimitSettingsKey)
- func (r *RateLimitManager) Type() string
- func (r *RateLimitManager) UpdateDisableRateLimit(disable bool)
- func (r *RateLimitManager) UpdateFlowRateLimitSettings(flowRateLimitSettings *edgeproto.FlowRateLimitSettings)
- func (r *RateLimitManager) UpdateMaxReqsRateLimitSettings(maxReqsRateLimitSettings *edgeproto.MaxReqsRateLimitSettings)
- func (r *RateLimitManager) UpdateMaxTrackedIps(max int)
- func (r *RateLimitManager) UpdateMaxTrackedUsers(max int)
- type TokenBucketLimiter
Constants ¶
This section is empty.
Variables ¶
var DefaultReqsPerSecondPerApi = 100.0
var DefaultTokenBucketSize int64 = 10 // equivalent to burst size
Functions ¶
func DbToUserSettings ¶
func DbToUserSettings(fsettings []*edgeproto.FlowRateLimitSettings, msettings []*edgeproto.MaxReqsRateLimitSettings) []*edgeproto.RateLimitSettings
Convert db-based objects to user-based objects
func GetDmeStreamRateLimiterInterceptor ¶
func GetDmeStreamRateLimiterInterceptor(limiter Limiter) grpc.StreamServerInterceptor
Return a grpc stream server interceptor that does rate limiting for DME
func GetDmeUnaryRateLimiterInterceptor ¶
func GetDmeUnaryRateLimiterInterceptor(limiter Limiter) grpc.UnaryServerInterceptor
Return a grpc unary server interceptor that does rate limiting for DME
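A sketch of how both interceptors might be wired into a gRPC server. The import path for this package and the choice of limiter are assumptions, not taken from this page:

import (
	"google.golang.org/grpc"

	"example.com/ratelimit" // hypothetical import path for this package
)

// newDmeGrpcServer builds a gRPC server whose unary and stream handlers
// are both gated by the supplied Limiter.
func newDmeGrpcServer(limiter ratelimit.Limiter) *grpc.Server {
	return grpc.NewServer(
		grpc.UnaryInterceptor(ratelimit.GetDmeUnaryRateLimiterInterceptor(limiter)),
		grpc.StreamInterceptor(ratelimit.GetDmeStreamRateLimiterInterceptor(limiter)),
	)
}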
func UserToDbSettings ¶
func UserToDbSettings(settings []*edgeproto.RateLimitSettings) ([]*edgeproto.FlowRateLimitSettings, []*edgeproto.MaxReqsRateLimitSettings)
Convert user-based objects to db-based objects
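A minimal round-trip sketch of the two conversion functions; the wrapper function and variable names are illustrative only:

// syncSettings is a hypothetical helper showing the round trip between
// the user-facing and db-backed representations.
func syncSettings(userSettings []*edgeproto.RateLimitSettings) []*edgeproto.RateLimitSettings {
	// Split the user-facing settings into the two db-backed forms for storage ...
	fsettings, msettings := ratelimit.UserToDbSettings(userSettings)
	// ... and reassemble them into user-facing settings when reading back.
	return ratelimit.DbToUserSettings(fsettings, msettings)
}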
Types ¶
type CallerInfo ¶
Struct used to supply client/caller information to Limiters
type CompositeLimiter ¶
type CompositeLimiter struct {
// contains filtered or unexported fields
}
* Composes multiple limiters into one
func NewCompositeLimiter ¶
func NewCompositeLimiter(limiters ...Limiter) *CompositeLimiter
func (*CompositeLimiter) Limit ¶
func (c *CompositeLimiter) Limit(ctx context.Context, info *CallerInfo) error
func (*CompositeLimiter) Type ¶
func (c *CompositeLimiter) Type() string
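A sketch that composes two of the package's limiters into one. The exact pass/reject semantics of the composite, and the ctx and callerInfo values, are assumptions based on the descriptions on this page:

// Hypothetical composite: gate requests on both a per-second flow limit
// and an hourly request cap.
limiter := ratelimit.NewCompositeLimiter(
	ratelimit.NewTokenBucketLimiter(100, 10),      // 100 tokens/sec, burst of 10
	ratelimit.NewIntervalLimiter(1000, time.Hour), // at most 1000 requests per hour
)
if err := limiter.Limit(ctx, callerInfo); err != nil {
	// one of the composed limiters rejected the request
}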
type IntervalLimiter ¶
* Limits requests based on the requestLimit set for the specified interval
* For example, if the interval is 1 hour and requestLimit is 100, the Limit function will reject requests once 100 requests have been reached, but will reset the count when an hour has passed.
func NewIntervalLimiter ¶
func NewIntervalLimiter(reqLimit int, interval time.Duration) *IntervalLimiter
Creates IntervalLimiter
func (*IntervalLimiter) Limit ¶
func (i *IntervalLimiter) Limit(ctx context.Context, info *CallerInfo) error
TODO: Charge once the API limit is surpassed
func (*IntervalLimiter) Type ¶
func (i *IntervalLimiter) Type() string
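A sketch of the 100-requests-per-hour example from the description; ctx and callerInfo are assumed to be in scope:

limiter := ratelimit.NewIntervalLimiter(100, time.Hour)
if err := limiter.Limit(ctx, callerInfo); err != nil {
	// the 100-request limit for the current hour has been reached;
	// the count resets once the hour-long interval has passed
}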
type LeakyBucketLimiter ¶
type LeakyBucketLimiter struct {
// contains filtered or unexported fields
}
* The time/rate package limiter that uses Wait() with maxBurstSize == 1 implements the leaky bucket algorithm as a queue (to use leaky bucket as a meter, use TokenBucketLimiter)
* Requests are never rejected, just queued up and then "leaked" out of the bucket at a set rate (reqsPerSecond)
* Useful for throttling requests (e.g. grpc interceptor)
* FlowRateLimitAlgorithm
func NewLeakyBucketLimiter ¶
func NewLeakyBucketLimiter(reqsPerSecond float64) *LeakyBucketLimiter
func (*LeakyBucketLimiter) Limit ¶
func (l *LeakyBucketLimiter) Limit(ctx context.Context, info *CallerInfo) error
func (*LeakyBucketLimiter) Type ¶
func (l *LeakyBucketLimiter) Type() string
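A sketch of the queueing behavior; whether Limit returns an error on context cancellation is an assumption based on the underlying time/rate Wait() semantics, and ctx and callerInfo are assumed to be in scope:

limiter := ratelimit.NewLeakyBucketLimiter(50) // let requests leak out at 50 per second
// Limit queues the caller rather than rejecting it; it returns once the
// request is allowed through (or if ctx is cancelled while waiting).
if err := limiter.Limit(ctx, callerInfo); err != nil {
	// likely a cancelled or expired context, not a rejection
}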
type Limiter ¶
type Limiter interface {
	Limit(ctx context.Context, info *CallerInfo) error
	Type() string
}
* Limiter Interface
* Structs that implement this interface must provide a Limit function that returns whether or not to allow a request to go through
* The return value is an error: a non-nil return rejects (ie. limits) the request, and a nil return passes the request
* Current implementations in: api_ratelimitmgr.go, apiendpoint-limiter.go, leakybucket.go, tokenbucket.go
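A hypothetical custom limiter, shown only to illustrate the interface contract (nil passes, non-nil rejects); it is not part of this package:

// allowAllLimiter is a hypothetical Limiter implementation that never rejects.
type allowAllLimiter struct{}

// Limit returns nil, so every request is passed through.
func (a *allowAllLimiter) Limit(ctx context.Context, info *ratelimit.CallerInfo) error {
	return nil
}

// Type returns a name for this limiter.
func (a *allowAllLimiter) Type() string {
	return "AllowAllLimiter"
}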
type LimiterStreamWrapper ¶
type LimiterStreamWrapper struct {
	grpc.ServerStream
	// contains filtered or unexported fields
}
func (*LimiterStreamWrapper) Context ¶
func (s *LimiterStreamWrapper) Context() context.Context
type RateLimitManager ¶
* RateLimitManager manages all the rate limits per API for a node (DME, Controller, and MC)
* limitsPerApi maps an API to an ApiEndpointLimiter struct, which handles all of the rate limiting for the endpoint, per IP, per user, and/or per org
* apisPerRateLimitSettingsKey maps a RateLimitSettingsKey to a list of APIs (e.g. CreateApp, CreateCloudlet, etc.). This is used to update the rate limit settings if the rate limit settings API is updated
func NewRateLimitManager ¶
func NewRateLimitManager(disableRateLimit bool, maxTrackedIps int, maxTrackedUsers int) *RateLimitManager
Create a RateLimitManager
func (*RateLimitManager) CreateApiEndpointLimiter ¶
func (r *RateLimitManager) CreateApiEndpointLimiter(api string, allRequestsRateLimitSettings *edgeproto.RateLimitSettings, perIpRateLimitSettings *edgeproto.RateLimitSettings, perUserRateLimitSettings *edgeproto.RateLimitSettings)
Initialize an ApiEndpointLimiter struct for the specified API. This is called for each API during the initialization of the node (i.e. in dme-main.go)
func (*RateLimitManager) Limit ¶
func (r *RateLimitManager) Limit(ctx context.Context, info *CallerInfo) error
Implements the Limiter interface
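A sketch of node initialization. The API name, the limit bounds, and the edgeproto settings values are assumptions (in practice they come from the node's configuration), and the ratelimit, edgeproto, and grpc packages are assumed to be imported as in the earlier interceptor sketch:

// initDmeRateLimiting is a hypothetical setup function; the settings
// arguments are assumed to be populated elsewhere (e.g. from node config).
func initDmeRateLimiting(allReqs, perIp, perUser *edgeproto.RateLimitSettings) *grpc.Server {
	// Rate limiting enabled; track at most 10000 client IPs and 10000 users.
	mgr := ratelimit.NewRateLimitManager(false, 10000, 10000)
	// Register per-endpoint settings for each API served by this node.
	mgr.CreateApiEndpointLimiter("VerifyLocation", allReqs, perIp, perUser)
	// RateLimitManager implements Limiter, so it can back the DME interceptor directly.
	return grpc.NewServer(
		grpc.UnaryInterceptor(ratelimit.GetDmeUnaryRateLimiterInterceptor(mgr)),
	)
}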
func (*RateLimitManager) PruneFlowRateLimitSettings ¶
func (r *RateLimitManager) PruneFlowRateLimitSettings(keys map[edgeproto.FlowRateLimitSettingsKey]struct{})
Remove FlowRateLimitSettings whose keys are not in the keys map
func (*RateLimitManager) PruneMaxReqsRateLimitSettings ¶
func (r *RateLimitManager) PruneMaxReqsRateLimitSettings(keys map[edgeproto.MaxReqsRateLimitSettingsKey]struct{})
Remove MaxReqsRateLimitSettings whose keys are not in the keys map
func (*RateLimitManager) RemoveFlowRateLimitSettings ¶
func (r *RateLimitManager) RemoveFlowRateLimitSettings(key edgeproto.FlowRateLimitSettingsKey)
* Remove the flow rate limit settings for the APIs associated with the specified RateLimitSettingsKey
* For example, this might remove the PerIp rate limiting for VerifyLocation
func (*RateLimitManager) RemoveMaxReqsRateLimitSettings ¶
func (r *RateLimitManager) RemoveMaxReqsRateLimitSettings(key edgeproto.MaxReqsRateLimitSettingsKey)
func (*RateLimitManager) Type ¶
func (r *RateLimitManager) Type() string
func (*RateLimitManager) UpdateDisableRateLimit ¶
func (r *RateLimitManager) UpdateDisableRateLimit(disable bool)
Update DisableRateLimit when settings are updated
func (*RateLimitManager) UpdateFlowRateLimitSettings ¶
func (r *RateLimitManager) UpdateFlowRateLimitSettings(flowRateLimitSettings *edgeproto.FlowRateLimitSettings)
Update the flow rate limit settings for the APIs that use the rate limit settings associated with the specified RateLimitSettingsKey
func (*RateLimitManager) UpdateMaxReqsRateLimitSettings ¶
func (r *RateLimitManager) UpdateMaxReqsRateLimitSettings(maxReqsRateLimitSettings *edgeproto.MaxReqsRateLimitSettings)
Update the maxreqs rate limit settings for the APIs that use the rate limit settings associated with the specified RateLimitSettingsKey
func (*RateLimitManager) UpdateMaxTrackedIps ¶
func (r *RateLimitManager) UpdateMaxTrackedIps(max int)
Update MaxTrackedIps when settings are updated
func (*RateLimitManager) UpdateMaxTrackedUsers ¶
func (r *RateLimitManager) UpdateMaxTrackedUsers(max int)
Update MaxTrackedUsers when settings are updated
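A sketch of applying a settings change at runtime; mgr is an existing *RateLimitManager, and how the changed objects and keys are obtained (e.g. from a settings update notification) is outside this package and assumed here:

// Apply a changed flow setting to every API that uses it.
mgr.UpdateFlowRateLimitSettings(changedFlowSettings)
// Drop maxreqs settings that were deleted upstream.
mgr.RemoveMaxReqsRateLimitSettings(deletedMaxReqsKey)
// Tighten the IP-tracking bound after a settings update.
mgr.UpdateMaxTrackedIps(5000)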
type TokenBucketLimiter ¶
type TokenBucketLimiter struct {
// contains filtered or unexported fields
}
* The time/rate package limiter that uses Allow() implements the token bucket algorithm (which is equivalent to the leaky bucket algorithm as a meter; to use leaky bucket as a queue, use LeakyBucketLimiter)
* A bucket is filled with tokens at tokensPerSecond, and the bucket has a maximum size of bucketSize (bucketSize also acts as a burst size, i.e. the number of requests that arrive at the same time that can be fulfilled)
* A token is taken out on each request
* Requests that come in when there are no tokens in the bucket are rejected
* Useful for throttling requests (e.g. grpc interceptor)
* FlowRateLimitAlgorithm
func NewTokenBucketLimiter ¶
func NewTokenBucketLimiter(tokensPerSecond float64, bucketSize int) *TokenBucketLimiter
func (*TokenBucketLimiter) Limit ¶
func (t *TokenBucketLimiter) Limit(ctx context.Context, info *CallerInfo) error
func (*TokenBucketLimiter) Type ¶
func (t *TokenBucketLimiter) Type() string
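A sketch using the package defaults listed above; ctx and callerInfo are assumed to be in scope:

// DefaultTokenBucketSize is an int64, so convert it for the int bucketSize parameter.
limiter := ratelimit.NewTokenBucketLimiter(ratelimit.DefaultReqsPerSecondPerApi, int(ratelimit.DefaultTokenBucketSize))
if err := limiter.Limit(ctx, callerInfo); err != nil {
	// the bucket is empty: the request is rejected immediately
	// rather than queued (contrast with LeakyBucketLimiter)
}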