cache

package
v6.0.1+incompatible
Published: Nov 9, 2021 License: Apache-2.0, BSD-2-Clause, BSD-3-Clause, + 1 more Imports: 18 Imported by: 0

Documentation

Index

Constants

const CodeConnectFailure = http.StatusBadGateway
const ModifiedSinceHdr = "If-Modified-Since"

Variables

This section is empty.

Functions

func DefaultParentRespData

func DefaultParentRespData() cachedata.ParentRespData

func DefaultRespCode

func DefaultRespCode() *int

func GetAndCache

func GetAndCache(
	req *http.Request,
	proxyURL *url.URL,
	cacheKey string,
	remapName string,
	reqHeader http.Header,
	reqTime time.Time,
	strictRFC bool,
	cache icache.Cache,
	ruleThrottler thread.Throttler,
	revalidateObj *cacheobj.CacheObj,
	timeout time.Duration,
	cacheFailure bool,
	retryNum int,
	retryCodes map[int]struct{},
	transport *http.Transport,
	reqID uint64,
) *cacheobj.CacheObj

GetAndCache makes a client request for the given `http.Request` and caches it if `CanCache`. The `ruleThrottler` may be nil, in which case the request will be unthrottled.
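For orientation, here is a minimal, self-contained sketch of the get-and-cache flow this function performs: fetch from the parent with a timeout, decide cacheability, and store the result. The `canCache` predicate, the `cached` type, and the map-based store are hypothetical stand-ins for this package's `CanCache` logic, `cacheobj.CacheObj`, and `icache.Cache`, not its actual implementation.

package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

// cached is a hypothetical stand-in for cacheobj.CacheObj.
type cached struct {
	code int
	body []byte
}

var (
	mu    sync.Mutex
	store = map[string]cached{} // hypothetical stand-in for icache.Cache
)

// getAndCache fetches parentURL with a timeout and stores the result under
// cacheKey if canCache allows it, returning the object either way.
func getAndCache(cacheKey, parentURL string, timeout time.Duration, canCache func(*http.Response) bool) (cached, error) {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(parentURL)
	if err != nil {
		// a connect failure maps to a bad-gateway code, as with CodeConnectFailure
		return cached{code: http.StatusBadGateway}, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return cached{code: http.StatusBadGateway}, err
	}
	obj := cached{code: resp.StatusCode, body: body}
	if canCache(resp) {
		mu.Lock()
		store[cacheKey] = obj
		mu.Unlock()
	}
	return obj, nil
}

func main() {
	obj, err := getAndCache("GET:/index.html", "https://origin.example/index.html",
		5*time.Second, func(r *http.Response) bool { return r.StatusCode == http.StatusOK })
	fmt.Println(obj.code, len(obj.body), err)
}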

Types

type Handler

type Handler struct {
	// contains filtered or unexported fields
}

func NewHandler

func NewHandler(
	remapper remap.HTTPRequestRemapper,
	ruleLimit uint64,
	stats stat.Stats,
	scheme string,
	port string,
	conns *web.ConnMap,
	strictRFC bool,
	connectionClose bool,
	plugins plugin.Plugins,
	pluginContext map[string]*interface{},
	httpConns *web.ConnMap,
	httpsConns *web.ConnMap,
	interfaceName string,
) *Handler

NewHandler returns an http.Handler object, which may be pipelined with other http.Handlers via `http.ListenAndServe`. If you prefer pipelining functions, use `GetHandlerFunc`.
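Because `*Handler` implements `http.Handler`, it can be composed with ordinary middleware before being passed to `http.ListenAndServe`. A minimal sketch of that pipelining; the logging middleware and the placeholder handler are purely illustrative and stand in for the `*Handler` returned by `NewHandler`.

package main

import (
	"log"
	"net/http"
)

// logging is an illustrative middleware; any http.Handler, including this
// package's *Handler, can be wrapped the same way.
func logging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s", r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	// cacheHandler stands in for the *Handler returned by NewHandler.
	var cacheHandler http.Handler = http.NotFoundHandler()
	log.Fatal(http.ListenAndServe(":8080", logging(cacheHandler)))
}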

This needs to be rate-limited in three ways:

1. ruleLimit - Simultaneous requests to the origin (remap rule) should be configurably limited. For example, "only allow 1000 simultaneous requests to the origin."
2. keyLimit - On a cache miss, simultaneous requests for the same key (Method+Path+Qstring) should be configurably limited. For example, "only allow 10 simultaneous requests per unique URL on cache miss." Additional requestors must wait until others complete; once another requestor completes, all waiters for the same URL are signalled to use the cache, or to proceed to the third, uncacheable limiter.
3. nocacheLimit - If simultaneous requestors exceed the URL limiter, and some request for the same key gets a result which is uncacheable, waiters for the same URL may then proceed at a third configurable limit for uncacheable requests. (A sketch of the key limiter follows the example below.)

Note that these limits apply only to cache misses. Cache hits are not limited in any way: the origin is not contacted, and the cached value is returned to the client immediately.

This prevents a large number of uncacheable requests for the same URL from timing out because the low simultaneous-requests-per-URL limit would otherwise force them to proceed serially, while still hitting the origin with only a very low limit when many simultaneous requests for the URL are cacheable.

Example: Origin limit is 10,000, key limit is 1, the uncacheable limit is 1,000. Then, 2,000 requests come in for the same URL, simultaneously. They are all within the Origin limit, so they are all allowed to proceed to the key limiter. Then, the first request is allowed to make an actual request to the origin, while the other 1,999 wait at the key limiter.
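The sketch below illustrates the keyLimit idea with a simple per-key semaphore. It is self-contained and hypothetical, not this package's limiter; with a limit of 1, as in the example above, all but one goroutine for the same key block until the current one finishes.

package main

import (
	"fmt"
	"sync"
)

// keyLimiter allows at most limit concurrent cache-miss fetches per key.
type keyLimiter struct {
	mu    sync.Mutex
	limit int
	sems  map[string]chan struct{}
}

func newKeyLimiter(limit int) *keyLimiter {
	return &keyLimiter{limit: limit, sems: map[string]chan struct{}{}}
}

func (l *keyLimiter) acquire(key string) {
	l.mu.Lock()
	sem, ok := l.sems[key]
	if !ok {
		sem = make(chan struct{}, l.limit)
		l.sems[key] = sem
	}
	l.mu.Unlock()
	sem <- struct{}{} // blocks while `limit` fetches for this key are in flight
}

func (l *keyLimiter) release(key string) {
	l.mu.Lock()
	sem := l.sems[key]
	l.mu.Unlock()
	<-sem
}

func main() {
	lim := newKeyLimiter(1) // key limit of 1, as in the example above
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			lim.acquire("GET:/index.html")
			defer lim.release("GET:/index.html")
			fmt.Println("request", i, "fetching from the origin")
		}(i)
	}
	wg.Wait()
}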

The connectionClose parameter determines whether to send a `Connection: close` header. This is primarily designed for maintenance, to drain the cache of incoming requestors. It overrides rule-specific `connection-close: false` configuration, on the assumption that draining a cache is a temporary maintenance operation: if connectionClose is true on the service and false on some rules, those rules' configuration is probably a permanent setting, whereas the operator probably wants to drain all connections when the global setting is true. If it's necessary to leave connection close false on some rules, set all other rules' connectionClose to true and leave the global connectionClose unset.
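A small sketch of that precedence, with hypothetical parameter names rather than this package's configuration types:

// shouldCloseConnection returns whether to send `Connection: close`:
// a true service-level setting overrides any per-rule value, while a false
// service-level setting defers to the rule.
func shouldCloseConnection(serviceClose, ruleClose bool) bool {
	if serviceClose {
		return true // global drain wins, even if the rule says false
	}
	return ruleClose
}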

func (*Handler) ServeHTTP

func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request)

type HandlerPointer

type HandlerPointer struct {
	// contains filtered or unexported fields
}

func NewHandlerPointer

func NewHandlerPointer(realHandler *Handler) *HandlerPointer

func (*HandlerPointer) ServeHTTP

func (h *HandlerPointer) ServeHTTP(w http.ResponseWriter, r *http.Request)

func (*HandlerPointer) Set

func (h *HandlerPointer) Set(newHandler *Handler)

type RespondFunc

type RespondFunc func() (uint64, error)

type Responder

type Responder struct {
	W             http.ResponseWriter
	RequestID     uint64
	PluginCfg     map[string]interface{}
	Plugins       plugin.Plugins
	PluginContext map[string]*interface{}
	Stats         stat.Stats
	F             RespondFunc
	ResponseCode  *int
	cachedata.ParentRespData
	cachedata.SrvrData
	cachedata.ReqData
}

Responder is an object encapsulating the cache's response to the client. It holds all the data necessary to respond, log the response, and record the statistics.

func NewResponder

func NewResponder(w http.ResponseWriter, pluginCfg map[string]interface{}, pluginContext map[string]*interface{}, srvrData cachedata.SrvrData, reqData cachedata.ReqData, plugins plugin.Plugins, stats stat.Stats, reqID uint64) *Responder

NewResponder creates a Responder, which defaults to a generic error response.

func (*Responder) Do

func (r *Responder) Do()

Do responds to the client according to the data in r, with the given code, headers, and body. It additionally writes to the event log and records statistics about this request. This should always be called for the final response to a client, in order to properly log, record stats, and perform other final operations. For cache misses, reuse should be ReuseCannot. For parent connect failures, originCode should be 0.

func (*Responder) SetResponse

func (r *Responder) SetResponse(code *int, hdrs *http.Header, body *[]byte, connectionClose bool)

SetResponse is a helper which sets the RespondFunc of r to `web.Respond` with the given code, headers, body, and connectionClose. Note that it takes pointers to the headers and body, which may be modified after calling this but before Do() sends the response.
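The pointer parameters are what make that deferred modification work: the response is not captured by value at SetResponse time, it is read through the pointers when the response is finally written. A minimal, self-contained sketch of that pattern (not this package's `web.Respond`):

package main

import (
	"fmt"
	"net/http"
)

// setResponse captures pointers, so later changes to code, hdrs, and body are
// reflected when the returned func finally writes the response.
func setResponse(code *int, hdrs *http.Header, body *[]byte) func() {
	return func() {
		fmt.Println("sending", *code, "with", len(*hdrs), "headers and", len(*body), "body bytes")
	}
}

func main() {
	code := http.StatusOK
	hdrs := http.Header{}
	body := []byte("hello")

	send := setResponse(&code, &hdrs, &body)

	// Modified after registration but before the send, analogous to changing
	// the response between SetResponse and Do().
	code = http.StatusNotFound
	hdrs.Set("Content-Type", "text/plain")
	body = []byte("not found")

	send() // writes 404 with the updated headers and body
}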

type Retrier

type Retrier struct {
	H                 *Handler
	ReqHdr            http.Header
	ReqTime           time.Time
	ReqCacheControl   rfc.CacheControlMap
	RemappingProducer *remap.RemappingProducer
	ReqID             uint64
}

func NewRetrier

func NewRetrier(h *Handler, reqHdr http.Header, reqTime time.Time, reqCacheControl rfc.CacheControlMap, remappingProducer *remap.RemappingProducer, reqID uint64) *Retrier

func (*Retrier) Get

func (r *Retrier) Get(req *http.Request, obj *cacheobj.CacheObj) (*cacheobj.CacheObj, *string, error)

Get takes the HTTP request and the cached object, if there is one, and makes a new request, retrying according to its RemappingProducer. If no cached object exists, pass a nil obj. Along with the cacheobj.CacheObj, it returns a string pointer to the hostname that was used to fetch the cacheobj.CacheObj.
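The sketch below illustrates the retry idea with plain `net/http`: try each parent hostname in turn, retrying on connect failure or on a retryable status code, and report which host ultimately served the object. The parent list and the retryNum and retryCodes handling here are hypothetical simplifications, not the behavior of `remap.RemappingProducer`.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetry tries each parent in turn, up to retryNum attempts, retrying on
// connect failures and on response codes present in retryCodes.
func getWithRetry(path string, parents []string, retryNum int, retryCodes map[int]struct{}) (*http.Response, string, error) {
	client := &http.Client{Timeout: 5 * time.Second}
	var lastErr error
	for i := 0; i < retryNum && i < len(parents); i++ {
		host := parents[i]
		resp, err := client.Get("http://" + host + path)
		if err != nil {
			lastErr = err
			continue // connect failure: try the next parent
		}
		if _, retry := retryCodes[resp.StatusCode]; retry {
			resp.Body.Close()
			lastErr = fmt.Errorf("parent %s returned %d", host, resp.StatusCode)
			continue
		}
		return resp, host, nil // also report which host served the object
	}
	return nil, "", lastErr
}

func main() {
	resp, host, err := getWithRetry("/index.html",
		[]string{"origin-a.example", "origin-b.example"},
		2, map[int]struct{}{http.StatusBadGateway: {}})
	fmt.Println(resp, host, err)
}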
