skipper

package module
v0.21.223
Published: Oct 21, 2024 License: Apache-2.0, MIT Imports: 67 Imported by: 12

README


Skipper

Skipper is an HTTP router and reverse proxy for service composition. It's designed to handle >300k HTTP route definitions with detailed lookup conditions, and flexible augmentation of the request flow with filters. It can be used out of the box or extended with custom lookup, filter logic and configuration sources.

Main features:

An overview of deployments and data-clients shows some use cases to run skipper.

Skipper

  • identifies routes based on the requests' properties, such as path, method, host and headers
  • allows modification of the requests and responses with filters that are independently configured for each route
  • simultaneously streams incoming requests and backend responses
  • optionally acts as a final endpoint (shunt), e.g. as a static file server or a mock backend for diagnostics
  • updates routing rules without downtime, while supporting multiple types of data sources — including etcd, Kubernetes Ingress, static files, route string and custom configuration sources
  • can serve as a Kubernetes Ingress controller without reloads. You can use it in combination with a controller that will route public traffic to your skipper fleet; see AWS example
  • shipped with
    • eskip: a descriptive configuration language designed for routing rules
    • routesrv: a proxy that avoids overloading the kube-apiserver, leveraging the Etag header to reduce the amount of CPU used in your skipper data plane
    • webhook: Kubernetes validation webhook to make sure your manifests are deployed safely

Skipper provides a default executable command with a few built-in filters. However, its primary use case is to be extended with custom filters, predicates or data sources. Go here for additional documentation.

A few examples for extending Skipper:

Getting Started
Prerequisites/Requirements

In order to build and run Skipper, only the latest version of Go needs to be installed. Skipper can use Innkeeper or Etcd as data sources for routes, or for the simplest cases, a local configuration file. See more details in the documentation: https://pkg.go.dev/github.com/zalando/skipper

Installation
From Binary

Download binary tgz from https://github.com/zalando/skipper/releases/latest

The following example assumes that you have $GOBIN set to a directory that exists and is in your $PATH:

% curl -LO https://github.com/zalando/skipper/releases/download/v0.14.8/skipper-v0.14.8-linux-amd64.tar.gz
% tar xzf skipper-v0.14.8-linux-amd64.tar.gz
% mv skipper-v0.14.8-linux-amd64/* $GOBIN/
% skipper -version
Skipper version v0.14.8 (commit: 95057948, runtime: go1.19.1)
From Source
% git clone https://github.com/zalando/skipper.git
% make
% ./bin/skipper -version
Skipper version v0.14.8 (commit: 95057948, runtime: go1.19.3)
Running

Create a file with a route:

echo 'hello: Path("/hello") -> "https://www.example.org"' > example.eskip

Optionally, verify the file's syntax:

eskip check example.eskip

If no errors are detected, nothing is logged; otherwise a descriptive error is logged.

Start Skipper and make an HTTP request:

skipper -routes-file example.eskip &
curl localhost:9090/hello
Docker

To run the latest Docker container:

docker run registry.opensource.zalan.do/teapot/skipper:latest

To run eskip, first mount the .eskip file into the container, then run the command:

docker run \
  -v $(pwd)/doc-docker-intro.eskip:/doc-docker-intro.eskip \
  registry.opensource.zalan.do/teapot/skipper:latest eskip print doc-docker-intro.eskip

To run skipper, first mount the .eskip file into the container, expose the ports, then run the command:

docker run -it \
    -v $(pwd)/doc-docker-intro.eskip:/doc-docker-intro.eskip \
    -p 9090:9090 \
    -p 9911:9911 \
    registry.opensource.zalan.do/teapot/skipper:latest skipper -routes-file doc-docker-intro.eskip

Skipper will then be available on http://localhost:9090

Authentication Proxy

Skipper can be used as an authentication proxy to check incoming requests with Basic auth, an OAuth2 provider or an OpenID Connect provider, including audit logging. See the documentation at: https://pkg.go.dev/github.com/zalando/skipper/filters/auth.
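
As a minimal sketch, assuming the built-in basicAuth filter and an htpasswd file at the path shown (path and backend URL are illustrative), a route protecting a backend with Basic auth could look like:

admin: Path("/admin")
        -> basicAuth("/etc/skipper/htpasswd")
        -> "https://admin.example.org";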

Working with the code

Getting the code with the test dependencies (-t switch):

git clone https://github.com/zalando/skipper.git
cd skipper

Build and test all packages:

make deps
make install
make lint
make shortcheck

On Mac the tests may fail because of a low max open file limit. Please make sure you have the correct limits set up by following these instructions.

Working from IntelliJ / GoLand

To run or debug skipper from IntelliJ IDEA or GoLand, you need to create this configuration:

Parameter          Value
Template           Go Build
Run kind           Directory
Directory          skipper source dir + /cmd/skipper
Working directory  skipper source dir (usually the default)
Kubernetes Ingress

Skipper can run as a Kubernetes Ingress controller. Details, examples of Skipper's capabilities, and an overview can be found in our ingress-controller deployment docs.

For AWS integration, we provide an ingress controller, https://github.com/zalando-incubator/kube-ingress-aws-controller, that manages ALBs or NLBs in front of your skipper deployment. A production example for skipper and a production example for kube-ingress-aws-controller can be found in our Kubernetes configuration https://github.com/zalando-incubator/kubernetes-on-aws.

Documentation

Skipper's documentation and the Godoc developer documentation include information about deployment use cases and detailed information on these topics:

1 Minute Skipper introduction

The following example shows a skipper routes file in eskip format that defines three named routes: baidu, google and yandex.

% cat doc-1min-intro.eskip
baidu:
        Path("/baidu")
        -> setRequestHeader("Host", "www.baidu.com")
        -> setPath("/s")
        -> setQuery("wd", "godoc skipper")
        -> "http://www.baidu.com";
google:
        *
        -> setPath("/search")
        -> setQuery("q", "godoc skipper")
        -> "https://www.google.com";
yandex:
        * && Cookie("yandex", "true")
        -> setPath("/search/")
        -> setQuery("text", "godoc skipper")
        -> tee("http://127.0.0.1:12345/")
        -> "https://yandex.ru";

Matching the route:

  • baidu uses Path() matching on the request path to select the route.
  • google is the default match, using the wildcard *
  • yandex is also the default match with the wildcard *, but only if the request carries a cookie yandex=true

Request Filters:

  • If baidu is selected, skipper sets the Host header, changes the path and sets a query string on the HTTP request to the backend "http://www.baidu.com".
  • If google is selected, skipper changes the path and sets a query string on the HTTP request to the backend "https://www.google.com".
  • If yandex is selected, skipper changes the path and sets a query string on the HTTP request to the backend "https://yandex.ru". The modified request will also be copied to "http://127.0.0.1:12345/".

Run skipper with the routes file doc-1min-intro.eskip shown above:

% skipper -routes-file doc-1min-intro.eskip

To test each route you can use curl:

% curl -v localhost:9090/baidu
% curl -v localhost:9090/
% curl -v --cookie "yandex=true" localhost:9090/

To see the shadow traffic request that is made by the tee() filter you can use nc:

[terminal1]% nc -l 12345
[terminal2]% curl -v --cookie "yandex=true" localhost:9090/
3 Minutes Skipper in Kubernetes introduction

This introduction was moved to ingress controller documentation.

For more details, please check out our Kubernetes ingress controller docs, our ingress usage and how to handle common backend problems in Kubernetes.

Packaging support

See https://github.com/zalando/skipper/blob/master/packaging/readme.md

If you want to implement and link your own modules into your skipper build, the https://github.com/skipper-plugins organization exists to enable you to do so. To explain the build process with custom Go modules, there is https://github.com/skipper-plugins/skipper-tracing-build, which was used to build skipper's opentracing package. We moved the opentracing plugin source into the tracing package, so there is no need to use plugins for this case.

Because Go plugins are not very well supported by Go itself, we do not recommend using plugins; instead, you can extend skipper and build your own proxy.

Community

User or developer questions can be asked in our public Google Group.

We have a slack channel #skipper in gophers.slack.com. Get an invite. If for some reason this link doesn't work, you can find more information about the gophers communities here.

The preferred communication channel is the slack channel, because adding members to the Google group is a manual process. If you dislike chat, feel free to create an issue and post your questions there.

Proposals

We work on our proposals openly in Skipper's Google Drive. If you want to make a proposal, feel free to create an issue; if it is a bigger change, we will invite you to a document so that we can work on it together.

Users

Zalando used this project as the shop frontend HTTP router with 350000 routes. We use it as a Kubernetes ingress controller in more than 100 production clusters, with daily traffic between 500k and 7M RPS, serving 15000 Ingresses and 3750 RouteGroups at less than ¢5 per 1M requests. We also run several custom skipper instances that use skipper as a library.

Sergio Ballesteros from spotahome said in 2018:

We also ran tests with several ingress controllers and skipper gave us the more reliable results. Currently we are running skipper since almost 2 years with like 20K Ingress rules. The fact that skipper is written in go let us understand the code, add features and fix bugs since all of our infra stack is golang.

In the media

Blog posts:

Conference/Meetups talks

Version promise

Skipper will update the minor version in case we have either:

We expect that skipper library users will use skipper.Run(skipper.Options{}) as the main interface, which we do not want to break. Besides the Kubernetes v1beta1 removal, there was never a change that removed an option. We also do not want to break generally useful packages like net. Sometimes we mark library functions that we expect to be useful as experimental, because we want to try them and learn over time whether this is a good API decision or whether it limits us.

We hold this promise for the main, filter, predicate, dataclient and eskip interfaces and for generic packages. For other packages we make a weaker backwards-compatibility promise, as these are more internal packages. We try to avoid breaking changes in internal packages as well. If that would mean too much work, or would make it impossible to build new functionality as we would like, we will make a breaking change, strictly following semantic versioning rules.

How to update

Every update that changes the minor version (the m in v0.m.p) should be done by +1 only, so from v0.N.x to v0.N+1.y, and you should read the v0.N+1.0 release page to see what can break and what you have to do in order to update without issues.

Documentation

Overview

Package skipper provides an HTTP routing library with flexible configuration as well as a runtime update of the routing rules.

Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host.

Skipper can load and update the route definitions from multiple data sources without being restarted.

It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'.

Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand.

Quickstart

Skipper is 'go get' compatible. If needed, create a 'go workspace' first:

mkdir ws
cd ws
export GOPATH=$(pwd)
export PATH=$PATH:$GOPATH/bin

Get the Skipper packages:

go get github.com/zalando/skipper/...

Create a file with a route:

echo 'hello: Path("/hello") -> "https://www.example.org"' > example.eskip

Optionally, verify the syntax of the file:

eskip check example.eskip

Start Skipper and make an HTTP request:

skipper -routes-file example.eskip &
curl localhost:9090/hello

Routing Mechanism

The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request, forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request.

Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context.
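
For illustration, here is a minimal custom filter sketch (the filter name 'staticHealth' and the response content are made up for this example) that shunts the request by serving its own response through the filter context:

package main

import (
    "io"
    "net/http"
    "strings"

    "github.com/zalando/skipper/filters"
)

type healthSpec struct{}

type healthFilter struct{}

func (s *healthSpec) Name() string { return "staticHealth" }

// CreateFilter ignores its arguments and returns a stateless filter instance.
func (s *healthSpec) CreateFilter([]interface{}) (filters.Filter, error) {
    return &healthFilter{}, nil
}

// Request serves a canned response and shunts the route, so no backend is called.
func (f *healthFilter) Request(ctx filters.FilterContext) {
    ctx.Serve(&http.Response{
        StatusCode: http.StatusOK,
        Header:     http.Header{"Content-Type": []string{"text/plain"}},
        Body:       io.NopCloser(strings.NewReader("OK\n")),
    })
}

func (f *healthFilter) Response(filters.FilterContext) {}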

Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then matching the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision.

For further details, see the 'proxy' and 'filters' package documentation.

Matching Requests

Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions:

- method

- path (optionally with wildcards)

- path regular expressions

- host regular expressions

- headers

- header regular expressions

It is also possible to create custom predicates with any other matching criteria.

The relation between the conditions in a route definition is 'and', meaning, that a request must fulfill each condition to match a route.
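
For example, a single route that combines several of these conditions (the host, header value and backend URL are illustrative) only matches when all of them hold:

api_post: Path("/api/orders")
        && Method("POST")
        && Host(/^api[.]example[.]org$/)
        && Header("Accept", "application/json")
        -> "https://orders.example.org";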

For further details, see the 'routing' package documentation.

Filters - Augmenting Requests

Filters are applied in order of definition to the request and in reverse order to the response. They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept or require parameters that are set specifically for the route.
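
As a small sketch of that ordering (the header names and values are arbitrary, the filters are built-ins), setRequestHeader acts on the way to the backend and setResponseHeader on the way back, with the chain traversed in reverse for the response:

r: * -> setRequestHeader("X-Request-Tag", "a")
        -> setResponseHeader("X-Response-Tag", "b")
        -> "https://www.example.org";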

For further details, see the 'filters' package documentation.

Service Backends

Each route has one of the following backends: HTTP endpoint, shunt, loopback or dynamic.

Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.)

A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response.

A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of internal redirect.

A dynamic route means that the final target will be defined in a filter. One of the filters in the chain must set the target backend url explicitly.
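
A sketch with one route per backend type (the inlineContent, setPath and setDynamicBackendUrl filters are built-ins; the paths and URLs are examples):

endpoint: Path("/api") -> "https://api.example.org";
shunted: Path("/ping") -> inlineContent("pong") -> <shunt>;
looped: Path("/old") -> setPath("/api") -> <loopback>;
dynamic: Path("/dyn") -> setDynamicBackendUrl("https://fallback.example.org") -> <dynamic>;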

Route Definitions

Route definitions consist of the following:

- request matching conditions (predicates)

- filter chain (optional)

- backend

The eskip package implements the in-memory and text representations of route definitions, including a parser.
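
A minimal sketch of using the parser from Go, assuming only the exported eskip.Parse function and the route's Id field and String method (the route content is illustrative):

package main

import (
    "fmt"
    "log"

    "github.com/zalando/skipper/eskip"
)

func main() {
    // Parse a routing document in eskip syntax into in-memory route definitions.
    routes, err := eskip.Parse(`hello: Path("/hello") -> "https://www.example.org";`)
    if err != nil {
        log.Fatal(err)
    }

    for _, r := range routes {
        // Print the route id and its canonical text representation.
        fmt.Println(r.Id, r.String())
    }
}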

(Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.)

For further details, see the 'eskip' package documentation

Authentication and Authorization

Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth.

Data Sources

Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients:

- Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ .

- Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper.

- etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package.

- static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates.

Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package.
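
A rough sketch of such a source, assuming only the two methods of the routing.DataClient interface (LoadAll and LoadUpdate); the type name is made up and the client simply serves a fixed set of routes:

package main

import (
    "github.com/zalando/skipper/eskip"
)

// fixedDataClient serves a fixed set of routes and never reports updates.
type fixedDataClient struct {
    routes []*eskip.Route
}

// LoadAll returns the complete initial set of routes.
func (c *fixedDataClient) LoadAll() ([]*eskip.Route, error) {
    return c.routes, nil
}

// LoadUpdate returns inserted/updated routes and the ids of deleted routes;
// this static client has nothing to report.
func (c *fixedDataClient) LoadUpdate() ([]*eskip.Route, []string, error) {
    return nil, nil, nil
}

Such a client can then be passed to Run through the CustomDataClients option.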

Circuit Breaker

Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests. For details, see: https://godoc.org/github.com/zalando/skipper/circuit.
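
As a sketch of the library configuration, assuming the circuit.BreakerSettings fields and breaker types described in the circuit package docs (the addresses, host and thresholds are illustrative), a global consecutive-failures breaker plus a host-specific failure-rate breaker might look like:

package main

import (
    "log"
    "time"

    "github.com/zalando/skipper"
    "github.com/zalando/skipper/circuit"
)

func main() {
    log.Fatal(skipper.Run(skipper.Options{
        Address:    ":9090",
        RoutesFile: "routes.eskip",
        BreakerSettings: []circuit.BreakerSettings{
            {
                // Global default: open after 5 consecutive failures.
                Type:     circuit.ConsecutiveFailures,
                Failures: 5,
            },
            {
                // Host-specific override: open on 30 failures within a
                // sliding window of 300 requests.
                Type:     circuit.FailureRate,
                Host:     "api.example.org",
                Window:   300,
                Failures: 30,
                Timeout:  10 * time.Second,
            },
        },
    }))
}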

Running Skipper

Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package.

Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter:

skipper -help
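
For example, with placeholder addresses, the Address and EtcdUrls options map to flags like:

skipper -address=:9090 -etcd-urls=http://127.0.0.1:2379,http://127.0.0.1:4001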

An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line:

eskip -help

Extending Skipper

Skipper doesn't use dynamically loaded plugins, however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources.

Custom Predicates

To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments.

Example, randompredicate.go:

package main

import (
    "github.com/zalando/skipper/routing"
    "math/rand"
    "net/http"
)

type randomSpec struct {}

type randomPredicate struct {
    chance float64
}

func (s *randomSpec) Name() string { return "Random" }

func (s *randomSpec) Create(args []interface{}) (routing.Predicate, error) {
    p := &randomPredicate{.5}
    if len(args) > 0 {
        if c, ok := args[0].(float64); ok {
            p.chance = c
        }
    }

    return p, nil
}

func (p *randomPredicate) Match(_ *http.Request) bool {
    return rand.Float64() < p.chance
}

In the above example, a custom predicate is created that can be referenced in eskip definitions with the name 'Random':

Random(.33) -> "https://test.example.org";
* -> "https://www.example.org"

Custom Filters

To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed.

Example, hellofilter.go:

package main

import (
    "fmt"
    "github.com/zalando/skipper/filters"
)

type helloSpec struct {}

type helloFilter struct {
    who string
}

func (s *helloSpec) Name() string { return "hello" }

func (s *helloSpec) CreateFilter(config []interface{}) (filters.Filter, error) {
    if len(config) == 0 {
        return nil, filters.ErrInvalidFilterParameters
    }

    if who, ok := config[0].(string); ok {
        return &helloFilter{who}, nil
    } else {
        return nil, filters.ErrInvalidFilterParameters
    }
}

func (f *helloFilter) Request(ctx filters.FilterContext) {}

func (f *helloFilter) Response(ctx filters.FilterContext) {
    ctx.Response().Header.Set("X-Hello", fmt.Sprintf("Hello, %s!", f.who))
}

The above example creates a filter specification, and in the routes where it is included, the filter instances will set the 'X-Hello' header on each and every response. The name of the filter is 'hello', and in a route definition it is referenced as:

r: * -> hello("world") -> "https://www.example.org";

Custom Build

The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command.

Example, hello.go:

package main

import (
    "log"

    "github.com/zalando/skipper"
    "github.com/zalando/skipper/filters"
    "github.com/zalando/skipper/routing"
)

func main() {
    log.Fatal(skipper.Run(skipper.Options{
        Address: ":9090",
        RoutesFile: "routes.eskip",
        CustomPredicates: []routing.PredicateSpec{&randomSpec{}},
        CustomFilters: []filters.Spec{&helloSpec{}}}))
}

A file containing the routes, routes.eskip:

random:
    Random(.05) -> hello("fish?") -> "https://fish.example.org";
hello:
    * -> hello("world") -> "https://www.example.org"

Start the custom router:

go run hello.go

Proxy Package Used Individually

The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing.
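
A rough sketch of that composition, assuming the eskipfile, routing, filters/builtin and proxy packages behave as described in their own docs (error handling shortened, file name illustrative):

package main

import (
    "log"
    "net/http"

    "github.com/zalando/skipper/eskipfile"
    "github.com/zalando/skipper/filters/builtin"
    "github.com/zalando/skipper/proxy"
    "github.com/zalando/skipper/routing"
)

func main() {
    // Load routes from a static eskip file.
    dc, err := eskipfile.Open("example.eskip")
    if err != nil {
        log.Fatal(err)
    }

    // Build the routing table from the data client and the built-in filters.
    rt := routing.New(routing.Options{
        FilterRegistry: builtin.MakeRegistry(),
        DataClients:    []routing.DataClient{dc},
    })
    defer rt.Close()

    // The proxy is a plain http.Handler, so it can be embedded in any server setup.
    p := proxy.WithParams(proxy.Params{Routing: rt})
    defer p.Close()

    log.Fatal(http.ListenAndServe(":9090", p))
}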

Logging and Metrics

Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots.

For details, see the 'logging' and 'metrics' packages documentation.

Performance Considerations

The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure.

Benchmarks for the tree lookup can be run by:

go test github.com/zalando/skipper/routing -bench=Tree

In case more aggressive scaling is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.

Example (RatelimitRegistryBinding)
s := &customRatelimitSpec{}

o := Options{
	Address:            ":9090",
	InlineRoutes:       `* -> customRatelimit() -> <shunt>`,
	EnableRatelimiters: true,
	EnableSwarm:        true,
	SwarmRedisURLs:     []string{":6379"},
	CustomFilters:      []filters.Spec{s},
	SwarmRegistry: func(registry *ratelimit.Registry) {
		s.registry = registry
	},
}

log.Fatal(Run(o))
// Example functions without output comments are compiled but not executed
Output:


Constants

const DefaultPluginDir = "./plugins"

Variables

This section is empty.

Functions

func Run

func Run(o Options) error

Run skipper.

Types

type Options

type Options struct {
	// WaitForHealthcheckInterval sets the time that skipper waits
	// for the loadbalancer in front to become unhealthy. Defaults
	// to 0.
	WaitForHealthcheckInterval time.Duration

	// StatusChecks is an experimental feature. It defines a
	// comma separated list of HTTP URLs to do GET requests to,
	// that have to return 200 before skipper becomes ready
	StatusChecks []string

	// WhitelistedHealthCheckCIDR appends the given IP ranges to the internal IPs range for healthcheck purposes
	WhitelistedHealthCheckCIDR []string

	// Network address that skipper should listen on.
	Address string

	// Insecure network address skipper should listen on when TLS is enabled
	InsecureAddress string

	// EnableTCPQueue enables controlling the
	// concurrently processed requests at the TCP listener.
	EnableTCPQueue bool

	// ExpectedBytesPerRequest is used by the TCP LIFO listener.
	// It defines the expected average memory required to process an incoming
	// request. It is used only when MaxTCPListenerConcurrency is not defined.
	// It is used together with the memory limit defined in:
	// cgroup v1 /sys/fs/cgroup/memory/memory.limit_in_bytes
	// or
	// cgroup v2 /sys/fs/cgroup/memory.max
	//
	// See also:
	// cgroup v1: https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
	// cgroup v2: https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html#memory-interface-files
	ExpectedBytesPerRequest int

	// MaxTCPListenerConcurrency is used by the TCP LIFO listener.
	// It defines the max number of concurrently accepted connections, excluding
	// the pending ones in the queue.
	//
	// When undefined and EnableTCPQueue is true, the limit is derived
	// from the memory limit and ExpectedBytesPerRequest.
	MaxTCPListenerConcurrency int

	// MaxTCPListenerQueue is used by the TCP LIFO listener.
	// It defines the maximum number of pending connections waiting in the queue.
	MaxTCPListenerQueue int

	// List of custom filter specifications.
	CustomFilters []filters.Spec

	// RegisterFilters callback can be used to register additional filters.
	// Built-in and custom filters are registered before the callback is called.
	RegisterFilters func(registry filters.Registry)

	// Urls of nodes in an etcd cluster, storing route definitions.
	EtcdUrls []string

	// Path prefix for skipper related data in the etcd storage.
	EtcdPrefix string

	// Timeout used for a single request when querying for updates
	// in etcd. This is independent of, and an addition to,
	// SourcePollTimeout. When not set, the internally defined 1s
	// is used.
	EtcdWaitTimeout time.Duration

	// Skip TLS certificate check for etcd connections.
	EtcdInsecure bool

	// If set this value is used as Bearer token for etcd OAuth authorization.
	EtcdOAuthToken string

	// If set this value is used as username for etcd basic authorization.
	EtcdUsername string

	// If set this value is used as password for etcd basic authorization.
	EtcdPassword string

	// If set, enables skipper to generate routes based on ingress resources in the kubernetes cluster
	Kubernetes bool

	// If set makes skipper authenticate with the kubernetes API server with service account assigned to the
	// skipper POD.
	// If omitted skipper will rely on kubectl proxy to authenticate with API server
	KubernetesInCluster bool

	// Kubernetes API base URL. Only makes sense if KubernetesInCluster is set to false. If omitted and
	// skipper is not running in-cluster, the default API URL will be used.
	KubernetesURL string

	// KubernetesTokenFile configures path to the token file.
	// Defaults to /var/run/secrets/kubernetes.io/serviceaccount/token when running in-cluster.
	KubernetesTokenFile string

	// KubernetesHealthcheck, when Kubernetes ingress is set, indicates
	// whether an automatic healthcheck route should be generated. The
	// generated route will report healthiness when the Kubernetes API
	// calls are successful. The healthcheck endpoint is accessible from
	// internal IPs, with the path /kube-system/healthz.
	KubernetesHealthcheck bool

	// KubernetesHTTPSRedirect, when Kubernetes ingress is set, indicates
	// whether an automatic redirect route should be generated to redirect
	// HTTP requests to their HTTPS equivalent. The generated route will
	// match requests with the X-Forwarded-Proto and X-Forwarded-Port,
	// expected to be set by the load-balancer.
	KubernetesHTTPSRedirect bool

	// KubernetesHTTPSRedirectCode overrides the default redirect code (308)
	// when used together with -kubernetes-https-redirect.
	KubernetesHTTPSRedirectCode int

	// KubernetesDisableCatchAllRoutes, when set, tells the data client to not create catchall routes.
	KubernetesDisableCatchAllRoutes bool

	// KubernetesIngressClass is a regular expression, that will make
	// skipper load only the ingress resources that have a matching
	// kubernetes.io/ingress.class annotation. For backwards compatibility,
	// the ingresses without an annotation, or an empty annotation, will
	// be loaded, too.
	KubernetesIngressClass string

	// KubernetesRouteGroupClass is a regular expression, that will make skipper
	// load only the RouteGroup resources that have a matching
	// zalando.org/routegroup.class annotation. Any RouteGroups without the
	// annotation, or with an empty annotation, will be loaded too.
	KubernetesRouteGroupClass string

	// KubernetesIngressLabelSelectors is a map of kubernetes labels to their values that must be present on a resource to be loaded
	// by the client. A label and its value on an Ingress must match exactly for it to be loaded by Skipper.
	// If the value is irrelevant for a given configuration, it can be left empty. The default
	// value is no labels required.
	// Examples:
	//  Config [] will load all Ingresses.
	// 	Config ["skipper-enabled": ""] will load only Ingresses with a label "skipper-enabled", no matter the value.
	// 	Config ["skipper-enabled": "true"] will load only Ingresses with a label "skipper-enabled: true"
	// 	Config ["skipper-enabled": "", "foo": "bar"] will load only Ingresses with both labels while label "foo" must have a value "bar".
	KubernetesIngressLabelSelectors map[string]string

	// KubernetesServicesLabelSelectors is a map of kubernetes labels to their values that must be present on a resource to be loaded
	// by the client. Read documentation for IngressLabelSelectors for examples and more details.
	// The default value is no labels required.
	KubernetesServicesLabelSelectors map[string]string

	// KubernetesEndpointsLabelSelectors is a map of kubernetes labels to their values that must be present on a resource to be loaded
	// by the client. Read documentation for IngressLabelSelectors for examples and more details.
	// The default value is no labels required.
	KubernetesEndpointsLabelSelectors map[string]string

	// KubernetesSecretsLabelSelectors is a map of kubernetes labels to their values that must be present on a resource to be loaded
	// by the client. Read documentation for IngressLabelSelectors for examples and more details.
	// The default value is no labels required.
	KubernetesSecretsLabelSelectors map[string]string

	// KubernetesRouteGroupsLabelSelectors is a map of kubernetes labels to their values that must be present on a resource to be loaded
	// by the client. Read documentation for IngressLabelSelectors for examples and more details.
	// The default value is no labels required.
	KubernetesRouteGroupsLabelSelectors map[string]string

	// PathMode controls the default interpretation of ingress paths in cases
	// when the ingress doesn't specify it with an annotation.
	KubernetesPathMode kubernetes.PathMode

	// KubernetesNamespace is used to switch between monitoring ingresses in the cluster-scope or limit
	// the ingresses to only those in the specified namespace. Defaults to "" which means monitor ingresses
	// in the cluster-scope.
	KubernetesNamespace string

	// KubernetesEnableEndpointslices, if set, makes skipper fetch
	// endpointslices instead of endpoints to scale beyond 1000
	// pods within a service
	KubernetesEnableEndpointslices bool

	// *DEPRECATED* KubernetesEnableEastWest enables cluster internal service to service communication, aka east-west traffic
	KubernetesEnableEastWest bool

	// *DEPRECATED* KubernetesEastWestDomain sets the cluster internal domain used to create additional routes in skipper, defaults to skipper.cluster.local
	KubernetesEastWestDomain string

	// KubernetesEastWestRangeDomains set the cluster internal domains for
	// east west traffic. Identified routes to such domains will include
	// the KubernetesEastWestRangePredicates.
	KubernetesEastWestRangeDomains []string

	// KubernetesEastWestRangePredicates set the Predicates that will be
	// appended to routes identified as belonging to KubernetesEastWestRangeDomains.
	KubernetesEastWestRangePredicates []*eskip.Predicate

	// KubernetesOnlyAllowedExternalNames will enable validation of ingress external names and route groups network
	// backend addresses, explicit LB endpoints validation against the list of patterns in
	// AllowedExternalNames.
	KubernetesOnlyAllowedExternalNames bool

	// KubernetesAllowedExternalNames contains regexp patterns of those domain names that are allowed to be
	// used with external name services (type=ExternalName).
	KubernetesAllowedExternalNames []*regexp.Regexp

	// KubernetesRedisServiceNamespace to be used to lookup ring shards dynamically
	KubernetesRedisServiceNamespace string

	// KubernetesRedisServiceName to be used to lookup ring shards dynamically
	KubernetesRedisServiceName string

	// KubernetesRedisServicePort to be used to lookup ring shards dynamically
	KubernetesRedisServicePort int

	// KubernetesForceService overrides the default Skipper functionality to route traffic using Kubernetes Endpoints,
	// routing it via Kubernetes Services instead.
	KubernetesForceService bool

	// KubernetesBackendTrafficAlgorithm specifies the algorithm to calculate the backend traffic
	KubernetesBackendTrafficAlgorithm kubernetes.BackendTrafficAlgorithm

	// KubernetesDefaultLoadBalancerAlgorithm sets the default algorithm to be used for load balancing between backend endpoints,
	// available options: roundRobin, consistentHash, random, powerOfRandomNChoices
	KubernetesDefaultLoadBalancerAlgorithm string

	// File containing static route definitions. Multiple may be given comma separated.
	RoutesFile string

	// File containing route definitions with file watch enabled.
	// Multiple may be given comma separated. (For the skipper
	// command this option is used when starting it with the -routes-file flag.)
	WatchRoutesFile string

	// RouteURLs are URLs pointing to route definitions, in eskip format, with change watching enabled.
	RoutesURLs []string

	// InlineRoutes can define routes as eskip text.
	InlineRoutes string

	// Polling timeout of the routing data sources.
	SourcePollTimeout time.Duration

	// DefaultFilters will be applied to all routes automatically.
	DefaultFilters *eskip.DefaultFilters

	// DisabledFilters is a list of filters unavailable for use
	DisabledFilters []string

	// CloneRoute is a slice of PreProcessors that will be applied to all routes
	// automatically. They will clone all matching routes and apply changes to the
	// cloned routes.
	CloneRoute []*eskip.Clone

	// EditRoute will be applied to all routes automatically and
	// will apply changes to all matching routes.
	EditRoute []*eskip.Editor

	// A list of custom routing pre-processor implementations that will
	// be applied to all routes.
	CustomRoutingPreProcessors []routing.PreProcessor

	// Deprecated. See ProxyFlags. When used together with ProxyFlags,
	// the values will be combined with |.
	ProxyOptions proxy.Options

	// Flags controlling the proxy behavior.
	ProxyFlags proxy.Flags

	// Tells the proxy the maximum number of idle connections it can
	// keep alive.
	IdleConnectionsPerHost int

	// Defines the time period of how often the idle connections maintained
	// by the proxy are closed.
	CloseIdleConnsPeriod time.Duration

	// Defines ReadTimeoutServer for server http connections.
	ReadTimeoutServer time.Duration

	// Defines ReadHeaderTimeout for server http connections.
	ReadHeaderTimeoutServer time.Duration

	// Defines WriteTimeout for server http connections.
	WriteTimeoutServer time.Duration

	// Defines IdleTimeout for server http connections.
	IdleTimeoutServer time.Duration

	// KeepaliveServer configures maximum age for server http connections.
	// The connection is closed after it existed for this duration.
	KeepaliveServer time.Duration

	// KeepaliveRequestsServer configures maximum number of requests for server http connections.
	// The connection is closed after serving this number of requests.
	KeepaliveRequestsServer int

	// Defines MaxHeaderBytes for server http connections.
	MaxHeaderBytes int

	// Enable connection state metrics for server http connections.
	EnableConnMetricsServer bool

	// TimeoutBackend sets the TCP client connection timeout for
	// proxy http connections to the backend.
	TimeoutBackend time.Duration

	// ResponseHeaderTimeout sets the HTTP response timeout for
	// proxy http connections to the backend.
	ResponseHeaderTimeoutBackend time.Duration

	// ExpectContinueTimeoutBackend sets the HTTP timeout to expect a
	// response for status Code 100 for proxy http connections to
	// the backend.
	ExpectContinueTimeoutBackend time.Duration

	// KeepAliveBackend sets the TCP keepalive for proxy http
	// connections to the backend.
	KeepAliveBackend time.Duration

	// DualStackBackend sets if the proxy TCP connections to the
	// backend should be dual stack.
	DualStackBackend bool

	// TLSHandshakeTimeoutBackend sets the TLS handshake timeout
	// for proxy connections to the backend.
	TLSHandshakeTimeoutBackend time.Duration

	// MaxIdleConnsBackend sets MaxIdleConns, which limits the
	// number of idle connections to all backends, 0 means no
	// limit.
	MaxIdleConnsBackend int

	// DisableHTTPKeepalives sets DisableKeepAlives, which forces
	// a backend to always create a new connection.
	DisableHTTPKeepalives bool

	// Flag indicating to ignore trailing slashes in paths during route
	// lookup.
	IgnoreTrailingSlash bool

	// Priority routes that are matched against the requests before
	// the standard routes from the data clients.
	PriorityRoutes []proxy.PriorityRoute

	// Specifications of custom, user defined predicates.
	CustomPredicates []routing.PredicateSpec

	// Custom data clients to be used together with the default etcd and Innkeeper.
	CustomDataClients []routing.DataClient

	// CustomHttpHandlerWrap provides ability to wrap http.Handler created by skipper.
	// http.Handler is used for accepting incoming http requests.
	// It allows adding additional logic (for example tracing) by providing a wrapper function
	// which accepts original skipper handler as an argument and returns a wrapped handler
	CustomHttpHandlerWrap func(http.Handler) http.Handler

	// CustomHttpRoundTripperWrap provides ability to wrap http.RoundTripper created by skipper.
	// http.RoundTripper is used for making outgoing requests (backends)
	// It allows adding additional logic (for example tracing) by providing a wrapper function
	// which accepts original skipper http.RoundTripper as an argument and returns a wrapped roundtripper
	CustomHttpRoundTripperWrap func(http.RoundTripper) http.RoundTripper

	// WaitFirstRouteLoad prevents starting the listener before the first batch
	// of routes were applied.
	WaitFirstRouteLoad bool

	// SuppressRouteUpdateLogs indicates to log only summaries of the routing updates
	// instead of full details of the updated/deleted routes.
	SuppressRouteUpdateLogs bool

	// Dev mode. Currently this flag disables prioritization of the
	// consumer side over the feeding side during the routing updates to
	// populate the updated routes faster.
	DevMode bool

	// Network address for the support endpoints
	SupportListener string

	// Deprecated: Network address for the /metrics endpoint
	MetricsListener string

	// Skipper provides a set of metrics with different keys which are exposed via HTTP in JSON.
	// You can customize those key names with your own prefix.
	MetricsPrefix string

	// EnableProfile exposes profiling information on /profile of the
	// metrics listener.
	EnableProfile bool

	// BlockProfileRate calls runtime.SetBlockProfileRate(BlockProfileRate) if non zero value, deactivate with <0
	BlockProfileRate int

	// MutexProfileFraction calls runtime.SetMutexProfileFraction(MutexProfileFraction) if non zero value, deactivate with <0
	MutexProfileFraction int

	// MemProfileRate calls runtime.SetMemProfileRate(MemProfileRate) if non zero value, deactivate with <0
	MemProfileRate int

	// Flag that enables reporting of the Go garbage collector statistics exported in debug.GCStats
	EnableDebugGcMetrics bool

	// Flag that enables reporting of the Go runtime statistics exported in runtime and specifically runtime.MemStats
	EnableRuntimeMetrics bool

	// If set, detailed response time metrics will be collected
	// for each route, additionally grouped by status and method.
	EnableServeRouteMetrics bool

	// If set, a counter for each route is generated, additionally
	// grouped by status and method. It differs from the automatically
	// generated counter from `EnableServeRouteMetrics` because it will
	// always contain the status and method labels, independently of the
	// `EnableServeMethodMetric` and `EnableServeStatusCodeMetric` flags.
	EnableServeRouteCounter bool

	// If set, detailed response time metrics will be collected
	// for each host, additionally grouped by status and method.
	EnableServeHostMetrics bool

	// If set, a counter for each host is generated, additionally
	// grouped by status and method. It differs from the automatically
	// generated counter from `EnableServeHostMetrics` because it will
	// always contain the status and method labels, independently of the
	// `EnableServeMethodMetric` and `EnableServeStatusCodeMetric` flags.
	EnableServeHostCounter bool

	// If set, the detailed total response time metrics will contain the
	// HTTP method as a domain of the metric. It affects both route and
	// host split metrics.
	EnableServeMethodMetric bool

	// If set, the detailed total response time metrics will contain the
	// HTTP Response status code as a domain of the metric. It affects
	// both route and host split metrics.
	EnableServeStatusCodeMetric bool

	// If set, detailed response time metrics will be collected
	// for each backend host
	EnableBackendHostMetrics bool

	// EnableAllFiltersMetrics enables collecting combined filter
	// metrics per each route. Without the DisableMetricsCompatibilityDefaults,
	// it is enabled by default.
	EnableAllFiltersMetrics bool

	// EnableCombinedResponseMetrics enables collecting response time
	// metrics combined for every route.
	EnableCombinedResponseMetrics bool

	// EnableRouteResponseMetrics enables collecting response time
	// metrics per each route. Without the DisableMetricsCompatibilityDefaults,
	// it is enabled by default.
	EnableRouteResponseMetrics bool

	// EnableRouteBackendErrorsCounters enables counters for backend
	// errors per each route. Without the DisableMetricsCompatibilityDefaults,
	// it is enabled by default.
	EnableRouteBackendErrorsCounters bool

	// EnableRouteStreamingErrorsCounters enables counters for streaming
	// errors per each route. Without the DisableMetricsCompatibilityDefaults,
	// it is enabled by default.
	EnableRouteStreamingErrorsCounters bool

	// EnableRouteBackendMetrics enables backend response time metrics
	// per each route. Without the DisableMetricsCompatibilityDefaults, it is
	// enabled by default.
	EnableRouteBackendMetrics bool

	// EnableRouteCreationMetrics enables the OriginMarker to track route creation times. Disabled by default
	EnableRouteCreationMetrics bool

	// When set, makes the histograms use an exponentially decaying sample
	// instead of the default uniform one.
	MetricsUseExpDecaySample bool

	// Use custom buckets for prometheus histograms.
	HistogramMetricBuckets []float64

	// The following options, for backwards compatibility, are true
	// by default: EnableAllFiltersMetrics, EnableRouteResponseMetrics,
	// EnableRouteBackendErrorsCounters, EnableRouteStreamingErrorsCounters,
	// EnableRouteBackendMetrics. With this compatibility flag, the default
	// for these options can be set to false.
	DisableMetricsCompatibilityDefaults bool

	// Implementation of a Metrics handler. If provided this is going to be used
	// instead of creating a new one based on the Kind of metrics wanted. This
	// is useful in case you want to report metrics to a custom aggregator.
	MetricsBackend metrics.Metrics

	// Output file for the application log. Default value: /dev/stderr.
	//
	// When /dev/stderr or /dev/stdout is passed in, it will be resolved
	// to os.Stderr or os.Stdout.
	//
	// Warning: passing an arbitrary file will try to open it for
	// append on start and use it, or fail on start, but the current
	// implementation doesn't support any more proper handling
	// of temporary failures or log-rolling.
	ApplicationLogOutput string

	// Application log prefix. Default value: "[APP]".
	ApplicationLogPrefix string

	// Enables logs in JSON format
	ApplicationLogJSONEnabled bool

	// ApplicationLogJsonFormatter, when set and JSON logging is enabled, is passed along to the underlying
	// Logrus logger for application logs. To enable structured logging, use ApplicationLogJSONEnabled.
	ApplicationLogJsonFormatter *log.JSONFormatter

	// Output file for the access log. Default value: /dev/stderr.
	//
	// When /dev/stderr or /dev/stdout is passed in, it will be resolved
	// to os.Stderr or os.Stdout.
	//
	// Warning: passing an arbitrary file will try to open it for
	// append on start and use it, or fail on start, but the current
	// implementation doesn't support any more proper handling
	// of temporary failures or log-rolling.
	AccessLogOutput string

	// Disables the access log.
	AccessLogDisabled bool

	// Enables logs in JSON format
	AccessLogJSONEnabled bool

	// AccessLogStripQuery, when set, causes the query strings to be stripped
	// from the request URI in the access logs.
	AccessLogStripQuery bool

	// AccessLogJsonFormatter, when set and JSON logging is enabled, is passed along to the underlying
	// Logrus logger for access logs. To enable structured logging, use AccessLogJSONEnabled.
	AccessLogJsonFormatter *log.JSONFormatter

	DebugListener string

	// Path of certificate(s) when using TLS, multiple may be given comma separated
	CertPathTLS string
	// Path of key(s) when using TLS, multiple may be given comma separated. For
	// multiple keys, the order must match the one given in CertPathTLS
	KeyPathTLS string

	// TLSClientAuth sets the policy the server will follow for
	// TLS Client Authentication, see [tls.ClientAuthType]
	TLSClientAuth tls.ClientAuthType

	// TLS Settings for Proxy Server
	ProxyTLS *tls.Config

	// Client TLS to connect to Backends
	ClientTLS *tls.Config

	// TLSMinVersion to set the minimal TLS version for all TLS configurations
	TLSMinVersion uint16

	// CipherSuites sets the list of cipher suites to use for TLS 1.2
	CipherSuites []uint16

	// Flush interval for upgraded Proxy connections
	BackendFlushInterval time.Duration

	// Experimental feature to handle protocol Upgrades for Websockets, SPDY, etc.
	ExperimentalUpgrade bool

	// ExperimentalUpgradeAudit enables audit log of both the request line
	// and the response messages during web socket upgrades.
	ExperimentalUpgradeAudit bool

	// MaxLoopbacks defines the maximum number of loops that the proxy can execute when the routing table
	// contains loop backends (<loopback>).
	MaxLoopbacks int

	// EnableBreakers enables the usage of the breakers in the route definitions without initializing any
	// by default. It is a shortcut for setting the BreakerSettings to:
	//
	// 	[]circuit.BreakerSettings{{Type: BreakerDisabled}}
	//
	EnableBreakers bool

	// BreakerSettings contain global and host specific settings for the circuit breakers.
	BreakerSettings []circuit.BreakerSettings

	// EnableRatelimiters enables the usage of the ratelimiter in the route definitions without initializing any
	// by default. It is a shortcut for setting the RatelimitSettings to:
	//
	// 	[]ratelimit.Settings{{Type: DisableRatelimit}}
	//
	EnableRatelimiters bool

	// RatelimitSettings contain global and host specific settings for the ratelimiters.
	RatelimitSettings []ratelimit.Settings

	// EnableRouteFIFOMetrics enables metrics for the individual route FIFO queues, if any.
	EnableRouteFIFOMetrics bool

	// EnableRouteLIFOMetrics enables metrics for the individual route LIFO queues, if any.
	EnableRouteLIFOMetrics bool

	// OpenTracing enables opentracing
	OpenTracing []string

	// OpenTracingInitialSpan can override the default initial, pre-routing, span name.
	// Default: "ingress".
	OpenTracingInitialSpan string

	// OpenTracingExcludedProxyTags can disable a tag so that it is not recorded. By default every tag is included.
	OpenTracingExcludedProxyTags []string

	// OpenTracingDisableFilterSpans flag is used to disable creation of spans representing request and response filters.
	OpenTracingDisableFilterSpans bool

	// OpenTracingLogFilterLifecycleEvents flag is used to enable/disable the logs for events marking request and
	// response filters' start & end times.
	OpenTracingLogFilterLifecycleEvents bool

	// OpenTracingLogStreamEvents flag is used to enable/disable the logs that marks the
	// times when response headers & payload are streamed to the client
	OpenTracingLogStreamEvents bool

	// OpenTracingBackendNameTag enables an additional tracing tag containing a backend name
	// for a route when it's available (e.g. for RouteGroups)
	OpenTracingBackendNameTag bool

	// OpenTracingTracer allows pre-created tracer to be passed on to skipper. Providing the
	// tracer instance overrides options provided under OpenTracing property.
	OpenTracingTracer ot.Tracer

	// PluginDir defines the directory to load plugins from, DEPRECATED, use PluginDirs
	PluginDir string
	// PluginDirs defines the directories to load plugins from
	PluginDirs []string

	// FilterPlugins loads additional filters from modules. The first value in each []string
	// needs to be the plugin name (as on disk, without path, without ".so" suffix). The
	// following values are passed as arguments to the plugin while loading, see also
	// https://opensource.zalando.com/skipper/reference/plugins/
	FilterPlugins [][]string

	// PredicatePlugins loads additional predicates from modules. See above for FilterPlugins
	// what the []string should contain.
	PredicatePlugins [][]string

	// DataClientPlugins loads additional data clients from modules. See above for FilterPlugins
	// what the []string should contain.
	DataClientPlugins [][]string

	// Plugins combine multiple types of the above plugin types in one plugin (where
	// necessary because of shared data between e.g. a filter and a data client).
	Plugins [][]string

	// DefaultHTTPStatus is the HTTP status used when no routes are found
	// for a request.
	DefaultHTTPStatus int

	// EnablePrometheusMetrics enables Prometheus format metrics.
	//
	// This option is *deprecated*. The recommended way to enable prometheus metrics is to
	// use the MetricsFlavours option.
	EnablePrometheusMetrics bool

	// EnablePrometheusStartLabel adds start label to each prometheus counter with the value of counter creation
	// timestamp as unix nanoseconds.
	EnablePrometheusStartLabel bool

	// An instance of a Prometheus registry. It allows registering and serving custom metrics when skipper is used as a
	// library.
	// A new registry is created if this option is nil.
	PrometheusRegistry *prometheus.Registry

	// MetricsFlavours sets the metrics storage and exposed format
	// of metrics endpoints.
	MetricsFlavours []string

	// LoadBalancerHealthCheckInterval is *deprecated* and not in use anymore
	LoadBalancerHealthCheckInterval time.Duration

	// ReverseSourcePredicate enables the automatic use of IP
	// whitelisting in different places to use the reversed way of
	// identifying a client IP within the X-Forwarded-For
	// header. Amazon's ALB for example writes the client IP to
	// the last item of the string list of the X-Forwarded-For
	// header, in this case you want to set this to true.
	ReverseSourcePredicate bool

	// EnableOAuth2GrantFlow, enables OAuth2 Grant Flow filter
	EnableOAuth2GrantFlow bool

	// OAuth2AuthURL, the url to redirect the requests to when login is required.
	OAuth2AuthURL string

	// OAuth2TokenURL, the url where the access code should be exchanged for the
	// access token.
	OAuth2TokenURL string

	// OAuth2RevokeTokenURL, the url where the access and refresh tokens can be
	// revoked during a logout.
	OAuth2RevokeTokenURL string

	// OAuthTokeninfoURL sets the URL to be queried for
	// information for all auth.NewOAuthTokeninfo*() filters.
	OAuthTokeninfoURL string

	// OAuthTokeninfoTimeout sets timeout duration while calling oauth token service
	OAuthTokeninfoTimeout time.Duration

	// OAuthTokeninfoCacheSize configures the maximum number of cached tokens.
	// Zero value disables tokeninfo cache.
	OAuthTokeninfoCacheSize int

	// OAuthTokeninfoCacheTTL limits the lifetime of a cached tokeninfo.
	// Tokeninfo is cached for the duration of "expires_in" field value seconds or
	// for the duration of OAuthTokeninfoCacheTTL if it is not zero and less than "expires_in" value.
	OAuthTokeninfoCacheTTL time.Duration

	// OAuth2SecretFile contains the filename with the encryption key for the
	// authentication cookie and grant flow state stored in Secrets.
	OAuth2SecretFile string

	// OAuth2ClientID, the OAuth2 client id of the current service, used to exchange
	// the access code.
	OAuth2ClientID string

	// OAuth2ClientSecret, the secret associated with the ClientID, used to exchange
	// the access code.
	OAuth2ClientSecret string

	// OAuth2ClientIDFile, the path of the file containing the OAuth2 client id of
	// the current service, used to exchange the access code.
	// File name may contain {host} placeholder which will be replaced by the request host.
	OAuth2ClientIDFile string

	// OAuth2ClientSecretFile, the path of the file containing the secret associated
	// with the ClientID, used to exchange the access code.
	// File name may contain {host} placeholder which will be replaced by the request host.
	OAuth2ClientSecretFile string

	// OAuth2CallbackPath contains the path where the OAuth2 callback requests with the
	// authorization code should be redirected to. Defaults to /.well-known/oauth2-callback
	OAuth2CallbackPath string

	// OAuthTokenintrospectionTimeout sets timeout duration while calling oauth tokenintrospection service
	OAuthTokenintrospectionTimeout time.Duration

	// OAuth2AuthURLParameters the additional parameters to send to OAuth2 authorize and token endpoints.
	OAuth2AuthURLParameters map[string]string

	// OAuth2AccessTokenHeaderName the name of the header to which the access token
	// should be assigned after the oauthGrant filter.
	OAuth2AccessTokenHeaderName string

	// OAuth2TokeninfoSubjectKey the key of the subject ID attribute in the
	// tokeninfo map. Used for downstream oidcClaimsQuery compatibility.
	OAuth2TokeninfoSubjectKey string

	// OAuth2GrantTokeninfoKeys, if not empty keys not in this list are removed from the tokeninfo map.
	OAuth2GrantTokeninfoKeys []string

	// OAuth2TokenCookieName the name of the cookie that Skipper sets after a
	// successful OAuth2 token exchange. Stores the encrypted access token.
	OAuth2TokenCookieName string

	// OAuth2TokenCookieRemoveSubdomains sets the number of subdomains to remove from
	// the callback request hostname to obtain token cookie domain.
	OAuth2TokenCookieRemoveSubdomains int

	// OAuth2GrantInsecure omits Secure attribute of the token cookie and uses http scheme for callback url.
	OAuth2GrantInsecure bool

	// OAuthGrantConfig specifies configuration for OAuth grant flow.
	// A new instance will be created from OAuth* options when not specified.
	OAuthGrantConfig *auth.OAuthConfig

	// CompressEncodings, if not empty, replaces the default compression encodings.
	CompressEncodings []string

	// OIDCSecretsFile is the path to the file containing the key used to encrypt the OpenID token.
	OIDCSecretsFile string

	// OIDCCookieValidity sets the validity duration used to calculate the cookie expiration time (default 1h).
	OIDCCookieValidity time.Duration

	// OIDCDistributedClaimsTimeout sets timeout duration while calling Distributed Claims endpoint.
	OIDCDistributedClaimsTimeout time.Duration

	// SecretsRegistry is used to store and load encryption secrets.
	SecretsRegistry *secrets.Registry

	// CredentialsPaths lists directories or files where credentials are stored, one secret per file.
	CredentialsPaths []string

	// CredentialsUpdateInterval sets the interval to update secrets
	CredentialsUpdateInterval time.Duration

	// API Monitoring feature is active (feature toggle)
	ApiUsageMonitoringEnable                bool
	ApiUsageMonitoringRealmKeys             string
	ApiUsageMonitoringClientKeys            string
	ApiUsageMonitoringRealmsTrackingPattern string
	// *DEPRECATED* ApiUsageMonitoringDefaultClientTrackingPattern
	ApiUsageMonitoringDefaultClientTrackingPattern string

	// DefaultFiltersDir enables the default filters mechanism and sets the directory where the filters are located.
	DefaultFiltersDir string

	// WebhookTimeout sets timeout duration while calling a custom webhook auth service
	WebhookTimeout time.Duration

	// MaxAuditBody sets the maximum read size of the body read by the audit log filter
	MaxAuditBody int

	// MaxMatcherBufferSize sets the maximum read buffer size of the blockContent filter, defaults to 2MiB.
	MaxMatcherBufferSize uint64

	// EnableSwarm enables skipper fleet communication, required by e.g.
	// the cluster ratelimiter
	EnableSwarm bool
	// redis based swarm
	SwarmRedisURLs                []string
	SwarmRedisPassword            string
	SwarmRedisHashAlgorithm       string
	SwarmRedisDialTimeout         time.Duration
	SwarmRedisReadTimeout         time.Duration
	SwarmRedisWriteTimeout        time.Duration
	SwarmRedisPoolTimeout         time.Duration
	SwarmRedisMinIdleConns        int
	SwarmRedisMaxIdleConns        int
	SwarmRedisEndpointsRemoteURL  string
	SwarmRedisConnMetricsInterval time.Duration
	SwarmRedisUpdateInterval      time.Duration
	// swim based swarm
	SwarmKubernetesNamespace          string
	SwarmKubernetesLabelSelectorKey   string
	SwarmKubernetesLabelSelectorValue string
	SwarmPort                         int
	SwarmMaxMessageBuffer             int
	SwarmLeaveTimeout                 time.Duration
	// swim based swarm for local testing
	SwarmStaticSelf  string // 127.0.0.1:9001
	SwarmStaticOther string // 127.0.0.1:9002,127.0.0.1:9003

	// SwarmRegistry specifies an optional callback function that is
	// called after ratelimit registry is initialized
	SwarmRegistry func(*ratelimit.Registry)

	// ClusterRatelimitMaxGroupShards specifies the maximum number of group shards for the clusterRatelimit filter
	ClusterRatelimitMaxGroupShards int

	// KubernetesEnableTLS enables the use of Kubernetes resources to terminate TLS.
	KubernetesEnableTLS bool

	// LuaModules that are allowed to be used.
	//
	// Use <module>.<symbol> to selectively enable module symbols,
	// for example: package,base._G,base.print,json
	LuaModules []string

	// LuaSources that are allowed as input sources. Valid values
	// are "file", "inline" and "none". An empty list defaults to
	// "file","inline", and "none" disables lua filters.
	LuaSources []string

	// Open Policy Agent options: feature toggle, configuration template,
	// Envoy metadata, cleanup/startup timing and request body limits.
	EnableOpenPolicyAgent                bool
	OpenPolicyAgentConfigTemplate        string
	OpenPolicyAgentEnvoyMetadata         string
	OpenPolicyAgentCleanerInterval       time.Duration
	OpenPolicyAgentStartupTimeout        time.Duration
	OpenPolicyAgentMaxRequestBodySize    int64
	OpenPolicyAgentRequestBodyBufferSize int64
	OpenPolicyAgentMaxMemoryBodyParsing  int64

	// PassiveHealthCheck, if non-empty, enables and configures passive health checking of backend endpoints.
	PassiveHealthCheck map[string]string
}

Options to start skipper.
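To make the OAuth2 grant flow and tokeninfo options above more concrete, here is a minimal sketch of starting Skipper as a library with a few of them set. All URLs and file paths are placeholder assumptions; the Address and RoutesFile fields belong to parts of Options not shown in this excerpt and are assumed here only to make the example runnable.

package main

import (
	"log"
	"time"

	"github.com/zalando/skipper"
)

func main() {
	err := skipper.Run(skipper.Options{
		// Assumed general options, defined earlier in Options (outside this excerpt).
		Address:    ":9090",
		RoutesFile: "example.eskip",

		// OAuth2 grant flow, see EnableOAuth2GrantFlow and the OAuth2* fields above.
		EnableOAuth2GrantFlow:  true,
		OAuth2AuthURL:          "https://auth.example.org/oauth2/authorize", // placeholder
		OAuth2TokenURL:         "https://auth.example.org/oauth2/token",     // placeholder
		OAuth2SecretFile:       "/etc/skipper/oauth2-encryption-key",        // placeholder
		OAuth2ClientIDFile:     "/etc/skipper/oauth2-client-id",             // placeholder
		OAuth2ClientSecretFile: "/etc/skipper/oauth2-client-secret",         // placeholder

		// Tokeninfo lookups with a small cache, see the OAuthTokeninfo* fields above.
		OAuthTokeninfoURL:       "https://auth.example.org/oauth2/tokeninfo", // placeholder
		OAuthTokeninfoTimeout:   2 * time.Second,
		OAuthTokeninfoCacheSize: 1000,
		OAuthTokeninfoCacheTTL:  30 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
}

In practice these fields are usually set through the corresponding command line flags of the skipper executable rather than programmatically.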

func (*Options) KubernetesDataClientOptions added in v0.16.117

func (o *Options) KubernetesDataClientOptions() kubernetes.Options

func (*Options) OAuthGrantOptions added in v0.21.50

func (o *Options) OAuthGrantOptions() *auth.OAuthConfig
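Both helpers derive configuration from an already populated Options value, which can be useful when embedding Skipper. A brief sketch follows; the field values are placeholder assumptions, and a real grant flow setup additionally needs the secrets related options described above.

package main

import (
	"fmt"

	"github.com/zalando/skipper"
)

func main() {
	opts := &skipper.Options{
		KubernetesEnableTLS:   true,
		EnableOAuth2GrantFlow: true,
		OAuth2AuthURL:         "https://auth.example.org/oauth2/authorize", // placeholder
		OAuth2TokenURL:        "https://auth.example.org/oauth2/token",     // placeholder
		OAuth2SecretFile:      "/etc/skipper/oauth2-encryption-key",        // placeholder
	}

	// Kubernetes data client options derived from the Kubernetes* fields of Options.
	kubeOpts := opts.KubernetesDataClientOptions()

	// OAuth grant flow configuration built from the OAuth2* fields; per the
	// OAuthGrantConfig comment above, Skipper creates this itself when the
	// field is left unset.
	grantConfig := opts.OAuthGrantOptions()

	fmt.Println(kubeOpts, grantConfig)
}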

type RedisEndpoint added in v0.16.5

type RedisEndpoint struct {
	Address string `json:"address"`
}

type RedisEndpoints added in v0.16.5

type RedisEndpoints struct {
	Endpoints []RedisEndpoint `json:"endpoints"`
}
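The struct tags above define the JSON shape of the endpoint list, which is presumably what the URL configured via SwarmRedisEndpointsRemoteURL is expected to serve (the exact contract is an assumption here). A small sketch of decoding such a payload:

package main

import (
	"encoding/json"
	"fmt"
)

// Local copies of the types above, matching their JSON tags.
type redisEndpoint struct {
	Address string `json:"address"`
}

type redisEndpoints struct {
	Endpoints []redisEndpoint `json:"endpoints"`
}

func main() {
	// Example payload matching the RedisEndpoints/RedisEndpoint struct tags.
	payload := []byte(`{"endpoints":[{"address":"10.2.0.1:6379"},{"address":"10.2.0.2:6379"}]}`)

	var eps redisEndpoints
	if err := json.Unmarshal(payload, &eps); err != nil {
		panic(err)
	}

	for _, e := range eps.Endpoints {
		fmt.Println(e.Address)
	}
}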

Directories

Path	Synopsis
circuit	Package circuit implements circuit breaker functionality for the proxy.
cmd
eskip	This utility can be used to verify, print, update or delete eskip formatted routes from and to different data sources.
skipper	This command provides an executable version of skipper with the default set of filters.
dataclients
kubernetes	Package kubernetes implements Kubernetes Ingress support for Skipper.
kubernetes/definitions	Package definitions provides type definitions, parsing, marshaling and validation for Kubernetes resources used by Skipper.
routestring	Package routestring provides a DataClient implementation for setting route configuration in form of simple eskip string.
eskip	Package eskip implements an in-memory representation of Skipper routes and a DSL for describing Skipper route expressions, route definitions and complete routing tables.
eskipfile	Package eskipfile implements the DataClient interface for reading the skipper route definitions from an eskip formatted file.
etcd	Package etcd implements a DataClient for reading the skipper route definitions from an etcd service.
etcdtest	Package etcdtest implements an easy startup script to start a local etcd instance for testing purpose.
filters	Package filters contains definitions for skipper filtering and a default, built-in set of filters.
accesslog	Package accesslog provides request filters that give the ability to override AccessLogDisabled setting.
apiusagemonitoring	Package apiusagemonitoring provides filters gathering metrics around API calls.
auth	Package auth provides authentication related filters.
builtin	Package builtin provides a small, generic set of filters.
circuit	Package circuit provides filters to control the circuit breaker settings on the route level.
cookie	Package cookie implements filters to append to requests or responses.
cors	Package cors implements the origin header for CORS.
diag	Package diag provides a set of network throttling filters for diagnostic purpose.
filtertest	Package filtertest implements mock versions of the Filter, Spec and FilterContext interfaces used during tests.
flowid	Package flowid implements a filter used for identifying incoming requests through their complete lifecycle for logging and monitoring.
log	Package log provides a request logging filter, usable also for audit logging.
ratelimit	Package ratelimit provides filters to control the rate limiter settings on the route level.
rfc
scheduler	Package scheduler implements filter logic that changes the http request scheduling behavior of the proxy.
sed	Package sed provides stream editor filters for request and response payload.
serve	Package serve provides a wrapper of net/http.Handler to be used as a filter.
tee	Package tee provides a unix-like tee feature for routing.
tls
tracing	Package tracing provides filters to instrument distributed tracing.
loadbalancer	Package loadbalancer implements load balancer algorithms that are applied by the proxy.
logging	Package logging implements application log instrumentation and Apache combined access log.
metrics	Package metrics implements collection of common performance metrics.
net	Package net provides generic network related functions used across Skipper, which might be useful also in other contexts than Skipper.
packaging
pathmux	Package pathmux implements a tree lookup for values associated to paths.
predicates
auth	Package auth implements custom predicates to match based on content of the HTTP Authorization header.
cookie	Package cookie implements predicate to check parsed cookie headers by name and value.
cron	Package cron implements custom predicates to match routes only when the system time matches the given cron-like expressions.
forwarded	Package forwarded implements a set of custom predicates to match routes based on the standardized Forwarded header.
interval	Package interval implements custom predicates to match routes only during some period of time.
methods	Package methods implements a custom predicate to match routes based on the http method of the request.
query	Package query implements a custom predicate to match routes based on the query params in the URL.
source	Package source implements a custom predicate to match routes based on the source IP of a request.
tee
traffic	Package traffic implements a predicate to control the matching probability for a given route by setting its weight.
proxy	Package proxy implements an HTTP reverse proxy based on continuously updated skipper routing rules.
proxytest	Generate a self-signed X.509 certificate for a TLS server.
ratelimit	Package ratelimit implements rate limiting functionality for the proxy.
rfc	Package rfc provides standards related functions.
routing	Package routing implements matching of http requests to a continuously updatable set of skipper routes.
testdataclient	Package testdataclient provides a test implementation for the DataClient interface of the skipper/routing package.
scheduler	Package scheduler provides a registry to be used as a postprocessor for the routes that use a LIFO filter.
script	Package script provides lua scripting for skipper.
base64	Package base64 provides an easy way to encode and decode base64.
secrets	Package secrets implements features we need to create, get, update, rotate secrets and encryption/decryption across a fleet of skipper instances.
swarm	Package swarm implements the exchange of information between Skipper instances using a gossip protocol called SWIM.
tracing	Package tracing handles opentracing support for skipper.
tracingtest	Package tracingtest provides an OpenTracing implementation for testing purposes.
