Nginx Ingress Controller

This is an nginx Ingress controller that uses a ConfigMap to store the nginx configuration. See the Ingress controller documentation for details on how it works.

What does it provide?

  • Ingress controller
  • nginx 1.9.x with:
      • SSL support
      • custom ssl_dhparam (optional). Just mount a secret with a file named dhparam.pem (see the sketch after this list).
      • support for TCP services (flag --tcp-services-configmap)
      • custom nginx configuration using a ConfigMap
      • custom error pages. Using the flag --custom-error-service it is possible to use a custom, compatible 404-server image (nginx-error-server) that provides an additional /errors route returning custom content for a particular error code. This is completely optional.
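
For the optional ssl_dhparam mentioned above, a minimal sketch (the secret name lb-dhparam is illustrative; the file must be named dhparam.pem):

# Generate DH parameters and store them in a secret the controller can mount.
openssl dhparam -out dhparam.pem 2048
kubectl create secret generic lb-dhparam --from-file=dhparam.pem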

Requirements

  • default backend 404-server (or a custom compatible image)

TLS

You can secure an Ingress by specifying a secret that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller supports SNI. The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and private key to use for TLS, e.g.:

apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque

Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80

Please follow test.sh as a guide on how to generate secrets containing SSL certificates. The name of the secret can be different from the name of the certificate.
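
test.sh is not reproduced here, but a rough sketch of the same idea (hostname, file names and secret name are illustrative; follow test.sh or examples/certs.sh for the real procedure):

# Create a self-signed certificate and key for foo.bar.com.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com"
# Store them in a secret under the required key names tls.crt and tls.key.
kubectl create secret generic testsecret --from-file=tls.crt --from-file=tls.key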

Optimizing TLS Time To First Byte (TTTFB)

NGINX provides the configuration option ssl_buffer_size (http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow tuning of the TLS record size. This improves the Time To First Byte (TTTFB); see https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/. The default value in the Ingress controller is 4k (the nginx default is 16k).
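
The record size can be tuned through the same ConfigMap mechanism described under "Custom NGINX configuration" below; a minimal sketch (the ConfigMap name is illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  ssl-buffer-size: 4k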

Examples:

First we need to deploy some application to publish. To keep this simple we will use the echoheaders app, which just returns information about the HTTP request as output:

kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.1 --replicas=1 --port=8080

Now we expose the same application in two different services (so we can create different Ingress rules)

kubectl expose rc echoheaders --port=80 --target-port=8080 --name=echoheaders-x 
kubectl expose rc echoheaders --port=80 --target-port=8080 --name=echoheaders-y

Next we create a couple of Ingress rules

kubectl create -f examples/ingress.yaml
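
The contents of examples/ingress.yaml are not reproduced here; a rough sketch matching the rules listed below would look like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: echoheaders-x
          servicePort: 80
  - host: bar.baz.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: echoheaders-y
          servicePort: 80
      - path: /foo
        backend:
          serviceName: echoheaders-x
          servicePort: 80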

We check that the Ingress rules are defined:

$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
echomap   -
          foo.bar.com
          /foo          echoheaders-x:80
          bar.baz.com
          /bar          echoheaders-y:80
          /foo          echoheaders-x:80

Before deploying nginx we need a default backend 404-server (or a compatible custom image):

kubectl create -f examples/default-backend.yaml
kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend

Default configuration

The last step is to deploy the nginx Ingress rc (from the examples directory):

kubectl create -f examples/rc-default.yaml

To test if everything is working correctly:

curl -v http://<node IP address>:80/foo -H 'Host: foo.bar.com'

You should see output similar to:

*   Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
> GET /foo HTTP/1.1
> Host: foo.bar.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.9.8
< Date: Tue, 15 Dec 2015 13:45:13 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
CLIENT VALUES:
client_address=10.2.84.43
command=GET
real path=/foo
query=nil
request_version=1.1
request_uri=http://foo.bar.com:8080/foo

SERVER VALUES:
server_version=nginx: 1.9.7 - lua: 9019

HEADERS RECEIVED:
accept=*/*
connection=close
host=foo.bar.com
user-agent=curl/7.43.0
x-forwarded-for=172.17.4.1
x-forwarded-host=foo.bar.com
x-forwarded-server=foo.bar.com
x-real-ip=172.17.4.1
BODY:
* Connection #0 to host 172.17.4.99 left intact

If we try to get a nonexistent route like /foobar we should see:

$ curl -v 172.17.4.99/foobar -H 'Host: foo.bar.com'
*   Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
> GET /foobar HTTP/1.1
> Host: foo.bar.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.9.8
< Date: Tue, 15 Dec 2015 13:48:18 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
default backend - 404
* Connection #0 to host 172.17.4.99 left intact

(This test checks that the default backend is working properly.)

By replacing the default backend with a custom one we can change the default error pages provided by nginx.

Exposing TCP services

First we need to remove the running rc:

kubectl delete rc nginx-ingress-3rdpartycfg

To configure which services and ports will be exposed, create the ConfigMap:

kubectl create -f examples/tcp-configmap-example.yaml

The file examples/tcp-configmap-example.yaml uses a ConfigMap where the key is the external port to use and the value is <namespace/service name>:<service port>. It is possible to use either the number or the name of the port.
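
A minimal sketch of such a ConfigMap (the ConfigMap name, external port and target service are illustrative; here port 9000, used in the test below, is mapped to the echoheaders-x service created above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  "9000": default/echoheaders-x:80

Then deploy the rc that consumes it: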

kubectl create -f examples/rc-tcp.yaml

Now we can test the new service:

$ (sleep 1; echo "GET / HTTP/1.1"; echo "Host: 172.17.4.99:9000"; echo;echo;sleep 2) | telnet 172.17.4.99 9000

Trying 172.17.4.99...
Connected to 172.17.4.99.
Escape character is '^]'.
HTTP/1.1 200 OK
Server: nginx/1.9.7
Date: Tue, 15 Dec 2015 14:46:28 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive

f
CLIENT VALUES:

1a
client_address=10.2.84.45

c
command=GET

c
real path=/

a
query=nil

14
request_version=1.1

25
request_uri=http://172.17.4.99:8080/

1


f
SERVER VALUES:

28
server_version=nginx: 1.9.7 - lua: 9019

1


12
HEADERS RECEIVED:

16
host=172.17.4.99:9000

6
BODY:

14
-no body in request-
0

SSL

First create a secret containing the SSL certificate and key. This example creates the certificate and the secret (JSON):

SECRET_NAME=secret-echoheaders-1 HOSTS=foo.bar.com ./examples/certs.sh

Create the secret:

kubectl create -f secret-secret-echoheaders-1-foo.bar.com.json

Check if the secret was created:

$ kubectl get secrets
NAME                   TYPE                                  DATA      AGE
secret-echoheaders-1   Opaque                                2         9m

As before, we need to remove the running nginx rc:

kubectl delete rc nginx-ingress-3rdpartycfg

Next create a new rc that uses the secret

kubectl create -f examples/rc-ssl.yaml

Note: this example uses a self-signed certificate.

Example output:

$ curl -v https://172.17.4.99/foo -H 'Host: bar.baz.com' -k
*   Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 4444 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: foo.bar.com
> GET /foo HTTP/1.1
> Host: bar.baz.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.9.8
< Date: Thu, 17 Dec 2015 14:57:03 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
CLIENT VALUES:
client_address=10.2.84.34
command=GET
real path=/foo
query=nil
request_version=1.1
request_uri=http://bar.baz.com:8080/foo

SERVER VALUES:
server_version=nginx: 1.9.7 - lua: 9019

HEADERS RECEIVED:
accept=*/*
connection=close
host=bar.baz.com
user-agent=curl/7.43.0
x-forwarded-for=172.17.4.1
x-forwarded-host=bar.baz.com
x-forwarded-server=bar.baz.com
x-real-ip=172.17.4.1
BODY:
* Connection #0 to host 172.17.4.99 left intact
-no body in request-

Custom errors

The default backend provides a way to customize the default 404 page. This helps, but sometimes it is not enough. Using the flag --custom-error-service it is possible to use an image that must be 404-server compatible and provide the route /error; nginx-error-server (mentioned above) is an example of such an image.

The route /error expects two arguments: code and format

  • code defines which error code is expected to be returned (502, 503, etc.)
  • format defines the format that should be returned, for instance /error?code=504&format=json or /error?code=502&format=html

Using a volume pointing to the /var/www/html directory it is possible to serve custom error content, as in the sketch below.
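
A minimal, hypothetical sketch of such a volume in the error-server pod spec (names, image reference and volume source are illustrative):

containers:
- name: error-server
  image: nginx-error-server        # the 404-compatible image mentioned above
  volumeMounts:
  - name: custom-errors
    mountPath: /var/www/html       # custom error content is served from here
volumes:
- name: custom-errors
  configMap:
    name: custom-error-pages       # hypothetical ConfigMap holding the pages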

Debug

Using the flag --v=XX it is possible to increase the level of logging. In particular:

  • --v=2 shows details, using diff, of the changes in the nginx configuration:
I0316 12:24:37.581267       1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf
I0316 12:24:37.581356       1 utils.go:149] --- /tmp/922554809  2016-03-16 12:24:37.000000000 +0000
+++ /tmp/079811012  2016-03-16 12:24:37.000000000 +0000
@@ -235,7 +235,6 @@

     upstream default-echoheadersx {
         least_conn;
-        server 10.2.112.124:5000;
         server 10.2.208.50:5000;

     }
I0316 12:24:37.610073       1 command.go:69] change in configuration detected. Reloading...
  • --v=3 shows details about the service, Ingress rule, and endpoint changes, and dumps the nginx configuration in JSON format
  • --v=5 configures NGINX in debug mode
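
The verbosity flag is passed to the controller binary in the rc; an illustrative excerpt of the container spec (other flags and fields omitted):

args:
- /nginx-third-party-lb
- --v=3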

Custom NGINX configuration

Using a ConfigMap it is possible to customize the defaults in nginx. The next command shows the defaults:

$ ./nginx-third-party-lb --dump-nginx-configuration

Example of a ConfigMap to customize the NGINX configuration:

apiVersion: v1
data:
  body-size: 1m
  error-log-level: info
  gzip-types: application/atom+xml application/javascript application/json application/rss+xml
    application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json
    application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon
    text/css text/plain text/x-component
  hts-include-subdomains: "true"
  hts-max-age: "15724800"
  keep-alive: "75"
  max-worker-connections: "16384"
  proxy-connect-timeout: "30"
  proxy-read-timeout: "30"
  proxy-real-ip-cidr: 0.0.0.0/0
  proxy-send-timeout: "30"
  server-name-hash-bucket-size: "64"
  server-name-hash-max-size: "512"
  ssl-buffer-size: 4k
  ssl-ciphers: ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
  ssl-protocols: TLSv1 TLSv1.1 TLSv1.2
  ssl-session-cache: "true"
  ssl-session-cache-size: 10m
  ssl-session-tickets: "true"
  ssl-session-timeout: 10m
  use-gzip: "true"
  use-hts: "true"
  worker-processes: "8"
kind: ConfigMap
metadata:
  name: custom-name
  namespace: a-valid-namespace

For instance, if we want to change the timeouts we need to create a ConfigMap:

$ cat nginx-load-balancer-conf.yaml
apiVersion: v1
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf

$ kubectl create -f nginx-load-balancer-conf.yaml

Please check the example rc-custom-configuration.yaml.

If the ConfigMap is updated, NGINX will be reloaded with the new configuration.
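
For example, after editing the file, replace the ConfigMap in place and the controller picks up the change:

kubectl replace -f nginx-load-balancer-conf.yaml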

Troubleshooting

Problems encountered during 1.2.0-alpha7 deployment:

  • make setup-files.sh in hyperkube does not provide the 10.0.0.1 IP to make-ca-certs, resulting in CA certs that are issued to the external cluster IP address rather than 10.0.0.1. This results in nginx-third-party-lb appearing to get stuck at "Utils.go:177 - Waiting for default/default-http-backend" in the Docker logs. Kubernetes will eventually kill the container before nginx-third-party-lb times out with a message indicating that the CA certificate issuer is invalid (wrong IP). To verify this, add zeros to the end of initialDelaySeconds and timeoutSeconds and reload the RC; Docker will log this error before Kubernetes kills the container.
