v0.4.1 | Published: Apr 5, 2018 | License: Apache-2.0

Heptio Contour

Maintainers: Heptio

Overview

Contour is an Ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. Unlike other Ingress controllers, Contour supports dynamic configuration updates out of the box while maintaining a lightweight profile.

This is an early release so that we can start sharing with the community. Check out the roadmap to see where we plan to go with the project.

And see the launch blog post for our vision of how Contour fits into the larger Kubernetes ecosystem.

Prerequisites

Contour is tested with Kubernetes clusters running version 1.7 and later, but should work with earlier versions.

Get started

You can try out Contour by creating a deployment from a hosted manifest -- no clone or local install necessary.

What you do need:

  • A Kubernetes cluster that supports Service objects of type: LoadBalancer (AWS Quickstart cluster or Minikube, for example)
  • kubectl configured with admin access to your cluster

See the deployment documentation for more deployment options if you don't meet these requirements.

Add Contour to your cluster

Run:

$ kubectl apply -f https://j.hept.io/contour-deployment-rbac

If RBAC isn't enabled on your cluster (for example, if you're on GKE with legacy authorization), run:

$ kubectl apply -f https://j.hept.io/contour-deployment-norbac

This command creates:

  • A new namespace heptio-contour with two instances of Contour in the namespace
  • A Service of type: LoadBalancer that points to the Contour instances
  • Depending on your configuration, new cloud resources -- for example, ELBs in AWS

See also TLS support for details on configuring TLS support. TLS is available in Contour version 0.3 and later.
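On an Ingress object, TLS is enabled the standard Kubernetes way, by referencing a certificate Secret from the spec. As a sketch only (the hostname, Secret name, and backend service here are hypothetical placeholders, and the extensions/v1beta1 API group is assumed because it is current for the Kubernetes versions this release targets), a TLS-enabled Ingress might look like the manifest this snippet writes out:

```shell
# Write a hypothetical TLS-enabled Ingress manifest. The host,
# Secret name, and backend service are placeholders, not names
# from the Contour documentation.
cat > tls-ingress.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-tls
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls-secret   # a Secret of type kubernetes.io/tls
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: example-app
          servicePort: 80
EOF
echo "wrote tls-ingress.yaml"
# Apply with: kubectl apply -f tls-ingress.yaml
```

See the TLS support documentation for the details Contour actually requires.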

Example workload

If you don't have an application ready to run with Contour, you can explore with kuard.

Run:

$ kubectl apply -f https://j.hept.io/contour-kuard-example

This example specifies a default backend for all hosts, so that you can test your Contour install. However, it's recommended for exploration and testing only, because it responds to all requests regardless of the hostname they are sent to. In practice you probably want Ingress rules that match specific hostnames.
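To move beyond the catch-all default backend, you can give kuard a dedicated rule for a single hostname. A minimal sketch (the hostname is a placeholder; the Service is assumed to be named kuard on port 80, as in the example workload):

```shell
# Write a minimal Ingress that routes only one hostname to kuard.
# "kuard.example.com" is a placeholder hostname; the Service name
# and port are assumed from the example workload above.
cat > kuard-ingress.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kuard
spec:
  rules:
  - host: kuard.example.com
    http:
      paths:
      - backend:
          serviceName: kuard
          servicePort: 80
EOF
echo "wrote kuard-ingress.yaml"
# Apply with: kubectl apply -f kuard-ingress.yaml
```

With a rule like this, requests for other hostnames no longer hit kuard.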

Access your cluster

Now you can retrieve the external address of Contour's load balancer:

$ kubectl get -n heptio-contour service contour -o wide
NAME      CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE       SELECTOR
contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h        app=contour

On Minikube:

$ minikube service -n heptio-contour contour --url
http://192.168.99.100:30588

Configuring DNS

How you configure DNS depends on your platform:

  • On AWS, create a CNAME record that maps the host in your Ingress object to the ELB address.
  • If you have an IP address instead (on GCE, for example), create an A record.
  • On Minikube, you can fake DNS by editing /etc/hosts, or you can use the provided example and avoid modifying DNS on your local machine.
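If you take the /etc/hosts route, the entry simply maps your chosen hostname to the Minikube IP. This sketch writes the line to a scratch file to show the format rather than touching /etc/hosts directly (the IP and hostname are examples):

```shell
# Build an /etc/hosts-style entry mapping a test hostname to the
# Minikube IP. In real use you would take the IP from `minikube ip`
# and append the line to /etc/hosts with sudo; here we only write
# it to a scratch file to show the format.
MINIKUBE_IP=192.168.99.100          # e.g. $(minikube ip)
echo "${MINIKUBE_IP} kuard.example.com" > hosts-entry.txt
cat hosts-entry.txt                 # prints: 192.168.99.100 kuard.example.com
```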

Run:

$ kubectl apply -f https://j.hept.io/contour-kuard-minikube-example

This example YAML specifies kuard.192.168.99.100.nip.io as the Ingress host for kuard. It uses nip.io together with the Minikube IP address so that kuard responds only at http://kuard.192.168.99.100.nip.io. Once the manifest is applied, you can visit http://kuard.192.168.99.100.nip.io to see the kuard example application.
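The trick behind nip.io is purely lexical: any name of the form prefix.IP.nip.io resolves to that IP, so no DNS records need to be created. You can see the embedded address by stripping the prefix and suffix from the hostname:

```shell
# nip.io resolves <name>.<ip>.nip.io to <ip>; the address is embedded
# in the hostname itself. Extract it with POSIX parameter expansion.
host=kuard.192.168.99.100.nip.io
ip=${host#kuard.}      # drop the leading label
ip=${ip%.nip.io}       # drop the nip.io suffix
echo "$ip"             # prints 192.168.99.100
```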

More information and documentation

For more deployment options, including uninstalling Contour, see the deployment documentation.

See also the Kubernetes documentation for Services and Ingress.

The detailed documentation provides additional information, including an introduction to Envoy and an explanation of how Contour maps key Envoy concepts to Kubernetes.

We've also got an FAQ for short-answer questions and conceptual stuff that doesn't quite belong in the docs.

Troubleshooting

If you encounter any problems that the documentation does not address, file an issue.

Contributing

Thanks for taking the time to join our community and start contributing!

  • Please familiarize yourself with the Code of Conduct before contributing.
  • See CONTRIBUTING.md for information about setting up your environment, the workflow that we expect, and instructions on the developer certificate of origin that we require.
  • Check out the issues and our roadmap.

Changelog

See the list of releases to find out about feature changes.

Directories

Path Synopsis
cmd
internal
contour
Package contour contains the translation business logic that listens to Kubernetes ResourceEventHandler events and translates those into additions/deletions in caches connected to the Envoy xDS gRPC API server.
e2e
envoy
Package envoy contains a configuration writer for v2 YAML config.
grpc
Package grpc provides a gRPC implementation of the Envoy v2 xDS API.
k8s
Package k8s contains adapters to watch k8s API servers.
workgroup
workgroup provides a mechanism for controlling the lifetime of a group of related goroutines (workers).
