# Cross-cluster Connectivity

Multi-cluster DNS for Cluster API.
## What?

This project aims to enable a Kubernetes Service in one cluster to be
discovered and used by a client pod in a different cluster, while placing
minimal requirements on the clusters or their networking environment.
## Why?

There are various ways to share Kubernetes Services across clusters. Each has
its limitations, often due to assumptions made about the infrastructure
network, DNS, or tenancy model of the clusters.

This project aims to provide cross-cluster service discovery and connectivity
even when:

- Service `type: LoadBalancer` is too expensive to use for every Service
- Pod and ClusterIP CIDR blocks are identical on all clusters
- Users do not have permissions to edit public DNS records
- Clusters do not follow the rules of Namespace Sameness
## Architecture

![Architecture diagram]

Cross-cluster Connectivity (XCC) adds two new controllers (green in the
diagram above):

- `xcc-dns-controller` on the management cluster
- `dns-server` on any workload cluster where cross-cluster DNS resolution is
  desired
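
A rough sketch of how the two controllers cooperate, inferred from the
walkthrough below rather than from a formal spec (the hostname shown is the
walkthrough's own example):

```bash
# Conceptual flow, assuming the default xcc.test zone from the walkthrough:
#   1. xcc-dns-controller (management cluster) watches Clusters and GatewayDNS
#      records, and publishes gateway endpoints for each selected cluster.
#   2. dns-server (workload cluster) serves those records for the xcc.test
#      zone; the cluster's root DNS is patched to forward that zone to it.
# So, from a pod inside any participating cluster, a lookup such as:
nslookup kuard.gateway.cluster-a.dev-team.clusters.xcc.test
# returns the load balancer IP of the ingress on cluster-a.
```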
## Limitations

The current project has some important limitations:

- End-to-end use cases rely on an L7 ingress (gateway) for HTTP(S) or
  SNI-based routing
  - Potential future feature: resolve NodePort Services, e.g. #71
- No resolution across management-cluster namespaces
- The ingress gateway must be behind a Service of `type: LoadBalancer`
  - Potential future feature: support ingress listening on node ports or host
    ports (#44 and #45)
- Hostnames include the name of the cluster hosting the service
  - Potential future feature: some kind of CNAME support to hide the cluster
    name
## Walkthrough

This walkthrough assumes:

- A management cluster exists, running the Cluster API.
- Two workload clusters exist, with support for Services of type
  LoadBalancer. For the sake of this doc, assume clusters `cluster-a` and
  `cluster-b` exist, and that both of these Clusters belong to the `dev-team`
  namespace on the management cluster.
  - If you're using kind, you'll need to bring your own load balancer.
    Consider using MetalLB.
### Install Multi-cluster DNS on the management cluster

- Install the GatewayDNS CRD on the management cluster:

  ```bash
  kubectl --kubeconfig management.kubeconfig \
    apply -f manifests/crds/connectivity.tanzu.vmware.com_gatewaydns.yaml
  ```
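
  To confirm the CRD registered, you can list it by name fragment (a generic
  kubectl check; grep is used so the CRD's exact name isn't assumed):

  ```bash
  kubectl --kubeconfig management.kubeconfig get crd | grep gatewaydns
  ```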
- Install the xcc-dns-controller on the management cluster:

  ```bash
  kubectl --kubeconfig management.kubeconfig \
    apply -f manifests/xcc-dns-controller/deployment.yaml
  ```

  Note: by default, this manifest configures the controller to generate
  cross-cluster DNS records with the suffix `xcc.test`. To use a different
  suffix, customize your deployment YAML, e.g.:

  ```bash
  cat manifests/xcc-dns-controller/deployment.yaml \
    | sed 's/xcc\.test/multi-cluster.example.com/g' \
    | kubectl --kubeconfig management.kubeconfig apply -f -
  ```
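
  As with any Deployment, you can confirm the controller came up (grep avoids
  assuming which namespace the manifest creates):

  ```bash
  kubectl --kubeconfig management.kubeconfig \
    get pods --all-namespaces | grep xcc-dns-controller
  ```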
### Install Multi-cluster DNS on each workload cluster

- Deploy the dns-server controller on both workload clusters:

  ```bash
  kubectl --kubeconfig cluster-a.kubeconfig \
    apply -f manifests/dns-server/
  ```
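
  The next step points the cluster's root DNS at this server, so it can be
  useful to note its Service first (again, grep so the namespace isn't
  assumed):

  ```bash
  kubectl --kubeconfig cluster-a.kubeconfig \
    get services --all-namespaces | grep dns-server
  ```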
- Configure your cluster's root DNS server to forward queries for the
  `xcc.test` zone to the xcc dns-server. This can be done by running the
  dns-config-patcher job:

  ```bash
  kubectl --kubeconfig cluster-a.kubeconfig \
    apply -f manifests/dns-config-patcher/deployment.yaml
  ```

  Note: to use a DNS zone other than `xcc.test`, customize your deployment
  YAML, e.g.:

  ```bash
  cat manifests/dns-config-patcher/deployment.yaml \
    | sed 's/xcc\.test/multi-cluster.example.com/g' \
    | kubectl --kubeconfig cluster-a.kubeconfig apply -f -
  ```
Repeat the steps above for `cluster-b`.
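
For convenience, the same apply commands targeting `cluster-b` (if you
customized the DNS zone, repeat the sed variant instead):

```bash
kubectl --kubeconfig cluster-b.kubeconfig \
  apply -f manifests/dns-server/
kubectl --kubeconfig cluster-b.kubeconfig \
  apply -f manifests/dns-config-patcher/deployment.yaml
```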
### Deploy a load-balanced service to cluster-a

- Install Contour:

  ```bash
  kubectl --kubeconfig cluster-a.kubeconfig \
    apply -f manifests/contour/
  ```
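
  Before moving on, you may want to wait for Contour's Envoy Service to be
  assigned an external IP by your load balancer; `projectcontour/envoy` is
  the Service the GatewayDNS record below points at:

  ```bash
  kubectl --kubeconfig cluster-a.kubeconfig \
    get service envoy -n projectcontour -w
  ```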
- Deploy a workload (kuard):

  ```bash
  kubectl --kubeconfig cluster-a.kubeconfig \
    apply -f manifests/example/kuard.yaml
  ```
### Lastly, wire up cross-cluster connectivity

- On the management cluster, label the Clusters whose services should be
  discoverable by other clusters. The GatewayDNS record created later will
  use this label as its `clusterSelector`. In this example, the clusters are
  using the label `hasContour=true`.

  ```bash
  kubectl --kubeconfig management.kubeconfig \
    -n dev-team label cluster cluster-a hasContour=true --overwrite
  kubectl --kubeconfig management.kubeconfig \
    -n dev-team label cluster cluster-b hasContour=true --overwrite
  ```
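
  To double-check that the labels landed (a plain kubectl query, nothing
  project-specific):

  ```bash
  kubectl --kubeconfig management.kubeconfig \
    -n dev-team get clusters -l hasContour=true
  ```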
- On the management cluster, create a GatewayDNS record:

  ```bash
  kubectl --kubeconfig management.kubeconfig \
    -n dev-team apply -f manifests/example/dev-team-gateway-dns.yaml
  ```

  The GatewayDNS's spec has a `clusterSelector` that tells the
  `xcc-dns-controller` which clusters to watch for services. On each matching
  cluster, the controller looks for the Service named by the `service` field,
  in `namespace/name` form. In this example, Contour runs that Service at
  `projectcontour/envoy`, and its `resolutionType` is `loadBalancer`.
  ```yaml
  ---
  apiVersion: connectivity.tanzu.vmware.com/v1alpha1
  kind: GatewayDNS
  metadata:
    name: dev-team-gateway-dns
    namespace: dev-team
  spec:
    clusterSelector:
      matchLabels:
        hasContour: "true"
    service: projectcontour/envoy
    resolutionType: loadBalancer
  ```
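
  Once applied, the resource should be visible through the CRD installed
  earlier:

  ```bash
  kubectl --kubeconfig management.kubeconfig \
    -n dev-team get gatewaydns
  ```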
### Test DNS resolution from cluster-b to cluster-a

At this point the kuard application deployed to `cluster-a` should be
addressable from `cluster-b`:

```bash
kubectl --kubeconfig cluster-b.kubeconfig \
  run kuard-test -i --rm --image=curlimages/curl \
  --restart=Never -- curl -v --connect-timeout 3 \
  http://kuard.gateway.cluster-a.dev-team.clusters.xcc.test
```
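
If the curl fails, it can help to test the DNS step in isolation first. This
uses a stock busybox image; the hostname follows the shape
`<host>.gateway.<cluster>.<namespace>.clusters.<zone>` seen above:

```bash
kubectl --kubeconfig cluster-b.kubeconfig \
  run dns-test -i --rm --image=busybox \
  --restart=Never -- nslookup \
  kuard.gateway.cluster-a.dev-team.clusters.xcc.test
```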
## Contributing

Please read CONTRIBUTING.md for details on the process for running tests,
making changes, and submitting issues and pull requests to the project.

## Code of Conduct

Please familiarize yourself with the Code of Conduct before contributing. This
code of conduct applies to the Cross-cluster Connectivity community at large
(Slack, mailing lists, Twitter, etc.).

## License

This project is licensed under the Apache-2.0 License. See the LICENSE file
for details.