KubeVPN
Chinese | English | Wiki
KubeVPN is a cloud-native development environment. It connects your local machine to the Kubernetes cluster network, so you can access the remote cluster network from your PC, and services in the remote cluster can also access services running on your local machine. What's more, you can run a Kubernetes pod in a local Docker container with the same environment, volumes, and network as in the cluster, so you can develop your application entirely on your local PC.
QuickStart
Install from GitHub release
https://github.com/wencaiwulue/kubevpn/releases/latest
Install from custom krew index
(
kubectl krew index add kubevpn https://github.com/wencaiwulue/kubevpn.git && \
kubectl krew install kubevpn/kubevpn && kubectl kubevpn
)
Install by building it manually
(
git clone https://github.com/wencaiwulue/kubevpn.git && \
cd kubevpn && make kubevpn && ./bin/kubevpn
)
Install bookinfo as a demo application
kubectl apply -f https://raw.githubusercontent.com/wencaiwulue/kubevpn/master/samples/bookinfo.yaml
Functions
Connect to k8s cluster network
➜ ~ kubevpn connect
get cidr from cluster info...
get cidr from cluster info ok
get cidr from cni...
get cidr from svc...
get cidr from svc ok
traffic manager not exist, try to create it...
pod [kubevpn-traffic-manager] status is Pending
Container Reason Message
pod [kubevpn-traffic-manager] status is Pending
Container Reason Message
control-plane ContainerCreating
vpn ContainerCreating
webhook ContainerCreating
pod [kubevpn-traffic-manager] status is Running
Container Reason Message
control-plane ContainerRunning
vpn ContainerRunning
webhook ContainerRunning
update ref count successfully
port forward ready
your ip is 223.254.0.101
tunnel connected
dns service ok
---------------------------------------------------------------------------
Now you can access resources in the kubernetes cluster, enjoy it :)
---------------------------------------------------------------------------
After you see this prompt, leave this terminal alone, open a new terminal, and continue with the operations below.
➜ ~ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
details-7db5668668-mq9qr 1/1 Running 0 7m 172.27.0.199 172.30.0.14 <none> <none>
kubevpn-traffic-manager-99f8c8d77-x9xjt 1/1 Running 0 74s 172.27.0.207 172.30.0.14 <none> <none>
productpage-8f9d86644-z8snh 1/1 Running 0 6m59s 172.27.0.206 172.30.0.14 <none> <none>
ratings-859b96848d-68d7n 1/1 Running 0 6m59s 172.27.0.201 172.30.0.14 <none> <none>
reviews-dcf754f9d-46l4j 1/1 Running 0 6m59s 172.27.0.202 172.30.0.14 <none> <none>
➜ ~ ping 172.27.0.206
PING 172.27.0.206 (172.27.0.206): 56 data bytes
64 bytes from 172.27.0.206: icmp_seq=0 ttl=63 time=49.563 ms
64 bytes from 172.27.0.206: icmp_seq=1 ttl=63 time=43.014 ms
64 bytes from 172.27.0.206: icmp_seq=2 ttl=63 time=43.841 ms
64 bytes from 172.27.0.206: icmp_seq=3 ttl=63 time=44.004 ms
64 bytes from 172.27.0.206: icmp_seq=4 ttl=63 time=43.484 ms
^C
--- 172.27.0.206 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 43.014/44.781/49.563/2.415 ms
➜ ~ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
details ClusterIP 172.27.255.92 <none> 9080/TCP 9m7s app=details
productpage ClusterIP 172.27.255.48 <none> 9080/TCP 9m6s app=productpage
ratings ClusterIP 172.27.255.154 <none> 9080/TCP 9m7s app=ratings
reviews ClusterIP 172.27.255.155 <none> 9080/TCP 9m6s app=reviews
➜ ~ curl 172.27.255.48:9080
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
Domain name resolution
➜ ~ curl productpage.default.svc.cluster.local:9080
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
Short domain name resolution
➜ ~ curl productpage:9080
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
...
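Since cluster service names resolve from your local machine once connected (as the curl examples above show), ordinary application code running on your PC can call in-cluster services by name. A minimal Go sketch, assuming the connection above is still active and the bookinfo demo is installed:

package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Resolve the short service name, just like `curl productpage:9080` above.
	addrs, err := net.LookupHost("productpage")
	if err != nil {
		panic(err)
	}
	fmt.Println("productpage resolves to:", addrs)

	// Call the in-cluster service by its full cluster domain name.
	resp, err := http.Get("http://productpage.default.svc.cluster.local:9080")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, "-", len(body), "bytes")
}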
Reverse proxy
➜ ~ kubevpn connect --workloads deployment/productpage
got cidr from cache
traffic manager not exist, try to create it...
pod [kubevpn-traffic-manager] status is Running
Container Reason Message
control-plane ContainerRunning
vpn ContainerRunning
webhook ContainerRunning
update ref count successfully
Waiting for deployment "productpage" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "productpage" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "productpage" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
deployment "productpage" successfully rolled out
port forward ready
your ip is 223.254.0.101
tunnel connected
dns service ok
---------------------------------------------------------------------------
Now you can access resources in the kubernetes cluster, enjoy it :)
---------------------------------------------------------------------------
Now run a simple HTTP service on your local PC, listening on the same port 9080 as the productpage service:

package main

import (
	"io"
	"net/http"
)

func main() {
	// Respond to every request with a fixed body, so it is easy to tell
	// that traffic reached the local process instead of the in-cluster pod.
	http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
		_, _ = io.WriteString(writer, "Hello world!")
	})
	// Listen on the same port as the productpage service (9080).
	_ = http.ListenAndServe(":9080", nil)
}
➜ ~ curl productpage:9080
Hello world!%
➜ ~ curl productpage.default.svc.cluster.local:9080
Hello world!%
Reverse proxy with mesh
Supports HTTP, gRPC, WebSocket, and more. Traffic with the specific header "a: 1" will be routed to your local machine.
➜ ~ kubevpn connect --workloads=deployment/productpage --headers a=1
got cidr from cache
traffic manager not exist, try to create it...
pod [kubevpn-traffic-manager] status is Running
Container Reason Message
control-plane ContainerRunning
vpn ContainerRunning
webhook ContainerRunning
update ref count successfully
Waiting for deployment "productpage" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "productpage" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "productpage" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "productpage" rollout to finish: 1 old replicas are pending termination...
deployment "productpage" successfully rolled out
port forward ready
your ip is 223.254.0.101
tunnel connected
dns service ok
---------------------------------------------------------------------------
Now you can access resources in the kubernetes cluster, enjoy it :)
---------------------------------------------------------------------------
➜ ~ curl productpage:9080
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
...
➜ ~ curl productpage:9080 -H "a: 1"
Hello world!%
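Only requests carrying the header are proxied to your local workload; all other traffic keeps hitting the pods in the cluster. A minimal Go sketch of a client that tags its requests with the routing header (the a=1 key/value simply mirrors the --headers a=1 flag used above; for gRPC the same key/value would be attached as outgoing metadata):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Build a request to the in-cluster service name, reachable after kubevpn connect.
	req, err := http.NewRequest(http.MethodGet, "http://productpage:9080", nil)
	if err != nil {
		panic(err)
	}
	// Traffic carrying "a: 1" is routed to the locally proxied workload;
	// requests without the header keep going to the pods in the cluster.
	req.Header.Set("a", "1")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // expected: Hello world! (served by the local process above)
}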
Dev mode in local Docker
Run a Kubernetes pod in a local Docker container and, combined with the service mesh, intercept traffic with the specified headers to your local machine, or intercept all traffic to your local machine.
➜ ~ kubevpn dev deployment/authors -n kube-system --headers a=1 -p 9080:9080 -p 80:80
got cidr from cache
update ref count successfully
traffic manager already exist, reuse it
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "authors" rollout to finish: 1 old replicas are pending termination...
deployment "authors" successfully rolled out
port forward ready
tunnel connected
dns service ok
tar: removing leading '/' from member names
/var/folders/4_/wt19r8113kq_mfws8sb_w1z00000gn/T/3264799524258261475:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/var/folders/4_/wt19r8113kq_mfws8sb_w1z00000gn/T/4472770436329940969:/var/run/secrets/kubernetes.io/serviceaccount
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
/var/folders/4_/wt19r8113kq_mfws8sb_w1z00000gn/T/359584695576599326:/var/run/secrets/kubernetes.io/serviceaccount
Created container: authors_kube-system_kubevpn_a7d82
Wait container authors_kube-system_kubevpn_a7d82 to be running...
Container authors_kube-system_kubevpn_a7d82 is running on port 9080/tcp:32771 now
Created container: nginx_kube-system_kubevpn_a7d82
Wait container nginx_kube-system_kubevpn_a7d82 to be running...
Container nginx_kube-system_kubevpn_a7d82 is running now
/opt/microservices # ls
app
/opt/microservices # ps -ef
PID USER TIME COMMAND
1 root 0:00 ./app
10 root 0:00 nginx: master process nginx -g daemon off;
32 root 0:00 /bin/sh
44 101 0:00 nginx: worker process
45 101 0:00 nginx: worker process
46 101 0:00 nginx: worker process
47 101 0:00 nginx: worker process
49 root 0:00 ps -ef
/opt/microservices # apk add curl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (7.79.1-r5)
(4/4) Installing curl (7.79.1-r5)
Executing busybox-1.33.1-r3.trigger
OK: 8 MiB in 19 packages
/opt/microservices # curl localhost:9080
404 page not found
/opt/microservices # curl localhost:9080/health
{"status":"Authors is healthy"}/opt/microservices # exit
prepare to exit, cleaning up
update ref count successfully
clean up successful
You can see that it starts two containers with Docker, corresponding to the two containers of the pod, and they share the same network, so you can use localhost:port to access the other container. Moreover, the environment, volumes, and network are all the same as in the remote Kubernetes pod; it is truly consistent with the Kubernetes runtime. This makes developing on a local PC a reality.
➜ ~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de9e2f8ab57d nginx:latest "/docker-entrypoint.…" 5 seconds ago Up 5 seconds nginx_kube-system_kubevpn_e21d8
28aa30e8929e naison/authors:latest "./app" 6 seconds ago Up 5 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:9080->9080/tcp authors_kube-system_kubevpn_e21d8
➜ ~
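Because the local containers are started with the pod's environment variables and volumes (note the service-account mounts in the kubevpn dev logs above), code that follows the usual in-cluster conventions keeps working on your PC. A minimal Go sketch, assuming it is run inside the dev container; the paths and variables below are the standard in-cluster ones, not anything KubeVPN-specific:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Standard variables that Kubernetes injects into every pod; the dev container
	// is started with the pod's environment, so they should be visible here too.
	fmt.Println("API server:", os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT"))

	// The service-account volume shown in the `kubevpn dev` logs above is mounted
	// at the usual in-cluster path, so clients that rely on it keep working.
	token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		fmt.Println("no service account token:", err)
		return
	}
	fmt.Println("service account token length:", len(token))
}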
Multiple Protocols
- TCP
- UDP
- ICMP
- GRPC
- WebSocket
- HTTP
- ...
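All of these protocols go through the same tunnel, so plain TCP and UDP sockets opened by local code reach cluster addresses too, not only HTTP. A minimal Go sketch; the UDP service name below is a made-up placeholder, while the TCP target reuses the bookinfo productpage service from the demo above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Plain TCP to a cluster service, e.g. the bookinfo productpage service.
	tcpConn, err := net.DialTimeout("tcp", "productpage.default.svc.cluster.local:9080", 3*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("tcp connected to", tcpConn.RemoteAddr())
	tcpConn.Close()

	// UDP sockets go through the same tunnel; "some-udp-service" is a placeholder,
	// replace it with a real UDP service in your cluster.
	udpConn, err := net.Dial("udp", "some-udp-service.default.svc.cluster.local:5353")
	if err != nil {
		panic(err)
	}
	fmt.Println("udp socket aimed at", udpConn.RemoteAddr())
	udpConn.Close()
}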
On the Windows platform, you need to install PowerShell in advance.
FAQ
- What should I do if the dependent image cannot be pulled, or the internal environment cannot access docker.io?
- Answer: On a network that can access docker.io, transfer the image shown by the command kubevpn version to your own private image registry, then add the --image option to specify that image when running the command.
Example:
➜ ~ kubevpn version
KubeVPN: CLI
Version: v1.1.14
Image: docker.io/naison/kubevpn:v1.1.14
Branch: master
Git commit: 87dac42dad3d8f472a9dcdfc2c6cd801551f23d1
Built time: 2023-01-15 04:19:45
Built OS/Arch: linux/amd64
Built Go version: go1.18.10
➜ ~
The image is docker.io/naison/kubevpn:v1.1.14. Transfer this image to your private Docker registry:
docker pull docker.io/naison/kubevpn:v1.1.14
docker tag docker.io/naison/kubevpn:v1.1.14 [docker registry]/[namespace]/[repo]:[tag]
docker push [docker registry]/[namespace]/[repo]:[tag]
Then you can use this image, as follows:
➜ ~ kubevpn connect --image [docker registry]/[namespace]/[repo]:[tag]
got cidr from cache
traffic manager not exist, try to create it...
pod [kubevpn-traffic-manager] status is Running
...