Communicate with the Kubernetes cluster
OK, let's schedule a pod. Wait: which pod should we schedule, and where should we schedule it? It looks like we are missing many inputs. Where can we get them, and how?
Naturally, these inputs must live in the Kubernetes cluster, but how do we get them out of it? That is a good question, so let's see how to fetch them.
Let us set up a Kubernetes cluster for our experiments; there are many guides on the internet to help you set one up, for example:
Bootstrapping clusters with kubeadm
You may run into problems following this guide because pulling images from k8s.gcr.io can fail. Here is a script that pulls the images from a mirror and renames them. You need to change the version to match your kubeadm; this script targets Kubernetes v1.20.2. You can also change the mirror to whichever one is fastest in your area. You had better set up a CNI plugin for networking, but the tutorial works without one.
For this whole tutorial, a single control-plane node is enough.
List pods
We can now write our first scheduler, which simply fetches pod information. The full code is in scheduler.go. We need a kubeconfig to be authorized by the cluster; we just copy the default scheduler's one. The output should look like this:
# sudo cp /etc/kubernetes/scheduler.conf ~
# go run scheduler.go ~/scheduler.conf
add event for pod kube-system/etcd-test
add event for pod kube-system/kube-proxy-khkvc
add event for pod kube-system/coredns-74ff55c5b-8pvsn
add event for pod kube-system/coredns-74ff55c5b-f4sdn
add event for pod kube-system/kube-controller-manager-test
add event for pod kube-system/kube-scheduler-test
add event for pod kube-system/kube-apiserver-test
Great, we can now see all the pods in the cluster. The next step is to schedule them, but before that let's dig a little deeper. Here is a snippet from our first scheduler:
...
	informerFactory := NewInformerFactory(client, 0)
	informerFactory.Core().V1().Pods().Lister()
	// Start all informers.
	informerFactory.Start(ctx.Done())
	// Wait for all caches to sync before scheduling.
	informerFactory.WaitForCacheSync(ctx.Done())
	informerFactory.Core().V1().Pods().Informer().AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			AddFunc: addPod,
		},
	)
...
What is this informer factory, and what do a lister and an informer do?
Recall the components of Kubernetes: the control plane includes the kube-apiserver, kube-controller-manager, kube-scheduler, etcd, and so on. As you might guess, all communication with the cluster goes through the kube-apiserver. We ask the kube-apiserver for the things we want, or tell it what we want done (we will learn about this in a later chapter).
However, this does not yet answer the question. Centralized configuration avoids many inconsistency problems in a distributed system, but querying that configuration on every access is a heavy burden on the database (the API server and etcd). Keeping a local cache and watching only for incremental changes greatly reduces the cost. This is the so-called list-and-watch mechanism.
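To make the mechanism concrete, here is a toy, stdlib-only sketch; the `Cache` and `Event` types are invented for illustration and are not the client-go API. One expensive full list seeds a local cache, and incremental watch events keep it in sync, so later reads never hit the server:

```go
package main

import "fmt"

// Event is one incremental change streamed by a watch.
type Event struct {
	Type string // "ADDED", "MODIFIED", or "DELETED"
	Name string
	Spec string
}

// Cache is a toy local store kept in sync by list-and-watch.
type Cache struct {
	objects map[string]string
}

// List seeds the cache with a full snapshot; done once at startup.
func (c *Cache) List(snapshot map[string]string) {
	c.objects = make(map[string]string, len(snapshot))
	for name, spec := range snapshot {
		c.objects[name] = spec
	}
}

// Watch applies incremental events to the cache until the stream closes.
func (c *Cache) Watch(events <-chan Event) {
	for e := range events {
		switch e.Type {
		case "ADDED", "MODIFIED":
			c.objects[e.Name] = e.Spec
		case "DELETED":
			delete(c.objects, e.Name)
		}
	}
}

func main() {
	c := &Cache{}
	c.List(map[string]string{"pod-a": "v1"}) // the one expensive full query
	events := make(chan Event, 2)
	events <- Event{Type: "ADDED", Name: "pod-b", Spec: "v1"}
	events <- Event{Type: "DELETED", Name: "pod-a"}
	close(events)
	c.Watch(events)
	fmt.Println(len(c.objects), c.objects["pod-b"]) // prints: 1 v1
}
```

The real mechanism adds resource versions, reconnection, and re-listing on watch failure, but the shape is the same: one list, then a stream of deltas.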
The list-and-watch mechanism is already widely used by the kube-controller-manager and kube-scheduler, and it has been abstracted into a tool for developers to extend Kubernetes. Here we employ the client-go SDK to communicate with the cluster. The SDK hides the details of caching and authorization, and for our purpose it is enough to implement a scheduler.
Taking a look at the client-go SDK, we see that most of the code is generated (what?).
//k8s.io/client-go@v0.20.0/informers/core/v1/interface.go
...
// Code generated by informer-gen. DO NOT EDIT.
package v1
import (
internalinterfaces "k8s.io/client-go/informers/internalinterfaces"
)
// Interface provides access to all the informers in this group version.
type Interface interface {
	// ComponentStatuses returns a ComponentStatusInformer.
	ComponentStatuses() ComponentStatusInformer
	// ConfigMaps returns a ConfigMapInformer.
	ConfigMaps() ConfigMapInformer
	// Endpoints returns a EndpointsInformer.
	Endpoints() EndpointsInformer
	// Events returns a EventInformer.
	Events() EventInformer
	// LimitRanges returns a LimitRangeInformer.
	...
Custom Resource Definition (CRD)
Defining and updating APIs is a nuisance for the management plane. Kubernetes employs CRDs to solve this: once we define a resource, we can easily generate list-and-watch style APIs and SDKs for it. More details can be found in custom-resources and deep-dive-code-generation-customresources.
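For a taste of what defining a resource looks like, here is a minimal CRD manifest sketch based on the `CronTab` example from the Kubernetes documentation (the group `stable.example.com` and the fields under `spec` are just illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                replicas:
                  type: integer
```

Once this is applied, the API server serves list and watch endpoints for `CronTab` objects, and the code generators can produce typed clients, listers, and informers for them just like the ones we saw above.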