# kube-upgrade

Kubernetes controller and daemon for managing cluster updates.
## Table of Contents

- Introduction
- Usage
## Prerequisites

- A Kubernetes cluster installed with kubeadm
- Nodes running Fedora CoreOS with upgraded already installed (see Links)
## Installation

To install kube-upgrade, follow these steps:

1. Create a configuration file for upgraded on each node under `/etc/kube-upgraded/config.yaml`. An example config can be found here.
2. Deploy the upgrade-controller:
   ```
   kubectl apply -f https://raw.githubusercontent.com/heathcliff26/kube-upgrade/main/examples/upgrade-controller/upgrade-controller.yaml
   ```
3. Create the upgrade plan:
   ```
   kubectl apply -f https://raw.githubusercontent.com/heathcliff26/kube-upgrade/main/examples/upgrade-controller/upgrade-cr.yaml
   ```
## Manual installation of upgraded

To install upgraded manually on your nodes:

1. Download the binary for your architecture from the latest release.
2. Install the binary into a path of your choice.
3. Create a systemd service file for upgraded in `/etc/systemd/system/upgraded.service`. An example service can be found here.
4. Create a configuration file for upgraded under `/etc/kube-upgraded/config.yaml`. An example config can be found here.
5. Enable the service:
   ```
   sudo systemctl daemon-reload
   sudo systemctl enable --now upgraded.service
   ```
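For orientation, a minimal unit file could look like the sketch below. The binary path and unit options here are assumptions, not the project's actual example, so prefer the example service linked above:

```ini
# /etc/systemd/system/upgraded.service
# Hypothetical sketch: ExecStart path and options are assumptions.
[Unit]
Description=kube-upgrade daemon (upgraded)
After=network-online.target
Wants=network-online.target

[Service]
# Assumed install path; use wherever you placed the binary.
ExecStart=/usr/local/bin/upgraded
Restart=on-failure

[Install]
WantedBy=multi-user.target
```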
## Container Images

### Image location

| Container Registry | Image |
| --- | --- |
| GitHub Container | `ghcr.io/heathcliff26/kube-upgrade-controller` |
| Docker Hub | `docker.io/heathcliff26/kube-upgrade-controller` |
There are different flavors of the image:

| Tag(s) | Description |
| --- | --- |
| `latest` | Last released version of the image |
| `rolling` | Rolling update of the image, always built from the main branch |
| `vX.Y.Z` | Released version of the image |
## Architecture

kube-upgrade consists of two components, the upgrade-controller and upgraded. They work together to ensure automatic Kubernetes upgrades across your cluster.

It depends on a fleetlock server to ensure that nodes are not updated simultaneously, as well as to drain nodes beforehand.

**Important Notice:** When creating a plan, always ensure that the control-plane nodes are upgraded first.
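To illustrate the ordering requirement, a plan could look roughly like the sketch below. The API group, version, and field names here are hypothetical; consult the example upgrade-cr.yaml for the real schema:

```yaml
# Hypothetical sketch of a KubeUpgradePlan; field names are assumptions,
# the real schema is shown in the example upgrade-cr.yaml.
apiVersion: kubeupgrade.heathcliff.eu/v1alpha1   # assumed group/version
kind: KubeUpgradePlan
metadata:
  name: cluster-upgrade
spec:
  kubernetesVersion: v1.31.0        # assumed field
  groups:
    control-plane:                  # upgraded first
      labels:
        node-role.kubernetes.io/control-plane: ""
    workers:
      dependsOn:
        - control-plane             # workers wait for the control plane
```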
### upgrade-controller

The controller runs in the cluster and coordinates the upgrades by reading the `KubeUpgradePlan` and annotating nodes with the correct settings.
It does this per group, in the order defined in the plan.
### upgraded

The upgraded daemon runs on each node and upgrades the node according to the annotations provided by the upgrade-controller.
Even without Kubernetes version upgrades, it continuously checks for new Fedora CoreOS versions in the same stream and updates to them.
When it detects a Kubernetes update, it executes the following steps:

1. Reserve a slot with the fleetlock server
2. Rebase the node to the new version using rpm-ostree
3. Run `kubeadm upgrade node`, or `kubeadm upgrade apply <version>` if it is the first node
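The sequence above can be sketched as a shell dry run. The fleetlock URL, group, node ID, image reference, and version below are placeholders, and the reservation request follows the public FleetLock protocol rather than upgraded's actual code:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the steps upgraded performs; run() only prints the
# commands instead of executing them. URL, group, image ref and version
# are placeholder assumptions.
set -euo pipefail

run() { echo "+ $*"; }

FLEETLOCK_URL="http://fleetlock.example.com"   # placeholder fleetlock server
NODE_ID="node-1"                               # placeholder node identifier

# 1. Reserve a reboot slot with the fleetlock server (FleetLock protocol).
run curl -f -H "fleet-lock-protocol: true" \
    -d "{\"client_params\":{\"group\":\"default\",\"id\":\"${NODE_ID}\"}}" \
    "${FLEETLOCK_URL}/v1/pre-reboot"

# 2. Rebase the node to the OS image carrying the new kubernetes version.
run rpm-ostree rebase "example.com/fcos-k8s:v1.31.0"   # placeholder image ref

# 3. Finish the kubernetes upgrade on this node.
run kubeadm upgrade node   # or: kubeadm upgrade apply v1.31.0 on the first node
```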
## Possible problems when upgrading

As far as I have tested, upgrading between patch releases (e.g. 1.30.3 -> 1.30.4) works fine. However, when upgrading from 1.30 to 1.31, the Kubernetes static pods do not start while there is a version mismatch (1.30 pods, 1.31 kubelet). This causes the preflight checks to fail. The solution in my case was to ignore the preflight errors and upgrade to 1.31 anyway, which fixed the problem.

I think the reason is that, while it is not explicitly stated in the docs (see Links) and it is even hinted that it could be done the other way around, the expected upgrade sequence is the following:

1. Upgrade kubeadm to the newest version
2. Run `kubeadm upgrade (node|apply)`
3. Upgrade kubelet
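The expected sequence can be sketched as a dry run for a package-managed node; the package commands are purely illustrative (on Fedora CoreOS the binaries arrive via an rpm-ostree rebase instead), and the version is a placeholder:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the expected upgrade order; run() only prints the
# commands. The package-manager commands are illustrative assumptions.
set -euo pipefail

run() { echo "+ $*"; }

# 1. Upgrade kubeadm first, so the new version drives the upgrade.
run dnf install -y kubeadm-1.31.0   # illustrative package upgrade

# 2. Run the upgrade with the new kubeadm.
run kubeadm upgrade node            # or: kubeadm upgrade apply v1.31.0

# 3. Only afterwards upgrade and restart kubelet.
run dnf install -y kubelet-1.31.0   # illustrative package upgrade
run systemctl restart kubelet
```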
upgraded instead upgrades both kubeadm and kubelet at the same time (by rebasing). So in conclusion:

TL;DR: Upgrading between patch releases (e.g. 1.x.y -> 1.x.z) is fine; upgrading between minor versions (e.g. 1.x -> 1.y) should be done with proper planning, plenty of testing, caution, and quite possibly manually.
## Links