Basic Example
This example is a minimal, single-file Kubernetes custom resource operator built with the SDK. It doesn't handle synchronization or missed events, and it is extremely noisy because it doesn't filter updates.
To Run
Using the run script
- Make sure you have k3d and Go 1.18+ installed
- Run the following to create a local k3d cluster and start the operator:
$ ./run.sh
Manually
Start a local Kubernetes cluster, or use a remote cluster on which you have permission to create and monitor CRDs.
Set your kube context to the appropriate cluster, then run the operator:
$ go run basic.go --kubecfg="path_to_your_kube_config"
You may see one error message each about failing to watch and to list the custom resource.
This is due to a slight delay between the custom resource definition being created and the control plane recognizing it.
It only happens on the first run, when the resource definition doesn't yet exist in your cluster.
Usage
The operator will monitor for changes to any BasicCustomResource in your cluster and log them. To demonstrate, run:
$ kubectl create -f example.yaml
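The example.yaml manifest defines a single instance of the custom resource. Its exact fields depend on the schema defined in basic.go; a minimal sketch, assuming the example.grafana.com group from the CRD name used later in this README and a hypothetical v1 version and spec field, might look like:

```yaml
# Hypothetical sketch of example.yaml; the apiVersion and spec fields are
# assumptions -- the real schema is defined in basic.go. The name matches
# the resource deleted later in this README.
apiVersion: example.grafana.com/v1
kind: BasicCustomResource
metadata:
  name: test-resource
spec:
  message: "hello, operator"
```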
You should see a log line for the added resource. The "noisy" part comes next: you'll see repeated log lines for near-constant metadata updates made by Kubernetes itself. In a practical setting, these can be filtered out; for examples of such filtering, see the opinionated or /example/operator/boilerplate examples.
Finally, you can delete the custom resource and see the delete event with
$ kubectl delete BasicCustomResource test-resource
Managing Custom Resources
You can see the custom resource definition created by the operator (along with all custom resource definitions in your cluster) with
$ kubectl get CustomResourceDefinitions
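The operator registers this definition itself on startup. As a rough sketch (the version, scope, and schema are assumptions inferred from the CRD name used below; the actual definition lives in basic.go), the CRD it creates is shaped like:

```yaml
# Sketch of the CRD the operator creates. The name, group, and kind come
# from this README; the version, scope, and open schema are assumptions.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: basiccustomresources.example.grafana.com
spec:
  group: example.grafana.com
  names:
    kind: BasicCustomResource
    plural: basiccustomresources
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```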
If you want to remove the custom resource definition created by the operator, you can do so with
$ kubectl delete CustomResourceDefinition basiccustomresources.example.grafana.com
Note that if there are any BasicCustomResources in your cluster, they will be deleted as well.
If the operator is still running, you will begin to see errors in the console output, as its list/watch requests to Kubernetes will now fail.