kwok

package
v0.0.0-...-7df4f84
Published: Dec 19, 2024 License: Apache-2.0 Imports: 30 Imported by: 0

README

With the kwok provider you can:

  • Run CA (cluster-autoscaler) in your terminal and connect it to a cluster (like a kubebuilder controller). You don't have to run CA in an actual cluster to test things out.
  • Perform a "dry-run" to test the autoscaling behavior of CA without creating actual VMs in your cloud provider.
  • Run CA in your local kind cluster with nodes and workloads from a remote cluster (you can also use nodes from the same cluster).
  • Test the behavior of CA against a large number of fake nodes (of your choice) with metrics.
  • and more.

What is the kwok provider? Why use it?

Check the doc around motivation.

How to use kwok provider

In a Kubernetes cluster:

1. Install kwok controller

Follow the official docs to install kwok in a cluster.

2. Configure cluster-autoscaler to use kwok cloud provider

Using the helm chart:

helm repo add cluster-autoscaler https://kubernetes.github.io/autoscaler
helm upgrade --install <release-name> charts/cluster-autoscaler  \
--set "serviceMonitor.enabled"=true --set "serviceMonitor.namespace"=default \
--set "cloudprovider"=kwok --set "image.tag"="<image-tag>" \
--set "image.repository"="<image-repo>" \
--set "autoDiscovery.clusterName"="kind-kind" \
--set "serviceMonitor.selector.release"="prom"

Replace <release-name>, <image-tag>, and <image-repo> with the release name, image tag, and image repo you want (check the releases page for the official image repos and tags).

Note that the kwok provider doesn't use autoDiscovery.clusterName; any fake value works.

Replace "release"="prom" with the label selector for ServiceMonitor in your grafana/prometheus installation.

For example, if you are using the prometheus operator, you can find the ServiceMonitor label selector using:

kubectl get prometheus -ojsonpath='{.items[*].spec.serviceMonitorSelector}' | jq # using jq is optional

The helm upgrade ... command above installs cluster-autoscaler with the kwok cloud provider settings. By default, the helm chart installs a default kwok provider configuration (kwok-provider-config ConfigMap) and sample template nodes (kwok-provider-templates ConfigMap) to get you started. Replace the contents of these ConfigMaps according to your needs.

If you already have cluster-autoscaler running and don't want to use helm, you can make the following changes to get the kwok provider working:

  1. Create kwok-provider-config ConfigMap for kwok provider config
  2. Create kwok-provider-templates ConfigMap for node templates
  3. Set POD_NAMESPACE env variable in the CA Deployment (if it is not there already)
  4. Set --cloud-provider=kwok in the CA Deployment
  5. That's all.

For 1 and 2, you can refer to the helm chart for the ConfigMaps. You can render them from the helm chart using:

helm template charts/cluster-autoscaler/  --set "cloudProvider"="kwok" -s templates/configmap.yaml --namespace=default

Replace --namespace with the namespace where your CA pod is running.

If you want to temporarily revert to your previous cloud provider, just change --cloud-provider=kwok back to your previous provider. No other provider uses the kwok-provider-config and kwok-provider-templates ConfigMaps, so you can keep them in the cluster (or delete them if you want to revert completely). POD_NAMESPACE is used only by the kwok provider (at the time of writing).

3. Configure kwok cloud provider

Decide if you want to use static template nodes or dynamic template nodes (check the FAQ to understand the difference).

If you want to use static template nodes,

The kwok-provider-config ConfigMap in the helm chart is set to use static template nodes by default (readNodesFrom is set to configmap). The CA helm chart also installs a kwok-provider-templates ConfigMap with sample node yamls by default. If you want to use your own node yamls:

# delete the existing configmap
kubectl delete configmap kwok-provider-templates
# create a new configmap with your node yamls
kubectl create configmap kwok-provider-templates --from-file=templates=template-nodes.yaml

Replace template-nodes.yaml with the path to your template nodes file.

If you are using your own template nodes in the kwok-provider-templates ConfigMap, make sure you have set the correct value for nodegroups.fromNodeLabelKey/nodegroups.fromNodeAnnotation. Otherwise CA will not scale up nodes (and it won't throw an error either). A minimal template-node sketch follows.
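
For illustration, here is a minimal sketch of what a template node can carry, written with the k8s.io/api Go types the provider loads node yamls into (the name, label values, sizes, and resources below are hypothetical; your ConfigMap holds the equivalent node yaml):

import (
	apiv1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// templateNode sketches a minimal template node. The instance-type label
// matches a nodegroups.fromNodeLabelKey of "node.kubernetes.io/instance-type",
// so the kwok provider can group nodes like this one into a nodegroup.
// The min/max annotations are the ones listed under Constants below.
// All concrete values here are hypothetical.
var templateNode = &apiv1.Node{
	ObjectMeta: metav1.ObjectMeta{
		Name: "template-node-m5-xlarge", // hypothetical
		Labels: map[string]string{
			"node.kubernetes.io/instance-type": "m5.xlarge",
		},
		Annotations: map[string]string{
			"cluster-autoscaler.kwok.nodegroup/min-count": "1",
			"cluster-autoscaler.kwok.nodegroup/max-count": "10",
		},
	},
	Status: apiv1.NodeStatus{
		Capacity: apiv1.ResourceList{
			apiv1.ResourceCPU:    resource.MustParse("4"),
			apiv1.ResourceMemory: resource.MustParse("16Gi"),
		},
	},
}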

If you want to use dynamic template nodes,

Set readNodesFrom in kwok-provider-config ConfigMap to cluster. This tells the kwok provider to use live nodes from the cluster as template nodes.

If you are using live nodes from the cluster as template nodes, make sure you have set the correct value for nodegroups.fromNodeLabelKey/nodegroups.fromNodeAnnotation, and that the label or annotation is actually present on those nodes. Otherwise CA will not scale up nodes (and it won't throw an error either).

For local development

  1. Point your kubeconfig to the cluster where you want to test your changes. Using kubectx:

kubectx <cluster-name>

Using kubectl:

kubectl config get-contexts
kubectl config use-context <context-name>

  2. Create the kwok-provider-config and kwok-provider-templates ConfigMaps in the cluster where you want to test your changes.

This is important because even if you run CA locally with the kwok provider, the kwok provider still searches the cluster it connects to for the kwok-provider-config and kwok-provider-templates ConfigMaps (by default, kwok-provider-config has readNodesFrom set to configmap).

You can create both the ConfigMap resources from the helm chart like this:

helm template charts/cluster-autoscaler/  --set "cloudProvider"="kwok" -s templates/configmap.yaml --namespace=default | kubectl apply -f -

--namespace has to match the POD_NAMESPACE env variable you set below.

  3. Run CA locally:
# replace `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT`
# with your kubernetes api server url
# you can find it with `kubectl cluster-info`
# example:
# $ kubectl cluster-info
# Kubernetes control plane is running at https://127.0.0.1:36357
# ...
export KUBERNETES_SERVICE_HOST=https://127.0.0.1
export KUBERNETES_SERVICE_PORT=36357
# POD_NAMESPACE is the namespace where you want to look for
# your `kwok-provider-config` and `kwok-provider-templates` ConfigMap
export POD_NAMESPACE=default
# KWOK_PROVIDER_MODE tells the kwok provider that we are running the CA locally
export KWOK_PROVIDER_MODE=local
# `2>&1` merges stderr into stdout so both can be piped to VS Code (remove `| code -` if you don't use VS Code)
go run main.go --kubeconfig=/home/suraj/.kube/config --cloud-provider=kwok --namespace=default --logtostderr=true --stderrthreshold=info --v=5 2>&1 | code -

Tweaking the kwok provider

You can change the behavior of the kwok provider by tweaking the kwok provider configuration in kwok-provider-config ConfigMap:

# only v1alpha1 is supported right now
apiVersion: v1alpha1
# possible values: [cluster,configmap]
# cluster: use nodes from the cluster as template nodes
# configmap: use node yamls from a configmap as template nodes
readNodesFrom: configmap
# nodegroups specifies nodegroup level config
nodegroups:
  # fromNodeLabelKey's value is used to group nodes together into nodegroups
  # For example, say you want to group nodes with the same value for `node.kubernetes.io/instance-type`
  # label as a nodegroup. Here are the nodes you have:
  # node1: m5.xlarge
  # node2: c5.xlarge
  # node3: m5.xlarge
  # Your nodegroups will look like this:
  # nodegroup1: [node1,node3]
  # nodegroup2: [node2]
  fromNodeLabelKey: "node.kubernetes.io/instance-type"

  # fromNodeAnnotation's value is used to group nodes together into nodegroups
  # (basically same as `fromNodeLabelKey` except based on annotation)
  # you can specify either of `fromNodeLabelKey` OR `fromNodeAnnotation`
  # (both are not allowed)
  fromNodeAnnotation: "eks.amazonaws.com/nodegroup"
# nodes specifies node level config
nodes:
  # skipTaint is used to enable/disable adding kwok provider taint on the template nodes
  # default is false so that even if you run the provider in a production cluster
  # you don't have to worry about the production workload
  # getting accidentally scheduled on the fake nodes
  skipTaint: true # default: false
  # gpuConfig specifies the GPU config for the node
  gpuConfig:
    # gpuLabelKey tells the kwok provider which node label should be considered the GPU label
    gpuLabelKey: "k8s.amazonaws.com/accelerator"
    # availableGPUTypes specifies the available GPU types
    availableGPUTypes:
      "nvidia-tesla-k80": {}
      "nvidia-tesla-p100": {}
# configmap specifies the ConfigMap name and key which store the kwok provider templates in the cluster
# Only applicable when `readNodesFrom: configmap`
configmap:
  name: kwok-provider-templates
  key: kwok-config # default: config
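
The config above maps one-to-one onto the KwokProviderConfig type documented in the Types section below. If you want to sanity-check a config before applying it, a small sketch like this can catch structural mistakes (assuming sigs.k8s.io/yaml, which unmarshals via the json tags declared on the struct; the file name is hypothetical):

package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml"

	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/kwok"
)

func main() {
	// kwok-provider-config.yaml is a hypothetical local copy of the
	// ConfigMap's config data.
	raw, err := os.ReadFile("kwok-provider-config.yaml")
	if err != nil {
		panic(err)
	}
	var cfg kwok.KwokProviderConfig
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.APIVersion, cfg.ReadNodesFrom)
}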

By default, the kwok provider looks for kwok-provider-config ConfigMap. If you want to use a different ConfigMap name, set the env variable KWOK_PROVIDER_CONFIGMAP (e.g., KWOK_PROVIDER_CONFIGMAP=kpconfig). You can set this env variable in the helm chart using kwokConfigMapName OR you can set it directly in the cluster-autoscaler Deployment with kubectl edit deployment ....

FAQ

1. What is the difference between kwok and the kwok provider?

kwok is an open source project under sig-scheduling.

KWOK is a toolkit that enables setting up a cluster of thousands of Nodes in seconds. Under the scene, all Nodes are simulated to behave like real ones, so the overall approach employs a pretty low resource footprint that you can easily play around with on your laptop.

https://kwok.sigs.k8s.io/

kwok provider refers to the cloud provider extension/plugin in cluster-autoscaler which uses kwok to create fake nodes.

2. What does a template node exactly mean?

A template node is the base node yaml the kwok provider uses to create a new node in the cluster.

3. What is the difference between static template nodes and dynamic template nodes?

Static template nodes are created from the node yaml the user specifies in the kwok-provider-templates ConfigMap, while dynamic template nodes are based on the node yaml of nodes currently running in the cluster.

4. Can I use both static and dynamic template nodes together?

As of now, no, you can't (but it's an interesting idea). If you have a specific use case, please create an issue and we can talk more there!

5. What is the difference between the kwok provider config and template nodes config?

The kwok provider config changes the behavior of the kwok provider itself (not the underlying kwok toolkit), while the template nodes config is the ConfigMap you can use to specify static node templates.

Gotchas

  1. The kwok provider by default taints the template nodes with a kwok-provider: true taint so that production workloads don't get scheduled on these nodes accidentally. You have to tolerate the taint to schedule your workload on the nodes created by the kwok provider (a toleration sketch follows this list). You can turn this off by setting nodes.skipTaint: true in the kwok provider config.
  2. Make sure the label/annotation for fromNodeLabelKey/fromNodeAnnotation in the kwok provider config is actually present on the template nodes. If it isn't present on the template nodes, the kwok provider will not be able to create new nodes.
  3. Note that the kwok provider makes the following changes to all the template nodes (Go pseudocode):

node.Status.NodeInfo.KubeletVersion = "fake"
node.Annotations["kwok.x-k8s.io/node"] = "fake"
node.Annotations["cluster-autoscaler.kwok.nodegroup/name"] = "<name-of-the-nodegroup>"
node.Spec.ProviderID = "kwok:<name-of-the-node>"
node.Spec.Taints = append(node.Spec.Taints, apiv1.Taint{
	Key:    "kwok-provider",
	Value:  "true",
	Effect: apiv1.TaintEffectNoSchedule,
})
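
For gotcha 1, the toleration a workload needs looks roughly like this (a sketch using the k8s.io/api Go types; key, value, and effect match the taint in the pseudocode above):

import apiv1 "k8s.io/api/core/v1"

// toleration lets a pod schedule onto nodes carrying the
// kwok-provider=true:NoSchedule taint added by the kwok provider.
var toleration = apiv1.Toleration{
	Key:      "kwok-provider",
	Operator: apiv1.TolerationOpEqual,
	Value:    "true",
	Effect:   apiv1.TaintEffectNoSchedule,
}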

I have a problem/suggestion/question/idea/feature request. What should I do?

Awesome! Please create an issue. Don't think too much about whether it makes sense; we can always close it if it doesn't.

What is not supported?

  • Creating kwok nodegroups based on the kubernetes.io/hostname node label. Why? Imagine you have a Deployment (replicas: 2) with pod anti-affinity on the kubernetes.io/hostname label (like the sketch at the end of this section), and only 2 unique values for the kubernetes.io/hostname node label in your cluster:

    • hostname1
    • hostname2

    If you increase the number of replicas in the Deployment to 3, CA creates a fake node internally and runs simulations on it to decide if it should scale up. This fake node has kubernetes.io/hostname set to the name of the fake node, which looks like template-node-xxxx-xxxx (the second xxxx is random). Since the value of kubernetes.io/hostname on the fake node is neither hostname1 nor hostname2, CA thinks it can schedule the Pending pod on the fake node and hence keeps scaling up to infinity (or until it can't).
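
The pod anti-affinity referred to above would look roughly like this (a sketch; the app label is hypothetical):

import (
	apiv1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// antiAffinity forbids two pods with app=my-app from landing on nodes
// that share the same kubernetes.io/hostname value.
var antiAffinity = &apiv1.PodAntiAffinity{
	RequiredDuringSchedulingIgnoredDuringExecution: []apiv1.PodAffinityTerm{{
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "my-app"}, // hypothetical
		},
		TopologyKey: "kubernetes.io/hostname",
	}},
}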

Troubleshooting

  1. Pods are still stuck in Running even after CA has cleaned up all the kwok nodes
    • The kwok provider doesn't drain nodes when it deletes them; it just deletes the nodes. Pods running on these nodes should change from a Running state to a Pending state within a minute or two. If they don't, try scaling your workload down and up again. If the issue persists, please create an issue 🙏.

I want to contribute

Thank you ❤️

It is expected that you know how to build and run CA locally. If you don't, I recommend starting from the Makefile. Check the CA FAQ to know more about CA in general (including info around building CA and submitting a PR). CA is a big and complex project. If you have any questions or if you get stuck anywhere, reach out for help.

Get yourself familiar with the kwok project

Check https://kwok.sigs.k8s.io/

Try out the kwok provider

Go through the README.

Look for a good first issue

Check this filter for good first issues around kwok provider.

Reach out for help if you get stuck

You can get help in the following ways:

  • Mention @vadasambar in the issue/PR you are working on.
  • Start a slack thread in #sig-autoscaling mentioning @vadasambar (to join Kubernetes slack click here).
  • Add it to the weekly sig-autoscaling meeting agenda (happens on Mondays)

Documentation

Constants

const (
	// ProviderName is the cloud provider name for kwok
	ProviderName = "kwok"

	// NGNameAnnotation is the annotation the kwok provider uses to track the nodegroups
	NGNameAnnotation = "cluster-autoscaler.kwok.nodegroup/name"
	// NGMinSizeAnnotation is the annotation on template nodes which specifies the min size of the nodegroup
	NGMinSizeAnnotation = "cluster-autoscaler.kwok.nodegroup/min-count"
	// NGMaxSizeAnnotation is the annotation on template nodes which specifies the max size of the nodegroup
	NGMaxSizeAnnotation = "cluster-autoscaler.kwok.nodegroup/max-count"
	// NGDesiredSizeAnnotation is the annotation on template nodes which specifies the desired size of the nodegroup
	NGDesiredSizeAnnotation = "cluster-autoscaler.kwok.nodegroup/desired-count"

	// KwokManagedAnnotation is the default annotation
	// that kwok manages to decide if it should manage
	// a node it sees in the cluster
	KwokManagedAnnotation = "kwok.x-k8s.io/node"
)

Variables

This section is empty.

Functions

func LoadNodeTemplatesFromConfigMap

func LoadNodeTemplatesFromConfigMap(configMapName string,
	kubeClient kubernetes.Interface) ([]*apiv1.Node, error)

LoadNodeTemplatesFromConfigMap loads template nodes from a k8s configmap. Check https://github.com/vadafoss/node-templates for more info on the parsing logic.
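
A minimal usage sketch (the kubeconfig path is a placeholder; kwok-provider-templates is the default templates ConfigMap name used throughout the README):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"

	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/kwok"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Load the template nodes stored in the ConfigMap (see the README
	// above for how the provider locates it).
	nodes, err := kwok.LoadNodeTemplatesFromConfigMap("kwok-provider-templates", client)
	if err != nil {
		panic(err)
	}
	for _, node := range nodes {
		fmt.Println(node.Name)
	}
}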

Types

type ConfigMapConfig

type ConfigMapConfig struct {
	Name string `json:"name" yaml:"name"`
	Key  string `json:"key" yaml:"key"`
}

ConfigMapConfig allows setting the kwok provider configmap name

type GPUConfig

type GPUConfig struct {
	GPULabelKey       string              `json:"gpuLabelKey" yaml:"gpuLabelKey"`
	AvailableGPUTypes map[string]struct{} `json:"availableGPUTypes" yaml:"availableGPUTypes"`
}

GPUConfig defines GPU related config for the node

type GroupingConfig

type GroupingConfig struct {
	// contains filtered or unexported fields
}

GroupingConfig defines the different ways to group nodes into nodegroups

type KwokCloudProvider

type KwokCloudProvider struct {
	// contains filtered or unexported fields
}

KwokCloudProvider implements CloudProvider interface for kwok

func BuildKwokProvider

func BuildKwokProvider(ko *kwokOptions) (*KwokCloudProvider, error)

BuildKwokProvider builds the kwok provider

func (*KwokCloudProvider) Cleanup

func (kwok *KwokCloudProvider) Cleanup() error

Cleanup cleans up all resources before the cloud provider is removed

func (*KwokCloudProvider) GPULabel

func (kwok *KwokCloudProvider) GPULabel() string

GPULabel returns the label added to nodes with GPU resource.

func (*KwokCloudProvider) GetAvailableGPUTypes

func (kwok *KwokCloudProvider) GetAvailableGPUTypes() map[string]struct{}

GetAvailableGPUTypes returns all available GPU types the cloud provider supports

func (*KwokCloudProvider) GetAvailableMachineTypes

func (kwok *KwokCloudProvider) GetAvailableMachineTypes() ([]string, error)

GetAvailableMachineTypes get all machine types that can be requested from the cloud provider. Implementation optional.

func (*KwokCloudProvider) GetNodeGpuConfig

func (kwok *KwokCloudProvider) GetNodeGpuConfig(node *apiv1.Node) *cloudprovider.GpuConfig

GetNodeGpuConfig returns the label, type and resource name for the GPU added to node. If node doesn't have any GPUs, it returns nil.

func (*KwokCloudProvider) GetResourceLimiter

func (kwok *KwokCloudProvider) GetResourceLimiter() (*cloudprovider.ResourceLimiter, error)

GetResourceLimiter returns struct containing limits (max, min) for resources (cores, memory etc.).

func (*KwokCloudProvider) HasInstance

func (kwok *KwokCloudProvider) HasInstance(node *apiv1.Node) (bool, error)

HasInstance returns whether a given node has a corresponding instance in this cloud provider. Since there is no underlying cloud provider instance, it returns true.

func (*KwokCloudProvider) Name

func (kwok *KwokCloudProvider) Name() string

Name returns name of the cloud provider.

func (*KwokCloudProvider) NewNodeGroup

func (kwok *KwokCloudProvider) NewNodeGroup(machineType string, labels map[string]string, systemLabels map[string]string,
	taints []apiv1.Taint,
	extraResources map[string]resource.Quantity) (cloudprovider.NodeGroup, error)

NewNodeGroup builds a theoretical node group based on the node definition provided.

func (*KwokCloudProvider) NodeGroupForNode

func (kwok *KwokCloudProvider) NodeGroupForNode(node *apiv1.Node) (cloudprovider.NodeGroup, error)

NodeGroupForNode returns the node group for the given node.

func (*KwokCloudProvider) NodeGroups

func (kwok *KwokCloudProvider) NodeGroups() []cloudprovider.NodeGroup

NodeGroups returns all node groups configured for this cloud provider.

func (*KwokCloudProvider) Pricing

func (kwok *KwokCloudProvider) Pricing() (cloudprovider.PricingModel, errors.AutoscalerError)

Pricing returns pricing model for this cloud provider or error if not available.

func (*KwokCloudProvider) Refresh

func (kwok *KwokCloudProvider) Refresh() error

Refresh is called before every main loop and can be used to dynamically update cloud provider state. In particular the list of node groups returned by NodeGroups can change as a result of CloudProvider.Refresh().

type KwokConfig

type KwokConfig struct {
}

KwokConfig is the struct to define kwok specific config (needs to be implemented; currently empty)

type KwokProviderConfig

type KwokProviderConfig struct {
	APIVersion    string            `json:"apiVersion" yaml:"apiVersion"`
	ReadNodesFrom string            `json:"readNodesFrom" yaml:"readNodesFrom"`
	Nodegroups    *NodegroupsConfig `json:"nodegroups" yaml:"nodegroups"`
	Nodes         *NodeConfig       `json:"nodes" yaml:"nodes"`
	ConfigMap     *ConfigMapConfig  `json:"configmap" yaml:"configmap"`
	Kwok          *KwokConfig       `json:"kwok" yaml:"kwok"`
	// contains filtered or unexported fields
}

KwokProviderConfig is the struct to hold kwok provider config

func LoadConfigFile

func LoadConfigFile(kubeClient kubeclient.Interface) (*KwokProviderConfig, error)

LoadConfigFile loads the kwok provider config from a k8s configmap
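
An in-cluster usage sketch (rest.InClusterConfig assumes the code runs inside a pod; see the README for how POD_NAMESPACE and KWOK_PROVIDER_CONFIGMAP influence which ConfigMap is read):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"

	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/kwok"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Load the kwok provider config from the kwok-provider-config
	// ConfigMap (or the one named by KWOK_PROVIDER_CONFIGMAP).
	providerCfg, err := kwok.LoadConfigFile(client)
	if err != nil {
		panic(err)
	}
	fmt.Println(providerCfg.ReadNodesFrom)
}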

type NodeConfig

type NodeConfig struct {
	GPUConfig *GPUConfig `json:"gpuConfig" yaml:"gpuConfig"`
	SkipTaint bool       `json:"skipTaint" yaml:"skipTaint"`
}

NodeConfig defines config options for the nodes

type NodeGroup

type NodeGroup struct {
	// contains filtered or unexported fields
}

NodeGroup implements NodeGroup interface.

func (*NodeGroup) AtomicIncreaseSize

func (nodeGroup *NodeGroup) AtomicIncreaseSize(delta int) error

AtomicIncreaseSize is not implemented.

func (*NodeGroup) Autoprovisioned

func (nodeGroup *NodeGroup) Autoprovisioned() bool

Autoprovisioned returns true if the node group is autoprovisioned.

func (*NodeGroup) Create

func (nodeGroup *NodeGroup) Create() (cloudprovider.NodeGroup, error)

Create creates the node group on the cloud provider side. Left unimplemented because Create is not used anywhere in the core autoscaler as of writing this

func (*NodeGroup) Debug

func (nodeGroup *NodeGroup) Debug() string

Debug returns a debug string for the nodegroup.

func (*NodeGroup) DecreaseTargetSize

func (nodeGroup *NodeGroup) DecreaseTargetSize(delta int) error

DecreaseTargetSize decreases the target size of the node group. This function doesn't permit deleting any existing node; it can only reduce the request for new nodes that have not yet been fulfilled. Delta should be negative.

func (*NodeGroup) Delete

func (nodeGroup *NodeGroup) Delete() error

Delete deletes the node group on the cloud provider side. Left unimplemented because Delete is not used anywhere in the core autoscaler as of writing this

func (*NodeGroup) DeleteNodes

func (nodeGroup *NodeGroup) DeleteNodes(nodes []*apiv1.Node) error

DeleteNodes deletes the specified nodes from the node group.

func (*NodeGroup) Exist

func (nodeGroup *NodeGroup) Exist() bool

Exist checks if the node group really exists on the cloud provider side. Since a kwok nodegroup is not backed by anything on the cloud provider side, we can safely return `true` here.

func (*NodeGroup) ForceDeleteNodes

func (nodeGroup *NodeGroup) ForceDeleteNodes(nodes []*apiv1.Node) error

ForceDeleteNodes deletes nodes from the group regardless of constraints.

func (*NodeGroup) GetOptions

func (nodeGroup *NodeGroup) GetOptions(defaults config.NodeGroupAutoscalingOptions) (*config.NodeGroupAutoscalingOptions, error)

GetOptions returns NodeGroupAutoscalingOptions that should be used for this particular NodeGroup. Returning nil will result in using default options.

func (*NodeGroup) Id

func (nodeGroup *NodeGroup) Id() string

Id returns nodegroup name.

func (*NodeGroup) IncreaseSize

func (nodeGroup *NodeGroup) IncreaseSize(delta int) error

IncreaseSize increases NodeGroup size.

func (*NodeGroup) MaxSize

func (nodeGroup *NodeGroup) MaxSize() int

MaxSize returns maximum size of the node group.

func (*NodeGroup) MinSize

func (nodeGroup *NodeGroup) MinSize() int

MinSize returns minimum size of the node group.

func (*NodeGroup) Nodes

func (nodeGroup *NodeGroup) Nodes() ([]cloudprovider.Instance, error)

Nodes returns a list of all nodes that belong to this node group.

func (*NodeGroup) TargetSize

func (nodeGroup *NodeGroup) TargetSize() (int, error)

TargetSize returns the current TARGET size of the node group. It is possible that the number is different from the number of nodes registered in Kubernetes.

func (*NodeGroup) TemplateNodeInfo

func (nodeGroup *NodeGroup) TemplateNodeInfo() (*framework.NodeInfo, error)

TemplateNodeInfo returns a node template for this node group.

type NodegroupsConfig

type NodegroupsConfig struct {
	FromNodeLabelKey        string `json:"fromNodeLabelKey" yaml:"fromNodeLabelKey"`
	FromNodeLabelAnnotation string `json:"fromNodeLabelAnnotation" yaml:"fromNodeLabelAnnotation"`
}

NodegroupsConfig defines options for creating nodegroups
