Documentation ¶
Index ¶
- Constants
- func InstanceConfigUpToDate(instanceConfig, poolConfig *InstanceConfig) bool
- type ASGNodePoolsBackend
- func (n *ASGNodePoolsBackend) Get(_ context.Context, nodePool *api.NodePool) (*NodePool, error)
- func (n *ASGNodePoolsBackend) MarkForDecommission(_ context.Context, nodePool *api.NodePool) error
- func (n *ASGNodePoolsBackend) Scale(_ context.Context, nodePool *api.NodePool, replicas int) error
- func (n *ASGNodePoolsBackend) Terminate(_ context.Context, _ *api.NodePool, node *Node, decrementDesired bool) error
- type CLCUpdateStrategy
- type DrainConfig
- type EC2NodePoolBackend
- func (n *EC2NodePoolBackend) DecommissionKarpenterNodes(ctx context.Context) error
- func (n *EC2NodePoolBackend) DecommissionNodePool(ctx context.Context, nodePool *api.NodePool) error
- func (n *EC2NodePoolBackend) Get(ctx context.Context, nodePool *api.NodePool) (*NodePool, error)
- func (n *EC2NodePoolBackend) MarkForDecommission(context.Context, *api.NodePool) error
- func (n *EC2NodePoolBackend) Scale(context.Context, *api.NodePool, int) error
- func (n *EC2NodePoolBackend) Terminate(ctx context.Context, pool *api.NodePool, node *Node, _ bool) error
- type InstanceConfig
- type KarpenterCRDNameResolver
- type KubernetesNodePoolManager
- func (m *KubernetesNodePoolManager) AbortNodeDecommissioning(ctx context.Context, node *Node) error
- func (m *KubernetesNodePoolManager) CordonNode(ctx context.Context, node *Node) error
- func (m *KubernetesNodePoolManager) DisableReplacementNodeProvisioning(ctx context.Context, node *Node) error
- func (m *KubernetesNodePoolManager) GetPool(ctx context.Context, nodePoolDesc *api.NodePool) (*NodePool, error)
- func (m *KubernetesNodePoolManager) MarkNodeForDecommission(ctx context.Context, node *Node) error
- func (m *KubernetesNodePoolManager) MarkPoolForDecommission(ctx context.Context, nodePool *api.NodePool) error
- func (m *KubernetesNodePoolManager) ScalePool(ctx context.Context, nodePool *api.NodePool, replicas int) error
- func (m *KubernetesNodePoolManager) TerminateNode(ctx context.Context, nodePool *api.NodePool, node *Node, decrementDesired bool) error
- type Node
- type NodePool
- type NodePoolManager
- type ProfileNodePoolProvisioner
- func (n *ProfileNodePoolProvisioner) Get(ctx context.Context, nodePool *api.NodePool) (*NodePool, error)
- func (n *ProfileNodePoolProvisioner) MarkForDecommission(ctx context.Context, nodePool *api.NodePool) error
- func (n *ProfileNodePoolProvisioner) Scale(ctx context.Context, nodePool *api.NodePool, replicas int) error
- func (n *ProfileNodePoolProvisioner) Terminate(ctx context.Context, nodePool *api.NodePool, node *Node, decrementDesired bool) error
- type ProviderNodePoolsBackend
- type RollingUpdateStrategy
- type UpdateStrategy
Constants ¶
const (
KarpenterEC2NodeClassResource = "ec2nodeclasses.karpenter.k8s.aws"
)
Variables ¶
This section is empty.
Functions ¶
func InstanceConfigUpToDate ¶
func InstanceConfigUpToDate(instanceConfig, poolConfig *InstanceConfig) bool
InstanceConfigUpToDate compares the current and desired InstanceConfig. It compares userData, imageID, and checks that the current config has all the desired tags. It does NOT check whether the current config has extra EC2 tags, since many tags are injected outside our control. This means removing a tag is not enough to make the configs unequal.
Types ¶
type ASGNodePoolsBackend ¶
type ASGNodePoolsBackend struct {
// contains filtered or unexported fields
}
ASGNodePoolsBackend defines a node pool backed by an AWS Auto Scaling Group.
func NewASGNodePoolsBackend ¶
func NewASGNodePoolsBackend(cluster *api.Cluster, sess *session.Session) *ASGNodePoolsBackend
NewASGNodePoolsBackend initializes a new ASGNodePoolsBackend for the given cluster and AWS session.
func (*ASGNodePoolsBackend) Get ¶
Get gets the ASG matching the node pool and all instances from the ASG. The node generation is set to 'current' for nodes with the latest launch configuration and 'outdated' for nodes with an older launch configuration.
func (*ASGNodePoolsBackend) MarkForDecommission ¶
MarkForDecommission suspends autoscaling of the node pool if it was enabled and makes sure that the pool can be scaled down to 0. The implementation assumes the kubernetes cluster-autoscaler is used so it just removes a tag.
func (*ASGNodePoolsBackend) Scale ¶
Scale sets the desired capacity of the ASGs to the number of replicas. If the node pool is backed by multiple ASGs the scale operation will try to balance the increment/decrement of nodes over all the ASGs.
func (*ASGNodePoolsBackend) Terminate ¶
func (n *ASGNodePoolsBackend) Terminate(_ context.Context, _ *api.NodePool, node *Node, decrementDesired bool) error
Terminate terminates an instance from the ASG and optionally decrements the DesiredCapacity. By default the desired capacity will not be decremented. In case the new desired capacity is less than the current min size of the ASG, it will also decrease the ASG minSize. This function will not return until the instance has been terminated in AWS.
type CLCUpdateStrategy ¶
type CLCUpdateStrategy struct {
// contains filtered or unexported fields
}
func NewCLCUpdateStrategy ¶
func NewCLCUpdateStrategy(logger *log.Entry, nodePoolManager NodePoolManager, pollingInterval time.Duration) *CLCUpdateStrategy
NewCLCUpdateStrategy initializes a new CLCUpdateStrategy.
func (*CLCUpdateStrategy) PrepareForRemoval ¶
type DrainConfig ¶
type DrainConfig struct {
	// Start forcefully evicting pods <ForceEvictionGracePeriod> after node drain started
	ForceEvictionGracePeriod time.Duration
	// Only force evict pods that are at least <MinPodLifetime> old
	MinPodLifetime time.Duration
	// Wait until all healthy pods in the same PDB are at least <MinHealthyPDBSiblingLifetime> old
	MinHealthyPDBSiblingLifetime time.Duration
	// Wait until all unhealthy pods in the same PDB are at least <MinUnhealthyPDBSiblingLifetime> old
	MinUnhealthyPDBSiblingLifetime time.Duration
	// Wait at least <ForceEvictionInterval> between force evictions to allow controllers to catch up
	ForceEvictionInterval time.Duration
	// Wait for <PollInterval> between force eviction attempts
	PollInterval time.Duration
}
DrainConfig contains the various settings for the smart node draining algorithm.
type EC2NodePoolBackend ¶
type EC2NodePoolBackend struct {
// contains filtered or unexported fields
}
EC2NodePoolBackend defines a node pool consisting of EC2 instances managed externally by some component e.g. Karpenter.
func NewEC2NodePoolBackend ¶
func NewEC2NodePoolBackend(cluster *api.Cluster, sess *session.Session, crdResolverInitializer func() (*KarpenterCRDNameResolver, error)) *EC2NodePoolBackend
NewEC2NodePoolBackend initializes a new EC2NodePoolBackend for the given cluster and AWS session.
func (*EC2NodePoolBackend) DecommissionKarpenterNodes ¶
func (n *EC2NodePoolBackend) DecommissionKarpenterNodes(ctx context.Context) error
func (*EC2NodePoolBackend) DecommissionNodePool ¶
func (*EC2NodePoolBackend) Get ¶
Get gets the EC2 instances matching the node pool by looking at the node pool tag. The node generation is set to 'current' for nodes with up-to-date userData, imageID, and tags, and 'outdated' for nodes with an outdated configuration.
func (*EC2NodePoolBackend) MarkForDecommission ¶
type InstanceConfig ¶
type KarpenterCRDNameResolver ¶
type KarpenterCRDNameResolver struct {
	NodePoolCRDName string
	// contains filtered or unexported fields
}
func NewKarpenterCRDResolver ¶
func NewKarpenterCRDResolver(ctx context.Context, k8sClients *kubernetes.ClientsCollection) (*KarpenterCRDNameResolver, error)
func (*KarpenterCRDNameResolver) NodePoolConfigGetter ¶
func (r *KarpenterCRDNameResolver) NodePoolConfigGetter(ctx context.Context, nodePool *api.NodePool) (*InstanceConfig, error)
func (*KarpenterCRDNameResolver) NodeTemplateCRDName ¶
func (r *KarpenterCRDNameResolver) NodeTemplateCRDName() string
type KubernetesNodePoolManager ¶
type KubernetesNodePoolManager struct {
// contains filtered or unexported fields
}
KubernetesNodePoolManager defines a node pool manager which uses the Kubernetes API along with a node pool provider backend to manage node pools.
func NewKubernetesNodePoolManager ¶
func NewKubernetesNodePoolManager(logger *log.Entry, kubeClient kubernetes.Interface, poolBackend ProviderNodePoolsBackend, drainConfig *DrainConfig, noScheduleTaint bool) *KubernetesNodePoolManager
NewKubernetesNodePoolManager initializes a new Kubernetes NodePool manager which can manage single node pools based on the nodes registered in the Kubernetes API and the related NodePoolBackend for those nodes e.g. ASGNodePool.
func (*KubernetesNodePoolManager) AbortNodeDecommissioning ¶
func (m *KubernetesNodePoolManager) AbortNodeDecommissioning(ctx context.Context, node *Node) error
func (*KubernetesNodePoolManager) CordonNode ¶
func (m *KubernetesNodePoolManager) CordonNode(ctx context.Context, node *Node) error
CordonNode marks a node unschedulable.
func (*KubernetesNodePoolManager) DisableReplacementNodeProvisioning ¶
func (m *KubernetesNodePoolManager) DisableReplacementNodeProvisioning(ctx context.Context, node *Node) error
func (*KubernetesNodePoolManager) GetPool ¶
func (m *KubernetesNodePoolManager) GetPool(ctx context.Context, nodePoolDesc *api.NodePool) (*NodePool, error)
GetPool gets the current node pool from the node pool backend and attaches the Kubernetes node object name and labels to the corresponding nodes.
func (*KubernetesNodePoolManager) MarkNodeForDecommission ¶
func (m *KubernetesNodePoolManager) MarkNodeForDecommission(ctx context.Context, node *Node) error
func (*KubernetesNodePoolManager) MarkPoolForDecommission ¶
func (*KubernetesNodePoolManager) ScalePool ¶
func (m *KubernetesNodePoolManager) ScalePool(ctx context.Context, nodePool *api.NodePool, replicas int) error
ScalePool scales a nodePool to the specified number of replicas. On scale down it attempts to do so gracefully by draining the nodes before terminating them.
func (*KubernetesNodePoolManager) TerminateNode ¶
func (m *KubernetesNodePoolManager) TerminateNode(ctx context.Context, nodePool *api.NodePool, node *Node, decrementDesired bool) error
TerminateNode terminates a node and optionally decrements the desired size of the node pool. Before a node is terminated it is drained to ensure that pods running on the node are gracefully terminated.
type Node ¶
type Node struct {
	Name            string
	Annotations     map[string]string
	Labels          map[string]string
	Taints          []v1.Taint
	Cordoned        bool
	ProviderID      string
	FailureDomain   string
	Generation      int
	VolumesAttached bool
	Ready           bool
	Master          bool
}
Node is an abstract node object which combines the node information from the node pool backend along with the corresponding Kubernetes node object.
type NodePool ¶
NodePool defines a node pool including all nodes.
func WaitForDesiredNodes ¶
func WaitForDesiredNodes(ctx context.Context, logger *log.Entry, n NodePoolManager, nodePoolDesc *api.NodePool) (*NodePool, error)
WaitForDesiredNodes waits for the current number of nodes to match the desired number. The final node pool will be returned.
func (*NodePool) ReadyNodes ¶
ReadyNodes returns a list of nodes which are marked as ready.
type NodePoolManager ¶
type NodePoolManager interface {
	GetPool(ctx context.Context, nodePool *api.NodePool) (*NodePool, error)
	MarkNodeForDecommission(ctx context.Context, node *Node) error
	AbortNodeDecommissioning(ctx context.Context, node *Node) error
	ScalePool(ctx context.Context, nodePool *api.NodePool, replicas int) error
	TerminateNode(ctx context.Context, nodePool *api.NodePool, node *Node, decrementDesired bool) error
	MarkPoolForDecommission(ctx context.Context, nodePool *api.NodePool) error
	DisableReplacementNodeProvisioning(ctx context.Context, node *Node) error
	CordonNode(ctx context.Context, node *Node) error
}
NodePoolManager defines an interface for managing node pools when performing update operations.
type ProfileNodePoolProvisioner ¶
type ProfileNodePoolProvisioner struct {
// contains filtered or unexported fields
}
ProfileNodePoolProvisioner is a NodePoolProvisioner which selects the backend provisioner based on the node pool profile. It has a default provisioner and a mapping of profile to provisioner for those profiles which can't use the default provisioner.
func NewProfileNodePoolsBackend ¶
func NewProfileNodePoolsBackend(defaultProvisioner ProviderNodePoolsBackend, profileMapping map[string]ProviderNodePoolsBackend) *ProfileNodePoolProvisioner
NewProfileNodePoolsBackend initializes a new ProfileNodePoolProvisioner.
func (*ProfileNodePoolProvisioner) Get ¶
func (n *ProfileNodePoolProvisioner) Get(ctx context.Context, nodePool *api.NodePool) (*NodePool, error)
Get the specified node pool using the right node pool provisioner for the profile.
func (*ProfileNodePoolProvisioner) MarkForDecommission ¶
func (n *ProfileNodePoolProvisioner) MarkForDecommission(ctx context.Context, nodePool *api.NodePool) error
MarkForDecommission marks a node pool for decommissioning using the right node pool provisioner for the profile.
type ProviderNodePoolsBackend ¶
type ProviderNodePoolsBackend interface {
	Get(ctx context.Context, nodePool *api.NodePool) (*NodePool, error)
	Scale(ctx context.Context, nodePool *api.NodePool, replicas int) error
	MarkForDecommission(ctx context.Context, nodePool *api.NodePool) error
	Terminate(ctx context.Context, nodePool *api.NodePool, node *Node, decrementDesired bool) error
}
ProviderNodePoolsBackend is an interface for describing a node pools provider backend e.g. AWS Auto Scaling Groups.
type RollingUpdateStrategy ¶
type RollingUpdateStrategy struct {
// contains filtered or unexported fields
}
RollingUpdateStrategy is a cluster node update strategy which will roll the nodes with a specified surge.
func NewRollingUpdateStrategy ¶
func NewRollingUpdateStrategy(logger *log.Entry, nodePoolManager NodePoolManager, surge int) *RollingUpdateStrategy
NewRollingUpdateStrategy initializes a new RollingUpdateStrategy.