Documentation ¶
Index ¶
- Constants
- func ConfigurePredicateCheckerForLoop(unschedulablePods []*apiv1.Pod, schedulablePods []*apiv1.Pod, ...)
- func FilterOutExpendableAndSplit(unschedulableCandidates []*apiv1.Pod, expendablePodsPriorityCutoff int) ([]*apiv1.Pod, []*apiv1.Pod)
- func FilterOutExpendablePods(pods []*apiv1.Pod, expendablePodsPriorityCutoff int) []*apiv1.Pod
- func FilterOutSchedulable(unschedulableCandidates []*apiv1.Pod, nodes []*apiv1.Node, ...) []*apiv1.Pod
- func GetNodeInfosForGroups(nodes []*apiv1.Node, cloudProvider cloudprovider.CloudProvider, ...) (map[string]*schedulercache.NodeInfo, errors.AutoscalerError)
- func ScaleUp(context *context.AutoscalingContext, unschedulablePods []*apiv1.Pod, ...) (bool, errors.AutoscalerError)
- func UpdateClusterStateMetrics(csr *clusterstate.ClusterStateRegistry)
- func UpdateEmptyClusterStateMetrics()
- type Autoscaler
- type AutoscalerBuilder
- type AutoscalerBuilderImpl
- type AutoscalerOptions
- type DynamicAutoscaler
- type NodeDeleteStatus
- type ScaleDown
- func (sd *ScaleDown) CleanUp(timestamp time.Time)
- func (sd *ScaleDown) CleanUpUnneededNodes()
- func (sd *ScaleDown) GetCandidatesForScaleDown() []*apiv1.Node
- func (sd *ScaleDown) TryToScaleDown(allNodes []*apiv1.Node, pods []*apiv1.Pod, ...) (ScaleDownResult, errors.AutoscalerError)
- func (sd *ScaleDown) UpdateUnneededNodes(nodes []*apiv1.Node, nodesToCheck []*apiv1.Node, pods []*apiv1.Pod, ...) errors.AutoscalerError
- type ScaleDownResult
- type StaticAutoscaler
Constants ¶
const (
	// MaxKubernetesEmptyNodeDeletionTime is the maximum time needed by Kubernetes to delete an empty node.
	MaxKubernetesEmptyNodeDeletionTime = 3 * time.Minute
	// MaxCloudProviderNodeDeletionTime is the maximum time needed by cloud provider to delete a node.
	MaxCloudProviderNodeDeletionTime = 5 * time.Minute
	// MaxPodEvictionTime is the maximum time CA tries to evict a pod before giving up.
	MaxPodEvictionTime = 2 * time.Minute
	// EvictionRetryTime is the time after CA retries failed pod eviction.
	EvictionRetryTime = 10 * time.Second
	// PodEvictionHeadroom is the extra time we wait to catch situations when the pod is ignoring SIGTERM and
	// is killed with SIGKILL after MaxGracefulTerminationTime
	PodEvictionHeadroom = 30 * time.Second
	// UnremovableNodeRecheckTimeout is the timeout before we check again a node that couldn't be removed before
	UnremovableNodeRecheckTimeout = 5 * time.Minute
)
const (
	// Megabyte is 2^20 bytes.
	Megabyte float64 = 1024 * 1024
)
Used when getting node cores/memory.
const (
// ReschedulerTaintKey is the name of the taint created by rescheduler.
ReschedulerTaintKey = "CriticalAddonsOnly"
)
const (
// ScaleDownDisabledKey is the name of annotation marking node as not eligible for scale down.
ScaleDownDisabledKey = "cluster-autoscaler.kubernetes.io/scale-down-disabled"
)
Variables ¶
This section is empty.
Functions ¶
func ConfigurePredicateCheckerForLoop ¶
func ConfigurePredicateCheckerForLoop(unschedulablePods []*apiv1.Pod, schedulablePods []*apiv1.Pod, predicateChecker *simulator.PredicateChecker)
ConfigurePredicateCheckerForLoop can be run to update the predicateChecker configuration based on the current state of the cluster.
func FilterOutExpendableAndSplit ¶
func FilterOutExpendableAndSplit(unschedulableCandidates []*apiv1.Pod, expendablePodsPriorityCutoff int) ([]*apiv1.Pod, []*apiv1.Pod)
FilterOutExpendableAndSplit filters out expendable pods and splits the remaining pods into:
- pods waiting for preemption of lower priority pods
- all other pods.
func FilterOutExpendablePods ¶
func FilterOutExpendablePods(pods []*apiv1.Pod, expendablePodsPriorityCutoff int) []*apiv1.Pod
FilterOutExpendablePods filters out expendable pods.
func FilterOutSchedulable ¶
func FilterOutSchedulable(unschedulableCandidates []*apiv1.Pod, nodes []*apiv1.Node, allScheduled []*apiv1.Pod, podsWaitingForLowerPriorityPreemption []*apiv1.Pod, predicateChecker *simulator.PredicateChecker, expendablePodsPriorityCutoff int) []*apiv1.Pod
FilterOutSchedulable checks whether pods from <unschedulableCandidates> marked as unschedulable by the Scheduler actually can't be scheduled on any node and filters out the ones that can. It takes into account pods that are bound to a node and will be scheduled after lower priority pod preemption.
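Taken together, the filtering functions above are typically chained before any scale-up decision. The sketch below shows one plausible way to combine them, written as if it lived in this package; the helper name filterForScaleUp and all input variables are illustrative assumptions, not part of the API (imports such as apiv1 and simulator are elided).
// filterForScaleUp is a hypothetical helper chaining the filters above.
// unschedulable, allScheduled, nodes, predicateChecker and cutoff are assumed
// to have been obtained by the caller (e.g. from listers).
func filterForScaleUp(
	unschedulable, allScheduled []*apiv1.Pod,
	nodes []*apiv1.Node,
	predicateChecker *simulator.PredicateChecker,
	cutoff int) []*apiv1.Pod {

	// Drop expendable pods and set aside the ones that only wait for
	// lower priority pods to be preempted.
	candidates, waitingForPreemption := FilterOutExpendableAndSplit(unschedulable, cutoff)

	// Keep only the pods that really cannot fit on any existing node.
	return FilterOutSchedulable(candidates, nodes, allScheduled, waitingForPreemption, predicateChecker, cutoff)
}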
func GetNodeInfosForGroups ¶
func GetNodeInfosForGroups(nodes []*apiv1.Node, cloudProvider cloudprovider.CloudProvider, kubeClient kube_client.Interface, daemonsets []*extensionsv1.DaemonSet, predicateChecker *simulator.PredicateChecker) (map[string]*schedulercache.NodeInfo, errors.AutoscalerError)
GetNodeInfosForGroups finds NodeInfos for all node groups used to manage the given nodes. It also returns a node group to sample node mapping. TODO(mwielgus): This returns a map keyed by url, while most code (including scheduler) uses node.Name for a key.
TODO(mwielgus): Review error policy - sometimes we may continue with partial errors.
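A minimal sketch of how the returned mapping might be consumed, assuming cloudProvider, kubeClient, daemonsets and predicateChecker have already been constructed (all variable names are illustrative):
// Build a sample NodeInfo per node group and inspect the result.
nodeInfos, typedErr := GetNodeInfosForGroups(nodes, cloudProvider, kubeClient, daemonsets, predicateChecker)
if typedErr != nil {
	return typedErr // propagate the errors.AutoscalerError to the caller
}
for groupID, nodeInfo := range nodeInfos {
	// Each NodeInfo describes a sample node of its group.
	fmt.Printf("node group %s: sample node %s\n", groupID, nodeInfo.Node().Name)
}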
func ScaleUp ¶
func ScaleUp(context *context.AutoscalingContext, unschedulablePods []*apiv1.Pod, nodes []*apiv1.Node, daemonSets []*extensionsv1.DaemonSet) (bool, errors.AutoscalerError)
ScaleUp tries to scale the cluster up. It returns true if it found a way to increase the size, false if it didn't, and an error if one occurred. It assumes that all nodes in the cluster are ready and in sync with their instance groups.
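A hedged sketch of how a caller might drive ScaleUp; autoscalingContext, unschedulablePods, nodes and daemonSets are assumed to be prepared earlier in the loop iteration.
// Attempt a scale-up for the pods that survived the filtering above.
scaledUp, typedErr := ScaleUp(autoscalingContext, unschedulablePods, nodes, daemonSets)
if typedErr != nil {
	return typedErr
}
if scaledUp {
	// A node group size was increased; the pending pods should schedule
	// once the new nodes register and become ready.
} else {
	// No node group could accommodate the pending pods at this time.
}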
func UpdateClusterStateMetrics ¶
func UpdateClusterStateMetrics(csr *clusterstate.ClusterStateRegistry)
UpdateClusterStateMetrics updates metrics related to cluster state
func UpdateEmptyClusterStateMetrics ¶
func UpdateEmptyClusterStateMetrics()
UpdateEmptyClusterStateMetrics updates metrics related to empty cluster's state. TODO(aleksandra-malinowska): use long unregistered value from ClusterStateRegistry.
Types ¶
type Autoscaler ¶
type Autoscaler interface {
	// RunOnce represents an iteration in the control-loop of CA
	RunOnce(currentTime time.Time) errors.AutoscalerError
	// ExitCleanUp is a clean-up performed just before process termination.
	ExitCleanUp()
}
Autoscaler is the main component of CA, which scales node groups up and down according to its configuration. The configuration can be injected at the creation of an autoscaler.
func NewAutoscaler ¶
func NewAutoscaler(opts AutoscalerOptions) (Autoscaler, errors.AutoscalerError)
NewAutoscaler creates an autoscaler of an appropriate type according to the parameters
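A minimal control-loop sketch, assuming opts is a populated AutoscalerOptions (see below), scanInterval is a configured time.Duration and stopCh is closed on termination; error handling is simplified and the standard log package stands in for the project's logger.
autoscaler, typedErr := NewAutoscaler(opts)
if typedErr != nil {
	log.Fatalf("failed to create autoscaler: %v", typedErr)
}

ticker := time.NewTicker(scanInterval)
defer ticker.Stop()
for {
	select {
	case <-stopCh:
		// Clean up (e.g. remove the status configmap) before exiting.
		autoscaler.ExitCleanUp()
		return
	case <-ticker.C:
		if err := autoscaler.RunOnce(time.Now()); err != nil {
			// A failed iteration is logged; the control loop keeps running.
			log.Printf("autoscaler iteration failed: %v", err)
		}
	}
}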
type AutoscalerBuilder ¶
type AutoscalerBuilder interface {
	SetDynamicConfig(config dynamic.Config) AutoscalerBuilder
	Build() (Autoscaler, errors.AutoscalerError)
}
AutoscalerBuilder builds an instance of Autoscaler which is the core of CA
type AutoscalerBuilderImpl ¶
type AutoscalerBuilderImpl struct {
// contains filtered or unexported fields
}
AutoscalerBuilderImpl builds new autoscalers from its state, including the initial `AutoscalingOptions` given at startup and the `dynamic.Config` read on demand from the configmap.
func NewAutoscalerBuilder ¶
func NewAutoscalerBuilder(autoscalingOptions context.AutoscalingOptions, predicateChecker *simulator.PredicateChecker, kubeClient kube_client.Interface, kubeEventRecorder kube_record.EventRecorder, listerRegistry kube_util.ListerRegistry, podListProcessor pods.PodListProcessor) *AutoscalerBuilderImpl
NewAutoscalerBuilder builds an AutoscalerBuilder from required parameters
func (*AutoscalerBuilderImpl) Build ¶
func (b *AutoscalerBuilderImpl) Build() (Autoscaler, errors.AutoscalerError)
Build an autoscaler according to the builder's state
func (*AutoscalerBuilderImpl) SetDynamicConfig ¶
func (b *AutoscalerBuilderImpl) SetDynamicConfig(config dynamic.Config) AutoscalerBuilder
SetDynamicConfig sets an instance of dynamic.Config read from a configmap, so that the new autoscaler built afterwards reflects the latest configuration contained in the configmap.
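A sketch of the intended builder flow; every argument to NewAutoscalerBuilder is assumed to be constructed elsewhere, and cfg stands for a dynamic.Config freshly read from the configmap.
builder := NewAutoscalerBuilder(autoscalingOptions, predicateChecker, kubeClient,
	kubeEventRecorder, listerRegistry, podListProcessor)

// Rebuild the autoscaler whenever a new dynamic configuration is fetched.
autoscaler, typedErr := builder.SetDynamicConfig(cfg).Build()
if typedErr != nil {
	return nil, typedErr
}
return autoscaler, nil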
type AutoscalerOptions ¶
type AutoscalerOptions struct {
	context.AutoscalingOptions
	dynamic.ConfigFetcherOptions
	KubeClient        kube_client.Interface
	KubeEventRecorder kube_record.EventRecorder
	PredicateChecker  *simulator.PredicateChecker
	ListerRegistry    kube_util.ListerRegistry
	PodListProcessor  pods.PodListProcessor
}
AutoscalerOptions is the whole set of options for configuring an autoscaler
type DynamicAutoscaler ¶
type DynamicAutoscaler struct {
// contains filtered or unexported fields
}
DynamicAutoscaler is a variant of autoscaler which supports dynamic reconfiguration at runtime
func NewDynamicAutoscaler ¶
func NewDynamicAutoscaler(autoscalerBuilder AutoscalerBuilder, configFetcher dynamic.ConfigFetcher) (*DynamicAutoscaler, errors.AutoscalerError)
NewDynamicAutoscaler builds a DynamicAutoscaler from required parameters
func (*DynamicAutoscaler) ExitCleanUp ¶
func (a *DynamicAutoscaler) ExitCleanUp()
ExitCleanUp cleans up after the autoscaler, so no mess remains after process termination.
func (*DynamicAutoscaler) Reconfigure ¶
func (a *DynamicAutoscaler) Reconfigure() error
Reconfigure this dynamic autoscaler if the configmap is updated
func (*DynamicAutoscaler) RunOnce ¶
func (a *DynamicAutoscaler) RunOnce(currentTime time.Time) errors.AutoscalerError
RunOnce represents a single iteration of a dynamic autoscaler inside the CA's control-loop
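One loop iteration with a DynamicAutoscaler might look like the sketch below; calling Reconfigure on every iteration is an assumption of this example rather than a requirement.
// Pick up configmap changes first, then run a regular iteration.
if err := dynamicAutoscaler.Reconfigure(); err != nil {
	log.Printf("failed to reconfigure: %v", err)
}
if err := dynamicAutoscaler.RunOnce(time.Now()); err != nil {
	log.Printf("autoscaler iteration failed: %v", err)
}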
type NodeDeleteStatus ¶
NodeDeleteStatus tells whether a node is being deleted right now.
func (*NodeDeleteStatus) IsDeleteInProgress ¶
func (n *NodeDeleteStatus) IsDeleteInProgress() bool
IsDeleteInProgress returns true if a node is being deleted.
func (*NodeDeleteStatus) SetDeleteInProgress ¶
func (n *NodeDeleteStatus) SetDeleteInProgress(status bool)
SetDeleteInProgress sets deletion process status
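An illustrative (not authoritative) pattern for using the flag to keep at most one deletion in flight; deleteNode is a hypothetical helper.
if !nodeDeleteStatus.IsDeleteInProgress() {
	nodeDeleteStatus.SetDeleteInProgress(true)
	go func() {
		// Always clear the flag, even if the deletion fails.
		defer nodeDeleteStatus.SetDeleteInProgress(false)
		deleteNode(node) // hypothetical helper, not part of this package's API
	}()
}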
type ScaleDown ¶
type ScaleDown struct {
// contains filtered or unexported fields
}
ScaleDown is responsible for maintaining the state needed to perform unneeded node removals.
func NewScaleDown ¶
func NewScaleDown(context *context.AutoscalingContext) *ScaleDown
NewScaleDown builds new ScaleDown object.
func (*ScaleDown) CleanUpUnneededNodes ¶
func (sd *ScaleDown) CleanUpUnneededNodes()
CleanUpUnneededNodes clears the list of unneeded nodes.
func (*ScaleDown) GetCandidatesForScaleDown ¶
func (sd *ScaleDown) GetCandidatesForScaleDown() []*apiv1.Node
GetCandidatesForScaleDown gets candidates for scale down.
func (*ScaleDown) TryToScaleDown ¶
func (sd *ScaleDown) TryToScaleDown(allNodes []*apiv1.Node, pods []*apiv1.Pod, pdbs []*policyv1.PodDisruptionBudget, currentTime time.Time) (ScaleDownResult, errors.AutoscalerError)
TryToScaleDown tries to scale down the cluster. It returns a ScaleDownResult indicating whether any node was removed, and an error if one occurred.
func (*ScaleDown) UpdateUnneededNodes ¶
func (sd *ScaleDown) UpdateUnneededNodes(
	nodes []*apiv1.Node,
	nodesToCheck []*apiv1.Node,
	pods []*apiv1.Pod,
	timestamp time.Time,
	pdbs []*policyv1.PodDisruptionBudget) errors.AutoscalerError
UpdateUnneededNodes calculates which nodes are not needed, i.e. all of their pods can be scheduled somewhere else, and updates the unneededNodes map accordingly. It also computes information about where pods can be rescheduled and each node's utilization level. Timestamp is the current timestamp. The computations are made only for the nodes managed by CA.
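A rough sketch of one scale-down pass, assuming allNodes, pods and pdbs come from listers and autoscalingContext is already built; in practice the ScaleDown object would be created once and reused across iterations. Handling of the returned result is continued after the ScaleDownResult constants below.
scaleDown := NewScaleDown(autoscalingContext)

// First pass: mark nodes whose pods could be rescheduled elsewhere as unneeded.
// Here all nodes are checked; a subset can be passed as nodesToCheck instead.
if typedErr := scaleDown.UpdateUnneededNodes(allNodes, allNodes, pods, time.Now(), pdbs); typedErr != nil {
	return typedErr
}

// Second pass: try to remove a node that has been unneeded for long enough.
result, typedErr := scaleDown.TryToScaleDown(allNodes, pods, pdbs, time.Now())
if typedErr != nil {
	return typedErr
}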
type ScaleDownResult ¶
type ScaleDownResult int
ScaleDownResult represents the state of scale down.
const (
	// ScaleDownError - scale down finished with error.
	ScaleDownError ScaleDownResult = iota
	// ScaleDownNoUnneeded - no unneeded nodes and no errors.
	ScaleDownNoUnneeded
	// ScaleDownNoNodeDeleted - unneeded nodes present but not available for deletion.
	ScaleDownNoNodeDeleted
	// ScaleDownNodeDeleted - a node was deleted.
	ScaleDownNodeDeleted
	// ScaleDownNodeDeleteStarted - a node deletion process was started.
	ScaleDownNodeDeleteStarted
)
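Continuing the sketch above, the result could be dispatched on like this (the reactions described in the comments are illustrative):
switch result {
case ScaleDownNodeDeleted, ScaleDownNodeDeleteStarted:
	// A node was removed or its deletion has started; cluster state will change soon.
case ScaleDownNoUnneeded:
	// Nothing to do in this iteration.
case ScaleDownNoNodeDeleted:
	// Unneeded nodes exist but none could be removed (e.g. blocked by PDBs).
case ScaleDownError:
	// Should be accompanied by a non-nil errors.AutoscalerError.
}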
type StaticAutoscaler ¶
type StaticAutoscaler struct {
	// AutoscalingContext consists of validated settings and options for this autoscaler
	*context.AutoscalingContext
	kube_util.ListerRegistry
	// contains filtered or unexported fields
}
StaticAutoscaler is an autoscaler which has all the core functionality of CA but without the reconfiguration feature.
func NewStaticAutoscaler ¶
func NewStaticAutoscaler(opts context.AutoscalingOptions, predicateChecker *simulator.PredicateChecker, kubeClient kube_client.Interface, kubeEventRecorder kube_record.EventRecorder, listerRegistry kube_util.ListerRegistry, podListProcessor pods.PodListProcessor) (*StaticAutoscaler, errors.AutoscalerError)
NewStaticAutoscaler creates an instance of Autoscaler filled with provided parameters
func (*StaticAutoscaler) ExitCleanUp ¶
func (a *StaticAutoscaler) ExitCleanUp()
ExitCleanUp removes status configmap.
func (*StaticAutoscaler) RunOnce ¶
func (a *StaticAutoscaler) RunOnce(currentTime time.Time) errors.AutoscalerError
RunOnce iterates over node groups and scales them up/down if necessary
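For completeness, a hedged construction sketch for the static variant; every constructor argument is assumed to be prepared at startup.
staticAutoscaler, typedErr := NewStaticAutoscaler(
	autoscalingOptions, predicateChecker, kubeClient,
	kubeEventRecorder, listerRegistry, podListProcessor)
if typedErr != nil {
	log.Fatalf("failed to create static autoscaler: %v", typedErr)
}
// StaticAutoscaler satisfies the Autoscaler interface, so it can drive the
// same control loop shown under NewAutoscaler above.
var _ Autoscaler = staticAutoscaler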