Documentation ¶
Overview ¶
Package node contains code for syncing cloud instances with the node registry.
Index ¶
Constants ¶
This section is empty.
Variables ¶
var (
	// UnreachableTaintTemplate is the taint for when a node becomes unreachable.
	UnreachableTaintTemplate = &v1.Taint{
		Key:    algorithm.TaintNodeUnreachable,
		Effect: v1.TaintEffectNoExecute,
	}

	// NotReadyTaintTemplate is the taint for when a node is not ready for
	// executing pods.
	NotReadyTaintTemplate = &v1.Taint{
		Key:    algorithm.TaintNodeNotReady,
		Effect: v1.TaintEffectNoExecute,
	}
)
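Both templates use the NoExecute effect, so pods that do not tolerate them are evicted from the tainted node. Applying such a taint must be idempotent. The sketch below models that with a local stand-in for v1.Taint (the real type lives in k8s.io/api/core/v1); the key strings are assumptions for illustration, not the values of algorithm.TaintNodeUnreachable and algorithm.TaintNodeNotReady:

```go
package main

import "fmt"

// Taint is a local stand-in for v1.Taint, reduced to the two fields the
// templates above set.
type Taint struct {
	Key    string
	Effect string
}

// Illustrative template values; the key strings here are assumptions.
var (
	unreachableTaint = Taint{Key: "node.kubernetes.io/unreachable", Effect: "NoExecute"}
	notReadyTaint    = Taint{Key: "node.kubernetes.io/not-ready", Effect: "NoExecute"}
)

// addTaintIfMissing appends t to taints unless an equal taint is already
// present, mimicking the idempotent update the controller needs when it
// re-applies a taint template on every sync.
func addTaintIfMissing(taints []Taint, t Taint) []Taint {
	for _, existing := range taints {
		if existing == t {
			return taints
		}
	}
	return append(taints, t)
}

func main() {
	taints := []Taint{notReadyTaint}
	taints = addTaintIfMissing(taints, unreachableTaint)
	taints = addTaintIfMissing(taints, unreachableTaint) // second call is a no-op
	fmt.Println(len(taints))                             // → 2
}
```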
Functions ¶
Types ¶
type Controller ¶ added in v1.8.0
type Controller struct {
// contains filtered or unexported fields
}
Controller is the controller that manages node related cluster state.
func NewNodeController ¶
func NewNodeController(
	podInformer coreinformers.PodInformer,
	nodeInformer coreinformers.NodeInformer,
	daemonSetInformer extensionsinformers.DaemonSetInformer,
	cloud cloudprovider.Interface,
	kubeClient clientset.Interface,
	podEvictionTimeout time.Duration,
	evictionLimiterQPS float32,
	secondaryEvictionLimiterQPS float32,
	largeClusterThreshold int32,
	unhealthyZoneThreshold float32,
	nodeMonitorGracePeriod time.Duration,
	nodeStartupGracePeriod time.Duration,
	nodeMonitorPeriod time.Duration,
	clusterCIDR *net.IPNet,
	serviceCIDR *net.IPNet,
	nodeCIDRMaskSize int,
	allocateNodeCIDRs bool,
	allocatorType ipam.CIDRAllocatorType,
	runTaintManager bool,
	useTaintBasedEvictions bool,
	taintNodeByCondition bool) (*Controller, error)
NewNodeController returns a new node controller to sync instances from cloudprovider. This method returns an error if it is unable to initialize the CIDR bitmap with podCIDRs it has already allocated to nodes. Since we don't allow podCIDR changes currently, this should be handled as a fatal error.
func (*Controller) ComputeZoneState ¶ added in v1.8.0
func (nc *Controller) ComputeZoneState(nodeReadyConditions []*v1.NodeCondition) (int, ZoneState)
ComputeZoneState determines the state of a zone from the NodeReady conditions of all Nodes in that zone, and also returns the number of Nodes that are not Ready. The zone is considered:
- fullyDisrupted if there are no Ready Nodes,
- partiallyDisrupted if at least nc.unhealthyZoneThreshold percent of the Nodes are not Ready,
- normal otherwise
func (*Controller) HealthyQPSFunc ¶ added in v1.8.0
func (nc *Controller) HealthyQPSFunc(nodeNum int) float32
HealthyQPSFunc returns the default cluster eviction rate; it takes nodeNum only for consistency with ReducedQPSFunc.
func (*Controller) ReducedQPSFunc ¶ added in v1.8.0
func (nc *Controller) ReducedQPSFunc(nodeNum int) float32
ReducedQPSFunc returns the eviction QPS for a disrupted zone: if the cluster is large, evictions are slowed down; if it is small, evictions stop altogether.
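Taken together, the two functions select an eviction rate per zone. The sketch below models that selection with plain parameters; evictionQPS is an illustrative helper, and evictionLimiterQPS, secondaryQPS, and largeClusterThreshold stand in for the identically named NewNodeController parameters:

```go
package main

import "fmt"

// evictionQPS picks an eviction rate for a zone: the normal limiter QPS
// when the zone is healthy (HealthyQPSFunc ignores nodeNum), a reduced
// secondary QPS in a large disrupted cluster, and zero (evictions
// stopped) in a small one.
func evictionQPS(nodeNum int, disrupted bool,
	evictionLimiterQPS, secondaryQPS float32, largeClusterThreshold int) float32 {
	if !disrupted {
		return evictionLimiterQPS // healthy zone: constant default rate
	}
	if nodeNum > largeClusterThreshold {
		return secondaryQPS // large cluster: slow evictions down
	}
	return 0 // small cluster: stop evictions altogether
}

func main() {
	fmt.Println(evictionQPS(100, false, 0.1, 0.01, 50)) // → 0.1
	fmt.Println(evictionQPS(100, true, 0.1, 0.01, 50))  // → 0.01
	fmt.Println(evictionQPS(10, true, 0.1, 0.01, 50))   // → 0
}
```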
func (*Controller) Run ¶ added in v1.8.0
func (nc *Controller) Run(stopCh <-chan struct{})
Run starts an asynchronous loop that monitors the status of cluster nodes.