Documentation ¶
Index ¶
- func BalancedResourceAllocation(pod *api.Pod, machinesToPods map[string][]*api.Pod, ...) (schedulerapi.HostPriorityList, error)
- func LeastRequestedPriority(pod *api.Pod, machinesToPods map[string][]*api.Pod, ...) (schedulerapi.HostPriorityList, error)
- func NewNodeLabelPriority(label string, presence bool) algorithm.PriorityFunction
- func NewSelectorSpreadPriority(serviceLister algorithm.ServiceLister, ...) algorithm.PriorityFunction
- func NewServiceAntiAffinityPriority(serviceLister algorithm.ServiceLister, label string) algorithm.PriorityFunction
- type NodeLabelPrioritizer
- type SelectorSpread
- type ServiceAntiAffinity
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func BalancedResourceAllocation ¶
func BalancedResourceAllocation(pod *api.Pod, machinesToPods map[string][]*api.Pod, podLister algorithm.PodLister, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)
BalancedResourceAllocation favors nodes with balanced resource usage rate. BalancedResourceAllocation should **NOT** be used alone, and **MUST** be used together with LeastRequestedPriority. It calculates the difference between the CPU and memory fraction of capacity, and prioritizes the host based on how close the two metrics are to each other. Detail: score = 10 - abs(cpuFraction-memoryFraction)*10. The algorithm is partly inspired by: "Wei Huang et al. An Energy Efficient Virtual Machine Placement Algorithm with Balanced Resource Utilization"
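The scoring formula quoted above can be sketched in isolation as below. The helper name balancedScore and the pre-computed fractions are illustrative assumptions; the real function derives them from pod requests and node capacity via the listers in its signature.

package main

import (
	"fmt"
	"math"
)

// balancedScore implements score = 10 - abs(cpuFraction-memoryFraction)*10,
// where each fraction is requested/capacity on the node with the pod added.
func balancedScore(cpuFraction, memoryFraction float64) int {
	diff := math.Abs(cpuFraction - memoryFraction)
	return int(10 - diff*10)
}

func main() {
	fmt.Println(balancedScore(0.6, 0.5)) // 9: slightly unbalanced usage
	fmt.Println(balancedScore(0.4, 0.4)) // 10: perfectly balanced usage
}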
func LeastRequestedPriority ¶
func LeastRequestedPriority(pod *api.Pod, machinesToPods map[string][]*api.Pod, podLister algorithm.PodLister, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)
LeastRequestedPriority is a priority function that favors nodes with fewer requested resources. It calculates the percentage of memory and CPU requested by pods scheduled on the node, and prioritizes nodes based on the average of the fraction of each resource's capacity that remains unrequested. Details: (cpu((capacity - sum(requested)) * 10 / capacity) + memory((capacity - sum(requested)) * 10 / capacity)) / 2
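The per-node arithmetic can be sketched as follows; the helper name leastRequestedScore and the raw millicore/byte inputs are assumptions for illustration, not the scheduler's actual types.

package main

import "fmt"

// leastRequestedScore averages the unused-capacity scores for CPU and memory,
// where each per-resource score is (capacity - requested) * 10 / capacity.
func leastRequestedScore(cpuRequested, cpuCapacity, memRequested, memCapacity int64) int64 {
	cpuScore := (cpuCapacity - cpuRequested) * 10 / cpuCapacity
	memScore := (memCapacity - memRequested) * 10 / memCapacity
	return (cpuScore + memScore) / 2
}

func main() {
	// 2000m of 4000m CPU and 2GiB of 8GiB memory requested -> (5 + 7) / 2 = 6.
	fmt.Println(leastRequestedScore(2000, 4000, 2<<30, 8<<30))
}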
func NewNodeLabelPriority ¶
func NewNodeLabelPriority(label string, presence bool) algorithm.PriorityFunction
func NewSelectorSpreadPriority ¶
func NewSelectorSpreadPriority(serviceLister algorithm.ServiceLister, controllerLister algorithm.ControllerLister) algorithm.PriorityFunction
func NewServiceAntiAffinityPriority ¶
func NewServiceAntiAffinityPriority(serviceLister algorithm.ServiceLister, label string) algorithm.PriorityFunction
Types ¶
type NodeLabelPrioritizer ¶
type NodeLabelPrioritizer struct {
// contains filtered or unexported fields
}
func (*NodeLabelPrioritizer) CalculateNodeLabelPriority ¶
func (n *NodeLabelPrioritizer) CalculateNodeLabelPriority(pod *api.Pod, machinesToPods map[string][]*api.Pod, podLister algorithm.PodLister, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)
CalculateNodeLabelPriority checks whether a particular label exists on a node or not, regardless of its value. If presence is true, prioritizes nodes that have the specified label, regardless of value. If presence is false, prioritizes nodes that do not have the specified label.
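A minimal sketch of that presence check, assuming a score of 10 for matching nodes and 0 otherwise (the helper name and score constants are hypothetical, not the actual API types):

package main

import "fmt"

// labelPriorityScore returns 10 when the node's labels match the desired
// presence/absence of the label, ignoring the label's value, and 0 otherwise.
func labelPriorityScore(nodeLabels map[string]string, label string, presence bool) int {
	_, exists := nodeLabels[label]
	if exists == presence {
		return 10
	}
	return 0
}

func main() {
	labels := map[string]string{"zone": "us-east-1a"}
	fmt.Println(labelPriorityScore(labels, "zone", true))       // 10: label present, presence wanted
	fmt.Println(labelPriorityScore(labels, "dedicated", false)) // 10: label absent, absence wanted
}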
type SelectorSpread ¶
type SelectorSpread struct {
// contains filtered or unexported fields
}
func (*SelectorSpread) CalculateSpreadPriority ¶
func (s *SelectorSpread) CalculateSpreadPriority(pod *api.Pod, machinesToPods map[string][]*api.Pod, podLister algorithm.PodLister, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)
CalculateSpreadPriority spreads pods across hosts and zones, considering pods belonging to the same service or replication controller. When a pod is scheduled, it looks for services or RCs that match the pod, then finds existing pods that match those selectors. It favors nodes with fewer existing matching pods; that is, it pushes the scheduler towards the node with the smallest number of pods matching the same service or RC selectors as the pod being scheduled. Where zone information is included on the nodes, it favors nodes in zones with fewer existing matching pods.
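One way to picture the host-level scoring is the sketch below, which turns per-node counts of matching pods into scores so that emptier nodes rank higher. The normalization shown (scaling against the busiest node) is an assumption for illustration, and the zone-level pass described above is omitted.

package main

import "fmt"

// spreadScores maps each node to 10 * (maxCount - count) / maxCount, so a
// node with no matching pods scores 10 and the busiest node scores 0.
func spreadScores(matchingPodsPerNode map[string]int) map[string]int {
	maxCount := 0
	for _, c := range matchingPodsPerNode {
		if c > maxCount {
			maxCount = c
		}
	}
	scores := make(map[string]int, len(matchingPodsPerNode))
	for node, c := range matchingPodsPerNode {
		score := 10
		if maxCount > 0 {
			score = 10 * (maxCount - c) / maxCount
		}
		scores[node] = score
	}
	return scores
}

func main() {
	fmt.Println(spreadScores(map[string]int{"node-a": 3, "node-b": 1, "node-c": 0}))
	// node-a: 0, node-b: 6, node-c: 10
}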
type ServiceAntiAffinity ¶
type ServiceAntiAffinity struct {
// contains filtered or unexported fields
}
func (*ServiceAntiAffinity) CalculateAntiAffinityPriority ¶
func (s *ServiceAntiAffinity) CalculateAntiAffinityPriority(pod *api.Pod, machinesToPods map[string][]*api.Pod, podLister algorithm.PodLister, nodeLister algorithm.NodeLister) (schedulerapi.HostPriorityList, error)
CalculateAntiAffinityPriority spreads pods by minimizing the number of pods belonging to the same service on machines with the same value for a particular label. The label to be considered is provided to the struct (ServiceAntiAffinity).
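The idea can be sketched as follows: the service's pods are counted per value of the configured label (for example a zone label), and nodes whose label value hosts fewer of those pods score higher. The scaling used here is an assumption for illustration only.

package main

import "fmt"

// antiAffinityScore scores a node by how few of the service's pods already run
// on nodes sharing its label value: 10 * (maxCount - count) / maxCount.
func antiAffinityScore(podsPerLabelValue map[string]int, nodeLabelValue string) int {
	maxCount := 0
	for _, c := range podsPerLabelValue {
		if c > maxCount {
			maxCount = c
		}
	}
	if maxCount == 0 {
		return 10
	}
	return 10 * (maxCount - podsPerLabelValue[nodeLabelValue]) / maxCount
}

func main() {
	counts := map[string]int{"zone-a": 4, "zone-b": 1}
	fmt.Println(antiAffinityScore(counts, "zone-a")) // 0: most crowded label value
	fmt.Println(antiAffinityScore(counts, "zone-b")) // 7: lightly loaded label value
}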