Documentation ¶
Index ¶
- Variables
- func CheckPodsExceedingFreeResources(pods []*api.Pod, capacity api.ResourceList) (fitting []*api.Pod, notFittingCPU, notFittingMemory []*api.Pod)
- func MapPodsToMachines(lister algorithm.PodLister) (map[string][]*api.Pod, error)
- func NewNodeLabelPredicate(info NodeInfo, labels []string, presence bool) algorithm.FitPredicate
- func NewResourceFitPredicate(info NodeInfo) algorithm.FitPredicate
- func NewSelectorMatchPredicate(info NodeInfo) algorithm.FitPredicate
- func NewServiceAffinityPredicate(podLister algorithm.PodLister, serviceLister algorithm.ServiceLister, ...) algorithm.FitPredicate
- func NoDiskConflict(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)
- func PodFitsHost(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)
- func PodFitsHostPorts(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)
- func PodMatchesNodeLabels(pod *api.Pod, node *api.Node) bool
- type CachedNodeInfo
- type ClientNodeInfo
- type NodeInfo
- type NodeLabelChecker
- type NodeSelector
- type ResourceFit
- type ServiceAffinity
- type StaticNodeInfo
Constants ¶
This section is empty.
Variables ¶
var FailedResourceType string
Functions ¶
func CheckPodsExceedingFreeResources ¶ added in v1.1.1
func CheckPodsExceedingFreeResources(pods []*api.Pod, capacity api.ResourceList) (fitting []*api.Pod, notFittingCPU, notFittingMemory []*api.Pod)
func MapPodsToMachines ¶
func MapPodsToMachines(lister algorithm.PodLister) (map[string][]*api.Pod, error)
MapPodsToMachines obtains a list of pods and pivots that list into a map whose keys are host names and whose values are the lists of pods running on each host.
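A minimal sketch of that pivot, using a simplified Pod type as a stand-in for api.Pod (the field names here are illustrative, not the real API):

package main

import "fmt"

// Pod stands in for api.Pod; only the assigned host matters for the pivot.
type Pod struct {
    Name string
    Host string
}

// mapPodsToMachines groups a flat list of pods by the host they run on.
func mapPodsToMachines(pods []*Pod) map[string][]*Pod {
    machineToPods := map[string][]*Pod{}
    for _, p := range pods {
        machineToPods[p.Host] = append(machineToPods[p.Host], p)
    }
    return machineToPods
}

func main() {
    pods := []*Pod{{"a", "node-1"}, {"b", "node-2"}, {"c", "node-1"}}
    for host, ps := range mapPodsToMachines(pods) {
        fmt.Println(host, len(ps)) // e.g. "node-1 2" and "node-2 1" (map order is not guaranteed)
    }
}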
func NewNodeLabelPredicate ¶
func NewNodeLabelPredicate(info NodeInfo, labels []string, presence bool) algorithm.FitPredicate
func NewResourceFitPredicate ¶
func NewResourceFitPredicate(info NodeInfo) algorithm.FitPredicate
func NewSelectorMatchPredicate ¶
func NewSelectorMatchPredicate(info NodeInfo) algorithm.FitPredicate
func NewServiceAffinityPredicate ¶
func NewServiceAffinityPredicate(podLister algorithm.PodLister, serviceLister algorithm.ServiceLister, nodeInfo NodeInfo, labels []string) algorithm.FitPredicate
func NoDiskConflict ¶
func NoDiskConflict(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)
NoDiskConflict evaluates whether a pod can fit based on the volumes it requests and those that are already mounted on the node. If a volume is already mounted on the node, another pod that uses the same volume cannot be scheduled there. This is specific to GCE PD, Amazon EBS, and Ceph RBD for now:
- GCE PD allows multiple mounts as long as they are all read-only
- AWS EBS forbids any two pods from mounting the same volume ID
- Ceph RBD forbids scheduling if two pods share at least one monitor and use the same pool and image
TODO: migrate this into some per-volume specific code?
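As a rough illustration (not the real implementation), the EBS-style rule reduces to checking whether the candidate pod names a volume ID that any existing pod on the node already uses; the read-only GCE PD and Ceph RBD cases add further conditions not shown here:

// volumesConflict is a simplified sketch: it reports whether the candidate
// pod requests a volume ID that an existing pod on the node already mounts.
// The real predicate also handles the read-only GCE PD case and the
// monitor/pool/image matching for Ceph RBD.
func volumesConflict(candidateVolumes []string, existingPodVolumes [][]string) bool {
    inUse := map[string]bool{}
    for _, vols := range existingPodVolumes {
        for _, v := range vols {
            inUse[v] = true
        }
    }
    for _, v := range candidateVolumes {
        if inUse[v] {
            return true // same volume ID already mounted on this node
        }
    }
    return false
}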
func PodFitsHost ¶
func PodFitsHost(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)
func PodFitsHostPorts ¶ added in v1.1.1
func PodFitsHostPorts(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)
func PodMatchesNodeLabels ¶
func PodMatchesNodeLabels(pod *api.Pod, node *api.Node) bool
Types ¶
type CachedNodeInfo ¶ added in v1.2.0
type CachedNodeInfo struct {
    *cache.StoreToNodeLister
}
func (*CachedNodeInfo) GetNodeInfo ¶ added in v1.2.0
func (c *CachedNodeInfo) GetNodeInfo(id string) (*api.Node, error)
GetNodeInfo returns cached data for the node 'id'.
type ClientNodeInfo ¶
func (ClientNodeInfo) GetNodeInfo ¶
func (nodes ClientNodeInfo) GetNodeInfo(nodeID string) (*api.Node, error)
type NodeLabelChecker ¶
type NodeLabelChecker struct {
    // contains filtered or unexported fields
}
func (*NodeLabelChecker) CheckNodeLabelPresence ¶
func (n *NodeLabelChecker) CheckNodeLabelPresence(pod *api.Pod, existingPods []*api.Pod, nodeID string) (bool, error)
CheckNodeLabelPresence checks whether all of the specified labels exist on a node, regardless of their value. If "presence" is false, it returns false if any of the requested labels matches any of the node's labels, and true otherwise. If "presence" is true, it returns false if any of the requested labels does not match any of the node's labels, and true otherwise.
Consider the case where nodes are placed in regions, zones, or racks and these are identified by labels. In some cases it is required that only nodes belonging to ANY of the defined regions/zones/racks be selected.
Alternately, eliminating nodes that have a certain label, regardless of its value, is also useful. A node may have a label with "retiring" as the key and the date as the value, and it may be desirable to avoid scheduling new pods on that node.
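A compact sketch of that rule, with node labels reduced to a plain map (illustrative only, not the actual implementation):

// checkLabelPresence is a simplified sketch of the presence/absence rule:
// with presence=true every requested label must exist on the node,
// with presence=false none of them may exist. Label values are ignored.
func checkLabelPresence(nodeLabels map[string]string, requested []string, presence bool) bool {
    for _, l := range requested {
        _, exists := nodeLabels[l]
        if presence && !exists {
            return false // a required label is missing
        }
        if !presence && exists {
            return false // a forbidden label is present
        }
    }
    return true
}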
type NodeSelector ¶
type NodeSelector struct {
    // contains filtered or unexported fields
}
func (*NodeSelector) PodSelectorMatches ¶
type ResourceFit ¶
type ResourceFit struct {
    // contains filtered or unexported fields
}
func (*ResourceFit) PodFitsResources ¶
func (r *ResourceFit) PodFitsResources(pod *api.Pod, existingPods []*api.Pod, node string) (bool, error)
PodFitsResources calculates fit based on requested, rather than used, resources.
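A simplified sketch of the "requested, not used" check, with resource quantities reduced to plain integers (millicores and bytes); the real predicate works on api.ResourceList values:

// request is a simplified stand-in for a pod's resource requests.
type request struct{ MilliCPU, Memory int64 }

// fitsResources sums the requests of the pods already on the node plus the
// candidate pod and compares the totals against the node's capacity.
func fitsResources(candidate request, existing []request, capacity request) bool {
    var used request
    for _, r := range existing {
        used.MilliCPU += r.MilliCPU
        used.Memory += r.Memory
    }
    return used.MilliCPU+candidate.MilliCPU <= capacity.MilliCPU &&
        used.Memory+candidate.Memory <= capacity.Memory
}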
type ServiceAffinity ¶
type ServiceAffinity struct {
    // contains filtered or unexported fields
}
func (*ServiceAffinity) CheckServiceAffinity ¶
func (s *ServiceAffinity) CheckServiceAffinity(pod *api.Pod, existingPods []*api.Pod, nodeID string) (bool, error)
CheckServiceAffinity ensures that only the nodes that match the specified labels are considered for scheduling. The set of labels to be considered are provided to the struct (ServiceAffinity). The pod is checked for the labels and any missing labels are then checked in the node that hosts the service pods (peers) for the given pod.
We add an implicit selector requiring some particular value V for label L to a pod if:
- L is listed in the ServiceAffinity object that is passed into the function
- the pod does not have any NodeSelector for L
- some other pod from the same service is already scheduled onto a node that has value V for label L
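A sketch of how such an implicit selector could be derived, with labels and selectors reduced to plain maps (the function and parameter names here are illustrative, not the actual implementation):

// implicitSelector applies the rule above: for each affinity label L that the
// pod does not already constrain, adopt the value V from a node that is
// already running a peer pod of the same service.
func implicitSelector(affinityLabels []string, podSelector, peerNodeLabels map[string]string) map[string]string {
    selector := map[string]string{}
    for k, v := range podSelector {
        selector[k] = v // the pod's own NodeSelector entries always win
    }
    for _, l := range affinityLabels {
        if _, ok := selector[l]; ok {
            continue // the pod already pins this label
        }
        if v, ok := peerNodeLabels[l]; ok {
            selector[l] = v // inherit the peer's placement for label L
        }
    }
    return selector
}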
type StaticNodeInfo ¶
func (StaticNodeInfo) GetNodeInfo ¶
func (nodes StaticNodeInfo) GetNodeInfo(nodeID string) (*api.Node, error)