Documentation ¶
Index ¶
- Variables
- func CheckNodeMemoryPressurePredicate(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
- func CheckPodsExceedingFreeResources(pods []*api.Pod, allocatable api.ResourceList) (fitting []*api.Pod, ...)
- func GeneralPredicates(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
- func NewMaxPDVolumeCountPredicate(filter VolumeFilter, maxVolumes int, pvInfo PersistentVolumeInfo, ...) algorithm.FitPredicate
- func NewNodeLabelPredicate(labels []string, presence bool) algorithm.FitPredicate
- func NewPodAffinityPredicate(info NodeInfo, podLister algorithm.PodLister, failureDomains []string) algorithm.FitPredicate
- func NewServiceAffinityPredicate(podLister algorithm.PodLister, serviceLister algorithm.ServiceLister, ...) algorithm.FitPredicate
- func NewTolerationMatchPredicate(info NodeInfo) algorithm.FitPredicate
- func NewVolumeZonePredicate(pvInfo PersistentVolumeInfo, pvcInfo PersistentVolumeClaimInfo) algorithm.FitPredicate
- func NoDiskConflict(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
- func PodFitsHost(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
- func PodFitsHostPorts(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
- func PodFitsResources(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
- func PodMatchesNodeLabels(pod *api.Pod, node *api.Node) bool
- func PodSelectorMatches(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
- type CachedNodeInfo
- type InsufficientResourceError
- type MaxPDVolumeCountChecker
- type NodeInfo
- type NodeLabelChecker
- type PersistentVolumeClaimInfo
- type PersistentVolumeInfo
- type PodAffinityChecker
- func (checker *PodAffinityChecker) AnyPodMatchesPodAffinityTerm(pod *api.Pod, allPods []*api.Pod, node *api.Node, ...) (bool, error)
- func (checker *PodAffinityChecker) InterPodAffinityMatches(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
- func (checker *PodAffinityChecker) NodeMatchPodAffinityAntiAffinity(pod *api.Pod, allPods []*api.Pod, node *api.Node) bool
- func (checker *PodAffinityChecker) NodeMatchesHardPodAffinity(pod *api.Pod, allPods []*api.Pod, node *api.Node, podAffinity *api.PodAffinity) bool
- func (checker *PodAffinityChecker) NodeMatchesHardPodAntiAffinity(pod *api.Pod, allPods []*api.Pod, node *api.Node, ...) bool
- type PredicateFailureError
- type ServiceAffinity
- type TolerationMatch
- type VolumeFilter
- type VolumeZoneChecker
Constants ¶
This section is empty.
Variables ¶
var (
	// The predicateName tries to be consistent with the predicate names used in
	// DefaultAlgorithmProvider defined in defaults.go (which tend to be stable
	// for backward compatibility)
	ErrDiskConflict              = newPredicateFailureError("NoDiskConflict")
	ErrVolumeZoneConflict        = newPredicateFailureError("NoVolumeZoneConflict")
	ErrNodeSelectorNotMatch      = newPredicateFailureError("MatchNodeSelector")
	ErrPodAffinityNotMatch       = newPredicateFailureError("MatchInterPodAffinity")
	ErrTaintsTolerationsNotMatch = newPredicateFailureError("PodToleratesNodeTaints")
	ErrPodNotMatchHostName       = newPredicateFailureError("HostName")
	ErrPodNotFitsHostPorts       = newPredicateFailureError("PodFitsHostPorts")
	ErrNodeLabelPresenceViolated = newPredicateFailureError("CheckNodeLabelPresence")
	ErrServiceAffinityViolated   = newPredicateFailureError("CheckServiceAffinity")
	ErrMaxVolumeCountExceeded    = newPredicateFailureError("MaxVolumeCount")
	ErrNodeUnderMemoryPressure   = newPredicateFailureError("NodeUnderMemoryPressure")

	// ErrFakePredicate is used for tests only. A fake predicate that returns
	// false also returns ErrFakePredicate as its error.
	ErrFakePredicate = newPredicateFailureError("FakePredicateError")
)
Functions ¶
func CheckNodeMemoryPressurePredicate ¶ added in v1.3.0
func CheckNodeMemoryPressurePredicate(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
CheckNodeMemoryPressurePredicate checks if a pod can be scheduled on a node reporting memory pressure condition.
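A minimal sketch of invoking the predicate directly (pod and nodeInfo are assumed to be in scope, with nodeInfo built by the scheduler cache for the node under consideration; that a failing predicate reports its reason through the returned error is inferred from the exported error variables above):

	fits, err := CheckNodeMemoryPressurePredicate(pod, nodeInfo)
	if !fits {
		// err is expected to be ErrNodeUnderMemoryPressure: the pod
		// should not land on a node currently reporting the
		// MemoryPressure condition.
		fmt.Printf("skipping node: %v\n", err)
	}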
func CheckPodsExceedingFreeResources ¶ added in v1.1.0
func GeneralPredicates ¶ added in v1.3.0
func GeneralPredicates(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
func NewMaxPDVolumeCountPredicate ¶ added in v1.2.0
func NewMaxPDVolumeCountPredicate(filter VolumeFilter, maxVolumes int, pvInfo PersistentVolumeInfo, pvcInfo PersistentVolumeClaimInfo) algorithm.FitPredicate
NewMaxPDVolumeCountPredicate creates a predicate which evaluates whether a pod can fit based on the number of volumes which match a filter that it requests, and those that are already present. The maximum number is configurable to accommodate different systems.
The predicate looks for both volumes used directly, as well as PVC volumes that are backed by relevant volume types, counts the number of unique volumes, and rejects the new pod if it would place the total count over the maximum.
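For illustration, a hedged sketch of wiring this up for EBS (the cap of 39 mirrors the commonly used AWS attachment limit and is an assumption here; pvInfo and pvcInfo stand for whatever implements PersistentVolumeInfo and PersistentVolumeClaimInfo in your setup):

	// Count EBS volumes, declared inline or reached through PVCs, and
	// reject pods that would push the node past maxVolumes.
	maxEBS := NewMaxPDVolumeCountPredicate(EBSVolumeFilter, 39, pvInfo, pvcInfo)

	if fits, err := maxEBS(pod, nodeInfo); !fits {
		// err is expected to be ErrMaxVolumeCountExceeded
		fmt.Printf("rejected: %v\n", err)
	}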
func NewNodeLabelPredicate ¶
func NewNodeLabelPredicate(labels []string, presence bool) algorithm.FitPredicate
func NewPodAffinityPredicate ¶ added in v1.3.0
func NewPodAffinityPredicate(info NodeInfo, podLister algorithm.PodLister, failureDomains []string) algorithm.FitPredicate
func NewServiceAffinityPredicate ¶
func NewServiceAffinityPredicate(podLister algorithm.PodLister, serviceLister algorithm.ServiceLister, nodeInfo NodeInfo, labels []string) algorithm.FitPredicate
func NewTolerationMatchPredicate ¶ added in v1.3.0
func NewTolerationMatchPredicate(info NodeInfo) algorithm.FitPredicate
func NewVolumeZonePredicate ¶ added in v1.2.0
func NewVolumeZonePredicate(pvInfo PersistentVolumeInfo, pvcInfo PersistentVolumeClaimInfo) algorithm.FitPredicate
NewVolumeZonePredicate evaluates if a pod can fit due to the volumes it requests, given that some volumes may have zone scheduling constraints. The requirement is that any volume zone-labels must match the equivalent zone-labels on the node. It is OK for the node to have more zone-label constraints (for example, a hypothetical replicated volume might allow region-wide access).
Currently this is only supported with PersistentVolumeClaims, and looks to the labels only on the bound PersistentVolume.
Working with volumes declared inline in the pod specification (i.e. not using a PersistentVolume) is likely to be harder, as it would require determining the zone of a volume during scheduling, and that is likely to require calling out to the cloud provider. It seems that we are moving away from inline volume declarations anyway.
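A short construction sketch (the zone label key shown is the beta failure-domain label conventionally used around this release; treat it as an assumption):

	pred := NewVolumeZonePredicate(pvInfo, pvcInfo)

	// A bound PV labeled failure-domain.beta.kubernetes.io/zone=us-east-1a
	// only fits nodes carrying the same zone label; a mismatch is expected
	// to surface as ErrVolumeZoneConflict.
	if fits, err := pred(pod, nodeInfo); !fits {
		fmt.Printf("zone conflict: %v\n", err)
	}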
func NoDiskConflict ¶
func NoDiskConflict(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
NoDiskConflict evaluates if a pod can fit due to the volumes it requests, and those that are already mounted. If a volume is already mounted on a node, another pod that uses the same volume can't be scheduled there. This is GCE, Amazon EBS, and Ceph RBD specific for now:
- GCE PD allows multiple mounts as long as they're all read-only
- AWS EBS forbids any two pods from mounting the same volume ID
- Ceph RBD forbids two pods from sharing the same monitor, pool, and image
TODO: migrate this into some per-volume specific code?
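For example, a pod requesting an EBS volume ID that a pod already on the node mounts should be rejected; a minimal sketch (the volume ID is made up):

	newPod := &api.Pod{
		Spec: api.PodSpec{
			Volumes: []api.Volume{{
				Name: "data",
				VolumeSource: api.VolumeSource{
					AWSElasticBlockStore: &api.AWSElasticBlockStoreVolumeSource{
						VolumeID: "vol-0123456789abcdef0",
					},
				},
			}},
		},
	}

	if fits, err := NoDiskConflict(newPod, nodeInfo); !fits {
		// ErrDiskConflict is expected if some pod tracked in nodeInfo
		// already mounts the same volume ID.
		fmt.Printf("conflict: %v\n", err)
	}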
func PodFitsHost ¶
func PodFitsHost(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
func PodFitsHostPorts ¶ added in v1.2.0
func PodFitsHostPorts(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
func PodFitsResources ¶ added in v1.3.0
func PodFitsResources(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
func PodMatchesNodeLabels ¶
func PodMatchesNodeLabels(pod *api.Pod, node *api.Node) bool
A pod can be scheduled onto a node only if the node satisfies the requirements in both NodeAffinity and nodeSelector.
func PodSelectorMatches ¶ added in v1.3.0
func PodSelectorMatches(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
Types ¶
type CachedNodeInfo ¶ added in v1.2.0
type CachedNodeInfo struct {
*cache.StoreToNodeLister
}
func (*CachedNodeInfo) GetNodeInfo ¶ added in v1.2.0
func (c *CachedNodeInfo) GetNodeInfo(id string) (*api.Node, error)
GetNodeInfo returns cached data for the node 'id'.
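A minimal usage sketch (nodeLister is assumed to be the scheduler's *cache.StoreToNodeLister):

	ni := &CachedNodeInfo{StoreToNodeLister: nodeLister}
	node, err := ni.GetNodeInfo("node-1")
	if err != nil {
		// the node is not (or is no longer) present in the cache
		return
	}
	fmt.Println(node.Name)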
type InsufficientResourceError ¶ added in v1.2.0
type InsufficientResourceError struct {
	// resourceName is the name of the resource that is insufficient
	ResourceName string
	// contains filtered or unexported fields
}
InsufficientResourceError is an error type that indicates which resource limit was hit, causing the pod not to fit.
func (*InsufficientResourceError) Error ¶ added in v1.2.0
func (e *InsufficientResourceError) Error() string
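Callers can distinguish resource exhaustion from other predicate failures with a type assertion; a sketch, assuming the convention visible above that failing predicates return a typed error alongside false:

	if fits, err := PodFitsResources(pod, nodeInfo); !fits {
		switch e := err.(type) {
		case *InsufficientResourceError:
			// e.ResourceName names the exhausted resource (cpu, memory, ...)
			fmt.Printf("node lacks %v\n", e.ResourceName)
		case *PredicateFailureError:
			fmt.Printf("predicate %s failed\n", e.PredicateName)
		}
	}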
type MaxPDVolumeCountChecker ¶ added in v1.2.0
type MaxPDVolumeCountChecker struct {
// contains filtered or unexported fields
}
type NodeLabelChecker ¶
type NodeLabelChecker struct {
// contains filtered or unexported fields
}
func (*NodeLabelChecker) CheckNodeLabelPresence ¶
func (n *NodeLabelChecker) CheckNodeLabelPresence(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
CheckNodeLabelPresence checks whether all of the specified labels exist on a node, regardless of their value. If "presence" is false, it returns false if any of the requested labels matches any of the node's labels, and true otherwise. If "presence" is true, it returns false if any of the requested labels does not match any of the node's labels, and true otherwise.
Consider the case where nodes are placed in regions, zones, or racks identified by labels. In some cases it is required that only nodes belonging to ANY of the defined regions/zones/racks be selected.
Alternately, eliminating nodes that carry a certain label, regardless of its value, is also useful. A node may have a label with "retiring" as the key and the date as the value, and it may be desirable to avoid scheduling new pods on it. Both cases are sketched below.
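	// Only consider nodes that carry a "zone" label, whatever its value.
	requireZone := NewNodeLabelPredicate([]string{"zone"}, true)
	if fits, err := requireZone(pod, nodeInfo); !fits {
		fmt.Printf("node lacks the zone label: %v\n", err)
	}

	// Eliminate nodes that carry a "retiring" label, whatever its value.
	avoidRetiring := NewNodeLabelPredicate([]string{"retiring"}, false)
	if fits, err := avoidRetiring(pod, nodeInfo); !fits {
		fmt.Printf("node is marked retiring: %v\n", err)
	}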
type PersistentVolumeClaimInfo ¶ added in v1.2.0
type PersistentVolumeClaimInfo interface {
GetPersistentVolumeClaimInfo(namespace string, pvcID string) (*api.PersistentVolumeClaim, error)
}
type PersistentVolumeInfo ¶ added in v1.2.0
type PersistentVolumeInfo interface {
GetPersistentVolumeInfo(pvID string) (*api.PersistentVolume, error)
}
type PodAffinityChecker ¶ added in v1.3.0
type PodAffinityChecker struct {
// contains filtered or unexported fields
}
func (*PodAffinityChecker) AnyPodMatchesPodAffinityTerm ¶ added in v1.3.0
func (checker *PodAffinityChecker) AnyPodMatchesPodAffinityTerm(pod *api.Pod, allPods []*api.Pod, node *api.Node, podAffinityTerm api.PodAffinityTerm) (bool, error)
AnyPodMatchesPodAffinityTerm checks if any of the given pods matches the specified podAffinityTerm.
func (*PodAffinityChecker) InterPodAffinityMatches ¶ added in v1.3.0
func (checker *PodAffinityChecker) InterPodAffinityMatches(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
func (*PodAffinityChecker) NodeMatchPodAffinityAntiAffinity ¶ added in v1.3.0
func (checker *PodAffinityChecker) NodeMatchPodAffinityAntiAffinity(pod *api.Pod, allPods []*api.Pod, node *api.Node) bool
NodeMatchPodAffinityAntiAffinity checks if the node matches the requiredDuringScheduling affinity/anti-affinity rules indicated by the pod.
func (*PodAffinityChecker) NodeMatchesHardPodAffinity ¶ added in v1.3.0
func (checker *PodAffinityChecker) NodeMatchesHardPodAffinity(pod *api.Pod, allPods []*api.Pod, node *api.Node, podAffinity *api.PodAffinity) bool
NodeMatchesHardPodAffinity checks whether the given node has pods that satisfy all the required pod affinity scheduling rules; it returns true only if they do.
func (*PodAffinityChecker) NodeMatchesHardPodAntiAffinity ¶ added in v1.3.0
func (checker *PodAffinityChecker) NodeMatchesHardPodAntiAffinity(pod *api.Pod, allPods []*api.Pod, node *api.Node, podAntiAffinity *api.PodAntiAffinity) bool
NodeMatchesHardPodAntiAffinity checks whether the given node has pods that satisfy all the required pod anti-affinity scheduling rules, and also whether putting the pod onto the node would break any anti-affinity rules indicated by existing pods. It returns true only if both conditions hold.
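These checker methods are exercised through the predicate returned by NewPodAffinityPredicate; a hedged construction sketch (the failure-domain keys shown are the conventional hostname and zone label keys, and nodeGetter/podLister stand for the scheduler's cache-backed NodeInfo and PodLister implementations):

	pred := NewPodAffinityPredicate(nodeGetter, podLister,
		[]string{"kubernetes.io/hostname", "failure-domain.beta.kubernetes.io/zone"})

	if fits, err := pred(pod, nodeInfo); !fits {
		// err is expected to be ErrPodAffinityNotMatch
		fmt.Printf("affinity rules not satisfied: %v\n", err)
	}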
type PredicateFailureError ¶ added in v1.3.0
type PredicateFailureError struct {
PredicateName string
}
func (*PredicateFailureError) Error ¶ added in v1.3.0
func (e *PredicateFailureError) Error() string
type ServiceAffinity ¶
type ServiceAffinity struct {
// contains filtered or unexported fields
}
func (*ServiceAffinity) CheckServiceAffinity ¶
func (s *ServiceAffinity) CheckServiceAffinity(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
CheckServiceAffinity ensures that only the nodes that match the specified labels are considered for scheduling. The set of labels to be considered is provided to the struct (ServiceAffinity). The pod is checked for the labels; any missing labels are then checked on the nodes that host the service pods (peers) of the given pod.
We add an implicit selector requiring some particular value V for label L to a pod, if:
- L is listed in the ServiceAffinity object that is passed into the function
- the pod does not have any NodeSelector for L
- some other pod from the same service is already scheduled onto a node that has value V for label L
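A construction sketch that keeps pods of the same service on nodes sharing one "zone" value (podLister, serviceLister, and nodeGetter are placeholders for the scheduler's usual cache-backed implementations):

	affinity := NewServiceAffinityPredicate(podLister, serviceLister, nodeGetter, []string{"zone"})

	// If a peer pod of the same service already runs on a node labeled
	// zone=us-east-1a, nodes without that label value are rejected.
	if fits, err := affinity(pod, nodeInfo); !fits {
		fmt.Printf("service affinity violated: %v\n", err)
	}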
type TolerationMatch ¶ added in v1.3.0
type TolerationMatch struct {
// contains filtered or unexported fields
}
func (*TolerationMatch) PodToleratesNodeTaints ¶ added in v1.3.0
func (t *TolerationMatch) PodToleratesNodeTaints(pod *api.Pod, nodeInfo *schedulercache.NodeInfo) (bool, error)
type VolumeFilter ¶ added in v1.2.0
type VolumeFilter struct {
	// Filter normal volumes
	FilterVolume           func(vol *api.Volume) (id string, relevant bool)
	FilterPersistentVolume func(pv *api.PersistentVolume) (id string, relevant bool)
}
VolumeFilter contains information on how to filter PD Volumes when checking PD Volume caps
var EBSVolumeFilter VolumeFilter = VolumeFilter{
	FilterVolume: func(vol *api.Volume) (string, bool) {
		if vol.AWSElasticBlockStore != nil {
			return vol.AWSElasticBlockStore.VolumeID, true
		}
		return "", false
	},

	FilterPersistentVolume: func(pv *api.PersistentVolume) (string, bool) {
		if pv.Spec.AWSElasticBlockStore != nil {
			return pv.Spec.AWSElasticBlockStore.VolumeID, true
		}
		return "", false
	},
}
EBSVolumeFilter is a VolumeFilter for filtering AWS ElasticBlockStore Volumes
var GCEPDVolumeFilter VolumeFilter = VolumeFilter{
	FilterVolume: func(vol *api.Volume) (string, bool) {
		if vol.GCEPersistentDisk != nil {
			return vol.GCEPersistentDisk.PDName, true
		}
		return "", false
	},

	FilterPersistentVolume: func(pv *api.PersistentVolume) (string, bool) {
		if pv.Spec.GCEPersistentDisk != nil {
			return pv.Spec.GCEPersistentDisk.PDName, true
		}
		return "", false
	},
}
GCEPDVolumeFilter is a VolumeFilter for filtering GCE PersistentDisk Volumes
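A filter for another volume type would follow the same shape; a purely hypothetical sketch ("Foo" is invented for illustration and is not part of this package):

	var fooVolumeFilter = VolumeFilter{
		// Return (uniqueVolumeID, true) for volumes of the relevant type
		// and ("", false) for everything else; only relevant volumes are
		// counted against the maximum.
		FilterVolume: func(vol *api.Volume) (string, bool) {
			return "", false // no inline "Foo" volume source in this sketch
		},
		FilterPersistentVolume: func(pv *api.PersistentVolume) (string, bool) {
			return "", false
		},
	}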
type VolumeZoneChecker ¶ added in v1.2.0
type VolumeZoneChecker struct {
// contains filtered or unexported fields
}