Documentation ¶
Index ¶
- func CheckPodsExceedingFreeResources(pods []*api.Pod, allocatable api.ResourceList) (fitting []*api.Pod, notFittingCPU, notFittingMemory []*api.Pod)
- func NewMaxPDVolumeCountPredicate(filter VolumeFilter, maxVolumes int, pvInfo PersistentVolumeInfo, ...) algorithm.FitPredicate
- func NewNodeLabelPredicate(info NodeInfo, labels []string, presence bool) algorithm.FitPredicate
- func NewResourceFitPredicate(info NodeInfo) algorithm.FitPredicate
- func NewSelectorMatchPredicate(info NodeInfo) algorithm.FitPredicate
- func NewServiceAffinityPredicate(podLister algorithm.PodLister, serviceLister algorithm.ServiceLister, ...) algorithm.FitPredicate
- func NewVolumeZonePredicate(nodeInfo NodeInfo, pvInfo PersistentVolumeInfo, ...) algorithm.FitPredicate
- func NoDiskConflict(pod *api.Pod, nodeName string, nodeInfo *schedulercache.NodeInfo) (bool, error)
- func PodFitsHost(pod *api.Pod, nodeName string, nodeInfo *schedulercache.NodeInfo) (bool, error)
- func PodFitsHostPorts(pod *api.Pod, nodeName string, nodeInfo *schedulercache.NodeInfo) (bool, error)
- func PodMatchesNodeLabels(pod *api.Pod, node *api.Node) bool
- type CachedNodeInfo
- type ClientNodeInfo
- type InsufficientResourceError
- type MaxPDVolumeCountChecker
- type NodeInfo
- type NodeLabelChecker
- type NodeSelector
- type PersistentVolumeClaimInfo
- type PersistentVolumeInfo
- type ResourceFit
- type ServiceAffinity
- type StaticNodeInfo
- type VolumeFilter
- type VolumeZoneChecker
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func CheckPodsExceedingFreeResources ¶ added in v1.1.1
func CheckPodsExceedingFreeResources(pods []*api.Pod, allocatable api.ResourceList) (fitting []*api.Pod, notFittingCPU, notFittingMemory []*api.Pod)
func NewMaxPDVolumeCountPredicate ¶ added in v1.2.0
func NewMaxPDVolumeCountPredicate(filter VolumeFilter, maxVolumes int, pvInfo PersistentVolumeInfo, pvcInfo PersistentVolumeClaimInfo) algorithm.FitPredicate
NewMaxPDVolumeCountPredicate creates a predicate which evaluates whether a pod can fit based on the number of volumes which match a filter that it requests, and those that are already present. The maximum number is configurable to accommodate different systems.
The predicate looks for both volumes used directly, as well as PVC volumes that are backed by relevant volume types, counts the number of unique volumes, and rejects the new pod if it would place the total count over the maximum.
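As a sketch of how this constructor might be wired up (not an example from this package; the import paths, the stub fakePVInfo/fakePVCInfo types, and the volume cap of 39 are all assumptions made for illustration):
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates"
)

// fakePVInfo is a hypothetical in-memory PersistentVolumeInfo implementation.
type fakePVInfo map[string]*api.PersistentVolume

func (f fakePVInfo) GetPersistentVolumeInfo(pvID string) (*api.PersistentVolume, error) {
	if pv, ok := f[pvID]; ok {
		return pv, nil
	}
	return nil, fmt.Errorf("PersistentVolume %q not found", pvID)
}

// fakePVCInfo is a hypothetical in-memory PersistentVolumeClaimInfo implementation.
type fakePVCInfo map[string]*api.PersistentVolumeClaim

func (f fakePVCInfo) GetPersistentVolumeClaimInfo(namespace, pvcID string) (*api.PersistentVolumeClaim, error) {
	if pvc, ok := f[namespace+"/"+pvcID]; ok {
		return pvc, nil
	}
	return nil, fmt.Errorf("PersistentVolumeClaim %s/%s not found", namespace, pvcID)
}

func main() {
	// Build a predicate that rejects any pod whose EBS-backed volumes would push
	// the node past 39 unique volumes (39 is an arbitrary example cap).
	pred := predicates.NewMaxPDVolumeCountPredicate(
		predicates.EBSVolumeFilter, 39, fakePVInfo{}, fakePVCInfo{})

	// pred is an algorithm.FitPredicate; the scheduler invokes it for each
	// candidate (pod, node) pair during filtering.
	_ = pred
}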
func NewNodeLabelPredicate ¶
func NewNodeLabelPredicate(info NodeInfo, labels []string, presence bool) algorithm.FitPredicate
func NewResourceFitPredicate ¶
func NewResourceFitPredicate(info NodeInfo) algorithm.FitPredicate
func NewSelectorMatchPredicate ¶
func NewSelectorMatchPredicate(info NodeInfo) algorithm.FitPredicate
func NewServiceAffinityPredicate ¶
func NewServiceAffinityPredicate(podLister algorithm.PodLister, serviceLister algorithm.ServiceLister, nodeInfo NodeInfo, labels []string) algorithm.FitPredicate
func NewVolumeZonePredicate ¶ added in v1.2.0
func NewVolumeZonePredicate(nodeInfo NodeInfo, pvInfo PersistentVolumeInfo, pvcInfo PersistentVolumeClaimInfo) algorithm.FitPredicate
VolumeZonePredicate evaluates if a pod can fit due to the volumes it requests, given that some volumes may have zone scheduling constraints. The requirement is that any volume zone-labels must match the equivalent zone-labels on the node. It is OK for the node to have more zone-label constraints (for example, a hypothetical replicated volume might allow region-wide access).
Currently this is only supported with PersistentVolumeClaims, and looks to the labels only on the bound PersistentVolume.
Working with volumes declared inline in the pod specification (i.e. not using a PersistentVolume) is likely to be harder, as it would require determining the zone of a volume during scheduling, and that is likely to require calling out to the cloud provider. It seems that we are moving away from inline volume declarations anyway.
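For illustration only, a fragment showing the label relationship the predicate checks, assuming the internal api package is imported as api and assuming the conventional failure-domain zone/region label keys (neither is defined by this package):
// The predicate passes only when every zone/region label on the bound PV is also
// present, with the same value, on the candidate node.
pv := &api.PersistentVolume{
	ObjectMeta: api.ObjectMeta{
		Name:   "pv-disk-1",
		Labels: map[string]string{"failure-domain.beta.kubernetes.io/zone": "us-east-1a"},
	},
}
node := &api.Node{
	ObjectMeta: api.ObjectMeta{
		Name: "node-1",
		Labels: map[string]string{
			"failure-domain.beta.kubernetes.io/zone":   "us-east-1a",
			"failure-domain.beta.kubernetes.io/region": "us-east-1",
		},
	},
}
// A pod whose PVC binds to pv fits node; it would not fit a node labeled us-east-1b.
_, _ = pv, node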
func NoDiskConflict ¶
func NoDiskConflict(pod *api.Pod, nodeName string, nodeInfo *schedulercache.NodeInfo) (bool, error)
NoDiskConflict evaluates if a pod can fit based on the volumes it requests and those that are already mounted. If a volume is already mounted on the node, another pod that uses the same volume cannot be scheduled there. This is specific to GCE PD, Amazon EBS, and Ceph RBD for now:
- GCE PD allows multiple mounts as long as they are all read-only
- AWS EBS forbids any two pods from mounting the same volume ID
- Ceph RBD forbids two pods that share at least one monitor and use the same pool and image
TODO: migrate this into some per-volume specific code?
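A fragment illustrating the GCE PD rule, assuming the internal api package is imported as api; the pod layout and disk name are made up:
// Pod A mounts the PD read-write; Pod B asks for the same PD, also read-write.
// NoDiskConflict reports false for Pod B on the node already running Pod A,
// because GCE PD only tolerates multiple mounts when all of them are read-only.
rw := api.Volume{
	Name: "data",
	VolumeSource: api.VolumeSource{
		GCEPersistentDisk: &api.GCEPersistentDiskVolumeSource{PDName: "shared-disk", ReadOnly: false},
	},
}
podA := &api.Pod{Spec: api.PodSpec{Volumes: []api.Volume{rw}}}
podB := &api.Pod{Spec: api.PodSpec{Volumes: []api.Volume{rw}}}
_, _ = podA, podB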
func PodFitsHost ¶
func PodFitsHost(pod *api.Pod, nodeName string, nodeInfo *schedulercache.NodeInfo) (bool, error)
func PodFitsHostPorts ¶ added in v1.1.1
func PodFitsHostPorts(pod *api.Pod, nodeName string, nodeInfo *schedulercache.NodeInfo) (bool, error)
Types ¶
type CachedNodeInfo ¶ added in v1.2.0
type CachedNodeInfo struct {
*cache.StoreToNodeLister
}
func (*CachedNodeInfo) GetNodeInfo ¶ added in v1.2.0
func (c *CachedNodeInfo) GetNodeInfo(id string) (*api.Node, error)
GetNodeInfo returns cached data for the node 'id'.
type ClientNodeInfo ¶
func (ClientNodeInfo) GetNodeInfo ¶
func (nodes ClientNodeInfo) GetNodeInfo(nodeID string) (*api.Node, error)
type InsufficientResourceError ¶ added in v1.2.0
type InsufficientResourceError struct {
// contains filtered or unexported fields
}
InsufficientResourceError is an error type that indicates what kind of resource limit is hit and caused the unfitting failure.
func (*InsufficientResourceError) Error ¶ added in v1.2.0
func (e *InsufficientResourceError) Error() string
type MaxPDVolumeCountChecker ¶ added in v1.2.0
type MaxPDVolumeCountChecker struct {
// contains filtered or unexported fields
}
type NodeLabelChecker ¶
type NodeLabelChecker struct {
// contains filtered or unexported fields
}
func (*NodeLabelChecker) CheckNodeLabelPresence ¶
func (n *NodeLabelChecker) CheckNodeLabelPresence(pod *api.Pod, nodeName string, nodeInfo *schedulercache.NodeInfo) (bool, error)
CheckNodeLabelPresence checks whether all of the specified labels exist on a node, regardless of their value. If "presence" is false, it returns false if any of the requested labels matches any of the node's labels, and true otherwise. If "presence" is true, it returns false if any of the requested labels does not match any of the node's labels, and true otherwise.
Consider the cases where nodes are placed in regions/zones/racks and these are identified by labels. In some cases, it is required that only nodes that are part of ANY of the defined regions/zones/racks be selected.
Alternately, eliminating nodes that have a certain label, regardless of value, is also useful. A node may have a label with "retiring" as the key and the date as the value, and it may be desirable to avoid scheduling new pods on this node.
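A sketch of both uses via NewNodeLabelPredicate, assuming a value nodeInfo that implements the NodeInfo interface (for example a CachedNodeInfo) and assuming the package is imported as predicates; the label keys are illustrative:
// Require that candidate nodes carry both the "region" and "zone" label keys,
// whatever their values.
requireLabels := predicates.NewNodeLabelPredicate(nodeInfo, []string{"region", "zone"}, true)

// Exclude any node that carries a "retiring" label, whatever its value.
avoidRetiring := predicates.NewNodeLabelPredicate(nodeInfo, []string{"retiring"}, false)

_, _ = requireLabels, avoidRetiring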
type NodeSelector ¶
type NodeSelector struct {
// contains filtered or unexported fields
}
func (*NodeSelector) PodSelectorMatches ¶
func (n *NodeSelector) PodSelectorMatches(pod *api.Pod, nodeName string, nodeInfo *schedulercache.NodeInfo) (bool, error)
type PersistentVolumeClaimInfo ¶ added in v1.2.0
type PersistentVolumeClaimInfo interface {
GetPersistentVolumeClaimInfo(namespace string, pvcID string) (*api.PersistentVolumeClaim, error)
}
type PersistentVolumeInfo ¶ added in v1.2.0
type PersistentVolumeInfo interface {
GetPersistentVolumeInfo(pvID string) (*api.PersistentVolume, error)
}
type ResourceFit ¶
type ResourceFit struct {
// contains filtered or unexported fields
}
func (*ResourceFit) PodFitsResources ¶
func (r *ResourceFit) PodFitsResources(pod *api.Pod, nodeName string, nodeInfo *schedulercache.NodeInfo) (bool, error)
PodFitsResources calculates fit based on requested, rather than used, resources.
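For example, a node with 2 CPUs allocatable that already hosts pods requesting a combined 1.5 CPUs can still accept a pod requesting 500m, regardless of how much CPU the running pods actually consume, because only the declared requests are summed against the node's allocatable resources.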
type ServiceAffinity ¶
type ServiceAffinity struct {
// contains filtered or unexported fields
}
func (*ServiceAffinity) CheckServiceAffinity ¶
func (s *ServiceAffinity) CheckServiceAffinity(pod *api.Pod, nodeName string, nodeInfo *schedulercache.NodeInfo) (bool, error)
CheckServiceAffinity ensures that only the nodes that match the specified labels are considered for scheduling. The set of labels to be considered is provided to the struct (ServiceAffinity). The pod is checked for the labels and any missing labels are then checked in the node that hosts the service pods (peers) for the given pod.
We add an implicit selector requiring some particular value V for label L to a pod, if:
- L is listed in the ServiceAffinity object that is passed into the function
- the pod does not have any NodeSelector for L
- some other pod from the same service is already scheduled onto a node that has value V for label L
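For example, if "zone" is one of the ServiceAffinity labels, a pod has no node selector for "zone", and another pod from the same service is already running on a node labeled zone=us-east-1a, then the new pod is implicitly restricted to nodes labeled zone=us-east-1a.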
type StaticNodeInfo ¶
func (StaticNodeInfo) GetNodeInfo ¶
func (nodes StaticNodeInfo) GetNodeInfo(nodeID string) (*api.Node, error)
type VolumeFilter ¶ added in v1.2.0
type VolumeFilter struct {
	// Filter normal volumes
	FilterVolume           func(vol *api.Volume) (id string, relevant bool)
	FilterPersistentVolume func(pv *api.PersistentVolume) (id string, relevant bool)
}
VolumeFilter contains information on how to filter PD Volumes when checking PD Volume caps
var EBSVolumeFilter VolumeFilter = VolumeFilter{
	FilterVolume: func(vol *api.Volume) (string, bool) {
		if vol.AWSElasticBlockStore != nil {
			return vol.AWSElasticBlockStore.VolumeID, true
		}
		return "", false
	},
	FilterPersistentVolume: func(pv *api.PersistentVolume) (string, bool) {
		if pv.Spec.AWSElasticBlockStore != nil {
			return pv.Spec.AWSElasticBlockStore.VolumeID, true
		}
		return "", false
	},
}
EBSVolumeFilter is a VolumeFilter for filtering AWS ElasticBlockStore Volumes
var GCEPDVolumeFilter VolumeFilter = VolumeFilter{
	FilterVolume: func(vol *api.Volume) (string, bool) {
		if vol.GCEPersistentDisk != nil {
			return vol.GCEPersistentDisk.PDName, true
		}
		return "", false
	},
	FilterPersistentVolume: func(pv *api.PersistentVolume) (string, bool) {
		if pv.Spec.GCEPersistentDisk != nil {
			return pv.Spec.GCEPersistentDisk.PDName, true
		}
		return "", false
	},
}
GCEPDVolumeFilter is a VolumeFilter for filtering GCE PersistentDisk Volumes
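The same contract extends to other volume types. As a hypothetical illustration (not part of this package, and assuming the api and predicates packages are imported), a filter that counts NFS volumes keyed by server and export path could look like:
// Hypothetical filter, keyed by NFS server and export path; it only illustrates
// the contract of returning a unique volume ID plus a relevance flag.
var nfsVolumeFilter = predicates.VolumeFilter{
	FilterVolume: func(vol *api.Volume) (string, bool) {
		if vol.NFS != nil {
			return vol.NFS.Server + ":" + vol.NFS.Path, true
		}
		return "", false
	},
	FilterPersistentVolume: func(pv *api.PersistentVolume) (string, bool) {
		if pv.Spec.NFS != nil {
			return pv.Spec.NFS.Server + ":" + pv.Spec.NFS.Path, true
		}
		return "", false
	},
}
Such a filter could then be passed to NewMaxPDVolumeCountPredicate in place of EBSVolumeFilter or GCEPDVolumeFilter.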
type VolumeZoneChecker ¶ added in v1.2.0
type VolumeZoneChecker struct {
// contains filtered or unexported fields
}