Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ScheduleAnywhere ¶
func ScheduleAnywhere(_ *framework.NodeInfo) bool
ScheduleAnywhere can be passed to TrySchedulePods when there are no extra restrictions on which nodes to consider.
Types ¶
type HintKey ¶
type HintKey string
HintKey uniquely identifies a pod for the sake of scheduling hints.
func HintKeyFromPod ¶
func HintKeyFromPod(pod *apiv1.Pod) HintKey
HintKeyFromPod generates a HintKey for a given pod.
type HintingSimulator ¶
type HintingSimulator struct {
// contains filtered or unexported fields
}
HintingSimulator is a helper object for simulating scheduler behavior.
func NewHintingSimulator ¶
func NewHintingSimulator(predicateChecker predicatechecker.PredicateChecker) *HintingSimulator
NewHintingSimulator returns a new HintingSimulator.
func (*HintingSimulator) DropOldHints ¶
func (s *HintingSimulator) DropOldHints()
DropOldHints drops old scheduling hints.
func (*HintingSimulator) TrySchedulePods ¶
func (s *HintingSimulator) TrySchedulePods(clusterSnapshot clustersnapshot.ClusterSnapshot, pods []*apiv1.Pod, isNodeAcceptable func(*framework.NodeInfo) bool, breakOnFailure bool) ([]Status, int, error)
TrySchedulePods attempts to schedule the provided pods on any acceptable nodes. A node is considered acceptable iff isNodeAcceptable() returns true for it. Returns a list of Statuses for the scheduled pods with their assigned nodes and the count of overflowing controllers, or an error if an unexpected error occurs. If breakOnFailure is set to true, the function stops after the first scheduling attempt that fails; this is useful if all provided pods need to be scheduled. Note: this function does not fork clusterSnapshot; that has to be done by the caller.
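The breakOnFailure semantics can be sketched with a simplified, self-contained loop; the Pod, Status, and trySchedulePods names below are illustrative stand-ins, not the package's real types:

```go
package main

import "fmt"

// Pod and Status are simplified stand-ins for the real types.
type Pod struct{ Name string }
type Status struct {
	Pod      Pod
	NodeName string
}

// trySchedulePods mimics the TrySchedulePods control flow: it attempts to
// place each pod, and if breakOnFailure is true it stops after the first
// pod that cannot be scheduled; otherwise it skips that pod and continues.
func trySchedulePods(pods []Pod, fits func(Pod) (string, bool), breakOnFailure bool) []Status {
	var statuses []Status
	for _, pod := range pods {
		node, ok := fits(pod)
		if !ok {
			if breakOnFailure {
				break
			}
			continue
		}
		statuses = append(statuses, Status{Pod: pod, NodeName: node})
	}
	return statuses
}

func main() {
	pods := []Pod{{"a"}, {"b"}, {"c"}}
	// Toy predicate: only pods "a" and "c" fit anywhere.
	fits := func(p Pod) (string, bool) {
		if p.Name == "b" {
			return "", false
		}
		return "node-1", true
	}
	fmt.Println(len(trySchedulePods(pods, fits, true)))  // stops at "b": 1
	fmt.Println(len(trySchedulePods(pods, fits, false))) // skips "b": 2
}
```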
type Hints ¶
type Hints struct {
// contains filtered or unexported fields
}
Hints can be used for tracking past scheduling decisions. It is essentially equivalent to a map, with the ability to replace whole generations of keys. See DropOld() for more information.
func (*Hints) DropOld ¶
func (h *Hints) DropOld()
DropOld cleans up old keys. All keys are considered old if they were added before the previous call to DropOld().
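The generation-dropping behavior DropOld describes can be sketched as a minimal two-generation map; the type and field names below are illustrative, not the package's unexported implementation:

```go
package main

import "fmt"

// hints keeps two generations of entries: current and old. Lookups consult
// both; DropOld discards the old generation and demotes current to old, so
// any key not re-added since the previous DropOld call disappears.
type hints struct {
	current map[string]string
	old     map[string]string
}

func newHints() *hints {
	return &hints{current: map[string]string{}, old: map[string]string{}}
}

func (h *hints) Set(key, val string) { h.current[key] = val }

func (h *hints) Get(key string) (string, bool) {
	if v, ok := h.current[key]; ok {
		return v, true
	}
	v, ok := h.old[key]
	return v, ok
}

// DropOld replaces the old generation with the current one; keys added
// before the previous DropOld call are gone after this returns.
func (h *hints) DropOld() {
	h.old = h.current
	h.current = map[string]string{}
}

func main() {
	h := newHints()
	h.Set("pod-a", "node-1")
	h.DropOld() // "pod-a" survives one rotation (now in the old generation)
	_, ok := h.Get("pod-a")
	fmt.Println(ok) // true
	h.DropOld() // a second rotation without re-adding drops it
	_, ok = h.Get("pod-a")
	fmt.Println(ok) // false
}
```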
type SimilarPodsScheduling ¶
type SimilarPodsScheduling struct {
// contains filtered or unexported fields
}
SimilarPodsScheduling stores a mapping from controller ref to SimilarPodsSchedulingInfo.
func NewSimilarPodsScheduling ¶
func NewSimilarPodsScheduling() *SimilarPodsScheduling
NewSimilarPodsScheduling creates a new SimilarPodsScheduling.
func (*SimilarPodsScheduling) IsSimilarUnschedulable ¶
func (p *SimilarPodsScheduling) IsSimilarUnschedulable(pod *apiv1.Pod) bool
IsSimilarUnschedulable reports whether a pod similar to the given pod has already been recorded as unschedulable in SimilarPodsScheduling.
func (*SimilarPodsScheduling) OverflowingControllerCount ¶
func (p *SimilarPodsScheduling) OverflowingControllerCount() int
OverflowingControllerCount returns the number of controllers that had too many different pods to be effectively cached.
func (*SimilarPodsScheduling) SetUnschedulable ¶
func (p *SimilarPodsScheduling) SetUnschedulable(pod *apiv1.Pod)
SetUnschedulable records scheduling info for the given pod in SimilarPodsScheduling.
type SimilarPodsSchedulingInfo ¶
type SimilarPodsSchedulingInfo struct {
// contains filtered or unexported fields
}
The SimilarPodsSchedulingInfo data structure is used to avoid running predicates #pending_pods * #nodes times, which turned out to be very expensive when there are thousands of pending pods. The optimization is based on the assumption that pods at that scale are likely created by controllers (Deployment, ReplicationController, ...). So instead of running all predicates for every pod, we first check whether we have already seen an identical pod (in this step we are not binpacking, just checking whether the pod would fit anywhere right now), and if so we reuse the result we already calculated.

To decide whether two pods are similar enough, we check that they have identical labels and spec and are owned by the same controller. The problem is that the whole SimilarPodsSchedulingInfo struct is not hashable, and keeping a flat list and running deep equality checks would likely also be expensive. So instead we use the controller UID as the key for the initial lookup, and only run the full comparison against the set of SimilarPodsSchedulingInfo entries created for pods owned by that controller.
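The two-stage lookup described above can be sketched in self-contained form; the pod type below is a simplified stand-in, with SpecKey approximating the full label-and-spec comparison:

```go
package main

import "fmt"

// pod is a simplified stand-in: ControllerUID identifies the owning
// controller, and SpecKey stands in for the full label+spec comparison.
type pod struct {
	ControllerUID string
	SpecKey       string
}

// similarPodsCache keys the cheap first lookup by controller UID, then runs
// the full comparison only against entries recorded for that controller,
// rather than deep-comparing against every pod ever seen.
type similarPodsCache struct {
	byController map[string][]pod
}

func (c *similarPodsCache) setUnschedulable(p pod) {
	c.byController[p.ControllerUID] = append(c.byController[p.ControllerUID], p)
}

func (c *similarPodsCache) isSimilarUnschedulable(p pod) bool {
	for _, seen := range c.byController[p.ControllerUID] {
		if seen.SpecKey == p.SpecKey {
			return true
		}
	}
	return false
}

func main() {
	c := &similarPodsCache{byController: map[string][]pod{}}
	c.setUnschedulable(pod{ControllerUID: "rs-1", SpecKey: "web"})
	// An identical pod from the same controller hits the cache...
	fmt.Println(c.isSimilarUnschedulable(pod{ControllerUID: "rs-1", SpecKey: "web"})) // true
	// ...while an identical-looking pod from a different controller does not.
	fmt.Println(c.isSimilarUnschedulable(pod{ControllerUID: "rs-2", SpecKey: "web"})) // false
}
```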