Documentation ¶
Index ¶
- func IgnoredForTopology(p *v1.Pod) bool
- func TopologyListOptions(namespace string, labelSelector *metav1.LabelSelector) *client.ListOptions
- type InFlightNode
- type Node
- type Preferences
- type Queue
- type Scheduler
- type Topology
- func (t *Topology) AddRequirements(podRequirements, nodeRequirements scheduling.Requirements, p *v1.Pod) (scheduling.Requirements, error)
- func (t *Topology) Record(p *v1.Pod, requirements scheduling.Requirements)
- func (t *Topology) Register(topologyKey string, domain string)
- func (t *Topology) Update(ctx context.Context, p *v1.Pod) error
- type TopologyGroup
- func (t *TopologyGroup) AddOwner(key types.UID)
- func (t *TopologyGroup) Counts(pod *v1.Pod, requirements scheduling.Requirements) bool
- func (t *TopologyGroup) Get(pod *v1.Pod, podDomains, nodeDomains sets.Set) sets.Set
- func (t *TopologyGroup) Hash() uint64
- func (t *TopologyGroup) IsOwnedBy(key types.UID) bool
- func (t *TopologyGroup) Record(domains ...string)
- func (t *TopologyGroup) Register(domains ...string)
- func (t *TopologyGroup) RemoveOwner(key types.UID)
- type TopologyNodeFilter
- type TopologyType
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func IgnoredForTopology ¶ added in v0.5.3
func IgnoredForTopology(p *v1.Pod) bool
func TopologyListOptions ¶ added in v0.5.3
func TopologyListOptions(namespace string, labelSelector *metav1.LabelSelector) *client.ListOptions
Types ¶
type InFlightNode ¶ added in v0.10.0
func NewInFlightNode ¶ added in v0.10.0
func NewInFlightNode(n *state.Node, topology *Topology, startupTaints []v1.Taint, daemonResources v1.ResourceList) *InFlightNode
type Node ¶ added in v0.7.1
type Node struct {
	scheduling.NodeTemplate

	InstanceTypeOptions []cloudprovider.InstanceType
	Pods                []*v1.Pod
	// contains filtered or unexported fields
}
Node is a set of constraints, compatible pods, and possible instance types that could fulfill these constraints. This will be turned into one or more actual node instances within the cluster after bin packing.
func NewNode ¶ added in v0.7.1
func NewNode(nodeTemplate *scheduling.NodeTemplate, topology *Topology, daemonResources v1.ResourceList, instanceTypes []cloudprovider.InstanceType) *Node
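Bin packing is what turns one Node into concrete instances. As a rough, self-contained illustration of that grouping step (not the package's actual algorithm; the pod names, CPU requests, and 1000m capacity below are hypothetical), a first-fit pass over CPU requests looks like this:

package main

import "fmt"

// inFlight is a toy stand-in for an in-flight node: a remaining-capacity
// constraint plus the pods assigned to it so far.
type inFlight struct {
	freeMillis int64
	pods       []string
}

func main() {
	// Hypothetical pod CPU requests in millicores.
	pods := map[string]int64{"web-0": 500, "web-1": 500, "batch-0": 900}
	const nodeCapacity = int64(1000) // hypothetical 1-vCPU instance type

	var nodes []*inFlight
	for name, cpu := range pods {
		placed := false
		for _, n := range nodes {
			if n.freeMillis >= cpu { // first fit: reuse an open node
				n.freeMillis -= cpu
				n.pods = append(n.pods, name)
				placed = true
				break
			}
		}
		if !placed { // nothing compatible: open a new in-flight node
			nodes = append(nodes, &inFlight{freeMillis: nodeCapacity - cpu, pods: []string{name}})
		}
	}
	fmt.Printf("packed %d pods onto %d node(s)\n", len(pods), len(nodes))
}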
type Preferences ¶ added in v0.9.0
type Preferences struct{}
type Queue ¶ added in v0.9.0
type Queue struct {
// contains filtered or unexported fields
}
Queue is a queue of pods to be scheduled. It is used to repeatedly attempt to schedule pods for as long as progress is being made. This retrying is sometimes required to maintain zonal topology spreads with constrained pods, and it can satisfy pod affinities that occur within a batch of pods if enough constraints are provided.
func NewQueue ¶ added in v0.9.0
NewQueue constructs a new queue given the input pods, sorting them to optimize for bin-packing into nodes.
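A minimal, self-contained sketch of that "retry while progressing" behavior (illustrative only; the real Queue is unexported, and trySchedule here is a hypothetical callback):

// scheduleAll retries failed pods until a full pass makes no progress,
// which is what lets constrained spreads and batch affinities resolve.
func scheduleAll(pods []string, trySchedule func(string) bool) (unschedulable []string) {
	for len(pods) > 0 {
		var failed []string
		for _, p := range pods {
			if !trySchedule(p) {
				failed = append(failed, p)
			}
		}
		if len(failed) == len(pods) {
			return failed // no progress this pass: stop retrying
		}
		pods = failed
	}
	return nil
}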
type Scheduler ¶
type Scheduler struct {
// contains filtered or unexported fields
}
func NewScheduler ¶
func NewScheduler(nodeTemplates []*scheduling.NodeTemplate, provisioners []v1alpha5.Provisioner, cluster *state.Cluster, topology *Topology, instanceTypes map[string][]cloudprovider.InstanceType, daemonOverhead map[*scheduling.NodeTemplate]v1.ResourceList, recorder events.Recorder) *Scheduler
type Topology ¶
type Topology struct {
// contains filtered or unexported fields
}
func NewTopology ¶ added in v0.9.0
func (*Topology) AddRequirements ¶ added in v0.9.0
func (t *Topology) AddRequirements(podRequirements, nodeRequirements scheduling.Requirements, p *v1.Pod) (scheduling.Requirements, error)
AddRequirements tightens the input requirements by adding additional requirements that are enforced by topology spreads, affinities, anti-affinities, or inverse anti-affinities. The nodeRequirements describe the node that we are currently considering placing the pod on. It returns the newly tightened requirements, or an error if the resulting set of requirements cannot be satisfied.
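Conceptually, tightening is a per-key intersection of what the pod allows with what the node allows, failing when an intersection comes up empty. A self-contained sketch of that idea (scheduling.Requirements is richer than this; here each requirement is modeled as a set of allowed label values):

package requirements // illustrative only

import "fmt"

// tighten intersects the allowed values per label key; an empty
// intersection means the combined requirements cannot be satisfied.
func tighten(pod, node map[string][]string) (map[string][]string, error) {
	out := map[string][]string{}
	for key, podVals := range pod {
		nodeVals, ok := node[key]
		if !ok {
			out[key] = podVals // node is unconstrained on this key
			continue
		}
		allowed := map[string]bool{}
		for _, v := range nodeVals {
			allowed[v] = true
		}
		var both []string
		for _, v := range podVals {
			if allowed[v] {
				both = append(both, v)
			}
		}
		if len(both) == 0 {
			return nil, fmt.Errorf("requirements cannot be satisfied for key %q", key)
		}
		out[key] = both
	}
	// Keys the pod is unconstrained on still carry the node's constraints.
	for key, nodeVals := range node {
		if _, ok := pod[key]; !ok {
			out[key] = nodeVals
		}
	}
	return out, nil
}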
func (*Topology) Record ¶ added in v0.9.0
func (t *Topology) Record(p *v1.Pod, requirements scheduling.Requirements)
Record records the topology changes that result from scheduling pod p on a node with the given requirements.
func (*Topology) Register ¶ added in v0.9.0
Register is used to register a domain as available across topologies for the given topology key.
func (*Topology) Update ¶ added in v0.9.0
Update unregisters the pod as the owner of all affinities, then creates any new topologies based on the pod spec and registers the pod as the owner of all associated affinities, new or old. This allows Update() to be called after relaxation of a preference to properly break the topology <-> owner relationship so that the preferred topology no longer influences scheduling.
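The relaxation flow this enables, sketched with hypothetical callbacks standing in for the scheduler's internals (try, relax, and update are not part of this package's API):

// scheduleWithRelaxation retries a pod, dropping one preferred term per
// failure and rebuilding topology ownership each time via Update, so the
// dropped preference stops influencing subsequent attempts.
func scheduleWithRelaxation(try func() bool, relax func() bool, update func() error) bool {
	for !try() {
		if !relax() {
			return false // nothing left to relax; pod stays pending
		}
		if err := update(); err != nil {
			return false
		}
	}
	return true
}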
type TopologyGroup ¶
type TopologyGroup struct {
	// Hashed Fields
	Key  string
	Type TopologyType
	// contains filtered or unexported fields
}
TopologyGroup is used to track pod counts that match a selector by the topology domain (e.g. SELECT COUNT(*) FROM pods GROUP BY(topology_key)).
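The SQL analogy maps directly onto a per-domain counter. A minimal sketch with hypothetical zone values:

package main

import "fmt"

func main() {
	// The moral equivalent of SELECT COUNT(*) FROM pods GROUP BY(topology_key)
	// for topology_key = topology.kubernetes.io/zone.
	counts := map[string]int{}
	for _, zone := range []string{"us-west-2a", "us-west-2a", "us-west-2b"} {
		counts[zone]++
	}
	fmt.Println(counts) // map[us-west-2a:2 us-west-2b:1]
	// The difference between the largest and smallest count is the skew
	// that a topology spread constraint's maxSkew bounds.
}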
func NewTopologyGroup ¶
func NewTopologyGroup(topologyType TopologyType, topologyKey string, pod *v1.Pod, namespaces utilsets.String, labelSelector *metav1.LabelSelector, maxSkew int32, domains utilsets.String) *TopologyGroup
func (*TopologyGroup) AddOwner ¶ added in v0.9.0
func (t *TopologyGroup) AddOwner(key types.UID)
func (*TopologyGroup) Counts ¶ added in v0.9.0
func (t *TopologyGroup) Counts(pod *v1.Pod, requirements scheduling.Requirements) bool
Counts returns true if the pod would count for the topology, given that it schedules to a node with the provided requirements.
func (*TopologyGroup) Hash ¶ added in v0.9.0
func (t *TopologyGroup) Hash() uint64
Hash is used so we can track single topologies that affect multiple groups of pods. If a deployment has 100 pods with self anti-affinity, we track that as a single topology with 100 owners instead of 100 separate topologies.
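A self-contained sketch of that dedup pattern, hashing the same two exported fields (Key, Type) with stdlib FNV (the package's actual Hash() implementation is unexported and may differ):

package main

import (
	"fmt"
	"hash/fnv"
)

type group struct {
	Key    string
	Type   byte
	owners map[string]bool
}

func hashGroup(key string, typ byte) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	h.Write([]byte{typ})
	return h.Sum64()
}

func main() {
	seen := map[uint64]*group{}
	// 100 pods with identical self anti-affinity collapse into one group
	// with 100 owners instead of 100 separate groups.
	for i := 0; i < 100; i++ {
		k := hashGroup("kubernetes.io/hostname", 2) // 2 ~ TopologyTypePodAntiAffinity
		g, ok := seen[k]
		if !ok {
			g = &group{Key: "kubernetes.io/hostname", Type: 2, owners: map[string]bool{}}
			seen[k] = g
		}
		g.owners[fmt.Sprintf("pod-%d", i)] = true // the AddOwner analogue
	}
	fmt.Printf("groups=%d, owners=%d\n", len(seen), len(seen[hashGroup("kubernetes.io/hostname", 2)].owners))
}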
func (*TopologyGroup) IsOwnedBy ¶ added in v0.9.0
func (t *TopologyGroup) IsOwnedBy(key types.UID) bool
func (*TopologyGroup) Record ¶ added in v0.9.0
func (t *TopologyGroup) Record(domains ...string)
func (*TopologyGroup) Register ¶
func (t *TopologyGroup) Register(domains ...string)
Register ensures that the topology is aware of the given domain names.
func (*TopologyGroup) RemoveOwner ¶ added in v0.9.0
func (t *TopologyGroup) RemoveOwner(key types.UID)
type TopologyNodeFilter ¶ added in v0.9.0
type TopologyNodeFilter []scheduling.Requirements
TopologyNodeFilter is used to determine if a given actual node or scheduling node matches the pod's node selectors and required node affinity terms. This is used with topology spread constraints to determine if the node should be included for topology counting purposes. This is only used with topology spread constraints as affinities/anti-affinities always count across all nodes. A nil or zero-value TopologyNodeFilter behaves well and the filter returns true for all nodes.
func MakeTopologyNodeFilter ¶ added in v0.9.0
func MakeTopologyNodeFilter(p *v1.Pod) TopologyNodeFilter
func (TopologyNodeFilter) Matches ¶ added in v0.9.0
func (t TopologyNodeFilter) Matches(node *v1.Node) bool
Matches returns true if the TopologyNodeFilter doesn't prohibit the node from participating in the topology.
func (TopologyNodeFilter) MatchesRequirements ¶ added in v0.9.0
func (t TopologyNodeFilter) MatchesRequirements(requirements scheduling.Requirements) bool
MatchesRequirements returns true if the TopologyNodeFilter doesn't prohibit a node with the requirements from participating in the topology. This method allows checking the requirements from a scheduling.Node to see if the node we will soon create participates in this topology.
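Since the filter is a slice of requirement sets derived from the pod's node selectors and required node affinity terms, a plausible reading is that a node participates if it satisfies at least one element (required node affinity terms in Kubernetes are ORed), and that an empty filter matches everything, as the type doc states. A self-contained sketch under those assumptions (nodeFilter and its label-map model are illustrative, not the package's types):

package main

import "fmt"

// nodeFilter is a toy filter: each element maps a label key to its allowed
// values; a node matches if it satisfies at least one element, and a
// nil/empty filter matches every node.
type nodeFilter []map[string][]string

func (f nodeFilter) matches(labels map[string]string) bool {
	if len(f) == 0 {
		return true // zero-value filter: all nodes participate
	}
	for _, reqs := range f {
		ok := true
		for key, allowed := range reqs {
			found := false
			for _, v := range allowed {
				if labels[key] == v {
					found = true
					break
				}
			}
			if !found {
				ok = false
				break
			}
		}
		if ok {
			return true
		}
	}
	return false
}

func main() {
	f := nodeFilter{{"topology.kubernetes.io/zone": {"us-west-2a", "us-west-2b"}}}
	fmt.Println(f.matches(map[string]string{"topology.kubernetes.io/zone": "us-west-2a"})) // true
	fmt.Println(f.matches(map[string]string{"topology.kubernetes.io/zone": "us-west-2c"})) // false
	fmt.Println(nodeFilter(nil).matches(nil))                                              // true
}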
type TopologyType ¶ added in v0.9.0
type TopologyType byte
const (
	TopologyTypeSpread TopologyType = iota
	TopologyTypePodAffinity
	TopologyTypePodAntiAffinity
)
func (TopologyType) String ¶ added in v0.9.0
func (t TopologyType) String() string