Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
var AzAwareTightlyPack = SparkBinPackFunction(func(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata) *PackingResult {
	packingResult := SingleAZTightlyPack(ctx, driverResources, executorResources, executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata)
	if packingResult.HasCapacity {
		return packingResult
	}
	return SparkBinPack(ctx, driverResources, executorResources, executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata, tightlyPackExecutors)
})
AzAwareTightlyPack is a SparkBinPackFunction that tries to place the driver pod on the highest-priority nodes possible before tightly packing executors, while also trying to ensure that everything fits in a single AZ. If it cannot fit everything into a single AZ, it falls back to the TightlyPack SparkBinPackFunction.
var DistributeEvenly = SparkBinPackFunction(func(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata) *PackingResult {
	return SparkBinPack(ctx, driverResources, executorResources, executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata, distributeExecutorsEvenly)
})
DistributeEvenly is a SparkBinPackFunction that tries to place the driver pod on the highest-priority nodes possible before distributing executors evenly.
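The even-distribution strategy can be sketched as a round-robin over the priority-ordered nodes. This is a minimal, hypothetical illustration: the package's real types (resources.Resources, resources.NodeGroupSchedulingMetadata) are replaced by a simple capacity map, and distributeEvenly here is a stand-in, not the package's distributeExecutorsEvenly.

```go
package main

import "fmt"

// distributeEvenly is a hypothetical sketch of even distribution: assign
// executors round-robin across the priority-ordered nodes, skipping nodes
// that have run out of capacity.
func distributeEvenly(count int, order []string, avail map[string]int, req int) ([]string, bool) {
	placed := make([]string, 0, count)
	for len(placed) < count {
		progressed := false
		for _, n := range order {
			if len(placed) == count {
				break
			}
			if avail[n] >= req {
				avail[n] -= req
				placed = append(placed, n)
				progressed = true
			}
		}
		if !progressed {
			return placed, false // no node can fit another executor
		}
	}
	return placed, true
}

func main() {
	avail := map[string]int{"node-a": 6, "node-b": 4}
	nodes, ok := distributeEvenly(4, []string{"node-a", "node-b"}, avail, 2)
	fmt.Println(nodes, ok) // [node-a node-b node-a node-b] true
}
```

Note how executors alternate between nodes instead of filling node-a first, which is the behavioral difference from TightlyPack.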
var MinimalFragmentation = SparkBinPackFunction(func(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata) *PackingResult {
	return SparkBinPack(ctx, driverResources, executorResources, executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata, minimalFragmentation)
})
MinimalFragmentation is a SparkBinPackFunction that tries to minimize Spark app fragmentation across the cluster. See minimalFragmentation for more details.
var SingleAZMinimalFragmentation = getSingleAZSparkBinFunction(minimalFragmentation)
SingleAZMinimalFragmentation attempts to use as few nodes as possible to schedule executors. When multiple nodes can fit n executors, it picks the node with the least available resources that still fits n executors; if there are several such nodes, it prefers the higher-priority node.
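The tie-breaking rule above can be sketched as a small selection function. This is a simplified, hypothetical illustration: pickNode and nodeInfo are stand-ins, resources are collapsed to a single integer dimension, and the real package operates on resources.NodeGroupSchedulingMetadata instead.

```go
package main

import (
	"fmt"
	"sort"
)

// nodeInfo is a hypothetical stand-in for the scheduler's per-node metadata.
type nodeInfo struct {
	name      string
	available int // simplified single-dimension resource
	priority  int // lower value = earlier in the node priority order
}

// pickNode sketches the minimal-fragmentation heuristic: among nodes that can
// fit the request, choose the one with the least available resources (the
// tightest fit); break ties by preferring the higher-priority node.
func pickNode(nodes []nodeInfo, request int) (string, bool) {
	candidates := make([]nodeInfo, 0, len(nodes))
	for _, n := range nodes {
		if n.available >= request {
			candidates = append(candidates, n)
		}
	}
	if len(candidates) == 0 {
		return "", false
	}
	sort.Slice(candidates, func(i, j int) bool {
		if candidates[i].available != candidates[j].available {
			return candidates[i].available < candidates[j].available
		}
		return candidates[i].priority < candidates[j].priority
	})
	return candidates[0].name, true
}

func main() {
	nodes := []nodeInfo{
		{"node-a", 16, 0}, // fits, but wastes the most headroom
		{"node-b", 8, 1},  // tightest fit, higher priority than node-c
		{"node-c", 8, 2},  // tightest fit, lower priority
	}
	name, ok := pickNode(nodes, 6)
	fmt.Println(name, ok) // node-b true
}
```

Preferring the tightest fit keeps large contiguous capacity (node-a here) free for future apps, which is the sense in which fragmentation is minimized.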
var SingleAZTightlyPack = getSingleAZSparkBinFunction(tightlyPackExecutors)
SingleAZTightlyPack is a SparkBinPackFunction that tries to place the driver pod on the highest-priority nodes possible before tightly packing executors, while also ensuring that everything fits in a single AZ. If it cannot fit everything into a single AZ, bin packing fails.
var TightlyPack = SparkBinPackFunction(func(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata) *PackingResult {
	return SparkBinPack(ctx, driverResources, executorResources, executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata, tightlyPackExecutors)
})
TightlyPack is a SparkBinPackFunction that tries to place the driver pod on the highest-priority nodes possible before tightly packing executors.
Functions ¶
func ComputePackingEfficiencies ¶ added in v0.12.0
func ComputePackingEfficiencies(
	nodeGroupSchedulingMetadata resources.NodeGroupSchedulingMetadata,
	reservedResources resources.NodeGroupResources) map[string]*PackingEfficiency
ComputePackingEfficiencies calculates utilization for all provided nodes, given the new reservation.
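Per node, the efficiency calculation is total resources used (existing plus newly reserved) divided by total capacity, tracked per resource type. The sketch below is hypothetical: nodeResources, efficiency, and computeEfficiency are simplified stand-ins for the package's resources types and PackingEfficiency.

```go
package main

import "fmt"

// nodeResources is a hypothetical stand-in for one node's capacity and usage.
type nodeResources struct {
	cpuCapacity, memCapacity float64
	cpuUsed, memUsed         float64
}

// efficiency mirrors the idea of PackingEfficiency: one ratio per resource type.
type efficiency struct {
	CPU, Memory float64
}

// computeEfficiency sketches the per-node computation: efficiency is total
// used (existing plus the new reservation) divided by total capacity.
func computeEfficiency(n nodeResources, reservedCPU, reservedMem float64) efficiency {
	return efficiency{
		CPU:    (n.cpuUsed + reservedCPU) / n.cpuCapacity,
		Memory: (n.memUsed + reservedMem) / n.memCapacity,
	}
}

func main() {
	n := nodeResources{cpuCapacity: 8, memCapacity: 32, cpuUsed: 2, memUsed: 8}
	e := computeEfficiency(n, 2, 8) // reserve 2 more CPUs and 8 more GiB
	fmt.Printf("cpu=%.2f mem=%.2f\n", e.CPU, e.Memory) // cpu=0.50 mem=0.50
}
```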
Types ¶
type AvgPackingEfficiency ¶ added in v0.12.0
AvgPackingEfficiency represents the resulting packing efficiency per resource type for a group of nodes, computed as the average packing efficiency over all node efficiencies.
func ComputeAvgPackingEfficiency ¶ added in v0.12.0
func ComputeAvgPackingEfficiency(
	nodeGroupSchedulingMetadata resources.NodeGroupSchedulingMetadata,
	packingEfficiencies []*PackingEfficiency) AvgPackingEfficiency
ComputeAvgPackingEfficiency calculates the average packing efficiency, given the packing efficiencies of individual nodes.
func WorstAvgPackingEfficiency ¶ added in v0.12.0
func WorstAvgPackingEfficiency() AvgPackingEfficiency
WorstAvgPackingEfficiency returns a representation of a failed bin packing, with each individual resource type at the worst possible (zero) packing efficiency.
func (*AvgPackingEfficiency) LessThan ¶ added in v0.12.0
func (p *AvgPackingEfficiency) LessThan(o AvgPackingEfficiency) bool
LessThan compares two average packing efficiencies. For a single packing we take the highest of the resources' efficiencies; for example, when CPU is at 0.81 and memory is at 0.54, the avg efficiency is 0.81. One packing is deemed less efficient than another when its avg efficiency is lower.
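The comparison rule can be made concrete with a small sketch. This is a hypothetical stand-in reduced to two resource types; avgEff, max, and lessThan are illustrative names, not the package's API.

```go
package main

import "fmt"

// avgEff is a hypothetical stand-in for AvgPackingEfficiency with two
// resource types.
type avgEff struct {
	CPU, Memory float64
}

// max mirrors the documented rule: a packing's overall efficiency is the
// highest of its per-resource efficiencies.
func (a avgEff) max() float64 {
	if a.CPU > a.Memory {
		return a.CPU
	}
	return a.Memory
}

// lessThan mirrors LessThan: one packing is less efficient than another when
// its highest per-resource efficiency is lower.
func (a avgEff) lessThan(o avgEff) bool {
	return a.max() < o.max()
}

func main() {
	p := avgEff{CPU: 0.81, Memory: 0.54} // overall 0.81, as in the example above
	q := avgEff{CPU: 0.60, Memory: 0.90} // overall 0.90
	fmt.Println(p.max(), p.lessThan(q)) // 0.81 true
}
```

Taking the max rather than the mean means a packing that saturates even one resource dimension compares as efficient, since that node's capacity is effectively spoken for.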
type GenericBinPackFunction ¶
type GenericBinPackFunction func(
	ctx context.Context,
	itemResources *resources.Resources,
	itemCount int,
	nodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata,
	reservedResources resources.NodeGroupResources) (nodes []string, hasCapacity bool)
GenericBinPackFunction is a function type for assigning nodes to a batch of equivalent pods.
type PackingEfficiency ¶ added in v0.12.0
PackingEfficiency represents the resulting packing efficiency per resource type for one node, computed as total resources used divided by total capacity.
func (*PackingEfficiency) Max ¶ added in v0.12.0
func (p *PackingEfficiency) Max() float64
Max returns the highest packing efficiency of all resources.
type PackingResult ¶ added in v0.12.0
type PackingResult struct {
	DriverNode          string
	ExecutorNodes       []string
	PackingEfficiencies map[string]*PackingEfficiency
	HasCapacity         bool
}
PackingResult is the result of one bin-packing operation. When successful, it assigns the driver and executors to nodes and includes an overview of the resource assignment across nodes.
func EmptyPackingResult ¶ added in v0.12.0
func EmptyPackingResult() *PackingResult
EmptyPackingResult returns a representation of the worst possible packing result.
func SparkBinPack ¶
func SparkBinPack(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata,
	distributeExecutors GenericBinPackFunction) *PackingResult
SparkBinPack places the driver first and then calls the distributeExecutors function to place the executors.
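The driver-first flow can be sketched end to end. This is a heavily simplified, hypothetical illustration: sparkBinPack, packFn, and tightlyPack here are stand-ins with a single-dimension capacity map, mirroring only the shape of SparkBinPack and GenericBinPackFunction.

```go
package main

import "fmt"

// packFn is a hypothetical, simplified analogue of GenericBinPackFunction.
type packFn func(count int, nodeOrder []string, avail map[string]int, req int) ([]string, bool)

// sparkBinPack sketches the documented flow: place the driver on the first
// priority node with capacity, reserve its resources, then delegate executor
// placement to a pluggable distribution function.
func sparkBinPack(driverReq, executorReq, executorCount int,
	driverOrder, executorOrder []string,
	avail map[string]int, distribute packFn) (driver string, executors []string, ok bool) {
	for _, n := range driverOrder {
		if avail[n] >= driverReq {
			driver = n
			avail[n] -= driverReq // reserve driver resources before packing executors
			break
		}
	}
	if driver == "" {
		return "", nil, false // no node can host the driver
	}
	executors, ok = distribute(executorCount, executorOrder, avail, executorReq)
	return driver, executors, ok
}

// tightlyPack fills each node in priority order before moving to the next.
func tightlyPack(count int, order []string, avail map[string]int, req int) ([]string, bool) {
	var placed []string
	for _, n := range order {
		for avail[n] >= req && len(placed) < count {
			avail[n] -= req
			placed = append(placed, n)
		}
	}
	return placed, len(placed) == count
}

func main() {
	avail := map[string]int{"node-a": 4, "node-b": 8}
	order := []string{"node-a", "node-b"}
	driver, execs, ok := sparkBinPack(2, 2, 4, order, order, avail, tightlyPack)
	fmt.Println(driver, execs, ok) // node-a [node-a node-b node-b node-b] true
}
```

Passing the executor strategy as a function is what lets TightlyPack, DistributeEvenly, and MinimalFragmentation all reuse the same driver-placement logic.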
type SparkBinPackFunction ¶
type SparkBinPackFunction func(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata) *PackingResult
SparkBinPackFunction is a function type for assigning nodes to Spark drivers and executors.