binpack

package v0.13.0
Published: Mar 7, 2023 License: Apache-2.0 Imports: 6 Imported by: 0

Documentation

Constants

This section is empty.

Variables

var AzAwareTightlyPack = SparkBinPackFunction(func(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata) *PackingResult {
	packingResult := SingleAZTightlyPack(ctx, driverResources, executorResources, executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata)
	if packingResult.HasCapacity {
		return packingResult
	}
	return SparkBinPack(ctx, driverResources, executorResources, executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata, tightlyPackExecutors)
})

AzAwareTightlyPack is a SparkBinPackFunction that tries to place the driver pod on the highest-priority node possible before tightly packing executors, while also trying to fit everything into a single AZ. If the application cannot fit into a single AZ, it falls back to the TightlyPack SparkBinPackFunction.
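
The result can be consumed like any other SparkBinPackFunction. A minimal usage sketch follows; the wrapper name scheduleApplication is an illustrative assumption, and the inputs are taken as parameters rather than constructed here:

// scheduleApplication is a hypothetical caller-side wrapper; it simply
// invokes AzAwareTightlyPack and unpacks the PackingResult.
func scheduleApplication(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata,
) (driverNode string, executorNodes []string, ok bool) {
	result := AzAwareTightlyPack(ctx, driverResources, executorResources,
		executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata)
	if !result.HasCapacity {
		return "", nil, false // nothing fits; caller should requeue and retry later
	}
	return result.DriverNode, result.ExecutorNodes, true
}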

var DistributeEvenly = SparkBinPackFunction(func(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata) *PackingResult {
	return SparkBinPack(ctx, driverResources, executorResources, executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata, distributeExecutorsEvenly)
})

DistributeEvenly is a SparkBinPackFunction that tries to place the driver pod on the highest-priority node possible before distributing executors evenly across the candidate nodes.
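
The internal distributeExecutorsEvenly function is unexported; the following standalone sketch shows the general idea under a simplified assumption (free capacity reduced to an executor-slot count per node), not the package's actual implementation:

// distributeEvenlySketch assigns executors round-robin across candidate
// nodes in priority order, so per-node counts stay balanced.
func distributeEvenlySketch(executorCount int, nodes []string, slots map[string]int) (placement []string, ok bool) {
	placement = make([]string, 0, executorCount)
	for len(placement) < executorCount {
		progress := false
		for _, n := range nodes {
			if len(placement) == executorCount {
				break
			}
			if slots[n] > 0 { // node can still fit one more executor
				slots[n]--
				placement = append(placement, n)
				progress = true
			}
		}
		if !progress {
			return nil, false // no node has room left
		}
	}
	return placement, true
}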

var MinimalFragmentation = SparkBinPackFunction(func(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata) *PackingResult {
	return SparkBinPack(ctx, driverResources, executorResources, executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata, minimalFragmentation)
})

MinimalFragmentation is a SparkBinPackFunction that tries to put the driver pod on the first possible node with enough capacity, and then tries to pack executors onto as few nodes as possible.

var SingleAZMinimalFragmentation = getSingleAZSparkBinFunction(minimalFragmentation)

SingleAZMinimalFragmentation attempts to use as few nodes as possible to schedule executors, while keeping the whole application in a single AZ. When multiple nodes can fit n executors, it picks the node with the least available resources that still fits them; if several nodes tie, it prefers the higher-priority node.
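
The same selection rule underlies MinimalFragmentation above. A standalone sketch under the simplified slot-count assumption used earlier, not the package's actual code:

// pickNodeSketch: among nodes that can fit the remaining executors, prefer
// the one with the least free capacity; on a tie, prefer the node that
// comes first in priority order.
func pickNodeSketch(needed int, nodesByPriority []string, freeSlots map[string]int) (string, bool) {
	best, bestFree := "", -1
	for _, n := range nodesByPriority { // earlier = higher priority
		free := freeSlots[n]
		if free < needed {
			continue // cannot fit all remaining executors here
		}
		// strictly-less keeps the earlier (higher-priority) node on ties
		if bestFree == -1 || free < bestFree {
			best, bestFree = n, free
		}
	}
	return best, bestFree != -1
}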

var SingleAZTightlyPack = getSingleAZSparkBinFunction(tightlyPackExecutors)

SingleAZTightlyPack is a SparkBinPackFunction that tries to place the driver pod on the highest-priority node possible before tightly packing executors, while ensuring that everything fits in a single AZ. If the application cannot fit into a single AZ, binpacking fails.
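
Both SingleAZ variants share the zone-restriction idea. A standalone sketch of that restriction, with zoneOf and tryPack as illustrative stand-ins for the package's internal logic:

// singleAZSketch groups candidate nodes by zone (preserving priority order
// within and across zones) and attempts a full packing inside each zone
// separately; if no single zone fits everything, the packing fails.
func singleAZSketch(nodes []string, zoneOf map[string]string, tryPack func(zoneNodes []string) bool) bool {
	byZone := map[string][]string{}
	zoneOrder := []string{} // zones in the order their first node appears
	for _, n := range nodes {
		z := zoneOf[n]
		if _, seen := byZone[z]; !seen {
			zoneOrder = append(zoneOrder, z)
		}
		byZone[z] = append(byZone[z], n)
	}
	for _, z := range zoneOrder {
		if tryPack(byZone[z]) {
			return true // the whole application fits in this single zone
		}
	}
	return false
}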

var TightlyPack = SparkBinPackFunction(func(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata) *PackingResult {
	return SparkBinPack(ctx, driverResources, executorResources, executorCount, driverNodePriorityOrder, executorNodePriorityOrder, nodesSchedulingMetadata, tightlyPackExecutors)
})

TightlyPack is a SparkBinPackFunction that tries to place the driver pod on the highest-priority node possible before tightly packing executors.
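
tightlyPackExecutors itself is unexported; this standalone sketch conveys the strategy under the same simplified slot-count assumption as the earlier sketches:

// tightlyPackSketch walks nodes in priority order and fills each one
// completely before moving on to the next.
func tightlyPackSketch(executorCount int, nodes []string, slots map[string]int) ([]string, bool) {
	placement := make([]string, 0, executorCount)
	for _, n := range nodes {
		for slots[n] > 0 && len(placement) < executorCount {
			slots[n]--
			placement = append(placement, n)
		}
	}
	return placement, len(placement) == executorCount
}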

Functions

func ComputePackingEfficiencies added in v0.12.0

func ComputePackingEfficiencies(
	nodeGroupSchedulingMetadata resources.NodeGroupSchedulingMetadata,
	reservedResources resources.NodeGroupResources) map[string]*PackingEfficiency

ComputePackingEfficiencies calculates utilization for all provided nodes, given the new reservation.
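
A minimal sketch of the per-node calculation, assuming hypothetical flattened inputs (the real function works on the resources package's metadata and reservation types):

// nodeUsageSketch flattens one node's state to plain floats for illustration.
type nodeUsageSketch struct {
	usedCPU, reservedCPU, capacityCPU float64
	usedMem, reservedMem, capacityMem float64
}

// efficiencySketch returns utilization per resource once the new
// reservation is applied: (used + reserved) / capacity.
func efficiencySketch(u nodeUsageSketch) (cpu, mem float64) {
	cpu = (u.usedCPU + u.reservedCPU) / u.capacityCPU
	mem = (u.usedMem + u.reservedMem) / u.capacityMem
	return cpu, mem
}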

Types

type AvgPackingEfficiency added in v0.12.0

type AvgPackingEfficiency struct {
	CPU    float64
	Memory float64
	GPU    float64
	Max    float64
}

AvgPackingEfficiency represents the resulting packing efficiency per resource type for a group of nodes, computed as the average packing efficiency over all node efficiencies.

func ComputeAvgPackingEfficiency added in v0.12.0

func ComputeAvgPackingEfficiency(
	nodeGroupSchedulingMetadata resources.NodeGroupSchedulingMetadata,
	packingEfficiencies []*PackingEfficiency) AvgPackingEfficiency

ComputeAvgPackingEfficiency calculates the average packing efficiency, given packing efficiencies for individual nodes.
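
A sketch of the averaging step over the documented types (it assumes "math" is imported); whether Max is derived this way is an assumption based on the LessThan documentation below:

// computeAvgSketch averages per-resource efficiencies across nodes and
// records the highest per-resource average as Max (assumed semantics).
func computeAvgSketch(effs []*PackingEfficiency) AvgPackingEfficiency {
	if len(effs) == 0 {
		return WorstAvgPackingEfficiency()
	}
	var avg AvgPackingEfficiency
	for _, e := range effs {
		avg.CPU += e.CPU
		avg.Memory += e.Memory
		avg.GPU += e.GPU
	}
	n := float64(len(effs))
	avg.CPU, avg.Memory, avg.GPU = avg.CPU/n, avg.Memory/n, avg.GPU/n
	avg.Max = math.Max(avg.CPU, math.Max(avg.Memory, avg.GPU))
	return avg
}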

func WorstAvgPackingEfficiency added in v0.12.0

func WorstAvgPackingEfficiency() AvgPackingEfficiency

WorstAvgPackingEfficiency returns a representation of a failed bin packing: each individual resource type is at the worst possible (zero) packing efficiency.

func (*AvgPackingEfficiency) LessThan added in v0.12.0

LessThan compares two average packing efficiencies. For a single packing we take the highest of the per-resource efficiencies (the Max field); for example, when CPU is at 0.81 and Memory is at 0.54, the packing's efficiency is 0.81. One packing is deemed less efficient than another when this value is lower than the other's.
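
Restated as code (a standalone sketch of the documented semantics, not the package's method):

// lessThanSketch: a packing is less efficient when its representative
// (Max) efficiency is lower, e.g. {CPU: 0.81, Memory: 0.54, Max: 0.81}
// is less than {CPU: 0.60, Memory: 0.90, Max: 0.90}.
func lessThanSketch(a, b AvgPackingEfficiency) bool {
	return a.Max < b.Max
}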

type GenericBinPackFunction

type GenericBinPackFunction func(
	ctx context.Context,
	itemResources *resources.Resources,
	itemCount int,
	nodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata,
	reservedResources resources.NodeGroupResources) (nodes []string, hasCapacity bool)

GenericBinPackFunction is a function type for assigning nodes to a batch of equivalent pods.
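
Because it is a plain function type, callers can supply their own strategy to SparkBinPack. A hedged skeleton with the documented signature; the body is deliberately trivial:

// noopBinPack satisfies GenericBinPackFunction but places nothing; a real
// strategy would walk nodePriorityOrder and consult the scheduling
// metadata and reserved resources.
var noopBinPack GenericBinPackFunction = func(
	ctx context.Context,
	itemResources *resources.Resources,
	itemCount int,
	nodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata,
	reservedResources resources.NodeGroupResources) (nodes []string, hasCapacity bool) {
	return nil, false // always reports insufficient capacity
}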

type PackingEfficiency added in v0.12.0

type PackingEfficiency struct {
	NodeName string
	CPU      float64
	Memory   float64
	GPU      float64
}

PackingEfficiency represents the resulting packing efficiency per resource type for one node, computed as the total resources used divided by the total capacity.

func (*PackingEfficiency) Max added in v0.12.0

func (p *PackingEfficiency) Max() float64

Max returns the highest packing efficiency of all resources.
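
Equivalent logic over the documented fields (a sketch, not the method body):

// maxSketch returns the largest of the three per-resource efficiencies.
func maxSketch(p PackingEfficiency) float64 {
	m := p.CPU
	if p.Memory > m {
		m = p.Memory
	}
	if p.GPU > m {
		m = p.GPU
	}
	return m
}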

type PackingResult added in v0.12.0

type PackingResult struct {
	DriverNode          string
	ExecutorNodes       []string
	PackingEfficiencies map[string]*PackingEfficiency
	HasCapacity         bool
}

PackingResult is the result of one binpacking operation. When successful, it assigns the driver and executors to nodes and includes an overview of the resource assignment across nodes.

func EmptyPackingResult added in v0.12.0

func EmptyPackingResult() *PackingResult

EmptyPackingResult returns a representation of the worst possible packing result.

func SparkBinPack

func SparkBinPack(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata,
	distributeExecutors GenericBinPackFunction) *PackingResult

SparkBinPack places the driver first and then calls the distributeExecutors function to place the executors.
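
A hedged sketch of that flow using the documented types; findDriverNodeStub stands in for the package's internal driver placement, and the real function also threads the driver's reservation into the executor step:

// sparkBinPackFlow mirrors the documented two-phase order: driver first,
// then executors via the supplied GenericBinPackFunction.
func sparkBinPackFlow(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata,
	distributeExecutors GenericBinPackFunction) *PackingResult {
	driverNode, ok := findDriverNodeStub(driverResources, driverNodePriorityOrder, nodesSchedulingMetadata)
	if !ok {
		return EmptyPackingResult()
	}
	var reserved resources.NodeGroupResources // driver's reservation would be recorded here
	executorNodes, hasCapacity := distributeExecutors(ctx, executorResources, executorCount,
		executorNodePriorityOrder, nodesSchedulingMetadata, reserved)
	if !hasCapacity {
		return EmptyPackingResult()
	}
	return &PackingResult{DriverNode: driverNode, ExecutorNodes: executorNodes, HasCapacity: true}
}

// findDriverNodeStub is a hypothetical placeholder for driver placement.
func findDriverNodeStub(*resources.Resources, []string, resources.NodeGroupSchedulingMetadata) (string, bool) {
	return "", false
}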

type SparkBinPackFunction

type SparkBinPackFunction func(
	ctx context.Context,
	driverResources, executorResources *resources.Resources,
	executorCount int,
	driverNodePriorityOrder, executorNodePriorityOrder []string,
	nodesSchedulingMetadata resources.NodeGroupSchedulingMetadata) *PackingResult

SparkBinPackFunction is a function type for assigning nodes to Spark drivers and executors.
