Documentation ¶
Index ¶
Constants ¶
const (
	// ResourceNvidiaGPU is the name of the Nvidia GPU resource.
	ResourceNvidiaGPU = "nvidia.com/gpu"
	// GPULabel is the label added to nodes with GPU resource on GKE.
	GPULabel = "cloud.google.com/gke-accelerator"
	// DefaultGPUType is the type of GPU used in NAP if the user
	// doesn't specify what type of GPU their pod wants.
	DefaultGPUType = "nvidia-tesla-k80"
)
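The following sketch shows how these constants correspond to a pod spec: the extended resource name carries the GPU request, and the GKE accelerator label selects the GPU type. The pod name, image, and label value are illustrative only, and the snippet is not part of this package.

package main

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &apiv1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cuda-pod"},
		Spec: apiv1.PodSpec{
			// The same label this package reads (GPULabel) selects the GPU type;
			// omitting it means NAP falls back to DefaultGPUType.
			NodeSelector: map[string]string{
				"cloud.google.com/gke-accelerator": "nvidia-tesla-k80",
			},
			Containers: []apiv1.Container{{
				Name:  "cuda",
				Image: "nvidia/cuda",
				Resources: apiv1.ResourceRequirements{
					// Requesting the extended resource named by ResourceNvidiaGPU.
					Requests: apiv1.ResourceList{"nvidia.com/gpu": resource.MustParse("1")},
					Limits:   apiv1.ResourceList{"nvidia.com/gpu": resource.MustParse("1")},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}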
Variables ¶
This section is empty.
Functions ¶
func FilterOutNodesWithUnreadyGpus ¶
func FilterOutNodesWithUnreadyGpus(allNodes, readyNodes []*apiv1.Node) ([]*apiv1.Node, []*apiv1.Node)
FilterOutNodesWithUnreadyGpus removes from the ready nodes list any node that should have a GPU but doesn't report it as allocatable, and marks that node as unready in the all nodes list. This is a hack/workaround for nodes with GPUs that come up before their drivers are installed, leaving the GPU missing from their allocatable and capacity.
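A minimal usage sketch, assuming the package import path k8s.io/autoscaler/cluster-autoscaler/utils/gpu; the empty node slices stand in for lists normally obtained from node listers.

package main

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	"k8s.io/autoscaler/cluster-autoscaler/utils/gpu" // assumed import path
)

func main() {
	// allNodes and readyNodes would normally come from node listers/informers;
	// they are left empty here for brevity.
	var allNodes, readyNodes []*apiv1.Node

	// Nodes labelled with GPULabel but exposing no allocatable nvidia.com/gpu
	// are dropped from readyNodes and marked unready in allNodes.
	allNodes, readyNodes = gpu.FilterOutNodesWithUnreadyGpus(allNodes, readyNodes)
	fmt.Println(len(allNodes), len(readyNodes))
}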
func GetGpuRequests ¶
func GetGpuRequests(pods []*apiv1.Pod) map[string]GpuRequestInfo
GetGpuRequests returns a GpuRequestInfo for each type of GPU requested by any pod in the pods argument. If a pod requests a GPU but doesn't specify which type it wants (via NodeSelector), DefaultGPUType is assumed.
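A sketch of consuming the returned map, again assuming the import path k8s.io/autoscaler/cluster-autoscaler/utils/gpu; the empty pod slice stands in for the pending pods considered during scale-up.

package main

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	"k8s.io/autoscaler/cluster-autoscaler/utils/gpu" // assumed import path
)

func main() {
	// pods would normally be the pending pods considered for scale-up;
	// left empty here for brevity.
	var pods []*apiv1.Pod

	// The map is keyed by GPU type; pods without an explicit type selector
	// are grouped under DefaultGPUType.
	for gpuType, info := range gpu.GetGpuRequests(pods) {
		fmt.Printf("%s: max request %s across %d pods\n",
			gpuType, info.MaxRequest.String(), len(info.Pods))
	}
}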
func NodeHasGpu ¶
NodeHasGpu returns true if a given node has GPU hardware. The result is true whenever the hardware capability is present, regardless of whether the drivers are installed and the GPU is ready to use.
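The signature is not shown above; the sketch below assumes it takes a single *apiv1.Node and returns bool, and assumes the import path k8s.io/autoscaler/cluster-autoscaler/utils/gpu. The node name is illustrative.

package main

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/autoscaler/cluster-autoscaler/utils/gpu" // assumed import path
)

func main() {
	// A node carrying the GKE accelerator label counts as having GPU hardware,
	// even before the device plugin reports allocatable nvidia.com/gpu.
	node := &apiv1.Node{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "gpu-node",
			Labels: map[string]string{"cloud.google.com/gke-accelerator": "nvidia-tesla-k80"},
		},
	}
	fmt.Println(gpu.NodeHasGpu(node)) // assumed signature: NodeHasGpu(node *apiv1.Node) bool
}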
func PodRequestsGpu ¶
PodRequestsGpu returns true if a given pod has a GPU request.
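As with NodeHasGpu, the signature is not shown above; the sketch assumes it takes a single *apiv1.Pod and returns bool, with the same assumed import path.

package main

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/autoscaler/cluster-autoscaler/utils/gpu" // assumed import path
)

func main() {
	// Any container requesting nvidia.com/gpu makes the whole pod a GPU pod.
	pod := &apiv1.Pod{
		Spec: apiv1.PodSpec{
			Containers: []apiv1.Container{{
				Name: "cuda",
				Resources: apiv1.ResourceRequirements{
					Requests: apiv1.ResourceList{"nvidia.com/gpu": resource.MustParse("1")},
				},
			}},
		},
	}
	fmt.Println(gpu.PodRequestsGpu(pod)) // assumed signature: PodRequestsGpu(pod *apiv1.Pod) bool
}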
Types ¶
type GpuRequestInfo ¶
type GpuRequestInfo struct {
	// MaxRequest is the maximum GPU request among pods
	MaxRequest resource.Quantity
	// Pods is a list of pods requesting GPU
	Pods []*apiv1.Pod
	// SystemLabels is a set of system labels corresponding to the selected GPU
	// that need to be passed to the cloud provider
	SystemLabels map[string]string
}
GpuRequestInfo contains information about a set of pods requesting a GPU.
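A hand-built value for illustration, mirroring what GetGpuRequests would produce for pods asking for up to two nvidia-tesla-k80 GPUs; the import path is assumed as above and the label value is illustrative.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/autoscaler/cluster-autoscaler/utils/gpu" // assumed import path
)

func main() {
	info := gpu.GpuRequestInfo{
		// Largest single-pod GPU request in the group.
		MaxRequest: resource.MustParse("2"),
		// Labels describing the selected GPU, to be passed to the cloud provider.
		SystemLabels: map[string]string{
			"cloud.google.com/gke-accelerator": "nvidia-tesla-k80",
		},
	}
	fmt.Println(info.MaxRequest.String(), info.SystemLabels)
}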