Documentation ¶
Index ¶
- func GetDeploymentMode(annotations map[string]string, deployConfig *v1beta1api.DeployConfig) constants.DeploymentModeType
- func GetServingRuntime(cl client.Client, name string, namespace string) (*v1alpha1.ServingRuntimeSpec, error)
- func IsMMSPredictor(predictor *v1beta1api.PredictorSpec, ...) bool
- func IsMemoryResourceAvailable(isvc *v1beta1api.InferenceService, totalReqMemory resource.Quantity, ...) bool
- func ListPodsByLabel(cl client.Client, namespace string, labelKey string, labelVal string) (*v1.PodList, error)
- func MergePodSpec(runtimePodSpec *v1alpha1.ServingRuntimePodSpec, ...) (*v1.PodSpec, error)
- func MergeRuntimeContainers(runtimeContainer *v1.Container, predictorContainer *v1.Container) (*v1.Container, error)
- func ReplacePlaceholders(container *v1.Container, meta metav1.ObjectMeta) error
- func UpdateImageTag(container *v1.Container, runtimeVersion *string, ...)
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func GetDeploymentMode ¶
func GetDeploymentMode(annotations map[string]string, deployConfig *v1beta1api.DeployConfig) constants.DeploymentModeType
GetDeploymentMode returns the current deployment mode; supported modes are Serverless, RawDeployment and ModelMesh.
Case 1: no serving.kserve.org/deploymentMode annotation is present —
return config.deploy.defaultDeploymentMode.
Case 2: the serving.kserve.org/deploymentMode annotation is set —
if its value is "RawDeployment", "Serverless" or "ModelMesh", return it; otherwise return config.deploy.defaultDeploymentMode.
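The decision above can be sketched with plain types; `DeploymentModeType` and the mode constants below are stand-ins for the real `constants` package, and the helper name is hypothetical:

```go
package main

import "fmt"

// DeploymentModeType mirrors the constants package; the constant names
// below are assumptions for illustration, not KServe's exact identifiers.
type DeploymentModeType string

const (
	Serverless    DeploymentModeType = "Serverless"
	RawDeployment DeploymentModeType = "RawDeployment"
	ModelMesh     DeploymentModeType = "ModelMesh"
)

// getDeploymentMode sketches the documented decision: use the
// serving.kserve.org/deploymentMode annotation when it holds a valid
// mode, otherwise fall back to the configured default.
func getDeploymentMode(annotations map[string]string, defaultMode DeploymentModeType) DeploymentModeType {
	if mode, ok := annotations["serving.kserve.org/deploymentMode"]; ok {
		switch DeploymentModeType(mode) {
		case Serverless, RawDeployment, ModelMesh:
			return DeploymentModeType(mode)
		}
	}
	return defaultMode
}

func main() {
	// No annotation: the configured default wins.
	fmt.Println(getDeploymentMode(nil, Serverless))
	// Valid annotation value: the annotation wins.
	fmt.Println(getDeploymentMode(map[string]string{
		"serving.kserve.org/deploymentMode": "RawDeployment",
	}, Serverless))
	// Unrecognized value: fall back to the default.
	fmt.Println(getDeploymentMode(map[string]string{
		"serving.kserve.org/deploymentMode": "bogus",
	}, Serverless))
}
```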
func GetServingRuntime ¶ added in v0.8.0
func GetServingRuntime(cl client.Client, name string, namespace string) (*v1alpha1.ServingRuntimeSpec, error)
GetServingRuntime gets a ServingRuntime by name. ServingRuntimes in the given namespace are checked first; if no resource with the specified name is found, ClusterServingRuntimes are checked.
func IsMMSPredictor ¶
func IsMMSPredictor(predictor *v1beta1api.PredictorSpec, isvcConfig *v1beta1api.InferenceServicesConfig) bool
IsMMSPredictor reports whether the multi-model serving (MMS) predictor is enabled: the predictor config must set MMS to true, and neither a storage URI nor a storage spec may be set.
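The rule reduces to a three-way conjunction. A minimal sketch with a plain struct standing in for `*v1beta1.PredictorSpec` and its config (the field names here are assumptions for illustration):

```go
package main

import "fmt"

// predictorInfo captures just the fields the documented check depends on.
type predictorInfo struct {
	MultiModelServer bool   // from the predictor config
	StorageURI       string // empty means unset
	HasStorageSpec   bool
}

// isMMSPredictor sketches the documented rule: MMS only when the config
// enables it and neither a storage URI nor a storage spec is present.
func isMMSPredictor(p predictorInfo) bool {
	return p.MultiModelServer && p.StorageURI == "" && !p.HasStorageSpec
}

func main() {
	// MMS enabled and no storage set: true.
	fmt.Println(isMMSPredictor(predictorInfo{MultiModelServer: true}))
	// A storage URI disables MMS even when the config allows it.
	fmt.Println(isMMSPredictor(predictorInfo{MultiModelServer: true, StorageURI: "s3://models/x"}))
}
```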
func IsMemoryResourceAvailable ¶
func IsMemoryResourceAvailable(isvc *v1beta1api.InferenceService, totalReqMemory resource.Quantity, isvcConfig *v1beta1api.InferenceServicesConfig) bool
func ListPodsByLabel ¶ added in v0.9.0
func ListPodsByLabel(cl client.Client, namespace string, labelKey string, labelVal string) (*v1.PodList, error)
ListPodsByLabel gets a PodList matching the given label key and value in the namespace.
func MergePodSpec ¶ added in v0.8.0
func MergePodSpec(runtimePodSpec *v1alpha1.ServingRuntimePodSpec, predictorPodSpec *v1beta1.PodSpec) (*v1.PodSpec, error)
MergePodSpec merges the predictor PodSpec struct with the runtime PodSpec struct, allowing users to override runtime PodSpec settings from the predictor spec.
func MergeRuntimeContainers ¶ added in v0.8.0
func MergeRuntimeContainers(runtimeContainer *v1.Container, predictorContainer *v1.Container) (*v1.Container, error)
MergeRuntimeContainers merges the predictor Container struct with the runtime Container struct, allowing users to override runtime container settings from the predictor spec.
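The override semantics can be sketched with a reduced container type. The real function merges full `*v1.Container` structs (KServe uses a JSON-based merge under the hood); this hand-rolled approximation only illustrates that predictor-side values take precedence:

```go
package main

import "fmt"

// container holds a few representative fields of v1.Container.
type container struct {
	Image string
	Args  []string
	Env   map[string]string
}

// mergeContainers sketches an override-style merge: start from the
// runtime container and let any field the predictor sets win.
func mergeContainers(runtime, predictor container) container {
	out := runtime
	if predictor.Image != "" {
		out.Image = predictor.Image
	}
	if len(predictor.Args) > 0 {
		out.Args = predictor.Args
	}
	// Build a fresh env map so neither input is mutated.
	env := map[string]string{}
	for k, v := range runtime.Env {
		env[k] = v
	}
	for k, v := range predictor.Env {
		env[k] = v
	}
	out.Env = env
	return out
}

func main() {
	merged := mergeContainers(
		container{Image: "kserve/sklearnserver:v0.8.0", Args: []string{"--workers=1"}, Env: map[string]string{"PORT": "8080"}},
		container{Args: []string{"--workers=4"}, Env: map[string]string{"LOG_LEVEL": "debug"}},
	)
	// Runtime image kept, predictor args and env overrides applied.
	fmt.Println(merged.Image, merged.Args, merged.Env["PORT"], merged.Env["LOG_LEVEL"])
}
```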
func ReplacePlaceholders ¶ added in v0.8.0
func ReplacePlaceholders(container *v1.Container, meta metav1.ObjectMeta) error
ReplacePlaceholders replaces placeholders in the runtime container with values from the InferenceService metadata.
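One way such substitution can work is Go template expansion over the object metadata; whether KServe uses exactly this mechanism and placeholder syntax is an assumption here, and `objectMeta` is a stand-in for `metav1.ObjectMeta`:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// objectMeta stands in for metav1.ObjectMeta; only the fields used in
// this sketch are included.
type objectMeta struct {
	Name      string
	Namespace string
}

// replacePlaceholders treats each argument string as a Go template and
// evaluates it against the metadata, returning the expanded arguments.
func replacePlaceholders(args []string, meta objectMeta) ([]string, error) {
	out := make([]string, 0, len(args))
	for _, a := range args {
		tmpl, err := template.New("arg").Parse(a)
		if err != nil {
			return nil, err
		}
		var buf bytes.Buffer
		if err := tmpl.Execute(&buf, meta); err != nil {
			return nil, err
		}
		out = append(out, buf.String())
	}
	return out, nil
}

func main() {
	args, err := replacePlaceholders(
		[]string{"--model-name={{.Name}}", "--namespace={{.Namespace}}"},
		objectMeta{Name: "sklearn-iris", Namespace: "default"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(args) // [--model-name=sklearn-iris --namespace=default]
}
```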
func UpdateImageTag ¶ added in v0.8.0
func UpdateImageTag(container *v1.Container, runtimeVersion *string, isvcConfig *v1beta1.InferenceServicesConfig)
UpdateImageTag updates the container image tag if GPU is enabled or a runtime version is provided.
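The runtime-version part of this behavior amounts to rewriting the tag portion of the image reference. A minimal sketch on plain strings (digest references and registries with ports are deliberately not handled, and the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// updateImageTag replaces the tag of a simple image reference with the
// given runtime version; a nil version leaves the image untouched.
func updateImageTag(image string, runtimeVersion *string) string {
	if runtimeVersion == nil {
		return image
	}
	if i := strings.LastIndex(image, ":"); i >= 0 {
		return image[:i+1] + *runtimeVersion
	}
	// No tag present: append one.
	return image + ":" + *runtimeVersion
}

func main() {
	v := "1.2.0-gpu"
	fmt.Println(updateImageTag("kserve/tritonserver:1.0.0", &v))
	fmt.Println(updateImageTag("kserve/tritonserver:1.0.0", nil))
}
```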
Types ¶
This section is empty.