Documentation ¶
Overview ¶
Package v1beta1 is the v1beta1 version of the API. It contains the API Schema definitions for the v1beta1 API group.

+kubebuilder:object:generate=true
+groupName=sparkoperator.k8s.io
Index ¶
- Constants
- Variables
- func Resource(resource string) schema.GroupResource
- func SetSparkApplicationDefaults(app *SparkApplication)
- type ApplicationState
- type ApplicationStateType
- type ConcurrencyPolicy
- type Dependencies
- type DeployMode
- type DriverInfo
- type DriverSpec
- type ExecutorSpec
- type ExecutorState
- type GPUSpec
- type MonitoringSpec
- type NameKey
- type NamePath
- type PrometheusSpec
- type RestartPolicy
- type RestartPolicyType
- type ScheduleState
- type ScheduledSparkApplication
- type ScheduledSparkApplicationList
- type ScheduledSparkApplicationSpec
- type ScheduledSparkApplicationStatus
- type SecretInfo
- type SecretType
- type SparkApplication
- func (in *SparkApplication) DeepCopy() *SparkApplication
- func (in *SparkApplication) DeepCopyInto(out *SparkApplication)
- func (in *SparkApplication) DeepCopyObject() runtime.Object
- func (s *SparkApplication) ExposeDriverMetrics() bool
- func (s *SparkApplication) ExposeExecutorMetrics() bool
- func (s *SparkApplication) HasPrometheusConfigFile() bool
- func (s *SparkApplication) PrometheusMonitoringEnabled() bool
- type SparkApplicationList
- type SparkApplicationSpec
- type SparkApplicationStatus
- type SparkApplicationType
- type SparkPodSpec
Constants ¶
const (
    Group   = "sparkoperator.k8s.io"
    Version = "v1beta1"
)
Variables ¶
var (
    // GroupVersion is the group version used to register these objects.
    GroupVersion = schema.GroupVersion{Group: "sparkoperator.k8s.io", Version: "v1beta1"}

    // SchemeBuilder is used to add Go types to the GroupVersionKind scheme.
    SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

    // AddToScheme adds the types in this group-version to the given scheme.
    AddToScheme = SchemeBuilder.AddToScheme
)
var SchemeGroupVersion = schema.GroupVersion{Group: Group, Version: Version}
SchemeGroupVersion is the group version used to register these objects.
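For illustration, a minimal sketch of registering these types with a runtime.Scheme, e.g. for use with a controller-runtime client. The import path is an assumption; adjust it to the operator module you actually vendor.

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/runtime"

    // Hypothetical import path for the operator's v1beta1 API package.
    sparkv1beta1 "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta1"
)

func main() {
    scheme := runtime.NewScheme()
    // AddToScheme registers SparkApplication, ScheduledSparkApplication,
    // and their list types under sparkoperator.k8s.io/v1beta1.
    if err := sparkv1beta1.AddToScheme(scheme); err != nil {
        panic(err)
    }
    fmt.Println(scheme.AllKnownTypes())
}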
Functions ¶
func Resource ¶
func Resource(resource string) schema.GroupResource
Resource takes an unqualified resource and returns a Group-qualified GroupResource.
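For example, assuming the package is imported as v1beta1, Resource can qualify a bare resource name (the printed form follows schema.GroupResource's String method):

// Builds the GroupResource for the sparkapplications resource.
gr := v1beta1.Resource("sparkapplications")
fmt.Println(gr) // sparkapplications.sparkoperator.k8s.io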
func SetSparkApplicationDefaults ¶
func SetSparkApplicationDefaults(app *SparkApplication)
SetSparkApplicationDefaults sets default values for certain fields of a SparkApplication.
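A sketch of defaulting before working with an application object; the spec values are hypothetical, and the exact set of defaulted fields is determined by the implementation:

app := &v1beta1.SparkApplication{
    Spec: v1beta1.SparkApplicationSpec{
        Type:         v1beta1.ScalaApplicationType,
        SparkVersion: "2.4.0",
    },
}
// Fills in values such as the deploy mode and restart policy type
// for fields left unset above.
v1beta1.SetSparkApplicationDefaults(app)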
Types ¶
type ApplicationState ¶
type ApplicationState struct {
    State        ApplicationStateType `json:"state"`
    ErrorMessage string               `json:"errorMessage,omitempty"`
}
ApplicationState records the current state of the application, along with an error message in case of failure.
func (*ApplicationState) DeepCopy ¶
func (in *ApplicationState) DeepCopy() *ApplicationState
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationState.
func (*ApplicationState) DeepCopyInto ¶
func (in *ApplicationState) DeepCopyInto(out *ApplicationState)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type ApplicationStateType ¶
type ApplicationStateType string
ApplicationStateType represents the type of the current state of an application.
const (
    NewState              ApplicationStateType = ""
    SubmittedState        ApplicationStateType = "SUBMITTED"
    RunningState          ApplicationStateType = "RUNNING"
    CompletedState        ApplicationStateType = "COMPLETED"
    FailedState           ApplicationStateType = "FAILED"
    FailedSubmissionState ApplicationStateType = "SUBMISSION_FAILED"
    PendingRerunState     ApplicationStateType = "PENDING_RERUN"
    InvalidatingState     ApplicationStateType = "INVALIDATING"
    SucceedingState       ApplicationStateType = "SUCCEEDING"
    FailingState          ApplicationStateType = "FAILING"
    UnknownState          ApplicationStateType = "UNKNOWN"
)
Different states an application may have.
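As an illustration, a hypothetical helper (not part of this package) that treats these constants as a simple lifecycle check:

// isTerminal reports whether the application has reached a state the
// controller will not move it out of on its own (ignoring restarts).
// Illustrative only; the controller's actual state machine is richer.
func isTerminal(state v1beta1.ApplicationStateType) bool {
    switch state {
    case v1beta1.CompletedState, v1beta1.FailedState:
        return true
    default:
        return false
    }
}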
type ConcurrencyPolicy ¶
type ConcurrencyPolicy string
const (
    // ConcurrencyAllow allows SparkApplications to run concurrently.
    ConcurrencyAllow ConcurrencyPolicy = "Allow"
    // ConcurrencyForbid forbids concurrent runs of SparkApplications, skipping the next run if the previous
    // one hasn't finished yet.
    ConcurrencyForbid ConcurrencyPolicy = "Forbid"
    // ConcurrencyReplace kills the currently running SparkApplication instance and replaces it with a new one.
    ConcurrencyReplace ConcurrencyPolicy = "Replace"
)
type Dependencies ¶
type Dependencies struct {
    // Jars is a list of JAR files the Spark application depends on.
    // Optional.
    Jars []string `json:"jars,omitempty"`
    // Files is a list of files the Spark application depends on.
    // Optional.
    Files []string `json:"files,omitempty"`
    // PyFiles is a list of Python files the Spark application depends on.
    // Optional.
    PyFiles []string `json:"pyFiles,omitempty"`
    // JarsDownloadDir is the location to download jars to in the driver and executors.
    JarsDownloadDir *string `json:"jarsDownloadDir,omitempty"`
    // FilesDownloadDir is the location to download files to in the driver and executors.
    FilesDownloadDir *string `json:"filesDownloadDir,omitempty"`
    // DownloadTimeout specifies the timeout in seconds before aborting the attempt to download
    // and unpack dependencies from remote locations into the driver and executor pods.
    DownloadTimeout *int32 `json:"downloadTimeout,omitempty"`
    // MaxSimultaneousDownloads specifies the maximum number of remote dependencies to download
    // simultaneously in a driver or executor pod.
    MaxSimultaneousDownloads *int32 `json:"maxSimultaneousDownloads,omitempty"`
}
Dependencies specifies all possible types of dependencies of a Spark application.
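For example, a Dependencies value listing remote artifacts might look like this (the URLs and timeout are hypothetical):

downloadTimeout := int32(600) // seconds
deps := v1beta1.Dependencies{
    Jars:            []string{"https://example.com/libs/extra.jar"},
    Files:           []string{"https://example.com/conf/app.conf"},
    DownloadTimeout: &downloadTimeout,
}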
func (*Dependencies) DeepCopy ¶
func (in *Dependencies) DeepCopy() *Dependencies
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Dependencies.
func (*Dependencies) DeepCopyInto ¶
func (in *Dependencies) DeepCopyInto(out *Dependencies)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type DeployMode ¶
type DeployMode string
DeployMode describes the type of deployment of a Spark application.
const (
    ClusterMode         DeployMode = "cluster"
    ClientMode          DeployMode = "client"
    InClusterClientMode DeployMode = "in-cluster-client"
)
Different types of deployments.
type DriverInfo ¶
type DriverInfo struct {
    WebUIServiceName string `json:"webUIServiceName,omitempty"`
    // UI Details for the UI created via ClusterIP service accessible from within the cluster.
    WebUIPort    int32  `json:"webUIPort,omitempty"`
    WebUIAddress string `json:"webUIAddress,omitempty"`
    // Ingress Details if an ingress for the UI was created.
    WebUIIngressName    string `json:"webUIIngressName,omitempty"`
    WebUIIngressAddress string `json:"webUIIngressAddress,omitempty"`
    PodName             string `json:"podName,omitempty"`
}
DriverInfo captures information about the driver.
func (*DriverInfo) DeepCopy ¶
func (in *DriverInfo) DeepCopy() *DriverInfo
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DriverInfo.
func (*DriverInfo) DeepCopyInto ¶
func (in *DriverInfo) DeepCopyInto(out *DriverInfo)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type DriverSpec ¶
type DriverSpec struct {
    SparkPodSpec `json:",inline"`
    // PodName is the name of the driver pod that the user creates. This is used for the
    // in-cluster client mode in which the user creates a client pod where the driver of
    // the user application runs. It's an error to set this field if Mode is not
    // in-cluster-client.
    // Optional.
    PodName *string `json:"podName,omitempty"`
    // ServiceAccount is the name of the Kubernetes service account used by the driver pod
    // when requesting executor pods from the API server.
    ServiceAccount *string `json:"serviceAccount,omitempty"`
    // JavaOptions is a string of extra JVM options to pass to the driver. For instance,
    // GC settings or other logging.
    JavaOptions *string `json:"javaOptions,omitempty"`
}
DriverSpec is the specification of the driver.
func (*DriverSpec) DeepCopy ¶
func (in *DriverSpec) DeepCopy() *DriverSpec
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DriverSpec.
func (*DriverSpec) DeepCopyInto ¶
func (in *DriverSpec) DeepCopyInto(out *DriverSpec)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type ExecutorSpec ¶
type ExecutorSpec struct {
    SparkPodSpec `json:",inline"`
    // Instances is the number of executor instances.
    // Optional.
    Instances *int32 `json:"instances,omitempty"`
    // CoreRequest is the physical CPU core request for the executors.
    // Optional.
    CoreRequest *string `json:"coreRequest,omitempty"`
    // JavaOptions is a string of extra JVM options to pass to the executors. For instance,
    // GC settings or other logging.
    JavaOptions *string `json:"javaOptions,omitempty"`
}
ExecutorSpec is the specification of the executors.
func (*ExecutorSpec) DeepCopy ¶
func (in *ExecutorSpec) DeepCopy() *ExecutorSpec
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExecutorSpec.
func (*ExecutorSpec) DeepCopyInto ¶
func (in *ExecutorSpec) DeepCopyInto(out *ExecutorSpec)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type ExecutorState ¶
type ExecutorState string
ExecutorState tells the current state of an executor.
const (
    ExecutorPendingState   ExecutorState = "PENDING"
    ExecutorRunningState   ExecutorState = "RUNNING"
    ExecutorCompletedState ExecutorState = "COMPLETED"
    ExecutorFailedState    ExecutorState = "FAILED"
    ExecutorUnknownState   ExecutorState = "UNKNOWN"
)
Different states an executor may have.
type GPUSpec ¶
type GPUSpec struct {
    // Name is the GPU resource name, such as nvidia.com/gpu or amd.com/gpu.
    Name string `json:"name"`
    // Quantity is the number of GPUs to request for the driver or executor.
    Quantity int64 `json:"quantity"`
}
func (*GPUSpec) DeepCopy ¶
func (in *GPUSpec) DeepCopy() *GPUSpec
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GPUSpec.
func (*GPUSpec) DeepCopyInto ¶
func (in *GPUSpec) DeepCopyInto(out *GPUSpec)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type MonitoringSpec ¶
type MonitoringSpec struct {
    // ExposeDriverMetrics specifies whether to expose metrics on the driver.
    ExposeDriverMetrics bool `json:"exposeDriverMetrics"`
    // ExposeExecutorMetrics specifies whether to expose metrics on the executors.
    ExposeExecutorMetrics bool `json:"exposeExecutorMetrics"`
    // MetricsProperties is the content of a custom metrics.properties for configuring the Spark metric system.
    // Optional.
    // If not specified, the content in spark-docker/conf/metrics.properties will be used.
    MetricsProperties *string `json:"metricsProperties,omitempty"`
    // Prometheus is for configuring the Prometheus JMX exporter.
    // Optional.
    Prometheus *PrometheusSpec `json:"prometheus,omitempty"`
}
MonitoringSpec defines the monitoring specification.
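A sketch of enabling Prometheus monitoring on both the driver and executors; the jar path is hypothetical and must point at the JMX exporter inside your image:

port := int32(8090) // the documented default port
monitoring := v1beta1.MonitoringSpec{
    ExposeDriverMetrics:   true,
    ExposeExecutorMetrics: true,
    Prometheus: &v1beta1.PrometheusSpec{
        JmxExporterJar: "/prometheus/jmx_prometheus_javaagent.jar",
        Port:           &port,
    },
}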
func (*MonitoringSpec) DeepCopy ¶
func (in *MonitoringSpec) DeepCopy() *MonitoringSpec
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MonitoringSpec.
func (*MonitoringSpec) DeepCopyInto ¶
func (in *MonitoringSpec) DeepCopyInto(out *MonitoringSpec)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type NameKey ¶
type NameKey struct {
    Name string `json:"name"`
    Key  string `json:"key"`
}
NameKey represents the name and key of a SecretKeyRef.
func (*NameKey) DeepCopy ¶
func (in *NameKey) DeepCopy() *NameKey
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NameKey.
func (*NameKey) DeepCopyInto ¶
func (in *NameKey) DeepCopyInto(out *NameKey)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type NamePath ¶
type NamePath struct {
    Name string `json:"name"`
    Path string `json:"path"`
}
NamePath is a pair of a name and a path to which the named object should be mounted.
func (*NamePath) DeepCopy ¶
func (in *NamePath) DeepCopy() *NamePath
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NamePath.
func (*NamePath) DeepCopyInto ¶
func (in *NamePath) DeepCopyInto(out *NamePath)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type PrometheusSpec ¶
type PrometheusSpec struct {
    // JmxExporterJar is the path to the Prometheus JMX exporter jar in the container.
    JmxExporterJar string `json:"jmxExporterJar"`
    // Port is the port of the HTTP server run by the Prometheus JMX exporter.
    // Optional.
    // If not specified, 8090 will be used as the default.
    Port *int32 `json:"port"`
    // ConfigFile is the path to the custom Prometheus configuration file provided in the Spark image.
    // ConfigFile takes precedence over Configuration, which is shown below.
    ConfigFile *string `json:"configFile,omitempty"`
    // Configuration is the content of the Prometheus configuration needed by the Prometheus JMX exporter.
    // Optional.
    // If not specified, the content in spark-docker/conf/prometheus.yaml will be used.
    // Configuration has no effect if ConfigFile is set.
    Configuration *string `json:"configuration,omitempty"`
}
PrometheusSpec defines the Prometheus specification when Prometheus is to be used for collecting and exposing metrics.
func (*PrometheusSpec) DeepCopy ¶
func (in *PrometheusSpec) DeepCopy() *PrometheusSpec
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PrometheusSpec.
func (*PrometheusSpec) DeepCopyInto ¶
func (in *PrometheusSpec) DeepCopyInto(out *PrometheusSpec)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type RestartPolicy ¶
type RestartPolicy struct {
    Type RestartPolicyType `json:"type,omitempty"`
    // FailureRetries are the number of times to retry a failed application before giving up in a particular case.
    // This is best effort and actual retry attempts can be >= the value specified due to caching.
    // These are required if RestartPolicy is OnFailure.
    OnSubmissionFailureRetries *int32 `json:"onSubmissionFailureRetries,omitempty"`
    OnFailureRetries           *int32 `json:"onFailureRetries,omitempty"`
    // Interval to wait between successive retries of a failed application.
    OnSubmissionFailureRetryInterval *int64 `json:"onSubmissionFailureRetryInterval,omitempty"`
    OnFailureRetryInterval           *int64 `json:"onFailureRetryInterval,omitempty"`
}
RestartPolicy describes whether and under which conditions the controller should restart a terminated application. It completely defines the actions to be taken on any kind of failure during an application run.
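For instance, a policy that retries both failed submissions and failed runs up to three times, waiting ten seconds between attempts (the values are illustrative):

retries := int32(3)
intervalSeconds := int64(10)
policy := v1beta1.RestartPolicy{
    Type:                             v1beta1.OnFailure,
    OnFailureRetries:                 &retries,
    OnFailureRetryInterval:           &intervalSeconds,
    OnSubmissionFailureRetries:       &retries,
    OnSubmissionFailureRetryInterval: &intervalSeconds,
}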
func (*RestartPolicy) DeepCopy ¶
func (in *RestartPolicy) DeepCopy() *RestartPolicy
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RestartPolicy.
func (*RestartPolicy) DeepCopyInto ¶
func (in *RestartPolicy) DeepCopyInto(out *RestartPolicy)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type RestartPolicyType ¶
type RestartPolicyType string
const (
    Never     RestartPolicyType = "Never"
    OnFailure RestartPolicyType = "OnFailure"
    Always    RestartPolicyType = "Always"
)
type ScheduleState ¶
type ScheduleState string
const (
    FailedValidationState ScheduleState = "FailedValidation"
    ScheduledState        ScheduleState = "Scheduled"
)
type ScheduledSparkApplication ¶
type ScheduledSparkApplication struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   ScheduledSparkApplicationSpec   `json:"spec,omitempty"`
    Status ScheduledSparkApplicationStatus `json:"status,omitempty"`
}
ScheduledSparkApplication is the Schema for the scheduledsparkapplications API.
func (*ScheduledSparkApplication) DeepCopy ¶
func (in *ScheduledSparkApplication) DeepCopy() *ScheduledSparkApplication
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduledSparkApplication.
func (*ScheduledSparkApplication) DeepCopyInto ¶
func (in *ScheduledSparkApplication) DeepCopyInto(out *ScheduledSparkApplication)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (*ScheduledSparkApplication) DeepCopyObject ¶
func (in *ScheduledSparkApplication) DeepCopyObject() runtime.Object
DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
type ScheduledSparkApplicationList ¶
type ScheduledSparkApplicationList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []ScheduledSparkApplication `json:"items"`
}
ScheduledSparkApplicationList contains a list of ScheduledSparkApplication.
func (*ScheduledSparkApplicationList) DeepCopy ¶
func (in *ScheduledSparkApplicationList) DeepCopy() *ScheduledSparkApplicationList
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduledSparkApplicationList.
func (*ScheduledSparkApplicationList) DeepCopyInto ¶
func (in *ScheduledSparkApplicationList) DeepCopyInto(out *ScheduledSparkApplicationList)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (*ScheduledSparkApplicationList) DeepCopyObject ¶
func (in *ScheduledSparkApplicationList) DeepCopyObject() runtime.Object
DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
type ScheduledSparkApplicationSpec ¶
type ScheduledSparkApplicationSpec struct {
    // Schedule is a cron schedule on which the application should run.
    Schedule string `json:"schedule"`
    // Template is a template from which SparkApplication instances can be created.
    Template SparkApplicationSpec `json:"template"`
    // Suspend is a flag telling the controller to suspend subsequent runs of the application if set to true.
    // Optional.
    // Defaults to false.
    Suspend *bool `json:"suspend,omitempty"`
    // ConcurrencyPolicy is the policy governing concurrent SparkApplication runs.
    ConcurrencyPolicy ConcurrencyPolicy `json:"concurrencyPolicy,omitempty"`
    // SuccessfulRunHistoryLimit is the number of past successful runs of the application to keep.
    // Optional.
    // Defaults to 1.
    SuccessfulRunHistoryLimit *int32 `json:"successfulRunHistoryLimit,omitempty"`
    // FailedRunHistoryLimit is the number of past failed runs of the application to keep.
    // Optional.
    // Defaults to 1.
    FailedRunHistoryLimit *int32 `json:"failedRunHistoryLimit,omitempty"`
}
ScheduledSparkApplicationSpec defines the desired state of ScheduledSparkApplication.
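A sketch of a spec that runs an application every ten minutes and skips a run while the previous one is still going; appSpec is assumed to be a SparkApplicationSpec built elsewhere:

suspend := false
historyLimit := int32(3)
scheduledSpec := v1beta1.ScheduledSparkApplicationSpec{
    Schedule:                  "*/10 * * * *", // standard cron syntax
    Template:                  appSpec,
    Suspend:                   &suspend,
    ConcurrencyPolicy:         v1beta1.ConcurrencyForbid,
    SuccessfulRunHistoryLimit: &historyLimit,
    FailedRunHistoryLimit:     &historyLimit,
}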
func (*ScheduledSparkApplicationSpec) DeepCopy ¶
func (in *ScheduledSparkApplicationSpec) DeepCopy() *ScheduledSparkApplicationSpec
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduledSparkApplicationSpec.
func (*ScheduledSparkApplicationSpec) DeepCopyInto ¶
func (in *ScheduledSparkApplicationSpec) DeepCopyInto(out *ScheduledSparkApplicationSpec)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type ScheduledSparkApplicationStatus ¶
type ScheduledSparkApplicationStatus struct {
    // LastRun is the time when the last run of the application started.
    LastRun metav1.Time `json:"lastRun,omitempty"`
    // NextRun is the time when the next run of the application will start.
    NextRun metav1.Time `json:"nextRun,omitempty"`
    // LastRunName is the name of the SparkApplication for the most recent run of the application.
    LastRunName string `json:"lastRunName,omitempty"`
    // PastSuccessfulRunNames keeps the names of SparkApplications for past successful runs.
    PastSuccessfulRunNames []string `json:"pastSuccessfulRunNames,omitempty"`
    // PastFailedRunNames keeps the names of SparkApplications for past failed runs.
    PastFailedRunNames []string `json:"pastFailedRunNames,omitempty"`
    // ScheduleState is the current scheduling state of the application.
    ScheduleState ScheduleState `json:"scheduleState,omitempty"`
    // Reason tells why the ScheduledSparkApplication is in the particular ScheduleState.
    Reason string `json:"reason,omitempty"`
}
ScheduledSparkApplicationStatus defines the observed state of ScheduledSparkApplication.
func (*ScheduledSparkApplicationStatus) DeepCopy ¶
func (in *ScheduledSparkApplicationStatus) DeepCopy() *ScheduledSparkApplicationStatus
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduledSparkApplicationStatus.
func (*ScheduledSparkApplicationStatus) DeepCopyInto ¶
func (in *ScheduledSparkApplicationStatus) DeepCopyInto(out *ScheduledSparkApplicationStatus)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type SecretInfo ¶
type SecretInfo struct {
    Name string     `json:"name"`
    Path string     `json:"path"`
    Type SecretType `json:"secretType"`
}
SecretInfo captures information of a secret.
func (*SecretInfo) DeepCopy ¶
func (in *SecretInfo) DeepCopy() *SecretInfo
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SecretInfo.
func (*SecretInfo) DeepCopyInto ¶
func (in *SecretInfo) DeepCopyInto(out *SecretInfo)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type SecretType ¶
type SecretType string
SecretType tells the type of a secret.
const (
    // GCPServiceAccountSecret is for secrets from a GCP service account JSON key file that need
    // the environment variable GOOGLE_APPLICATION_CREDENTIALS.
    GCPServiceAccountSecret SecretType = "GCPServiceAccount"
    // HadoopDelegationTokenSecret is for secrets from a Hadoop delegation token that need the
    // environment variable HADOOP_TOKEN_FILE_LOCATION.
    HadoopDelegationTokenSecret SecretType = "HadoopDelegationToken"
    // GenericType is for secrets that need no special handling.
    GenericType SecretType = "Generic"
)
An enumeration of the supported secret types.
type SparkApplication ¶
type SparkApplication struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   SparkApplicationSpec   `json:"spec,omitempty"`
    Status SparkApplicationStatus `json:"status,omitempty"`
}
SparkApplication is the Schema for the sparkapplications API.
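A minimal sketch of a SparkApplication object as it might be built in Go, assuming the usual metav1 import ("k8s.io/apimachinery/pkg/apis/meta/v1"); the image, class, and jar path are hypothetical:

image := "gcr.io/example/spark:v2.4.0"
mainClass := "org.apache.spark.examples.SparkPi"
mainFile := "local:///opt/spark/examples/jars/spark-examples.jar"
app := &v1beta1.SparkApplication{
    ObjectMeta: metav1.ObjectMeta{Name: "spark-pi", Namespace: "default"},
    Spec: v1beta1.SparkApplicationSpec{
        Type:                v1beta1.ScalaApplicationType,
        Mode:                v1beta1.ClusterMode,
        SparkVersion:        "2.4.0",
        Image:               &image,
        MainClass:           &mainClass,
        MainApplicationFile: &mainFile,
    },
}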
func (*SparkApplication) DeepCopy ¶
func (in *SparkApplication) DeepCopy() *SparkApplication
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SparkApplication.
func (*SparkApplication) DeepCopyInto ¶
func (in *SparkApplication) DeepCopyInto(out *SparkApplication)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (*SparkApplication) DeepCopyObject ¶
func (in *SparkApplication) DeepCopyObject() runtime.Object
DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (*SparkApplication) ExposeDriverMetrics ¶
func (s *SparkApplication) ExposeDriverMetrics() bool
ExposeDriverMetrics reports whether driver metrics should be exposed.
func (*SparkApplication) ExposeExecutorMetrics ¶
func (s *SparkApplication) ExposeExecutorMetrics() bool
ExposeExecutorMetrics reports whether executor metrics should be exposed.
func (*SparkApplication) HasPrometheusConfigFile ¶
func (s *SparkApplication) HasPrometheusConfigFile() bool
HasPrometheusConfigFile reports whether Prometheus monitoring uses a configuration file in the container.
func (*SparkApplication) PrometheusMonitoringEnabled ¶
func (s *SparkApplication) PrometheusMonitoringEnabled() bool
PrometheusMonitoringEnabled reports whether Prometheus monitoring is enabled.
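Taken together, these helpers let a controller branch on the monitoring configuration; a sketch:

if app.PrometheusMonitoringEnabled() {
    if app.ExposeDriverMetrics() {
        // Annotate the driver pod (or its service) for scraping.
    }
    if app.HasPrometheusConfigFile() {
        // Use the exporter config file shipped in the image rather than
        // materializing Spec.Monitoring.Prometheus.Configuration.
    }
}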
type SparkApplicationList ¶
type SparkApplicationList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []SparkApplication `json:"items"`
}
SparkApplicationList contains a list of SparkApplication.
func (*SparkApplicationList) DeepCopy ¶
func (in *SparkApplicationList) DeepCopy() *SparkApplicationList
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SparkApplicationList.
func (*SparkApplicationList) DeepCopyInto ¶
func (in *SparkApplicationList) DeepCopyInto(out *SparkApplicationList)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (*SparkApplicationList) DeepCopyObject ¶
func (in *SparkApplicationList) DeepCopyObject() runtime.Object
DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
type SparkApplicationSpec ¶
type SparkApplicationSpec struct {
    // Type tells the type of the Spark application.
    Type SparkApplicationType `json:"type"`
    // SparkVersion is the version of Spark the application uses.
    SparkVersion string `json:"sparkVersion"`
    // Mode is the deployment mode of the Spark application.
    Mode DeployMode `json:"mode,omitempty"`
    // Image is the container image for the driver, executor, and init-container. Any custom container images for the
    // driver, executor, or init-container take precedence over this.
    // Optional.
    Image *string `json:"image,omitempty"`
    // InitContainerImage is the image of the init-container to use. Overrides Spec.Image if set.
    // Optional.
    InitContainerImage *string `json:"initContainerImage,omitempty"`
    // ImagePullPolicy is the image pull policy for the driver, executor, and init-container.
    // Optional.
    ImagePullPolicy *string `json:"imagePullPolicy,omitempty"`
    // ImagePullSecrets is the list of image-pull secrets.
    // Optional.
    ImagePullSecrets []string `json:"imagePullSecrets,omitempty"`
    // MainClass is the fully-qualified main class of the Spark application.
    // This only applies to Java/Scala Spark applications.
    // Optional.
    MainClass *string `json:"mainClass,omitempty"`
    // MainApplicationFile is the path to a bundled JAR, Python, or R file of the application.
    // Optional.
    MainApplicationFile *string `json:"mainApplicationFile"`
    // Arguments is a list of arguments to be passed to the application.
    // Optional.
    Arguments []string `json:"arguments,omitempty"`
    // SparkConf carries user-specified Spark configuration properties as they would use the "--conf" option in
    // spark-submit.
    // Optional.
    SparkConf map[string]string `json:"sparkConf,omitempty"`
    // HadoopConf carries user-specified Hadoop configuration properties as they would use the "--conf" option
    // in spark-submit. The SparkApplication controller automatically adds the prefix "spark.hadoop." to Hadoop
    // configuration properties.
    // Optional.
    HadoopConf map[string]string `json:"hadoopConf,omitempty"`
    // SparkConfigMap carries the name of the ConfigMap containing Spark configuration files such as log4j.properties.
    // The controller will add environment variable SPARK_CONF_DIR to the path where the ConfigMap is mounted to.
    // Optional.
    SparkConfigMap *string `json:"sparkConfigMap,omitempty"`
    // HadoopConfigMap carries the name of the ConfigMap containing Hadoop configuration files such as core-site.xml.
    // The controller will add environment variable HADOOP_CONF_DIR to the path where the ConfigMap is mounted to.
    // Optional.
    HadoopConfigMap *string `json:"hadoopConfigMap,omitempty"`
    // Volumes is the list of Kubernetes volumes that can be mounted by the driver and/or executors.
    // Optional.
    Volumes []corev1.Volume `json:"volumes,omitempty"`
    // Driver is the driver specification.
    Driver DriverSpec `json:"driver"`
    // Executor is the executor specification.
    Executor ExecutorSpec `json:"executor"`
    // Deps captures all possible types of dependencies of a Spark application.
    Deps Dependencies `json:"deps"`
    // RestartPolicy defines the policy on if and in which conditions the controller should restart an application.
    RestartPolicy RestartPolicy `json:"restartPolicy,omitempty"`
    // NodeSelector is the Kubernetes node selector to be added to the driver and executor pods.
    // This field is mutually exclusive with nodeSelector at podSpec level (driver or executor).
    // This field will be deprecated in future versions (at SparkApplicationSpec level).
    // Optional.
    NodeSelector map[string]string `json:"nodeSelector,omitempty"`
    // FailureRetries is the number of times to retry a failed application before giving up.
    // This is best effort and actual retry attempts can be >= the value specified.
    // Optional.
    FailureRetries *int32 `json:"failureRetries,omitempty"`
    // RetryInterval is the unit of intervals in seconds between submission retries.
    // Optional.
    RetryInterval *int64 `json:"retryInterval,omitempty"`
    // PythonVersion sets the major Python version of the docker image used to run the driver and
    // executor containers. Can either be 2 or 3, default 2.
    // Optional.
    PythonVersion *string `json:"pythonVersion,omitempty"`
    // MemoryOverheadFactor sets the Memory Overhead Factor that will allocate memory to non-JVM memory.
    // For JVM-based jobs this value will default to 0.10, for non-JVM jobs 0.40. The value of this field
    // will be overridden by Spec.Driver.MemoryOverhead and Spec.Executor.MemoryOverhead if they are set.
    // Optional.
    MemoryOverheadFactor *string `json:"memoryOverheadFactor,omitempty"`
    // Monitoring configures how monitoring is handled.
    // Optional.
    Monitoring *MonitoringSpec `json:"monitoring,omitempty"`
    // BatchScheduler configures which batch scheduler will be used for scheduling.
    // Optional.
    BatchScheduler *string `json:"batchScheduler,omitempty"`
}
SparkApplicationSpec defines the desired state of SparkApplication.
func (*SparkApplicationSpec) DeepCopy ¶
func (in *SparkApplicationSpec) DeepCopy() *SparkApplicationSpec
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SparkApplicationSpec.
func (*SparkApplicationSpec) DeepCopyInto ¶
func (in *SparkApplicationSpec) DeepCopyInto(out *SparkApplicationSpec)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type SparkApplicationStatus ¶
type SparkApplicationStatus struct {
    // SparkApplicationID is set by the Spark distribution (via the spark.app.id config) on the driver and executor pods.
    SparkApplicationID string `json:"sparkApplicationId,omitempty"`
    // SubmissionID is a unique ID of the current submission of the application.
    SubmissionID string `json:"submissionID,omitempty"`
    // LastSubmissionAttemptTime is the time for the last application submission attempt.
    LastSubmissionAttemptTime metav1.Time `json:"lastSubmissionAttemptTime,omitempty"`
    // TerminationTime is the time when the application runs to completion, if it does.
    TerminationTime metav1.Time `json:"terminationTime,omitempty"`
    // DriverInfo has information about the driver.
    DriverInfo DriverInfo `json:"driverInfo"`
    // AppState tells the overall application state.
    AppState ApplicationState `json:"applicationState,omitempty"`
    // ExecutorState records the state of executors by executor Pod names.
    ExecutorState map[string]ExecutorState `json:"executorState,omitempty"`
    // ExecutionAttempts is the total number of attempts to run a submitted application to completion.
    // Incremented upon each attempted run of the application and reset upon invalidation.
    ExecutionAttempts int32 `json:"executionAttempts,omitempty"`
    // SubmissionAttempts is the total number of attempts to submit an application to run.
    // Incremented upon each attempted submission of the application and reset upon invalidation and rerun.
    SubmissionAttempts int32 `json:"submissionAttempts,omitempty"`
}
SparkApplicationStatus defines the observed state of SparkApplication.
func (*SparkApplicationStatus) DeepCopy ¶
func (in *SparkApplicationStatus) DeepCopy() *SparkApplicationStatus
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SparkApplicationStatus.
func (*SparkApplicationStatus) DeepCopyInto ¶
func (in *SparkApplicationStatus) DeepCopyInto(out *SparkApplicationStatus)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type SparkApplicationType ¶
type SparkApplicationType string
SparkApplicationType describes the type of a Spark application.
const (
    JavaApplicationType   SparkApplicationType = "Java"
    ScalaApplicationType  SparkApplicationType = "Scala"
    PythonApplicationType SparkApplicationType = "Python"
    RApplicationType      SparkApplicationType = "R"
)
Different types of Spark applications.
type SparkPodSpec ¶
type SparkPodSpec struct {
    // Cores is the number of CPU cores to request for the pod.
    // Optional.
    Cores *float32 `json:"cores,omitempty"`
    // CoreLimit specifies a hard limit on CPU cores for the pod.
    // Optional.
    CoreLimit *string `json:"coreLimit,omitempty"`
    // Memory is the amount of memory to request for the pod.
    // Optional.
    Memory *string `json:"memory,omitempty"`
    // MemoryOverhead is the amount of off-heap memory to allocate in cluster mode, in MiB unless otherwise specified.
    // Optional.
    MemoryOverhead *string `json:"memoryOverhead,omitempty"`
    // GPU specifies the GPU requirement for the pod.
    // Optional.
    GPU *GPUSpec `json:"gpu,omitempty"`
    // Image is the container image to use. Overrides Spec.Image if set.
    // Optional.
    Image *string `json:"image,omitempty"`
    // ConfigMaps carries information of other ConfigMaps to add to the pod.
    // Optional.
    ConfigMaps []NamePath `json:"configMaps,omitempty"`
    // Secrets carries information of secrets to add to the pod.
    // Optional.
    Secrets []SecretInfo `json:"secrets,omitempty"`
    // EnvVars carries the environment variables to add to the pod.
    // Optional.
    EnvVars map[string]string `json:"envVars,omitempty"`
    // EnvSecretKeyRefs holds a mapping from environment variable names to SecretKeyRefs.
    // Optional.
    EnvSecretKeyRefs map[string]NameKey `json:"envSecretKeyRefs,omitempty"`
    // Labels are the Kubernetes labels to be added to the pod.
    // Optional.
    Labels map[string]string `json:"labels,omitempty"`
    // Annotations are the Kubernetes annotations to be added to the pod.
    // Optional.
    Annotations map[string]string `json:"annotations,omitempty"`
    // VolumeMounts specifies the volumes listed in ".spec.volumes" to mount into the main container's filesystem.
    // Optional.
    VolumeMounts []corev1.VolumeMount `json:"volumeMounts,omitempty"`
    // Affinity specifies the affinity/anti-affinity settings for the pod.
    // Optional.
    Affinity *corev1.Affinity `json:"affinity,omitempty"`
    // Tolerations specifies the tolerations listed in ".spec.tolerations" to be applied to the pod.
    // Optional.
    Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
    // SecurityContext specifies the PodSecurityContext to apply.
    // Optional.
    SecurityContext *corev1.PodSecurityContext `json:"securityContext,omitempty"`
    // SchedulerName specifies the scheduler to be used for scheduling the pod.
    // Optional.
    SchedulerName *string `json:"schedulerName,omitempty"`
    // Sidecars is a list of sidecar containers that run alongside the main Spark container.
    // Optional.
    Sidecars []corev1.Container `json:"sidecars,omitempty"`
    // HostNetwork indicates whether to request host networking for the pod or not.
    // Optional.
    HostNetwork *bool `json:"hostNetwork,omitempty"`
    // NodeSelector is the Kubernetes node selector to be added to the driver and executor pods.
    // This field is mutually exclusive with nodeSelector at SparkApplication level (which will be deprecated).
    // Optional.
    NodeSelector map[string]string `json:"nodeSelector,omitempty"`
    // DNSConfig specifies the DNS settings for the pod, following the Kubernetes specifications.
    // Optional.
    DNSConfig *corev1.PodDNSConfig `json:"dnsConfig,omitempty"`
}
SparkPodSpec defines common things that can be customized for a Spark driver or executor pod. TODO: investigate if we should use v1.PodSpec and limit what can be set instead.
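Because DriverSpec and ExecutorSpec both inline SparkPodSpec, pod-level customization is written the same way for either role; a sketch with hypothetical values:

cores := float32(1)
memory := "512m"
instances := int32(2)
driver := v1beta1.DriverSpec{
    SparkPodSpec: v1beta1.SparkPodSpec{
        Cores:  &cores,
        Memory: &memory,
        Labels: map[string]string{"version": "2.4.0"},
    },
}
executor := v1beta1.ExecutorSpec{
    SparkPodSpec: v1beta1.SparkPodSpec{
        Cores:  &cores,
        Memory: &memory,
    },
    Instances: &instances,
}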
func (*SparkPodSpec) DeepCopy ¶
func (in *SparkPodSpec) DeepCopy() *SparkPodSpec
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SparkPodSpec.
func (*SparkPodSpec) DeepCopyInto ¶
func (in *SparkPodSpec) DeepCopyInto(out *SparkPodSpec)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.