spark

package
v0.4.1-saahil.spark.in...
Warning

This package is not in the latest version of its module.

Published: Jun 30, 2021 License: Apache-2.0 Imports: 16 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func DriverServiceName added in v0.4.0

func DriverServiceName(name string) string

func FrameworkConfigMapName added in v0.4.0

func FrameworkConfigMapName(instance string, comp Component) string

func HeadlessServiceName

func HeadlessServiceName(name string) string

func HorizontalPodAutoscalerObjectMeta

func HorizontalPodAutoscalerObjectMeta(sc *dcv1alpha1.SparkCluster) metav1.ObjectMeta

HorizontalPodAutoscalerObjectMeta returns the ObjectMeta object used to identify new HPA objects.

func InstanceObjectName

func InstanceObjectName(instance string, comp Component) string

InstanceObjectName returns the name that will be used to create most owned cluster resources.

func KeyTabConfigMapName added in v0.4.0

func KeyTabConfigMapName(instance string, comp Component) string

func MasterServiceName added in v0.4.0

func MasterServiceName(name string) string

MasterServiceName returns the name of the service that points to the spark master pod.
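
The naming helpers derive stable, per-instance resource names from a SparkCluster instance name. A minimal sketch of using them together is shown below; the import path is hypothetical, since the full module path is truncated on this page.

package main

import (
	"fmt"

	// Hypothetical import path; replace with the real module path of this package.
	spark "example.com/distributed-compute/pkg/resources/spark"
)

func main() {
	instance := "analytics"

	// Each helper returns the name of an owned resource for this instance.
	fmt.Println(spark.MasterServiceName(instance))   // service fronting the spark master pod
	fmt.Println(spark.HeadlessServiceName(instance)) // headless service used for worker discovery
	fmt.Println(spark.DriverServiceName(instance))   // service exposing the driver
}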

func MetadataLabels

func MetadataLabels(sc *dcv1alpha1.SparkCluster) map[string]string

MetadataLabels returns standard metadata for spark resources.

func MetadataLabelsWithComponent

func MetadataLabelsWithComponent(sc *dcv1alpha1.SparkCluster, comp Component) map[string]string

MetadataLabelsWithComponent returns standard component metadata for spark resources.

func NewClusterDriverNetworkPolicy added in v0.4.0

func NewClusterDriverNetworkPolicy(sc *dcv1alpha1.SparkCluster) *networkingv1.NetworkPolicy

func NewClusterMasterNetworkPolicy added in v0.4.0

func NewClusterMasterNetworkPolicy(sc *dcv1alpha1.SparkCluster) *networkingv1.NetworkPolicy

func NewClusterWorkerNetworkPolicy added in v0.4.0

func NewClusterWorkerNetworkPolicy(sc *dcv1alpha1.SparkCluster) *networkingv1.NetworkPolicy

func NewFrameworkConfigMap added in v0.4.0

func NewFrameworkConfigMap(sc *dcv1alpha1.SparkCluster) *corev1.ConfigMap

NewFrameworkConfigMap generates a ConfigMap that represents a spark-defaults.conf file built from the provided configuration.

func NewHeadlessService

func NewHeadlessService(sc *dcv1alpha1.SparkCluster) *corev1.Service

NewHeadlessService creates a headless service that points to the worker nodes.
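
A minimal reconciler-style sketch of creating this service and tying its lifecycle to the SparkCluster via an owner reference; createHeadlessService and the import paths for spark and dcv1alpha1 are illustrative, not part of this package.

package reconcile

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	// Hypothetical import paths for this package and its API types.
	dcv1alpha1 "example.com/distributed-compute/api/v1alpha1"
	"example.com/distributed-compute/pkg/resources/spark"
)

// createHeadlessService is an illustrative helper: it generates the headless
// worker service and marks the SparkCluster as its owner before creating it.
func createHeadlessService(ctx context.Context, c client.Client, scheme *runtime.Scheme, sc *dcv1alpha1.SparkCluster) error {
	svc := spark.NewHeadlessService(sc)
	if err := controllerutil.SetControllerReference(sc, svc, scheme); err != nil {
		return err
	}
	return c.Create(ctx, svc)
}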

func NewHorizontalPodAutoscaler

func NewHorizontalPodAutoscaler(sc *dcv1alpha1.SparkCluster) (*autoscalingv2beta2.HorizontalPodAutoscaler, error)

NewHorizontalPodAutoscaler generates an HPA that targets a SparkCluster resource.

The metrics-server must be deployed separately, and the worker pods must declare CPU resource requests for this object to have any effect.
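
A sketch of generating and creating the HPA from a reconciler, assuming those prerequisites are met; createAutoscaler and the import paths are illustrative.

package reconcile

import (
	"context"
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/client"

	// Hypothetical import paths for this package and its API types.
	dcv1alpha1 "example.com/distributed-compute/api/v1alpha1"
	"example.com/distributed-compute/pkg/resources/spark"
)

// createAutoscaler is an illustrative helper that builds the HPA for a
// SparkCluster and submits it to the API server.
func createAutoscaler(ctx context.Context, c client.Client, sc *dcv1alpha1.SparkCluster) error {
	hpa, err := spark.NewHorizontalPodAutoscaler(sc)
	if err != nil {
		return fmt.Errorf("building HPA for %s: %w", sc.Name, err)
	}
	return c.Create(ctx, hpa)
}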

func NewKeyTabConfigMap added in v0.4.0

func NewKeyTabConfigMap(sc *dcv1alpha1.SparkCluster) *corev1.ConfigMap

NewKeyTabConfigMap generates a ConfigMap that represents the Kerberos keytab configuration built from the provided configuration.

func NewMasterService

func NewMasterService(sc *dcv1alpha1.SparkCluster) *corev1.Service

NewMasterService creates a ClusterIP service that points to the master node. The dashboard port is exposed when enabled.

func NewPodSecurityPolicyRBAC

func NewPodSecurityPolicyRBAC(sc *dcv1alpha1.SparkCluster) (*rbacv1.Role, *rbacv1.RoleBinding)

NewPodSecurityPolicyRBAC generates the role and role binding required to use a pod security policy. The role is bound to the service account used by the spark cluster pods.
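
A sketch of wiring the returned pair into a reconciler; bindPodSecurityPolicy and the import paths are illustrative.

package reconcile

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"

	// Hypothetical import paths for this package and its API types.
	dcv1alpha1 "example.com/distributed-compute/api/v1alpha1"
	"example.com/distributed-compute/pkg/resources/spark"
)

// bindPodSecurityPolicy is an illustrative helper that creates the Role and
// RoleBinding returned by NewPodSecurityPolicyRBAC.
func bindPodSecurityPolicy(ctx context.Context, c client.Client, sc *dcv1alpha1.SparkCluster) error {
	role, binding := spark.NewPodSecurityPolicyRBAC(sc)
	if err := c.Create(ctx, role); err != nil {
		return err
	}
	return c.Create(ctx, binding)
}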

func NewServiceAccount

func NewServiceAccount(sc *dcv1alpha1.SparkCluster) *corev1.ServiceAccount

NewServiceAccount generates a service account resource without API access.

func NewSparkDriverService added in v0.4.0

func NewSparkDriverService(sc *dcv1alpha1.SparkCluster) *corev1.Service

NewSparkDriverService creates a ClusterIP service that exposes the driver UI port.

func NewStatefulSet

func NewStatefulSet(sc *dcv1alpha1.SparkCluster, comp Component) (*appsv1.StatefulSet, error)

NewStatefulSet generates a StatefulSet configured to manage Spark cluster nodes. The configuration is based on the provided spec and the desired Component workload.
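
A sketch of generating both workloads by iterating over the Component constants; createWorkloads and the import paths are illustrative.

package reconcile

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"

	// Hypothetical import paths for this package and its API types.
	dcv1alpha1 "example.com/distributed-compute/api/v1alpha1"
	"example.com/distributed-compute/pkg/resources/spark"
)

// createWorkloads is an illustrative helper showing how the Component constants
// select which StatefulSet is generated.
func createWorkloads(ctx context.Context, c client.Client, sc *dcv1alpha1.SparkCluster) error {
	for _, comp := range []spark.Component{spark.ComponentMaster, spark.ComponentWorker} {
		sts, err := spark.NewStatefulSet(sc, comp)
		if err != nil {
			return err
		}
		if err := c.Create(ctx, sts); err != nil {
			return err
		}
	}
	return nil
}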

func SelectorLabels

func SelectorLabels(sc *dcv1alpha1.SparkCluster) map[string]string

SelectorLabels returns a resource selector clause for spark resources.

func SelectorLabelsWithComponent

func SelectorLabelsWithComponent(sc *dcv1alpha1.SparkCluster, comp Component) map[string]string

SelectorLabelsWithComponent returns a resource component selector clause for spark resources.
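
A sketch of using the component selector to list the worker pods of a cluster; listWorkerPods and the import paths are illustrative.

package reconcile

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Hypothetical import paths for this package and its API types.
	dcv1alpha1 "example.com/distributed-compute/api/v1alpha1"
	"example.com/distributed-compute/pkg/resources/spark"
)

// listWorkerPods is an illustrative helper that uses the component selector
// labels to find pods belonging to the worker workload of a SparkCluster.
func listWorkerPods(ctx context.Context, c client.Client, sc *dcv1alpha1.SparkCluster) (*corev1.PodList, error) {
	pods := &corev1.PodList{}
	labels := spark.SelectorLabelsWithComponent(sc, spark.ComponentWorker)
	err := c.List(ctx, pods, client.InNamespace(sc.Namespace), client.MatchingLabels(labels))
	return pods, err
}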

Types

type Component

type Component string

Component is used to drive Kubernetes object generation for different spark types.

const (
	// ComponentNone indicates a generic spark resource.
	ComponentNone Component = "none"
	// ComponentMaster indicates a spark master resource.
	ComponentMaster Component = "master"
	// ComponentWorker indicates a spark worker resource.
	ComponentWorker Component = "worker"
	// ApplicationName defines the static name used to generate spark object metadata.
	ApplicationName = "spark"
)
