Documentation ¶
Index ¶
- type Cluster
- func (r *Cluster) ClusterConfig() *pulumi.Output
- func (r *Cluster) ID() *pulumi.IDOutput
- func (r *Cluster) Labels() *pulumi.MapOutput
- func (r *Cluster) Name() *pulumi.StringOutput
- func (r *Cluster) Project() *pulumi.StringOutput
- func (r *Cluster) Region() *pulumi.StringOutput
- func (r *Cluster) URN() *pulumi.URNOutput
- type ClusterArgs
- type ClusterState
- type Job
- func (r *Job) DriverControlsFilesUri() *pulumi.StringOutput
- func (r *Job) DriverOutputResourceUri() *pulumi.StringOutput
- func (r *Job) ForceDelete() *pulumi.BoolOutput
- func (r *Job) HadoopConfig() *pulumi.Output
- func (r *Job) HiveConfig() *pulumi.Output
- func (r *Job) ID() *pulumi.IDOutput
- func (r *Job) Labels() *pulumi.MapOutput
- func (r *Job) PigConfig() *pulumi.Output
- func (r *Job) Placement() *pulumi.Output
- func (r *Job) Project() *pulumi.StringOutput
- func (r *Job) PysparkConfig() *pulumi.Output
- func (r *Job) Reference() *pulumi.Output
- func (r *Job) Region() *pulumi.StringOutput
- func (r *Job) Scheduling() *pulumi.Output
- func (r *Job) SparkConfig() *pulumi.Output
- func (r *Job) SparksqlConfig() *pulumi.Output
- func (r *Job) Status() *pulumi.Output
- func (r *Job) URN() *pulumi.URNOutput
- type JobArgs
- type JobState
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Cluster ¶
type Cluster struct {
// contains filtered or unexported fields
}
Manages a Cloud Dataproc cluster resource within GCP. For more information see [the official dataproc documentation](https://cloud.google.com/dataproc/).
!> **Warning:** Due to limitations of the API, all arguments except `labels`, `cluster_config.worker_config.num_instances`, and `cluster_config.preemptible_worker_config.num_instances` are non-updatable. Changing any other argument will cause the whole cluster to be recreated!
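Example usage — a minimal sketch of registering a cluster, assuming the legacy Pulumi Go SDK entry point (`pulumi.Run`), the import paths shown below, and that plain Go values are accepted for the `interface{}`-typed argument fields:

package main

import (
    "github.com/pulumi/pulumi-gcp/sdk/go/gcp/dataproc"
    "github.com/pulumi/pulumi/sdk/go/pulumi"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        // Register a new Dataproc cluster. Only labels and the worker /
        // preemptible-worker instance counts can change without recreating it.
        cluster, err := dataproc.NewCluster(ctx, "simplecluster", &dataproc.ClusterArgs{
            Region: "us-central1",
            Labels: map[string]interface{}{"env": "dev"},
        })
        if err != nil {
            return err
        }
        _ = cluster // e.g. cluster.Name() can feed downstream resources
        return nil
    })
}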
func GetCluster ¶
func GetCluster(ctx *pulumi.Context, name string, id pulumi.ID, state *ClusterState, opts ...pulumi.ResourceOpt) (*Cluster, error)
GetCluster gets an existing Cluster resource's state with the given name, ID, and optional state properties that are used to uniquely qualify the lookup (nil if not required).
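For instance, reusing the imports from the Cluster sketch above, an existing cluster can be read back inside the program callback; the resource name and ID value here are purely illustrative:

// lookupCluster is a sketch of reading an already-provisioned cluster's state.
// Passing a nil *ClusterState means no extra properties qualify the lookup.
func lookupCluster(ctx *pulumi.Context) (*dataproc.Cluster, error) {
    return dataproc.GetCluster(ctx, "existing-cluster",
        pulumi.ID("projects/my-project/regions/us-central1/clusters/existing-cluster"), nil)
}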
func NewCluster ¶
func NewCluster(ctx *pulumi.Context, name string, args *ClusterArgs, opts ...pulumi.ResourceOpt) (*Cluster, error)
NewCluster registers a new resource with the given unique name, arguments, and options.
func (*Cluster) ClusterConfig ¶
func (r *Cluster) ClusterConfig() *pulumi.Output
Allows you to configure various aspects of the cluster. Structure defined below.
func (*Cluster) Labels ¶
func (r *Cluster) Labels() *pulumi.MapOutput
The list of labels (key/value pairs) to be applied to instances in the cluster. GCP generates some itself, including `goog-dataproc-cluster-name`, which is the name of the cluster.
func (*Cluster) Name ¶
func (r *Cluster) Name() *pulumi.StringOutput
The name of the cluster, unique within the project and zone.
func (*Cluster) Project ¶
func (r *Cluster) Project() *pulumi.StringOutput
The ID of the project in which the `cluster` will exist. If it is not provided, the provider project is used.
func (*Cluster) Region ¶
func (r *Cluster) Region() *pulumi.StringOutput
The region in which the cluster and associated nodes will be created. Defaults to `global`.
type ClusterArgs ¶
type ClusterArgs struct {
    // Allows you to configure various aspects of the cluster.
    // Structure defined below.
    ClusterConfig interface{}
    // The list of labels (key/value pairs) to be applied to
    // instances in the cluster. GCP generates some itself, including
    // `goog-dataproc-cluster-name`, which is the name of the cluster.
    Labels interface{}
    // The name of the cluster, unique within the project and zone.
    Name interface{}
    // The ID of the project in which the `cluster` will exist. If it
    // is not provided, the provider project is used.
    Project interface{}
    // The region in which the cluster and associated nodes will be created.
    // Defaults to `global`.
    Region interface{}
}
The set of arguments for constructing a Cluster resource.
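Because `ClusterConfig` is typed as `interface{}`, the nested `cluster_config` block is commonly supplied as a map inside the program callback; the key names and nesting below are illustrative assumptions, not the authoritative schema:

// A sketch of ClusterArgs with a nested cluster configuration. Apart from
// labels, only the worker and preemptible-worker instance counts are
// updatable without recreating the cluster.
args := &dataproc.ClusterArgs{
    Region: "us-central1",
    Labels: map[string]interface{}{"team": "data"},
    ClusterConfig: map[string]interface{}{
        "masterConfig": map[string]interface{}{ // assumed key names
            "numInstances": 1,
            "machineType":  "n1-standard-1",
        },
        "workerConfig": map[string]interface{}{
            "numInstances": 2,
        },
    },
}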
type ClusterState ¶
type ClusterState struct {
    // Allows you to configure various aspects of the cluster.
    // Structure defined below.
    ClusterConfig interface{}
    // The list of labels (key/value pairs) to be applied to
    // instances in the cluster. GCP generates some itself, including
    // `goog-dataproc-cluster-name`, which is the name of the cluster.
    Labels interface{}
    // The name of the cluster, unique within the project and zone.
    Name interface{}
    // The ID of the project in which the `cluster` will exist. If it
    // is not provided, the provider project is used.
    Project interface{}
    // The region in which the cluster and associated nodes will be created.
    // Defaults to `global`.
    Region interface{}
}
Input properties used for looking up and filtering Cluster resources.
type Job ¶
type Job struct {
// contains filtered or unexported fields
}
Manages a job resource within a Dataproc cluster within GCP. For more information see [the official dataproc documentation](https://cloud.google.com/dataproc/).
!> **Note:** This resource does not support 'update' and changing any attributes will cause the resource to be recreated.
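Example usage — a minimal sketch of submitting a PySpark job inside the same `pulumi.Run` callback shown for Cluster above; the placement cluster name, GCS path, and nested config keys are illustrative assumptions:

// Register a PySpark job on an assumed existing cluster. Because Job does not
// support update, changing any attribute recreates it; ForceDelete lets
// destroy cancel a still-running job before deleting it.
_, err := dataproc.NewJob(ctx, "pyspark-job", &dataproc.JobArgs{
    Region:      "us-central1",
    ForceDelete: true,
    Placement: map[string]interface{}{
        "clusterName": "simplecluster", // assumed existing cluster
    },
    PysparkConfig: map[string]interface{}{
        "mainPythonFileUri": "gs://my-bucket/jobs/hello.py", // illustrative path
    },
})
if err != nil {
    return err
}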
func GetJob ¶
func GetJob(ctx *pulumi.Context, name string, id pulumi.ID, state *JobState, opts ...pulumi.ResourceOpt) (*Job, error)
GetJob gets an existing Job resource's state with the given name, ID, and optional state properties that are used to uniquely qualify the lookup (nil if not required).
func NewJob ¶
func NewJob(ctx *pulumi.Context, name string, args *JobArgs, opts ...pulumi.ResourceOpt) (*Job, error)
NewJob registers a new resource with the given unique name, arguments, and options.
func (*Job) DriverControlsFilesUri ¶
func (r *Job) DriverControlsFilesUri() *pulumi.StringOutput
If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.
func (*Job) DriverOutputResourceUri ¶
func (r *Job) DriverOutputResourceUri() *pulumi.StringOutput
A URI pointing to the location of the stdout of the job's driver program.
func (*Job) ForceDelete ¶
func (r *Job) ForceDelete() *pulumi.BoolOutput
By default, you can only delete inactive jobs within Dataproc. Setting this to true, and calling destroy, will ensure that the job is first cancelled before issuing the delete.
func (*Job) HadoopConfig ¶
func (r *Job) HadoopConfig() *pulumi.Output
func (*Job) HiveConfig ¶
func (r *Job) HiveConfig() *pulumi.Output
func (*Job) Project ¶
func (r *Job) Project() *pulumi.StringOutput
The project in which the `cluster` can be found and jobs subsequently run against. If it is not provided, the provider project is used.
func (*Job) PysparkConfig ¶
func (r *Job) PysparkConfig() *pulumi.Output
func (*Job) Region ¶
func (r *Job) Region() *pulumi.StringOutput
The Cloud Dataproc region. This essentially determines which clusters are available for this job to be submitted to. If not specified, defaults to `global`.
func (*Job) Scheduling ¶
func (r *Job) Scheduling() *pulumi.Output
Optional. Job scheduling configuration.
func (*Job) SparkConfig ¶
func (r *Job) SparkConfig() *pulumi.Output
func (*Job) SparksqlConfig ¶
func (r *Job) SparksqlConfig() *pulumi.Output
type JobArgs ¶
type JobArgs struct {
    // By default, you can only delete inactive jobs within
    // Dataproc. Setting this to true, and calling destroy, will ensure that the
    // job is first cancelled before issuing the delete.
    ForceDelete interface{}
    HadoopConfig interface{}
    HiveConfig interface{}
    // The list of labels (key/value pairs) to add to the job.
    Labels interface{}
    PigConfig interface{}
    Placement interface{}
    // The project in which the `cluster` can be found and jobs
    // subsequently run against. If it is not provided, the provider project is used.
    Project interface{}
    PysparkConfig interface{}
    Reference interface{}
    // The Cloud Dataproc region. This essentially determines which clusters are available
    // for this job to be submitted to. If not specified, defaults to `global`.
    Region interface{}
    // Optional. Job scheduling configuration.
    Scheduling interface{}
    SparkConfig interface{}
    SparksqlConfig interface{}
}
The set of arguments for constructing a Job resource.
type JobState ¶
type JobState struct {
    // If present, the location of miscellaneous control files which may be used as part
    // of job setup and handling. If not present, control files may be placed in the same
    // location as driver_output_uri.
    DriverControlsFilesUri interface{}
    // A URI pointing to the location of the stdout of the job's driver program.
    DriverOutputResourceUri interface{}
    // By default, you can only delete inactive jobs within
    // Dataproc. Setting this to true, and calling destroy, will ensure that the
    // job is first cancelled before issuing the delete.
    ForceDelete interface{}
    HadoopConfig interface{}
    HiveConfig interface{}
    // The list of labels (key/value pairs) to add to the job.
    Labels interface{}
    PigConfig interface{}
    Placement interface{}
    // The project in which the `cluster` can be found and jobs
    // subsequently run against. If it is not provided, the provider project is used.
    Project interface{}
    PysparkConfig interface{}
    Reference interface{}
    // The Cloud Dataproc region. This essentially determines which clusters are available
    // for this job to be submitted to. If not specified, defaults to `global`.
    Region interface{}
    // Optional. Job scheduling configuration.
    Scheduling interface{}
    SparkConfig interface{}
    SparksqlConfig interface{}
    Status interface{}
}
Input properties used for looking up and filtering Job resources.