awscdkbatchalpha

package module v2.95.1-alpha.0

Published: Sep 9, 2023 License: Apache-2.0

README
AWS Batch Construct Library

---

The APIs of higher level constructs in this module are in developer preview before they become stable. We will only make breaking changes to address unforeseen API issues. Therefore, these APIs are not subject to Semantic Versioning, and breaking changes will be announced in release notes. This means that while you may use them, you may need to update your source code when upgrading to a newer version of this package.


This module is part of the AWS Cloud Development Kit project.

AWS Batch is a batch processing tool for efficiently running hundreds of thousands of computing jobs in AWS. Batch can dynamically provision Amazon EC2 Instances to meet the resource requirements of submitted jobs and simplifies the planning, scheduling, and execution of your batch workloads. Batch achieves this through four different resources:

  • ComputeEnvironments: Contain the resources used to execute Jobs
  • JobDefinitions: Define a type of Job that can be submitted
  • JobQueues: Route waiting Jobs to ComputeEnvironments
  • SchedulingPolicies: Applied to Queues to control how and when Jobs exit the JobQueue and enter the ComputeEnvironment

ComputeEnvironments can be managed or unmanaged. Batch automatically provisions EC2 Instances in a managed ComputeEnvironment and provisions none in an unmanaged ComputeEnvironment. Managed ComputeEnvironments can spin up EC2 Instances backed by ECS, Fargate, or EKS resources (ensure your EKS Cluster has been configured to support a Batch ComputeEnvironment before linking it). You can use Launch Templates and Placement Groups to configure exactly how these resources will be provisioned.

JobDefinitions can use either ECS resources or EKS resources. ECS JobDefinitions can use multiple containers to execute distributed workloads. EKS JobDefinitions can only execute a single container. Submitted Jobs use JobDefinitions as templates.

JobQueues must link to at least one ComputeEnvironment. Jobs exit the Queue in FIFO order unless a SchedulingPolicy is specified.

SchedulingPolicies tell the Scheduler how to choose which Jobs the ComputeEnvironment should execute next.

Use Cases & Examples

Cost Optimization
Spot Instances

Spot instances are significantly discounted EC2 instances that can be reclaimed at any time by AWS. Workloads that are fault-tolerant or stateless can take advantage of spot pricing. To use spot instances, set spot to true on a managed Ec2 or Fargate Compute Environment:

vpc := ec2.NewVpc(this, jsii.String("VPC"))
batch.NewFargateComputeEnvironment(this, jsii.String("myFargateComputeEnv"), &FargateComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
})

Batch allows you to specify, via spotBidPercentage, the maximum percentage of the on-demand instance price that it will pay for a spot instance. This defaults to 100%, which is the recommended value. This value cannot be specified for FargateComputeEnvironments and only applies to ManagedEc2EcsComputeEnvironments. The following code configures a Compute Environment to only use spot instances that are at most 20% of the on-demand instance price:

vpc := ec2.NewVpc(this, jsii.String("VPC"))
batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
	SpotBidPercentage: jsii.Number(20),
})
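The spot bid percentage translates to a simple price cap. Here is a minimal arithmetic sketch of that relationship; maxSpotPrice is a hypothetical helper for illustration, not part of the Batch API:

```go
package main

import "fmt"

// maxSpotPrice returns the highest hourly price Batch will bid for a spot
// instance, given its on-demand price and the spotBidPercentage above.
func maxSpotPrice(onDemandPrice, spotBidPercentage float64) float64 {
	return onDemandPrice * spotBidPercentage / 100
}

func main() {
	// An instance with a $0.20/hr on-demand price and SpotBidPercentage: 20
	// will only be provisioned while the spot price is at or below $0.04/hr.
	fmt.Printf("%.2f\n", maxSpotPrice(0.20, 20)) // 0.04
}
```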

For stateful or otherwise non-interruption-tolerant workflows, omit spot or set it to false to only provision on-demand instances.

Choosing Your Instance Types

Batch allows you to choose the instance types or classes that will run your workload. This example configures your ComputeEnvironment to use only the M5AD.large instance:

vpc := ec2.NewVpc(this, jsii.String("VPC"))

batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceTypes: []ec2.InstanceType{
		ec2.InstanceType_Of(ec2.InstanceClass_M5AD, ec2.InstanceSize_LARGE),
	},
})

Batch also allows you to specify only the instance class and let it choose the size, like this:

var computeEnv batch.IManagedEc2EcsComputeEnvironment
vpc := ec2.NewVpc(this, jsii.String("VPC"))
computeEnv.AddInstanceClass(ec2.InstanceClass_M5AD)
// Or, specify it on the constructor:
batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceClasses: []ec2.InstanceClass{
		ec2.InstanceClass_R4,
	},
})

Unless you explicitly specify useOptimalInstanceClasses: false, this compute environment will use 'optimal' instances, which tells Batch to pick an instance from the C4, M4, and R4 instance families. Note: Batch does not allow specifying instance types or classes with different architectures. For example, InstanceClass.A1 cannot be specified alongside 'optimal', because A1 uses ARM and 'optimal' uses x86_64. You can specify 'optimal' alongside several different instance types in the same compute environment:

var vpc ec2.IVpc


computeEnv := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	InstanceTypes: []ec2.InstanceType{
		ec2.InstanceType_Of(ec2.InstanceClass_M5AD, ec2.InstanceSize_LARGE),
	},
	UseOptimalInstanceClasses: jsii.Boolean(true), // default
	Vpc: vpc,
})
// Note: this is equivalent to specifying
computeEnv.AddInstanceType(ec2.InstanceType_Of(ec2.InstanceClass_M5AD, ec2.InstanceSize_LARGE))
computeEnv.AddInstanceClass(ec2.InstanceClass_C4)
computeEnv.AddInstanceClass(ec2.InstanceClass_M4)
computeEnv.AddInstanceClass(ec2.InstanceClass_R4)
Allocation Strategies

Allocation Strategy            Optimized for               Downsides
BEST_FIT                       Cost                        May limit throughput
BEST_FIT_PROGRESSIVE           Throughput                  May increase cost
SPOT_CAPACITY_OPTIMIZED        Least interruption          Only useful on Spot instances
SPOT_PRICE_CAPACITY_OPTIMIZED  Least interruption + Price  Only useful on Spot instances

Batch provides different Allocation Strategies to help it choose which instances to provision. If your workflow tolerates interruptions, you should enable spot on your ComputeEnvironment and use SPOT_PRICE_CAPACITY_OPTIMIZED (the default when spot is enabled). This tells Batch to choose, from the instance types you've specified, those that have the most spot capacity available (minimizing the chance of interruption) and the lowest price. To get the most benefit from your spot instances, allow Batch to choose from as many different instance types as possible. If you only care about minimizing interruptions and do not want Batch to optimize for cost, use SPOT_CAPACITY_OPTIMIZED; however, SPOT_PRICE_CAPACITY_OPTIMIZED is recommended over SPOT_CAPACITY_OPTIMIZED for most use cases.

If your workflow does not tolerate interruptions and you want to minimize your costs at the expense of potentially longer waiting times, use AllocationStrategy.BEST_FIT. This will choose the lowest-cost instance type that fits all the jobs in the queue. If instances of that type are not available, the queue will not choose a new type; instead, it will wait for the instance to become available. This can stall your Queue, with your compute environment only using part of its max capacity (or none at all) until the BEST_FIT instance becomes available.

If you are running a workflow that does not tolerate interruptions and you want to maximize throughput, you can use AllocationStrategy.BEST_FIT_PROGRESSIVE. This is the default Allocation Strategy if spot is false or unspecified. This strategy examines the Jobs in the queue and chooses whichever instance type meets their requirements at the lowest cost per vCPU, just as BEST_FIT does. However, if not all of the capacity can be filled with this instance type, it will choose the next-best instance type to run any jobs that couldn't fit into the BEST_FIT capacity. To make the most of this allocation strategy, it is recommended to use as many instance classes as is feasible for your workload. This example shows a ComputeEnvironment that uses BEST_FIT_PROGRESSIVE with 'optimal' and InstanceClass.M5 instance types:

var vpc ec2.IVpc


computeEnv := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceClasses: []ec2.InstanceClass{
		ec2.InstanceClass_M5,
	},
})

This example shows a ComputeEnvironment that uses BEST_FIT with 'optimal' instances:

var vpc ec2.IVpc


computeEnv := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	AllocationStrategy: batch.AllocationStrategy_BEST_FIT,
})

Note: allocationStrategy cannot be specified on Fargate Compute Environments.

Controlling vCPU allocation

You can specify the maximum and minimum vCPUs a managed ComputeEnvironment can have at any given time. Batch will always maintain minvCpus worth of instances in your ComputeEnvironment, even if it is not executing any jobs, and even if it is disabled. Batch will scale the instances up to maxvCpus worth of instances as jobs exit the JobQueue and enter the ComputeEnvironment. If you use AllocationStrategy.BEST_FIT_PROGRESSIVE, AllocationStrategy.SPOT_PRICE_CAPACITY_OPTIMIZED, or AllocationStrategy.SPOT_CAPACITY_OPTIMIZED, Batch may exceed maxvCpus; it will never exceed it by more than a single instance type. This example configures a minvCpus of 10 and a maxvCpus of 100:

var vpc ec2.IVpc


batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceClasses: []ec2.InstanceClass{
		ec2.InstanceClass_R4,
	},
	MinvCpus: jsii.Number(10),
	MaxvCpus: jsii.Number(100),
})
Tagging Instances

You can tag any instances launched by your managed EC2 ComputeEnvironments by using the CDK Tags API:

import "github.com/aws/aws-cdk-go/awscdk"

var vpc ec2.IVpc


tagCE := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("CEThatMakesTaggedInstances"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
})

awscdk.Tags_Of(tagCE).Add(jsii.String("super"), jsii.String("salamander"))

Unmanaged ComputeEnvironments do not support maxvCpus or minvCpus because you must provision and manage the instances yourself; that is, Batch will not scale them up and down as needed.

Sharing a ComputeEnvironment between multiple JobQueues

Multiple JobQueues can share the same ComputeEnvironment. If multiple Queues route Jobs to the same ComputeEnvironment, Batch will pick the Job from the Queue with the highest priority. This example creates two JobQueues that share a ComputeEnvironment:

var vpc ec2.IVpc

sharedComputeEnv := batch.NewFargateComputeEnvironment(this, jsii.String("spotEnv"), &FargateComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
})
lowPriorityQueue := batch.NewJobQueue(this, jsii.String("LowPriorityQueue"), &JobQueueProps{
	Priority: jsii.Number(1),
})
highPriorityQueue := batch.NewJobQueue(this, jsii.String("HighPriorityQueue"), &JobQueueProps{
	Priority: jsii.Number(10),
})
lowPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))
highPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))
Fairshare Scheduling

Batch JobQueues execute Jobs submitted to them in FIFO order unless you specify a SchedulingPolicy. FIFO queuing can cause short-running jobs to be starved while long-running jobs fill the compute environment. To solve this, Jobs can be associated with a share.

Shares consist of a shareIdentifier and a weightFactor, which is inversely correlated with the vCPU allocated to that share identifier. When submitting a Job, you can specify its shareIdentifier to associate that particular job with that share. Let's see how the scheduler uses this information to schedule jobs.

For example, if there are two shares defined as follows:

Share Identifier  Weight Factor
A                 1
B                 1

The weight factors share the following relationship:

A_{vCpus} / A_{Weight} = B_{vCpus} / B_{Weight}

where B_{vCpus} is the number of vCPUs allocated to jobs with share identifier 'B', and B_{Weight} is the weight factor of 'B'.

The total number of vCPUs allocated to a share is equal to the number of jobs in that share times the number of vCPUs required by each job. Say each 'A' job needs 32 vCPUs (A_{Requirement} = 32) and each 'B' job needs 64 vCPUs (B_{Requirement} = 64):

A_{vCpus} = A_{Jobs} * A_{Requirement}
B_{vCpus} = B_{Jobs} * B_{Requirement}

We have:

A_{vCpus} / A_{Weight} = B_{vCpus} / B_{Weight}
A_{Jobs} * A_{Requirement} / A_{Weight} = B_{Jobs} * B_{Requirement} / B_{Weight}
A_{Jobs} * 32 / 1 = B_{Jobs} * 64 / 1
A_{Jobs} * 32 = B_{Jobs} * 64
A_{Jobs} = B_{Jobs} * 2

Thus the scheduler will schedule two 'A' jobs for each 'B' job.
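The derivation above can be checked with a quick calculation. This sketch rearranges the fairshare identity A_{vCpus}/A_{Weight} = B_{vCpus}/B_{Weight} to compute how many 'A' jobs run per 'B' job; jobRatio is a hypothetical helper for illustration, not part of the Batch API:

```go
package main

import "fmt"

// jobRatio returns how many jobs of share A are scheduled per job of
// share B, given each share's weight factor and per-job vCPU requirement.
// Derived from A_Jobs*A_Req/A_Weight = B_Jobs*B_Req/B_Weight.
func jobRatio(aWeight, aReq, bWeight, bReq float64) float64 {
	return (bReq / bWeight) * (aWeight / aReq)
}

func main() {
	// Equal weights of 1; 'A' jobs need 32 vCPUs, 'B' jobs need 64:
	fmt.Println(jobRatio(1, 32, 1, 64)) // 2
}
```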

You can control the weight factors to change these ratios, but note that weight factors are inversely correlated with the vCpus allocated to the corresponding share.

This example would be configured like this:

fairsharePolicy := batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"))

fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("A"),
	WeightFactor: jsii.Number(1),
})
fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("B"),
	WeightFactor: jsii.Number(1),
})
batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	SchedulingPolicy: fairsharePolicy,
})

Note: The scheduler will only consider the current usage of the compute environment unless you specify shareDecay. For example, a shareDecay of 5 minutes in the above example means that at any given point in time, twice as many 'A' jobs will be scheduled for each 'B' job, but only over the past 5 minutes. If 'B' jobs run longer than 5 minutes, the scheduler may schedule more than two 'A' jobs for each 'B' job, because the usage of those long-running 'B' jobs will no longer be considered after 5 minutes. shareDecay linearly decreases the usage of long-running jobs for calculation purposes. For example, if the share decay is 60 seconds, then jobs that have run for 30 seconds have their usage considered to be only 50% of what it actually is, and after a whole minute the scheduler pretends they don't exist for fairness calculations.
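The linear decay described in that note can be modeled in a few lines. This is a simplified sketch of the described behavior, not Batch's actual scheduler code; decayedUsage is a hypothetical helper:

```go
package main

import "fmt"

// decayedUsage linearly discounts a job's measured vCPU usage by how long
// it has been running relative to shareDecay: at elapsed == 0 the full
// usage counts, and at elapsed >= shareDecay it no longer counts at all.
func decayedUsage(usage, elapsedSeconds, shareDecaySeconds float64) float64 {
	if elapsedSeconds >= shareDecaySeconds {
		return 0
	}
	return usage * (1 - elapsedSeconds/shareDecaySeconds)
}

func main() {
	// With a 60-second shareDecay, a job that has run for 30 seconds is
	// counted at 50% of its actual usage for fairness calculations:
	fmt.Println(decayedUsage(100, 30, 60)) // 50
}
```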

The following code specifies a shareDecay of 5 minutes:

import cdk "github.com/aws/aws-cdk-go/awscdk"

fairsharePolicy := batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"), &FairshareSchedulingPolicyProps{
	ShareDecay: cdk.Duration_Minutes(jsii.Number(5)),
})

If you have high priority jobs that should always be executed as soon as they arrive, you can define a computeReservation to specify the percentage of the maximum vCPU capacity that should be reserved for shares that are not in the queue. The actual reserved percentage is defined by Batch as:

(computeReservation / 100)^ActiveFairShares

where ActiveFairShares is the number of shares for which there exists at least one job in the queue with a unique share identifier.

This is best illustrated with an example. Suppose there are three shares with share identifiers A, B and C respectively and we specify the computeReservation to be 75%. The queue is currently empty, and no other shares exist.

There are no active fair shares, since the queue is empty. Thus (75/100)^0 = 1 = 100% of the maximum vCpus are reserved for all shares.

A job with identifier A enters the queue.

The number of active fair shares is now 1, hence (75/100)^1 = .75 = 75% of the maximum vCpus are reserved for all shares that do not have the identifier A; for this example, this is B and C, (but if jobs are submitted with a share identifier not covered by this fairshare policy, those would be considered just as B and C are).

Now a B job enters the queue. The number of active fair shares is now 2, so (75/100)^2 = .5625 = 56.25% of the maximum vCpus are reserved for all shares that do not have the identifier A or B.

Now a second A job enters the queue. The number of active fair shares is still 2, so the percentage reserved is still 56.25%

Now a C job enters the queue. The number of active fair shares is now 3, so (75/100)^3 = .421875 = 42.1875% of the maximum vCpus are reserved for all shares that do not have the identifier A, B, or C.

If there are no other shares that your jobs can specify, this means that 42.1875% of your capacity will never be used!

Now, A, B, and C can only consume 100% - 42.1875% = 57.8125% of the maximum vCpus. Note that this percentage is not split between A, B, and C. Instead, the scheduler will use their weightFactors to decide which jobs to schedule; the only difference is that instead of competing for 100% of the max capacity, jobs compete for 57.8125% of the max capacity.
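The reservation formula from the walkthrough above can be computed directly. This sketch reproduces the four cases; reservedPercent is a hypothetical helper, not part of the Batch API:

```go
package main

import (
	"fmt"
	"math"
)

// reservedPercent computes the share of maxvCpus that Batch reserves for
// inactive shares: (computeReservation/100)^activeFairShares, as a percentage.
func reservedPercent(computeReservation float64, activeFairShares int) float64 {
	return math.Pow(computeReservation/100, float64(activeFairShares)) * 100
}

func main() {
	// With computeReservation: 75, as shares A, B, and C become active:
	for n := 0; n <= 3; n++ {
		fmt.Printf("%d active fair shares: %.4f%% reserved\n", n, reservedPercent(75, n))
	}
	// 0 -> 100.0000%, 1 -> 75.0000%, 2 -> 56.2500%, 3 -> 42.1875%
}
```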

This example specifies a computeReservation of 75% that will behave as explained in the example above:

batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"), &FairshareSchedulingPolicyProps{
	ComputeReservation: jsii.Number(75),
	Shares: []*batch.Share{
		&batch.Share{
			WeightFactor: jsii.Number(1),
			ShareIdentifier: jsii.String("A"),
		},
		&batch.Share{
			WeightFactor: jsii.Number(0.5),
			ShareIdentifier: jsii.String("B"),
		},
		&batch.Share{
			WeightFactor: jsii.Number(2),
			ShareIdentifier: jsii.String("C"),
		},
	},
})

You can specify a priority on your JobDefinitions to tell the scheduler to prioritize certain jobs that share the same share identifier.

Configuring Job Retry Policies

Certain workflows may result in Jobs failing due to intermittent issues. Jobs can specify retry policies to respond to different failures with different actions. There are three different ways information about how a Job exited can be conveyed:

  • exitCode: the exit code returned from the process executed by the container. Will only match non-zero exit codes.
  • reason: any middleware errors, like your Docker registry being down.
  • statusReason: infrastructure errors, most commonly your spot instance being reclaimed.

For most use cases, only one of these will be associated with a particular action at a time. To specify common exitCodes, reasons, or statusReasons, use the corresponding value from the Reason class. This example shows some common failure reasons:

import cdk "github.com/aws/aws-cdk-go/awscdk"


jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	RetryAttempts: jsii.Number(5),
	RetryStrategies: []batch.RetryStrategy{
		batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()),
	},
})
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_SPOT_INSTANCE_RECLAIMED()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_Custom(&CustomReason{
	OnExitCode: jsii.String("40*"),
	OnReason: jsii.String("some reason"),
})))

When specifying a custom reason, you can specify a glob string to match each of these and react to different failures accordingly. Up to five different retry strategies can be configured for each Job, and each strategy can match against some or all of exitCode, reason, and statusReason. You can optionally configure the number of times a job will be retried, but you cannot configure different retry counts for different strategies; they all share the same count. If multiple conditions are specified in a given retry strategy, they must all match for the action to be taken; the conditions are ANDed together, not ORed.
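To make the glob matching concrete, the sketch below approximates how a pattern like "40*" matches exit codes, using Go's path.Match. This is an illustration of glob semantics under that assumption, not Batch's exact matcher:

```go
package main

import (
	"fmt"
	"path"
)

// matches reports whether value satisfies a simple glob pattern, in the
// spirit of Batch's onExitCode/onReason/onStatusReason matching.
// path.Match is used here as an approximation for simple patterns.
func matches(pattern, value string) bool {
	ok, _ := path.Match(pattern, value)
	return ok
}

func main() {
	// The custom reason above uses OnExitCode: "40*":
	fmt.Println(matches("40*", "404")) // true
	fmt.Println(matches("40*", "500")) // false
}
```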

Running single-container ECS workflows

Batch can run jobs on ECS or EKS. ECS jobs can be defined as single-container or multinode. This example creates a JobDefinition that runs a single container with ECS:

import cdk "github.com/aws/aws-cdk-go/awscdk"
import iam "github.com/aws/aws-cdk-go/awscdk"
import efs "github.com/aws/aws-cdk-go/awscdk"

var myFileSystem iFileSystem
var myJobRole role

myFileSystem.GrantRead(myJobRole)

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Volumes: []batch.EcsVolume{
			batch.EcsVolume_Efs(&EfsVolumeOptions{
				Name: jsii.String("myVolume"),
				FileSystem: myFileSystem,
				ContainerPath: jsii.String("/Volumes/myVolume"),
				UseJobRole: jsii.Boolean(true),
			}),
		},
		JobRole: myJobRole,
	}),
})

For workflows that need persistent storage, Batch supports mounting Volumes to the container. You can both provision the volume and mount it to the container in a single operation:

import efs "github.com/aws/aws-cdk-go/awscdk"

var myFileSystem iFileSystem
var jobDefn ecsJobDefinition


jobDefn.Container.AddVolume(batch.EcsVolume_Efs(&EfsVolumeOptions{
	Name: jsii.String("myVolume"),
	FileSystem: myFileSystem,
	ContainerPath: jsii.String("/Volumes/myVolume"),
}))
Secrets

You can expose SecretsManager Secret ARNs or SSM Parameters to your container as environment variables. The following example defines the MY_SECRET_ENV_VAR environment variable that contains the ARN of the Secret defined by mySecret:

import cdk "github.com/aws/aws-cdk-go/awscdk"

var mySecret iSecret


jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Secrets: map[string]batch.Secret{
			"MY_SECRET_ENV_VAR": batch.Secret_FromSecretsManager(mySecret),
		},
	}),
})
Running Kubernetes Workflows

Batch also supports running workflows on EKS. The following example creates a JobDefinition that runs on EKS:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

You can mount Volumes to these containers in a single operation:

var jobDefn batch.EksJobDefinition

jobDefn.Container.AddVolume(batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
	Name: jsii.String("emptyDir"),
	MountPath: jsii.String("/Volumes/emptyDir"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_HostPath(&HostPathVolumeOptions{
	Name: jsii.String("hostPath"),
	HostPath: jsii.String("/sys"),
	MountPath: jsii.String("/Volumes/hostPath"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_Secret(&SecretPathVolumeOptions{
	Name: jsii.String("secret"),
	Optional: jsii.Boolean(true),
	MountPath: jsii.String("/Volumes/secret"),
	SecretName: jsii.String("mySecret"),
}))
Running Distributed Workflows

Some workflows benefit from parallelization and are most powerful when run in a distributed environment, such as certain numerical calculations or simulations. Batch offers MultiNodeJobDefinitions for this purpose, which allow a single job to run on multiple instances in parallel. Message Passing Interface (MPI) is often used with these workflows. You must configure your containers to use MPI properly, but Batch allows different nodes running different containers to communicate easily with one another. You must configure your containers to use certain environment variables that Batch will provide them, which let them know which one is the main node, among other information. For an in-depth example of using MPI to perform numerical computations on Batch, see this blog post. In particular, the environment variable that tells the containers which one is the main node can be configured on your MultiNodeJobDefinition as follows:

import "github.com/aws/aws-cdk-go/awscdk"

multiNodeJob := batch.NewMultiNodeJobDefinition(this, jsii.String("JobDefinition"), &MultiNodeJobDefinitionProps{
	InstanceType: ec2.InstanceType_Of(ec2.InstanceClass_R4, ec2.InstanceSize_LARGE),
	Containers: []*batch.MultiNodeContainer{
		&batch.MultiNodeContainer{
			Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("mainMPIContainer"), &EcsEc2ContainerDefinitionProps{
				Image: ecs.ContainerImage_FromRegistry(jsii.String("yourregsitry.com/yourMPIImage:latest")),
				Cpu: jsii.Number(256),
				Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
			StartNode: jsii.Number(0),
			EndNode: jsii.Number(5),
		},
	},
})
// convenience method
multiNodeJob.AddContainer(&batch.MultiNodeContainer{
	StartNode: jsii.Number(6),
	EndNode: jsii.Number(10),
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("multiContainer"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Cpu: jsii.Number(256),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
	}),
})

If you need to set the control node to an index other than 0, specify it directly:

multiNodeJob := batch.NewMultiNodeJobDefinition(this, jsii.String("JobDefinition"), &MultiNodeJobDefinitionProps{
	MainNode: jsii.Number(5),
	InstanceType: ec2.InstanceType_Of(ec2.InstanceClass_R4, ec2.InstanceSize_LARGE),
})
Pass Parameters to a Job

Batch allows you to define parameters in your JobDefinition that can be referenced in the container command. For example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Parameters: map[string]interface{}{
		"echoParam": jsii.String("foobar"),
	},
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Command: []*string{
			jsii.String("echo"),
			jsii.String("Ref::echoParam"),
		},
	}),
})
Understanding Progressive Allocation Strategies

AWS Batch uses an allocation strategy to determine which compute resource will efficiently handle incoming job requests. By default, BEST_FIT will pick an available compute instance based on vCPU requirements. If none exists, the job will wait until resources become available. However, with this strategy, you may have jobs waiting in the queue unnecessarily despite having more powerful instances available. Below is an example of what that situation might look like:

Compute Environment:

1. m5.xlarge => 4 vCPU
2. m5.2xlarge => 8 vCPU
Job Queue:
---------
| A | B |
---------

Job Requirements:
A => 4 vCPU - ALLOCATED TO m5.xlarge
B => 2 vCPU - WAITING

In this situation, Batch will allocate Job A to compute resource #1 because it is the most cost-efficient resource that matches the vCPU requirement. However, with the BEST_FIT strategy, Job B will not be allocated to our other available compute resource even though it is strong enough to handle it. Instead, it will wait until the first job finishes processing, or until a similar m5.xlarge resource is provisioned.

The alternative would be to use the BEST_FIT_PROGRESSIVE strategy, which lets the remaining jobs be handled on larger instances regardless of vCPU requirement and cost.
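The scenario above can be sketched as a toy placement model. This is an illustration of the difference between the two strategies under the stated assumptions (two instance sizes, smallest-first ordering), not Batch's actual scheduler; instance, place, and smallestFit are hypothetical names:

```go
package main

import "fmt"

// instance is a toy model of one provisioned compute resource.
type instance struct {
	name string
	vCPU int
	busy bool
}

// smallestFit returns the smallest instance size in the fleet (busy or not)
// that satisfies the job's vCPU requirement: the BEST_FIT target type.
func smallestFit(fleet []*instance, jobVCPU int) int {
	best := 1 << 30
	for _, inst := range fleet {
		if inst.vCPU >= jobVCPU && inst.vCPU < best {
			best = inst.vCPU
		}
	}
	return best
}

// place assigns a job to an instance, or returns "WAITING". BEST_FIT only
// accepts the best-fitting instance type; progressive mode falls back to
// the next-larger free instance instead of waiting.
func place(fleet []*instance, jobVCPU int, progressive bool) string {
	for _, inst := range fleet { // fleet ordered smallest (cheapest) first
		if inst.busy || inst.vCPU < jobVCPU {
			continue
		}
		if !progressive && inst.vCPU != smallestFit(fleet, jobVCPU) {
			break // BEST_FIT waits for the best-fitting type instead
		}
		inst.busy = true
		return inst.name
	}
	return "WAITING"
}

func main() {
	fleet := []*instance{{"m5.xlarge", 4, false}, {"m5.2xlarge", 8, false}}
	fmt.Println("A:", place(fleet, 4, false)) // A: m5.xlarge
	fmt.Println("B:", place(fleet, 2, false)) // B: WAITING (BEST_FIT)
	fmt.Println("B:", place(fleet, 2, true))  // B: m5.2xlarge (progressive)
}
```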

Permissions

You can grant any Principal the batch:SubmitJob permission on both a job definition and a job queue like this:

import cdk "github.com/aws/aws-cdk-go/awscdk"
import iam "github.com/aws/aws-cdk-go/awscdk"

var vpc ec2.IVpc


ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []*batch.OrderedComputeEnvironment{
		&batch.OrderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

Documentation

Overview

The CDK Construct Library for AWS::Batch

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func EcsEc2ContainerDefinition_IsConstruct

func EcsEc2ContainerDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func EcsFargateContainerDefinition_IsConstruct

func EcsFargateContainerDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct. Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked; see EcsEc2ContainerDefinition_IsConstruct above for the full explanation.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func EcsJobDefinition_IsConstruct

func EcsJobDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct. Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked; see EcsEc2ContainerDefinition_IsConstruct above for the full explanation.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func EcsJobDefinition_IsOwnedResource

func EcsJobDefinition_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise. Experimental.

func EcsJobDefinition_IsResource

func EcsJobDefinition_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource. Experimental.

func EfsVolume_IsEfsVolume

func EfsVolume_IsEfsVolume(x interface{}) *bool

Returns true if x is an EfsVolume, false otherwise. Experimental.

func EksContainerDefinition_IsConstruct

func EksContainerDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func EksJobDefinition_IsConstruct

func EksJobDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func EksJobDefinition_IsOwnedResource

func EksJobDefinition_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise. Experimental.

func EksJobDefinition_IsResource

func EksJobDefinition_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource. Experimental.

func EmptyDirVolume_IsEmptyDirVolume

func EmptyDirVolume_IsEmptyDirVolume(x interface{}) *bool

Returns `true` if `x` is an EmptyDirVolume, `false` otherwise. Experimental.

func FairshareSchedulingPolicy_IsConstruct

func FairshareSchedulingPolicy_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func FairshareSchedulingPolicy_IsOwnedResource

func FairshareSchedulingPolicy_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise. Experimental.

func FairshareSchedulingPolicy_IsResource

func FairshareSchedulingPolicy_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource. Experimental.

func FargateComputeEnvironment_IsConstruct

func FargateComputeEnvironment_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func FargateComputeEnvironment_IsOwnedResource

func FargateComputeEnvironment_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise. Experimental.

func FargateComputeEnvironment_IsResource

func FargateComputeEnvironment_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource. Experimental.

func HostPathVolume_IsHostPathVolume

func HostPathVolume_IsHostPathVolume(x interface{}) *bool

Returns `true` if `x` is a HostPathVolume, `false` otherwise. Experimental.

func HostVolume_IsHostVolume

func HostVolume_IsHostVolume(x interface{}) *bool

Returns `true` if `x` is a `HostVolume`, `false` otherwise. Experimental.

func JobQueue_IsConstruct

func JobQueue_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func JobQueue_IsOwnedResource

func JobQueue_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise. Experimental.

func JobQueue_IsResource

func JobQueue_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource. Experimental.

func LinuxParameters_IsConstruct

func LinuxParameters_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func ManagedEc2EcsComputeEnvironment_IsConstruct

func ManagedEc2EcsComputeEnvironment_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func ManagedEc2EcsComputeEnvironment_IsOwnedResource

func ManagedEc2EcsComputeEnvironment_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise. Experimental.

func ManagedEc2EcsComputeEnvironment_IsResource

func ManagedEc2EcsComputeEnvironment_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource. Experimental.

func ManagedEc2EksComputeEnvironment_IsConstruct

func ManagedEc2EksComputeEnvironment_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func ManagedEc2EksComputeEnvironment_IsOwnedResource

func ManagedEc2EksComputeEnvironment_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise. Experimental.

func ManagedEc2EksComputeEnvironment_IsResource

func ManagedEc2EksComputeEnvironment_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource. Experimental.

func MultiNodeJobDefinition_IsConstruct

func MultiNodeJobDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func MultiNodeJobDefinition_IsOwnedResource

func MultiNodeJobDefinition_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise. Experimental.

func MultiNodeJobDefinition_IsResource

func MultiNodeJobDefinition_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource. Experimental.

func NewEcsEc2ContainerDefinition_Override

func NewEcsEc2ContainerDefinition_Override(e EcsEc2ContainerDefinition, scope constructs.Construct, id *string, props *EcsEc2ContainerDefinitionProps)

Experimental.

func NewEcsFargateContainerDefinition_Override

func NewEcsFargateContainerDefinition_Override(e EcsFargateContainerDefinition, scope constructs.Construct, id *string, props *EcsFargateContainerDefinitionProps)

Experimental.

func NewEcsJobDefinition_Override

func NewEcsJobDefinition_Override(e EcsJobDefinition, scope constructs.Construct, id *string, props *EcsJobDefinitionProps)

Experimental.

func NewEcsVolume_Override

func NewEcsVolume_Override(e EcsVolume, options *EcsVolumeOptions)

Experimental.

func NewEfsVolume_Override

func NewEfsVolume_Override(e EfsVolume, options *EfsVolumeOptions)

Experimental.

func NewEksContainerDefinition_Override

func NewEksContainerDefinition_Override(e EksContainerDefinition, scope constructs.Construct, id *string, props *EksContainerDefinitionProps)

Experimental.

func NewEksJobDefinition_Override

func NewEksJobDefinition_Override(e EksJobDefinition, scope constructs.Construct, id *string, props *EksJobDefinitionProps)

Experimental.

func NewEksVolume_Override

func NewEksVolume_Override(e EksVolume, options *EksVolumeOptions)

Experimental.

func NewEmptyDirVolume_Override

func NewEmptyDirVolume_Override(e EmptyDirVolume, options *EmptyDirVolumeOptions)

Experimental.

func NewFairshareSchedulingPolicy_Override

func NewFairshareSchedulingPolicy_Override(f FairshareSchedulingPolicy, scope constructs.Construct, id *string, props *FairshareSchedulingPolicyProps)

Experimental.

func NewFargateComputeEnvironment_Override

func NewFargateComputeEnvironment_Override(f FargateComputeEnvironment, scope constructs.Construct, id *string, props *FargateComputeEnvironmentProps)

Experimental.

func NewHostPathVolume_Override

func NewHostPathVolume_Override(h HostPathVolume, options *HostPathVolumeOptions)

Experimental.

func NewHostVolume_Override

func NewHostVolume_Override(h HostVolume, options *HostVolumeOptions)

Experimental.

func NewJobQueue_Override

func NewJobQueue_Override(j JobQueue, scope constructs.Construct, id *string, props *JobQueueProps)

Experimental.

func NewLinuxParameters_Override

func NewLinuxParameters_Override(l LinuxParameters, scope constructs.Construct, id *string, props *LinuxParametersProps)

Constructs a new instance of the LinuxParameters class. Experimental.

func NewManagedEc2EcsComputeEnvironment_Override

func NewManagedEc2EcsComputeEnvironment_Override(m ManagedEc2EcsComputeEnvironment, scope constructs.Construct, id *string, props *ManagedEc2EcsComputeEnvironmentProps)

Experimental.

func NewManagedEc2EksComputeEnvironment_Override

func NewManagedEc2EksComputeEnvironment_Override(m ManagedEc2EksComputeEnvironment, scope constructs.Construct, id *string, props *ManagedEc2EksComputeEnvironmentProps)

Experimental.

func NewMultiNodeJobDefinition_Override

func NewMultiNodeJobDefinition_Override(m MultiNodeJobDefinition, scope constructs.Construct, id *string, props *MultiNodeJobDefinitionProps)

Experimental.

func NewReason_Override

func NewReason_Override(r Reason)

Experimental.

func NewRetryStrategy_Override

func NewRetryStrategy_Override(r RetryStrategy, action Action, on Reason)

Experimental.

func NewSecretPathVolume_Override

func NewSecretPathVolume_Override(s SecretPathVolume, options *SecretPathVolumeOptions)

Experimental.

func NewSecret_Override

func NewSecret_Override(s Secret)

Experimental.

func NewUnmanagedComputeEnvironment_Override

func NewUnmanagedComputeEnvironment_Override(u UnmanagedComputeEnvironment, scope constructs.Construct, id *string, props *UnmanagedComputeEnvironmentProps)

Experimental.

func SecretPathVolume_IsSecretPathVolume

func SecretPathVolume_IsSecretPathVolume(x interface{}) *bool

Returns `true` if `x` is a `SecretPathVolume` and `false` otherwise. Experimental.

func UnmanagedComputeEnvironment_IsConstruct

func UnmanagedComputeEnvironment_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`. Experimental.

func UnmanagedComputeEnvironment_IsOwnedResource

func UnmanagedComputeEnvironment_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise. Experimental.

func UnmanagedComputeEnvironment_IsResource

func UnmanagedComputeEnvironment_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource. Experimental.

Types

type Action

type Action string

The Action to take when all specified conditions in a RetryStrategy are met.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	RetryAttempts: jsii.Number(5),
	RetryStrategies: []batch.RetryStrategy{
		batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()),
	},
})
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_SPOT_INSTANCE_RECLAIMED()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_Custom(&CustomReason{
	OnExitCode: jsii.String("40*"),
	OnReason: jsii.String("some reason"),
})))

Experimental.

const (
	// The job will not retry.
	// Experimental.
	Action_EXIT Action = "EXIT"
	// The job will retry.
	//
	// It can be retried up to the number of times specified in `retryAttempts`.
	// Experimental.
	Action_RETRY Action = "RETRY"
)

type AllocationStrategy

type AllocationStrategy string

Determines how this compute environment chooses instances to spawn.

Example:

var vpc ec2.IVpc

computeEnv := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	AllocationStrategy: batch.AllocationStrategy_BEST_FIT,
})

See: https://aws.amazon.com/blogs/compute/optimizing-for-cost-availability-and-throughput-by-selecting-your-aws-batch-allocation-strategy/

Experimental.

const (
	// Batch chooses the lowest-cost instance type that fits all the jobs in the queue.
	//
	// If instances of that type are not available, the queue will not choose a new type;
	// instead, it will wait for the instance to become available.
	// This can stall your `Queue`, with your compute environment only using part of its max capacity
	// (or none at all) until the `BEST_FIT` instance becomes available.
	// This allocation strategy keeps costs lower but can limit scaling.
	// `BEST_FIT` isn't supported when updating compute environments.
	// Experimental.
	AllocationStrategy_BEST_FIT AllocationStrategy = "BEST_FIT"
	// This is the default Allocation Strategy if `spot` is `false` or unspecified.
	//
	// This strategy, like `BEST_FIT`, examines the Jobs in the queue and chooses
	// the instance type that meets the jobs' requirements at the lowest cost per vCPU.
	// However, if not all of the capacity can be filled with this instance type,
	// it will choose the next-best instance type to run any jobs that couldn't fit into the `BEST_FIT` capacity.
	// To make the most use of this allocation strategy,
	// it is recommended to use as many instance classes as is feasible for your workload.
	// Experimental.
	AllocationStrategy_BEST_FIT_PROGRESSIVE AllocationStrategy = "BEST_FIT_PROGRESSIVE"
	// If your workflow tolerates interruptions, you should enable `spot` on your `ComputeEnvironment` and use `SPOT_CAPACITY_OPTIMIZED` (this is the default if `spot` is enabled).
	//
	// This will tell Batch to choose the instance types from the ones you've specified that have
	// the most spot capacity available to minimize the chance of interruption.
	// To get the most benefit from your spot instances,
	// you should allow Batch to choose from as many different instance types as possible.
	// Experimental.
	AllocationStrategy_SPOT_CAPACITY_OPTIMIZED AllocationStrategy = "SPOT_CAPACITY_OPTIMIZED"
	// The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.
	//
	// The Batch team recommends this over `SPOT_CAPACITY_OPTIMIZED` in most instances.
	// Experimental.
	AllocationStrategy_SPOT_PRICE_CAPACITY_OPTIMIZED AllocationStrategy = "SPOT_PRICE_CAPACITY_OPTIMIZED"
)
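For instance, a spot-backed environment pairs `spot` with `SPOT_PRICE_CAPACITY_OPTIMIZED`. This is a sketch in the style of the example above; the `Spot` property name is taken from `ManagedEc2EcsComputeEnvironmentProps` and should be verified against your package version:

```go
var vpc ec2.IVpc

// Spot-friendly environment: Batch selects Spot Instance pools that are
// both low-cost and unlikely to be interrupted.
spotEnv := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("mySpotComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
	AllocationStrategy: batch.AllocationStrategy_SPOT_PRICE_CAPACITY_OPTIMIZED,
})
```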

type ComputeEnvironmentProps

type ComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	// Experimental.
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provisioning instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	// Experimental.
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
}

Props common to all ComputeEnvironments.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import awsiam "github.com/aws/aws-cdk-go/awscdk/v2/awsiam"

var role awsiam.IRole

computeEnvironmentProps := &ComputeEnvironmentProps{
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	ServiceRole: role,
}

Experimental.

type CustomReason

type CustomReason struct {
	// A glob string that will match on the job exit code.
	//
	// For example, `'40*'` will match 400, 404, 40123456789012.
	// Default: - will not match on the exit code.
	//
	// Experimental.
	OnExitCode *string `field:"optional" json:"onExitCode" yaml:"onExitCode"`
	// A glob string that will match on the reason returned by the exiting job.
	//
	// For example, `'CannotPullContainerError*'` indicates that the container needed to start the job could not be pulled.
	// Default: - will not match on the reason.
	//
	// Experimental.
	OnReason *string `field:"optional" json:"onReason" yaml:"onReason"`
	// A glob string that will match on the statusReason returned by the exiting job.
	//
	// For example, `'Host EC2*'` indicates that the spot instance has been reclaimed.
	// Default: - will not match on the status reason.
	//
	// Experimental.
	OnStatusReason *string `field:"optional" json:"onStatusReason" yaml:"onStatusReason"`
}

The corresponding Action will only be taken if *all* of the conditions specified here are met.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	RetryAttempts: jsii.Number(5),
	RetryStrategies: []batch.RetryStrategy{
		batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()),
	},
})
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_SPOT_INSTANCE_RECLAIMED()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_Custom(&CustomReason{
	OnExitCode: jsii.String("40*"),
	OnReason: jsii.String("some reason"),
})))

Experimental.

type Device

type Device struct {
	// The path for the device on the host container instance.
	// Experimental.
	HostPath *string `field:"required" json:"hostPath" yaml:"hostPath"`
	// The path inside the container at which to expose the host device.
	// Default: Same path as the host.
	//
	// Experimental.
	ContainerPath *string `field:"optional" json:"containerPath" yaml:"containerPath"`
	// The explicit permissions to provide to the container for the device.
	//
	// By default, the container has permissions for read, write, and mknod for the device.
	// Default: Readonly.
	//
	// Experimental.
	Permissions *[]DevicePermission `field:"optional" json:"permissions" yaml:"permissions"`
}

A container instance host device.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"

device := &Device{
	HostPath: jsii.String("hostPath"),

	// the properties below are optional
	ContainerPath: jsii.String("containerPath"),
	Permissions: []batch_alpha.DevicePermission{
		batch_alpha.DevicePermission_READ,
	},
}

Experimental.

type DevicePermission

type DevicePermission string

Permissions for device access. Experimental.

const (
	// Read.
	// Experimental.
	DevicePermission_READ DevicePermission = "READ"
	// Write.
	// Experimental.
	DevicePermission_WRITE DevicePermission = "WRITE"
	// Make a node.
	// Experimental.
	DevicePermission_MKNOD DevicePermission = "MKNOD"
)

type DnsPolicy

type DnsPolicy string

The DNS Policy for the pod used by the Job Definition. See: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy

Experimental.

const (
	// The Pod inherits the name resolution configuration from the node that the Pods run on.
	// Experimental.
	DnsPolicy_DEFAULT DnsPolicy = "DEFAULT"
	// Any DNS query that does not match the configured cluster domain suffix, such as `"www.kubernetes.io"`, is forwarded to an upstream nameserver by the DNS server. Cluster administrators may have extra stub-domain and upstream DNS servers configured.
	// Experimental.
	DnsPolicy_CLUSTER_FIRST DnsPolicy = "CLUSTER_FIRST"
	// For Pods running with `hostNetwork`, you should explicitly set its DNS policy to `CLUSTER_FIRST_WITH_HOST_NET`.
	//
	// Otherwise, Pods running with `hostNetwork` and `CLUSTER_FIRST` will fall back to the behavior of the `DEFAULT` policy.
	// Experimental.
	DnsPolicy_CLUSTER_FIRST_WITH_HOST_NET DnsPolicy = "CLUSTER_FIRST_WITH_HOST_NET"
)
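As an illustration (hedged: the `DnsPolicy` and `UseHostNetwork` property names are assumed from `EksJobDefinitionProps` and worth confirming against your package version), a host-networked EKS job would pair the two settings:

```go
var container EksContainerDefinition // an existing EKS container definition

// Pods on the host network should use CLUSTER_FIRST_WITH_HOST_NET;
// otherwise CLUSTER_FIRST degrades to the DEFAULT policy.
jobDefn := batch.NewEksJobDefinition(this, jsii.String("hostNetJobDefn"), &EksJobDefinitionProps{
	Container: container,
	UseHostNetwork: jsii.Boolean(true),
	DnsPolicy: batch.DnsPolicy_CLUSTER_FIRST_WITH_HOST_NET,
})
```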

type EcsContainerDefinitionProps

type EcsContainerDefinitionProps struct {
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	// Experimental.
	Cpu *float64 `field:"required" json:"cpu" yaml:"cpu"`
	// The image that this container will run.
	// Experimental.
	Image awsecs.ContainerImage `field:"required" json:"image" yaml:"image"`
	// The memory hard limit presented to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	// Experimental.
	Memory awscdk.Size `field:"required" json:"memory" yaml:"memory"`
	// The command that's passed to the container.
	// See: https://docs.docker.com/engine/reference/builder/#cmd
	//
	// Default: - no command.
	//
	// Experimental.
	Command *[]*string `field:"optional" json:"command" yaml:"command"`
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// Default: - no environment variables.
	//
	// Experimental.
	Environment *map[string]*string `field:"optional" json:"environment" yaml:"environment"`
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html
	//
	// Default: - a Role will be created.
	//
	// Experimental.
	ExecutionRole awsiam.IRole `field:"optional" json:"executionRole" yaml:"executionRole"`
	// The role that the container can assume.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
	//
	// Default: - no job role.
	//
	// Experimental.
	JobRole awsiam.IRole `field:"optional" json:"jobRole" yaml:"jobRole"`
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// Default: none.
	//
	// Experimental.
	LinuxParameters LinuxParameters `field:"optional" json:"linuxParameters" yaml:"linuxParameters"`
	// The logging configuration for this Job.
	// Default: - the log configuration of the Docker daemon.
	//
	// Experimental.
	Logging awsecs.LogDriver `field:"optional" json:"logging" yaml:"logging"`
	// Gives the container readonly access to its root filesystem.
	// Default: false.
	//
	// Experimental.
	ReadonlyRootFilesystem *bool `field:"optional" json:"readonlyRootFilesystem" yaml:"readonlyRootFilesystem"`
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html
	//
	// Default: - no secrets.
	//
	// Experimental.
	Secrets *map[string]Secret `field:"optional" json:"secrets" yaml:"secrets"`
	// The user name to use inside the container.
	// Default: - no user.
	//
	// Experimental.
	User *string `field:"optional" json:"user" yaml:"user"`
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	// Default: - no volumes.
	//
	// Experimental.
	Volumes *[]EcsVolume `field:"optional" json:"volumes" yaml:"volumes"`
}

Props to configure an EcsContainerDefinition.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"

var containerImage containerImage
var ecsVolume ecsVolume
var linuxParameters linuxParameters
var logDriver logDriver
var role role
var secret secret
var size size

ecsContainerDefinitionProps := &EcsContainerDefinitionProps{
	Cpu: jsii.Number(123),
	Image: containerImage,
	Memory: size,

	// the properties below are optional
	Command: []*string{
		jsii.String("command"),
	},
	Environment: map[string]*string{
		"environmentKey": jsii.String("environment"),
	},
	ExecutionRole: role,
	JobRole: role,
	LinuxParameters: linuxParameters,
	Logging: logDriver,
	ReadonlyRootFilesystem: jsii.Boolean(false),
	Secrets: map[string]*secret{
		"secretsKey": secret,
	},
	User: jsii.String("user"),
	Volumes: []*ecsVolume{
		ecsVolume,
	},
}

Experimental.

type EcsEc2ContainerDefinition

type EcsEc2ContainerDefinition interface {
	constructs.Construct
	IEcsContainerDefinition
	IEcsEc2ContainerDefinition
	// The command that's passed to the container.
	// Experimental.
	Command() *[]*string
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	// Experimental.
	Cpu() *float64
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// Experimental.
	Environment() *map[string]*string
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	// Experimental.
	ExecutionRole() awsiam.IRole
	// The number of physical GPUs to reserve for the container.
	//
	// Make sure that the number of GPUs reserved for all containers in a job doesn't exceed
	// the number of available GPUs on the compute resource that the job is launched on.
	// Experimental.
	Gpu() *float64
	// The image that this container will run.
	// Experimental.
	Image() awsecs.ContainerImage
	// The role that the container can assume.
	// Experimental.
	JobRole() awsiam.IRole
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// Experimental.
	LinuxParameters() LinuxParameters
	// The configuration of the log driver.
	// Experimental.
	LogDriverConfig() *awsecs.LogDriverConfig
	// The memory hard limit present to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	// Experimental.
	Memory() awscdk.Size
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// When this parameter is true, the container is given elevated permissions on the host container instance (similar to the root user).
	// Experimental.
	Privileged() *bool
	// Gives the container readonly access to its root filesystem.
	// Experimental.
	ReadonlyRootFilesystem() *bool
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	// Experimental.
	Secrets() *map[string]Secret
	// Limits to set for the user this docker container will run as.
	// Experimental.
	Ulimits() *[]*Ulimit
	// The user name to use inside the container.
	// Experimental.
	User() *string
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	// Experimental.
	Volumes() *[]EcsVolume
	// Add a ulimit to this container.
	// Experimental.
	AddUlimit(ulimit *Ulimit)
	// Add a Volume to this container.
	// Experimental.
	AddVolume(volume EcsVolume)
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

A container orchestrated by ECS that uses EC2 resources.

Example:

import batch "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk/v2"
import ecs "github.com/aws/aws-cdk-go/awscdk/v2/awsecs"
import iam "github.com/aws/aws-cdk-go/awscdk/v2/awsiam"

var vpc iVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []orderedComputeEnvironment{
		&orderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

Experimental.
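The `AddUlimit` and `AddVolume` methods listed above let you mutate a container definition after it has been constructed. The sketch below is illustrative, not taken from the upstream docs: the construct IDs, limit values, and paths are placeholders, and it assumes the same `batch`, `ecs`, `cdk`, and `jsii` imports as the example above.

```go
containerDefn := batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
	Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
	Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
	Cpu: jsii.Number(256),
})

// Raise the open-file-descriptor limit for the user the container runs as.
containerDefn.AddUlimit(&Ulimit{
	Name: batch.UlimitName_NOFILE,
	SoftLimit: jsii.Number(65535),
	HardLimit: jsii.Number(65535),
})

// Mount a directory from the host instance into the container.
containerDefn.AddVolume(batch.EcsVolume_Host(&HostVolumeOptions{
	Name: jsii.String("myHostVolume"),
	ContainerPath: jsii.String("/mnt/data"),
	HostPath: jsii.String("/data"),
}))
```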

func NewEcsEc2ContainerDefinition

func NewEcsEc2ContainerDefinition(scope constructs.Construct, id *string, props *EcsEc2ContainerDefinitionProps) EcsEc2ContainerDefinition

Experimental.

type EcsEc2ContainerDefinitionProps

type EcsEc2ContainerDefinitionProps struct {
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	// Experimental.
	Cpu *float64 `field:"required" json:"cpu" yaml:"cpu"`
	// The image that this container will run.
	// Experimental.
	Image awsecs.ContainerImage `field:"required" json:"image" yaml:"image"`
	// The memory hard limit present to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	// Experimental.
	Memory awscdk.Size `field:"required" json:"memory" yaml:"memory"`
	// The command that's passed to the container.
	// See: https://docs.docker.com/engine/reference/builder/#cmd
	//
	// Default: - no command.
	//
	// Experimental.
	Command *[]*string `field:"optional" json:"command" yaml:"command"`
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// Default: - no environment variables.
	//
	// Experimental.
	Environment *map[string]*string `field:"optional" json:"environment" yaml:"environment"`
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html
	//
	// Default: - a Role will be created.
	//
	// Experimental.
	ExecutionRole awsiam.IRole `field:"optional" json:"executionRole" yaml:"executionRole"`
	// The role that the container can assume.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
	//
	// Default: - no job role.
	//
	// Experimental.
	JobRole awsiam.IRole `field:"optional" json:"jobRole" yaml:"jobRole"`
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// Default: none.
	//
	// Experimental.
	LinuxParameters LinuxParameters `field:"optional" json:"linuxParameters" yaml:"linuxParameters"`
	// The logging configuration for this Job.
	// Default: - the log configuration of the Docker daemon.
	//
	// Experimental.
	Logging awsecs.LogDriver `field:"optional" json:"logging" yaml:"logging"`
	// Gives the container readonly access to its root filesystem.
	// Default: false.
	//
	// Experimental.
	ReadonlyRootFilesystem *bool `field:"optional" json:"readonlyRootFilesystem" yaml:"readonlyRootFilesystem"`
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html
	//
	// Default: - no secrets.
	//
	// Experimental.
	Secrets *map[string]Secret `field:"optional" json:"secrets" yaml:"secrets"`
	// The user name to use inside the container.
	// Default: - no user.
	//
	// Experimental.
	User *string `field:"optional" json:"user" yaml:"user"`
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	// Default: - no volumes.
	//
	// Experimental.
	Volumes *[]EcsVolume `field:"optional" json:"volumes" yaml:"volumes"`
	// The number of physical GPUs to reserve for the container.
	//
	// Make sure that the number of GPUs reserved for all containers in a job doesn't exceed
	// the number of available GPUs on the compute resource that the job is launched on.
	// Default: - no gpus.
	//
	// Experimental.
	Gpu *float64 `field:"optional" json:"gpu" yaml:"gpu"`
	// When this parameter is true, the container is given elevated permissions on the host container instance (similar to the root user).
	// Default: false.
	//
	// Experimental.
	Privileged *bool `field:"optional" json:"privileged" yaml:"privileged"`
	// Limits to set for the user this docker container will run as.
	// Default: - no ulimits.
	//
	// Experimental.
	Ulimits *[]*Ulimit `field:"optional" json:"ulimits" yaml:"ulimits"`
}

Props to configure an EcsEc2ContainerDefinition.

Example:

import batch "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk/v2"
import ecs "github.com/aws/aws-cdk-go/awscdk/v2/awsecs"
import iam "github.com/aws/aws-cdk-go/awscdk/v2/awsiam"

var vpc iVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []orderedComputeEnvironment{
		&orderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

Experimental.

type EcsFargateContainerDefinition

type EcsFargateContainerDefinition interface {
	constructs.Construct
	IEcsContainerDefinition
	IEcsFargateContainerDefinition
	// Indicates whether the job has a public IP address.
	//
	// For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet
	// (for example, to pull container images), the private subnet requires a NAT gateway be attached to route requests to the internet.
	// Experimental.
	AssignPublicIp() *bool
	// The command that's passed to the container.
	// Experimental.
	Command() *[]*string
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	// Experimental.
	Cpu() *float64
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// Experimental.
	Environment() *map[string]*string
	// The size for ephemeral storage.
	// Experimental.
	EphemeralStorageSize() awscdk.Size
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	// Experimental.
	ExecutionRole() awsiam.IRole
	// Which version of Fargate to use when running this container.
	// Experimental.
	FargatePlatformVersion() awsecs.FargatePlatformVersion
	// The image that this container will run.
	// Experimental.
	Image() awsecs.ContainerImage
	// The role that the container can assume.
	// Experimental.
	JobRole() awsiam.IRole
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// Experimental.
	LinuxParameters() LinuxParameters
	// The configuration of the log driver.
	// Experimental.
	LogDriverConfig() *awsecs.LogDriverConfig
	// The memory hard limit present to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	// Experimental.
	Memory() awscdk.Size
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// Gives the container readonly access to its root filesystem.
	// Experimental.
	ReadonlyRootFilesystem() *bool
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	// Experimental.
	Secrets() *map[string]Secret
	// The user name to use inside the container.
	// Experimental.
	User() *string
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	// Experimental.
	Volumes() *[]EcsVolume
	// Add a Volume to this container.
	// Experimental.
	AddVolume(volume EcsVolume)
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

A container orchestrated by ECS that uses Fargate resources.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"

var containerImage containerImage
var ecsVolume ecsVolume
var linuxParameters linuxParameters
var logDriver logDriver
var role role
var secret secret
var size size

ecsFargateContainerDefinition := batch_alpha.NewEcsFargateContainerDefinition(this, jsii.String("MyEcsFargateContainerDefinition"), &EcsFargateContainerDefinitionProps{
	Cpu: jsii.Number(123),
	Image: containerImage,
	Memory: size,

	// the properties below are optional
	AssignPublicIp: jsii.Boolean(false),
	Command: []*string{
		jsii.String("command"),
	},
	Environment: map[string]*string{
		"environmentKey": jsii.String("environment"),
	},
	EphemeralStorageSize: size,
	ExecutionRole: role,
	FargatePlatformVersion: awscdk.Aws_ecs.FargatePlatformVersion_LATEST,
	JobRole: role,
	LinuxParameters: linuxParameters,
	Logging: logDriver,
	ReadonlyRootFilesystem: jsii.Boolean(false),
	Secrets: map[string]*secret{
		"secretsKey": secret,
	},
	User: jsii.String("user"),
	Volumes: []*ecsVolume{
		ecsVolume,
	},
})

Experimental.

func NewEcsFargateContainerDefinition

func NewEcsFargateContainerDefinition(scope constructs.Construct, id *string, props *EcsFargateContainerDefinitionProps) EcsFargateContainerDefinition

Experimental.
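Unlike its EC2 counterpart, a Fargate container definition accepts `AssignPublicIp`, `EphemeralStorageSize`, and `FargatePlatformVersion`. The sketch below shows those options wired into a job definition; the construct IDs and sizes are placeholders, and the snippet assumes `batch`, `ecs`, `cdk`, and `jsii` imports as in the examples above.

```go
batch.NewEcsJobDefinition(this, jsii.String("FargateJobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsFargateContainerDefinition(this, jsii.String("fargateContainer"), &EcsFargateContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		// Fargate-only options: extra scratch space, and a public IP so the
		// task can pull images without routing through a NAT gateway.
		EphemeralStorageSize: cdk.Size_Gibibytes(jsii.Number(100)),
		AssignPublicIp: jsii.Boolean(true),
	}),
})
```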

type EcsFargateContainerDefinitionProps

type EcsFargateContainerDefinitionProps struct {
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	// Experimental.
	Cpu *float64 `field:"required" json:"cpu" yaml:"cpu"`
	// The image that this container will run.
	// Experimental.
	Image awsecs.ContainerImage `field:"required" json:"image" yaml:"image"`
	// The memory hard limit present to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	// Experimental.
	Memory awscdk.Size `field:"required" json:"memory" yaml:"memory"`
	// The command that's passed to the container.
	// See: https://docs.docker.com/engine/reference/builder/#cmd
	//
	// Default: - no command.
	//
	// Experimental.
	Command *[]*string `field:"optional" json:"command" yaml:"command"`
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// Default: - no environment variables.
	//
	// Experimental.
	Environment *map[string]*string `field:"optional" json:"environment" yaml:"environment"`
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html
	//
	// Default: - a Role will be created.
	//
	// Experimental.
	ExecutionRole awsiam.IRole `field:"optional" json:"executionRole" yaml:"executionRole"`
	// The role that the container can assume.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
	//
	// Default: - no job role.
	//
	// Experimental.
	JobRole awsiam.IRole `field:"optional" json:"jobRole" yaml:"jobRole"`
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// Default: none.
	//
	// Experimental.
	LinuxParameters LinuxParameters `field:"optional" json:"linuxParameters" yaml:"linuxParameters"`
	// The logging configuration for this Job.
	// Default: - the log configuration of the Docker daemon.
	//
	// Experimental.
	Logging awsecs.LogDriver `field:"optional" json:"logging" yaml:"logging"`
	// Gives the container readonly access to its root filesystem.
	// Default: false.
	//
	// Experimental.
	ReadonlyRootFilesystem *bool `field:"optional" json:"readonlyRootFilesystem" yaml:"readonlyRootFilesystem"`
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html
	//
	// Default: - no secrets.
	//
	// Experimental.
	Secrets *map[string]Secret `field:"optional" json:"secrets" yaml:"secrets"`
	// The user name to use inside the container.
	// Default: - no user.
	//
	// Experimental.
	User *string `field:"optional" json:"user" yaml:"user"`
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	// Default: - no volumes.
	//
	// Experimental.
	Volumes *[]EcsVolume `field:"optional" json:"volumes" yaml:"volumes"`
	// Indicates whether the job has a public IP address.
	//
	// For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet
	// (for example, to pull container images), the private subnet requires a NAT gateway be attached to route requests to the internet.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
	//
	// Default: false.
	//
	// Experimental.
	AssignPublicIp *bool `field:"optional" json:"assignPublicIp" yaml:"assignPublicIp"`
	// The size for ephemeral storage.
	// Default: - 20 GiB.
	//
	// Experimental.
	EphemeralStorageSize awscdk.Size `field:"optional" json:"ephemeralStorageSize" yaml:"ephemeralStorageSize"`
	// Which version of Fargate to use when running this container.
	// Default: LATEST.
	//
	// Experimental.
	FargatePlatformVersion awsecs.FargatePlatformVersion `field:"optional" json:"fargatePlatformVersion" yaml:"fargatePlatformVersion"`
}

Props to configure an EcsFargateContainerDefinition.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"

var containerImage containerImage
var ecsVolume ecsVolume
var linuxParameters linuxParameters
var logDriver logDriver
var role role
var secret secret
var size size

ecsFargateContainerDefinitionProps := &EcsFargateContainerDefinitionProps{
	Cpu: jsii.Number(123),
	Image: containerImage,
	Memory: size,

	// the properties below are optional
	AssignPublicIp: jsii.Boolean(false),
	Command: []*string{
		jsii.String("command"),
	},
	Environment: map[string]*string{
		"environmentKey": jsii.String("environment"),
	},
	EphemeralStorageSize: size,
	ExecutionRole: role,
	FargatePlatformVersion: awscdk.Aws_ecs.FargatePlatformVersion_LATEST,
	JobRole: role,
	LinuxParameters: linuxParameters,
	Logging: logDriver,
	ReadonlyRootFilesystem: jsii.Boolean(false),
	Secrets: map[string]*secret{
		"secretsKey": secret,
	},
	User: jsii.String("user"),
	Volumes: []*ecsVolume{
		ecsVolume,
	},
}

Experimental.

type EcsJobDefinition

type EcsJobDefinition interface {
	awscdk.Resource
	IJobDefinition
	// The container that this job will run.
	// Experimental.
	Container() IEcsContainerDefinition
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	// Experimental.
	Env() *awscdk.ResourceEnvironment
	// The ARN of this job definition.
	// Experimental.
	JobDefinitionArn() *string
	// The name of this job definition.
	// Experimental.
	JobDefinitionName() *string
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// Experimental.
	Parameters() *map[string]interface{}
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	// Experimental.
	PhysicalName() *string
	// Whether to propagate tags from the JobDefinition to the ECS task that Batch spawns.
	// Experimental.
	PropagateTags() *bool
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to this many times.
	// Experimental.
	RetryAttempts() *float64
	// Defines the retry behavior for this job.
	// Experimental.
	RetryStrategies() *[]RetryStrategy
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Experimental.
	SchedulingPriority() *float64
	// The stack in which this resource is defined.
	// Experimental.
	Stack() awscdk.Stack
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Experimental.
	Timeout() awscdk.Duration
	// Add a RetryStrategy to this JobDefinition.
	// Experimental.
	AddRetryStrategy(strategy RetryStrategy)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	// Experimental.
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Experimental.
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	// Experimental.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	// Experimental.
	GetResourceNameAttribute(nameAttr *string) *string
	// Grants the `batch:submitJob` permission to the identity on both this job definition and the `queue`.
	// Experimental.
	GrantSubmitJob(identity awsiam.IGrantable, queue IJobQueue)
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

A JobDefinition that uses ECS orchestration.

Example:

import batch "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk/v2"
import ecs "github.com/aws/aws-cdk-go/awscdk/v2/awsecs"
import iam "github.com/aws/aws-cdk-go/awscdk/v2/awsiam"

var vpc iVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []orderedComputeEnvironment{
		&orderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

Experimental.
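Retry behavior can also be attached after construction with `AddRetryStrategy`. The construct IDs below are placeholders; the snippet assumes the same `batch`, `ecs`, `cdk`, and `jsii` imports as the example above.

```go
jobDefn := batch.NewEcsJobDefinition(this, jsii.String("RetryingJobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

// Exit immediately (do not retry) when the container image cannot be pulled.
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
```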

func NewEcsJobDefinition

func NewEcsJobDefinition(scope constructs.Construct, id *string, props *EcsJobDefinitionProps) EcsJobDefinition

Experimental.

type EcsJobDefinitionProps

type EcsJobDefinitionProps struct {
	// The name of this job definition.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	JobDefinitionName *string `field:"optional" json:"jobDefinitionName" yaml:"jobDefinitionName"`
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters
	//
	// Default: none.
	//
	// Experimental.
	Parameters *map[string]interface{} `field:"optional" json:"parameters" yaml:"parameters"`
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to this many times.
	// Default: 1.
	//
	// Experimental.
	RetryAttempts *float64 `field:"optional" json:"retryAttempts" yaml:"retryAttempts"`
	// Defines the retry behavior for this job.
	// Default: - no `RetryStrategy`.
	//
	// Experimental.
	RetryStrategies *[]RetryStrategy `field:"optional" json:"retryStrategies" yaml:"retryStrategies"`
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Default: none.
	//
	// Experimental.
	SchedulingPriority *float64 `field:"optional" json:"schedulingPriority" yaml:"schedulingPriority"`
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Default: - no timeout.
	//
	// Experimental.
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
	// The container that this job will run.
	// Experimental.
	Container IEcsContainerDefinition `field:"required" json:"container" yaml:"container"`
	// Whether to propagate tags from the JobDefinition to the ECS task that Batch spawns.
	// Default: false.
	//
	// Experimental.
	PropagateTags *bool `field:"optional" json:"propagateTags" yaml:"propagateTags"`
}

Props for EcsJobDefinition.

Example:

import batch "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk/v2"
import ecs "github.com/aws/aws-cdk-go/awscdk/v2/awsecs"
import iam "github.com/aws/aws-cdk-go/awscdk/v2/awsiam"

var vpc iVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []orderedComputeEnvironment{
		&orderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

Experimental.
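The optional scheduling-related props — `Timeout`, `RetryAttempts`, and `SchedulingPriority` — can be combined as sketched below. The construct IDs and values are placeholders, and the snippet assumes the same `batch`, `ecs`, `cdk`, and `jsii` imports as the example above.

```go
batch.NewEcsJobDefinition(this, jsii.String("TimedJobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	// Terminate jobs still running after 30 minutes, and retry failures twice.
	Timeout: cdk.Duration_Minutes(jsii.Number(30)),
	RetryAttempts: jsii.Number(2),
	// Only meaningful when the queue uses fair-share scheduling.
	SchedulingPriority: jsii.Number(10),
})
```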

type EcsMachineImage

type EcsMachineImage struct {
	// The machine image to use.
	// Default: - chosen by batch.
	//
	// Experimental.
	Image awsec2.IMachineImage `field:"optional" json:"image" yaml:"image"`
	// Tells Batch which instance type to launch this image on.
	// Default: - 'ECS_AL2' for non-gpu instances, 'ECS_AL2_NVIDIA' for gpu instances.
	//
	// Experimental.
	ImageType EcsMachineImageType `field:"optional" json:"imageType" yaml:"imageType"`
}

A Batch MachineImage that is compatible with ECS.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import "github.com/aws/aws-cdk-go/awscdk"

var machineImage iMachineImage

ecsMachineImage := &EcsMachineImage{
	Image: machineImage,
	ImageType: batch_alpha.EcsMachineImageType_ECS_AL2,
}

Experimental.

type EcsMachineImageType

type EcsMachineImageType string

Maps the image to instance types. Experimental.

const (
	// Tells Batch that this machine image runs on non-GPU instances.
	// Experimental.
	EcsMachineImageType_ECS_AL2 EcsMachineImageType = "ECS_AL2"
	// Tells Batch that this machine image runs on GPU instances.
	// Experimental.
	EcsMachineImageType_ECS_AL2_NVIDIA EcsMachineImageType = "ECS_AL2_NVIDIA"
)
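A machine image type is typically supplied through the `Images` prop of a managed compute environment. The sketch below is an assumption-laden illustration, not an upstream example: the construct ID is a placeholder, `vpc` is assumed to be declared elsewhere, and the slice literal shape follows the prop's declared type.

```go
batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("gpuComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	Images: []*EcsMachineImage{
		&EcsMachineImage{
			// Tell Batch to use the GPU-enabled ECS-optimized AMI.
			ImageType: batch.EcsMachineImageType_ECS_AL2_NVIDIA,
		},
	},
})
```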

type EcsVolume

type EcsVolume interface {
	// The path on the container that this volume will be mounted to.
	// Experimental.
	ContainerPath() *string
	// The name of this volume.
	// Experimental.
	Name() *string
	// Whether or not the container has readonly access to this volume.
	// Default: false.
	//
	// Experimental.
	Readonly() *bool
}

Represents a Volume that can be mounted to a container that uses ECS.

Example:

import batch "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk/v2"
import ecs "github.com/aws/aws-cdk-go/awscdk/v2/awsecs"
import efs "github.com/aws/aws-cdk-go/awscdk/v2/awsefs"
import iam "github.com/aws/aws-cdk-go/awscdk/v2/awsiam"

var myFileSystem efs.IFileSystem
var myJobRole iam.Role

myFileSystem.GrantRead(myJobRole)

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Volumes: []batch.EcsVolume{
			batch.EcsVolume_Efs(&EfsVolumeOptions{
				Name: jsii.String("myVolume"),
				FileSystem: myFileSystem,
				ContainerPath: jsii.String("/Volumes/myVolume"),
				UseJobRole: jsii.Boolean(true),
			}),
		},
		JobRole: myJobRole,
	}),
})

Experimental.

type EcsVolumeOptions

type EcsVolumeOptions struct {
	// the path on the container where this volume is mounted.
	// Experimental.
	ContainerPath *string `field:"required" json:"containerPath" yaml:"containerPath"`
	// the name of this volume.
	// Experimental.
	Name *string `field:"required" json:"name" yaml:"name"`
	// if set, the container will have readonly access to the volume.
	// Default: false.
	//
	// Experimental.
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
}

Options to configure an EcsVolume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"

ecsVolumeOptions := &batch_alpha.EcsVolumeOptions{
	ContainerPath: jsii.String("containerPath"),
	Name: jsii.String("name"),

	// the properties below are optional
	Readonly: jsii.Boolean(false),
}

Experimental.

type EfsVolume

type EfsVolume interface {
	EcsVolume
	// The Amazon EFS access point ID to use.
	//
	// If an access point is specified, `rootDirectory` must either be omitted or set to `/`
	// which enforces the path set on the EFS access point.
	// If an access point is used, `enableTransitEncryption` must be `true`.
	// See: https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html
	//
	// Default: - no accessPointId.
	//
	// Experimental.
	AccessPointId() *string
	// The path on the container that this volume will be mounted to.
	// Experimental.
	ContainerPath() *string
	// Enables encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server.
	// See: https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html
	//
	// Default: false.
	//
	// Experimental.
	EnableTransitEncryption() *bool
	// The EFS File System that supports this volume.
	// Experimental.
	FileSystem() awsefs.IFileSystem
	// The name of this volume.
	// Experimental.
	Name() *string
	// Whether or not the container has readonly access to this volume.
	// Default: false.
	//
	// Experimental.
	Readonly() *bool
	// The directory within the Amazon EFS file system to mount as the root directory inside the host.
	//
	// If this parameter is omitted, the root of the Amazon EFS volume is used instead.
	// Specifying `/` has the same effect as omitting this parameter.
	// The maximum length is 4,096 characters.
	// Default: - root of the EFS File System.
	//
	// Experimental.
	RootDirectory() *string
	// The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server.
	//
	// The value must be between 0 and 65,535.
	// See: https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html
	//
	// Default: - chosen by the EFS Mount Helper.
	//
	// Experimental.
	TransitEncryptionPort() *float64
	// Whether or not to use the AWS Batch job IAM role defined in a job definition when mounting the Amazon EFS file system.
	//
	// If specified, `enableTransitEncryption` must be `true`.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html#efs-volume-accesspoints
	//
	// Default: false.
	//
	// Experimental.
	UseJobRole() *bool
}

A Volume that uses an AWS Elastic File System (EFS); this volume can grow and shrink as needed.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import efs "github.com/aws/aws-cdk-go/awscdk/v2/awsefs"

var fileSystem efs.FileSystem

efsVolume := batch_alpha.NewEfsVolume(&batch_alpha.EfsVolumeOptions{
	ContainerPath: jsii.String("containerPath"),
	FileSystem: fileSystem,
	Name: jsii.String("name"),

	// the properties below are optional
	AccessPointId: jsii.String("accessPointId"),
	EnableTransitEncryption: jsii.Boolean(false),
	Readonly: jsii.Boolean(false),
	RootDirectory: jsii.String("rootDirectory"),
	TransitEncryptionPort: jsii.Number(123),
	UseJobRole: jsii.Boolean(false),
})

Experimental.

func EcsVolume_Efs

func EcsVolume_Efs(options *EfsVolumeOptions) EfsVolume

Creates a Volume that uses an AWS Elastic File System (EFS); this volume can grow and shrink as needed. See: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html

Experimental.
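
The access-point constraints documented on `EfsVolumeOptions` (`rootDirectory` omitted or `/`, `enableTransitEncryption` set to `true`) can be seen together in a sketch. Like the other snippets in this document it is not standalone; `myFileSystem` is a placeholder, and the access point ID shown is hypothetical.

var myFileSystem efs.IFileSystem

// Sketch only: mounting an EFS volume through an access point.
// The access point ID below is a placeholder.
volume := batch.EcsVolume_Efs(&EfsVolumeOptions{
	Name: jsii.String("apVolume"),
	FileSystem: myFileSystem,
	ContainerPath: jsii.String("/mnt/data"),
	// With an access point, rootDirectory must be omitted or "/",
	// and transit encryption must be enabled.
	AccessPointId: jsii.String("fsap-placeholder"),
	EnableTransitEncryption: jsii.Boolean(true),
})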

func EfsVolume_Efs

func EfsVolume_Efs(options *EfsVolumeOptions) EfsVolume

Creates a Volume that uses an AWS Elastic File System (EFS); this volume can grow and shrink as needed. See: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html

Experimental.

func HostVolume_Efs

func HostVolume_Efs(options *EfsVolumeOptions) EfsVolume

Creates a Volume that uses an AWS Elastic File System (EFS); this volume can grow and shrink as needed. See: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html

Experimental.

func NewEfsVolume

func NewEfsVolume(options *EfsVolumeOptions) EfsVolume

Experimental.

type EfsVolumeOptions

type EfsVolumeOptions struct {
	// the path on the container where this volume is mounted.
	// Experimental.
	ContainerPath *string `field:"required" json:"containerPath" yaml:"containerPath"`
	// the name of this volume.
	// Experimental.
	Name *string `field:"required" json:"name" yaml:"name"`
	// if set, the container will have readonly access to the volume.
	// Default: false.
	//
	// Experimental.
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
	// The EFS File System that supports this volume.
	// Experimental.
	FileSystem awsefs.IFileSystem `field:"required" json:"fileSystem" yaml:"fileSystem"`
	// The Amazon EFS access point ID to use.
	//
	// If an access point is specified, `rootDirectory` must either be omitted or set to `/`
	// which enforces the path set on the EFS access point.
	// If an access point is used, `enableTransitEncryption` must be `true`.
	// See: https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html
	//
	// Default: - no accessPointId.
	//
	// Experimental.
	AccessPointId *string `field:"optional" json:"accessPointId" yaml:"accessPointId"`
	// Enables encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server.
	// See: https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html
	//
	// Default: false.
	//
	// Experimental.
	EnableTransitEncryption *bool `field:"optional" json:"enableTransitEncryption" yaml:"enableTransitEncryption"`
	// The directory within the Amazon EFS file system to mount as the root directory inside the host.
	//
	// If this parameter is omitted, the root of the Amazon EFS volume is used instead.
	// Specifying `/` has the same effect as omitting this parameter.
	// The maximum length is 4,096 characters.
	// Default: - root of the EFS File System.
	//
	// Experimental.
	RootDirectory *string `field:"optional" json:"rootDirectory" yaml:"rootDirectory"`
	// The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server.
	//
	// The value must be between 0 and 65,535.
	// See: https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html
	//
	// Default: - chosen by the EFS Mount Helper.
	//
	// Experimental.
	TransitEncryptionPort *float64 `field:"optional" json:"transitEncryptionPort" yaml:"transitEncryptionPort"`
	// Whether or not to use the AWS Batch job IAM role defined in a job definition when mounting the Amazon EFS file system.
	//
	// If specified, `enableTransitEncryption` must be `true`.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html#efs-volume-accesspoints
	//
	// Default: false.
	//
	// Experimental.
	UseJobRole *bool `field:"optional" json:"useJobRole" yaml:"useJobRole"`
}

Options for configuring an EfsVolume.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk/v2"
import iam "github.com/aws/aws-cdk-go/awscdk/v2/awsiam"
import efs "github.com/aws/aws-cdk-go/awscdk/v2/awsefs"
import ecs "github.com/aws/aws-cdk-go/awscdk/v2/awsecs"

var myFileSystem efs.IFileSystem
var myJobRole iam.Role

myFileSystem.GrantRead(myJobRole)

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Volumes: []batch.EcsVolume{
			batch.EcsVolume_Efs(&EfsVolumeOptions{
				Name: jsii.String("myVolume"),
				FileSystem: myFileSystem,
				ContainerPath: jsii.String("/Volumes/myVolume"),
				UseJobRole: jsii.Boolean(true),
			}),
		},
		JobRole: myJobRole,
	}),
})

Experimental.

type EksContainerDefinition

type EksContainerDefinition interface {
	constructs.Construct
	IEksContainerDefinition
	// An array of arguments to the entrypoint.
	//
	// If this isn't specified, the CMD of the container image is used.
	// This corresponds to the args member in the Entrypoint portion of the Pod in Kubernetes.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to "$(NAME1)" and the NAME1 environment variable doesn't exist,
	// the command string will remain "$(NAME1)." $$ is replaced with $, and the resulting string isn't expanded.
	// For example, $$(VAR_NAME) is passed as $(VAR_NAME) whether or not the VAR_NAME environment variable exists.
	// Experimental.
	Args() *[]*string
	// The entrypoint for the container.
	//
	// This isn't run within a shell.
	// If this isn't specified, the `ENTRYPOINT` of the container image is used.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to `"$(NAME1)"` and the `NAME1` environment variable doesn't exist,
	// the command string will remain `"$(NAME1)."` `$$` is replaced with `$` and the resulting string isn't expanded.
	// For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists.
	//
	// The entrypoint can't be updated.
	// Experimental.
	Command() *[]*string
	// The hard limit of CPUs to present to this container. Must be an even multiple of 0.25.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	// Experimental.
	CpuLimit() *float64
	// The soft limit of CPUs to reserve for the container. Must be an even multiple of 0.25.
	//
	// The container will be given at least this many CPUs, but may consume more.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	// Experimental.
	CpuReservation() *float64
	// The environment variables to pass to this container.
	//
	// *Note*: Environment variables cannot start with "AWS_BATCH".
	// This naming convention is reserved for variables that AWS Batch sets.
	// Experimental.
	Env() *map[string]*string
	// The hard limit of GPUs to present to this container.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	// Experimental.
	GpuLimit() *float64
	// The soft limit of GPUs to reserve for the container.
	//
	// The container will be given at least this many GPUs, but may consume more.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	// Experimental.
	GpuReservation() *float64
	// The image that this container will run.
	// Experimental.
	Image() awsecs.ContainerImage
	// The image pull policy for this container.
	// Experimental.
	ImagePullPolicy() ImagePullPolicy
	// The amount (in MiB) of memory to present to the container.
	//
	// If your container attempts to exceed the allocated memory, it will be terminated.
	//
	// Must be larger than 4 MiB.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required.
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	// Experimental.
	MemoryLimit() awscdk.Size
	// The soft limit (in MiB) of memory to reserve for the container.
	//
	// Your container will be given at least this much memory, but may consume more.
	//
	// Must be larger than 4 MiB.
	//
	// When system memory is under heavy contention, Docker attempts to keep the
	// container memory to this soft limit. However, your container can consume more
	// memory when it needs to, up to either the hard limit specified with the memory
	// parameter (if applicable), or all of the available memory on the container
	// instance, whichever comes first.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required.
	// If both are specified, then `memoryLimit` must be equal to `memoryReservation`.
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	// Experimental.
	MemoryReservation() awscdk.Size
	// The name of this container.
	// Experimental.
	Name() *string
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// If specified, gives this container elevated permissions on the host container instance.
	//
	// The level of permissions is similar to the root user permissions.
	//
	// This parameter maps to `privileged` policy in the Privileged pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// Experimental.
	Privileged() *bool
	// If specified, gives this container readonly access to its root file system.
	//
	// This parameter maps to `ReadOnlyRootFilesystem` policy in the Volumes and file systems pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// Experimental.
	ReadonlyRootFilesystem() *bool
	// If specified, the container is run as the specified group ID (`gid`).
	//
	// If this parameter isn't specified, the default is the group that's specified in the image metadata.
	// This parameter maps to `RunAsGroup` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// Experimental.
	RunAsGroup() *float64
	// If specified, the container is run as a user with a `uid` other than 0.
	//
	// Otherwise, no such rule is enforced.
	// This parameter maps to `RunAsUser` and `MustRunAsNonRoot` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// Experimental.
	RunAsRoot() *bool
	// If specified, this container is run as the specified user ID (`uid`).
	//
	// This parameter maps to `RunAsUser` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// Experimental.
	RunAsUser() *float64
	// The Volumes to mount to this container.
	//
	// Automatically added to the Pod.
	// Experimental.
	Volumes() *[]EksVolume
	// Mount a Volume to this container.
	//
	// Automatically added to the Pod.
	// Experimental.
	AddVolume(volume EksVolume)
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

A container that can be run with EKS orchestration on EC2 resources.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

Experimental.

func NewEksContainerDefinition

func NewEksContainerDefinition(scope constructs.Construct, id *string, props *EksContainerDefinitionProps) EksContainerDefinition

Experimental.

type EksContainerDefinitionProps

type EksContainerDefinitionProps struct {
	// The image that this container will run.
	// Experimental.
	Image awsecs.ContainerImage `field:"required" json:"image" yaml:"image"`
	// An array of arguments to the entrypoint.
	//
	// If this isn't specified, the CMD of the container image is used.
	// This corresponds to the args member in the Entrypoint portion of the Pod in Kubernetes.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to "$(NAME1)" and the NAME1 environment variable doesn't exist,
	// the command string will remain "$(NAME1)." $$ is replaced with $, and the resulting string isn't expanded.
	// For example, $$(VAR_NAME) is passed as $(VAR_NAME) whether or not the VAR_NAME environment variable exists.
	// See: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
	//
	// Default: - no args.
	//
	// Experimental.
	Args *[]*string `field:"optional" json:"args" yaml:"args"`
	// The entrypoint for the container.
	//
	// This isn't run within a shell.
	// If this isn't specified, the `ENTRYPOINT` of the container image is used.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to `"$(NAME1)"` and the `NAME1` environment variable doesn't exist,
	// the command string will remain `"$(NAME1)."` `$$` is replaced with `$` and the resulting string isn't expanded.
	// For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists.
	//
	// The entrypoint can't be updated.
	// See: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint
	//
	// Default: - no command.
	//
	// Experimental.
	Command *[]*string `field:"optional" json:"command" yaml:"command"`
	// The hard limit of CPUs to present to this container. Must be an even multiple of 0.25.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No CPU limit.
	//
	// Experimental.
	CpuLimit *float64 `field:"optional" json:"cpuLimit" yaml:"cpuLimit"`
	// The soft limit of CPUs to reserve for the container. Must be an even multiple of 0.25.
	//
	// The container will be given at least this many CPUs, but may consume more.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No CPUs reserved.
	//
	// Experimental.
	CpuReservation *float64 `field:"optional" json:"cpuReservation" yaml:"cpuReservation"`
	// The environment variables to pass to this container.
	//
	// *Note*: Environment variables cannot start with "AWS_BATCH".
	// This naming convention is reserved for variables that AWS Batch sets.
	// Default: - no environment variables.
	//
	// Experimental.
	Env *map[string]*string `field:"optional" json:"env" yaml:"env"`
	// The hard limit of GPUs to present to this container.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No GPU limit.
	//
	// Experimental.
	GpuLimit *float64 `field:"optional" json:"gpuLimit" yaml:"gpuLimit"`
	// The soft limit of GPUs to reserve for the container.
	//
	// The container will be given at least this many GPUs, but may consume more.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No GPUs reserved.
	//
	// Experimental.
	GpuReservation *float64 `field:"optional" json:"gpuReservation" yaml:"gpuReservation"`
	// The image pull policy for this container.
	// See: https://kubernetes.io/docs/concepts/containers/images/#updating-images
	//
	// Default: - `ALWAYS` if the `:latest` tag is specified, `IF_NOT_PRESENT` otherwise.
	//
	// Experimental.
	ImagePullPolicy ImagePullPolicy `field:"optional" json:"imagePullPolicy" yaml:"imagePullPolicy"`
	// The amount (in MiB) of memory to present to the container.
	//
	// If your container attempts to exceed the allocated memory, it will be terminated.
	//
	// Must be larger than 4 MiB.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required.
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html
	//
	// Default: - No memory limit.
	//
	// Experimental.
	MemoryLimit awscdk.Size `field:"optional" json:"memoryLimit" yaml:"memoryLimit"`
	// The soft limit (in MiB) of memory to reserve for the container.
	//
	// Your container will be given at least this much memory, but may consume more.
	//
	// Must be larger than 4 MiB.
	//
	// When system memory is under heavy contention, Docker attempts to keep the
	// container memory to this soft limit. However, your container can consume more
	// memory when it needs to, up to either the hard limit specified with the memory
	// parameter (if applicable), or all of the available memory on the container
	// instance, whichever comes first.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required.
	// If both are specified, then `memoryLimit` must be equal to `memoryReservation`.
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html
	//
	// Default: - No memory reserved.
	//
	// Experimental.
	MemoryReservation awscdk.Size `field:"optional" json:"memoryReservation" yaml:"memoryReservation"`
	// The name of this container.
	// Default: `'Default'`.
	//
	// Experimental.
	Name *string `field:"optional" json:"name" yaml:"name"`
	// If specified, gives this container elevated permissions on the host container instance.
	//
	// The level of permissions is similar to the root user permissions.
	//
	// This parameter maps to `privileged` policy in the Privileged pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems
	//
	// Default: false.
	//
	// Experimental.
	Privileged *bool `field:"optional" json:"privileged" yaml:"privileged"`
	// If specified, gives this container readonly access to its root file system.
	//
	// This parameter maps to `ReadOnlyRootFilesystem` policy in the Volumes and file systems pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems
	//
	// Default: false.
	//
	// Experimental.
	ReadonlyRootFilesystem *bool `field:"optional" json:"readonlyRootFilesystem" yaml:"readonlyRootFilesystem"`
	// If specified, the container is run as the specified group ID (`gid`).
	//
	// If this parameter isn't specified, the default is the group that's specified in the image metadata.
	// This parameter maps to `RunAsGroup` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: none.
	//
	// Experimental.
	RunAsGroup *float64 `field:"optional" json:"runAsGroup" yaml:"runAsGroup"`
	// If specified, the container is run as a user with a `uid` other than 0.
	//
	// Otherwise, no such rule is enforced.
	// This parameter maps to `RunAsUser` and `MustRunAsNonRoot` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: - the container is *not* required to run as a non-root user.
	//
	// Experimental.
	RunAsRoot *bool `field:"optional" json:"runAsRoot" yaml:"runAsRoot"`
	// If specified, this container is run as the specified user ID (`uid`).
	//
	// This parameter maps to `RunAsUser` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: - the user that is specified in the image metadata.
	//
	// Experimental.
	RunAsUser *float64 `field:"optional" json:"runAsUser" yaml:"runAsUser"`
	// The Volumes to mount to this container.
	//
	// Automatically added to the Pod.
	// See: https://kubernetes.io/docs/concepts/storage/volumes/
	//
	// Default: - no volumes.
	//
	// Experimental.
	Volumes *[]EksVolume `field:"optional" json:"volumes" yaml:"volumes"`
}

Props to configure an EksContainerDefinition.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

Experimental.

type EksJobDefinition

type EksJobDefinition interface {
	awscdk.Resource
	IEksJobDefinition
	IJobDefinition
	// The container this Job Definition will run.
	// Experimental.
	Container() EksContainerDefinition
	// The DNS Policy of the pod used by this Job Definition.
	// Experimental.
	DnsPolicy() DnsPolicy
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	// Experimental.
	Env() *awscdk.ResourceEnvironment
	// The ARN of this job definition.
	// Experimental.
	JobDefinitionArn() *string
	// The name of this job definition.
	// Experimental.
	JobDefinitionName() *string
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// Experimental.
	Parameters() *map[string]interface{}
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	// Experimental.
	PhysicalName() *string
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to the specified number of attempts.
	// Experimental.
	RetryAttempts() *float64
	// Defines the retry behavior for this job.
	// Experimental.
	RetryStrategies() *[]RetryStrategy
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Experimental.
	SchedulingPriority() *float64
	// The name of the service account that's used to run the container.
	//
	// Service accounts are the Kubernetes method of identification and authentication,
	// roughly analogous to IAM users.
	// Experimental.
	ServiceAccount() *string
	// The stack in which this resource is defined.
	// Experimental.
	Stack() awscdk.Stack
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Experimental.
	Timeout() awscdk.Duration
	// If specified, the Pod used by this Job Definition will use the host's network IP address.
	//
	// Otherwise, the Kubernetes pod networking model is enabled.
	// Most AWS Batch workloads are egress-only and don't require the overhead of IP allocation for each pod for incoming connections.
	// Experimental.
	UseHostNetwork() *bool
	// Add a RetryStrategy to this JobDefinition.
	// Experimental.
	AddRetryStrategy(strategy RetryStrategy)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	// Experimental.
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Experimental.
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	// Experimental.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	// Experimental.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

A JobDefinition that uses Eks orchestration.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

Experimental.

func NewEksJobDefinition

func NewEksJobDefinition(scope constructs.Construct, id *string, props *EksJobDefinitionProps) EksJobDefinition

Experimental.

type EksJobDefinitionProps

type EksJobDefinitionProps struct {
	// The name of this job definition.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	JobDefinitionName *string `field:"optional" json:"jobDefinitionName" yaml:"jobDefinitionName"`
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters
	//
	// Default: none.
	//
	// Experimental.
	Parameters *map[string]interface{} `field:"optional" json:"parameters" yaml:"parameters"`
	// The number of times to retry a job.
	//
	// The job is retried on failure up to this many times.
	// Default: 1.
	//
	// Experimental.
	RetryAttempts *float64 `field:"optional" json:"retryAttempts" yaml:"retryAttempts"`
	// Defines the retry behavior for this job.
	// Default: - no `RetryStrategy`.
	//
	// Experimental.
	RetryStrategies *[]RetryStrategy `field:"optional" json:"retryStrategies" yaml:"retryStrategies"`
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Default: none.
	//
	// Experimental.
	SchedulingPriority *float64 `field:"optional" json:"schedulingPriority" yaml:"schedulingPriority"`
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Default: - no timeout.
	//
	// Experimental.
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
	// The container this Job Definition will run.
	// Experimental.
	Container EksContainerDefinition `field:"required" json:"container" yaml:"container"`
	// The DNS Policy of the pod used by this Job Definition.
	// See: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
	//
	// Default: `DnsPolicy.CLUSTER_FIRST`
	//
	// Experimental.
	DnsPolicy DnsPolicy `field:"optional" json:"dnsPolicy" yaml:"dnsPolicy"`
	// The name of the service account that's used to run the container.
	//
	// Service accounts are the Kubernetes method of identification and authentication,
	// roughly analogous to IAM users.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html
	//
	// Default: - the default service account of the container.
	//
	// Experimental.
	ServiceAccount *string `field:"optional" json:"serviceAccount" yaml:"serviceAccount"`
	// If specified, the Pod used by this Job Definition will use the host's network IP address.
	//
	// Otherwise, the Kubernetes pod networking model is enabled.
	// Most AWS Batch workloads are egress-only and don't require the overhead of IP allocation for each pod for incoming connections.
	// See: https://kubernetes.io/docs/concepts/workloads/pods/#pod-networking
	//
	// Default: true.
	//
	// Experimental.
	UseHostNetwork *bool `field:"optional" json:"useHostNetwork" yaml:"useHostNetwork"`
}

Props for EksJobDefinition.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

Experimental.

type EksMachineImage

type EksMachineImage struct {
	// The machine image to use.
	// Default: - chosen by batch.
	//
	// Experimental.
	Image awsec2.IMachineImage `field:"optional" json:"image" yaml:"image"`
	// Tells Batch which instance type to launch this image on.
	// Default: - 'EKS_AL2' for non-gpu instances, 'EKS_AL2_NVIDIA' for gpu instances.
	//
	// Experimental.
	ImageType EksMachineImageType `field:"optional" json:"imageType" yaml:"imageType"`
}

A Batch MachineImage that is compatible with EKS.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import awsec2 "github.com/aws/aws-cdk-go/awscdk/awsec2"

var machineImage awsec2.IMachineImage

eksMachineImage := &EksMachineImage{
	Image: machineImage,
	ImageType: batch_alpha.EksMachineImageType_EKS_AL2,
}

Experimental.

type EksMachineImageType

type EksMachineImageType string

Maps the image to instance types. Experimental.

const (
	// Tells Batch that this machine image runs on non-GPU instances.
	// Experimental.
	EksMachineImageType_EKS_AL2 EksMachineImageType = "EKS_AL2"
	// Tells Batch that this machine image runs on GPU instances.
	// Experimental.
	EksMachineImageType_EKS_AL2_NVIDIA EksMachineImageType = "EKS_AL2_NVIDIA"
)

type EksVolume

type EksVolume interface {
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	// Experimental.
	ContainerPath() *string
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	// Experimental.
	Name() *string
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	// Experimental.
	Readonly() *bool
}

A Volume that can be mounted to a container supported by EKS.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

Experimental.

type EksVolumeOptions

type EksVolumeOptions struct {
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	// Experimental.
	Name *string `field:"required" json:"name" yaml:"name"`
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	// Experimental.
	MountPath *string `field:"optional" json:"mountPath" yaml:"mountPath"`
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	// Experimental.
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
}

Options to configure an EksVolume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"

eksVolumeOptions := &EksVolumeOptions{
	Name: jsii.String("name"),

	// the properties below are optional
	MountPath: jsii.String("mountPath"),
	Readonly: jsii.Boolean(false),
}

Experimental.

type EmptyDirMediumType

type EmptyDirMediumType string

What medium the volume will live in.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

Experimental.

const (
	// Use the disk storage of the node.
	//
	// Items written here will survive node reboots.
	// Experimental.
	EmptyDirMediumType_DISK EmptyDirMediumType = "DISK"
	// Use the `tmpfs` volume that is backed by RAM of the node.
	//
	// Items written here will *not* survive node reboots.
	// Experimental.
	EmptyDirMediumType_MEMORY EmptyDirMediumType = "MEMORY"
)

type EmptyDirVolume

type EmptyDirVolume interface {
	EksVolume
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	// Experimental.
	ContainerPath() *string
	// The storage type to use for this Volume.
	// Default: `EmptyDirMediumType.DISK`
	//
	// Experimental.
	Medium() EmptyDirMediumType
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	// Experimental.
	Name() *string
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	// Experimental.
	Readonly() *bool
	// The maximum size for this Volume.
	// Default: - no size limit.
	//
	// Experimental.
	SizeLimit() awscdk.Size
}

A Kubernetes EmptyDir volume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"

var size cdk.Size

emptyDirVolume := batch_alpha.NewEmptyDirVolume(&EmptyDirVolumeOptions{
	Name: jsii.String("name"),

	// the properties below are optional
	Medium: batch_alpha.EmptyDirMediumType_DISK,
	MountPath: jsii.String("mountPath"),
	Readonly: jsii.Boolean(false),
	SizeLimit: size,
})

See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

Experimental.

func EksVolume_EmptyDir

func EksVolume_EmptyDir(options *EmptyDirVolumeOptions) EmptyDirVolume

Creates a Kubernetes EmptyDir volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

Experimental.

func EmptyDirVolume_EmptyDir

func EmptyDirVolume_EmptyDir(options *EmptyDirVolumeOptions) EmptyDirVolume

Creates a Kubernetes EmptyDir volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

Experimental.

func HostPathVolume_EmptyDir

func HostPathVolume_EmptyDir(options *EmptyDirVolumeOptions) EmptyDirVolume

Creates a Kubernetes EmptyDir volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

Experimental.

func NewEmptyDirVolume

func NewEmptyDirVolume(options *EmptyDirVolumeOptions) EmptyDirVolume

Experimental.

func SecretPathVolume_EmptyDir

func SecretPathVolume_EmptyDir(options *EmptyDirVolumeOptions) EmptyDirVolume

Creates a Kubernetes EmptyDir volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

Experimental.

type EmptyDirVolumeOptions

type EmptyDirVolumeOptions struct {
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	// Experimental.
	Name *string `field:"required" json:"name" yaml:"name"`
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	// Experimental.
	MountPath *string `field:"optional" json:"mountPath" yaml:"mountPath"`
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	// Experimental.
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
	// The storage type to use for this Volume.
	// Default: `EmptyDirMediumType.DISK`
	//
	// Experimental.
	Medium EmptyDirMediumType `field:"optional" json:"medium" yaml:"medium"`
	// The maximum size for this Volume.
	// Default: - no size limit.
	//
	// Experimental.
	SizeLimit awscdk.Size `field:"optional" json:"sizeLimit" yaml:"sizeLimit"`
}

Options for a Kubernetes EmptyDir volume.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

Experimental.

type FairshareSchedulingPolicy

type FairshareSchedulingPolicy interface {
	awscdk.Resource
	IFairshareSchedulingPolicy
	ISchedulingPolicy
	// Used to calculate the percentage of the maximum available vCPU to reserve for share identifiers not present in the Queue.
	//
	// The percentage reserved is defined by the Scheduler as:
	// `(computeReservation/100)^ActiveFairShares` where `ActiveFairShares` is the number of active fair share identifiers.
	//
	// For example, a computeReservation value of 50 indicates that AWS Batch reserves 50% of the
	// maximum available vCPU if there's only one fair share identifier.
	// It reserves 25% if there are two fair share identifiers.
	// It reserves 12.5% if there are three fair share identifiers.
	//
	// A computeReservation value of 25 indicates that AWS Batch should reserve 25% of the
	// maximum available vCPU if there's only one fair share identifier,
	// 6.25% if there are two fair share identifiers,
	// and 1.56% if there are three fair share identifiers.
	// Experimental.
	ComputeReservation() *float64
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	// Experimental.
	Env() *awscdk.ResourceEnvironment
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	// Experimental.
	PhysicalName() *string
	// The arn of this scheduling policy.
	// Experimental.
	SchedulingPolicyArn() *string
	// The name of this scheduling policy.
	// Experimental.
	SchedulingPolicyName() *string
	// The amount of time to use to measure the usage of each job.
	//
	// The usage is used to calculate a fair share percentage for each fair share identifier currently in the Queue.
	// A value of zero (0) indicates that only current usage is measured.
	// The decay is linear and gives preference to newer jobs.
	//
	// The maximum supported value is 604800 seconds (1 week).
	// Experimental.
	ShareDecay() awscdk.Duration
	// The shares that this Scheduling Policy applies to.
	//
	// *Note*: It is possible to submit Jobs to the queue with Share Identifiers that
	// are not recognized by the Scheduling Policy.
	// Experimental.
	Shares() *[]*Share
	// The stack in which this resource is defined.
	// Experimental.
	Stack() awscdk.Stack
	// Add a share to this Fairshare SchedulingPolicy.
	// Experimental.
	AddShare(share *Share)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	// Experimental.
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Experimental.
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	// Experimental.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	// Experimental.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

Represents a Fairshare Scheduling Policy. Instructs the scheduler to allocate ComputeEnvironment vCPUs based on Job shareIdentifiers.

The Fairshare Scheduling Policy ensures that each share gets a certain amount of vCPUs. The scheduler does this by deciding how many Jobs of each share to schedule *relative to how many jobs of each share are currently being executed by the ComputeEnvironment*. The weight factors associated with each share determine the ratio of vCPUs allocated; see the readme for a more in-depth discussion of fairshare policies.

Example:

fairsharePolicy := batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"), &FairshareSchedulingPolicyProps{})

fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("A"),
	WeightFactor: jsii.Number(1),
})
fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("B"),
	WeightFactor: jsii.Number(1),
})
batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	SchedulingPolicy: fairsharePolicy,
})

Experimental.
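The weight factors above determine a ratio rather than an absolute amount. As a rough illustrative model (assuming the inverse relation described in the AWS Batch user guide, where a lower `weightFactor` receives proportionally more vCPUs), the split implied by a set of weight factors can be sketched in plain Go:

```go
package main

import "fmt"

// vcpuFractions is an illustrative model of a fairshare split: each
// share's allocation is taken proportional to 1/weightFactor, so a
// lower weight factor receives more vCPUs. It does not call the Batch
// scheduler; it only mirrors the documented ratio behavior.
func vcpuFractions(weights map[string]float64) map[string]float64 {
	total := 0.0
	for _, w := range weights {
		total += 1 / w
	}
	fractions := make(map[string]float64, len(weights))
	for id, w := range weights {
		fractions[id] = (1 / w) / total
	}
	return fractions
}

func main() {
	// Two shares with equal weight factors split the vCPUs evenly.
	fmt.Println(vcpuFractions(map[string]float64{"A": 1, "B": 1}))
}
```

Under this model, a share with `weightFactor` 0.125 would receive eight times the vCPUs of a share with `weightFactor` 1.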

func NewFairshareSchedulingPolicy

func NewFairshareSchedulingPolicy(scope constructs.Construct, id *string, props *FairshareSchedulingPolicyProps) FairshareSchedulingPolicy

Experimental.

type FairshareSchedulingPolicyProps

type FairshareSchedulingPolicyProps struct {
	// Used to calculate the percentage of the maximum available vCPU to reserve for share identifiers not present in the Queue.
	//
	// The percentage reserved is defined by the Scheduler as:
	// `(computeReservation/100)^ActiveFairShares` where `ActiveFairShares` is the number of active fair share identifiers.
	//
	// For example, a computeReservation value of 50 indicates that AWS Batch reserves 50% of the
	// maximum available vCPU if there's only one fair share identifier.
	// It reserves 25% if there are two fair share identifiers.
	// It reserves 12.5% if there are three fair share identifiers.
	//
	// A computeReservation value of 25 indicates that AWS Batch should reserve 25% of the
	// maximum available vCPU if there's only one fair share identifier,
	// 6.25% if there are two fair share identifiers,
	// and 1.56% if there are three fair share identifiers.
	// Default: - no vCPU is reserved.
	//
	// Experimental.
	ComputeReservation *float64 `field:"optional" json:"computeReservation" yaml:"computeReservation"`
	// The name of this SchedulingPolicy.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	SchedulingPolicyName *string `field:"optional" json:"schedulingPolicyName" yaml:"schedulingPolicyName"`
	// The amount of time to use to measure the usage of each job.
	//
	// The usage is used to calculate a fair share percentage for each fair share identifier currently in the Queue.
	// A value of zero (0) indicates that only current usage is measured.
	// The decay is linear and gives preference to newer jobs.
	//
	// The maximum supported value is 604800 seconds (1 week).
	// Default: - 0: only the current job usage is considered.
	//
	// Experimental.
	ShareDecay awscdk.Duration `field:"optional" json:"shareDecay" yaml:"shareDecay"`
	// The shares that this Scheduling Policy applies to.
	//
	// *Note*: It is possible to submit Jobs to the queue with Share Identifiers that
	// are not recognized by the Scheduling Policy.
	// Default: - no shares.
	//
	// Experimental.
	Shares *[]*Share `field:"optional" json:"shares" yaml:"shares"`
}

Fairshare SchedulingPolicy configuration.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

fairsharePolicy := batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"), &FairshareSchedulingPolicyProps{
	ShareDecay: cdk.Duration_Minutes(jsii.Number(5)),
})

Experimental.

type FargateComputeEnvironment

type FargateComputeEnvironment interface {
	awscdk.Resource
	IComputeEnvironment
	IFargateComputeEnvironment
	IManagedComputeEnvironment
	// The ARN of this compute environment.
	// Experimental.
	ComputeEnvironmentArn() *string
	// The name of the ComputeEnvironment.
	// Experimental.
	ComputeEnvironmentName() *string
	// The network connections associated with this resource.
	// Experimental.
	Connections() awsec2.Connections
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Experimental.
	Enabled() *bool
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	// Experimental.
	Env() *awscdk.ResourceEnvironment
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Experimental.
	MaxvCpus() *float64
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	// Experimental.
	PhysicalName() *string
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// See the AWS Batch documentation for the properties which require a replacement of the Compute Environment.
	// Experimental.
	ReplaceComputeEnvironment() *bool
	// The security groups this Compute Environment will launch instances in.
	// Experimental.
	SecurityGroups() *[]awsec2.ISecurityGroup
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Experimental.
	ServiceRole() awsiam.IRole
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Experimental.
	Spot() *bool
	// The stack in which this resource is defined.
	// Experimental.
	Stack() awscdk.Stack
	// TagManager to set, remove and format tags.
	// Experimental.
	Tags() awscdk.TagManager
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// Experimental.
	TerminateOnUpdate() *bool
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// If so,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// Experimental.
	UpdateTimeout() awscdk.Duration
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Experimental.
	UpdateToLatestImageVersion() *bool
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	// Experimental.
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Experimental.
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	// Experimental.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	// Experimental.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

A ManagedComputeEnvironment that uses ECS orchestration on Fargate instances.

Example:

var vpc awsec2.IVpc

sharedComputeEnv := batch.NewFargateComputeEnvironment(this, jsii.String("spotEnv"), &FargateComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
})
lowPriorityQueue := batch.NewJobQueue(this, jsii.String("LowPriorityQueue"), &JobQueueProps{
	Priority: jsii.Number(1),
})
highPriorityQueue := batch.NewJobQueue(this, jsii.String("HighPriorityQueue"), &JobQueueProps{
	Priority: jsii.Number(10),
})
lowPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))
highPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))

Experimental.

func NewFargateComputeEnvironment

func NewFargateComputeEnvironment(scope constructs.Construct, id *string, props *FargateComputeEnvironmentProps) FargateComputeEnvironment

Experimental.

type FargateComputeEnvironmentProps

type FargateComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	// Experimental.
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	// Experimental.
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// VPC in which this Compute Environment will launch Instances.
	// Experimental.
	Vpc awsec2.IVpc `field:"required" json:"vpc" yaml:"vpc"`
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to. Each vCPU is equivalent to 1024 CPU shares.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Default: 256.
	//
	// Experimental.
	MaxvCpus *float64 `field:"optional" json:"maxvCpus" yaml:"maxvCpus"`
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// Some properties require a replacement of the Compute Environment when changed.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
	//
	// Default: false.
	//
	// Experimental.
	ReplaceComputeEnvironment *bool `field:"optional" json:"replaceComputeEnvironment" yaml:"replaceComputeEnvironment"`
	// The security groups this Compute Environment will launch instances in.
	// Default: new security groups will be created.
	//
	// Experimental.
	SecurityGroups *[]awsec2.ISecurityGroup `field:"optional" json:"securityGroups" yaml:"securityGroups"`
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Default: false.
	//
	// Experimental.
	Spot *bool `field:"optional" json:"spot" yaml:"spot"`
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: false.
	//
	// Experimental.
	TerminateOnUpdate *bool `field:"optional" json:"terminateOnUpdate" yaml:"terminateOnUpdate"`
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// In that case,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: 30 minutes.
	//
	// Experimental.
	UpdateTimeout awscdk.Duration `field:"optional" json:"updateTimeout" yaml:"updateTimeout"`
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Default: true.
	//
	// Experimental.
	UpdateToLatestImageVersion *bool `field:"optional" json:"updateToLatestImageVersion" yaml:"updateToLatestImageVersion"`
	// The VPC Subnets this Compute Environment will launch instances in.
	// Default: new subnets will be created.
	//
	// Experimental.
	VpcSubnets *awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
}

Props for a FargateComputeEnvironment.

Example:

var vpc iVpc

sharedComputeEnv := batch.NewFargateComputeEnvironment(this, jsii.String("spotEnv"), &FargateComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
})
lowPriorityQueue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	Priority: jsii.Number(1),
})
highPriorityQueue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	Priority: jsii.Number(10),
})
lowPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))
highPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))

Experimental.

type HostPathVolume

type HostPathVolume interface {
	EksVolume
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	// Experimental.
	ContainerPath() *string
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	// Experimental.
	Name() *string
	// The path of the file or directory on the host to mount into containers on the pod.
	//
	// *Note*: HostPath Volumes present many security risks, and should be avoided when possible.
	// See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
	//
	// Experimental.
	Path() *string
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	// Experimental.
	Readonly() *bool
}

A Kubernetes HostPath volume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"

hostPathVolume := batch_alpha.NewHostPathVolume(&HostPathVolumeOptions{
	HostPath: jsii.String("hostPath"),
	Name: jsii.String("name"),

	// the properties below are optional
	MountPath: jsii.String("mountPath"),
	Readonly: jsii.Boolean(false),
})

See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

Experimental.

func EksVolume_HostPath

func EksVolume_HostPath(options *HostPathVolumeOptions) HostPathVolume

Creates a Kubernetes HostPath volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

Experimental.

func EmptyDirVolume_HostPath

func EmptyDirVolume_HostPath(options *HostPathVolumeOptions) HostPathVolume

Creates a Kubernetes HostPath volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

Experimental.

func HostPathVolume_HostPath

func HostPathVolume_HostPath(options *HostPathVolumeOptions) HostPathVolume

Creates a Kubernetes HostPath volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

Experimental.

func NewHostPathVolume

func NewHostPathVolume(options *HostPathVolumeOptions) HostPathVolume

Experimental.

func SecretPathVolume_HostPath

func SecretPathVolume_HostPath(options *HostPathVolumeOptions) HostPathVolume

Creates a Kubernetes HostPath volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

Experimental.

type HostPathVolumeOptions

type HostPathVolumeOptions struct {
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	// Experimental.
	Name *string `field:"required" json:"name" yaml:"name"`
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	// Experimental.
	MountPath *string `field:"optional" json:"mountPath" yaml:"mountPath"`
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	// Experimental.
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
	// The path of the file or directory on the host to mount into containers on the pod.
	//
	// *Note*: HostPath Volumes present many security risks, and should be avoided when possible.
	// See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
	//
	// Experimental.
	HostPath *string `field:"required" json:"hostPath" yaml:"hostPath"`
}

Options for a kubernetes HostPath volume.

Example:

var jobDefn eksJobDefinition

jobDefn.Container.AddVolume(batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
	Name: jsii.String("emptyDir"),
	MountPath: jsii.String("/Volumes/emptyDir"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_HostPath(&HostPathVolumeOptions{
	Name: jsii.String("hostPath"),
	HostPath: jsii.String("/sys"),
	MountPath: jsii.String("/Volumes/hostPath"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_Secret(&SecretPathVolumeOptions{
	Name: jsii.String("secret"),
	Optional: jsii.Boolean(true),
	MountPath: jsii.String("/Volumes/secret"),
	SecretName: jsii.String("mySecret"),
}))

See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

Experimental.

type HostVolume

type HostVolume interface {
	EcsVolume
	// The path on the container that this volume will be mounted to.
	// Experimental.
	ContainerPath() *string
	// The path on the host machine this container will have access to.
	// Experimental.
	HostPath() *string
	// The name of this volume.
	// Experimental.
	Name() *string
	// Whether or not the container has readonly access to this volume.
	// Default: false.
	//
	// Experimental.
	Readonly() *bool
}

Creates a Host volume.

This volume will persist on the host at the specified `hostPath`. If the `hostPath` is not specified, Docker will choose the host path. In this case, the data may not persist after the containers that use it stop running.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"

hostVolume := batch_alpha.NewHostVolume(&HostVolumeOptions{
	ContainerPath: jsii.String("containerPath"),
	Name: jsii.String("name"),

	// the properties below are optional
	HostPath: jsii.String("hostPath"),
	Readonly: jsii.Boolean(false),
})

Experimental.

func EcsVolume_Host

func EcsVolume_Host(options *HostVolumeOptions) HostVolume

Creates a Host volume.

This volume will persist on the host at the specified `hostPath`. If the `hostPath` is not specified, Docker will choose the host path. In this case, the data may not persist after the containers that use it stop running. Experimental.

func EfsVolume_Host

func EfsVolume_Host(options *HostVolumeOptions) HostVolume

Creates a Host volume.

This volume will persist on the host at the specified `hostPath`. If the `hostPath` is not specified, Docker will choose the host path. In this case, the data may not persist after the containers that use it stop running. Experimental.

func HostVolume_Host

func HostVolume_Host(options *HostVolumeOptions) HostVolume

Creates a Host volume.

This volume will persist on the host at the specified `hostPath`. If the `hostPath` is not specified, Docker will choose the host path. In this case, the data may not persist after the containers that use it stop running. Experimental.

func NewHostVolume

func NewHostVolume(options *HostVolumeOptions) HostVolume

Experimental.

type HostVolumeOptions

type HostVolumeOptions struct {
	// the path on the container where this volume is mounted.
	// Experimental.
	ContainerPath *string `field:"required" json:"containerPath" yaml:"containerPath"`
	// the name of this volume.
	// Experimental.
	Name *string `field:"required" json:"name" yaml:"name"`
	// if set, the container will have readonly access to the volume.
	// Default: false.
	//
	// Experimental.
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
	// The path on the host machine this container will have access to.
	// Default: - Docker will choose the host path.
	// The data may not persist after the containers that use it stop running.
	//
	// Experimental.
	HostPath *string `field:"optional" json:"hostPath" yaml:"hostPath"`
}

Options for configuring an ECS HostVolume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"

hostVolumeOptions := &HostVolumeOptions{
	ContainerPath: jsii.String("containerPath"),
	Name: jsii.String("name"),

	// the properties below are optional
	HostPath: jsii.String("hostPath"),
	Readonly: jsii.Boolean(false),
}

Experimental.

type IComputeEnvironment

type IComputeEnvironment interface {
	awscdk.IResource
	// The ARN of this compute environment.
	// Experimental.
	ComputeEnvironmentArn() *string
	// The name of the ComputeEnvironment.
	// Experimental.
	ComputeEnvironmentName() *string
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Experimental.
	Enabled() *bool
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	// Experimental.
	ServiceRole() awsiam.IRole
}

Represents a ComputeEnvironment. Experimental.

type IEcsContainerDefinition

type IEcsContainerDefinition interface {
	constructs.IConstruct
	// Add a Volume to this container.
	// Experimental.
	AddVolume(volume EcsVolume)
	// The command that's passed to the container.
	// See: https://docs.docker.com/engine/reference/builder/#cmd
	//
	// Experimental.
	Command() *[]*string
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	// Experimental.
	Cpu() *float64
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// Default: - no environment variables.
	//
	// Experimental.
	Environment() *map[string]*string
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html
	//
	// Experimental.
	ExecutionRole() awsiam.IRole
	// The image that this container will run.
	// Experimental.
	Image() awsecs.ContainerImage
	// The role that the container can assume.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
	//
	// Default: - no jobRole.
	//
	// Experimental.
	JobRole() awsiam.IRole
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// Default: none.
	//
	// Experimental.
	LinuxParameters() LinuxParameters
	// The configuration of the log driver.
	// Experimental.
	LogDriverConfig() *awsecs.LogDriverConfig
	// The memory hard limit present to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	// Experimental.
	Memory() awscdk.Size
	// Gives the container readonly access to its root filesystem.
	// Default: false.
	//
	// Experimental.
	ReadonlyRootFilesystem() *bool
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html
	//
	// Default: - no secrets.
	//
	// Experimental.
	Secrets() *map[string]Secret
	// The user name to use inside the container.
	// Default: - no user.
	//
	// Experimental.
	User() *string
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	// Default: - no volumes.
	//
	// Experimental.
	Volumes() *[]EcsVolume
}

A container that can be run with ECS orchestration. Experimental.
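As a sketch of how a concrete implementation of this interface is typically instantiated (the id `"myContainer"`, the sample image, and the sizing values below are illustrative placeholders, not part of this interface's contract):

```go
// Hypothetical values; image, cpu, and memory are the required properties.
containerDefn := batch.NewEcsEc2ContainerDefinition(this, jsii.String("myContainer"), &EcsEc2ContainerDefinitionProps{
	Image: ecs.ContainerImage_FromRegistry(jsii.String("amazonlinux")),
	Cpu: jsii.Number(256),
	Memory: awscdk.Size_Mebibytes(jsii.Number(2048)),
})
// AddVolume, from this interface, attaches the volume to the job definition automatically.
containerDefn.AddVolume(batch.EcsVolume_Host(&HostVolumeOptions{
	Name: jsii.String("myVolume"),
	ContainerPath: jsii.String("/Volumes/myVolume"),
}))
```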

type IEcsEc2ContainerDefinition

type IEcsEc2ContainerDefinition interface {
	IEcsContainerDefinition
	// Add a ulimit to this container.
	// Experimental.
	AddUlimit(ulimit *Ulimit)
	// The number of physical GPUs to reserve for the container.
	//
	// Make sure that the number of GPUs reserved for all containers in a job doesn't exceed
	// the number of available GPUs on the compute resource that the job is launched on.
	// Default: - no gpus.
	//
	// Experimental.
	Gpu() *float64
	// When this parameter is true, the container is given elevated permissions on the host container instance (similar to the root user).
	// Default: false.
	//
	// Experimental.
	Privileged() *bool
	// Limits to set for the user this docker container will run as.
	// Experimental.
	Ulimits() *[]*Ulimit
}

A container orchestrated by ECS that uses EC2 resources. Experimental.

type IEcsFargateContainerDefinition

type IEcsFargateContainerDefinition interface {
	IEcsContainerDefinition
	// Indicates whether the job has a public IP address.
	//
	// For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet
	// (for example, to pull container images), the private subnet requires a NAT gateway to be attached to route requests to the internet.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
	//
	// Default: false.
	//
	// Experimental.
	AssignPublicIp() *bool
	// The size for ephemeral storage.
	// Default: - 20 GiB.
	//
	// Experimental.
	EphemeralStorageSize() awscdk.Size
	// Which version of Fargate to use when running this container.
	// Default: LATEST.
	//
	// Experimental.
	FargatePlatformVersion() awsecs.FargatePlatformVersion
}

A container orchestrated by ECS that uses Fargate resources. Experimental.

type IEksContainerDefinition

type IEksContainerDefinition interface {
	constructs.IConstruct
	// Mount a Volume to this container.
	//
	// Automatically added to the Pod.
	// Experimental.
	AddVolume(volume EksVolume)
	// An array of arguments to the entrypoint.
	//
	// If this isn't specified, the CMD of the container image is used.
	// This corresponds to the args member in the Entrypoint portion of the Pod in Kubernetes.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to `"$(NAME1)"` and the `NAME1` environment variable doesn't exist,
	// the command string will remain `"$(NAME1)"`. `$$` is replaced with `$`, and the resulting string isn't expanded.
	// For example, `$$(VAR_NAME)` is passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists.
	// See: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
	//
	// Experimental.
	Args() *[]*string
	// The entrypoint for the container.
	//
	// This isn't run within a shell.
	// If this isn't specified, the `ENTRYPOINT` of the container image is used.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to `"$(NAME1)"` and the `NAME1` environment variable doesn't exist,
	// the command string will remain `"$(NAME1)."` `$$` is replaced with `$` and the resulting string isn't expanded.
	// For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists.
	//
	// The entrypoint can't be updated.
	// See: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint
	//
	// Experimental.
	Command() *[]*string
	// The hard limit of CPUs to present to this container. Must be an even multiple of 0.25.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No CPU limit.
	//
	// Experimental.
	CpuLimit() *float64
	// The soft limit of CPUs to reserve for the container. Must be an even multiple of 0.25.
	//
	// The container will be given at least this many CPUs, but may consume more.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No CPUs reserved.
	//
	// Experimental.
	CpuReservation() *float64
	// The environment variables to pass to this container.
	//
	// *Note*: Environment variables cannot start with "AWS_BATCH".
	// This naming convention is reserved for variables that AWS Batch sets.
	// Experimental.
	Env() *map[string]*string
	// The hard limit of GPUs to present to this container.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No GPU limit.
	//
	// Experimental.
	GpuLimit() *float64
	// The soft limit of GPUs to reserve for the container.
	//
	// The container will be given at least this many GPUs, but may consume more.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No GPUs reserved.
	//
	// Experimental.
	GpuReservation() *float64
	// The image that this container will run.
	// Experimental.
	Image() awsecs.ContainerImage
	// The image pull policy for this container.
	// See: https://kubernetes.io/docs/concepts/containers/images/#updating-images
	//
	// Default: - `ALWAYS` if the `:latest` tag is specified, `IF_NOT_PRESENT` otherwise.
	//
	// Experimental.
	ImagePullPolicy() ImagePullPolicy
	// The amount (in MiB) of memory to present to the container.
	//
	// If your container attempts to exceed the allocated memory, it will be terminated.
	//
	// Must be larger than 4 MiB.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required.
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html
	//
	// Default: - No memory limit.
	//
	// Experimental.
	MemoryLimit() awscdk.Size
	// The soft limit (in MiB) of memory to reserve for the container.
	//
	// Your container will be given at least this much memory, but may consume more.
	//
	// Must be larger than 4 MiB.
	//
	// When system memory is under heavy contention, Docker attempts to keep the
	// container memory to this soft limit. However, your container can consume more
	// memory when it needs to, up to either the hard limit specified with the memory
	// parameter (if applicable), or all of the available memory on the container
	// instance, whichever comes first.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required.
	// If both are specified, then `memoryLimit` must be equal to `memoryReservation`.
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html
	//
	// Default: - No memory reserved.
	//
	// Experimental.
	MemoryReservation() awscdk.Size
	// The name of this container.
	// Default: `'Default'`.
	//
	// Experimental.
	Name() *string
	// If specified, gives this container elevated permissions on the host container instance.
	//
	// The level of permissions are similar to the root user permissions.
	//
	// This parameter maps to `privileged` policy in the Privileged pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems
	//
	// Default: false.
	//
	// Experimental.
	Privileged() *bool
	// If specified, gives this container readonly access to its root file system.
	//
	// This parameter maps to `ReadOnlyRootFilesystem` policy in the Volumes and file systems pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems
	//
	// Default: false.
	//
	// Experimental.
	ReadonlyRootFilesystem() *bool
	// If specified, the container is run as the specified group ID (`gid`).
	//
	// If this parameter isn't specified, the default is the group that's specified in the image metadata.
	// This parameter maps to `RunAsGroup` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: none.
	//
	// Experimental.
	RunAsGroup() *float64
	// If specified, the container is run as a user with a `uid` other than 0.
	//
	// Otherwise, no such rule is enforced.
	// This parameter maps to `RunAsUser` and `MustRunAsNonRoot` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: - the container is *not* required to run as a non-root user.
	//
	// Experimental.
	RunAsRoot() *bool
	// If specified, this container is run as the specified user ID (`uid`).
	//
	// This parameter maps to `RunAsUser` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: - the user that is specified in the image metadata.
	//
	// Experimental.
	RunAsUser() *float64
	// The Volumes to mount to this container.
	//
	// Automatically added to the Pod.
	// See: https://kubernetes.io/docs/concepts/storage/volumes/
	//
	// Experimental.
	Volumes() *[]EksVolume
}

A container that can be run with EKS orchestration on EC2 resources. Experimental.
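A minimal sketch of creating an EKS container definition that satisfies this interface (the id, image, and limits are placeholders; at least one of the CPU and memory settings discussed above would normally be supplied):

```go
// Hypothetical names and limits for illustration only.
eksContainer := batch.NewEksContainerDefinition(this, jsii.String("myEksContainer"), &EksContainerDefinitionProps{
	Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
	CpuLimit: jsii.Number(2),
	MemoryLimit: awscdk.Size_Mebibytes(jsii.Number(2048)),
})
// AddVolume, from this interface, mounts the volume and adds it to the Pod.
eksContainer.AddVolume(batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
	Name: jsii.String("scratch"),
	MountPath: jsii.String("/tmp/scratch"),
}))
```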

type IEksJobDefinition

type IEksJobDefinition interface {
	IJobDefinition
	// The container this Job Definition will run.
	// Experimental.
	Container() EksContainerDefinition
	// The DNS Policy of the pod used by this Job Definition.
	// See: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
	//
	// Default: `DnsPolicy.CLUSTER_FIRST`
	//
	// Experimental.
	DnsPolicy() DnsPolicy
	// The name of the service account that's used to run the container.
	//
	// Service accounts are the Kubernetes method of identification and authentication,
	// roughly analogous to IAM users.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html
	//
	// Default: - the default service account of the container.
	//
	// Experimental.
	ServiceAccount() *string
	// If specified, the Pod used by this Job Definition will use the host's network IP address.
	//
	// Otherwise, the Kubernetes pod networking model is enabled.
	// Most AWS Batch workloads are egress-only and don't require the overhead of IP allocation for each pod for incoming connections.
	// See: https://kubernetes.io/docs/concepts/workloads/pods/#pod-networking
	//
	// Default: true.
	//
	// Experimental.
	UseHostNetwork() *bool
}

A JobDefinition that uses Eks orchestration. Experimental.

func EksJobDefinition_FromEksJobDefinitionArn

func EksJobDefinition_FromEksJobDefinitionArn(scope constructs.Construct, id *string, eksJobDefinitionArn *string) IEksJobDefinition

Import an EksJobDefinition by its arn. Experimental.
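For example (the ARN, account ID, and construct id below are placeholders):

```go
// Reference a job definition that was created outside this stack.
importedJobDefn := batch.EksJobDefinition_FromEksJobDefinitionArn(this, jsii.String("importedJobDefn"),
	jsii.String("arn:aws:batch:us-east-1:123456789012:job-definition/myJobDefn:1"))
```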

type IFairshareSchedulingPolicy

type IFairshareSchedulingPolicy interface {
	ISchedulingPolicy
	// Used to calculate the percentage of the maximum available vCPU to reserve for share identifiers not present in the Queue.
	//
	// The percentage reserved is defined by the Scheduler as:
	// `(computeReservation/100)^ActiveFairShares` where `ActiveFairShares` is the number of active fair share identifiers.
	//
	// For example, a computeReservation value of 50 indicates that AWS Batch reserves 50% of the
	// maximum available vCPU if there's only one fair share identifier.
	// It reserves 25% if there are two fair share identifiers.
	// It reserves 12.5% if there are three fair share identifiers.
	//
	// A computeReservation value of 25 indicates that AWS Batch should reserve 25% of the
	// maximum available vCPU if there's only one fair share identifier,
	// 6.25% if there are two fair share identifiers,
	// and 1.56% if there are three fair share identifiers.
	// Default: - no vCPU is reserved.
	//
	// Experimental.
	ComputeReservation() *float64
	// The amount of time to use to measure the usage of each job.
	//
	// The usage is used to calculate a fair share percentage for each fair share identifier currently in the Queue.
	// A value of zero (0) indicates that only current usage is measured.
	// The decay is linear and gives preference to newer jobs.
	//
	// The maximum supported value is 604800 seconds (1 week).
	// Default: - 0: only the current job usage is considered.
	//
	// Experimental.
	ShareDecay() awscdk.Duration
	// The shares that this Scheduling Policy applies to.
	//
	// *Note*: It is possible to submit Jobs to the queue with Share Identifiers that
	// are not recognized by the Scheduling Policy.
	// Experimental.
	Shares() *[]*Share
}

Represents a Fairshare Scheduling Policy. Instructs the scheduler to allocate ComputeEnvironment vCPUs based on Job shareIdentifiers.

The Fairshare Scheduling Policy ensures that each share gets a certain amount of vCPUs. It does this by deciding how many Jobs of each share to schedule *relative to how many jobs of each share are currently being executed by the ComputeEnvironment*. The weight factors associated with each share determine the ratio of vCPUs allocated; see the readme for a more in-depth discussion of fairshare policies. Experimental.

func FairshareSchedulingPolicy_FromFairshareSchedulingPolicyArn

func FairshareSchedulingPolicy_FromFairshareSchedulingPolicyArn(scope constructs.Construct, id *string, fairshareSchedulingPolicyArn *string) IFairshareSchedulingPolicy

Reference an existing Scheduling Policy by its ARN. Experimental.

type IFargateComputeEnvironment

type IFargateComputeEnvironment interface {
	IManagedComputeEnvironment
}

A ManagedComputeEnvironment that uses ECS orchestration on Fargate instances. Experimental.

func FargateComputeEnvironment_FromFargateComputeEnvironmentArn

func FargateComputeEnvironment_FromFargateComputeEnvironmentArn(scope constructs.Construct, id *string, fargateComputeEnvironmentArn *string) IFargateComputeEnvironment

Reference an existing FargateComputeEnvironment by its arn. Experimental.

type IJobDefinition

type IJobDefinition interface {
	awscdk.IResource
	// Add a RetryStrategy to this JobDefinition.
	// Experimental.
	AddRetryStrategy(strategy RetryStrategy)
	// The ARN of this job definition.
	// Experimental.
	JobDefinitionArn() *string
	// The name of this job definition.
	// Experimental.
	JobDefinitionName() *string
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters
	//
	// Default: none.
	//
	// Experimental.
	Parameters() *map[string]interface{}
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to the specified number of attempts.
	// Default: 1.
	//
	// Experimental.
	RetryAttempts() *float64
	// Defines the retry behavior for this job.
	// Default: - no `RetryStrategy`.
	//
	// Experimental.
	RetryStrategies() *[]RetryStrategy
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Default: none.
	//
	// Experimental.
	SchedulingPriority() *float64
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Default: - no timeout.
	//
	// Experimental.
	Timeout() awscdk.Duration
}

Represents a JobDefinition. Experimental.

func EcsJobDefinition_FromJobDefinitionArn

func EcsJobDefinition_FromJobDefinitionArn(scope constructs.Construct, id *string, jobDefinitionArn *string) IJobDefinition

Import a JobDefinition by its ARN. Experimental.

func MultiNodeJobDefinition_FromJobDefinitionArn

func MultiNodeJobDefinition_FromJobDefinitionArn(scope constructs.Construct, id *string, jobDefinitionArn *string) IJobDefinition

Refer to an existing JobDefinition by its ARN. Experimental.

type IJobQueue

type IJobQueue interface {
	awscdk.IResource
	// Add a `ComputeEnvironment` to this Queue.
	//
	// The Queue will prefer lower-order `ComputeEnvironment`s.
	// Experimental.
	AddComputeEnvironment(computeEnvironment IComputeEnvironment, order *float64)
	// The set of compute environments mapped to a job queue and their order relative to each other.
	//
	// The job scheduler uses this parameter to determine which compute environment runs a specific job.
	// Compute environments must be in the VALID state before you can associate them with a job queue.
	// You can associate up to three compute environments with a job queue.
	// All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT);
	// EC2 and Fargate compute environments can't be mixed.
	//
	// *Note*: All compute environments that are associated with a job queue must share the same architecture.
	// AWS Batch doesn't support mixing compute environment architecture types in a single job queue.
	// Experimental.
	ComputeEnvironments() *[]*OrderedComputeEnvironment
	// If the job queue is enabled, it is able to accept jobs.
	//
	// Otherwise, new jobs can't be added to the queue, but jobs already in the queue can finish.
	// Default: true.
	//
	// Experimental.
	Enabled() *bool
	// The ARN of this job queue.
	// Experimental.
	JobQueueArn() *string
	// The name of the job queue.
	//
	// It can be up to 128 characters long.
	// It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
	// Experimental.
	JobQueueName() *string
	// The priority of the job queue.
	//
	// Job queues with a higher priority are evaluated first when associated with the same compute environment.
	// Priority is determined in descending order.
	// For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1.
	// Experimental.
	Priority() *float64
	// The SchedulingPolicy for this JobQueue.
	//
	// Instructs the Scheduler how to schedule different jobs.
	// Default: - no scheduling policy.
	//
	// Experimental.
	SchedulingPolicy() ISchedulingPolicy
}

Represents a JobQueue. Experimental.
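The descending-order priority rule documented for `Priority()` above can be sketched as a plain-Go comparison. This is illustrative only; `queue` and `evaluationOrder` are hypothetical stand-ins, not part of the construct API.

```go
package main

import (
	"fmt"
	"sort"
)

// queue is a toy stand-in for a JobQueue: just a name and its priority.
type queue struct {
	Name     string
	Priority int
}

// evaluationOrder sorts queues so higher-priority queues come first, matching
// Batch's descending-order evaluation of queues sharing a compute environment.
func evaluationOrder(qs []queue) []queue {
	sorted := append([]queue(nil), qs...)
	sort.SliceStable(sorted, func(i, j int) bool {
		return sorted[i].Priority > sorted[j].Priority
	})
	return sorted
}

func main() {
	qs := []queue{{"low", 1}, {"high", 10}, {"mid", 5}}
	fmt.Println(evaluationOrder(qs)) // highest priority first
}
```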

func JobQueue_FromJobQueueArn

func JobQueue_FromJobQueueArn(scope constructs.Construct, id *string, jobQueueArn *string) IJobQueue

Refer to an existing JobQueue by its ARN. Experimental.

type IManagedComputeEnvironment

type IManagedComputeEnvironment interface {
	IComputeEnvironment
	awsec2.IConnectable
	awscdk.ITaggable
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Experimental.
	MaxvCpus() *float64
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// Some properties require replacement of the Compute Environment when updated.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
	//
	// Experimental.
	ReplaceComputeEnvironment() *bool
	// The security groups this Compute Environment will launch instances in.
	// Experimental.
	SecurityGroups() *[]awsec2.ISecurityGroup
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Default: false.
	//
	// Experimental.
	Spot() *bool
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: false.
	//
	// Experimental.
	TerminateOnUpdate() *bool
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// In that case,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: 30 minutes.
	//
	// Experimental.
	UpdateTimeout() awscdk.Duration
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Default: true.
	//
	// Experimental.
	UpdateToLatestImageVersion() *bool
	// The VPC Subnets this Compute Environment will launch instances in.
	// Experimental.
	VpcSubnets() *awsec2.SubnetSelection
}

Represents a Managed ComputeEnvironment.

Batch will provision EC2 Instances to meet the requirements of the jobs executing in this ComputeEnvironment. Experimental.
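The `MaxvCpus` soft-cap behavior documented on this interface can be shown numerically. The sketch below is plain Go; `effectiveVCpuCeiling` is a hypothetical helper, not a library call.

```go
package main

import "fmt"

// effectiveVCpuCeiling models the overshoot rule for MaxvCpus: with
// BEST_FIT_PROGRESSIVE, SPOT_CAPACITY_OPTIMIZED, or BEST_FIT with Spot
// instances, Batch may exceed maxvCpus by at most one instance, so the
// worst-case ceiling is maxvCpus plus the vCPU count of the largest
// configured instance type.
func effectiveVCpuCeiling(maxvCpus float64, instanceVCpus []float64) float64 {
	largest := 0.0
	for _, v := range instanceVCpus {
		if v > largest {
			largest = v
		}
	}
	return maxvCpus + largest
}

func main() {
	// maxvCpus of 256 with a 96-vCPU instance type in the mix:
	// the environment can briefly reach up to 352 vCPUs.
	fmt.Println(effectiveVCpuCeiling(256, []float64{4, 16, 96}))
}
```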

type IManagedEc2EcsComputeEnvironment

type IManagedEc2EcsComputeEnvironment interface {
	IManagedComputeEnvironment
	// Add an instance class to this compute environment.
	// Experimental.
	AddInstanceClass(instanceClass awsec2.InstanceClass)
	// Add an instance type to this compute environment.
	// Experimental.
	AddInstanceType(instanceType awsec2.InstanceType)
	// The allocation strategy to use if not enough instances of the best fitting instance type can be allocated.
	// Default: - `BEST_FIT_PROGRESSIVE` if not using Spot instances,
	// `SPOT_CAPACITY_OPTIMIZED` if using Spot instances.
	//
	// Experimental.
	AllocationStrategy() AllocationStrategy
	// Configure which AMIs this Compute Environment can launch.
	//
	// Leave this `undefined` to allow Batch to choose the latest AMIs it supports for each instance that it launches.
	// Default: - ECS_AL2 compatible AMI ids for non-GPU instances, ECS_AL2_NVIDIA compatible AMI ids for GPU instances.
	//
	// Experimental.
	Images() *[]*EcsMachineImage
	// The instance classes that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Batch will automatically choose the size.
	// Experimental.
	InstanceClasses() *[]awsec2.InstanceClass
	// The execution Role that instances launched by this Compute Environment will use.
	// Default: - a role will be created.
	//
	// Experimental.
	InstanceRole() awsiam.IRole
	// The instance types that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Experimental.
	InstanceTypes() *[]awsec2.InstanceType
	// The Launch Template that this Compute Environment will use to provision EC2 Instances.
	//
	// *Note*: if `securityGroups` is specified on both your
	// launch template and this Compute Environment, the
	// `securityGroups` on the Compute Environment override the
	// ones on the launch template.
	// Default: no launch template.
	//
	// Experimental.
	LaunchTemplate() awsec2.ILaunchTemplate
	// The minimum vCPUs that an environment should maintain, even if the compute environment is DISABLED.
	// Default: 0.
	//
	// Experimental.
	MinvCpus() *float64
	// The EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to this Compute Environment,
	// you should consider creating a cluster placement group and associate it with your compute resources.
	// This keeps your multi-node parallel job on a logical grouping of instances
	// within a single Availability Zone with high network flow potential.
	// See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
	//
	// Default: - no placement group.
	//
	// Experimental.
	PlacementGroup() awsec2.IPlacementGroup
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be
	// less than 20% of the current On-Demand price for that Instance.
	// You always pay the lowest market price and never more than your maximum percentage.
	// For most use cases, Batch recommends leaving this field empty.
	// Default: - 100%.
	//
	// Experimental.
	SpotBidPercentage() *float64
	// The service-linked role that Spot Fleet needs to launch instances on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html
	//
	// Default: - a new Role will be created.
	//
	// Experimental.
	SpotFleetRole() awsiam.IRole
	// Whether or not to use Batch's optimal instance type.
	//
	// The optimal instance type is equivalent to adding the
	// C4, M4, and R4 instance classes. You can specify other instance classes
	// (of the same architecture) in addition to the optimal instance classes.
	// Default: true.
	//
	// Experimental.
	UseOptimalInstanceClasses() *bool
}

A ManagedComputeEnvironment that uses ECS orchestration on EC2 instances. Experimental.
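The `SpotBidPercentage` semantics documented above translate into a simple price ceiling. This sketch is plain Go; `maxSpotPrice` is a hypothetical helper used only to illustrate the arithmetic.

```go
package main

import "fmt"

// maxSpotPrice returns the highest Spot price at which Batch will launch an
// instance type, given its On-Demand price and the environment's
// spotBidPercentage. E.g. with a 20% bid, Spot capacity is launched only
// while the Spot price is below 20% of the On-Demand price.
func maxSpotPrice(onDemandPrice, bidPercentage float64) float64 {
	return onDemandPrice * bidPercentage / 100
}

func main() {
	// A $0.40/hr On-Demand type with a 20% bid: Spot launches only below $0.08/hr.
	fmt.Println(maxSpotPrice(0.40, 20))
}
```

Note that, regardless of the ceiling, you always pay the current Spot market price, never your bid.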

func ManagedEc2EcsComputeEnvironment_FromManagedEc2EcsComputeEnvironmentArn

func ManagedEc2EcsComputeEnvironment_FromManagedEc2EcsComputeEnvironmentArn(scope constructs.Construct, id *string, managedEc2EcsComputeEnvironmentArn *string) IManagedEc2EcsComputeEnvironment

Refer to an existing ComputeEnvironment by its ARN. Experimental.

type ISchedulingPolicy

type ISchedulingPolicy interface {
	awscdk.IResource
	// The arn of this scheduling policy.
	// Experimental.
	SchedulingPolicyArn() *string
	// The name of this scheduling policy.
	// Experimental.
	SchedulingPolicyName() *string
}

Represents a Scheduling Policy.

Scheduling Policies tell the Batch Job Scheduler how to schedule incoming jobs. Experimental.

type IUnmanagedComputeEnvironment

type IUnmanagedComputeEnvironment interface {
	IComputeEnvironment
	// The vCPUs this Compute Environment provides. Used only by the scheduler to schedule jobs in `Queue`s that use `FairshareSchedulingPolicy`s.
	//
	// **If this parameter is not provided on a fairshare queue, no capacity is reserved**;
	// that is, the `FairshareSchedulingPolicy` is ignored.
	// Experimental.
	UnmanagedvCPUs() *float64
}

Represents an UnmanagedComputeEnvironment.

Batch will not provision instances on your behalf in this ComputeEnvironment. Experimental.

func UnmanagedComputeEnvironment_FromUnmanagedComputeEnvironmentArn

func UnmanagedComputeEnvironment_FromUnmanagedComputeEnvironmentArn(scope constructs.Construct, id *string, unmanagedComputeEnvironmentArn *string) IUnmanagedComputeEnvironment

Import an UnmanagedComputeEnvironment by its ARN. Experimental.

type ImagePullPolicy

type ImagePullPolicy string

Determines when the image is pulled from the registry to launch a container. Experimental.

const (
	// Every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest.
	//
	// If the kubelet has a container image with that exact digest cached locally,
	// the kubelet uses its cached image; otherwise, the kubelet pulls the image with the resolved digest,
	// and uses that image to launch the container.
	// See: https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier
	//
	// Experimental.
	ImagePullPolicy_ALWAYS ImagePullPolicy = "ALWAYS"
	// The image is pulled only if it is not already present locally.
	// Experimental.
	ImagePullPolicy_IF_NOT_PRESENT ImagePullPolicy = "IF_NOT_PRESENT"
	// The kubelet does not try fetching the image.
	//
	// If the image is somehow already present locally,
	// the kubelet attempts to start the container; otherwise, startup fails.
	// See pre-pulled images for more details.
	// See: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
	//
	// Experimental.
	ImagePullPolicy_NEVER ImagePullPolicy = "NEVER"
)
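The three policies differ only in when the kubelet consults the registry and whether it tolerates a missing local image. The decision can be summarized in a small plain-Go sketch (illustrative only; the local type and `shouldPull` helper are hypothetical, not part of this package):

```go
package main

import "fmt"

// ImagePullPolicy mirrors the string constants documented above.
type ImagePullPolicy string

const (
	Always       ImagePullPolicy = "ALWAYS"
	IfNotPresent ImagePullPolicy = "IF_NOT_PRESENT"
	Never        ImagePullPolicy = "NEVER"
)

// shouldPull sketches the kubelet behavior described above: ALWAYS re-resolves
// the name to a digest on every launch and pulls only when that exact digest
// is not cached; IF_NOT_PRESENT pulls only when no local copy exists; NEVER
// never pulls (startup fails if the image is absent).
func shouldPull(policy ImagePullPolicy, digestCachedLocally bool) bool {
	switch policy {
	case Always, IfNotPresent:
		return !digestCachedLocally
	case Never:
		return false
	}
	return false
}

func main() {
	fmt.Println(shouldPull(Never, false))       // NEVER: no pull, launch would fail
	fmt.Println(shouldPull(IfNotPresent, true)) // cached copy suffices
}
```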

type JobDefinitionProps

type JobDefinitionProps struct {
	// The name of this job definition.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	JobDefinitionName *string `field:"optional" json:"jobDefinitionName" yaml:"jobDefinitionName"`
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters
	//
	// Default: none.
	//
	// Experimental.
	Parameters *map[string]interface{} `field:"optional" json:"parameters" yaml:"parameters"`
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to the specified number of attempts.
	// Default: 1.
	//
	// Experimental.
	RetryAttempts *float64 `field:"optional" json:"retryAttempts" yaml:"retryAttempts"`
	// Defines the retry behavior for this job.
	// Default: - no `RetryStrategy`.
	//
	// Experimental.
	RetryStrategies *[]RetryStrategy `field:"optional" json:"retryStrategies" yaml:"retryStrategies"`
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Default: none.
	//
	// Experimental.
	SchedulingPriority *float64 `field:"optional" json:"schedulingPriority" yaml:"schedulingPriority"`
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Default: - no timeout.
	//
	// Experimental.
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
}

Props common to all JobDefinitions.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"
import jsii "github.com/aws/jsii-runtime-go"

var parameters interface{}
var retryStrategy batch_alpha.RetryStrategy

jobDefinitionProps := &batch_alpha.JobDefinitionProps{
	JobDefinitionName: jsii.String("jobDefinitionName"),
	Parameters: &map[string]interface{}{
		"parametersKey": parameters,
	},
	RetryAttempts: jsii.Number(123),
	RetryStrategies: &[]batch_alpha.RetryStrategy{
		retryStrategy,
	},
	SchedulingPriority: jsii.Number(123),
	Timeout: cdk.Duration_Minutes(jsii.Number(30)),
}

Experimental.

type JobQueue

type JobQueue interface {
	awscdk.Resource
	IJobQueue
	// The set of compute environments mapped to a job queue and their order relative to each other.
	//
	// The job scheduler uses this parameter to determine which compute environment runs a specific job.
	// Compute environments must be in the VALID state before you can associate them with a job queue.
	// You can associate up to three compute environments with a job queue.
	// All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT);
	// EC2 and Fargate compute environments can't be mixed.
	//
	// *Note*: All compute environments that are associated with a job queue must share the same architecture.
	// AWS Batch doesn't support mixing compute environment architecture types in a single job queue.
	// Experimental.
	ComputeEnvironments() *[]*OrderedComputeEnvironment
	// If the job queue is enabled, it is able to accept jobs.
	//
	// Otherwise, new jobs can't be added to the queue, but jobs already in the queue can finish.
	// Experimental.
	Enabled() *bool
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	// Experimental.
	Env() *awscdk.ResourceEnvironment
	// The ARN of this job queue.
	// Experimental.
	JobQueueArn() *string
	// The name of the job queue.
	//
	// It can be up to 128 characters long.
	// It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
	// Experimental.
	JobQueueName() *string
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	// Experimental.
	PhysicalName() *string
	// The priority of the job queue.
	//
	// Job queues with a higher priority are evaluated first when associated with the same compute environment.
	// Priority is determined in descending order.
	// For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1.
	// Experimental.
	Priority() *float64
	// The SchedulingPolicy for this JobQueue.
	//
	// Instructs the Scheduler how to schedule different jobs.
	// Experimental.
	SchedulingPolicy() ISchedulingPolicy
	// The stack in which this resource is defined.
	// Experimental.
	Stack() awscdk.Stack
	// Add a `ComputeEnvironment` to this Queue.
	//
	// The Queue will prefer lower-order `ComputeEnvironment`s.
	// Experimental.
	AddComputeEnvironment(computeEnvironment IComputeEnvironment, order *float64)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	// Experimental.
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Experimental.
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	// Experimental.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	// Experimental.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

JobQueues can receive Jobs, which are removed from the queue when sent to the linked ComputeEnvironment(s) to be executed.

Jobs exit the queue in FIFO order unless a `SchedulingPolicy` is linked.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"
import iam "github.com/aws/aws-cdk-go/awscdk/awsiam"

var vpc awsec2.IVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: &[]*OrderedComputeEnvironment{
		{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

Experimental.

func NewJobQueue

func NewJobQueue(scope constructs.Construct, id *string, props *JobQueueProps) JobQueue

Experimental.

type JobQueueProps

type JobQueueProps struct {
	// The set of compute environments mapped to a job queue and their order relative to each other.
	//
	// The job scheduler uses this parameter to determine which compute environment runs a specific job.
	// Compute environments must be in the VALID state before you can associate them with a job queue.
	// You can associate up to three compute environments with a job queue.
	// All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT);
	// EC2 and Fargate compute environments can't be mixed.
	//
	// *Note*: All compute environments that are associated with a job queue must share the same architecture.
	// AWS Batch doesn't support mixing compute environment architecture types in a single job queue.
	// Default: none.
	//
	// Experimental.
	ComputeEnvironments *[]*OrderedComputeEnvironment `field:"optional" json:"computeEnvironments" yaml:"computeEnvironments"`
	// If the job queue is enabled, it is able to accept jobs.
	//
	// Otherwise, new jobs can't be added to the queue, but jobs already in the queue can finish.
	// Default: true.
	//
	// Experimental.
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The name of the job queue.
	//
	// It can be up to 128 characters long.
	// It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
	// Default: - no name.
	//
	// Experimental.
	JobQueueName *string `field:"optional" json:"jobQueueName" yaml:"jobQueueName"`
	// The priority of the job queue.
	//
	// Job queues with a higher priority are evaluated first when associated with the same compute environment.
	// Priority is determined in descending order.
	// For example, a job queue with a priority of 10 is given scheduling preference over a job queue with a priority of 1.
	// Default: 1.
	//
	// Experimental.
	Priority *float64 `field:"optional" json:"priority" yaml:"priority"`
	// The SchedulingPolicy for this JobQueue.
	//
	// Instructs the Scheduler how to schedule different jobs.
	// Default: - no scheduling policy.
	//
	// Experimental.
	SchedulingPolicy ISchedulingPolicy `field:"optional" json:"schedulingPolicy" yaml:"schedulingPolicy"`
}

Props to configure a JobQueue.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"
import iam "github.com/aws/aws-cdk-go/awscdk/awsiam"

var vpc awsec2.IVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: &[]*OrderedComputeEnvironment{
		{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

Experimental.

type LinuxParameters

type LinuxParameters interface {
	constructs.Construct
	// Device mounts.
	// Experimental.
	Devices() *[]*Device
	// Whether the init process is enabled.
	// Experimental.
	InitProcessEnabled() *bool
	// The max swap memory.
	// Experimental.
	MaxSwap() awscdk.Size
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// The shared memory size (in MiB).
	//
	// Not valid for Fargate launch type.
	// Experimental.
	SharedMemorySize() awscdk.Size
	// The swappiness behavior.
	// Experimental.
	Swappiness() *float64
	// TmpFs mounts.
	// Experimental.
	Tmpfs() *[]*Tmpfs
	// Adds one or more host devices to a container.
	// Experimental.
	AddDevices(device ...*Device)
	// Specifies the container path, mount options, and size (in MiB) of the tmpfs mount for a container.
	//
	// Only works with EC2 launch type.
	// Experimental.
	AddTmpfs(tmpfs ...*Tmpfs)
	// Renders the Linux parameters to the Batch version of this resource, which does not have 'capabilities' and requires tmpfs.containerPath to be defined.
	// Experimental.
	RenderLinuxParameters() *awsbatch.CfnJobDefinition_LinuxParametersProperty
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

Linux-specific options that are applied to the container.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"

var size cdk.Size

linuxParameters := batch_alpha.NewLinuxParameters(this, jsii.String("MyLinuxParameters"), &LinuxParametersProps{
	InitProcessEnabled: jsii.Boolean(false),
	MaxSwap: size,
	SharedMemorySize: size,
	Swappiness: jsii.Number(123),
})

Experimental.

func NewLinuxParameters

func NewLinuxParameters(scope constructs.Construct, id *string, props *LinuxParametersProps) LinuxParameters

Constructs a new instance of the LinuxParameters class. Experimental.

type LinuxParametersProps

type LinuxParametersProps struct {
	// Specifies whether to run an init process inside the container that forwards signals and reaps processes.
	// Default: false.
	//
	// Experimental.
	InitProcessEnabled *bool `field:"optional" json:"initProcessEnabled" yaml:"initProcessEnabled"`
	// The total amount of swap memory a container can use.
	//
	// This parameter
	// will be translated to the --memory-swap option to docker run.
	//
	// This parameter is only supported when you are using the EC2 launch type.
	// Accepted values are positive integers.
	// Default: No swap.
	//
	// Experimental.
	MaxSwap awscdk.Size `field:"optional" json:"maxSwap" yaml:"maxSwap"`
	// The value for the size of the /dev/shm volume.
	// Default: No shared memory.
	//
	// Experimental.
	SharedMemorySize awscdk.Size `field:"optional" json:"sharedMemorySize" yaml:"sharedMemorySize"`
	// This allows you to tune a container's memory swappiness behavior.
	//
	// This parameter
	// maps to the --memory-swappiness option to docker run. The swappiness relates
	// to the kernel's tendency to swap memory. A value of 0 will cause swapping to
	// not happen unless absolutely necessary. A value of 100 will cause pages to
	// be swapped very aggressively.
	//
	// This parameter is only supported when you are using the EC2 launch type.
	// Accepted values are whole numbers between 0 and 100. If a value is not
	// specified for maxSwap then this parameter is ignored.
	// Default: 60.
	//
	// Experimental.
	Swappiness *float64 `field:"optional" json:"swappiness" yaml:"swappiness"`
}

The properties for defining Linux-specific options that are applied to the container.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"

var size cdk.Size

linuxParametersProps := &LinuxParametersProps{
	InitProcessEnabled: jsii.Boolean(false),
	MaxSwap: size,
	SharedMemorySize: size,
	Swappiness: jsii.Number(123),
}

Experimental.
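The constraints documented for `Swappiness` and `MaxSwap` above (a whole number between 0 and 100, ignored unless `maxSwap` is set) can be captured in a small validation sketch. This is plain Go; `validateSwappiness` is a hypothetical helper, not part of the construct library.

```go
package main

import (
	"fmt"
	"math"
)

// validateSwappiness applies the documented rules: swappiness must be a whole
// number between 0 and 100, and it is only meaningful when maxSwap is set.
func validateSwappiness(swappiness float64, maxSwapSet bool) error {
	if !maxSwapSet {
		// Without maxSwap, Batch ignores swappiness entirely.
		return fmt.Errorf("swappiness is ignored unless maxSwap is also specified")
	}
	if swappiness < 0 || swappiness > 100 || swappiness != math.Trunc(swappiness) {
		return fmt.Errorf("swappiness must be a whole number between 0 and 100, got %v", swappiness)
	}
	return nil
}

func main() {
	fmt.Println(validateSwappiness(60, true))  // <nil>: 60 is the documented default
	fmt.Println(validateSwappiness(150, true)) // rejected: out of range
}
```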

type ManagedComputeEnvironmentProps

type ManagedComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	// Experimental.
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	// Experimental.
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// VPC in which this Compute Environment will launch Instances.
	// Experimental.
	Vpc awsec2.IVpc `field:"required" json:"vpc" yaml:"vpc"`
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to. Each vCPU is equivalent to 1024 CPU shares.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Default: 256.
	//
	// Experimental.
	MaxvCpus *float64 `field:"optional" json:"maxvCpus" yaml:"maxvCpus"`
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// The properties that require a replacement of the Compute Environment are listed in the link below.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
	//
	// Default: false.
	//
	// Experimental.
	ReplaceComputeEnvironment *bool `field:"optional" json:"replaceComputeEnvironment" yaml:"replaceComputeEnvironment"`
	// The security groups this Compute Environment will launch instances in.
	// Default: new security groups will be created.
	//
	// Experimental.
	SecurityGroups *[]awsec2.ISecurityGroup `field:"optional" json:"securityGroups" yaml:"securityGroups"`
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Default: false.
	//
	// Experimental.
	Spot *bool `field:"optional" json:"spot" yaml:"spot"`
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: false.
	//
	// Experimental.
	TerminateOnUpdate *bool `field:"optional" json:"terminateOnUpdate" yaml:"terminateOnUpdate"`
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// In that case,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: 30 minutes.
	//
	// Experimental.
	UpdateTimeout awscdk.Duration `field:"optional" json:"updateTimeout" yaml:"updateTimeout"`
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Default: true.
	//
	// Experimental.
	UpdateToLatestImageVersion *bool `field:"optional" json:"updateToLatestImageVersion" yaml:"updateToLatestImageVersion"`
	// The VPC Subnets this Compute Environment will launch instances in.
	// Default: new subnets will be created.
	//
	// Experimental.
	VpcSubnets *awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
}

Props for a ManagedComputeEnvironment.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var role role
var securityGroup securityGroup
var subnet subnet
var subnetFilter subnetFilter
var vpc vpc

managedComputeEnvironmentProps := &ManagedComputeEnvironmentProps{
	Vpc: vpc,

	// the properties below are optional
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	MaxvCpus: jsii.Number(123),
	ReplaceComputeEnvironment: jsii.Boolean(false),
	SecurityGroups: []iSecurityGroup{
		securityGroup,
	},
	ServiceRole: role,
	Spot: jsii.Boolean(false),
	TerminateOnUpdate: jsii.Boolean(false),
	UpdateTimeout: cdk.Duration_Minutes(jsii.Number(30)),
	UpdateToLatestImageVersion: jsii.Boolean(false),
	VpcSubnets: &SubnetSelection{
		AvailabilityZones: []*string{
			jsii.String("availabilityZones"),
		},
		OnePerAz: jsii.Boolean(false),
		SubnetFilters: []*subnetFilter{
			subnetFilter,
		},
		SubnetGroupName: jsii.String("subnetGroupName"),
		Subnets: []iSubnet{
			subnet,
		},
		SubnetType: awscdk.Aws_ec2.SubnetType_PRIVATE_ISOLATED,
	},
}

Experimental.
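The `MaxvCpus` documentation above notes that each vCPU is equivalent to 1024 CPU shares. As a minimal sketch of that conversion (the helper name is illustrative, not part of the Batch API; the factor comes from the doc comment above):

```go
package main

import "fmt"

// cpuShares converts a vCPU count to CPU shares, using the
// 1 vCPU == 1024 shares equivalence noted in the MaxvCpus docs.
func cpuShares(vCpus float64) float64 {
	return vCpus * 1024
}

func main() {
	// The default MaxvCpus of 256 corresponds to 262144 CPU shares.
	fmt.Println(cpuShares(256))
}
```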

type ManagedEc2EcsComputeEnvironment

type ManagedEc2EcsComputeEnvironment interface {
	awscdk.Resource
	IComputeEnvironment
	IManagedComputeEnvironment
	IManagedEc2EcsComputeEnvironment
	// The allocation strategy to use if not enough instances of the best fitting instance type can be allocated.
	// Experimental.
	AllocationStrategy() AllocationStrategy
	// The ARN of this compute environment.
	// Experimental.
	ComputeEnvironmentArn() *string
	// The name of the ComputeEnvironment.
	// Experimental.
	ComputeEnvironmentName() *string
	// The network connections associated with this resource.
	// Experimental.
	Connections() awsec2.Connections
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Experimental.
	Enabled() *bool
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	// Experimental.
	Env() *awscdk.ResourceEnvironment
	// Configure which AMIs this Compute Environment can launch.
	//
	// Leave this `undefined` to allow Batch to choose the latest AMIs it supports for each instance that it launches.
	// Experimental.
	Images() *[]*EcsMachineImage
	// The instance classes that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Batch will automatically choose the instance size.
	// Experimental.
	InstanceClasses() *[]awsec2.InstanceClass
	// The execution Role that instances launched by this Compute Environment will use.
	// Experimental.
	InstanceRole() awsiam.IRole
	// The instance types that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Experimental.
	InstanceTypes() *[]awsec2.InstanceType
	// The Launch Template that this Compute Environment will use to provision EC2 Instances.
	//
	// *Note*: if `securityGroups` is specified on both your
	// launch template and this Compute Environment, the
	// `securityGroups` on the Compute Environment override the
	// ones on the launch template.
	// Experimental.
	LaunchTemplate() awsec2.ILaunchTemplate
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Experimental.
	MaxvCpus() *float64
	// The minimum vCPUs that an environment should maintain, even if the compute environment is DISABLED.
	// Experimental.
	MinvCpus() *float64
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	// Experimental.
	PhysicalName() *string
	// The EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to this Compute Environment,
	// you should consider creating a cluster placement group and associate it with your compute resources.
	// This keeps your multi-node parallel job on a logical grouping of instances
	// within a single Availability Zone with high network flow potential.
	// Experimental.
	PlacementGroup() awsec2.IPlacementGroup
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// Certain properties require a replacement of the Compute Environment.
	// Experimental.
	ReplaceComputeEnvironment() *bool
	// The security groups this Compute Environment will launch instances in.
	// Experimental.
	SecurityGroups() *[]awsec2.ISecurityGroup
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Experimental.
	ServiceRole() awsiam.IRole
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Experimental.
	Spot() *bool
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be
	// less than 20% of the current On-Demand price for that Instance.
	// You always pay the lowest market price and never more than your maximum percentage.
	// For most use cases, Batch recommends leaving this field empty.
	// Experimental.
	SpotBidPercentage() *float64
	// The service-linked role that Spot Fleet needs to launch instances on your behalf.
	// Experimental.
	SpotFleetRole() awsiam.IRole
	// The stack in which this resource is defined.
	// Experimental.
	Stack() awscdk.Stack
	// TagManager to set, remove and format tags.
	// Experimental.
	Tags() awscdk.TagManager
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// Experimental.
	TerminateOnUpdate() *bool
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// In that case,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// Experimental.
	UpdateTimeout() awscdk.Duration
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Experimental.
	UpdateToLatestImageVersion() *bool
	// Add an instance class to this compute environment.
	// Experimental.
	AddInstanceClass(instanceClass awsec2.InstanceClass)
	// Add an instance type to this compute environment.
	// Experimental.
	AddInstanceType(instanceType awsec2.InstanceType)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	// Experimental.
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Experimental.
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	// Experimental.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	// Experimental.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

A ManagedComputeEnvironment that uses ECS orchestration on EC2 instances.

Example:

var computeEnv iManagedEc2EcsComputeEnvironment
vpc := ec2.NewVpc(this, jsii.String("VPC"))
computeEnv.AddInstanceClass(ec2.InstanceClass_M5AD)
// Or, specify it on the constructor:
batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceClasses: []instanceClass{
		ec2.InstanceClass_R4,
	},
})

Experimental.
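The `SpotBidPercentage` documentation above defines a ceiling relative to the On-Demand price: with a bid percentage of 20, instances launch only while the Spot price is below 20% of On-Demand. That ceiling can be sketched as (the helper name and sample prices are illustrative, not part of the Batch API):

```go
package main

import "fmt"

// maxSpotPrice returns the highest Spot price accepted for an
// instance type, given its On-Demand price and the bid percentage
// described in the SpotBidPercentage docs (20 means 20% of On-Demand).
func maxSpotPrice(onDemandPrice, bidPercentage float64) float64 {
	return onDemandPrice * bidPercentage / 100
}

func main() {
	// With a 20% bid percentage and a $100/hr On-Demand price,
	// the Spot price must stay below $20/hr.
	fmt.Println(maxSpotPrice(100, 20))
}
```

Note that, per the docs above, you always pay the market price, never your maximum; Batch recommends leaving the field empty for most use cases.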

func NewManagedEc2EcsComputeEnvironment

func NewManagedEc2EcsComputeEnvironment(scope constructs.Construct, id *string, props *ManagedEc2EcsComputeEnvironmentProps) ManagedEc2EcsComputeEnvironment

Experimental.

type ManagedEc2EcsComputeEnvironmentProps

type ManagedEc2EcsComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	// Experimental.
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	// Experimental.
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// VPC in which this Compute Environment will launch Instances.
	// Experimental.
	Vpc awsec2.IVpc `field:"required" json:"vpc" yaml:"vpc"`
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to. Each vCPU is equivalent to 1024 CPU shares.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Default: 256.
	//
	// Experimental.
	MaxvCpus *float64 `field:"optional" json:"maxvCpus" yaml:"maxvCpus"`
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// The properties that require a replacement of the Compute Environment are listed in the link below.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
	//
	// Default: false.
	//
	// Experimental.
	ReplaceComputeEnvironment *bool `field:"optional" json:"replaceComputeEnvironment" yaml:"replaceComputeEnvironment"`
	// The security groups this Compute Environment will launch instances in.
	// Default: new security groups will be created.
	//
	// Experimental.
	SecurityGroups *[]awsec2.ISecurityGroup `field:"optional" json:"securityGroups" yaml:"securityGroups"`
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Default: false.
	//
	// Experimental.
	Spot *bool `field:"optional" json:"spot" yaml:"spot"`
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: false.
	//
	// Experimental.
	TerminateOnUpdate *bool `field:"optional" json:"terminateOnUpdate" yaml:"terminateOnUpdate"`
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// In that case,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: 30 minutes.
	//
	// Experimental.
	UpdateTimeout awscdk.Duration `field:"optional" json:"updateTimeout" yaml:"updateTimeout"`
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Default: true.
	//
	// Experimental.
	UpdateToLatestImageVersion *bool `field:"optional" json:"updateToLatestImageVersion" yaml:"updateToLatestImageVersion"`
	// The VPC Subnets this Compute Environment will launch instances in.
	// Default: new subnets will be created.
	//
	// Experimental.
	VpcSubnets *awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
	// The allocation strategy to use if not enough instances of the best fitting instance type can be allocated.
	// Default: - `BEST_FIT_PROGRESSIVE` if not using Spot instances,
	// `SPOT_CAPACITY_OPTIMIZED` if using Spot instances.
	//
	// Experimental.
	AllocationStrategy AllocationStrategy `field:"optional" json:"allocationStrategy" yaml:"allocationStrategy"`
	// Configure which AMIs this Compute Environment can launch.
	//
	// If you specify this property with only `image` specified, then the
	// `imageType` will default to `ECS_AL2`. *If your image needs GPU resources,
	// specify `ECS_AL2_NVIDIA`; otherwise, the instances will not be able to properly
	// join the ComputeEnvironment*.
	// Default: - ECS_AL2 for non-GPU instances, ECS_AL2_NVIDIA for GPU instances.
	//
	// Experimental.
	Images *[]*EcsMachineImage `field:"optional" json:"images" yaml:"images"`
	// The instance classes that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Batch will automatically choose the instance size.
	// Default: - the instances Batch considers will be used (currently C4, M4, and R4).
	//
	// Experimental.
	InstanceClasses *[]awsec2.InstanceClass `field:"optional" json:"instanceClasses" yaml:"instanceClasses"`
	// The execution Role that instances launched by this Compute Environment will use.
	// Default: - a role will be created.
	//
	// Experimental.
	InstanceRole awsiam.IRole `field:"optional" json:"instanceRole" yaml:"instanceRole"`
	// The instance types that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Default: - the instances Batch considers will be used (currently C4, M4, and R4).
	//
	// Experimental.
	InstanceTypes *[]awsec2.InstanceType `field:"optional" json:"instanceTypes" yaml:"instanceTypes"`
	// The Launch Template that this Compute Environment will use to provision EC2 Instances.
	//
	// *Note*: if `securityGroups` is specified on both your
	// launch template and this Compute Environment, the
	// `securityGroups` on the Compute Environment override the
	// ones on the launch template.
	// Default: no launch template.
	//
	// Experimental.
	LaunchTemplate awsec2.ILaunchTemplate `field:"optional" json:"launchTemplate" yaml:"launchTemplate"`
	// The minimum vCPUs that an environment should maintain, even if the compute environment is DISABLED.
	// Default: 0.
	//
	// Experimental.
	MinvCpus *float64 `field:"optional" json:"minvCpus" yaml:"minvCpus"`
	// The EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to this Compute Environment,
	// you should consider creating a cluster placement group and associate it with your compute resources.
	// This keeps your multi-node parallel job on a logical grouping of instances
	// within a single Availability Zone with high network flow potential.
	// See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
	//
	// Default: - no placement group.
	//
	// Experimental.
	PlacementGroup awsec2.IPlacementGroup `field:"optional" json:"placementGroup" yaml:"placementGroup"`
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be
	// less than 20% of the current On-Demand price for that Instance.
	// You always pay the lowest market price and never more than your maximum percentage.
	// For most use cases, Batch recommends leaving this field empty.
	//
	// Implies `spot == true` if set.
	// Default: 100%.
	//
	// Experimental.
	SpotBidPercentage *float64 `field:"optional" json:"spotBidPercentage" yaml:"spotBidPercentage"`
	// The service-linked role that Spot Fleet needs to launch instances on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html
	//
	// Default: - a new role will be created.
	//
	// Experimental.
	SpotFleetRole awsiam.IRole `field:"optional" json:"spotFleetRole" yaml:"spotFleetRole"`
	// Whether or not to use Batch's optimal instance type.
	//
	// The optimal instance type is equivalent to adding the
	// C4, M4, and R4 instance classes. You can specify other instance classes
	// (of the same architecture) in addition to the optimal instance classes.
	// Default: true.
	//
	// Experimental.
	UseOptimalInstanceClasses *bool `field:"optional" json:"useOptimalInstanceClasses" yaml:"useOptimalInstanceClasses"`
}

Props for a ManagedEc2EcsComputeEnvironment.

Example:

var computeEnv iManagedEc2EcsComputeEnvironment
vpc := ec2.NewVpc(this, jsii.String("VPC"))
computeEnv.AddInstanceClass(ec2.InstanceClass_M5AD)
// Or, specify it on the constructor:
batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceClasses: []instanceClass{
		ec2.InstanceClass_R4,
	},
})

Experimental.
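The `AllocationStrategy` default described above depends on whether Spot instances are used: `BEST_FIT_PROGRESSIVE` without Spot, `SPOT_CAPACITY_OPTIMIZED` with Spot. That selection can be sketched as (strategy names are from the docs above; the helper itself is illustrative, not the CDK's implementation):

```go
package main

import "fmt"

// defaultAllocationStrategy mirrors the documented default:
// SPOT_CAPACITY_OPTIMIZED when Spot instances are used,
// BEST_FIT_PROGRESSIVE otherwise.
func defaultAllocationStrategy(spot bool) string {
	if spot {
		return "SPOT_CAPACITY_OPTIMIZED"
	}
	return "BEST_FIT_PROGRESSIVE"
}

func main() {
	fmt.Println(defaultAllocationStrategy(false)) // BEST_FIT_PROGRESSIVE
	fmt.Println(defaultAllocationStrategy(true))  // SPOT_CAPACITY_OPTIMIZED
}
```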

type ManagedEc2EksComputeEnvironment

type ManagedEc2EksComputeEnvironment interface {
	awscdk.Resource
	IComputeEnvironment
	IManagedComputeEnvironment
	// The allocation strategy to use if not enough instances of the best fitting instance type can be allocated.
	// Experimental.
	AllocationStrategy() AllocationStrategy
	// The ARN of this compute environment.
	// Experimental.
	ComputeEnvironmentArn() *string
	// The name of the ComputeEnvironment.
	// Experimental.
	ComputeEnvironmentName() *string
	// The network connections associated with this resource.
	// Experimental.
	Connections() awsec2.Connections
	// The cluster that backs this Compute Environment. Required for Compute Environments running Kubernetes jobs.
	//
	// Please ensure that you have followed the steps at
	//
	// https://docs.aws.amazon.com/batch/latest/userguide/getting-started-eks.html
	//
	// before attempting to deploy a `ManagedEc2EksComputeEnvironment` that uses this cluster.
	// If you do not follow the steps in the link, the deployment will fail with a message that the
	// compute environment did not stabilize.
	// Experimental.
	EksCluster() awseks.ICluster
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Experimental.
	Enabled() *bool
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	// Experimental.
	Env() *awscdk.ResourceEnvironment
	// Configure which AMIs this Compute Environment can launch.
	// Experimental.
	Images() *[]*EksMachineImage
	// The instance classes that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Experimental.
	InstanceClasses() *[]awsec2.InstanceClass
	// The execution Role that instances launched by this Compute Environment will use.
	// Experimental.
	InstanceRole() awsiam.IRole
	// The instance types that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Experimental.
	InstanceTypes() *[]awsec2.InstanceType
	// The namespace of the Cluster.
	//
	// Cannot be 'default', start with 'kube-', or be longer than 64 characters.
	// Experimental.
	KubernetesNamespace() *string
	// The Launch Template that this Compute Environment will use to provision EC2 Instances.
	//
	// *Note*: if `securityGroups` is specified on both your
	// launch template and this Compute Environment, **the
	// `securityGroup`s on the Compute Environment override the
	// ones on the launch template.
	// Experimental.
	LaunchTemplate() awsec2.ILaunchTemplate
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Experimental.
	MaxvCpus() *float64
	// The minimum vCPUs that an environment should maintain, even if the compute environment is DISABLED.
	// Experimental.
	MinvCpus() *float64
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	// Experimental.
	PhysicalName() *string
	// The EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to this Compute Environment,
	// you should consider creating a cluster placement group and associate it with your compute resources.
	// This keeps your multi-node parallel job on a logical grouping of instances
	// within a single Availability Zone with high network flow potential.
	// Experimental.
	PlacementGroup() awsec2.IPlacementGroup
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// Certain properties require a replacement of the Compute Environment.
	// Experimental.
	ReplaceComputeEnvironment() *bool
	// The security groups this Compute Environment will launch instances in.
	// Experimental.
	SecurityGroups() *[]awsec2.ISecurityGroup
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Experimental.
	ServiceRole() awsiam.IRole
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Experimental.
	Spot() *bool
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be
	// less than 20% of the current On-Demand price for that Instance.
	// You always pay the lowest market price and never more than your maximum percentage.
	// For most use cases, Batch recommends leaving this field empty.
	//
	// Implies `spot == true` if set.
	// Experimental.
	SpotBidPercentage() *float64
	// The stack in which this resource is defined.
	// Experimental.
	Stack() awscdk.Stack
	// TagManager to set, remove and format tags.
	// Experimental.
	Tags() awscdk.TagManager
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// Experimental.
	TerminateOnUpdate() *bool
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// In that case, when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// Experimental.
	UpdateTimeout() awscdk.Duration
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Experimental.
	UpdateToLatestImageVersion() *bool
	// Add an instance class to this compute environment.
	// Experimental.
	AddInstanceClass(instanceClass awsec2.InstanceClass)
	// Add an instance type to this compute environment.
	// Experimental.
	AddInstanceType(instanceType awsec2.InstanceType)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	// Experimental.
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Experimental.
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	// Experimental.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	// Experimental.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

A ManagedComputeEnvironment that uses EKS orchestration on EC2 instances.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"
import awscdk "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster
var instanceType instanceType
var launchTemplate launchTemplate
var machineImage iMachineImage
var placementGroup placementGroup
var role role
var securityGroup securityGroup
var subnet subnet
var subnetFilter subnetFilter
var vpc vpc

managedEc2EksComputeEnvironment := batch_alpha.NewManagedEc2EksComputeEnvironment(this, jsii.String("MyManagedEc2EksComputeEnvironment"), &ManagedEc2EksComputeEnvironmentProps{
	EksCluster: cluster,
	KubernetesNamespace: jsii.String("kubernetesNamespace"),
	Vpc: vpc,

	// the properties below are optional
	AllocationStrategy: batch_alpha.AllocationStrategy_BEST_FIT,
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	Images: []eksMachineImage{
		&eksMachineImage{
			Image: machineImage,
			ImageType: batch_alpha.EksMachineImageType_EKS_AL2,
		},
	},
	InstanceClasses: []instanceClass{
		awscdk.Aws_ec2.InstanceClass_STANDARD3,
	},
	InstanceRole: role,
	InstanceTypes: []*instanceType{
		instanceType,
	},
	LaunchTemplate: launchTemplate,
	MaxvCpus: jsii.Number(123),
	MinvCpus: jsii.Number(123),
	PlacementGroup: placementGroup,
	ReplaceComputeEnvironment: jsii.Boolean(false),
	SecurityGroups: []iSecurityGroup{
		securityGroup,
	},
	ServiceRole: role,
	Spot: jsii.Boolean(false),
	SpotBidPercentage: jsii.Number(123),
	TerminateOnUpdate: jsii.Boolean(false),
	UpdateTimeout: cdk.Duration_Minutes(jsii.Number(30)),
	UpdateToLatestImageVersion: jsii.Boolean(false),
	UseOptimalInstanceClasses: jsii.Boolean(false),
	VpcSubnets: &SubnetSelection{
		AvailabilityZones: []*string{
			jsii.String("availabilityZones"),
		},
		OnePerAz: jsii.Boolean(false),
		SubnetFilters: []*subnetFilter{
			subnetFilter,
		},
		SubnetGroupName: jsii.String("subnetGroupName"),
		Subnets: []iSubnet{
			subnet,
		},
		SubnetType: awscdk.Aws_ec2.SubnetType_PRIVATE_ISOLATED,
	},
})

Experimental.
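The `MinvCpus` and `MaxvCpus` accessors above bound how far Batch scales this environment. As a rough pure-Go illustration of that bound (the `clampVCpus` helper is hypothetical, not part of this package, and ignores the caveat that some allocation strategies may overshoot `maxvCpus` by one instance):

```go
package main

import "fmt"

// clampVCpus constrains a desired vCPU count to the [minvCpus, maxvCpus]
// range a ManagedComputeEnvironment scales within. This is a simplification:
// some allocation strategies may exceed maxvCpus by one instance.
func clampVCpus(desired, minvCpus, maxvCpus float64) float64 {
	if desired < minvCpus {
		return minvCpus
	}
	if desired > maxvCpus {
		return maxvCpus
	}
	return desired
}

func main() {
	// With minvCpus=0 and the default maxvCpus=256, a demand for 300 vCPUs is capped.
	fmt.Println(clampVCpus(300, 0, 256)) // 256
}
```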

func NewManagedEc2EksComputeEnvironment

func NewManagedEc2EksComputeEnvironment(scope constructs.Construct, id *string, props *ManagedEc2EksComputeEnvironmentProps) ManagedEc2EksComputeEnvironment

Experimental.

type ManagedEc2EksComputeEnvironmentProps

type ManagedEc2EksComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	// Experimental.
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provisioning instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	// Experimental.
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// VPC in which this Compute Environment will launch Instances.
	// Experimental.
	Vpc awsec2.IVpc `field:"required" json:"vpc" yaml:"vpc"`
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to. Each vCPU is equivalent to 1024 CPU shares.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with `AllocationStrategy.BEST_FIT_PROGRESSIVE`,
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Default: 256.
	//
	// Experimental.
	MaxvCpus *float64 `field:"optional" json:"maxvCpus" yaml:"maxvCpus"`
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// Some properties require a replacement of the Compute Environment when changed;
	// see the AWS Batch documentation for the list.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
	//
	// Default: false.
	//
	// Experimental.
	ReplaceComputeEnvironment *bool `field:"optional" json:"replaceComputeEnvironment" yaml:"replaceComputeEnvironment"`
	// The security groups this Compute Environment will launch instances in.
	// Default: new security groups will be created.
	//
	// Experimental.
	SecurityGroups *[]awsec2.ISecurityGroup `field:"optional" json:"securityGroups" yaml:"securityGroups"`
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Default: false.
	//
	// Experimental.
	Spot *bool `field:"optional" json:"spot" yaml:"spot"`
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: false.
	//
	// Experimental.
	TerminateOnUpdate *bool `field:"optional" json:"terminateOnUpdate" yaml:"terminateOnUpdate"`
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// In that case, when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: 30 minutes.
	//
	// Experimental.
	UpdateTimeout awscdk.Duration `field:"optional" json:"updateTimeout" yaml:"updateTimeout"`
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Default: true.
	//
	// Experimental.
	UpdateToLatestImageVersion *bool `field:"optional" json:"updateToLatestImageVersion" yaml:"updateToLatestImageVersion"`
	// The VPC Subnets this Compute Environment will launch instances in.
	// Default: new subnets will be created.
	//
	// Experimental.
	VpcSubnets *awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
	// The cluster that backs this Compute Environment. Required for Compute Environments running Kubernetes jobs.
	//
	// Please ensure that you have followed the steps at
	//
	// https://docs.aws.amazon.com/batch/latest/userguide/getting-started-eks.html
	//
	// before attempting to deploy a `ManagedEc2EksComputeEnvironment` that uses this cluster.
	// If you do not follow the steps in the link, the deployment will fail with a message that the
	// compute environment did not stabilize.
	// Experimental.
	EksCluster awseks.ICluster `field:"required" json:"eksCluster" yaml:"eksCluster"`
	// The namespace of the Cluster.
	// Experimental.
	KubernetesNamespace *string `field:"required" json:"kubernetesNamespace" yaml:"kubernetesNamespace"`
	// The allocation strategy to use if not enough instances of the best fitting instance type can be allocated.
	// Default: - `BEST_FIT_PROGRESSIVE` if not using Spot instances,
	// `SPOT_CAPACITY_OPTIMIZED` if using Spot instances.
	//
	// Experimental.
	AllocationStrategy AllocationStrategy `field:"optional" json:"allocationStrategy" yaml:"allocationStrategy"`
	// Configure which AMIs this Compute Environment can launch.
	// Default: - If `imageKubernetesVersion` is specified, EKS_AL2 for non-GPU instances
	// and EKS_AL2_NVIDIA for GPU instances; otherwise, ECS_AL2 for non-GPU instances
	// and ECS_AL2_NVIDIA for GPU instances.
	//
	// Experimental.
	Images *[]*EksMachineImage `field:"optional" json:"images" yaml:"images"`
	// The instance classes that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Batch will automatically choose the instance size.
	// Default: - the instances Batch considers optimal will be used (currently C4, M4, and R4).
	//
	// Experimental.
	InstanceClasses *[]awsec2.InstanceClass `field:"optional" json:"instanceClasses" yaml:"instanceClasses"`
	// The execution Role that instances launched by this Compute Environment will use.
	// Default: - a role will be created.
	//
	// Experimental.
	InstanceRole awsiam.IRole `field:"optional" json:"instanceRole" yaml:"instanceRole"`
	// The instance types that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Default: - the instances Batch considers optimal will be used (currently C4, M4, and R4).
	//
	// Experimental.
	InstanceTypes *[]awsec2.InstanceType `field:"optional" json:"instanceTypes" yaml:"instanceTypes"`
	// The Launch Template that this Compute Environment will use to provision EC2 Instances.
	//
	// *Note*: if `securityGroups` is specified on both your
	// launch template and this Compute Environment, **the
	// `securityGroup`s on the Compute Environment override the
	// ones on the launch template.**
	// Default: - no launch template.
	//
	// Experimental.
	LaunchTemplate awsec2.ILaunchTemplate `field:"optional" json:"launchTemplate" yaml:"launchTemplate"`
	// The minimum vCPUs that an environment should maintain, even if the compute environment is DISABLED.
	// Default: 0.
	//
	// Experimental.
	MinvCpus *float64 `field:"optional" json:"minvCpus" yaml:"minvCpus"`
	// The EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to this Compute Environment,
	// you should consider creating a cluster placement group and associate it with your compute resources.
	// This keeps your multi-node parallel job on a logical grouping of instances
	// within a single Availability Zone with high network flow potential.
	// See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
	//
	// Default: - no placement group.
	//
	// Experimental.
	PlacementGroup awsec2.IPlacementGroup `field:"optional" json:"placementGroup" yaml:"placementGroup"`
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be
	// less than 20% of the current On-Demand price for that Instance.
	// You always pay the lowest market price and never more than your maximum percentage.
	// For most use cases, Batch recommends leaving this field empty.
	//
	// Implies `spot == true` if set.
	// Default: - 100%.
	//
	// Experimental.
	SpotBidPercentage *float64 `field:"optional" json:"spotBidPercentage" yaml:"spotBidPercentage"`
	// Whether or not to use Batch's optimal instance type.
	//
	// The optimal instance type is equivalent to adding the
	// C4, M4, and R4 instance classes. You can specify other instance classes
	// (of the same architecture) in addition to the optimal instance classes.
	// Default: true.
	//
	// Experimental.
	UseOptimalInstanceClasses *bool `field:"optional" json:"useOptimalInstanceClasses" yaml:"useOptimalInstanceClasses"`
}

Props for a ManagedEc2EksComputeEnvironment.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"
import awscdk "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster
var instanceType instanceType
var launchTemplate launchTemplate
var machineImage iMachineImage
var placementGroup placementGroup
var role role
var securityGroup securityGroup
var subnet subnet
var subnetFilter subnetFilter
var vpc vpc

managedEc2EksComputeEnvironmentProps := &ManagedEc2EksComputeEnvironmentProps{
	EksCluster: cluster,
	KubernetesNamespace: jsii.String("kubernetesNamespace"),
	Vpc: vpc,

	// the properties below are optional
	AllocationStrategy: batch_alpha.AllocationStrategy_BEST_FIT,
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	Images: []eksMachineImage{
		&eksMachineImage{
			Image: machineImage,
			ImageType: batch_alpha.EksMachineImageType_EKS_AL2,
		},
	},
	InstanceClasses: []instanceClass{
		awscdk.Aws_ec2.InstanceClass_STANDARD3,
	},
	InstanceRole: role,
	InstanceTypes: []*instanceType{
		instanceType,
	},
	LaunchTemplate: launchTemplate,
	MaxvCpus: jsii.Number(123),
	MinvCpus: jsii.Number(123),
	PlacementGroup: placementGroup,
	ReplaceComputeEnvironment: jsii.Boolean(false),
	SecurityGroups: []iSecurityGroup{
		securityGroup,
	},
	ServiceRole: role,
	Spot: jsii.Boolean(false),
	SpotBidPercentage: jsii.Number(123),
	TerminateOnUpdate: jsii.Boolean(false),
	UpdateTimeout: cdk.Duration_Minutes(jsii.Number(30)),
	UpdateToLatestImageVersion: jsii.Boolean(false),
	UseOptimalInstanceClasses: jsii.Boolean(false),
	VpcSubnets: &SubnetSelection{
		AvailabilityZones: []*string{
			jsii.String("availabilityZones"),
		},
		OnePerAz: jsii.Boolean(false),
		SubnetFilters: []*subnetFilter{
			subnetFilter,
		},
		SubnetGroupName: jsii.String("subnetGroupName"),
		Subnets: []iSubnet{
			subnet,
		},
		SubnetType: awscdk.Aws_ec2.SubnetType_PRIVATE_ISOLATED,
	},
}

Experimental.
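`SpotBidPercentage` above caps the Spot price relative to On-Demand. A hypothetical helper (not part of this package) makes the arithmetic concrete:

```go
package main

import "fmt"

// maxSpotPrice returns the highest Spot price at which instances are launched,
// given the On-Demand price for the instance type and a spotBidPercentage cap.
func maxSpotPrice(onDemandPrice, bidPercentage float64) float64 {
	return onDemandPrice * bidPercentage / 100
}

func main() {
	// With a 20% cap, an instance whose On-Demand price is $0.10/hr
	// is only launched while the Spot price is below $0.02/hr.
	fmt.Printf("%.4f\n", maxSpotPrice(0.10, 20))
}
```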

type MultiNodeContainer

type MultiNodeContainer struct {
	// The container that this node range will run.
	// Experimental.
	Container IEcsContainerDefinition `field:"required" json:"container" yaml:"container"`
	// The index of the last node to run this container.
	//
	// The container is run on all nodes in the range [startNode, endNode] (inclusive).
	// Experimental.
	EndNode *float64 `field:"required" json:"endNode" yaml:"endNode"`
	// The index of the first node to run this container.
	//
	// The container is run on all nodes in the range [startNode, endNode] (inclusive).
	// Experimental.
	StartNode *float64 `field:"required" json:"startNode" yaml:"startNode"`
}

Runs the container on nodes [startNode, endNode].

Example:

import "github.com/aws/aws-cdk-go/awscdk"

multiNodeJob := batch.NewMultiNodeJobDefinition(this, jsii.String("JobDefinition"), &MultiNodeJobDefinitionProps{
	InstanceType: ec2.InstanceType_Of(ec2.InstanceClass_R4, ec2.InstanceSize_LARGE),
	Containers: []multiNodeContainer{
		&multiNodeContainer{
			Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("mainMPIContainer"), &EcsEc2ContainerDefinitionProps{
				Image: ecs.ContainerImage_FromRegistry(jsii.String("yourregistry.com/yourMPIImage:latest")),
				Cpu: jsii.Number(256),
				Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
			StartNode: jsii.Number(0),
			EndNode: jsii.Number(5),
		},
	},
})
// convenience method
multiNodeJob.AddContainer(&multiNodeContainer{
	StartNode: jsii.Number(6),
	EndNode: jsii.Number(10),
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("multiContainer"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Cpu: jsii.Number(256),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
	}),
})

Experimental.
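Since each `MultiNodeContainer` covers the inclusive range `[startNode, endNode]`, the two containers above cover nodes 0–5 and 6–10. A small pure-Go sketch (the `nodeRange` type and `assignNodes` helper are illustrative only, not part of this package) of how such ranges map nodes to containers:

```go
package main

import "fmt"

// nodeRange mirrors the StartNode/EndNode fields of a MultiNodeContainer.
type nodeRange struct {
	Name      string
	StartNode int
	EndNode   int
}

// assignNodes maps each node index to the container that runs on it.
// Both endpoints are inclusive, matching MultiNodeContainer semantics.
func assignNodes(ranges []nodeRange) map[int]string {
	assignment := make(map[int]string)
	for _, r := range ranges {
		for node := r.StartNode; node <= r.EndNode; node++ {
			assignment[node] = r.Name
		}
	}
	return assignment
}

func main() {
	a := assignNodes([]nodeRange{
		{Name: "mainMPIContainer", StartNode: 0, EndNode: 5},
		{Name: "multiContainer", StartNode: 6, EndNode: 10},
	})
	// Node coverage matches the example above: 0-5 and 6-10 inclusive.
	fmt.Println(a[0], a[5], a[6], a[10])
}
```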

type MultiNodeJobDefinition

type MultiNodeJobDefinition interface {
	awscdk.Resource
	IJobDefinition
	// The containers that this multinode job will run.
	// Experimental.
	Containers() *[]*MultiNodeContainer
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	// Experimental.
	Env() *awscdk.ResourceEnvironment
	// The instance type that this job definition will run.
	// Experimental.
	InstanceType() awsec2.InstanceType
	// The ARN of this job definition.
	// Experimental.
	JobDefinitionArn() *string
	// The name of this job definition.
	// Experimental.
	JobDefinitionName() *string
	// The index of the main node in this job.
	//
	// The main node is responsible for orchestration.
	// Experimental.
	MainNode() *float64
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// Experimental.
	Parameters() *map[string]interface{}
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	// Experimental.
	PhysicalName() *string
	// Whether to propagate tags from the JobDefinition to the ECS task that Batch spawns.
	// Experimental.
	PropagateTags() *bool
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to this number of times.
	// Experimental.
	RetryAttempts() *float64
	// Defines the retry behavior for this job.
	// Experimental.
	RetryStrategies() *[]RetryStrategy
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Experimental.
	SchedulingPriority() *float64
	// The stack in which this resource is defined.
	// Experimental.
	Stack() awscdk.Stack
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Experimental.
	Timeout() awscdk.Duration
	// Add a container to this multinode job.
	// Experimental.
	AddContainer(container *MultiNodeContainer)
	// Add a RetryStrategy to this JobDefinition.
	// Experimental.
	AddRetryStrategy(strategy RetryStrategy)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	// Experimental.
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Experimental.
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	// Experimental.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	// Experimental.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

A JobDefinition that uses ECS orchestration to run multiple containers.

Example:

import "github.com/aws/aws-cdk-go/awscdk"

multiNodeJob := batch.NewMultiNodeJobDefinition(this, jsii.String("JobDefinition"), &MultiNodeJobDefinitionProps{
	InstanceType: ec2.InstanceType_Of(ec2.InstanceClass_R4, ec2.InstanceSize_LARGE),
	Containers: []multiNodeContainer{
		&multiNodeContainer{
			Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("mainMPIContainer"), &EcsEc2ContainerDefinitionProps{
				Image: ecs.ContainerImage_FromRegistry(jsii.String("yourregistry.com/yourMPIImage:latest")),
				Cpu: jsii.Number(256),
				Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
			StartNode: jsii.Number(0),
			EndNode: jsii.Number(5),
		},
	},
})
// convenience method
multiNodeJob.AddContainer(&multiNodeContainer{
	StartNode: jsii.Number(6),
	EndNode: jsii.Number(10),
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("multiContainer"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Cpu: jsii.Number(256),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
	}),
})

Experimental.

func NewMultiNodeJobDefinition

func NewMultiNodeJobDefinition(scope constructs.Construct, id *string, props *MultiNodeJobDefinitionProps) MultiNodeJobDefinition

Experimental.

type MultiNodeJobDefinitionProps

type MultiNodeJobDefinitionProps struct {
	// The name of this job definition.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	JobDefinitionName *string `field:"optional" json:"jobDefinitionName" yaml:"jobDefinitionName"`
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters
	//
	// Default: none.
	//
	// Experimental.
	Parameters *map[string]interface{} `field:"optional" json:"parameters" yaml:"parameters"`
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to this number of times.
	// Default: 1.
	//
	// Experimental.
	RetryAttempts *float64 `field:"optional" json:"retryAttempts" yaml:"retryAttempts"`
	// Defines the retry behavior for this job.
	// Default: - no `RetryStrategy`.
	//
	// Experimental.
	RetryStrategies *[]RetryStrategy `field:"optional" json:"retryStrategies" yaml:"retryStrategies"`
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Default: none.
	//
	// Experimental.
	SchedulingPriority *float64 `field:"optional" json:"schedulingPriority" yaml:"schedulingPriority"`
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Default: - no timeout.
	//
	// Experimental.
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
	// The instance type that this job definition will run.
	// Experimental.
	InstanceType awsec2.InstanceType `field:"required" json:"instanceType" yaml:"instanceType"`
	// The containers that this multinode job will run.
	// See: https://aws.amazon.com/blogs/compute/building-a-tightly-coupled-molecular-dynamics-workflow-with-multi-node-parallel-jobs-in-aws-batch/
	//
	// Default: none.
	//
	// Experimental.
	Containers *[]*MultiNodeContainer `field:"optional" json:"containers" yaml:"containers"`
	// The index of the main node in this job.
	//
	// The main node is responsible for orchestration.
	// Default: 0.
	//
	// Experimental.
	MainNode *float64 `field:"optional" json:"mainNode" yaml:"mainNode"`
	// Whether to propagate tags from the JobDefinition to the ECS task that Batch spawns.
	// Default: false.
	//
	// Experimental.
	PropagateTags *bool `field:"optional" json:"propagateTags" yaml:"propagateTags"`
}

Props to configure a MultiNodeJobDefinition.

Example:

import "github.com/aws/aws-cdk-go/awscdk"

multiNodeJob := batch.NewMultiNodeJobDefinition(this, jsii.String("JobDefinition"), &MultiNodeJobDefinitionProps{
	InstanceType: ec2.InstanceType_Of(ec2.InstanceClass_R4, ec2.InstanceSize_LARGE),
	Containers: []multiNodeContainer{
		&multiNodeContainer{
			Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("mainMPIContainer"), &EcsEc2ContainerDefinitionProps{
				Image: ecs.ContainerImage_FromRegistry(jsii.String("yourregistry.com/yourMPIImage:latest")),
				Cpu: jsii.Number(256),
				Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
			StartNode: jsii.Number(0),
			EndNode: jsii.Number(5),
		},
	},
})
// convenience method
multiNodeJob.AddContainer(&multiNodeContainer{
	StartNode: jsii.Number(6),
	EndNode: jsii.Number(10),
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("multiContainer"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Cpu: jsii.Number(256),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
	}),
})

Experimental.

type OrderedComputeEnvironment

type OrderedComputeEnvironment struct {
	// The ComputeEnvironment to link to this JobQueue.
	// Experimental.
	ComputeEnvironment IComputeEnvironment `field:"required" json:"computeEnvironment" yaml:"computeEnvironment"`
	// The order associated with `computeEnvironment`.
	// Experimental.
	Order *float64 `field:"required" json:"order" yaml:"order"`
}

Assigns an order to a ComputeEnvironment.

The JobQueue will prioritize the lowest-order ComputeEnvironment.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"

var computeEnvironment iComputeEnvironment

orderedComputeEnvironment := &OrderedComputeEnvironment{
	ComputeEnvironment: computeEnvironment,
	Order: jsii.Number(123),
}

Experimental.
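A sketch in plain Go (the `orderedCE` type and `byPriority` helper are illustrative only, not part of this package) of the lowest-order-first preference described above:

```go
package main

import (
	"fmt"
	"sort"
)

// orderedCE mirrors the shape of OrderedComputeEnvironment for illustration.
type orderedCE struct {
	Name  string
	Order float64
}

// byPriority returns the environments sorted so the lowest Order,
// i.e. the one the JobQueue tries first, comes first.
func byPriority(ces []orderedCE) []orderedCE {
	sorted := append([]orderedCE(nil), ces...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].Order < sorted[j].Order })
	return sorted
}

func main() {
	ces := byPriority([]orderedCE{
		{Name: "spotCE", Order: 2},
		{Name: "onDemandCE", Order: 1},
	})
	fmt.Println(ces[0].Name) // the queue prefers onDemandCE
}
```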

type Reason

type Reason interface {
}

Common job exit reasons.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	RetryAttempts: jsii.Number(5),
	RetryStrategies: []retryStrategy{
		batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()),
	},
})
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_SPOT_INSTANCE_RECLAIMED()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_Custom(&CustomReason{
	OnExitCode: jsii.String("40*"),
	OnReason: jsii.String("some reason"),
})))

Experimental.

func NewReason

func NewReason() Reason

Experimental.

func Reason_CANNOT_PULL_CONTAINER

func Reason_CANNOT_PULL_CONTAINER() Reason

func Reason_Custom

func Reason_Custom(customReasonProps *CustomReason) Reason

A custom Reason that can match on multiple conditions.

Note that all specified conditions must be met for this reason to match. Experimental.
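A `CustomReason`'s `OnExitCode` accepts a trailing-wildcard pattern such as `"40*"`. The real matching happens inside Batch; as an illustration only, Go's `path.Match` approximates it:

```go
package main

import (
	"fmt"
	"path"
)

// matchesExitCode reports whether an exit code matches an OnExitCode
// glob such as "40*". path.Match is a stand-in for Batch's own matching,
// which supports a trailing wildcard.
func matchesExitCode(pattern, exitCode string) bool {
	ok, err := path.Match(pattern, exitCode)
	return err == nil && ok
}

func main() {
	fmt.Println(matchesExitCode("40*", "404")) // true
	fmt.Println(matchesExitCode("40*", "500")) // false
}
```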

func Reason_NON_ZERO_EXIT_CODE

func Reason_NON_ZERO_EXIT_CODE() Reason

func Reason_SPOT_INSTANCE_RECLAIMED

func Reason_SPOT_INSTANCE_RECLAIMED() Reason

type RetryStrategy

type RetryStrategy interface {
	// The action to take when the job exits with the Reason specified.
	// Experimental.
	Action() Action
	// If the job exits with this Reason it will trigger the specified Action.
	// Experimental.
	On() Reason
}

Define how Jobs using this JobDefinition respond to different exit conditions.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	RetryAttempts: jsii.Number(5),
	RetryStrategies: []batch.RetryStrategy{
		batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()),
	},
})
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_SPOT_INSTANCE_RECLAIMED()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_Custom(&CustomReason{
	OnExitCode: jsii.String("40*"),
	OnReason: jsii.String("some reason"),
})))

Experimental.

func NewRetryStrategy

func NewRetryStrategy(action Action, on Reason) RetryStrategy

Experimental.

func RetryStrategy_Of

func RetryStrategy_Of(action Action, on Reason) RetryStrategy

Create a new RetryStrategy. Experimental.

type Secret

type Secret interface {
	// The ARN of the secret.
	// Experimental.
	Arn() *string
	// Whether this secret uses a specific JSON field.
	// Experimental.
	HasField() *bool
	// Grants reading the secret to a principal.
	// Experimental.
	GrantRead(grantee awsiam.IGrantable) awsiam.Grant
}

A secret environment variable.

Example:

import cdk "github.com/aws/aws-cdk-go/awscdk"

var mySecret awssecretsmanager.ISecret

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Secrets: map[string]batch.Secret{
			"MY_SECRET_ENV_VAR": batch.Secret_FromSecretsManager(mySecret, nil),
		},
	}),
})

Experimental.

func Secret_FromSecretsManager

func Secret_FromSecretsManager(secret awssecretsmanager.ISecret, field *string) Secret

Creates an environment variable value from a secret stored in AWS Secrets Manager. Experimental.

func Secret_FromSecretsManagerVersion

func Secret_FromSecretsManagerVersion(secret awssecretsmanager.ISecret, versionInfo *SecretVersionInfo, field *string) Secret

Creates an environment variable value from a specific version of a secret stored in AWS Secrets Manager. Experimental.

func Secret_FromSsmParameter

func Secret_FromSsmParameter(parameter awsssm.IParameter) Secret

Creates an environment variable value from a parameter stored in AWS Systems Manager Parameter Store. Experimental.

type SecretPathVolume

type SecretPathVolume interface {
	EksVolume
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	// Experimental.
	ContainerPath() *string
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	// Experimental.
	Name() *string
	// Specifies whether the secret or the secret's keys must be defined.
	// Default: true.
	//
	// Experimental.
	Optional() *bool
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	// Experimental.
	Readonly() *bool
	// The name of the secret.
	//
	// Must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	// Experimental.
	SecretName() *string
}

Specifies the configuration of a Kubernetes secret volume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"

secretPathVolume := batch_alpha.NewSecretPathVolume(&SecretPathVolumeOptions{
	Name: jsii.String("name"),
	SecretName: jsii.String("secretName"),

	// the properties below are optional
	MountPath: jsii.String("mountPath"),
	Optional: jsii.Boolean(false),
	Readonly: jsii.Boolean(false),
})

See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

Experimental.

func EksVolume_Secret

func EksVolume_Secret(options *SecretPathVolumeOptions) SecretPathVolume

Creates a Kubernetes Secret volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

Experimental.

func EmptyDirVolume_Secret

func EmptyDirVolume_Secret(options *SecretPathVolumeOptions) SecretPathVolume

Creates a Kubernetes Secret volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

Experimental.

func HostPathVolume_Secret

func HostPathVolume_Secret(options *SecretPathVolumeOptions) SecretPathVolume

Creates a Kubernetes Secret volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

Experimental.

func NewSecretPathVolume

func NewSecretPathVolume(options *SecretPathVolumeOptions) SecretPathVolume

Experimental.

func SecretPathVolume_Secret

func SecretPathVolume_Secret(options *SecretPathVolumeOptions) SecretPathVolume

Creates a Kubernetes Secret volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

Experimental.

type SecretPathVolumeOptions

type SecretPathVolumeOptions struct {
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	// Experimental.
	Name *string `field:"required" json:"name" yaml:"name"`
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	// Experimental.
	MountPath *string `field:"optional" json:"mountPath" yaml:"mountPath"`
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	// Experimental.
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
	// The name of the secret.
	//
	// Must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	// Experimental.
	SecretName *string `field:"required" json:"secretName" yaml:"secretName"`
	// Specifies whether the secret or the secret's keys must be defined.
	// Default: true.
	//
	// Experimental.
	Optional *bool `field:"optional" json:"optional" yaml:"optional"`
}

Options for a Kubernetes SecretPath Volume.

Example:

var jobDefn batch.EksJobDefinition

jobDefn.Container.AddVolume(batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
	Name: jsii.String("emptyDir"),
	MountPath: jsii.String("/Volumes/emptyDir"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_HostPath(&HostPathVolumeOptions{
	Name: jsii.String("hostPath"),
	HostPath: jsii.String("/sys"),
	MountPath: jsii.String("/Volumes/hostPath"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_Secret(&SecretPathVolumeOptions{
	Name: jsii.String("secret"),
	Optional: jsii.Boolean(true),
	MountPath: jsii.String("/Volumes/secret"),
	SecretName: jsii.String("mySecret"),
}))

See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

Experimental.

type SecretVersionInfo

type SecretVersionInfo struct {
	// version id of the secret.
	// Default: - use default version id.
	//
	// Experimental.
	VersionId *string `field:"optional" json:"versionId" yaml:"versionId"`
	// version stage of the secret.
	// Default: - use default version stage.
	//
	// Experimental.
	VersionStage *string `field:"optional" json:"versionStage" yaml:"versionStage"`
}

Specify the secret's version id or version stage.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"

secretVersionInfo := &SecretVersionInfo{
	VersionId: jsii.String("versionId"),
	VersionStage: jsii.String("versionStage"),
}

Experimental.

type Share

type Share struct {
	// The identifier of this Share.
	//
	// All jobs that specify this share identifier
	// when submitted to the queue will be considered as part of this Share.
	// Experimental.
	ShareIdentifier *string `field:"required" json:"shareIdentifier" yaml:"shareIdentifier"`
	// The weight factor given to this Share.
	//
	// The Scheduler decides which jobs to put in the Compute Environment
	// such that the following ratio is equal for each job:
	//
	// `sharevCpu / weightFactor`,
	//
	// where `sharevCpu` is the total amount of vCPU given to that particular share; that is,
	// the sum of the vCPU of each job currently in the Compute Environment for that share.
	//
	// See the readme of this module for a detailed example that shows how these are used,
	// how it relates to `computeReservation`, and how `shareDecay` affects these calculations.
	// Experimental.
	WeightFactor *float64 `field:"required" json:"weightFactor" yaml:"weightFactor"`
}

Represents a group of Job Definitions.

All Job Definitions that declare a share identifier will be considered members of the Share defined by that share identifier.

The Scheduler divides the maximum available vCPUs of the ComputeEnvironment among Jobs in the Queue based on their shareIdentifier and the weightFactor associated with that shareIdentifier.

Example:

fairsharePolicy := batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"))

fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("A"),
	WeightFactor: jsii.Number(1),
})
fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("B"),
	WeightFactor: jsii.Number(1),
})
batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	SchedulingPolicy: fairsharePolicy,
})
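To make the `weightFactor` ratio concrete, here is a small plain-Go sketch (illustrative arithmetic only, not the Batch scheduler): equalizing `sharevCpu / weightFactor` across shares means each share's target vCPU count is `maxvCpus * weight / totalWeight`.

```go
package main

import "fmt"

// fairShareTargets sketches how equalizing sharevCpu / weightFactor across
// shares yields each share's target vCPU count:
//   target_i = maxvCpus * weight_i / sum(weights)
// Hypothetical helper for illustration; this is not the Batch scheduler.
func fairShareTargets(maxvCpus float64, weights map[string]float64) map[string]float64 {
	total := 0.0
	for _, w := range weights {
		total += w
	}
	targets := make(map[string]float64, len(weights))
	for id, w := range weights {
		targets[id] = maxvCpus * w / total
	}
	return targets
}

func main() {
	// Shares "A" and "B" with equal weight split a 100-vCPU environment evenly.
	fmt.Println(fairShareTargets(100, map[string]float64{"A": 1, "B": 1})) // map[A:50 B:50]
}
```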

Experimental.

type Tmpfs

type Tmpfs struct {
	// The absolute file path where the tmpfs volume is to be mounted.
	// Experimental.
	ContainerPath *string `field:"required" json:"containerPath" yaml:"containerPath"`
	// The size (in MiB) of the tmpfs volume.
	// Experimental.
	Size awscdk.Size `field:"required" json:"size" yaml:"size"`
	// The list of tmpfs volume mount options.
	//
	// For more information, see
	// [TmpfsMountOptions](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Tmpfs.html).
	// Default: none.
	//
	// Experimental.
	MountOptions *[]TmpfsMountOption `field:"optional" json:"mountOptions" yaml:"mountOptions"`
}

The details of a tmpfs mount for a container.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import cdk "github.com/aws/aws-cdk-go/awscdk"

var size cdk.Size

tmpfs := &Tmpfs{
	ContainerPath: jsii.String("containerPath"),
	Size: size,

	// the properties below are optional
	MountOptions: &[]batch_alpha.TmpfsMountOption{
		batch_alpha.TmpfsMountOption_DEFAULTS,
	},
}

Experimental.

type TmpfsMountOption

type TmpfsMountOption string

The supported options for a tmpfs mount for a container. Experimental.

const (
	// Experimental.
	TmpfsMountOption_DEFAULTS TmpfsMountOption = "DEFAULTS"
	// Experimental.
	TmpfsMountOption_RO TmpfsMountOption = "RO"
	// Experimental.
	TmpfsMountOption_RW TmpfsMountOption = "RW"
	// Experimental.
	TmpfsMountOption_SUID TmpfsMountOption = "SUID"
	// Experimental.
	TmpfsMountOption_NOSUID TmpfsMountOption = "NOSUID"
	// Experimental.
	TmpfsMountOption_DEV TmpfsMountOption = "DEV"
	// Experimental.
	TmpfsMountOption_NODEV TmpfsMountOption = "NODEV"
	// Experimental.
	TmpfsMountOption_EXEC TmpfsMountOption = "EXEC"
	// Experimental.
	TmpfsMountOption_NOEXEC TmpfsMountOption = "NOEXEC"
	// Experimental.
	TmpfsMountOption_SYNC TmpfsMountOption = "SYNC"
	// Experimental.
	TmpfsMountOption_ASYNC TmpfsMountOption = "ASYNC"
	// Experimental.
	TmpfsMountOption_DIRSYNC TmpfsMountOption = "DIRSYNC"
	// Experimental.
	TmpfsMountOption_REMOUNT TmpfsMountOption = "REMOUNT"
	// Experimental.
	TmpfsMountOption_MAND TmpfsMountOption = "MAND"
	// Experimental.
	TmpfsMountOption_NOMAND TmpfsMountOption = "NOMAND"
	// Experimental.
	TmpfsMountOption_ATIME TmpfsMountOption = "ATIME"
	// Experimental.
	TmpfsMountOption_NOATIME TmpfsMountOption = "NOATIME"
	// Experimental.
	TmpfsMountOption_DIRATIME TmpfsMountOption = "DIRATIME"
	// Experimental.
	TmpfsMountOption_NODIRATIME TmpfsMountOption = "NODIRATIME"
	// Experimental.
	TmpfsMountOption_BIND TmpfsMountOption = "BIND"
	// Experimental.
	TmpfsMountOption_RBIND TmpfsMountOption = "RBIND"
	// Experimental.
	TmpfsMountOption_UNBINDABLE TmpfsMountOption = "UNBINDABLE"
	// Experimental.
	TmpfsMountOption_RUNBINDABLE TmpfsMountOption = "RUNBINDABLE"
	// Experimental.
	TmpfsMountOption_PRIVATE TmpfsMountOption = "PRIVATE"
	// Experimental.
	TmpfsMountOption_RPRIVATE TmpfsMountOption = "RPRIVATE"
	// Experimental.
	TmpfsMountOption_SHARED TmpfsMountOption = "SHARED"
	// Experimental.
	TmpfsMountOption_RSHARED TmpfsMountOption = "RSHARED"
	// Experimental.
	TmpfsMountOption_SLAVE TmpfsMountOption = "SLAVE"
	// Experimental.
	TmpfsMountOption_RSLAVE TmpfsMountOption = "RSLAVE"
	// Experimental.
	TmpfsMountOption_RELATIME TmpfsMountOption = "RELATIME"
	// Experimental.
	TmpfsMountOption_NORELATIME TmpfsMountOption = "NORELATIME"
	// Experimental.
	TmpfsMountOption_STRICTATIME TmpfsMountOption = "STRICTATIME"
	// Experimental.
	TmpfsMountOption_NOSTRICTATIME TmpfsMountOption = "NOSTRICTATIME"
	// Experimental.
	TmpfsMountOption_MODE TmpfsMountOption = "MODE"
	// Experimental.
	TmpfsMountOption_UID TmpfsMountOption = "UID"
	// Experimental.
	TmpfsMountOption_GID TmpfsMountOption = "GID"
	// Experimental.
	TmpfsMountOption_NR_INODES TmpfsMountOption = "NR_INODES"
	// Experimental.
	TmpfsMountOption_NR_BLOCKS TmpfsMountOption = "NR_BLOCKS"
	// Experimental.
	TmpfsMountOption_MPOL TmpfsMountOption = "MPOL"
)

type Ulimit

type Ulimit struct {
	// The hard limit for this resource.
	//
	// The container will
	// be terminated if it exceeds this limit.
	// Experimental.
	HardLimit *float64 `field:"required" json:"hardLimit" yaml:"hardLimit"`
	// The resource to limit.
	// Experimental.
	Name UlimitName `field:"required" json:"name" yaml:"name"`
	// The reservation for this resource.
	//
	// The container will
	// not be terminated if it exceeds this limit.
	// Experimental.
	SoftLimit *float64 `field:"required" json:"softLimit" yaml:"softLimit"`
}

Sets limits for a resource with `ulimit` on Linux systems.

Used by the Docker daemon.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"

ulimit := &Ulimit{
	HardLimit: jsii.Number(123),
	Name: batch_alpha.UlimitName_CORE,
	SoftLimit: jsii.Number(123),
}
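The hard/soft semantics documented on `Ulimit` can be sketched as follows (an illustration of the documented behavior, with hypothetical helper names; this is not how Batch enforces ulimits):

```go
package main

import "fmt"

// ulimitOutcome sketches the documented semantics: exceeding the soft limit
// (the reservation) is tolerated, while exceeding the hard limit terminates
// the container. Hypothetical helper for illustration only.
func ulimitOutcome(usage, softLimit, hardLimit float64) string {
	switch {
	case usage > hardLimit:
		return "terminated"
	case usage > softLimit:
		return "over reservation"
	default:
		return "ok"
	}
}

func main() {
	fmt.Println(ulimitOutcome(512, 256, 1024))  // over reservation
	fmt.Println(ulimitOutcome(2048, 256, 1024)) // terminated
}
```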

Experimental.

type UlimitName

type UlimitName string

The resources to be limited. Experimental.

const (
	// max core dump file size.
	// Experimental.
	UlimitName_CORE UlimitName = "CORE"
	// max cpu time (seconds) for a process.
	// Experimental.
	UlimitName_CPU UlimitName = "CPU"
	// max data segment size.
	// Experimental.
	UlimitName_DATA UlimitName = "DATA"
	// max file size.
	// Experimental.
	UlimitName_FSIZE UlimitName = "FSIZE"
	// max number of file locks.
	// Experimental.
	UlimitName_LOCKS UlimitName = "LOCKS"
	// max locked memory.
	// Experimental.
	UlimitName_MEMLOCK UlimitName = "MEMLOCK"
	// max POSIX message queue size.
	// Experimental.
	UlimitName_MSGQUEUE UlimitName = "MSGQUEUE"
	// max nice value for any process this user is running.
	// Experimental.
	UlimitName_NICE UlimitName = "NICE"
	// maximum number of open file descriptors.
	// Experimental.
	UlimitName_NOFILE UlimitName = "NOFILE"
	// maximum number of processes.
	// Experimental.
	UlimitName_NPROC UlimitName = "NPROC"
	// size of the process' resident set (in pages).
	// Experimental.
	UlimitName_RSS UlimitName = "RSS"
	// max realtime priority.
	// Experimental.
	UlimitName_RTPRIO UlimitName = "RTPRIO"
	// timeout for realtime tasks.
	// Experimental.
	UlimitName_RTTIME UlimitName = "RTTIME"
	// max number of pending signals.
	// Experimental.
	UlimitName_SIGPENDING UlimitName = "SIGPENDING"
	// max stack size (in bytes).
	// Experimental.
	UlimitName_STACK UlimitName = "STACK"
)

type UnmanagedComputeEnvironment

type UnmanagedComputeEnvironment interface {
	awscdk.Resource
	IComputeEnvironment
	IUnmanagedComputeEnvironment
	// The ARN of this compute environment.
	// Experimental.
	ComputeEnvironmentArn() *string
	// The name of the ComputeEnvironment.
	// Experimental.
	ComputeEnvironmentName() *string
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Experimental.
	Enabled() *bool
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	// Experimental.
	Env() *awscdk.ResourceEnvironment
	// The tree node.
	// Experimental.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	// Experimental.
	PhysicalName() *string
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Experimental.
	ServiceRole() awsiam.IRole
	// The stack in which this resource is defined.
	// Experimental.
	Stack() awscdk.Stack
	// The vCPUs this Compute Environment provides. Used only by the scheduler to schedule jobs in `Queue`s that use `FairshareSchedulingPolicy`s.
	//
	// **If this parameter is not provided on a fairshare queue, no capacity is reserved**;
	// that is, the `FairshareSchedulingPolicy` is ignored.
	// Experimental.
	UnmanagedvCPUs() *float64
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	// Experimental.
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Experimental.
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	// Experimental.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	// Experimental.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	// Experimental.
	ToString() *string
}

Unmanaged ComputeEnvironments do not provision or manage EC2 instances on your behalf.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import awsiam "github.com/aws/aws-cdk-go/awscdk/awsiam"

var role awsiam.IRole

unmanagedComputeEnvironment := batch_alpha.NewUnmanagedComputeEnvironment(this, jsii.String("MyUnmanagedComputeEnvironment"), &UnmanagedComputeEnvironmentProps{
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	ServiceRole: role,
	UnmanagedvCpus: jsii.Number(123),
})

Experimental.

func NewUnmanagedComputeEnvironment

func NewUnmanagedComputeEnvironment(scope constructs.Construct, id *string, props *UnmanagedComputeEnvironmentProps) UnmanagedComputeEnvironment

Experimental.

type UnmanagedComputeEnvironmentProps

type UnmanagedComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	// Experimental.
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	// Experimental.
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	// Experimental.
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// The vCPUs this Compute Environment provides. Used only by the scheduler to schedule jobs in `Queue`s that use `FairshareSchedulingPolicy`s.
	//
	// **If this parameter is not provided on a fairshare queue, no capacity is reserved**;
	// that is, the `FairshareSchedulingPolicy` is ignored.
	// Default: 0.
	//
	// Experimental.
	UnmanagedvCpus *float64 `field:"optional" json:"unmanagedvCpus" yaml:"unmanagedvCpus"`
}

Represents an UnmanagedComputeEnvironment.

Batch will not provision instances on your behalf in this ComputeEnvironment.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import batch_alpha "github.com/aws/aws-cdk-go/awscdkbatchalpha"
import awsiam "github.com/aws/aws-cdk-go/awscdk/awsiam"

var role awsiam.IRole

unmanagedComputeEnvironmentProps := &UnmanagedComputeEnvironmentProps{
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	ServiceRole: role,
	UnmanagedvCpus: jsii.Number(123),
}

Experimental.

Source Files

Directories

Path Synopsis
Package jsii contains the functionality needed for jsii packages to initialize their dependencies and themselves.
