awsbatch

package
v2.103.0
Published: Oct 26, 2023 License: Apache-2.0 Imports: 14 Imported by: 2

README

AWS Batch Construct Library

This module is part of the AWS Cloud Development Kit project.

AWS Batch is a batch processing tool for efficiently running hundreds of thousands of computing jobs in AWS. Batch can dynamically provision Amazon EC2 instances to meet the resource requirements of submitted jobs and simplifies the planning, scheduling, and execution of your batch workloads. Batch achieves this through four different resources:

  • ComputeEnvironments: Contain the resources used to execute Jobs
  • JobDefinitions: Define a type of Job that can be submitted
  • JobQueues: Route waiting Jobs to ComputeEnvironments
  • SchedulingPolicies: Applied to Queues to control how and when Jobs exit the JobQueue and enter the ComputeEnvironment

ComputeEnvironments can be managed or unmanaged. Batch will automatically provision EC2 instances in a managed ComputeEnvironment and will not provision any instances in an unmanaged ComputeEnvironment. Managed ComputeEnvironments can use ECS, Fargate, or EKS resources to spin up EC2 instances (ensure your EKS Cluster has been configured to support a Batch ComputeEnvironment before linking it). You can use Launch Templates and Placement Groups to configure exactly how these resources will be provisioned.

JobDefinitions can use either ECS resources or EKS resources. ECS JobDefinitions can use multiple containers to execute distributed workloads. EKS JobDefinitions can only execute a single container. Submitted Jobs use JobDefinitions as templates.

JobQueues must be linked to at least one ComputeEnvironment. Jobs exit the Queue in FIFO order unless a SchedulingPolicy is specified.

SchedulingPolicies tell the Scheduler how to choose which Jobs the ComputeEnvironment should execute next.

Use Cases & Examples

Cost Optimization
Spot Instances

Spot instances are significantly discounted EC2 instances that can be reclaimed at any time by AWS. Workloads that are fault-tolerant or stateless can take advantage of spot pricing. To use spot instances, set spot to true on a managed EC2 or Fargate Compute Environment:

vpc := ec2.NewVpc(this, jsii.String("VPC"))
batch.NewFargateComputeEnvironment(this, jsii.String("myFargateComputeEnv"), &FargateComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
})

Batch lets you specify, via spotBidPercentage, the maximum percentage of the on-demand instance price that the current spot price may be for Batch to provision the instance. This defaults to 100%, which is the recommended value. It cannot be specified on FargateComputeEnvironments and only applies to ManagedEc2EcsComputeEnvironments. The following code configures a Compute Environment to use only spot instances that cost at most 20% of the on-demand instance price:

vpc := ec2.NewVpc(this, jsii.String("VPC"))
batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
	SpotBidPercentage: jsii.Number(20),
})
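The spotBidPercentage arithmetic can be illustrated in plain Go (this is not part of the CDK or Batch API; the helper name and prices below are made up for illustration):

```go
package main

import "fmt"

// maxAcceptableSpotPrice returns the highest spot price Batch will accept
// for an instance, given its on-demand price and the spotBidPercentage.
func maxAcceptableSpotPrice(onDemandPrice, spotBidPercentage float64) float64 {
	return onDemandPrice * spotBidPercentage / 100
}

func main() {
	// E.g. a hypothetical $0.50/hr on-demand price with spotBidPercentage of 20:
	fmt.Printf("%.2f\n", maxAcceptableSpotPrice(0.50, 20)) // 0.10
}
```

With the default of 100, any spot price up to the full on-demand price is acceptable, which is why 100% maximizes the chance of obtaining capacity.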

For stateful or otherwise non-interruption-tolerant workflows, omit spot or set it to false to only provision on-demand instances.

Choosing Your Instance Types

Batch allows you to choose the instance types or classes that will run your workload. This example configures your ComputeEnvironment to use only the m5ad.large instance type:

vpc := ec2.NewVpc(this, jsii.String("VPC"))

batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceTypes: []ec2.InstanceType{
		ec2.InstanceType_Of(ec2.InstanceClass_M5AD, ec2.InstanceSize_LARGE),
	},
})

Batch allows you to specify only the instance class and to let it choose the size, which you can do like this:

var computeEnv iManagedEc2EcsComputeEnvironment
vpc := ec2.NewVpc(this, jsii.String("VPC"))
computeEnv.AddInstanceClass(ec2.InstanceClass_M5AD)
// Or, specify it on the constructor:
batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceClasses: []ec2.InstanceClass{
		ec2.InstanceClass_R4,
	},
})

Unless you explicitly specify useOptimalInstanceClasses: false, this compute environment will use 'optimal' instances, which tells Batch to pick an instance from the C4, M4, and R4 instance families. Note: Batch does not allow specifying instance types or classes with different architectures. For example, InstanceClass.A1 cannot be specified alongside 'optimal', because A1 uses ARM and 'optimal' uses x86_64. You can specify 'optimal' alongside several different instance types in the same compute environment:

var vpc iVpc


computeEnv := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	InstanceTypes: []ec2.InstanceType{
		ec2.InstanceType_Of(ec2.InstanceClass_M5AD, ec2.InstanceSize_LARGE),
	},
	UseOptimalInstanceClasses: jsii.Boolean(true), // default
	Vpc: vpc,
})
// Note: this is equivalent to specifying
computeEnv.AddInstanceType(ec2.InstanceType_Of(ec2.InstanceClass_M5AD, ec2.InstanceSize_LARGE))
computeEnv.AddInstanceClass(ec2.InstanceClass_C4)
computeEnv.AddInstanceClass(ec2.InstanceClass_M4)
computeEnv.AddInstanceClass(ec2.InstanceClass_R4)

Allocation Strategies

Allocation Strategy             Optimized for                 Downsides
BEST_FIT                        Cost                          May limit throughput
BEST_FIT_PROGRESSIVE            Throughput                    May increase cost
SPOT_CAPACITY_OPTIMIZED         Least interruption            Only useful on Spot instances
SPOT_PRICE_CAPACITY_OPTIMIZED   Least interruption + Price    Only useful on Spot instances

Batch provides different Allocation Strategies to help it choose which instances to provision. If your workflow tolerates interruptions, you should enable spot on your ComputeEnvironment and use SPOT_PRICE_CAPACITY_OPTIMIZED (the default when spot is enabled). This tells Batch to choose, from the instance types you've specified, the ones with the most spot capacity available (to minimize the chance of interruption) and the lowest price. To get the most benefit from your spot instances, you should allow Batch to choose from as many different instance types as possible. If you only care about minimizing interruptions and don't want Batch to optimize for cost, use SPOT_CAPACITY_OPTIMIZED; however, SPOT_PRICE_CAPACITY_OPTIMIZED is recommended over SPOT_CAPACITY_OPTIMIZED for most use cases.

If your workflow does not tolerate interruptions and you want to minimize your costs at the expense of potentially longer waiting times, use AllocationStrategy.BEST_FIT. This will choose the lowest-cost instance type that fits all the jobs in the queue. If instances of that type are not available, the queue will not choose a new type; instead, it will wait for the instance to become available. This can stall your Queue, with your compute environment only using part of its max capacity (or none at all) until the BEST_FIT instance becomes available.

If you are running a workflow that does not tolerate interruptions and you want to maximize throughput, you can use AllocationStrategy.BEST_FIT_PROGRESSIVE. This is the default Allocation Strategy when spot is false or unspecified. Like BEST_FIT, this strategy examines the Jobs in the queue and chooses the instance type that meets their requirements at the lowest cost per vCPU. However, if not all of the capacity can be filled with this instance type, it chooses the next-best instance type to run any jobs that couldn't fit into the BEST_FIT capacity. To make the most of this allocation strategy, use as many instance classes as is feasible for your workload. This example shows a ComputeEnvironment that uses BEST_FIT_PROGRESSIVE with 'optimal' and InstanceClass.M5 instance types:

var vpc iVpc


computeEnv := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceClasses: []ec2.InstanceClass{
		ec2.InstanceClass_M5,
	},
})

This example shows a ComputeEnvironment that uses BEST_FIT with 'optimal' instances:

var vpc iVpc


computeEnv := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	AllocationStrategy: batch.AllocationStrategy_BEST_FIT,
})

Note: allocationStrategy cannot be specified on Fargate Compute Environments.

Controlling vCPU allocation

You can specify the maximum and minimum vCPUs a managed ComputeEnvironment can have at any given time. Batch will always maintain minvCpus worth of instances in your ComputeEnvironment, even if it is not executing any jobs, and even if it is disabled. Batch will scale instances up to maxvCpus worth as jobs exit the JobQueue and enter the ComputeEnvironment. With AllocationStrategy.BEST_FIT_PROGRESSIVE, AllocationStrategy.SPOT_PRICE_CAPACITY_OPTIMIZED, or AllocationStrategy.SPOT_CAPACITY_OPTIMIZED, Batch may exceed maxvCpus, but never by more than a single instance type. This example configures a minvCpus of 10 and a maxvCpus of 100:

var vpc iVpc


batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceClasses: []ec2.InstanceClass{
		ec2.InstanceClass_R4,
	},
	MinvCpus: jsii.Number(10),
	MaxvCpus: jsii.Number(100),
})
Tagging Instances

You can tag any instances launched by your managed EC2 ComputeEnvironments by using the CDK Tags API:

var vpc iVpc


tagCE := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("CEThatMakesTaggedInstances"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
})

awscdk.Tags_Of(tagCE).Add(jsii.String("super"), jsii.String("salamander"))

Unmanaged ComputeEnvironments do not support maxvCpus or minvCpus because you must provision and manage the instances yourself; that is, Batch will not scale them up and down as needed.

Sharing a ComputeEnvironment between multiple JobQueues

Multiple JobQueues can share the same ComputeEnvironment. If multiple Queues are attempting to submit Jobs to the same ComputeEnvironment, Batch will pick the Job from the Queue with the highest priority. This example creates two JobQueues that share a ComputeEnvironment:

var vpc iVpc

sharedComputeEnv := batch.NewFargateComputeEnvironment(this, jsii.String("spotEnv"), &FargateComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
})
lowPriorityQueue := batch.NewJobQueue(this, jsii.String("LowPriorityQueue"), &JobQueueProps{
	Priority: jsii.Number(1),
})
highPriorityQueue := batch.NewJobQueue(this, jsii.String("HighPriorityQueue"), &JobQueueProps{
	Priority: jsii.Number(10),
})
lowPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))
highPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))
Fairshare Scheduling

Batch JobQueues execute Jobs submitted to them in FIFO order unless you specify a SchedulingPolicy. FIFO queuing can cause short-running jobs to be starved while long-running jobs fill the compute environment. To solve this, Jobs can be associated with a share.

Shares consist of a shareIdentifier and a weightFactor, which is inversely correlated with the vCPU allocated to that share identifier. When submitting a Job, you can specify its shareIdentifier to associate that particular job with that share. Let's see how the scheduler uses this information to schedule jobs.

For example, if there are two shares defined as follows:

Share Identifier Weight Factor
A 1
B 1

The weight factors share the following relationship:

A_{vCpus} / A_{Weight} = B_{vCpus} / B_{Weight}

where B_{vCpus} is the number of vCPUs allocated to jobs with share identifier 'B', and B_{Weight} is the weight factor of B.

The total number of vCPUs allocated to a share equals the number of jobs in that share times the number of vCPUs each of its jobs requires. Say each 'A' job needs 32 vCPUs (A_{Requirement} = 32) and each 'B' job needs 64 vCPUs (B_{Requirement} = 64):

A_{vCpus} = A_{Jobs} * A_{Requirement}
B_{vCpus} = B_{Jobs} * B_{Requirement}

We have:

A_{vCpus} / A_{Weight} = B_{vCpus} / B_{Weight}
A_{Jobs} * A_{Requirement} / A_{Weight} = B_{Jobs} * B_{Requirement} / B_{Weight}
A_{Jobs} * 32 / 1 = B_{Jobs} * 64 / 1
A_{Jobs} * 32 = B_{Jobs} * 64
A_{Jobs} = B_{Jobs} * 2

Thus the scheduler will schedule two 'A' jobs for each 'B' job.

You can control the weight factors to change these ratios, but note that weight factors are inversely correlated with the vCpus allocated to the corresponding share.
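The derivation above can be sketched in plain Go (illustrative only; this is not a CDK or Batch API):

```go
package main

import "fmt"

// jobsRatio returns how many jobs of share A are scheduled for each job of
// share B at steady state. It follows from A_vCpus/A_Weight == B_vCpus/B_Weight
// together with vCpus = jobs * requirement:
//   A_Jobs / B_Jobs == (B_Requirement * A_Weight) / (A_Requirement * B_Weight)
func jobsRatio(aWeight, aRequirement, bWeight, bRequirement float64) float64 {
	return (bRequirement * aWeight) / (aRequirement * bWeight)
}

func main() {
	// Example from the text: equal weights, A needs 32 vCPUs, B needs 64.
	fmt.Println(jobsRatio(1, 32, 1, 64)) // 2
}
```

Halving A's weight factor would double A's vCPU allocation and therefore double this ratio again, which is the inverse correlation the text describes.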

This example would be configured like this:

fairsharePolicy := batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"))

fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("A"),
	WeightFactor: jsii.Number(1),
})
fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("B"),
	WeightFactor: jsii.Number(1),
})
batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	SchedulingPolicy: fairsharePolicy,
})

Note: the scheduler only considers the current usage of the compute environment unless you specify shareDecay. For example, a shareDecay of 5 minutes in the example above means that at any given point in time, twice as many 'A' jobs will be scheduled for each 'B' job, but only over the past 5 minutes. If 'B' jobs run longer than 5 minutes, the scheduler may schedule more than two 'A' jobs for each 'B' job, because the usage of those long-running 'B' jobs is no longer considered after 5 minutes. shareDecay linearly decreases the usage of long-running jobs for calculation purposes: if the share decay is 60 seconds, jobs that have run for 30 seconds have their usage counted at only 50% of its actual value, and after a full minute the scheduler ignores them entirely for fairness calculations.
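That linear decay can be sketched in plain Go (illustrative only; the function name and the vCPU numbers are made up):

```go
package main

import "fmt"

// decayedUsage scales a job's actual vCPU usage by how far through the
// shareDecay window it is: usage counts fully at t=0 and not at all once
// the job's runtime reaches the decay window.
func decayedUsage(actualVCpus, runtimeSec, decaySec float64) float64 {
	remaining := 1 - runtimeSec/decaySec
	if remaining < 0 {
		remaining = 0
	}
	return actualVCpus * remaining
}

func main() {
	// With a 60-second decay, an 8-vCPU job running for 30 seconds counts at 50%.
	fmt.Println(decayedUsage(8, 30, 60)) // 4
	// After the full window it no longer counts toward fairness at all.
	fmt.Println(decayedUsage(8, 90, 60)) // 0
}
```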

The following code specifies a shareDecay of 5 minutes:

fairsharePolicy := batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"), &FairshareSchedulingPolicyProps{
	ShareDecay: cdk.Duration_Minutes(jsii.Number(5)),
})

If you have high priority jobs that should always be executed as soon as they arrive, you can define a computeReservation to specify the percentage of the maximum vCPU capacity that should be reserved for shares that are not in the queue. The actual reserved percentage is defined by Batch as:

(computeReservation / 100) ^ ActiveFairShares

where ActiveFairShares is the number of shares for which there exists at least one job in the queue with a unique share identifier.

This is best illustrated with an example. Suppose there are three shares with share identifiers A, B and C respectively and we specify the computeReservation to be 75%. The queue is currently empty, and no other shares exist.

There are no active fair shares, since the queue is empty. Thus (75/100)^0 = 1 = 100% of the maximum vCpus are reserved for all shares.

A job with identifier A enters the queue.

The number of active fair shares is now 1, hence (75/100)^1 = .75 = 75% of the maximum vCpus are reserved for all shares that do not have the identifier A; for this example, this is B and C, (but if jobs are submitted with a share identifier not covered by this fairshare policy, those would be considered just as B and C are).

Now a B job enters the queue. The number of active fair shares is now 2, so (75/100)^2 = .5625 = 56.25% of the maximum vCpus are reserved for all shares that do not have the identifier A or B.

Now a second A job enters the queue. The number of active fair shares is still 2, so the percentage reserved is still 56.25%

Now a C job enters the queue. The number of active fair shares is now 3, so (75/100)^3 = .421875 = 42.1875% of the maximum vCpus are reserved for all shares that do not have the identifier A, B, or C.

If there are no other shares that your jobs can specify, this means that 42.1875% of your capacity will never be used!

Now, A, B, and C can only consume 100% - 42.1875% = 57.8125% of the maximum vCpus. Note that this percentage is not split between A, B, and C. Instead, the scheduler uses their weightFactors to decide which jobs to schedule; the only difference is that instead of competing for 100% of the max capacity, jobs compete for 57.8125% of it.
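The reserved-capacity formula from this walkthrough can be checked with a short plain-Go sketch (illustrative only, not a CDK API):

```go
package main

import (
	"fmt"
	"math"
)

// reservedPercent implements (computeReservation/100)^ActiveFairShares,
// expressed as a percentage of the maximum vCPU capacity.
func reservedPercent(computeReservation float64, activeFairShares int) float64 {
	return math.Pow(computeReservation/100, float64(activeFairShares)) * 100
}

func main() {
	// Reproduces the walkthrough with computeReservation = 75:
	for n := 0; n <= 3; n++ {
		fmt.Printf("%d active shares: %.4f%% reserved\n", n, reservedPercent(75, n))
	}
	// 0 active shares: 100.0000% reserved
	// 1 active shares: 75.0000% reserved
	// 2 active shares: 56.2500% reserved
	// 3 active shares: 42.1875% reserved
}
```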

This example specifies a computeReservation of 75% that will behave as explained in the example above:

batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"), &FairshareSchedulingPolicyProps{
	ComputeReservation: jsii.Number(75),
	Shares: []*Share{
		&Share{
			WeightFactor: jsii.Number(1),
			ShareIdentifier: jsii.String("A"),
		},
		&Share{
			WeightFactor: jsii.Number(0.5),
			ShareIdentifier: jsii.String("B"),
		},
		&Share{
			WeightFactor: jsii.Number(2),
			ShareIdentifier: jsii.String("C"),
		},
	},
})

You can specify a priority on your JobDefinitions to tell the scheduler to prioritize certain jobs that share the same share identifier.

Configuring Job Retry Policies

Certain workflows may result in Jobs failing due to intermittent issues. Jobs can specify retry policies to respond to different failures with different actions. There are three different ways information about how a Job exited can be conveyed:

  • exitCode: the exit code returned from the process executed by the container. Will only match non-zero exit codes.
  • reason: any middleware errors, like your Docker registry being down.
  • statusReason: infrastructure errors, most commonly your spot instance being reclaimed.

For most use cases, only one of these will be associated with a particular action at a time. To specify common exitCodes, reasons, or statusReasons, use the corresponding value from the Reason class. This example shows some common failure reasons:

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	RetryAttempts: jsii.Number(5),
	RetryStrategies: []RetryStrategy{
		batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()),
	},
})
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_SPOT_INSTANCE_RECLAIMED()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_Custom(&CustomReason{
	OnExitCode: jsii.String("40*"),
	OnReason: jsii.String("some reason"),
})))

When specifying a custom reason, you can specify a glob string to match each of these and react to different failures accordingly. Up to five different retry strategies can be configured for each Job, and each strategy can match against some or all of exitCode, reason, and statusReason. You can optionally configure the number of times a job will be retried, but you cannot configure different retry counts for different strategies; they all share the same count. If multiple conditions are specified in a given retry strategy, they must all match for the action to be taken; the conditions are ANDed together, not ORed.
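As an illustration of how a glob pattern such as "40*" behaves (plain Go, not how Batch implements matching internally), Go's path.Match supports the same `*` wildcard semantics:

```go
package main

import (
	"fmt"
	"path"
)

// matchesCondition reports whether an observed value (e.g. an exit code)
// matches a retry strategy's glob pattern such as "40*". path.Match treats
// '*' as a wildcard over any sequence of characters, similar to Batch's globs.
func matchesCondition(pattern, value string) bool {
	ok, err := path.Match(pattern, value)
	return err == nil && ok
}

func main() {
	fmt.Println(matchesCondition("40*", "404")) // true
	fmt.Println(matchesCondition("40*", "500")) // false
}
```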

Running single-container ECS workflows

Batch can run jobs on ECS or EKS. ECS jobs can be defined as single container or multinode. This example creates a JobDefinition that runs a single container with ECS:

var myFileSystem iFileSystem
var myJobRole role

myFileSystem.GrantRead(myJobRole)

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Volumes: []EcsVolume{
			batch.EcsVolume_Efs(&EfsVolumeOptions{
				Name: jsii.String("myVolume"),
				FileSystem: myFileSystem,
				ContainerPath: jsii.String("/Volumes/myVolume"),
				UseJobRole: jsii.Boolean(true),
			}),
		},
		JobRole: myJobRole,
	}),
})

For workflows that need persistent storage, Batch supports mounting Volumes to the container. You can both provision the volume and mount it to the container in a single operation:

var myFileSystem iFileSystem
var jobDefn ecsJobDefinition


jobDefn.Container.AddVolume(batch.EcsVolume_Efs(&EfsVolumeOptions{
	Name: jsii.String("myVolume"),
	FileSystem: myFileSystem,
	ContainerPath: jsii.String("/Volumes/myVolume"),
}))
Secrets

You can expose SecretsManager Secret ARNs or SSM Parameters to your container as environment variables. The following example defines the MY_SECRET_ENV_VAR environment variable that contains the ARN of the Secret defined by mySecret:

var mySecret iSecret


jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Secrets: map[string]Secret{
			"MY_SECRET_ENV_VAR": batch.Secret_FromSecretsManager(mySecret),
		},
	}),
})
Running Kubernetes Workflows

Batch also supports running workflows on EKS. The following example creates a JobDefinition that runs on EKS:

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

You can mount Volumes to these containers in a single operation:

var jobDefn eksJobDefinition

jobDefn.Container.AddVolume(batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
	Name: jsii.String("emptyDir"),
	MountPath: jsii.String("/Volumes/emptyDir"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_HostPath(&HostPathVolumeOptions{
	Name: jsii.String("hostPath"),
	HostPath: jsii.String("/sys"),
	MountPath: jsii.String("/Volumes/hostPath"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_Secret(&SecretPathVolumeOptions{
	Name: jsii.String("secret"),
	Optional: jsii.Boolean(true),
	MountPath: jsii.String("/Volumes/secret"),
	SecretName: jsii.String("mySecret"),
}))
Running Distributed Workflows

Some workflows benefit from parallelization and are most powerful when run in a distributed environment, such as certain numerical calculations or simulations. Batch offers MultiNodeJobDefinitions for this purpose, which allow a single job to run on multiple instances in parallel. Message Passing Interface (MPI) is often used with these workflows. You must configure your containers to use MPI properly, but Batch allows different nodes running different containers to communicate easily with one another. You must also configure your containers to use certain environment variables that Batch will provide them, which let them know which one is the main node, among other information. For an in-depth example of using MPI to perform numerical computations on Batch, see this blog post. In particular, the environment variable that tells the containers which one is the main node can be configured on your MultiNodeJobDefinition as follows:

multiNodeJob := batch.NewMultiNodeJobDefinition(this, jsii.String("JobDefinition"), &MultiNodeJobDefinitionProps{
	InstanceType: ec2.InstanceType_Of(ec2.InstanceClass_R4, ec2.InstanceSize_LARGE), // optional; omit to let Batch choose the type for you
	Containers: []*MultiNodeContainer{
		&MultiNodeContainer{
			Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("mainMPIContainer"), &EcsEc2ContainerDefinitionProps{
				Image: ecs.ContainerImage_FromRegistry(jsii.String("yourregistry.com/yourMPIImage:latest")),
				Cpu: jsii.Number(256),
				Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
			StartNode: jsii.Number(0),
			EndNode: jsii.Number(5),
		},
	},
})
// convenience method
multiNodeJob.AddContainer(&MultiNodeContainer{
	StartNode: jsii.Number(6),
	EndNode: jsii.Number(10),
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("multiContainer"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Cpu: jsii.Number(256),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
	}),
})

If you need to set the control node to an index other than 0, specify it directly:

multiNodeJob := batch.NewMultiNodeJobDefinition(this, jsii.String("JobDefinition"), &MultiNodeJobDefinitionProps{
	MainNode: jsii.Number(5),
	InstanceType: ec2.InstanceType_Of(ec2.InstanceClass_R4, ec2.InstanceSize_LARGE),
})
Pass Parameters to a Job

Batch allows you to define parameters in your JobDefinition that can be referenced in the container command. For example:

batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Parameters: map[string]interface{}{
		"echoParam": jsii.String("foobar"),
	},
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Command: []*string{
			jsii.String("echo"),
			jsii.String("Ref::echoParam"),
		},
	}),
})
Understanding Progressive Allocation Strategies

AWS Batch uses an allocation strategy to determine which compute resource will efficiently handle incoming job requests. By default, BEST_FIT picks an available compute instance based on vCPU requirements. If none exists, the job waits until resources become available. With this strategy, however, you may have jobs waiting in the queue unnecessarily despite having more powerful instances available. Below is an example of what that situation might look like:

Compute Environment:

1. m5.xlarge => 4 vCPU
2. m5.2xlarge => 8 vCPU
Job Queue:
---------
| A | B |
---------

Job Requirements:
A => 4 vCPU - ALLOCATED TO m5.xlarge
B => 2 vCPU - WAITING

In this situation, Batch will allocate Job A to compute resource #1 because it is the most cost-efficient resource that matches the vCPU requirement. However, with the BEST_FIT strategy, Job B will not be allocated to our other available compute resource even though it is powerful enough to handle the job. Instead, it will wait until the first job finishes processing or until a similar m5.xlarge resource is provisioned.

The alternative is the BEST_FIT_PROGRESSIVE strategy, which allows the remaining job to be handled by larger instance types regardless of vCPU requirement and cost.
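Here is a toy plain-Go sketch of the difference between the two strategies (this is not Batch's actual scheduler; the type names and availability counts are made up to mirror the example above):

```go
package main

import "fmt"

type instance struct {
	name      string
	vCPUs     int
	available int // instances of this type currently available
}

// allocate places each job (a vCPU requirement) on the cheapest fitting
// instance type, assuming types are sorted from cheapest to most expensive.
// With progressive == false (BEST_FIT), a job waits if its best-fit type has
// no capacity; with progressive == true (BEST_FIT_PROGRESSIVE), it falls
// back to the next-larger type instead.
func allocate(jobs []int, types []instance, progressive bool) []string {
	placements := make([]string, len(jobs))
	for i, need := range jobs {
		placements[i] = "WAITING"
		for t := range types {
			if types[t].vCPUs < need {
				continue // too small for this job
			}
			if types[t].available > 0 {
				types[t].available--
				placements[i] = types[t].name
				break
			}
			if !progressive {
				break // BEST_FIT: wait for the best-fit type rather than fall back
			}
		}
	}
	return placements
}

func main() {
	ce := func() []instance {
		return []instance{{"m5.xlarge", 4, 1}, {"m5.2xlarge", 8, 1}}
	}
	jobs := []int{4, 2} // Job A needs 4 vCPUs, Job B needs 2
	fmt.Println(allocate(jobs, ce(), false)) // [m5.xlarge WAITING]
	fmt.Println(allocate(jobs, ce(), true))  // [m5.xlarge m5.2xlarge]
}
```

Under BEST_FIT, Job B stalls even though the m5.2xlarge is idle; under BEST_FIT_PROGRESSIVE it spills over to the larger type.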

Permissions

You can grant any Principal the batch:SubmitJob permission on both a job definition and a job queue like this:

var vpc iVpc


ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []*OrderedComputeEnvironment{
		&OrderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func CfnComputeEnvironment_CFN_RESOURCE_TYPE_NAME

func CfnComputeEnvironment_CFN_RESOURCE_TYPE_NAME() *string

func CfnComputeEnvironment_IsCfnElement

func CfnComputeEnvironment_IsCfnElement(x interface{}) *bool

Returns `true` if a construct is a stack element (i.e. part of the synthesized cloudformation template).

Uses duck-typing instead of `instanceof` to allow stack elements from different versions of this library to be included in the same stack.

Returns: The construct as a stack element or undefined if it is not a stack element.

func CfnComputeEnvironment_IsCfnResource

func CfnComputeEnvironment_IsCfnResource(construct constructs.IConstruct) *bool

Check whether the given construct is a CfnResource.

func CfnComputeEnvironment_IsConstruct

func CfnComputeEnvironment_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func CfnJobDefinition_CFN_RESOURCE_TYPE_NAME

func CfnJobDefinition_CFN_RESOURCE_TYPE_NAME() *string

func CfnJobDefinition_IsCfnElement

func CfnJobDefinition_IsCfnElement(x interface{}) *bool

Returns `true` if a construct is a stack element (i.e. part of the synthesized cloudformation template).

Uses duck-typing instead of `instanceof` to allow stack elements from different versions of this library to be included in the same stack.

Returns: The construct as a stack element or undefined if it is not a stack element.

func CfnJobDefinition_IsCfnResource

func CfnJobDefinition_IsCfnResource(construct constructs.IConstruct) *bool

Check whether the given construct is a CfnResource.

func CfnJobDefinition_IsConstruct

func CfnJobDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func CfnJobQueue_CFN_RESOURCE_TYPE_NAME

func CfnJobQueue_CFN_RESOURCE_TYPE_NAME() *string

func CfnJobQueue_IsCfnElement

func CfnJobQueue_IsCfnElement(x interface{}) *bool

Returns `true` if a construct is a stack element (i.e. part of the synthesized CloudFormation template).

Uses duck-typing instead of `instanceof` to allow stack elements from different versions of this library to be included in the same stack.

Returns: `true` if the construct is a stack element, `false` otherwise.

func CfnJobQueue_IsCfnResource

func CfnJobQueue_IsCfnResource(construct constructs.IConstruct) *bool

Check whether the given construct is a CfnResource.

func CfnJobQueue_IsConstruct

func CfnJobQueue_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func CfnSchedulingPolicy_CFN_RESOURCE_TYPE_NAME

func CfnSchedulingPolicy_CFN_RESOURCE_TYPE_NAME() *string

func CfnSchedulingPolicy_IsCfnElement

func CfnSchedulingPolicy_IsCfnElement(x interface{}) *bool

Returns `true` if a construct is a stack element (i.e. part of the synthesized CloudFormation template).

Uses duck-typing instead of `instanceof` to allow stack elements from different versions of this library to be included in the same stack.

Returns: `true` if the construct is a stack element, `false` otherwise.

func CfnSchedulingPolicy_IsCfnResource

func CfnSchedulingPolicy_IsCfnResource(construct constructs.IConstruct) *bool

Check whether the given construct is a CfnResource.

func CfnSchedulingPolicy_IsConstruct

func CfnSchedulingPolicy_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func EcsEc2ContainerDefinition_IsConstruct added in v2.96.0

func EcsEc2ContainerDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func EcsFargateContainerDefinition_IsConstruct added in v2.96.0

func EcsFargateContainerDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func EcsJobDefinition_IsConstruct added in v2.96.0

func EcsJobDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func EcsJobDefinition_IsOwnedResource added in v2.96.0

func EcsJobDefinition_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func EcsJobDefinition_IsResource added in v2.96.0

func EcsJobDefinition_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func EfsVolume_IsEfsVolume added in v2.96.0

func EfsVolume_IsEfsVolume(x interface{}) *bool

Returns `true` if `x` is an `EfsVolume`, `false` otherwise.

func EksContainerDefinition_IsConstruct added in v2.96.0

func EksContainerDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func EksJobDefinition_IsConstruct added in v2.96.0

func EksJobDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func EksJobDefinition_IsOwnedResource added in v2.96.0

func EksJobDefinition_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func EksJobDefinition_IsResource added in v2.96.0

func EksJobDefinition_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func EmptyDirVolume_IsEmptyDirVolume added in v2.96.0

func EmptyDirVolume_IsEmptyDirVolume(x interface{}) *bool

Returns `true` if `x` is an EmptyDirVolume, `false` otherwise.

func FairshareSchedulingPolicy_IsConstruct added in v2.96.0

func FairshareSchedulingPolicy_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func FairshareSchedulingPolicy_IsOwnedResource added in v2.96.0

func FairshareSchedulingPolicy_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func FairshareSchedulingPolicy_IsResource added in v2.96.0

func FairshareSchedulingPolicy_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func FargateComputeEnvironment_IsConstruct added in v2.96.0

func FargateComputeEnvironment_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func FargateComputeEnvironment_IsOwnedResource added in v2.96.0

func FargateComputeEnvironment_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func FargateComputeEnvironment_IsResource added in v2.96.0

func FargateComputeEnvironment_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func HostPathVolume_IsHostPathVolume added in v2.96.0

func HostPathVolume_IsHostPathVolume(x interface{}) *bool

Returns `true` if `x` is a `HostPathVolume`, `false` otherwise.

func HostVolume_IsHostVolume added in v2.96.0

func HostVolume_IsHostVolume(x interface{}) *bool

Returns `true` if `x` is a `HostVolume`, `false` otherwise.

func JobQueue_IsConstruct added in v2.96.0

func JobQueue_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func JobQueue_IsOwnedResource added in v2.96.0

func JobQueue_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func JobQueue_IsResource added in v2.96.0

func JobQueue_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func LinuxParameters_IsConstruct added in v2.96.0

func LinuxParameters_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func ManagedEc2EcsComputeEnvironment_IsConstruct added in v2.96.0

func ManagedEc2EcsComputeEnvironment_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func ManagedEc2EcsComputeEnvironment_IsOwnedResource added in v2.96.0

func ManagedEc2EcsComputeEnvironment_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func ManagedEc2EcsComputeEnvironment_IsResource added in v2.96.0

func ManagedEc2EcsComputeEnvironment_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func ManagedEc2EksComputeEnvironment_IsConstruct added in v2.96.0

func ManagedEc2EksComputeEnvironment_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func ManagedEc2EksComputeEnvironment_IsOwnedResource added in v2.96.0

func ManagedEc2EksComputeEnvironment_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func ManagedEc2EksComputeEnvironment_IsResource added in v2.96.0

func ManagedEc2EksComputeEnvironment_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func MultiNodeJobDefinition_IsConstruct added in v2.96.0

func MultiNodeJobDefinition_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func MultiNodeJobDefinition_IsOwnedResource added in v2.96.0

func MultiNodeJobDefinition_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func MultiNodeJobDefinition_IsResource added in v2.96.0

func MultiNodeJobDefinition_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func NewCfnComputeEnvironment_Override

func NewCfnComputeEnvironment_Override(c CfnComputeEnvironment, scope constructs.Construct, id *string, props *CfnComputeEnvironmentProps)

func NewCfnJobDefinition_Override

func NewCfnJobDefinition_Override(c CfnJobDefinition, scope constructs.Construct, id *string, props *CfnJobDefinitionProps)

func NewCfnJobQueue_Override

func NewCfnJobQueue_Override(c CfnJobQueue, scope constructs.Construct, id *string, props *CfnJobQueueProps)

func NewCfnSchedulingPolicy_Override

func NewCfnSchedulingPolicy_Override(c CfnSchedulingPolicy, scope constructs.Construct, id *string, props *CfnSchedulingPolicyProps)

func NewEcsEc2ContainerDefinition_Override added in v2.96.0

func NewEcsEc2ContainerDefinition_Override(e EcsEc2ContainerDefinition, scope constructs.Construct, id *string, props *EcsEc2ContainerDefinitionProps)

func NewEcsFargateContainerDefinition_Override added in v2.96.0

func NewEcsFargateContainerDefinition_Override(e EcsFargateContainerDefinition, scope constructs.Construct, id *string, props *EcsFargateContainerDefinitionProps)

func NewEcsJobDefinition_Override added in v2.96.0

func NewEcsJobDefinition_Override(e EcsJobDefinition, scope constructs.Construct, id *string, props *EcsJobDefinitionProps)

func NewEcsVolume_Override added in v2.96.0

func NewEcsVolume_Override(e EcsVolume, options *EcsVolumeOptions)

func NewEfsVolume_Override added in v2.96.0

func NewEfsVolume_Override(e EfsVolume, options *EfsVolumeOptions)

func NewEksContainerDefinition_Override added in v2.96.0

func NewEksContainerDefinition_Override(e EksContainerDefinition, scope constructs.Construct, id *string, props *EksContainerDefinitionProps)

func NewEksJobDefinition_Override added in v2.96.0

func NewEksJobDefinition_Override(e EksJobDefinition, scope constructs.Construct, id *string, props *EksJobDefinitionProps)

func NewEksVolume_Override added in v2.96.0

func NewEksVolume_Override(e EksVolume, options *EksVolumeOptions)

func NewEmptyDirVolume_Override added in v2.96.0

func NewEmptyDirVolume_Override(e EmptyDirVolume, options *EmptyDirVolumeOptions)

func NewFairshareSchedulingPolicy_Override added in v2.96.0

func NewFairshareSchedulingPolicy_Override(f FairshareSchedulingPolicy, scope constructs.Construct, id *string, props *FairshareSchedulingPolicyProps)

func NewFargateComputeEnvironment_Override added in v2.96.0

func NewFargateComputeEnvironment_Override(f FargateComputeEnvironment, scope constructs.Construct, id *string, props *FargateComputeEnvironmentProps)

func NewHostPathVolume_Override added in v2.96.0

func NewHostPathVolume_Override(h HostPathVolume, options *HostPathVolumeOptions)

func NewHostVolume_Override added in v2.96.0

func NewHostVolume_Override(h HostVolume, options *HostVolumeOptions)

func NewJobQueue_Override added in v2.96.0

func NewJobQueue_Override(j JobQueue, scope constructs.Construct, id *string, props *JobQueueProps)

func NewLinuxParameters_Override added in v2.96.0

func NewLinuxParameters_Override(l LinuxParameters, scope constructs.Construct, id *string, props *LinuxParametersProps)

Constructs a new instance of the LinuxParameters class.

func NewManagedEc2EcsComputeEnvironment_Override added in v2.96.0

func NewManagedEc2EcsComputeEnvironment_Override(m ManagedEc2EcsComputeEnvironment, scope constructs.Construct, id *string, props *ManagedEc2EcsComputeEnvironmentProps)

func NewManagedEc2EksComputeEnvironment_Override added in v2.96.0

func NewManagedEc2EksComputeEnvironment_Override(m ManagedEc2EksComputeEnvironment, scope constructs.Construct, id *string, props *ManagedEc2EksComputeEnvironmentProps)

func NewMultiNodeJobDefinition_Override added in v2.96.0

func NewMultiNodeJobDefinition_Override(m MultiNodeJobDefinition, scope constructs.Construct, id *string, props *MultiNodeJobDefinitionProps)

func NewOptimalInstanceType_Override added in v2.99.0

func NewOptimalInstanceType_Override(o OptimalInstanceType)

func NewReason_Override added in v2.96.0

func NewReason_Override(r Reason)

func NewRetryStrategy_Override added in v2.96.0

func NewRetryStrategy_Override(r RetryStrategy, action Action, on Reason)

func NewSecretPathVolume_Override added in v2.96.0

func NewSecretPathVolume_Override(s SecretPathVolume, options *SecretPathVolumeOptions)

func NewSecret_Override added in v2.96.0

func NewSecret_Override(s Secret)

func NewUnmanagedComputeEnvironment_Override added in v2.96.0

func NewUnmanagedComputeEnvironment_Override(u UnmanagedComputeEnvironment, scope constructs.Construct, id *string, props *UnmanagedComputeEnvironmentProps)

func OptimalInstanceType_Of added in v2.99.0

func OptimalInstanceType_Of(instanceClass awsec2.InstanceClass, instanceSize awsec2.InstanceSize) awsec2.InstanceType

Instance type for EC2 instances.

This class takes a combination of a class and size.

Be aware that not all combinations of class and size are available, and not all classes are available in all Regions.
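As a minimal sketch of the above, resolving an instance type from a class/size pair (the `M5`/`LARGE` combination is only an example):

```go
// Build an InstanceType for the given class/size combination.
// Not every combination exists, and availability varies by region.
instanceType := batch.OptimalInstanceType_Of(awsec2.InstanceClass_M5, awsec2.InstanceSize_LARGE)
```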

func SecretPathVolume_IsSecretPathVolume added in v2.96.0

func SecretPathVolume_IsSecretPathVolume(x interface{}) *bool

Returns `true` if `x` is a `SecretPathVolume`, `false` otherwise.

func UnmanagedComputeEnvironment_IsConstruct added in v2.96.0

func UnmanagedComputeEnvironment_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid `instanceof` and use this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func UnmanagedComputeEnvironment_IsOwnedResource added in v2.96.0

func UnmanagedComputeEnvironment_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func UnmanagedComputeEnvironment_IsResource added in v2.96.0

func UnmanagedComputeEnvironment_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

Types

type Action added in v2.96.0

type Action string

The Action to take when all specified conditions in a RetryStrategy are met.

Example:

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	RetryAttempts: jsii.Number(5),
	RetryStrategies: []batch.RetryStrategy{
		batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()),
	},
})
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_SPOT_INSTANCE_RECLAIMED()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_Custom(&CustomReason{
	OnExitCode: jsii.String("40*"),
	OnReason: jsii.String("some reason"),
})))
const (
	// The job will not retry.
	Action_EXIT Action = "EXIT"
	// The job will retry.
	//
	// It can be retried up to the number of times specified in `retryAttempts`.
	Action_RETRY Action = "RETRY"
)

type AllocationStrategy added in v2.96.0

type AllocationStrategy string

Determines how this compute environment chooses instances to spawn.

Example:

var vpc iVpc

computeEnv := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	AllocationStrategy: batch.AllocationStrategy_BEST_FIT,
})

See: https://aws.amazon.com/blogs/compute/optimizing-for-cost-availability-and-throughput-by-selecting-your-aws-batch-allocation-strategy/

const (
	// Batch chooses the lowest-cost instance type that fits all the jobs in the queue.
	//
	// If instances of that type are not available, the queue will not choose a new type;
	// instead, it will wait for the instance to become available.
	// This can stall your `Queue`, with your compute environment only using part of its max capacity
	// (or none at all) until the `BEST_FIT` instance becomes available.
	// This allocation strategy keeps costs lower but can limit scaling.
	// `BEST_FIT` isn't supported when updating compute environments.
	AllocationStrategy_BEST_FIT AllocationStrategy = "BEST_FIT"
	// This is the default Allocation Strategy if `spot` is `false` or unspecified.
	//
	// This strategy will examine the Jobs in the queue and choose whichever instance type meets the requirements
	// of the jobs in the queue and with the lowest cost per vCPU, just as `BEST_FIT`.
	// However, if not all of the capacity can be filled with this instance type,
	// it will choose a new next-best instance type to run any jobs that couldn’t fit into the `BEST_FIT` capacity.
	// To make the most use of this allocation strategy,
	// it is recommended to use as many instance classes as is feasible for your workload.
	AllocationStrategy_BEST_FIT_PROGRESSIVE AllocationStrategy = "BEST_FIT_PROGRESSIVE"
	// If your workflow tolerates interruptions, you should enable `spot` on your `ComputeEnvironment` and use `SPOT_CAPACITY_OPTIMIZED` (this is the default if `spot` is enabled).
	//
	// This will tell Batch to choose the instance types from the ones you’ve specified that have
	// the most spot capacity available to minimize the chance of interruption.
	// To get the most benefit from your spot instances,
	// you should allow Batch to choose from as many different instance types as possible.
	AllocationStrategy_SPOT_CAPACITY_OPTIMIZED AllocationStrategy = "SPOT_CAPACITY_OPTIMIZED"
	// The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.
	//
	// The Batch team recommends this over `SPOT_CAPACITY_OPTIMIZED` in most instances.
	AllocationStrategy_SPOT_PRICE_CAPACITY_OPTIMIZED AllocationStrategy = "SPOT_PRICE_CAPACITY_OPTIMIZED"
)
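For workloads that tolerate interruption, the guidance above can be sketched by enabling `spot` together with the price-and-capacity-optimized strategy (the construct id `mySpotComputeEnv` is illustrative):

```go
var vpc iVpc

// Spot instances with the strategy the Batch team recommends
// for most interruption-tolerant workloads.
spotEnv := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("mySpotComputeEnv"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Bool(true),
	AllocationStrategy: batch.AllocationStrategy_SPOT_PRICE_CAPACITY_OPTIMIZED,
})
```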

type CfnComputeEnvironment

type CfnComputeEnvironment interface {
	awscdk.CfnResource
	awscdk.IInspectable
	awscdk.ITaggable
	// Returns the compute environment ARN, such as `batch:us-east-1:111122223333:compute-environment/ComputeEnvironmentName`.
	AttrComputeEnvironmentArn() *string
	// Options for this resource, such as condition, update policy etc.
	CfnOptions() awscdk.ICfnResourceOptions
	CfnProperties() *map[string]interface{}
	// AWS resource type.
	CfnResourceType() *string
	// The name for your compute environment.
	ComputeEnvironmentName() *string
	SetComputeEnvironmentName(val *string)
	// The ComputeResources property type specifies details of the compute resources managed by the compute environment.
	ComputeResources() interface{}
	SetComputeResources(val interface{})
	// Returns: the stack trace of the point where this Resource was created from, sourced
	// from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most
	// node +internal+ entries filtered.
	CreationStack() *[]*string
	// The details for the Amazon EKS cluster that supports the compute environment.
	EksConfiguration() interface{}
	SetEksConfiguration(val interface{})
	// The logical ID for this CloudFormation stack element.
	//
	// The logical ID of the element
	// is calculated from the path of the resource node in the construct tree.
	//
	// To override this value, use `overrideLogicalId(newLogicalId)`.
	//
	// Returns: the logical ID as a stringified token. This value will only get
	// resolved during synthesis.
	LogicalId() *string
	// The tree node.
	Node() constructs.Node
	// Return a string that will be resolved to a CloudFormation `{ Ref }` for this element.
	//
	// If, by any chance, the intrinsic reference of a resource is not a string, you could
	// coerce it to an IResolvable through `Lazy.any({ produce: resource.ref })`.
	Ref() *string
	// Specifies whether the compute environment is replaced if an update is made that requires replacing the instances in the compute environment.
	ReplaceComputeEnvironment() interface{}
	SetReplaceComputeEnvironment(val interface{})
	// The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf.
	ServiceRole() *string
	SetServiceRole(val *string)
	// The stack in which this element is defined.
	//
	// CfnElements must be defined within a stack scope (directly or indirectly).
	Stack() awscdk.Stack
	// The state of the compute environment.
	State() *string
	SetState(val *string)
	// Tag Manager which manages the tags for this resource.
	Tags() awscdk.TagManager
	// The tags applied to the compute environment.
	TagsRaw() *map[string]*string
	SetTagsRaw(val *map[string]*string)
	// The type of the compute environment: `MANAGED` or `UNMANAGED` .
	Type() *string
	SetType(val *string)
	// The maximum number of vCPUs for an unmanaged compute environment.
	UnmanagedvCpus() *float64
	SetUnmanagedvCpus(val *float64)
	// Return properties modified after initiation.
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	//
	// Deprecated: use `updatedProperties`.
	UpdatedProperites() *map[string]interface{}
	// Return properties modified after initiation.
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperties() *map[string]interface{}
	// Specifies the infrastructure update policy for the compute environment.
	UpdatePolicy() interface{}
	SetUpdatePolicy(val interface{})
	// Syntactic sugar for `addOverride(path, undefined)`.
	AddDeletionOverride(path *string)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	//
	// This can be used for resources across stacks (or nested stack) boundaries
	// and the dependency will automatically be transferred to the relevant scope.
	AddDependency(target awscdk.CfnResource)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	// Deprecated: use addDependency.
	AddDependsOn(target awscdk.CfnResource)
	// Add a value to the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	AddMetadata(key *string, value interface{})
	// Adds an override to the synthesized CloudFormation resource.
	//
	// To add a
	// property override, either use `addPropertyOverride` or prefix `path` with
	// "Properties." (i.e. `Properties.TopicName`).
	//
	// If the override is nested, separate each nested level using a dot (.) in the path parameter.
	// If there is an array as part of the nesting, specify the index in the path.
	//
	// To include a literal `.` in the property name, prefix with a `\`. In most
	// programming languages you will need to write this as `"\\."` because the
	// `\` itself will need to be escaped.
	//
	// For example,
	// ```typescript
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes', ['myattribute']);
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.1.ProjectionType', 'INCLUDE');
	// ```
	// would add the overrides
	// ```json
	// "Properties": {
	//   "GlobalSecondaryIndexes": [
	//     {
	//       "Projection": {
	//         "NonKeyAttributes": [ "myattribute" ]
	//         ...
	//       }
	//       ...
	//     },
	//     {
	//       "ProjectionType": "INCLUDE"
	//       ...
	//     },
	//   ]
	//   ...
	// }
	// ```
	//
	// The `value` argument to `addOverride` will not be processed or translated
	// in any way. Pass raw JSON values in here with the correct capitalization
	// for CloudFormation. If you pass CDK classes or structs, they will be
	// rendered with lowercased key names, and CloudFormation will reject the
	// template.
	AddOverride(path *string, value interface{})
	// Adds an override that deletes the value of a property from the resource definition.
	AddPropertyDeletionOverride(propertyPath *string)
	// Adds an override to a resource property.
	//
	// Syntactic sugar for `addOverride("Properties.<...>", value)`.
	AddPropertyOverride(propertyPath *string, value interface{})
	// Sets the deletion policy of the resource based on the removal policy specified.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`). In some
	// cases, a snapshot can be taken of the resource prior to deletion
	// (`RemovalPolicy.SNAPSHOT`). A list of resources that support this policy
	// can be found in the following link:
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html#aws-attribute-deletionpolicy-options
	//
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy, options *awscdk.RemovalPolicyOptions)
	// Returns a token for a runtime attribute of this resource.
	//
	// Ideally, use generated attribute accessors (e.g. `resource.arn`), but this can be used for future compatibility
	// in case there is no generated attribute.
	GetAtt(attributeName *string, typeHint awscdk.ResolutionTypeHint) awscdk.Reference
	// Retrieve a value from the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	GetMetadata(key *string) interface{}
	// Examines the CloudFormation resource and discloses attributes.
	Inspect(inspector awscdk.TreeInspector)
	// Retrieves an array of resources this resource depends on.
	//
	// This assembles dependencies on resources across stacks (including nested stacks)
	// automatically.
	ObtainDependencies() *[]interface{}
	// Get a shallow copy of dependencies between this resource and other resources in the same stack.
	ObtainResourceDependencies() *[]awscdk.CfnResource
	// Overrides the auto-generated logical ID with a specific ID.
	OverrideLogicalId(newLogicalId *string)
	// Indicates that this resource no longer depends on another resource.
	//
	// This can be used for resources across stacks (including nested stacks)
	// and the dependency will automatically be removed from the relevant scope.
	RemoveDependency(target awscdk.CfnResource)
	RenderProperties(props *map[string]interface{}) *map[string]interface{}
	// Replaces one dependency with another.
	ReplaceDependency(target awscdk.CfnResource, newTarget awscdk.CfnResource)
	// Can be overridden by subclasses to determine if this resource will be rendered into the cloudformation template.
	//
	// Returns: `true` if the resource should be included or `false` if the resource
	// should be omitted.
	ShouldSynthesize() *bool
	// Returns a string representation of this construct.
	//
	// Returns: a string representation of this resource.
	ToString() *string
	ValidateProperties(_properties interface{})
}
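
The TypeScript `addOverride` example in the comments above translates to Go roughly as follows. This is an illustrative sketch: `cfnResource` stands for any `awscdk.CfnResource` (such as a `CfnComputeEnvironment`) obtained elsewhere, and the paths and values are the same placeholders used in the TypeScript example.

```go
// Equivalent of the TypeScript addOverride example: raw JSON values are
// passed through to the synthesized template without translation.
cfnResource.AddOverride(jsii.String("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes"), []string{"myattribute"})
cfnResource.AddOverride(jsii.String("Properties.GlobalSecondaryIndexes.1.ProjectionType"), jsii.String("INCLUDE"))
```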

The `AWS::Batch::ComputeEnvironment` resource defines your AWS Batch compute environment.

You can define `MANAGED` or `UNMANAGED` compute environments. `MANAGED` compute environments can use Amazon EC2 or AWS Fargate resources. `UNMANAGED` compute environments can only use EC2 resources. For more information, see [Compute Environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) in the *AWS Batch User Guide*.

In a managed compute environment, AWS Batch manages the capacity and instance types of the compute resources within the environment. This is based on the compute resource specification that you define or the [launch template](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html) that you specify when you create the compute environment. You can choose either to use EC2 On-Demand Instances and EC2 Spot Instances, or to use Fargate and Fargate Spot capacity in your managed compute environment. You can optionally set a maximum price so that Spot Instances only launch when the Spot Instance price is below a specified percentage of the On-Demand price.

> Multi-node parallel jobs are not supported on Spot Instances.

In an unmanaged compute environment, you manage your own EC2 compute resources and have considerable flexibility in how you configure them. For example, you can use a custom AMI. However, you need to verify that your AMI meets the Amazon ECS container instance AMI specification. For more information, see [container instance AMIs](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container_instance_AMIs.html) in the *Amazon Elastic Container Service Developer Guide*. After you have created your unmanaged compute environment, you can use the [DescribeComputeEnvironments](https://docs.aws.amazon.com/batch/latest/APIReference/API_DescribeComputeEnvironments.html) operation to find the Amazon ECS cluster that is associated with it. Then, manually launch your container instances into that Amazon ECS cluster. For more information, see [Launching an Amazon ECS container instance](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html) in the *Amazon Elastic Container Service Developer Guide*.

> To create a compute environment that uses EKS resources, the caller must have permissions to call `eks:DescribeCluster`.

> AWS Batch doesn't upgrade the AMIs in a compute environment after it's created, except under specific conditions. For example, it doesn't automatically update the AMIs when a newer version of the Amazon ECS optimized AMI is available. Therefore, you're responsible for the management of the guest operating system (including updates and security patches) and any additional application software or utilities that you install on the compute resources. There are two ways to use a new AMI for your AWS Batch jobs. The original method is to complete these steps:
>
> - Create a new compute environment with the new AMI.
> - Add the compute environment to an existing job queue.
> - Remove the earlier compute environment from your job queue.
> - Delete the earlier compute environment.
>
> In April 2022, AWS Batch added enhanced support for updating compute environments. For example, the `UpdateComputeEnvironment` API lets you use the `ReplaceComputeEnvironment` property to dynamically update compute environment parameters such as the launch template or instance type without replacement. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide*.
>
> To use the enhanced updating of compute environments to update AMIs, follow these rules:
>
> - Either don't set the [ServiceRole](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-servicerole) property, or set it to the *AWSServiceRoleForBatch* service-linked role.
> - Set the [AllocationStrategy](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy) property to `BEST_FIT_PROGRESSIVE` or `SPOT_CAPACITY_OPTIMIZED`.
> - Set the [ReplaceComputeEnvironment](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-replacecomputeenvironment) property to `false`. (Set it to `true` if the compute environment uses the `BEST_FIT` allocation strategy.) If `ReplaceComputeEnvironment` is set to `false`, you might receive an error message when you update the CloudFormation template for a compute environment. This issue occurs if the updated `desiredvcpus` value is less than the current `desiredvcpus` value. As a workaround, delete the `desiredvcpus` value from the updated template, or use the `minvcpus` property to manage the number of vCPUs. For more information, see [Error message when you update the `DesiredvCpus` setting](https://docs.aws.amazon.com/batch/latest/userguide/troubleshooting.html#error-desired-vcpus-update).
> - Set the [UpdateToLatestImageVersion](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-updatetolatestimageversion) property to `true`. This property is used when you update a compute environment; it is ignored when you create one.
> - Either don't specify an image ID in the [ImageId](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid) or [ImageIdOverride](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-ec2configurationobject.html#cfn-batch-computeenvironment-ec2configurationobject-imageidoverride) properties, or in the launch template identified by the [LaunchTemplate](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate) property. In that case, AWS Batch selects the latest Amazon ECS optimized AMI that's supported by AWS Batch at the time the infrastructure update is initiated. Alternatively, you can specify the AMI ID in the `ImageId` or `ImageIdOverride` properties, or in the launch template identified by the `LaunchTemplate` property. Changing any of these properties triggers an infrastructure update.
>
> If these rules are followed, any update that triggers an infrastructure update causes the AMI ID to be re-selected. If the [Version](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html#cfn-batch-computeenvironment-launchtemplatespecification-version) property of the [LaunchTemplateSpecification](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html) is set to `$Latest` or `$Default`, the latest or default version of the launch template is evaluated at the time of the infrastructure update, even if the `LaunchTemplateSpecification` wasn't updated.
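
As a concrete illustration of these rules, the sketch below configures a managed EC2 compute environment for enhanced updates using the `CfnComputeEnvironment` construct from this package. It is a hedged example, not a definitive recipe: the construct scope (`this`), the subnet ID, and the instance profile name are placeholders you must replace, and omitting `ServiceRole` assumes the AWS Batch service-linked role exists in your account.

```go
import "github.com/aws/aws-cdk-go/awscdk"

// Enhanced-update configuration: no ServiceRole (the service-linked role is
// used), BEST_FIT_PROGRESSIVE allocation, ReplaceComputeEnvironment false,
// UpdateToLatestImageVersion true, and no ImageId, so AWS Batch re-selects
// the latest supported ECS optimized AMI on each infrastructure update.
awscdk.Aws_batch.NewCfnComputeEnvironment(this, jsii.String("UpdatableComputeEnv"), &CfnComputeEnvironmentProps{
	Type:                      jsii.String("MANAGED"),
	ReplaceComputeEnvironment: jsii.Boolean(false),
	ComputeResources: &ComputeResourcesProperty{
		Type:               jsii.String("EC2"),
		AllocationStrategy: jsii.String("BEST_FIT_PROGRESSIVE"),
		MinvCpus:           jsii.Number(0),
		MaxvCpus:           jsii.Number(256),
		InstanceTypes:      []*string{jsii.String("optimal")},
		InstanceRole:       jsii.String("ecsInstanceRole"),             // placeholder instance profile
		Subnets:            []*string{jsii.String("subnet-placeholder")}, // placeholder subnet ID
		UpdateToLatestImageVersion: jsii.Boolean(true),
	},
})
```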

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnComputeEnvironment := awscdk.Aws_batch.NewCfnComputeEnvironment(this, jsii.String("MyCfnComputeEnvironment"), &CfnComputeEnvironmentProps{
	Type: jsii.String("type"),

	// the properties below are optional
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	ComputeResources: &ComputeResourcesProperty{
		MaxvCpus: jsii.Number(123),
		Subnets: []*string{
			jsii.String("subnets"),
		},
		Type: jsii.String("type"),

		// the properties below are optional
		AllocationStrategy: jsii.String("allocationStrategy"),
		BidPercentage: jsii.Number(123),
		DesiredvCpus: jsii.Number(123),
		Ec2Configuration: []interface{}{
			&Ec2ConfigurationObjectProperty{
				ImageType: jsii.String("imageType"),

				// the properties below are optional
				ImageIdOverride: jsii.String("imageIdOverride"),
				ImageKubernetesVersion: jsii.String("imageKubernetesVersion"),
			},
		},
		Ec2KeyPair: jsii.String("ec2KeyPair"),
		ImageId: jsii.String("imageId"),
		InstanceRole: jsii.String("instanceRole"),
		InstanceTypes: []*string{
			jsii.String("instanceTypes"),
		},
		LaunchTemplate: &LaunchTemplateSpecificationProperty{
			LaunchTemplateId: jsii.String("launchTemplateId"),
			LaunchTemplateName: jsii.String("launchTemplateName"),
			Version: jsii.String("version"),
		},
		MinvCpus: jsii.Number(123),
		PlacementGroup: jsii.String("placementGroup"),
		SecurityGroupIds: []*string{
			jsii.String("securityGroupIds"),
		},
		SpotIamFleetRole: jsii.String("spotIamFleetRole"),
		Tags: map[string]*string{
			"tagsKey": jsii.String("tags"),
		},
		UpdateToLatestImageVersion: jsii.Boolean(false),
	},
	EksConfiguration: &EksConfigurationProperty{
		EksClusterArn: jsii.String("eksClusterArn"),
		KubernetesNamespace: jsii.String("kubernetesNamespace"),
	},
	ReplaceComputeEnvironment: jsii.Boolean(false),
	ServiceRole: jsii.String("serviceRole"),
	State: jsii.String("state"),
	Tags: map[string]*string{
		"tagsKey": jsii.String("tags"),
	},
	UnmanagedvCpus: jsii.Number(123),
	UpdatePolicy: &UpdatePolicyProperty{
		JobExecutionTimeoutMinutes: jsii.Number(123),
		TerminateJobsOnUpdate: jsii.Boolean(false),
	},
})

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html

func NewCfnComputeEnvironment

func NewCfnComputeEnvironment(scope constructs.Construct, id *string, props *CfnComputeEnvironmentProps) CfnComputeEnvironment

type CfnComputeEnvironmentProps

type CfnComputeEnvironmentProps struct {
	// The type of the compute environment: `MANAGED` or `UNMANAGED` .
	//
	// For more information, see [Compute Environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-type
	//
	Type *string `field:"required" json:"type" yaml:"type"`
	// The name for your compute environment.
	//
	// It can be up to 128 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-computeenvironmentname
	//
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// The ComputeResources property type specifies details of the compute resources managed by the compute environment.
	//
	// This parameter is required for managed compute environments. For more information, see [Compute Environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) in the *AWS Batch User Guide*.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-computeresources
	//
	ComputeResources interface{} `field:"optional" json:"computeResources" yaml:"computeResources"`
	// The details for the Amazon EKS cluster that supports the compute environment.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-eksconfiguration
	//
	EksConfiguration interface{} `field:"optional" json:"eksConfiguration" yaml:"eksConfiguration"`
	// Specifies whether the compute environment is replaced if an update is made that requires replacing the instances in the compute environment.
	//
	// The default value is `true` . To enable more properties to be updated, set this property to `false` . When changing the value of this property to `false` , do not change any other properties at the same time. If other properties are changed at the same time, and the change needs to be rolled back but it can't, it's possible for the stack to go into the `UPDATE_ROLLBACK_FAILED` state. You can't update a stack that is in the `UPDATE_ROLLBACK_FAILED` state. However, if you can continue to roll it back, you can return the stack to its original settings and then try to update it again. For more information, see [Continue rolling back an update](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html) in the *AWS CloudFormation User Guide* .
	//
	// The properties that can't be changed without replacing the compute environment are in the [`ComputeResources`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html) property type: [`AllocationStrategy`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy) , [`BidPercentage`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-bidpercentage) , [`Ec2Configuration`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2configuration) , [`Ec2KeyPair`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair) , [`ImageId`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid) , [`InstanceRole`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancerole) , [`InstanceTypes`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancetypes) ,
	// [`LaunchTemplate`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate) , [`MaxvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-maxvcpus) , [`MinvCpus`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-minvcpus) , [`PlacementGroup`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-placementgroup) , [`SecurityGroupIds`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-securitygroupids) , [`Subnets`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-subnets) , [`Tags`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-tags) , [`Type`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-type) , and [`UpdateToLatestImageVersion`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-updatetolatestimageversion) .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-replacecomputeenvironment
	//
	// Default: - true.
	//
	ReplaceComputeEnvironment interface{} `field:"optional" json:"replaceComputeEnvironment" yaml:"replaceComputeEnvironment"`
	// The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf.
	//
	// For more information, see [AWS Batch service IAM role](https://docs.aws.amazon.com/batch/latest/userguide/service_IAM_role.html) in the *AWS Batch User Guide* .
	//
	// > If your account already created the AWS Batch service-linked role, that role is used by default for your compute environment unless you specify a different role here. If the AWS Batch service-linked role doesn't exist in your account, and no role is specified here, the service attempts to create the AWS Batch service-linked role in your account.
	//
	// If your specified role has a path other than `/` , then you must specify either the full role ARN (recommended) or prefix the role name with the path. For example, if a role with the name `bar` has a path of `/foo/` , specify `/foo/bar` as the role name. For more information, see [Friendly names and paths](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-friendly-names) in the *IAM User Guide* .
	//
	// > Depending on how you created your AWS Batch service role, its ARN might contain the `service-role` path prefix. When you only specify the name of the service role, AWS Batch assumes that your ARN doesn't use the `service-role` path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-servicerole
	//
	ServiceRole *string `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// The state of the compute environment.
	//
	// If the state is `ENABLED` , then the compute environment accepts jobs from a queue and can scale out automatically based on queues.
	//
	// If the state is `ENABLED` , then the AWS Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.
	//
	// If the state is `DISABLED` , then the AWS Batch scheduler doesn't attempt to place jobs within the environment. Jobs in a `STARTING` or `RUNNING` state continue to progress normally. Managed compute environments in the `DISABLED` state don't scale out.
	//
	// > Compute environments in a `DISABLED` state may continue to incur billing charges. To prevent additional charges, turn off and then delete the compute environment. For more information, see [State](https://docs.aws.amazon.com/batch/latest/userguide/compute_environment_parameters.html#compute_environment_state) in the *AWS Batch User Guide* .
	//
	// When an instance is idle, the instance scales down to the `minvCpus` value. However, the instance size doesn't change. For example, consider a `c5.8xlarge` instance with a `minvCpus` value of `4` and a `desiredvCpus` value of `36` . This instance doesn't scale down to a `c5.large` instance.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-state
	//
	State *string `field:"optional" json:"state" yaml:"state"`
	// The tags applied to the compute environment.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-tags
	//
	Tags *map[string]*string `field:"optional" json:"tags" yaml:"tags"`
	// The maximum number of vCPUs for an unmanaged compute environment.
	//
	// This parameter is only used for fair share scheduling to reserve vCPU capacity for new share identifiers. If this parameter isn't provided for a fair share job queue, no vCPU capacity is reserved.
	//
	// > This parameter is only supported when the `type` parameter is set to `UNMANAGED` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-unmanagedvcpus
	//
	UnmanagedvCpus *float64 `field:"optional" json:"unmanagedvCpus" yaml:"unmanagedvCpus"`
	// Specifies the infrastructure update policy for the compute environment.
	//
	// For more information about infrastructure updates, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html#cfn-batch-computeenvironment-updatepolicy
	//
	UpdatePolicy interface{} `field:"optional" json:"updatePolicy" yaml:"updatePolicy"`
}

Properties for defining a `CfnComputeEnvironment`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnComputeEnvironmentProps := &CfnComputeEnvironmentProps{
	Type: jsii.String("type"),

	// the properties below are optional
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	ComputeResources: &ComputeResourcesProperty{
		MaxvCpus: jsii.Number(123),
		Subnets: []*string{
			jsii.String("subnets"),
		},
		Type: jsii.String("type"),

		// the properties below are optional
		AllocationStrategy: jsii.String("allocationStrategy"),
		BidPercentage: jsii.Number(123),
		DesiredvCpus: jsii.Number(123),
		Ec2Configuration: []interface{}{
			&Ec2ConfigurationObjectProperty{
				ImageType: jsii.String("imageType"),

				// the properties below are optional
				ImageIdOverride: jsii.String("imageIdOverride"),
				ImageKubernetesVersion: jsii.String("imageKubernetesVersion"),
			},
		},
		Ec2KeyPair: jsii.String("ec2KeyPair"),
		ImageId: jsii.String("imageId"),
		InstanceRole: jsii.String("instanceRole"),
		InstanceTypes: []*string{
			jsii.String("instanceTypes"),
		},
		LaunchTemplate: &LaunchTemplateSpecificationProperty{
			LaunchTemplateId: jsii.String("launchTemplateId"),
			LaunchTemplateName: jsii.String("launchTemplateName"),
			Version: jsii.String("version"),
		},
		MinvCpus: jsii.Number(123),
		PlacementGroup: jsii.String("placementGroup"),
		SecurityGroupIds: []*string{
			jsii.String("securityGroupIds"),
		},
		SpotIamFleetRole: jsii.String("spotIamFleetRole"),
		Tags: map[string]*string{
			"tagsKey": jsii.String("tags"),
		},
		UpdateToLatestImageVersion: jsii.Boolean(false),
	},
	EksConfiguration: &EksConfigurationProperty{
		EksClusterArn: jsii.String("eksClusterArn"),
		KubernetesNamespace: jsii.String("kubernetesNamespace"),
	},
	ReplaceComputeEnvironment: jsii.Boolean(false),
	ServiceRole: jsii.String("serviceRole"),
	State: jsii.String("state"),
	Tags: map[string]*string{
		"tagsKey": jsii.String("tags"),
	},
	UnmanagedvCpus: jsii.Number(123),
	UpdatePolicy: &UpdatePolicyProperty{
		JobExecutionTimeoutMinutes: jsii.Number(123),
		TerminateJobsOnUpdate: jsii.Boolean(false),
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-computeenvironment.html
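Beyond the placeholder skeleton above, a more concrete sketch of a managed Spot compute environment might look like the following. This is illustrative only; the subnet ID, security group ID, and role name are hypothetical placeholders, not values defined by this module:

// A managed Spot compute environment (hypothetical values).
import "github.com/aws/aws-cdk-go/awscdk"

spotEnvProps := &CfnComputeEnvironmentProps{
	Type: jsii.String("MANAGED"),
	ComputeResources: &ComputeResourcesProperty{
		Type: jsii.String("SPOT"),
		// Recommended Spot strategy; a Spot Fleet role is only required
		// with BEST_FIT or when no strategy is specified.
		AllocationStrategy: jsii.String("SPOT_PRICE_CAPACITY_OPTIMIZED"),
		MinvCpus:           jsii.Number(0),
		MaxvCpus:           jsii.Number(256),
		InstanceTypes:      []*string{jsii.String("optimal")},
		Subnets:            []*string{jsii.String("subnet-00000000")},
		SecurityGroupIds:   []*string{jsii.String("sg-00000000")},
		InstanceRole:       jsii.String("ecsInstanceRole"),
	},
	State: jsii.String("ENABLED"),
}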

type CfnComputeEnvironment_ComputeResourcesProperty

type CfnComputeEnvironment_ComputeResourcesProperty struct {
	// The maximum number of Amazon EC2 vCPUs that an environment can reach.
	//
	// > With `BEST_FIT_PROGRESSIVE` , `SPOT_CAPACITY_OPTIMIZED` and `SPOT_PRICE_CAPACITY_OPTIMIZED` (recommended) strategies using On-Demand or Spot Instances, and the `BEST_FIT` strategy using Spot Instances, AWS Batch might need to exceed `maxvCpus` to meet your capacity requirements. In this event, AWS Batch never exceeds `maxvCpus` by more than a single instance.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-maxvcpus
	//
	MaxvCpus *float64 `field:"required" json:"maxvCpus" yaml:"maxvCpus"`
	// The VPC subnets where the compute resources are launched.
	//
	// Fargate compute resources can contain up to 16 subnets. For Fargate compute resources, providing an empty list will be handled as if this parameter wasn't specified and no change is made. For EC2 compute resources, providing an empty list removes the VPC subnets from the compute resource. For more information, see [VPCs and subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) in the *Amazon VPC User Guide* .
	//
	// When updating a compute environment, changing the VPC subnets requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// > AWS Batch on Amazon EC2 and AWS Batch on Amazon EKS support Local Zones. For more information, see [Local Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-local-zones) in the *Amazon EC2 User Guide for Linux Instances* , [Amazon EKS and AWS Local Zones](https://docs.aws.amazon.com/eks/latest/userguide/local-zones.html) in the *Amazon EKS User Guide* and [Amazon ECS clusters in Local Zones, Wavelength Zones, and AWS Outposts](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-regions-zones.html#clusters-local-zones) in the *Amazon ECS Developer Guide* .
	// >
	// > AWS Batch on Fargate doesn't currently support Local Zones.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-subnets
	//
	Subnets *[]*string `field:"required" json:"subnets" yaml:"subnets"`
	// The type of compute environment: `EC2` , `SPOT` , `FARGATE` , or `FARGATE_SPOT` .
	//
	// For more information, see [Compute environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) in the *AWS Batch User Guide* .
	//
	// If you choose `SPOT` , you must also specify an Amazon EC2 Spot Fleet role with the `spotIamFleetRole` parameter. For more information, see [Amazon EC2 spot fleet role](https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html) in the *AWS Batch User Guide* .
	//
	// When updating compute environment, changing the type of a compute environment requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// When updating the type of a compute environment, changing between `EC2` and `SPOT` or between `FARGATE` and `FARGATE_SPOT` will initiate an infrastructure update, but if you switch between `EC2` and `FARGATE` , AWS CloudFormation will create a new compute environment.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-type
	//
	Type *string `field:"required" json:"type" yaml:"type"`
	// The allocation strategy to use for the compute resource if not enough instances of the best fitting instance type can be allocated.
	//
	// This might be because of availability of the instance type in the Region or [Amazon EC2 service limits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html) . For more information, see [Allocation strategies](https://docs.aws.amazon.com/batch/latest/userguide/allocation-strategies.html) in the *AWS Batch User Guide* .
	//
	// When updating a compute environment, changing the allocation strategy requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* . `BEST_FIT` is not supported when updating a compute environment.
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources, and shouldn't be specified.
	//
	// - **BEST_FIT (default)** - AWS Batch selects an instance type that best fits the needs of the jobs with a preference for the lowest-cost instance type. If additional instances of the selected instance type aren't available, AWS Batch waits for the additional instances to be available. If there aren't enough instances available, or if the user is reaching [Amazon EC2 service limits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html) , then additional jobs aren't run until the currently running jobs have completed. This allocation strategy keeps costs lower but can limit scaling. If you are using Spot Fleets with `BEST_FIT` , then the Spot Fleet IAM role must be specified.
	// - **BEST_FIT_PROGRESSIVE** - AWS Batch will select additional instance types that are large enough to meet the requirements of the jobs in the queue, with a preference for instance types with a lower cost per unit vCPU. If additional instances of the previously selected instance types aren't available, AWS Batch will select new instance types.
	// - **SPOT_CAPACITY_OPTIMIZED** - AWS Batch will select one or more instance types that are large enough to meet the requirements of the jobs in the queue, with a preference for instance types that are less likely to be interrupted. This allocation strategy is only available for Spot Instance compute resources.
	// - **SPOT_PRICE_CAPACITY_OPTIMIZED** - The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price. This allocation strategy is only available for Spot Instance compute resources.
	//
	// > We recommend that you use `SPOT_PRICE_CAPACITY_OPTIMIZED` rather than `SPOT_CAPACITY_OPTIMIZED` in most instances.
	//
	// With `BEST_FIT_PROGRESSIVE` , `SPOT_CAPACITY_OPTIMIZED` , and `SPOT_PRICE_CAPACITY_OPTIMIZED` allocation strategies using On-Demand or Spot Instances, and the `BEST_FIT` strategy using Spot Instances, AWS Batch might need to go above `maxvCpus` to meet your capacity requirements. In this event, AWS Batch never exceeds `maxvCpus` by more than a single instance.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-allocationstrategy
	//
	AllocationStrategy *string `field:"optional" json:"allocationStrategy" yaml:"allocationStrategy"`
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be less than 20% of the current On-Demand price for that Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum percentage. For most use cases, we recommend leaving this field empty.
	//
	// When updating a compute environment, changing the bid percentage requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-bidpercentage
	//
	BidPercentage *float64 `field:"optional" json:"bidPercentage" yaml:"bidPercentage"`
	// The desired number of vCPUS in the compute environment.
	//
	// AWS Batch modifies this value between the minimum and maximum values based on job queue demand.
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it. > AWS Batch doesn't support changing the desired number of vCPUs of an existing compute environment. Don't specify this parameter for compute environments using Amazon EKS clusters. > When you update the `desiredvCpus` setting, the value must be between the `minvCpus` and `maxvCpus` values.
	// >
	// > Additionally, the updated `desiredvCpus` value must be greater than or equal to the current `desiredvCpus` value. For more information, see [Troubleshooting AWS Batch](https://docs.aws.amazon.com/batch/latest/userguide/troubleshooting.html#error-desired-vcpus-update) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-desiredvcpus
	//
	DesiredvCpus *float64 `field:"optional" json:"desiredvCpus" yaml:"desiredvCpus"`
	// Provides information used to select Amazon Machine Images (AMIs) for EC2 instances in the compute environment.
	//
	// If `Ec2Configuration` isn't specified, the default is `ECS_AL2` .
	//
	// When updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* . To remove the EC2 configuration and any custom AMI ID specified in `imageIdOverride` , set this value to an empty string.
	//
	// One or two values can be provided.
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2configuration
	//
	Ec2Configuration interface{} `field:"optional" json:"ec2Configuration" yaml:"ec2Configuration"`
	// The Amazon EC2 key pair that's used for instances launched in the compute environment.
	//
	// You can use this key pair to log in to your instances with SSH. To remove the Amazon EC2 key pair, set this value to an empty string.
	//
	// When updating a compute environment, changing the EC2 key pair requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-ec2keypair
	//
	Ec2KeyPair *string `field:"optional" json:"ec2KeyPair" yaml:"ec2KeyPair"`
	// The Amazon Machine Image (AMI) ID used for instances launched in the compute environment.
	//
	// This parameter is overridden by the `imageIdOverride` member of the `Ec2Configuration` structure. To remove the custom AMI ID and use the default AMI ID, set this value to an empty string.
	//
	// When updating a compute environment, changing the AMI ID requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it. > The AMI that you choose for a compute environment must match the architecture of the instance types that you intend to use for that compute environment. For example, if your compute environment uses A1 instance types, the compute resource AMI that you choose must support ARM instances. Amazon ECS vends both x86 and ARM versions of the Amazon ECS-optimized Amazon Linux 2 AMI. For more information, see [Amazon ECS-optimized Amazon Linux 2 AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#ecs-optimized-ami-linux-variants) in the *Amazon Elastic Container Service Developer Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-imageid
	//
	ImageId *string `field:"optional" json:"imageId" yaml:"imageId"`
	// The Amazon ECS instance profile applied to Amazon EC2 instances in a compute environment.
	//
	// Required for Amazon EC2 instances. You can specify the short name or full Amazon Resource Name (ARN) of an instance profile. For example, `*ecsInstanceRole*` or `arn:aws:iam:: *<aws_account_id>* :instance-profile/ *ecsInstanceRole*` . For more information, see [Amazon ECS instance role](https://docs.aws.amazon.com/batch/latest/userguide/instance_IAM_role.html) in the *AWS Batch User Guide* .
	//
	// When updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancerole
	//
	InstanceRole *string `field:"optional" json:"instanceRole" yaml:"instanceRole"`
	// The instance types that can be launched.
	//
	// You can specify instance families to launch any instance type within those families (for example, `c5` or `p3` ), or you can specify specific sizes within a family (such as `c5.8xlarge` ). You can also choose `optimal` to select instance types (from the C4, M4, and R4 instance families) that match the demand of your job queues.
	//
	// When updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it. > When you create a compute environment, the instance types that you select for the compute environment must share the same architecture. For example, you can't mix x86 and ARM instances in the same compute environment. > Currently, `optimal` uses instance types from the C4, M4, and R4 instance families. In Regions that don't have instance types from those instance families, instance types from the C5, M5, and R5 instance families are used.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-instancetypes
	//
	InstanceTypes *[]*string `field:"optional" json:"instanceTypes" yaml:"instanceTypes"`
	// The launch template to use for your compute resources.
	//
	// Any other compute resource parameters that you specify in a [CreateComputeEnvironment](https://docs.aws.amazon.com/batch/latest/APIReference/API_CreateComputeEnvironment.html) API operation override the same parameters in the launch template. You must specify either the launch template ID or launch template name in the request, but not both. For more information, see [Launch Template Support](https://docs.aws.amazon.com/batch/latest/userguide/launch-templates.html) in the *AWS Batch User Guide* . Removing the launch template from a compute environment will not remove the AMI specified in the launch template. In order to update the AMI specified in a launch template, the `updateToLatestImageVersion` parameter must be set to `true` .
	//
	// When updating a compute environment, changing the launch template requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// > This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-launchtemplate
	//
	LaunchTemplate interface{} `field:"optional" json:"launchTemplate" yaml:"launchTemplate"`
	// The minimum number of vCPUs that an environment should maintain (even if the compute environment is `DISABLED` ).
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-minvcpus
	//
	MinvCpus *float64 `field:"optional" json:"minvCpus" yaml:"minvCpus"`
	// The Amazon EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to your compute environment, you should consider creating a cluster placement group and associate it with your compute resources. This keeps your multi-node parallel job on a logical grouping of instances within a single Availability Zone with high network flow potential. For more information, see [Placement groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) in the *Amazon EC2 User Guide for Linux Instances* .
	//
	// When updating a compute environment, changing the placement group requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-placementgroup
	//
	PlacementGroup *string `field:"optional" json:"placementGroup" yaml:"placementGroup"`
	// The Amazon EC2 security groups that are associated with instances launched in the compute environment.
	//
	// This parameter is required for Fargate compute resources, where it can contain up to 5 security groups. For Fargate compute resources, providing an empty list is handled as if this parameter wasn't specified and no change is made. For EC2 compute resources, providing an empty list removes the security groups from the compute resource.
	//
	// When updating a compute environment, changing the EC2 security groups requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-securitygroupids
	//
	SecurityGroupIds *[]*string `field:"optional" json:"securityGroupIds" yaml:"securityGroupIds"`
	// The Amazon Resource Name (ARN) of the Amazon EC2 Spot Fleet IAM role applied to a `SPOT` compute environment.
	//
	// This role is required if the allocation strategy is set to `BEST_FIT` or if the allocation strategy isn't specified. For more information, see [Amazon EC2 spot fleet role](https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html) in the *AWS Batch User Guide* .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it. > To tag your Spot Instances on creation, the Spot Fleet IAM role specified here must use the newer *AmazonEC2SpotFleetTaggingRole* managed policy. The previously recommended *AmazonEC2SpotFleetRole* managed policy doesn't have the required permissions to tag Spot Instances. For more information, see [Spot instances not tagged on creation](https://docs.aws.amazon.com/batch/latest/userguide/troubleshooting.html#spot-instance-no-tag) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-spotiamfleetrole
	//
	SpotIamFleetRole *string `field:"optional" json:"spotIamFleetRole" yaml:"spotIamFleetRole"`
	// Key-value pair tags to be applied to EC2 resources that are launched in the compute environment.
	//
	// For AWS Batch, these take the form of `"String1": "String2"` , where `String1` is the tag key and `String2` is the tag value. For example, `{ "Name": "Batch Instance - C4OnDemand" }` . This is helpful for recognizing your Batch instances in the Amazon EC2 console. These tags aren't seen when using the AWS Batch `ListTagsForResource` API operation.
	//
	// When updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't specify it.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-tags
	//
	Tags *map[string]*string `field:"optional" json:"tags" yaml:"tags"`
	// Specifies whether the AMI ID is updated to the latest one that's supported by AWS Batch when the compute environment has an infrastructure update.
	//
	// The default value is `false` .
	//
	// > An AMI ID can either be specified in the `imageId` or `imageIdOverride` parameters or be determined by the launch template that's specified in the `launchTemplate` parameter. If an AMI ID is specified in any of these ways, this parameter is ignored. For more information about how to update AMI IDs during an infrastructure update, see [Updating the AMI ID](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html#updating-compute-environments-ami) in the *AWS Batch User Guide* .
	//
	// When updating a compute environment, changing this setting requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html#cfn-batch-computeenvironment-computeresources-updatetolatestimageversion
	//
	// Default: - false.
	//
	UpdateToLatestImageVersion interface{} `field:"optional" json:"updateToLatestImageVersion" yaml:"updateToLatestImageVersion"`
}

Details about the compute resources managed by the compute environment.

This parameter is required for managed compute environments. For more information, see [Compute Environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) in the *AWS Batch User Guide* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

computeResourcesProperty := &ComputeResourcesProperty{
	MaxvCpus: jsii.Number(123),
	Subnets: []*string{
		jsii.String("subnets"),
	},
	Type: jsii.String("type"),

	// the properties below are optional
	AllocationStrategy: jsii.String("allocationStrategy"),
	BidPercentage: jsii.Number(123),
	DesiredvCpus: jsii.Number(123),
	Ec2Configuration: []interface{}{
		&Ec2ConfigurationObjectProperty{
			ImageType: jsii.String("imageType"),

			// the properties below are optional
			ImageIdOverride: jsii.String("imageIdOverride"),
			ImageKubernetesVersion: jsii.String("imageKubernetesVersion"),
		},
	},
	Ec2KeyPair: jsii.String("ec2KeyPair"),
	ImageId: jsii.String("imageId"),
	InstanceRole: jsii.String("instanceRole"),
	InstanceTypes: []*string{
		jsii.String("instanceTypes"),
	},
	LaunchTemplate: &LaunchTemplateSpecificationProperty{
		LaunchTemplateId: jsii.String("launchTemplateId"),
		LaunchTemplateName: jsii.String("launchTemplateName"),
		Version: jsii.String("version"),
	},
	MinvCpus: jsii.Number(123),
	PlacementGroup: jsii.String("placementGroup"),
	SecurityGroupIds: []*string{
		jsii.String("securityGroupIds"),
	},
	SpotIamFleetRole: jsii.String("spotIamFleetRole"),
	Tags: map[string]*string{
		"tagsKey": jsii.String("tags"),
	},
	UpdateToLatestImageVersion: jsii.Boolean(false),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-computeresources.html
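The `BidPercentage` semantics described above (pay at most N% of the On-Demand price) reduce to a simple ceiling calculation. The helper below is a minimal illustrative sketch, not part of this module; AWS Batch applies this rule server-side:

```go
package main

import "fmt"

// maxSpotPrice returns the highest Spot price, in the same units as
// onDemandPrice, that a compute environment configured with the given
// bidPercentage will pay for an instance type.
func maxSpotPrice(onDemandPrice, bidPercentage float64) float64 {
	return onDemandPrice * bidPercentage / 100
}

func main() {
	// With a bid percentage of 20, a $0.50/hr On-Demand instance type is
	// only launched while its Spot price stays below $0.10/hr.
	fmt.Println(maxSpotPrice(0.50, 20))
}
```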

type CfnComputeEnvironment_Ec2ConfigurationObjectProperty

type CfnComputeEnvironment_Ec2ConfigurationObjectProperty struct {
	// The image type to match with the instance type to select an AMI.
	//
	// The supported values are different for `ECS` and `EKS` resources.
	//
	// - **ECS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon ECS-optimized Amazon Linux 2 AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) ( `ECS_AL2` ) is used. If a new image type is specified in an update, but neither an `imageId` nor an `imageIdOverride` parameter is specified, then the latest Amazon ECS optimized AMI for that image type that's supported by AWS Batch is used.
	//
	// - **ECS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) : Default for all non-GPU instance families.
	// - **ECS_AL2_NVIDIA** - [Amazon Linux 2 (GPU)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#gpuami) : Default for all GPU instance families (for example `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.
	// - **ECS_AL2023** - [Amazon Linux 2023](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) : AWS Batch supports Amazon Linux 2023.
	//
	// > Amazon Linux 2023 does not support `A1` instances.
	// - **ECS_AL1** - [Amazon Linux](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#alami) . Amazon Linux has reached the end-of-life of standard support. For more information, see [Amazon Linux AMI](https://docs.aws.amazon.com/amazon-linux-ami/) .
	// - **EKS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon EKS-optimized Amazon Linux AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) ( `EKS_AL2` ) is used. If a new image type is specified in an update, but neither an `imageId` nor an `imageIdOverride` parameter is specified, then the latest Amazon EKS optimized AMI for that image type that AWS Batch supports is used.
	//
	// - **EKS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all non-GPU instance families.
	// - **EKS_AL2_NVIDIA** - [Amazon Linux 2 (accelerated)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all GPU instance families (for example, `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-ec2configurationobject.html#cfn-batch-computeenvironment-ec2configurationobject-imagetype
	//
	ImageType *string `field:"required" json:"imageType" yaml:"imageType"`
	// The AMI ID used for instances launched in the compute environment that match the image type.
	//
	// This setting overrides the `imageId` set in the `computeResource` object.
	//
	// > The AMI that you choose for a compute environment must match the architecture of the instance types that you intend to use for that compute environment. For example, if your compute environment uses A1 instance types, the compute resource AMI that you choose must support ARM instances. Amazon ECS vends both x86 and ARM versions of the Amazon ECS-optimized Amazon Linux 2 AMI. For more information, see [Amazon ECS-optimized Amazon Linux 2 AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#ecs-optimized-ami-linux-variants) in the *Amazon Elastic Container Service Developer Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-ec2configurationobject.html#cfn-batch-computeenvironment-ec2configurationobject-imageidoverride
	//
	ImageIdOverride *string `field:"optional" json:"imageIdOverride" yaml:"imageIdOverride"`
	// The Kubernetes version for the compute environment.
	//
	// If you don't specify a value, the latest version that AWS Batch supports is used.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-ec2configurationobject.html#cfn-batch-computeenvironment-ec2configurationobject-imagekubernetesversion
	//
	ImageKubernetesVersion *string `field:"optional" json:"imageKubernetesVersion" yaml:"imageKubernetesVersion"`
}

Provides information used to select Amazon Machine Images (AMIs) for instances in the compute environment.

If `Ec2Configuration` isn't specified, the default is `ECS_AL2` ( [Amazon Linux 2](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) ).

> This object isn't applicable to jobs that are running on Fargate resources.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

ec2ConfigurationObjectProperty := &Ec2ConfigurationObjectProperty{
	ImageType: jsii.String("imageType"),

	// the properties below are optional
	ImageIdOverride: jsii.String("imageIdOverride"),
	ImageKubernetesVersion: jsii.String("imageKubernetesVersion"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-ec2configurationobject.html
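For instance, to request the Amazon Linux 2023 ECS-optimized image type and pin a specific AMI, the fragment might look like the following (a sketch in the style of the example above; the AMI ID is a placeholder):

import "github.com/aws/aws-cdk-go/awscdk"

al2023Config := &Ec2ConfigurationObjectProperty{
	ImageType: jsii.String("ECS_AL2023"),
	// Optional: pin a specific AMI instead of the latest one that
	// AWS Batch supports for this image type. Placeholder ID.
	ImageIdOverride: jsii.String("ami-00000000000000000"),
}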

type CfnComputeEnvironment_EksConfigurationProperty added in v2.51.0

type CfnComputeEnvironment_EksConfigurationProperty struct {
	// The Amazon Resource Name (ARN) of the Amazon EKS cluster.
	//
	// An example is `arn:aws:eks:us-east-1:123456789012:cluster/ClusterForBatch` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-eksconfiguration.html#cfn-batch-computeenvironment-eksconfiguration-eksclusterarn
	//
	EksClusterArn *string `field:"required" json:"eksClusterArn" yaml:"eksClusterArn"`
	// The namespace of the Amazon EKS cluster.
	//
	// AWS Batch manages pods in this namespace. The value can't be left empty or null. It must be fewer than 64 characters long, can't be set to `default` , can't start with `kube-` , and must match this regular expression: `^[a-z0-9]([-a-z0-9]*[a-z0-9])?$` . For more information, see [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) in the Kubernetes documentation.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-eksconfiguration.html#cfn-batch-computeenvironment-eksconfiguration-kubernetesnamespace
	//
	KubernetesNamespace *string `field:"required" json:"kubernetesNamespace" yaml:"kubernetesNamespace"`
}

Configuration for the Amazon EKS cluster that supports the AWS Batch compute environment.

The cluster must exist before the compute environment can be created.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

eksConfigurationProperty := &EksConfigurationProperty{
	EksClusterArn: jsii.String("eksClusterArn"),
	KubernetesNamespace: jsii.String("kubernetesNamespace"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-eksconfiguration.html
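A more concrete sketch of this property, using the example ARN from the field documentation above. The namespace name is an assumption chosen to satisfy the documented constraints (not `default`, no `kube-` prefix, fewer than 64 characters, matches the regular expression):

```go
package main

import (
	awsbatch "github.com/aws/aws-cdk-go/awscdk/v2/awsbatch"
	"github.com/aws/jsii-runtime-go"
)

// batchEksConfig links a Batch compute environment to an existing EKS cluster.
// Both values are illustrative placeholders.
func batchEksConfig() *awsbatch.CfnComputeEnvironment_EksConfigurationProperty {
	return &awsbatch.CfnComputeEnvironment_EksConfigurationProperty{
		EksClusterArn:       jsii.String("arn:aws:eks:us-east-1:123456789012:cluster/ClusterForBatch"),
		KubernetesNamespace: jsii.String("my-batch-namespace"),
	}
}
```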

type CfnComputeEnvironment_LaunchTemplateSpecificationProperty

type CfnComputeEnvironment_LaunchTemplateSpecificationProperty struct {
	// The ID of the launch template.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html#cfn-batch-computeenvironment-launchtemplatespecification-launchtemplateid
	//
	LaunchTemplateId *string `field:"optional" json:"launchTemplateId" yaml:"launchTemplateId"`
	// The name of the launch template.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html#cfn-batch-computeenvironment-launchtemplatespecification-launchtemplatename
	//
	LaunchTemplateName *string `field:"optional" json:"launchTemplateName" yaml:"launchTemplateName"`
	// The version number of the launch template, `$Latest` , or `$Default` .
	//
	// If the value is `$Latest` , the latest version of the launch template is used. If the value is `$Default` , the default version of the launch template is used.
	//
	// > If the AMI ID that's used in a compute environment is from the launch template, the AMI isn't changed when the compute environment is updated. It's only changed if the `updateToLatestImageVersion` parameter for the compute environment is set to `true` . During an infrastructure update, if either `$Latest` or `$Default` is specified, AWS Batch re-evaluates the launch template version, and it might use a different version of the launch template. This is the case even if the launch template isn't specified in the update. When updating a compute environment, changing the launch template requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .
	//
	// Default: `$Default` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html#cfn-batch-computeenvironment-launchtemplatespecification-version
	//
	Version *string `field:"optional" json:"version" yaml:"version"`
}

An object that represents a launch template that's associated with a compute resource.

You must specify either the launch template ID or launch template name in the request, but not both.

If security groups are specified using both the `securityGroupIds` parameter of `CreateComputeEnvironment` and the launch template, the values in the `securityGroupIds` parameter of `CreateComputeEnvironment` will be used.

> This object isn't applicable to jobs that are running on Fargate resources.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

launchTemplateSpecificationProperty := &LaunchTemplateSpecificationProperty{
	LaunchTemplateId: jsii.String("launchTemplateId"),
	LaunchTemplateName: jsii.String("launchTemplateName"),
	Version: jsii.String("version"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-launchtemplatespecification.html
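The sketch below shows one way to fill in this property under the constraints described above: a launch template referenced by name only (ID and name are mutually exclusive), pinned to `$Latest`. The template name is a hypothetical placeholder.

```go
package main

import (
	awsbatch "github.com/aws/aws-cdk-go/awscdk/v2/awsbatch"
	"github.com/aws/jsii-runtime-go"
)

// batchLaunchTemplate references a launch template by name. Specify either
// LaunchTemplateId or LaunchTemplateName, never both. With "$Latest", Batch
// re-evaluates the template version on infrastructure updates.
func batchLaunchTemplate() *awsbatch.CfnComputeEnvironment_LaunchTemplateSpecificationProperty {
	return &awsbatch.CfnComputeEnvironment_LaunchTemplateSpecificationProperty{
		LaunchTemplateName: jsii.String("my-batch-launch-template"), // placeholder name
		Version:            jsii.String("$Latest"),
	}
}
```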

type CfnComputeEnvironment_UpdatePolicyProperty added in v2.23.0

type CfnComputeEnvironment_UpdatePolicyProperty struct {
	// Specifies the job timeout (in minutes) when the compute environment infrastructure is updated.
	//
	// The default value is 30.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-updatepolicy.html#cfn-batch-computeenvironment-updatepolicy-jobexecutiontimeoutminutes
	//
	// Default: - 30.
	//
	JobExecutionTimeoutMinutes *float64 `field:"optional" json:"jobExecutionTimeoutMinutes" yaml:"jobExecutionTimeoutMinutes"`
	// Specifies whether jobs are automatically terminated when the compute environment infrastructure is updated.
	//
	// The default value is `false` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-updatepolicy.html#cfn-batch-computeenvironment-updatepolicy-terminatejobsonupdate
	//
	// Default: - false.
	//
	TerminateJobsOnUpdate interface{} `field:"optional" json:"terminateJobsOnUpdate" yaml:"terminateJobsOnUpdate"`
}

Specifies the infrastructure update policy for the compute environment.

For more information about infrastructure updates, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

updatePolicyProperty := &UpdatePolicyProperty{
	JobExecutionTimeoutMinutes: jsii.Number(123),
	TerminateJobsOnUpdate: jsii.Boolean(false),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-computeenvironment-updatepolicy.html
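For instance, an update policy that gives running jobs a longer grace period than the 30-minute default before an infrastructure update proceeds. The 60-minute value is an arbitrary illustration:

```go
package main

import (
	awsbatch "github.com/aws/aws-cdk-go/awscdk/v2/awsbatch"
	"github.com/aws/jsii-runtime-go"
)

// gracefulUpdatePolicy lets in-flight jobs run for up to 60 minutes during an
// infrastructure update instead of terminating them immediately.
func gracefulUpdatePolicy() *awsbatch.CfnComputeEnvironment_UpdatePolicyProperty {
	return &awsbatch.CfnComputeEnvironment_UpdatePolicyProperty{
		JobExecutionTimeoutMinutes: jsii.Number(60),
		TerminateJobsOnUpdate:      jsii.Boolean(false),
	}
}
```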

type CfnJobDefinition

type CfnJobDefinition interface {
	awscdk.CfnResource
	awscdk.IInspectable
	awscdk.ITaggable
	AttrId() *string
	// Options for this resource, such as condition, update policy etc.
	CfnOptions() awscdk.ICfnResourceOptions
	CfnProperties() *map[string]interface{}
	// AWS resource type.
	CfnResourceType() *string
	// An object with various properties specific to Amazon ECS based jobs.
	ContainerProperties() interface{}
	SetContainerProperties(val interface{})
	// Returns: the stack trace of the point where this Resource was created from, sourced
	// from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most
	// node +internal+ entries filtered.
	CreationStack() *[]*string
	// An object with various properties that are specific to Amazon EKS based jobs.
	EksProperties() interface{}
	SetEksProperties(val interface{})
	// The name of the job definition.
	JobDefinitionName() *string
	SetJobDefinitionName(val *string)
	// The logical ID for this CloudFormation stack element.
	//
	// The logical ID of the element
	// is calculated from the path of the resource node in the construct tree.
	//
	// To override this value, use `overrideLogicalId(newLogicalId)`.
	//
	// Returns: the logical ID as a stringified token. This value will only get
	// resolved during synthesis.
	LogicalId() *string
	// The tree node.
	Node() constructs.Node
	// An object with various properties that are specific to multi-node parallel jobs.
	NodeProperties() interface{}
	SetNodeProperties(val interface{})
	// Default parameters or parameter substitution placeholders that are set in the job definition.
	Parameters() interface{}
	SetParameters(val interface{})
	// The platform capabilities required by the job definition.
	PlatformCapabilities() *[]*string
	SetPlatformCapabilities(val *[]*string)
	// Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task.
	PropagateTags() interface{}
	SetPropagateTags(val interface{})
	// Return a string that will be resolved to a CloudFormation `{ Ref }` for this element.
	//
	// If, by any chance, the intrinsic reference of a resource is not a string, you could
	// coerce it to an IResolvable through `Lazy.any({ produce: resource.ref })`.
	Ref() *string
	// The retry strategy to use for failed jobs that are submitted with this job definition.
	RetryStrategy() interface{}
	SetRetryStrategy(val interface{})
	// The scheduling priority of the job definition.
	SchedulingPriority() *float64
	SetSchedulingPriority(val *float64)
	// The stack in which this element is defined.
	//
	// CfnElements must be defined within a stack scope (directly or indirectly).
	Stack() awscdk.Stack
	// Tag Manager which manages the tags for this resource.
	Tags() awscdk.TagManager
	// The tags that are applied to the job definition.
	TagsRaw() interface{}
	SetTagsRaw(val interface{})
	// The timeout time for jobs that are submitted with this job definition.
	Timeout() interface{}
	SetTimeout(val interface{})
	// The type of job definition.
	Type() *string
	SetType(val *string)
	// Deprecated.
	// Deprecated: use `updatedProperties`
	//
	// Return properties modified after initiation
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperites() *map[string]interface{}
	// Return properties modified after initiation.
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperties() *map[string]interface{}
	// Syntactic sugar for `addOverride(path, undefined)`.
	AddDeletionOverride(path *string)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	//
	// This can be used for resources across stacks (or nested stack) boundaries
	// and the dependency will automatically be transferred to the relevant scope.
	AddDependency(target awscdk.CfnResource)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	// Deprecated: use addDependency.
	AddDependsOn(target awscdk.CfnResource)
	// Add a value to the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	AddMetadata(key *string, value interface{})
	// Adds an override to the synthesized CloudFormation resource.
	//
	// To add a
	// property override, either use `addPropertyOverride` or prefix `path` with
	// "Properties." (i.e. `Properties.TopicName`).
	//
	// If the override is nested, separate each nested level using a dot (.) in the path parameter.
	// If there is an array as part of the nesting, specify the index in the path.
	//
	// To include a literal `.` in the property name, prefix with a `\`. In most
	// programming languages you will need to write this as `"\\."` because the
	// `\` itself will need to be escaped.
	//
	// For example,
	// ```typescript
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes', ['myattribute']);
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.1.ProjectionType', 'INCLUDE');
	// ```
	// would add the overrides
	// ```json
	// "Properties": {
	//   "GlobalSecondaryIndexes": [
	//     {
	//       "Projection": {
	//         "NonKeyAttributes": [ "myattribute" ]
	//         ...
	//       }
	//       ...
	//     },
	//     {
	//       "ProjectionType": "INCLUDE"
	//       ...
	//     },
	//   ]
	//   ...
	// }
	// ```
	//
	// The `value` argument to `addOverride` will not be processed or translated
	// in any way. Pass raw JSON values in here with the correct capitalization
	// for CloudFormation. If you pass CDK classes or structs, they will be
	// rendered with lowercased key names, and CloudFormation will reject the
	// template.
	AddOverride(path *string, value interface{})
	// Adds an override that deletes the value of a property from the resource definition.
	AddPropertyDeletionOverride(propertyPath *string)
	// Adds an override to a resource property.
	//
	// Syntactic sugar for `addOverride("Properties.<...>", value)`.
	AddPropertyOverride(propertyPath *string, value interface{})
	// Sets the deletion policy of the resource based on the removal policy specified.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`). In some
	// cases, a snapshot can be taken of the resource prior to deletion
	// (`RemovalPolicy.SNAPSHOT`). A list of resources that support this policy
	// can be found in the following link:
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html#aws-attribute-deletionpolicy-options
	//
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy, options *awscdk.RemovalPolicyOptions)
	// Returns a token for a runtime attribute of this resource.
	//
	// Ideally, use generated attribute accessors (e.g. `resource.arn`), but this can be used for future compatibility
	// in case there is no generated attribute.
	GetAtt(attributeName *string, typeHint awscdk.ResolutionTypeHint) awscdk.Reference
	// Retrieve a value from the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	GetMetadata(key *string) interface{}
	// Examines the CloudFormation resource and discloses attributes.
	Inspect(inspector awscdk.TreeInspector)
	// Retrieves an array of resources this resource depends on.
	//
	// This assembles dependencies on resources across stacks (including nested stacks)
	// automatically.
	ObtainDependencies() *[]interface{}
	// Get a shallow copy of dependencies between this resource and other resources in the same stack.
	ObtainResourceDependencies() *[]awscdk.CfnResource
	// Overrides the auto-generated logical ID with a specific ID.
	OverrideLogicalId(newLogicalId *string)
	// Indicates that this resource no longer depends on another resource.
	//
	// This can be used for resources across stacks (including nested stacks)
	// and the dependency will automatically be removed from the relevant scope.
	RemoveDependency(target awscdk.CfnResource)
	RenderProperties(props *map[string]interface{}) *map[string]interface{}
	// Replaces one dependency with another.
	ReplaceDependency(target awscdk.CfnResource, newTarget awscdk.CfnResource)
	// Can be overridden by subclasses to determine if this resource will be rendered into the cloudformation template.
	//
	// Returns: `true` if the resource should be included or `false` if the resource
	// should be omitted.
	ShouldSynthesize() *bool
	// Returns a string representation of this construct.
	//
	// Returns: a string representation of this resource.
	ToString() *string
	ValidateProperties(_properties interface{})
}

The `AWS::Batch::JobDefinition` resource specifies the parameters for an AWS Batch job definition.

For more information, see [Job Definitions](https://docs.aws.amazon.com/batch/latest/userguide/job_definitions.html) in the *AWS Batch User Guide* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var labels interface{}
var limits interface{}
var options interface{}
var parameters interface{}
var requests interface{}
var tags interface{}

cfnJobDefinition := awscdk.Aws_batch.NewCfnJobDefinition(this, jsii.String("MyCfnJobDefinition"), &CfnJobDefinitionProps{
	Type: jsii.String("type"),

	// the properties below are optional
	ContainerProperties: &ContainerPropertiesProperty{
		Image: jsii.String("image"),

		// the properties below are optional
		Command: []*string{
			jsii.String("command"),
		},
		Environment: []interface{}{
			&EnvironmentProperty{
				Name: jsii.String("name"),
				Value: jsii.String("value"),
			},
		},
		EphemeralStorage: &EphemeralStorageProperty{
			SizeInGiB: jsii.Number(123),
		},
		ExecutionRoleArn: jsii.String("executionRoleArn"),
		FargatePlatformConfiguration: &FargatePlatformConfigurationProperty{
			PlatformVersion: jsii.String("platformVersion"),
		},
		InstanceType: jsii.String("instanceType"),
		JobRoleArn: jsii.String("jobRoleArn"),
		LinuxParameters: &LinuxParametersProperty{
			Devices: []interface{}{
				&DeviceProperty{
					ContainerPath: jsii.String("containerPath"),
					HostPath: jsii.String("hostPath"),
					Permissions: []*string{
						jsii.String("permissions"),
					},
				},
			},
			InitProcessEnabled: jsii.Boolean(false),
			MaxSwap: jsii.Number(123),
			SharedMemorySize: jsii.Number(123),
			Swappiness: jsii.Number(123),
			Tmpfs: []interface{}{
				&TmpfsProperty{
					ContainerPath: jsii.String("containerPath"),
					Size: jsii.Number(123),

					// the properties below are optional
					MountOptions: []*string{
						jsii.String("mountOptions"),
					},
				},
			},
		},
		LogConfiguration: &LogConfigurationProperty{
			LogDriver: jsii.String("logDriver"),

			// the properties below are optional
			Options: options,
			SecretOptions: []interface{}{
				&SecretProperty{
					Name: jsii.String("name"),
					ValueFrom: jsii.String("valueFrom"),
				},
			},
		},
		Memory: jsii.Number(123),
		MountPoints: []interface{}{
			&MountPointsProperty{
				ContainerPath: jsii.String("containerPath"),
				ReadOnly: jsii.Boolean(false),
				SourceVolume: jsii.String("sourceVolume"),
			},
		},
		NetworkConfiguration: &NetworkConfigurationProperty{
			AssignPublicIp: jsii.String("assignPublicIp"),
		},
		Privileged: jsii.Boolean(false),
		ReadonlyRootFilesystem: jsii.Boolean(false),
		ResourceRequirements: []interface{}{
			&ResourceRequirementProperty{
				Type: jsii.String("type"),
				Value: jsii.String("value"),
			},
		},
		RuntimePlatform: &RuntimePlatformProperty{
			CpuArchitecture: jsii.String("cpuArchitecture"),
			OperatingSystemFamily: jsii.String("operatingSystemFamily"),
		},
		Secrets: []interface{}{
			&SecretProperty{
				Name: jsii.String("name"),
				ValueFrom: jsii.String("valueFrom"),
			},
		},
		Ulimits: []interface{}{
			&UlimitProperty{
				HardLimit: jsii.Number(123),
				Name: jsii.String("name"),
				SoftLimit: jsii.Number(123),
			},
		},
		User: jsii.String("user"),
		Vcpus: jsii.Number(123),
		Volumes: []interface{}{
			&VolumesProperty{
				EfsVolumeConfiguration: &EfsVolumeConfigurationProperty{
					FileSystemId: jsii.String("fileSystemId"),

					// the properties below are optional
					AuthorizationConfig: &AuthorizationConfigProperty{
						AccessPointId: jsii.String("accessPointId"),
						Iam: jsii.String("iam"),
					},
					RootDirectory: jsii.String("rootDirectory"),
					TransitEncryption: jsii.String("transitEncryption"),
					TransitEncryptionPort: jsii.Number(123),
				},
				Host: &VolumesHostProperty{
					SourcePath: jsii.String("sourcePath"),
				},
				Name: jsii.String("name"),
			},
		},
	},
	EksProperties: &EksPropertiesProperty{
		PodProperties: &PodPropertiesProperty{
			Containers: []interface{}{
				&EksContainerProperty{
					Image: jsii.String("image"),

					// the properties below are optional
					Args: []*string{
						jsii.String("args"),
					},
					Command: []*string{
						jsii.String("command"),
					},
					Env: []interface{}{
						&EksContainerEnvironmentVariableProperty{
							Name: jsii.String("name"),

							// the properties below are optional
							Value: jsii.String("value"),
						},
					},
					ImagePullPolicy: jsii.String("imagePullPolicy"),
					Name: jsii.String("name"),
					Resources: &ResourcesProperty{
						Limits: limits,
						Requests: requests,
					},
					SecurityContext: &SecurityContextProperty{
						Privileged: jsii.Boolean(false),
						ReadOnlyRootFilesystem: jsii.Boolean(false),
						RunAsGroup: jsii.Number(123),
						RunAsNonRoot: jsii.Boolean(false),
						RunAsUser: jsii.Number(123),
					},
					VolumeMounts: []interface{}{
						&EksContainerVolumeMountProperty{
							MountPath: jsii.String("mountPath"),
							Name: jsii.String("name"),
							ReadOnly: jsii.Boolean(false),
						},
					},
				},
			},
			DnsPolicy: jsii.String("dnsPolicy"),
			HostNetwork: jsii.Boolean(false),
			Metadata: &MetadataProperty{
				Labels: labels,
			},
			ServiceAccountName: jsii.String("serviceAccountName"),
			Volumes: []interface{}{
				&EksVolumeProperty{
					Name: jsii.String("name"),

					// the properties below are optional
					EmptyDir: &EmptyDirProperty{
						Medium: jsii.String("medium"),
						SizeLimit: jsii.String("sizeLimit"),
					},
					HostPath: &HostPathProperty{
						Path: jsii.String("path"),
					},
					Secret: &EksSecretProperty{
						SecretName: jsii.String("secretName"),

						// the properties below are optional
						Optional: jsii.Boolean(false),
					},
				},
			},
		},
	},
	JobDefinitionName: jsii.String("jobDefinitionName"),
	NodeProperties: &NodePropertiesProperty{
		MainNode: jsii.Number(123),
		NodeRangeProperties: []interface{}{
			&NodeRangePropertyProperty{
				TargetNodes: jsii.String("targetNodes"),

				// the properties below are optional
				Container: &ContainerPropertiesProperty{
					Image: jsii.String("image"),

					// the properties below are optional
					Command: []*string{
						jsii.String("command"),
					},
					Environment: []interface{}{
						&EnvironmentProperty{
							Name: jsii.String("name"),
							Value: jsii.String("value"),
						},
					},
					EphemeralStorage: &EphemeralStorageProperty{
						SizeInGiB: jsii.Number(123),
					},
					ExecutionRoleArn: jsii.String("executionRoleArn"),
					FargatePlatformConfiguration: &FargatePlatformConfigurationProperty{
						PlatformVersion: jsii.String("platformVersion"),
					},
					InstanceType: jsii.String("instanceType"),
					JobRoleArn: jsii.String("jobRoleArn"),
					LinuxParameters: &LinuxParametersProperty{
						Devices: []interface{}{
							&DeviceProperty{
								ContainerPath: jsii.String("containerPath"),
								HostPath: jsii.String("hostPath"),
								Permissions: []*string{
									jsii.String("permissions"),
								},
							},
						},
						InitProcessEnabled: jsii.Boolean(false),
						MaxSwap: jsii.Number(123),
						SharedMemorySize: jsii.Number(123),
						Swappiness: jsii.Number(123),
						Tmpfs: []interface{}{
							&TmpfsProperty{
								ContainerPath: jsii.String("containerPath"),
								Size: jsii.Number(123),

								// the properties below are optional
								MountOptions: []*string{
									jsii.String("mountOptions"),
								},
							},
						},
					},
					LogConfiguration: &LogConfigurationProperty{
						LogDriver: jsii.String("logDriver"),

						// the properties below are optional
						Options: options,
						SecretOptions: []interface{}{
							&SecretProperty{
								Name: jsii.String("name"),
								ValueFrom: jsii.String("valueFrom"),
							},
						},
					},
					Memory: jsii.Number(123),
					MountPoints: []interface{}{
						&MountPointsProperty{
							ContainerPath: jsii.String("containerPath"),
							ReadOnly: jsii.Boolean(false),
							SourceVolume: jsii.String("sourceVolume"),
						},
					},
					NetworkConfiguration: &NetworkConfigurationProperty{
						AssignPublicIp: jsii.String("assignPublicIp"),
					},
					Privileged: jsii.Boolean(false),
					ReadonlyRootFilesystem: jsii.Boolean(false),
					ResourceRequirements: []interface{}{
						&ResourceRequirementProperty{
							Type: jsii.String("type"),
							Value: jsii.String("value"),
						},
					},
					RuntimePlatform: &RuntimePlatformProperty{
						CpuArchitecture: jsii.String("cpuArchitecture"),
						OperatingSystemFamily: jsii.String("operatingSystemFamily"),
					},
					Secrets: []interface{}{
						&SecretProperty{
							Name: jsii.String("name"),
							ValueFrom: jsii.String("valueFrom"),
						},
					},
					Ulimits: []interface{}{
						&UlimitProperty{
							HardLimit: jsii.Number(123),
							Name: jsii.String("name"),
							SoftLimit: jsii.Number(123),
						},
					},
					User: jsii.String("user"),
					Vcpus: jsii.Number(123),
					Volumes: []interface{}{
						&VolumesProperty{
							EfsVolumeConfiguration: &EfsVolumeConfigurationProperty{
								FileSystemId: jsii.String("fileSystemId"),

								// the properties below are optional
								AuthorizationConfig: &AuthorizationConfigProperty{
									AccessPointId: jsii.String("accessPointId"),
									Iam: jsii.String("iam"),
								},
								RootDirectory: jsii.String("rootDirectory"),
								TransitEncryption: jsii.String("transitEncryption"),
								TransitEncryptionPort: jsii.Number(123),
							},
							Host: &VolumesHostProperty{
								SourcePath: jsii.String("sourcePath"),
							},
							Name: jsii.String("name"),
						},
					},
				},
			},
		},
		NumNodes: jsii.Number(123),
	},
	Parameters: parameters,
	PlatformCapabilities: []*string{
		jsii.String("platformCapabilities"),
	},
	PropagateTags: jsii.Boolean(false),
	RetryStrategy: &RetryStrategyProperty{
		Attempts: jsii.Number(123),
		EvaluateOnExit: []interface{}{
			&EvaluateOnExitProperty{
				Action: jsii.String("action"),

				// the properties below are optional
				OnExitCode: jsii.String("onExitCode"),
				OnReason: jsii.String("onReason"),
				OnStatusReason: jsii.String("onStatusReason"),
			},
		},
	},
	SchedulingPriority: jsii.Number(123),
	Tags: tags,
	Timeout: &TimeoutProperty{
		AttemptDurationSeconds: jsii.Number(123),
	},
})

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html

func NewCfnJobDefinition

func NewCfnJobDefinition(scope constructs.Construct, id *string, props *CfnJobDefinitionProps) CfnJobDefinition
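In contrast to the exhaustive placeholder example above, a typical job definition only needs a handful of properties. The sketch below is a minimal container job, assuming an existing `stack`; the construct ID, image, and command are placeholders, and CPU/memory are expressed through `ResourceRequirements` rather than the older top-level `Vcpus`/`Memory` fields:

```go
package main

import (
	awscdk "github.com/aws/aws-cdk-go/awscdk/v2"
	awsbatch "github.com/aws/aws-cdk-go/awscdk/v2/awsbatch"
	"github.com/aws/jsii-runtime-go"
)

// newHelloJobDefinition creates a minimal "container" job definition that
// runs a single command with 1 vCPU and 2048 MiB of memory.
func newHelloJobDefinition(stack awscdk.Stack) awsbatch.CfnJobDefinition {
	return awsbatch.NewCfnJobDefinition(stack, jsii.String("HelloJob"), &awsbatch.CfnJobDefinitionProps{
		Type: jsii.String("container"),
		ContainerProperties: &awsbatch.CfnJobDefinition_ContainerPropertiesProperty{
			Image:   jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest"), // placeholder image
			Command: jsii.Strings("echo", "hello"),
			ResourceRequirements: []interface{}{
				&awsbatch.CfnJobDefinition_ResourceRequirementProperty{
					Type:  jsii.String("VCPU"),
					Value: jsii.String("1"),
				},
				&awsbatch.CfnJobDefinition_ResourceRequirementProperty{
					Type:  jsii.String("MEMORY"),
					Value: jsii.String("2048"),
				},
			},
		},
	})
}
```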

type CfnJobDefinitionProps

type CfnJobDefinitionProps struct {
	// The type of job definition.
	//
	// For more information about multi-node parallel jobs, see [Creating a multi-node parallel job definition](https://docs.aws.amazon.com/batch/latest/userguide/multi-node-job-def.html) in the *AWS Batch User Guide* .
	//
	// > If the job is run on Fargate resources, then `multinode` isn't supported.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-type
	//
	Type *string `field:"required" json:"type" yaml:"type"`
	// An object with various properties specific to Amazon ECS based jobs.
	//
	// Valid values are `containerProperties` , `eksProperties` , and `nodeProperties` . Only one can be specified.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-containerproperties
	//
	ContainerProperties interface{} `field:"optional" json:"containerProperties" yaml:"containerProperties"`
	// An object with various properties that are specific to Amazon EKS based jobs.
	//
	// Valid values are `containerProperties` , `eksProperties` , and `nodeProperties` . Only one can be specified.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-eksproperties
	//
	EksProperties interface{} `field:"optional" json:"eksProperties" yaml:"eksProperties"`
	// The name of the job definition.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-jobdefinitionname
	//
	JobDefinitionName *string `field:"optional" json:"jobDefinitionName" yaml:"jobDefinitionName"`
	// An object with various properties that are specific to multi-node parallel jobs.
	//
	// Valid values are `containerProperties` , `eksProperties` , and `nodeProperties` . Only one can be specified.
	//
	// > If the job runs on Fargate resources, don't specify `nodeProperties` . Use `containerProperties` instead.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-nodeproperties
	//
	NodeProperties interface{} `field:"optional" json:"nodeProperties" yaml:"nodeProperties"`
	// Default parameters or parameter substitution placeholders that are set in the job definition.
	//
	// Parameters are specified as a key-value pair mapping. Parameters in a `SubmitJob` request override any corresponding parameter defaults from the job definition. For more information about specifying parameters, see [Job definition parameters](https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-parameters
	//
	Parameters interface{} `field:"optional" json:"parameters" yaml:"parameters"`
	// The platform capabilities required by the job definition.
	//
	// If no value is specified, it defaults to `EC2` . Jobs run on Fargate resources specify `FARGATE` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-platformcapabilities
	//
	PlatformCapabilities *[]*string `field:"optional" json:"platformCapabilities" yaml:"platformCapabilities"`
	// Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task.
	//
	// If no value is specified, the tags aren't propagated. Tags can only be propagated to the tasks when the tasks are created. For tags with the same name, job tags are given priority over job definitions tags. If the total number of combined tags from the job and job definition is over 50, the job is moved to the `FAILED` state.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-propagatetags
	//
	PropagateTags interface{} `field:"optional" json:"propagateTags" yaml:"propagateTags"`
	// The retry strategy to use for failed jobs that are submitted with this job definition.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-retrystrategy
	//
	RetryStrategy interface{} `field:"optional" json:"retryStrategy" yaml:"retryStrategy"`
	// The scheduling priority of the job definition.
	//
	// This only affects jobs in job queues with a fair share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower scheduling priority.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-schedulingpriority
	//
	SchedulingPriority *float64 `field:"optional" json:"schedulingPriority" yaml:"schedulingPriority"`
	// The tags that are applied to the job definition.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-tags
	//
	Tags interface{} `field:"optional" json:"tags" yaml:"tags"`
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes, AWS Batch terminates your jobs if they aren't finished.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html#cfn-batch-jobdefinition-timeout
	//
	Timeout interface{} `field:"optional" json:"timeout" yaml:"timeout"`
}

Properties for defining a `CfnJobDefinition`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var labels interface{}
var limits interface{}
var options interface{}
var parameters interface{}
var requests interface{}
var tags interface{}

cfnJobDefinitionProps := &CfnJobDefinitionProps{
	Type: jsii.String("type"),

	// the properties below are optional
	ContainerProperties: &ContainerPropertiesProperty{
		Image: jsii.String("image"),

		// the properties below are optional
		Command: []*string{
			jsii.String("command"),
		},
		Environment: []interface{}{
			&EnvironmentProperty{
				Name: jsii.String("name"),
				Value: jsii.String("value"),
			},
		},
		EphemeralStorage: &EphemeralStorageProperty{
			SizeInGiB: jsii.Number(123),
		},
		ExecutionRoleArn: jsii.String("executionRoleArn"),
		FargatePlatformConfiguration: &FargatePlatformConfigurationProperty{
			PlatformVersion: jsii.String("platformVersion"),
		},
		InstanceType: jsii.String("instanceType"),
		JobRoleArn: jsii.String("jobRoleArn"),
		LinuxParameters: &LinuxParametersProperty{
			Devices: []interface{}{
				&DeviceProperty{
					ContainerPath: jsii.String("containerPath"),
					HostPath: jsii.String("hostPath"),
					Permissions: []*string{
						jsii.String("permissions"),
					},
				},
			},
			InitProcessEnabled: jsii.Boolean(false),
			MaxSwap: jsii.Number(123),
			SharedMemorySize: jsii.Number(123),
			Swappiness: jsii.Number(123),
			Tmpfs: []interface{}{
				&TmpfsProperty{
					ContainerPath: jsii.String("containerPath"),
					Size: jsii.Number(123),

					// the properties below are optional
					MountOptions: []*string{
						jsii.String("mountOptions"),
					},
				},
			},
		},
		LogConfiguration: &LogConfigurationProperty{
			LogDriver: jsii.String("logDriver"),

			// the properties below are optional
			Options: options,
			SecretOptions: []interface{}{
				&SecretProperty{
					Name: jsii.String("name"),
					ValueFrom: jsii.String("valueFrom"),
				},
			},
		},
		Memory: jsii.Number(123),
		MountPoints: []interface{}{
			&MountPointsProperty{
				ContainerPath: jsii.String("containerPath"),
				ReadOnly: jsii.Boolean(false),
				SourceVolume: jsii.String("sourceVolume"),
			},
		},
		NetworkConfiguration: &NetworkConfigurationProperty{
			AssignPublicIp: jsii.String("assignPublicIp"),
		},
		Privileged: jsii.Boolean(false),
		ReadonlyRootFilesystem: jsii.Boolean(false),
		ResourceRequirements: []interface{}{
			&ResourceRequirementProperty{
				Type: jsii.String("type"),
				Value: jsii.String("value"),
			},
		},
		RuntimePlatform: &RuntimePlatformProperty{
			CpuArchitecture: jsii.String("cpuArchitecture"),
			OperatingSystemFamily: jsii.String("operatingSystemFamily"),
		},
		Secrets: []interface{}{
			&SecretProperty{
				Name: jsii.String("name"),
				ValueFrom: jsii.String("valueFrom"),
			},
		},
		Ulimits: []interface{}{
			&UlimitProperty{
				HardLimit: jsii.Number(123),
				Name: jsii.String("name"),
				SoftLimit: jsii.Number(123),
			},
		},
		User: jsii.String("user"),
		Vcpus: jsii.Number(123),
		Volumes: []interface{}{
			&VolumesProperty{
				EfsVolumeConfiguration: &EfsVolumeConfigurationProperty{
					FileSystemId: jsii.String("fileSystemId"),

					// the properties below are optional
					AuthorizationConfig: &AuthorizationConfigProperty{
						AccessPointId: jsii.String("accessPointId"),
						Iam: jsii.String("iam"),
					},
					RootDirectory: jsii.String("rootDirectory"),
					TransitEncryption: jsii.String("transitEncryption"),
					TransitEncryptionPort: jsii.Number(123),
				},
				Host: &VolumesHostProperty{
					SourcePath: jsii.String("sourcePath"),
				},
				Name: jsii.String("name"),
			},
		},
	},
	EksProperties: &EksPropertiesProperty{
		PodProperties: &PodPropertiesProperty{
			Containers: []interface{}{
				&EksContainerProperty{
					Image: jsii.String("image"),

					// the properties below are optional
					Args: []*string{
						jsii.String("args"),
					},
					Command: []*string{
						jsii.String("command"),
					},
					Env: []interface{}{
						&EksContainerEnvironmentVariableProperty{
							Name: jsii.String("name"),

							// the properties below are optional
							Value: jsii.String("value"),
						},
					},
					ImagePullPolicy: jsii.String("imagePullPolicy"),
					Name: jsii.String("name"),
					Resources: &ResourcesProperty{
						Limits: limits,
						Requests: requests,
					},
					SecurityContext: &SecurityContextProperty{
						Privileged: jsii.Boolean(false),
						ReadOnlyRootFilesystem: jsii.Boolean(false),
						RunAsGroup: jsii.Number(123),
						RunAsNonRoot: jsii.Boolean(false),
						RunAsUser: jsii.Number(123),
					},
					VolumeMounts: []interface{}{
						&EksContainerVolumeMountProperty{
							MountPath: jsii.String("mountPath"),
							Name: jsii.String("name"),
							ReadOnly: jsii.Boolean(false),
						},
					},
				},
			},
			DnsPolicy: jsii.String("dnsPolicy"),
			HostNetwork: jsii.Boolean(false),
			Metadata: &MetadataProperty{
				Labels: labels,
			},
			ServiceAccountName: jsii.String("serviceAccountName"),
			Volumes: []interface{}{
				&EksVolumeProperty{
					Name: jsii.String("name"),

					// the properties below are optional
					EmptyDir: &EmptyDirProperty{
						Medium: jsii.String("medium"),
						SizeLimit: jsii.String("sizeLimit"),
					},
					HostPath: &HostPathProperty{
						Path: jsii.String("path"),
					},
					Secret: &EksSecretProperty{
						SecretName: jsii.String("secretName"),

						// the properties below are optional
						Optional: jsii.Boolean(false),
					},
				},
			},
		},
	},
	JobDefinitionName: jsii.String("jobDefinitionName"),
	NodeProperties: &NodePropertiesProperty{
		MainNode: jsii.Number(123),
		NodeRangeProperties: []interface{}{
			&NodeRangePropertyProperty{
				TargetNodes: jsii.String("targetNodes"),

				// the properties below are optional
				Container: &ContainerPropertiesProperty{
					Image: jsii.String("image"),

					// the properties below are optional
					Command: []*string{
						jsii.String("command"),
					},
					Environment: []interface{}{
						&EnvironmentProperty{
							Name: jsii.String("name"),
							Value: jsii.String("value"),
						},
					},
					EphemeralStorage: &EphemeralStorageProperty{
						SizeInGiB: jsii.Number(123),
					},
					ExecutionRoleArn: jsii.String("executionRoleArn"),
					FargatePlatformConfiguration: &FargatePlatformConfigurationProperty{
						PlatformVersion: jsii.String("platformVersion"),
					},
					InstanceType: jsii.String("instanceType"),
					JobRoleArn: jsii.String("jobRoleArn"),
					LinuxParameters: &LinuxParametersProperty{
						Devices: []interface{}{
							&DeviceProperty{
								ContainerPath: jsii.String("containerPath"),
								HostPath: jsii.String("hostPath"),
								Permissions: []*string{
									jsii.String("permissions"),
								},
							},
						},
						InitProcessEnabled: jsii.Boolean(false),
						MaxSwap: jsii.Number(123),
						SharedMemorySize: jsii.Number(123),
						Swappiness: jsii.Number(123),
						Tmpfs: []interface{}{
							&TmpfsProperty{
								ContainerPath: jsii.String("containerPath"),
								Size: jsii.Number(123),

								// the properties below are optional
								MountOptions: []*string{
									jsii.String("mountOptions"),
								},
							},
						},
					},
					LogConfiguration: &LogConfigurationProperty{
						LogDriver: jsii.String("logDriver"),

						// the properties below are optional
						Options: options,
						SecretOptions: []interface{}{
							&SecretProperty{
								Name: jsii.String("name"),
								ValueFrom: jsii.String("valueFrom"),
							},
						},
					},
					Memory: jsii.Number(123),
					MountPoints: []interface{}{
						&MountPointsProperty{
							ContainerPath: jsii.String("containerPath"),
							ReadOnly: jsii.Boolean(false),
							SourceVolume: jsii.String("sourceVolume"),
						},
					},
					NetworkConfiguration: &NetworkConfigurationProperty{
						AssignPublicIp: jsii.String("assignPublicIp"),
					},
					Privileged: jsii.Boolean(false),
					ReadonlyRootFilesystem: jsii.Boolean(false),
					ResourceRequirements: []interface{}{
						&ResourceRequirementProperty{
							Type: jsii.String("type"),
							Value: jsii.String("value"),
						},
					},
					RuntimePlatform: &RuntimePlatformProperty{
						CpuArchitecture: jsii.String("cpuArchitecture"),
						OperatingSystemFamily: jsii.String("operatingSystemFamily"),
					},
					Secrets: []interface{}{
						&SecretProperty{
							Name: jsii.String("name"),
							ValueFrom: jsii.String("valueFrom"),
						},
					},
					Ulimits: []interface{}{
						&UlimitProperty{
							HardLimit: jsii.Number(123),
							Name: jsii.String("name"),
							SoftLimit: jsii.Number(123),
						},
					},
					User: jsii.String("user"),
					Vcpus: jsii.Number(123),
					Volumes: []interface{}{
						&VolumesProperty{
							EfsVolumeConfiguration: &EfsVolumeConfigurationProperty{
								FileSystemId: jsii.String("fileSystemId"),

								// the properties below are optional
								AuthorizationConfig: &AuthorizationConfigProperty{
									AccessPointId: jsii.String("accessPointId"),
									Iam: jsii.String("iam"),
								},
								RootDirectory: jsii.String("rootDirectory"),
								TransitEncryption: jsii.String("transitEncryption"),
								TransitEncryptionPort: jsii.Number(123),
							},
							Host: &VolumesHostProperty{
								SourcePath: jsii.String("sourcePath"),
							},
							Name: jsii.String("name"),
						},
					},
				},
			},
		},
		NumNodes: jsii.Number(123),
	},
	Parameters: parameters,
	PlatformCapabilities: []*string{
		jsii.String("platformCapabilities"),
	},
	PropagateTags: jsii.Boolean(false),
	RetryStrategy: &RetryStrategyProperty{
		Attempts: jsii.Number(123),
		EvaluateOnExit: []interface{}{
			&EvaluateOnExitProperty{
				Action: jsii.String("action"),

				// the properties below are optional
				OnExitCode: jsii.String("onExitCode"),
				OnReason: jsii.String("onReason"),
				OnStatusReason: jsii.String("onStatusReason"),
			},
		},
	},
	SchedulingPriority: jsii.Number(123),
	Tags: tags,
	Timeout: &TimeoutProperty{
		AttemptDurationSeconds: jsii.Number(123),
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobdefinition.html
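The generated example above exercises every optional field with placeholders. In practice a job definition is much leaner; the sketch below shows a minimal Fargate container job definition in the same style (the job name, image, and role ARN are illustrative values, not defaults):

```go
import "github.com/aws/aws-cdk-go/awscdk"

// A minimal Fargate job definition sketch: the FARGATE platform capability,
// an execution role, and vCPU/memory declared through ResourceRequirements
// (preferred over the deprecated Memory and Vcpus fields).
fargateJobProps := &CfnJobDefinitionProps{
	Type:              jsii.String("container"),
	JobDefinitionName: jsii.String("my-fargate-job"), // illustrative name
	PlatformCapabilities: []*string{
		jsii.String("FARGATE"),
	},
	ContainerProperties: &ContainerPropertiesProperty{
		Image:            jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest"),               // illustrative image
		ExecutionRoleArn: jsii.String("arn:aws:iam::123456789012:role/BatchExecutionRole"),           // illustrative ARN
		ResourceRequirements: []interface{}{
			&ResourceRequirementProperty{
				Type:  jsii.String("VCPU"),
				Value: jsii.String("0.25"),
			},
			&ResourceRequirementProperty{
				Type:  jsii.String("MEMORY"),
				Value: jsii.String("512"), // MiB; must pair with a valid Fargate vCPU value
			},
		},
	},
	Timeout: &TimeoutProperty{
		AttemptDurationSeconds: jsii.Number(3600),
	},
}
```

0.25 vCPU with 512 MiB is one of the vCPU/memory combinations Fargate accepts; invalid pairings are rejected when the job is submitted.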

type CfnJobDefinition_AuthorizationConfigProperty

type CfnJobDefinition_AuthorizationConfigProperty struct {
	// The Amazon EFS access point ID to use.
	//
	// If an access point is specified, the root directory value specified in the `EFSVolumeConfiguration` must either be omitted or set to `/` , which enforces the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the `EFSVolumeConfiguration` . For more information, see [Working with Amazon EFS access points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the *Amazon Elastic File System User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-authorizationconfig.html#cfn-batch-jobdefinition-authorizationconfig-accesspointid
	//
	AccessPointId *string `field:"optional" json:"accessPointId" yaml:"accessPointId"`
	// Whether or not to use the AWS Batch job IAM role defined in a job definition when mounting the Amazon EFS file system.
	//
	// If enabled, transit encryption must be enabled in the `EFSVolumeConfiguration` . If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Using Amazon EFS access points](https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html#efs-volume-accesspoints) in the *AWS Batch User Guide* . EFS IAM authorization requires that `TransitEncryption` be `ENABLED` and that a `JobRoleArn` is specified.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-authorizationconfig.html#cfn-batch-jobdefinition-authorizationconfig-iam
	//
	Iam *string `field:"optional" json:"iam" yaml:"iam"`
}

The authorization configuration details for the Amazon EFS file system.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

authorizationConfigProperty := &AuthorizationConfigProperty{
	AccessPointId: jsii.String("accessPointId"),
	Iam: jsii.String("iam"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-authorizationconfig.html
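As the struct documentation above notes, enabling IAM authorization ties several settings together: `Iam` set to `ENABLED` requires `TransitEncryption` to be `ENABLED` on the enclosing `EfsVolumeConfigurationProperty` , and a `JobRoleArn` on the container. A sketch of a consistent combination (the file system ID, access point ID, and volume name are placeholders):

```go
import "github.com/aws/aws-cdk-go/awscdk"

// An EFS volume with access-point IAM authorization. Iam: ENABLED only
// works when transit encryption is also enabled and the container
// properties set a JobRoleArn.
efsVolume := &VolumesProperty{
	Name: jsii.String("shared-data"), // placeholder volume name
	EfsVolumeConfiguration: &EfsVolumeConfigurationProperty{
		FileSystemId:      jsii.String("fs-12345678"), // placeholder
		TransitEncryption: jsii.String("ENABLED"),
		AuthorizationConfig: &AuthorizationConfigProperty{
			AccessPointId: jsii.String("fsap-0123456789abcdef0"), // placeholder
			Iam:           jsii.String("ENABLED"),
		},
	},
}
```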

type CfnJobDefinition_ContainerPropertiesProperty

type CfnJobDefinition_ContainerPropertiesProperty struct {
	// Required.
	//
	// The image used to start a container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with `*repository-url*/*image*:*tag*` . It can be up to 255 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), underscores (_), colons (:), periods (.), forward slashes (/), and number signs (#). This parameter maps to `Image` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `IMAGE` parameter of [docker run](https://docs.docker.com/engine/reference/run/) .
	//
	// > Docker image architecture must match the processor architecture of the compute resources that they're scheduled on. For example, ARM-based Docker images can only run on ARM-based compute resources.
	//
	// - Images in Amazon ECR Public repositories use the full `registry/repository[:tag]` or `registry/repository[@digest]` naming conventions. For example, `public.ecr.aws/*registry_alias*/*my-web-app*:*latest*` .
	// - Images in Amazon ECR repositories use the full registry and repository URI (for example, `123456789012.dkr.ecr.<region-name>.amazonaws.com/<repository-name>` ).
	// - Images in official repositories on Docker Hub use a single name (for example, `ubuntu` or `mongo` ).
	// - Images in other repositories on Docker Hub are qualified with an organization name (for example, `amazon/amazon-ecs-agent` ).
	// - Images in other online repositories are qualified further by a domain name (for example, `quay.io/assemblyline/ubuntu` ).
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-image
	//
	Image *string `field:"required" json:"image" yaml:"image"`
	// The command that's passed to the container.
	//
	// This parameter maps to `Cmd` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `COMMAND` parameter to [docker run](https://docs.docker.com/engine/reference/run/) . For more information, see [https://docs.docker.com/engine/reference/builder/#cmd](https://docs.docker.com/engine/reference/builder/#cmd) .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-command
	//
	Command *[]*string `field:"optional" json:"command" yaml:"command"`
	// The environment variables to pass to a container.
	//
	// This parameter maps to `Env` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--env` option to [docker run](https://docs.docker.com/engine/reference/run/) .
	//
	// > We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// >
	// > Environment variables cannot start with `AWS_BATCH` . This naming convention is reserved for variables that AWS Batch sets.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-environment
	//
	Environment interface{} `field:"optional" json:"environment" yaml:"environment"`
	// The amount of ephemeral storage to allocate for the task.
	//
	// This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on AWS Fargate .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-ephemeralstorage
	//
	EphemeralStorage interface{} `field:"optional" json:"ephemeralStorage" yaml:"ephemeralStorage"`
	// The Amazon Resource Name (ARN) of the execution role that AWS Batch can assume.
	//
	// For jobs that run on Fargate resources, you must provide an execution role. For more information, see [AWS Batch execution IAM role](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-executionrolearn
	//
	ExecutionRoleArn *string `field:"optional" json:"executionRoleArn" yaml:"executionRoleArn"`
	// The platform configuration for jobs that are running on Fargate resources.
	//
	// Jobs that are running on EC2 resources must not specify this parameter.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-fargateplatformconfiguration
	//
	FargatePlatformConfiguration interface{} `field:"optional" json:"fargatePlatformConfiguration" yaml:"fargatePlatformConfiguration"`
	// The instance type to use for a multi-node parallel job.
	//
	// All node groups in a multi-node parallel job must use the same instance type.
	//
	// > This parameter isn't applicable to single-node container jobs or jobs that run on Fargate resources, and shouldn't be provided.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-instancetype
	//
	InstanceType *string `field:"optional" json:"instanceType" yaml:"instanceType"`
	// The Amazon Resource Name (ARN) of the IAM role that the container can assume for AWS permissions.
	//
	// For more information, see [IAM roles for tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) in the *Amazon Elastic Container Service Developer Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-jobrolearn
	//
	JobRoleArn *string `field:"optional" json:"jobRoleArn" yaml:"jobRoleArn"`
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-linuxparameters
	//
	LinuxParameters interface{} `field:"optional" json:"linuxParameters" yaml:"linuxParameters"`
	// The log configuration specification for the container.
	//
	// This parameter maps to `LogConfig` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--log-driver` option to [docker run](https://docs.docker.com/engine/reference/run/) . By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information on the options for different supported log drivers, see [Configure logging drivers](https://docs.docker.com/engine/admin/logging/overview/) in the Docker documentation.
	//
	// > AWS Batch currently supports a subset of the logging drivers available to the Docker daemon (shown in the `LogConfiguration` data type).
	//
	// This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
	//
	// > The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the *Amazon Elastic Container Service Developer Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-logconfiguration
	//
	LogConfiguration interface{} `field:"optional" json:"logConfiguration" yaml:"logConfiguration"`
	// This parameter is deprecated; use `resourceRequirements` to specify the memory requirements for the job definition.
	//
	// It's not supported for jobs running on Fargate resources. For jobs that run on EC2 resources, it specifies the memory hard limit (in MiB) for a container. If your container attempts to exceed the specified number, it's terminated. You must specify at least 4 MiB of memory for a job using this parameter. The memory hard limit can be specified in several places. It must be specified for each node at least once.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-memory
	//
	Memory *float64 `field:"optional" json:"memory" yaml:"memory"`
	// The mount points for data volumes in your container.
	//
	// This parameter maps to `Volumes` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--volume` option to [docker run](https://docs.docker.com/engine/reference/run/) .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-mountpoints
	//
	MountPoints interface{} `field:"optional" json:"mountPoints" yaml:"mountPoints"`
	// The network configuration for jobs that are running on Fargate resources.
	//
	// Jobs that are running on EC2 resources must not specify this parameter.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-networkconfiguration
	//
	NetworkConfiguration interface{} `field:"optional" json:"networkConfiguration" yaml:"networkConfiguration"`
	// When this parameter is true, the container is given elevated permissions on the host container instance (similar to the `root` user).
	//
	// This parameter maps to `Privileged` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--privileged` option to [docker run](https://docs.docker.com/engine/reference/run/) . The default value is false.
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be provided, or specified as false.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-privileged
	//
	Privileged interface{} `field:"optional" json:"privileged" yaml:"privileged"`
	// When this parameter is true, the container is given read-only access to its root file system.
	//
	// This parameter maps to `ReadonlyRootfs` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--read-only` option to `docker run` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-readonlyrootfilesystem
	//
	ReadonlyRootFilesystem interface{} `field:"optional" json:"readonlyRootFilesystem" yaml:"readonlyRootFilesystem"`
	// The type and amount of resources to assign to a container.
	//
	// The supported resources include `GPU` , `MEMORY` , and `VCPU` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-resourcerequirements
	//
	ResourceRequirements interface{} `field:"optional" json:"resourceRequirements" yaml:"resourceRequirements"`
	// An object that represents the compute environment architecture for AWS Batch jobs on Fargate.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-runtimeplatform
	//
	RuntimePlatform interface{} `field:"optional" json:"runtimePlatform" yaml:"runtimePlatform"`
	// The secrets for the container.
	//
	// For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-secrets
	//
	Secrets interface{} `field:"optional" json:"secrets" yaml:"secrets"`
	// A list of `ulimits` to set in the container.
	//
	// This parameter maps to `Ulimits` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--ulimit` option to [docker run](https://docs.docker.com/engine/reference/run/) .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be provided.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-ulimits
	//
	Ulimits interface{} `field:"optional" json:"ulimits" yaml:"ulimits"`
	// The user name to use inside the container.
	//
	// This parameter maps to `User` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--user` option to [docker run](https://docs.docker.com/engine/reference/run/) .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-user
	//
	User *string `field:"optional" json:"user" yaml:"user"`
	// This parameter is deprecated; use `resourceRequirements` to specify the vCPU requirements for the job definition.
	//
	// It's not supported for jobs running on Fargate resources. For jobs running on EC2 resources, it specifies the number of vCPUs reserved for the job.
	//
	// Each vCPU is equivalent to 1,024 CPU shares. This parameter maps to `CpuShares` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--cpu-shares` option to [docker run](https://docs.docker.com/engine/reference/run/) . The number of vCPUs must be specified, and it can be specified in several places; you must specify it at least once for each node.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-vcpus
	//
	Vcpus *float64 `field:"optional" json:"vcpus" yaml:"vcpus"`
	// A list of data volumes used in a job.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html#cfn-batch-jobdefinition-containerproperties-volumes
	//
	Volumes interface{} `field:"optional" json:"volumes" yaml:"volumes"`
}

Container properties are used for Amazon ECS based job definitions.

Use these properties to describe the container that's launched as part of a job.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var options interface{}

containerPropertiesProperty := &ContainerPropertiesProperty{
	Image: jsii.String("image"),

	// the properties below are optional
	Command: []*string{
		jsii.String("command"),
	},
	Environment: []interface{}{
		&EnvironmentProperty{
			Name: jsii.String("name"),
			Value: jsii.String("value"),
		},
	},
	EphemeralStorage: &EphemeralStorageProperty{
		SizeInGiB: jsii.Number(123),
	},
	ExecutionRoleArn: jsii.String("executionRoleArn"),
	FargatePlatformConfiguration: &FargatePlatformConfigurationProperty{
		PlatformVersion: jsii.String("platformVersion"),
	},
	InstanceType: jsii.String("instanceType"),
	JobRoleArn: jsii.String("jobRoleArn"),
	LinuxParameters: &LinuxParametersProperty{
		Devices: []interface{}{
			&DeviceProperty{
				ContainerPath: jsii.String("containerPath"),
				HostPath: jsii.String("hostPath"),
				Permissions: []*string{
					jsii.String("permissions"),
				},
			},
		},
		InitProcessEnabled: jsii.Boolean(false),
		MaxSwap: jsii.Number(123),
		SharedMemorySize: jsii.Number(123),
		Swappiness: jsii.Number(123),
		Tmpfs: []interface{}{
			&TmpfsProperty{
				ContainerPath: jsii.String("containerPath"),
				Size: jsii.Number(123),

				// the properties below are optional
				MountOptions: []*string{
					jsii.String("mountOptions"),
				},
			},
		},
	},
	LogConfiguration: &LogConfigurationProperty{
		LogDriver: jsii.String("logDriver"),

		// the properties below are optional
		Options: options,
		SecretOptions: []interface{}{
			&SecretProperty{
				Name: jsii.String("name"),
				ValueFrom: jsii.String("valueFrom"),
			},
		},
	},
	Memory: jsii.Number(123),
	MountPoints: []interface{}{
		&MountPointsProperty{
			ContainerPath: jsii.String("containerPath"),
			ReadOnly: jsii.Boolean(false),
			SourceVolume: jsii.String("sourceVolume"),
		},
	},
	NetworkConfiguration: &NetworkConfigurationProperty{
		AssignPublicIp: jsii.String("assignPublicIp"),
	},
	Privileged: jsii.Boolean(false),
	ReadonlyRootFilesystem: jsii.Boolean(false),
	ResourceRequirements: []interface{}{
		&ResourceRequirementProperty{
			Type: jsii.String("type"),
			Value: jsii.String("value"),
		},
	},
	RuntimePlatform: &RuntimePlatformProperty{
		CpuArchitecture: jsii.String("cpuArchitecture"),
		OperatingSystemFamily: jsii.String("operatingSystemFamily"),
	},
	Secrets: []interface{}{
		&SecretProperty{
			Name: jsii.String("name"),
			ValueFrom: jsii.String("valueFrom"),
		},
	},
	Ulimits: []interface{}{
		&UlimitProperty{
			HardLimit: jsii.Number(123),
			Name: jsii.String("name"),
			SoftLimit: jsii.Number(123),
		},
	},
	User: jsii.String("user"),
	Vcpus: jsii.Number(123),
	Volumes: []interface{}{
		&VolumesProperty{
			EfsVolumeConfiguration: &EfsVolumeConfigurationProperty{
				FileSystemId: jsii.String("fileSystemId"),

				// the properties below are optional
				AuthorizationConfig: &AuthorizationConfigProperty{
					AccessPointId: jsii.String("accessPointId"),
					Iam: jsii.String("iam"),
				},
				RootDirectory: jsii.String("rootDirectory"),
				TransitEncryption: jsii.String("transitEncryption"),
				TransitEncryptionPort: jsii.Number(123),
			},
			Host: &VolumesHostProperty{
				SourcePath: jsii.String("sourcePath"),
			},
			Name: jsii.String("name"),
		},
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-containerproperties.html
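The `Vcpus` documentation above notes that each vCPU maps to 1,024 Docker CPU shares. A minimal sketch of that conversion in plain Go (an illustration, not part of the CDK API):

```go
package main

import "fmt"

// cpuShares converts a Batch vCPU count to the Docker --cpu-shares
// value that the container runtime receives (1 vCPU == 1,024 shares).
func cpuShares(vcpus float64) int {
	return int(vcpus * 1024)
}

func main() {
	fmt.Println(cpuShares(2))   // two full vCPUs
	fmt.Println(cpuShares(0.5)) // fractional vCPUs, as used in Fargate-style sizing
}
```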

type CfnJobDefinition_DeviceProperty

type CfnJobDefinition_DeviceProperty struct {
	// The path inside the container that's used to expose the host device.
	//
	// By default, the `hostPath` value is used.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html#cfn-batch-jobdefinition-device-containerpath
	//
	ContainerPath *string `field:"optional" json:"containerPath" yaml:"containerPath"`
	// The path for the device on the host container instance.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html#cfn-batch-jobdefinition-device-hostpath
	//
	HostPath *string `field:"optional" json:"hostPath" yaml:"hostPath"`
	// The explicit permissions to provide to the container for the device.
	//
	// By default, the container has permissions for `read` , `write` , and `mknod` for the device.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html#cfn-batch-jobdefinition-device-permissions
	//
	Permissions *[]*string `field:"optional" json:"permissions" yaml:"permissions"`
}

An object that represents a container instance host device.

> This object isn't applicable to jobs that are running on Fargate resources and shouldn't be provided.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

deviceProperty := &DeviceProperty{
	ContainerPath: jsii.String("containerPath"),
	HostPath: jsii.String("hostPath"),
	Permissions: []*string{
		jsii.String("permissions"),
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-device.html

type CfnJobDefinition_EfsVolumeConfigurationProperty

type CfnJobDefinition_EfsVolumeConfigurationProperty struct {
	// The Amazon EFS file system ID to use.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-filesystemid
	//
	FileSystemId *string `field:"required" json:"fileSystemId" yaml:"fileSystemId"`
	// The authorization configuration details for the Amazon EFS file system.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-authorizationconfig
	//
	AuthorizationConfig interface{} `field:"optional" json:"authorizationConfig" yaml:"authorizationConfig"`
	// The directory within the Amazon EFS file system to mount as the root directory inside the host.
	//
	// If this parameter is omitted, the root of the Amazon EFS volume is used instead. Specifying `/` has the same effect as omitting this parameter. The maximum length is 4,096 characters.
	//
	// > If an EFS access point is specified in the `authorizationConfig` , the root directory parameter must either be omitted or set to `/` , which enforces the path set on the Amazon EFS access point.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-rootdirectory
	//
	RootDirectory *string `field:"optional" json:"rootDirectory" yaml:"rootDirectory"`
	// Determines whether to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server.
	//
	// Transit encryption must be enabled if Amazon EFS IAM authorization is used. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Encrypting data in transit](https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html) in the *Amazon Elastic File System User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-transitencryption
	//
	TransitEncryption *string `field:"optional" json:"transitEncryption" yaml:"transitEncryption"`
	// The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server.
	//
	// If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses. The value must be between 0 and 65,535. For more information, see [EFS mount helper](https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html) in the *Amazon Elastic File System User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html#cfn-batch-jobdefinition-efsvolumeconfiguration-transitencryptionport
	//
	TransitEncryptionPort *float64 `field:"optional" json:"transitEncryptionPort" yaml:"transitEncryptionPort"`
}

This is used when you're using an Amazon Elastic File System file system for job storage.

For more information, see [Amazon EFS Volumes](https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html) in the *AWS Batch User Guide* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

efsVolumeConfigurationProperty := &EfsVolumeConfigurationProperty{
	FileSystemId: jsii.String("fileSystemId"),

	// the properties below are optional
	AuthorizationConfig: &AuthorizationConfigProperty{
		AccessPointId: jsii.String("accessPointId"),
		Iam: jsii.String("iam"),
	},
	RootDirectory: jsii.String("rootDirectory"),
	TransitEncryption: jsii.String("transitEncryption"),
	TransitEncryptionPort: jsii.Number(123),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-efsvolumeconfiguration.html
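The field docs above state three constraints: `RootDirectory` is limited to 4,096 characters, it must be omitted or `/` when an access point is specified, and `TransitEncryptionPort` must fall between 0 and 65,535. A hedged sketch of those checks as a standalone helper (the function name and signature are illustrative, not CDK API):

```go
package main

import (
	"errors"
	"fmt"
)

// validateEfsConfig applies the constraints described in the
// EfsVolumeConfigurationProperty docs. Illustrative helper only.
func validateEfsConfig(accessPointID, rootDirectory string, transitEncryptionPort int) error {
	if len(rootDirectory) > 4096 {
		return errors.New("rootDirectory exceeds 4,096 characters")
	}
	// With an access point, the root directory must be omitted or "/".
	if accessPointID != "" && rootDirectory != "" && rootDirectory != "/" {
		return errors.New(`rootDirectory must be omitted or "/" when an access point is set`)
	}
	if transitEncryptionPort < 0 || transitEncryptionPort > 65535 {
		return errors.New("transitEncryptionPort must be between 0 and 65535")
	}
	return nil
}

func main() {
	// Violates the access-point rule: prints a non-nil error.
	fmt.Println(validateEfsConfig("fsap-0123", "/data", 2049))
	// Satisfies all constraints: prints <nil>.
	fmt.Println(validateEfsConfig("fsap-0123", "/", 2049))
}
```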

type CfnJobDefinition_EksContainerEnvironmentVariableProperty added in v2.51.0

type CfnJobDefinition_EksContainerEnvironmentVariableProperty struct {
	// The name of the environment variable.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainerenvironmentvariable.html#cfn-batch-jobdefinition-ekscontainerenvironmentvariable-name
	//
	Name *string `field:"required" json:"name" yaml:"name"`
	// The value of the environment variable.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainerenvironmentvariable.html#cfn-batch-jobdefinition-ekscontainerenvironmentvariable-value
	//
	Value *string `field:"optional" json:"value" yaml:"value"`
}

An environment variable.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

eksContainerEnvironmentVariableProperty := &EksContainerEnvironmentVariableProperty{
	Name: jsii.String("name"),

	// the properties below are optional
	Value: jsii.String("value"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainerenvironmentvariable.html

type CfnJobDefinition_EksContainerProperty added in v2.51.0

type CfnJobDefinition_EksContainerProperty struct {
	// The Docker image used to start the container.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainer.html#cfn-batch-jobdefinition-ekscontainer-image
	//
	Image *string `field:"required" json:"image" yaml:"image"`
	// An array of arguments to the entrypoint.
	//
	// If this isn't specified, the `CMD` of the container image is used. This corresponds to the `args` member in the [Entrypoint](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint) portion of the [Pod](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/) in Kubernetes. Environment variable references are expanded using the container's environment.
	//
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed. For example, if the reference is to " `$(NAME1)` " and the `NAME1` environment variable doesn't exist, the command string will remain " `$(NAME1)` ." `$$` is replaced with `$` , and the resulting string isn't expanded. For example, `$$(VAR_NAME)` is passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists. For more information, see [CMD](https://docs.docker.com/engine/reference/builder/#cmd) in the *Dockerfile reference* and [Define a command and arguments for a pod](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainer.html#cfn-batch-jobdefinition-ekscontainer-args
	//
	Args *[]*string `field:"optional" json:"args" yaml:"args"`
	// The entrypoint for the container.
	//
	// This isn't run within a shell. If this isn't specified, the `ENTRYPOINT` of the container image is used. Environment variable references are expanded using the container's environment.
	//
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed. For example, if the reference is to " `$(NAME1)` " and the `NAME1` environment variable doesn't exist, the command string will remain " `$(NAME1)` ." `$$` is replaced with `$` and the resulting string isn't expanded. For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists. The entrypoint can't be updated. For more information, see [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) in the *Dockerfile reference* and [Define a command and arguments for a container](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) and [Entrypoint](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainer.html#cfn-batch-jobdefinition-ekscontainer-command
	//
	Command *[]*string `field:"optional" json:"command" yaml:"command"`
	// The environment variables to pass to a container.
	//
	// > Environment variables cannot start with " `AWS_BATCH` ". This naming convention is reserved for variables that AWS Batch sets.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainer.html#cfn-batch-jobdefinition-ekscontainer-env
	//
	Env interface{} `field:"optional" json:"env" yaml:"env"`
	// The image pull policy for the container.
	//
	// Supported values are `Always` , `IfNotPresent` , and `Never` . This parameter defaults to `IfNotPresent` . However, if the `:latest` tag is specified, it defaults to `Always` . For more information, see [Updating images](https://kubernetes.io/docs/concepts/containers/images/#updating-images) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainer.html#cfn-batch-jobdefinition-ekscontainer-imagepullpolicy
	//
	ImagePullPolicy *string `field:"optional" json:"imagePullPolicy" yaml:"imagePullPolicy"`
	// The name of the container.
	//
	// If the name isn't specified, the default name " `Default` " is used. Each container in a pod must have a unique name.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainer.html#cfn-batch-jobdefinition-ekscontainer-name
	//
	Name *string `field:"optional" json:"name" yaml:"name"`
	// The type and amount of resources to assign to a container.
	//
	// The supported resources include `memory` , `cpu` , and `nvidia.com/gpu` . For more information, see [Resource management for pods and containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainer.html#cfn-batch-jobdefinition-ekscontainer-resources
	//
	Resources interface{} `field:"optional" json:"resources" yaml:"resources"`
	// The security context for a job.
	//
	// For more information, see [Configure a security context for a pod or container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainer.html#cfn-batch-jobdefinition-ekscontainer-securitycontext
	//
	SecurityContext interface{} `field:"optional" json:"securityContext" yaml:"securityContext"`
	// The volume mounts for the container.
	//
	// AWS Batch supports `emptyDir` , `hostPath` , and `secret` volume types. For more information about volumes and volume mounts in Kubernetes, see [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainer.html#cfn-batch-jobdefinition-ekscontainer-volumemounts
	//
	VolumeMounts interface{} `field:"optional" json:"volumeMounts" yaml:"volumeMounts"`
}

EKS container properties are used in Amazon EKS based job definitions to describe the properties for a container node in the pod that's launched as part of a job.

This can't be specified for Amazon ECS based job definitions.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var limits interface{}
var requests interface{}

eksContainerProperty := &EksContainerProperty{
	Image: jsii.String("image"),

	// the properties below are optional
	Args: []*string{
		jsii.String("args"),
	},
	Command: []*string{
		jsii.String("command"),
	},
	Env: []interface{}{
		&EksContainerEnvironmentVariableProperty{
			Name: jsii.String("name"),

			// the properties below are optional
			Value: jsii.String("value"),
		},
	},
	ImagePullPolicy: jsii.String("imagePullPolicy"),
	Name: jsii.String("name"),
	Resources: &ResourcesProperty{
		Limits: limits,
		Requests: requests,
	},
	SecurityContext: &SecurityContextProperty{
		Privileged: jsii.Boolean(false),
		ReadOnlyRootFilesystem: jsii.Boolean(false),
		RunAsGroup: jsii.Number(123),
		RunAsNonRoot: jsii.Boolean(false),
		RunAsUser: jsii.Number(123),
	},
	VolumeMounts: []interface{}{
		&EksContainerVolumeMountProperty{
			MountPath: jsii.String("mountPath"),
			Name: jsii.String("name"),
			ReadOnly: jsii.Boolean(false),
		},
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainer.html
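The `Args` and `Command` docs above quote the Kubernetes expansion rules: `$(NAME)` is replaced when the variable is set, left untouched when it isn't, and `$$` escapes to a literal `$` whose result is never re-expanded. A sketch of that documented behavior in plain Go (an illustration of the rules, not code from this module):

```go
package main

import "fmt"

// expandRefs applies the documented Kubernetes reference-expansion
// rules to a single command-line token.
func expandRefs(s string, env map[string]string) string {
	var out []byte
	for i := 0; i < len(s); {
		// "$$" escapes to a literal "$" and is not re-expanded.
		if s[i] == '$' && i+1 < len(s) && s[i+1] == '$' {
			out = append(out, '$')
			i += 2
			continue
		}
		// "$(NAME)" expands when NAME is set; otherwise it stays as-is.
		if s[i] == '$' && i+1 < len(s) && s[i+1] == '(' {
			if j := indexByteFrom(s, i+2, ')'); j >= 0 {
				name := s[i+2 : j]
				if v, ok := env[name]; ok {
					out = append(out, v...)
				} else {
					out = append(out, s[i:j+1]...) // unknown reference is unchanged
				}
				i = j + 1
				continue
			}
		}
		out = append(out, s[i])
		i++
	}
	return string(out)
}

func indexByteFrom(s string, from int, b byte) int {
	for i := from; i < len(s); i++ {
		if s[i] == b {
			return i
		}
	}
	return -1
}

func main() {
	env := map[string]string{"NAME1": "hello"}
	fmt.Println(expandRefs("$(NAME1) $(NAME2) $$(NAME1)", env))
}
```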

type CfnJobDefinition_EksContainerVolumeMountProperty added in v2.51.0

type CfnJobDefinition_EksContainerVolumeMountProperty struct {
	// The path on the container where the volume is mounted.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainervolumemount.html#cfn-batch-jobdefinition-ekscontainervolumemount-mountpath
	//
	MountPath *string `field:"optional" json:"mountPath" yaml:"mountPath"`
	// The name of the volume mount.
	//
	// This must match the name of one of the volumes in the pod.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainervolumemount.html#cfn-batch-jobdefinition-ekscontainervolumemount-name
	//
	Name *string `field:"optional" json:"name" yaml:"name"`
	// If this value is `true` , the container has read-only access to the volume.
	//
	// Otherwise, the container can write to the volume. The default value is `false` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainervolumemount.html#cfn-batch-jobdefinition-ekscontainervolumemount-readonly
	//
	ReadOnly interface{} `field:"optional" json:"readOnly" yaml:"readOnly"`
}

The volume mounts for a container for an Amazon EKS job.

For more information about volumes and volume mounts in Kubernetes, see [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) in the *Kubernetes documentation* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

eksContainerVolumeMountProperty := &EksContainerVolumeMountProperty{
	MountPath: jsii.String("mountPath"),
	Name: jsii.String("name"),
	ReadOnly: jsii.Boolean(false),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekscontainervolumemount.html

type CfnJobDefinition_EksPropertiesProperty added in v2.51.0

type CfnJobDefinition_EksPropertiesProperty struct {
	// The properties for the Kubernetes pod resources of a job.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-eksproperties.html#cfn-batch-jobdefinition-eksproperties-podproperties
	//
	PodProperties interface{} `field:"optional" json:"podProperties" yaml:"podProperties"`
}

An object that contains the properties for the Kubernetes resources of a job.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var labels interface{}
var limits interface{}
var requests interface{}

eksPropertiesProperty := &EksPropertiesProperty{
	PodProperties: &PodPropertiesProperty{
		Containers: []interface{}{
			&EksContainerProperty{
				Image: jsii.String("image"),

				// the properties below are optional
				Args: []*string{
					jsii.String("args"),
				},
				Command: []*string{
					jsii.String("command"),
				},
				Env: []interface{}{
					&EksContainerEnvironmentVariableProperty{
						Name: jsii.String("name"),

						// the properties below are optional
						Value: jsii.String("value"),
					},
				},
				ImagePullPolicy: jsii.String("imagePullPolicy"),
				Name: jsii.String("name"),
				Resources: &ResourcesProperty{
					Limits: limits,
					Requests: requests,
				},
				SecurityContext: &SecurityContextProperty{
					Privileged: jsii.Boolean(false),
					ReadOnlyRootFilesystem: jsii.Boolean(false),
					RunAsGroup: jsii.Number(123),
					RunAsNonRoot: jsii.Boolean(false),
					RunAsUser: jsii.Number(123),
				},
				VolumeMounts: []interface{}{
					&EksContainerVolumeMountProperty{
						MountPath: jsii.String("mountPath"),
						Name: jsii.String("name"),
						ReadOnly: jsii.Boolean(false),
					},
				},
			},
		},
		DnsPolicy: jsii.String("dnsPolicy"),
		HostNetwork: jsii.Boolean(false),
		Metadata: &MetadataProperty{
			Labels: labels,
		},
		ServiceAccountName: jsii.String("serviceAccountName"),
		Volumes: []interface{}{
			&EksVolumeProperty{
				Name: jsii.String("name"),

				// the properties below are optional
				EmptyDir: &EmptyDirProperty{
					Medium: jsii.String("medium"),
					SizeLimit: jsii.String("sizeLimit"),
				},
				HostPath: &HostPathProperty{
					Path: jsii.String("path"),
				},
				Secret: &EksSecretProperty{
					SecretName: jsii.String("secretName"),

					// the properties below are optional
					Optional: jsii.Boolean(false),
				},
			},
		},
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-eksproperties.html

type CfnJobDefinition_EksSecretProperty added in v2.78.0

type CfnJobDefinition_EksSecretProperty struct {
	// The name of the secret.
	//
	// The name must be allowed as a DNS subdomain name. For more information, see [DNS subdomain names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekssecret.html#cfn-batch-jobdefinition-ekssecret-secretname
	//
	SecretName *string `field:"required" json:"secretName" yaml:"secretName"`
	// Specifies whether the secret or the secret's keys must be defined.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekssecret.html#cfn-batch-jobdefinition-ekssecret-optional
	//
	Optional interface{} `field:"optional" json:"optional" yaml:"optional"`
}

Specifies the configuration of a Kubernetes `secret` volume.

For more information, see [secret](https://kubernetes.io/docs/concepts/storage/volumes/#secret) in the *Kubernetes documentation* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

eksSecretProperty := &EksSecretProperty{
	SecretName: jsii.String("secretName"),

	// the properties below are optional
	Optional: jsii.Boolean(false),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ekssecret.html

type CfnJobDefinition_EksVolumeProperty added in v2.51.0

type CfnJobDefinition_EksVolumeProperty struct {
	// The name of the volume.
	//
	// The name must be allowed as a DNS subdomain name. For more information, see [DNS subdomain names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-eksvolume.html#cfn-batch-jobdefinition-eksvolume-name
	//
	Name *string `field:"required" json:"name" yaml:"name"`
	// Specifies the configuration of a Kubernetes `emptyDir` volume.
	//
	// For more information, see [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-eksvolume.html#cfn-batch-jobdefinition-eksvolume-emptydir
	//
	EmptyDir interface{} `field:"optional" json:"emptyDir" yaml:"emptyDir"`
	// Specifies the configuration of a Kubernetes `hostPath` volume.
	//
	// For more information, see [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-eksvolume.html#cfn-batch-jobdefinition-eksvolume-hostpath
	//
	HostPath interface{} `field:"optional" json:"hostPath" yaml:"hostPath"`
	// Specifies the configuration of a Kubernetes `secret` volume.
	//
	// For more information, see [secret](https://kubernetes.io/docs/concepts/storage/volumes/#secret) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-eksvolume.html#cfn-batch-jobdefinition-eksvolume-secret
	//
	Secret interface{} `field:"optional" json:"secret" yaml:"secret"`
}

Specifies an Amazon EKS volume for a job definition.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

eksVolumeProperty := &EksVolumeProperty{
	Name: jsii.String("name"),

	// the properties below are optional
	EmptyDir: &EmptyDirProperty{
		Medium: jsii.String("medium"),
		SizeLimit: jsii.String("sizeLimit"),
	},
	HostPath: &HostPathProperty{
		Path: jsii.String("path"),
	},
	Secret: &EksSecretProperty{
		SecretName: jsii.String("secretName"),

		// the properties below are optional
		Optional: jsii.Boolean(false),
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-eksvolume.html
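The `Name` field above must be a valid Kubernetes DNS subdomain name (RFC 1123): segments of lowercase alphanumerics and `-`, joined by `.`, at most 253 characters. A sketch of that check in plain Go (illustrative only; the CDK does not expose this helper):

```go
package main

import (
	"fmt"
	"regexp"
)

// dnsSubdomain mirrors the RFC 1123 subdomain rule that Kubernetes
// applies to volume and secret names.
var dnsSubdomain = regexp.MustCompile(
	`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

func isDNSSubdomainName(name string) bool {
	return len(name) <= 253 && dnsSubdomain.MatchString(name)
}

func main() {
	fmt.Println(isDNSSubdomainName("scratch-volume")) // valid
	fmt.Println(isDNSSubdomainName("Scratch_Volume")) // invalid: uppercase and underscore
}
```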

type CfnJobDefinition_EmptyDirProperty added in v2.51.0

type CfnJobDefinition_EmptyDirProperty struct {
	// The medium to store the volume.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-emptydir.html#cfn-batch-jobdefinition-emptydir-medium
	//
	Medium *string `field:"optional" json:"medium" yaml:"medium"`
	// The maximum size of the volume.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-emptydir.html#cfn-batch-jobdefinition-emptydir-sizelimit
	//
	SizeLimit *string `field:"optional" json:"sizeLimit" yaml:"sizeLimit"`
}

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

emptyDirProperty := &EmptyDirProperty{
	Medium: jsii.String("medium"),
	SizeLimit: jsii.String("sizeLimit"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-emptydir.html

type CfnJobDefinition_EnvironmentProperty

type CfnJobDefinition_EnvironmentProperty struct {
	// The name of the environment variable.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-environment.html#cfn-batch-jobdefinition-environment-name
	//
	Name *string `field:"optional" json:"name" yaml:"name"`
	// The value of the environment variable.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-environment.html#cfn-batch-jobdefinition-environment-value
	//
	Value *string `field:"optional" json:"value" yaml:"value"`
}

The Environment property type specifies environment variables to use in a job definition.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

environmentProperty := &EnvironmentProperty{
	Name: jsii.String("name"),
	Value: jsii.String("value"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-environment.html

type CfnJobDefinition_EphemeralStorageProperty added in v2.78.0

type CfnJobDefinition_EphemeralStorageProperty struct {
	// The total amount, in GiB, of ephemeral storage to set for the task.
	//
	// The minimum supported value is `21` GiB and the maximum supported value is `200` GiB.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ephemeralstorage.html#cfn-batch-jobdefinition-ephemeralstorage-sizeingib
	//
	SizeInGiB *float64 `field:"required" json:"sizeInGiB" yaml:"sizeInGiB"`
}

The amount of ephemeral storage to allocate for the task.

This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on AWS Fargate .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

ephemeralStorageProperty := &EphemeralStorageProperty{
	SizeInGiB: jsii.Number(123),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ephemeralstorage.html
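The `SizeInGiB` field above is bounded: at least 21 GiB and at most 200 GiB. A minimal sketch of that range check (illustrative helper, not CDK API):

```go
package main

import "fmt"

// validSizeInGiB checks the documented Fargate ephemeral-storage
// bounds for a Batch job definition: 21 GiB <= size <= 200 GiB.
func validSizeInGiB(size float64) bool {
	return size >= 21 && size <= 200
}

func main() {
	fmt.Println(validSizeInGiB(100)) // within bounds
	fmt.Println(validSizeInGiB(20))  // below the 21 GiB minimum
}
```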

type CfnJobDefinition_EvaluateOnExitProperty

type CfnJobDefinition_EvaluateOnExitProperty struct {
	// Specifies the action to take if all of the specified conditions ( `onStatusReason` , `onReason` , and `onExitCode` ) are met.
	//
	// The values aren't case sensitive.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-action
	//
	Action *string `field:"required" json:"action" yaml:"action"`
	// Contains a glob pattern to match against the decimal representation of the `ExitCode` returned for a job.
	//
	// The pattern can be up to 512 characters long. It can contain only numbers, and can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-onexitcode
	//
	OnExitCode *string `field:"optional" json:"onExitCode" yaml:"onExitCode"`
	// Contains a glob pattern to match against the `Reason` returned for a job.
	//
	// The pattern can contain up to 512 characters. It can contain letters, numbers, periods (.), colons (:), and white space (including spaces and tabs). It can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-onreason
	//
	OnReason *string `field:"optional" json:"onReason" yaml:"onReason"`
	// Contains a glob pattern to match against the `StatusReason` returned for a job.
	//
	// The pattern can contain up to 512 characters. It can contain letters, numbers, periods (.), colons (:), and white spaces (including spaces or tabs). It can optionally end with an asterisk (*) so that only the start of the string needs to be an exact match.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html#cfn-batch-jobdefinition-evaluateonexit-onstatusreason
	//
	OnStatusReason *string `field:"optional" json:"onStatusReason" yaml:"onStatusReason"`
}

Specifies an array of up to 5 conditions to be met, and an action to take ( `RETRY` or `EXIT` ) if all conditions are met.

If none of the `EvaluateOnExit` conditions in a `RetryStrategy` match, then the job is retried.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

evaluateOnExitProperty := &EvaluateOnExitProperty{
	Action: jsii.String("action"),

	// the properties below are optional
	OnExitCode: jsii.String("onExitCode"),
	OnReason: jsii.String("onReason"),
	OnStatusReason: jsii.String("onStatusReason"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-evaluateonexit.html
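To illustrate the `OnExitCode` matching rule described above (exact match against the decimal exit code, with a trailing asterisk making only the prefix significant), here is a minimal, self-contained Go sketch; `matchExitCodeGlob` is a hypothetical helper, not part of this package or the Batch API:

```go
package main

import (
	"fmt"
	"strings"
)

// matchExitCodeGlob compares a pattern against the decimal representation of
// a job's exit code. A trailing "*" means only the start of the string needs
// to be an exact match; otherwise the whole string must match exactly.
func matchExitCodeGlob(pattern, exitCode string) bool {
	if strings.HasSuffix(pattern, "*") {
		return strings.HasPrefix(exitCode, strings.TrimSuffix(pattern, "*"))
	}
	return pattern == exitCode
}

func main() {
	fmt.Println(matchExitCodeGlob("40*", "404")) // true: prefix match
	fmt.Println(matchExitCodeGlob("1", "137"))   // false: exact match required
}
```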

type CfnJobDefinition_FargatePlatformConfigurationProperty

type CfnJobDefinition_FargatePlatformConfigurationProperty struct {
	// The AWS Fargate platform version where the jobs are running.
	//
	// A platform version is specified only for jobs that are running on Fargate resources. If one isn't specified, the `LATEST` platform version is used by default. This uses a recent, approved version of the AWS Fargate platform for compute resources. For more information, see [AWS Fargate platform versions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html) in the *Amazon Elastic Container Service Developer Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-fargateplatformconfiguration.html#cfn-batch-jobdefinition-fargateplatformconfiguration-platformversion
	//
	PlatformVersion *string `field:"optional" json:"platformVersion" yaml:"platformVersion"`
}

The platform configuration for jobs that are running on Fargate resources.

Jobs that run on EC2 resources must not specify this parameter.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

fargatePlatformConfigurationProperty := &FargatePlatformConfigurationProperty{
	PlatformVersion: jsii.String("platformVersion"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-fargateplatformconfiguration.html

type CfnJobDefinition_HostPathProperty added in v2.51.0

type CfnJobDefinition_HostPathProperty struct {
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-hostpath.html#cfn-batch-jobdefinition-hostpath-path
	//
	Path *string `field:"optional" json:"path" yaml:"path"`
}

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

hostPathProperty := &HostPathProperty{
	Path: jsii.String("path"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-hostpath.html

type CfnJobDefinition_LinuxParametersProperty

type CfnJobDefinition_LinuxParametersProperty struct {
	// Any of the host devices to expose to the container.
	//
	// This parameter maps to `Devices` in the [Create a container](https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.docker.com/engine/api/v1.23/) and the `--device` option to [docker run](https://docs.docker.com/engine/reference/run/) .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-linuxparameters.html#cfn-batch-jobdefinition-linuxparameters-devices
	//
	Devices interface{} `field:"optional" json:"devices" yaml:"devices"`
	// If true, run an `init` process inside the container that forwards signals and reaps processes.
	//
	// This parameter maps to the `--init` option to [docker run](https://docs.docker.com/engine/reference/run/) . This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-linuxparameters.html#cfn-batch-jobdefinition-linuxparameters-initprocessenabled
	//
	InitProcessEnabled interface{} `field:"optional" json:"initProcessEnabled" yaml:"initProcessEnabled"`
	// The total amount of swap memory (in MiB) a container can use.
	//
	// This parameter is translated to the `--memory-swap` option to [docker run](https://docs.docker.com/engine/reference/run/) where the value is the sum of the container memory plus the `maxSwap` value. For more information, see [`--memory-swap` details](https://docs.docker.com/config/containers/resource_constraints/#--memory-swap-details) in the Docker documentation.
	//
	// If a `maxSwap` value of `0` is specified, the container doesn't use swap. Accepted values are `0` or any positive integer. If the `maxSwap` parameter is omitted, the container doesn't use the swap configuration for the container instance that it's running on. A `maxSwap` value must be set for the `swappiness` parameter to be used.
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-linuxparameters.html#cfn-batch-jobdefinition-linuxparameters-maxswap
	//
	MaxSwap *float64 `field:"optional" json:"maxSwap" yaml:"maxSwap"`
	// The value for the size (in MiB) of the `/dev/shm` volume.
	//
	// This parameter maps to the `--shm-size` option to [docker run](https://docs.docker.com/engine/reference/run/) .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-linuxparameters.html#cfn-batch-jobdefinition-linuxparameters-sharedmemorysize
	//
	SharedMemorySize *float64 `field:"optional" json:"sharedMemorySize" yaml:"sharedMemorySize"`
	// You can use this parameter to tune a container's memory swappiness behavior.
	//
	// A `swappiness` value of `0` causes swapping to not occur unless absolutely necessary. A `swappiness` value of `100` causes pages to be swapped aggressively. Valid values are whole numbers between `0` and `100` . If the `swappiness` parameter isn't specified, a default value of `60` is used. If a value isn't specified for `maxSwap` , then this parameter is ignored. If `maxSwap` is set to 0, the container doesn't use swap. This parameter maps to the `--memory-swappiness` option to [docker run](https://docs.docker.com/engine/reference/run/) .
	//
	// Consider the following when you use a per-container swap configuration.
	//
	// - Swap space must be enabled and allocated on the container instance for the containers to use.
	//
	// > By default, the Amazon ECS optimized AMIs don't have swap enabled. You must enable swap on the instance to use this feature. For more information, see [Instance store swap volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-swap-volumes.html) in the *Amazon EC2 User Guide for Linux Instances* or [How do I allocate memory to work as swap space in an Amazon EC2 instance by using a swap file?](https://docs.aws.amazon.com/premiumsupport/knowledge-center/ec2-memory-swap-file/)
	// - The swap space parameters are only supported for job definitions using EC2 resources.
	// - If the `maxSwap` and `swappiness` parameters are omitted from a job definition, each container has a default `swappiness` value of 60. Moreover, the total swap usage is limited to two times the memory reservation of the container.
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide it for these jobs.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-linuxparameters.html#cfn-batch-jobdefinition-linuxparameters-swappiness
	//
	Swappiness *float64 `field:"optional" json:"swappiness" yaml:"swappiness"`
	// The container path, mount options, and size (in MiB) of the `tmpfs` mount.
	//
	// This parameter maps to the `--tmpfs` option to [docker run](https://docs.docker.com/engine/reference/run/) .
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources. Don't provide this parameter for this resource type.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-linuxparameters.html#cfn-batch-jobdefinition-linuxparameters-tmpfs
	//
	Tmpfs interface{} `field:"optional" json:"tmpfs" yaml:"tmpfs"`
}

Linux-specific modifications that are applied to the container, such as details for device mappings.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

linuxParametersProperty := &LinuxParametersProperty{
	Devices: []interface{}{
		&DeviceProperty{
			ContainerPath: jsii.String("containerPath"),
			HostPath: jsii.String("hostPath"),
			Permissions: []*string{
				jsii.String("permissions"),
			},
		},
	},
	InitProcessEnabled: jsii.Boolean(false),
	MaxSwap: jsii.Number(123),
	SharedMemorySize: jsii.Number(123),
	Swappiness: jsii.Number(123),
	Tmpfs: []interface{}{
		&TmpfsProperty{
			ContainerPath: jsii.String("containerPath"),
			Size: jsii.Number(123),

			// the properties below are optional
			MountOptions: []*string{
				jsii.String("mountOptions"),
			},
		},
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-linuxparameters.html
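The swap semantics described for `MaxSwap` and `Swappiness` (the `--memory-swap` value is container memory plus `maxSwap`, a `maxSwap` of `0` disables swap, and `swappiness` defaults to `60` and is ignored unless `maxSwap` is set) can be sketched in plain Go. The `swapFlags` helper below is illustrative only, not part of this package:

```go
package main

import "fmt"

// swapFlags shows how the maxSwap and swappiness fields translate into
// docker run options. maxSwap and swappiness are pointers so that "omitted"
// can be distinguished from an explicit zero.
func swapFlags(memoryMiB int, maxSwap, swappiness *int) []string {
	if maxSwap == nil {
		return nil // maxSwap omitted: no swap flags are emitted; swappiness is ignored
	}
	if *maxSwap == 0 {
		// --memory-swap equal to the memory limit means the container uses no swap.
		return []string{fmt.Sprintf("--memory-swap=%dm", memoryMiB)}
	}
	s := 60 // default swappiness when not specified
	if swappiness != nil {
		s = *swappiness
	}
	return []string{
		// The value is the sum of the container memory plus maxSwap.
		fmt.Sprintf("--memory-swap=%dm", memoryMiB+*maxSwap),
		fmt.Sprintf("--memory-swappiness=%d", s),
	}
}

func main() {
	max := 256
	fmt.Println(swapFlags(1024, &max, nil))
}
```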

type CfnJobDefinition_LogConfigurationProperty

type CfnJobDefinition_LogConfigurationProperty struct {
	// The log driver to use for the container.
	//
	// The valid values that are listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.
	//
	// The supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `logentries` , `syslog` , and `splunk` .
	//
	// > Jobs that are running on Fargate resources are restricted to the `awslogs` and `splunk` log drivers.
	//
	// - **awslogs** - Specifies the Amazon CloudWatch Logs logging driver. For more information, see [Using the awslogs log driver](https://docs.aws.amazon.com/batch/latest/userguide/using_awslogs.html) in the *AWS Batch User Guide* and [Amazon CloudWatch Logs logging driver](https://docs.docker.com/config/containers/logging/awslogs/) in the Docker documentation.
	// - **fluentd** - Specifies the Fluentd logging driver. For more information including usage and options, see [Fluentd logging driver](https://docs.docker.com/config/containers/logging/fluentd/) in the *Docker documentation* .
	// - **gelf** - Specifies the Graylog Extended Format (GELF) logging driver. For more information including usage and options, see [Graylog Extended Format logging driver](https://docs.docker.com/config/containers/logging/gelf/) in the *Docker documentation* .
	// - **journald** - Specifies the journald logging driver. For more information including usage and options, see [Journald logging driver](https://docs.docker.com/config/containers/logging/journald/) in the *Docker documentation* .
	// - **json-file** - Specifies the JSON file logging driver. For more information including usage and options, see [JSON File logging driver](https://docs.docker.com/config/containers/logging/json-file/) in the *Docker documentation* .
	// - **splunk** - Specifies the Splunk logging driver. For more information including usage and options, see [Splunk logging driver](https://docs.docker.com/config/containers/logging/splunk/) in the *Docker documentation* .
	// - **syslog** - Specifies the syslog logging driver. For more information including usage and options, see [Syslog logging driver](https://docs.docker.com/config/containers/logging/syslog/) in the *Docker documentation* .
	//
	// > If you have a custom driver that's not listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's [available on GitHub](https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software.
	//
	// This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-logconfiguration.html#cfn-batch-jobdefinition-logconfiguration-logdriver
	//
	LogDriver *string `field:"required" json:"logDriver" yaml:"logDriver"`
	// The configuration options to send to the log driver.
	//
	// This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version | grep "Server API version"`
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-logconfiguration.html#cfn-batch-jobdefinition-logconfiguration-options
	//
	Options interface{} `field:"optional" json:"options" yaml:"options"`
	// The secrets to pass to the log configuration.
	//
	// For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-logconfiguration.html#cfn-batch-jobdefinition-logconfiguration-secretoptions
	//
	SecretOptions interface{} `field:"optional" json:"secretOptions" yaml:"secretOptions"`
}

Log configuration options to send to a custom log driver for the container.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var options interface{}

logConfigurationProperty := &LogConfigurationProperty{
	LogDriver: jsii.String("logDriver"),

	// the properties below are optional
	Options: options,
	SecretOptions: []interface{}{
		&SecretProperty{
			Name: jsii.String("name"),
			ValueFrom: jsii.String("valueFrom"),
		},
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-logconfiguration.html
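The `LogDriver` constraints above (the agent's supported driver set, and the Fargate restriction to `awslogs` and `splunk`) lend themselves to a small validation sketch; `validateLogDriver` is a hypothetical helper and not part of this package:

```go
package main

import "fmt"

// validateLogDriver checks a log driver name against the set the Amazon ECS
// container agent supports by default, and applies the additional restriction
// for jobs running on Fargate resources.
func validateLogDriver(driver string, onFargate bool) error {
	supported := map[string]bool{
		"awslogs": true, "fluentd": true, "gelf": true, "journald": true,
		"json-file": true, "logentries": true, "splunk": true, "syslog": true,
	}
	if !supported[driver] {
		return fmt.Errorf("unsupported log driver %q", driver)
	}
	// Fargate jobs are restricted to the awslogs and splunk drivers.
	if onFargate && driver != "awslogs" && driver != "splunk" {
		return fmt.Errorf("log driver %q is not available on Fargate", driver)
	}
	return nil
}

func main() {
	fmt.Println(validateLogDriver("awslogs", true))
	fmt.Println(validateLogDriver("journald", true))
}
```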

type CfnJobDefinition_MetadataProperty added in v2.78.0

type CfnJobDefinition_MetadataProperty struct {
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-metadata.html#cfn-batch-jobdefinition-metadata-labels
	//
	Labels interface{} `field:"optional" json:"labels" yaml:"labels"`
}

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var labels interface{}

metadataProperty := &MetadataProperty{
	Labels: labels,
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-metadata.html

type CfnJobDefinition_MountPointsProperty

type CfnJobDefinition_MountPointsProperty struct {
	// The path on the container where the host volume is mounted.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html#cfn-batch-jobdefinition-mountpoints-containerpath
	//
	ContainerPath *string `field:"optional" json:"containerPath" yaml:"containerPath"`
	// If this value is `true` , the container has read-only access to the volume.
	//
	// Otherwise, the container can write to the volume. The default value is `false` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html#cfn-batch-jobdefinition-mountpoints-readonly
	//
	ReadOnly interface{} `field:"optional" json:"readOnly" yaml:"readOnly"`
	// The name of the volume to mount.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html#cfn-batch-jobdefinition-mountpoints-sourcevolume
	//
	SourceVolume *string `field:"optional" json:"sourceVolume" yaml:"sourceVolume"`
}

Details for a Docker volume mount point that's used in a job's container properties.

This parameter maps to `Volumes` in the [Create a container](https://docs.docker.com/engine/api/v1.43/#tag/Container/operation/ContainerCreate) section of the *Docker Remote API* and the `--volume` option to docker run.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

mountPointsProperty := &MountPointsProperty{
	ContainerPath: jsii.String("containerPath"),
	ReadOnly: jsii.Boolean(false),
	SourceVolume: jsii.String("sourceVolume"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-mountpoints.html

type CfnJobDefinition_NetworkConfigurationProperty

type CfnJobDefinition_NetworkConfigurationProperty struct {
	// Indicates whether the job has a public IP address.
	//
	// For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet (for example, to pull container images), the private subnet requires that a NAT gateway be attached to route requests to the internet. For more information, see [Amazon ECS task networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) in the *Amazon Elastic Container Service Developer Guide* . The default value is " `DISABLED` ".
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-networkconfiguration.html#cfn-batch-jobdefinition-networkconfiguration-assignpublicip
	//
	AssignPublicIp *string `field:"optional" json:"assignPublicIp" yaml:"assignPublicIp"`
}

The network configuration for jobs that are running on Fargate resources.

Jobs that are running on EC2 resources must not specify this parameter.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

networkConfigurationProperty := &NetworkConfigurationProperty{
	AssignPublicIp: jsii.String("assignPublicIp"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-networkconfiguration.html

type CfnJobDefinition_NodePropertiesProperty

type CfnJobDefinition_NodePropertiesProperty struct {
	// Specifies the node index for the main node of a multi-node parallel job.
	//
	// This node index value must be less than the number of nodes.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html#cfn-batch-jobdefinition-nodeproperties-mainnode
	//
	MainNode *float64 `field:"required" json:"mainNode" yaml:"mainNode"`
	// A list of node ranges and their properties that are associated with a multi-node parallel job.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html#cfn-batch-jobdefinition-nodeproperties-noderangeproperties
	//
	NodeRangeProperties interface{} `field:"required" json:"nodeRangeProperties" yaml:"nodeRangeProperties"`
	// The number of nodes that are associated with a multi-node parallel job.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html#cfn-batch-jobdefinition-nodeproperties-numnodes
	//
	NumNodes *float64 `field:"required" json:"numNodes" yaml:"numNodes"`
}

An object that represents the node properties of a multi-node parallel job.

> Node properties can't be specified for Amazon EKS based job definitions.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var options interface{}

nodePropertiesProperty := &NodePropertiesProperty{
	MainNode: jsii.Number(123),
	NodeRangeProperties: []interface{}{
		&NodeRangePropertyProperty{
			TargetNodes: jsii.String("targetNodes"),

			// the properties below are optional
			Container: &ContainerPropertiesProperty{
				Image: jsii.String("image"),

				// the properties below are optional
				Command: []*string{
					jsii.String("command"),
				},
				Environment: []interface{}{
					&EnvironmentProperty{
						Name: jsii.String("name"),
						Value: jsii.String("value"),
					},
				},
				EphemeralStorage: &EphemeralStorageProperty{
					SizeInGiB: jsii.Number(123),
				},
				ExecutionRoleArn: jsii.String("executionRoleArn"),
				FargatePlatformConfiguration: &FargatePlatformConfigurationProperty{
					PlatformVersion: jsii.String("platformVersion"),
				},
				InstanceType: jsii.String("instanceType"),
				JobRoleArn: jsii.String("jobRoleArn"),
				LinuxParameters: &LinuxParametersProperty{
					Devices: []interface{}{
						&DeviceProperty{
							ContainerPath: jsii.String("containerPath"),
							HostPath: jsii.String("hostPath"),
							Permissions: []*string{
								jsii.String("permissions"),
							},
						},
					},
					InitProcessEnabled: jsii.Boolean(false),
					MaxSwap: jsii.Number(123),
					SharedMemorySize: jsii.Number(123),
					Swappiness: jsii.Number(123),
					Tmpfs: []interface{}{
						&TmpfsProperty{
							ContainerPath: jsii.String("containerPath"),
							Size: jsii.Number(123),

							// the properties below are optional
							MountOptions: []*string{
								jsii.String("mountOptions"),
							},
						},
					},
				},
				LogConfiguration: &LogConfigurationProperty{
					LogDriver: jsii.String("logDriver"),

					// the properties below are optional
					Options: options,
					SecretOptions: []interface{}{
						&SecretProperty{
							Name: jsii.String("name"),
							ValueFrom: jsii.String("valueFrom"),
						},
					},
				},
				Memory: jsii.Number(123),
				MountPoints: []interface{}{
					&MountPointsProperty{
						ContainerPath: jsii.String("containerPath"),
						ReadOnly: jsii.Boolean(false),
						SourceVolume: jsii.String("sourceVolume"),
					},
				},
				NetworkConfiguration: &NetworkConfigurationProperty{
					AssignPublicIp: jsii.String("assignPublicIp"),
				},
				Privileged: jsii.Boolean(false),
				ReadonlyRootFilesystem: jsii.Boolean(false),
				ResourceRequirements: []interface{}{
					&ResourceRequirementProperty{
						Type: jsii.String("type"),
						Value: jsii.String("value"),
					},
				},
				RuntimePlatform: &RuntimePlatformProperty{
					CpuArchitecture: jsii.String("cpuArchitecture"),
					OperatingSystemFamily: jsii.String("operatingSystemFamily"),
				},
				Secrets: []interface{}{
					&SecretProperty{
						Name: jsii.String("name"),
						ValueFrom: jsii.String("valueFrom"),
					},
				},
				Ulimits: []interface{}{
					&UlimitProperty{
						HardLimit: jsii.Number(123),
						Name: jsii.String("name"),
						SoftLimit: jsii.Number(123),
					},
				},
				User: jsii.String("user"),
				Vcpus: jsii.Number(123),
				Volumes: []interface{}{
					&VolumesProperty{
						EfsVolumeConfiguration: &EfsVolumeConfigurationProperty{
							FileSystemId: jsii.String("fileSystemId"),

							// the properties below are optional
							AuthorizationConfig: &AuthorizationConfigProperty{
								AccessPointId: jsii.String("accessPointId"),
								Iam: jsii.String("iam"),
							},
							RootDirectory: jsii.String("rootDirectory"),
							TransitEncryption: jsii.String("transitEncryption"),
							TransitEncryptionPort: jsii.Number(123),
						},
						Host: &VolumesHostProperty{
							SourcePath: jsii.String("sourcePath"),
						},
						Name: jsii.String("name"),
					},
				},
			},
		},
	},
	NumNodes: jsii.Number(123),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-nodeproperties.html
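The constraint stated above for `MainNode` (the index must be less than `NumNodes`) amounts to a one-line check; `validateNodeProperties` below is an illustrative sketch, not part of this package:

```go
package main

import "fmt"

// validateNodeProperties enforces the multi-node parallel job constraint that
// the main node index falls within [0, numNodes).
func validateNodeProperties(mainNode, numNodes int) error {
	if numNodes < 1 {
		return fmt.Errorf("numNodes must be at least 1, got %d", numNodes)
	}
	if mainNode < 0 || mainNode >= numNodes {
		return fmt.Errorf("mainNode %d must be in [0, %d)", mainNode, numNodes)
	}
	return nil
}

func main() {
	fmt.Println(validateNodeProperties(0, 3))
	fmt.Println(validateNodeProperties(5, 3))
}
```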

type CfnJobDefinition_NodeRangePropertyProperty

type CfnJobDefinition_NodeRangePropertyProperty struct {
	// The range of nodes, using node index values.
	//
	// A range of `0:3` indicates nodes with index values of `0` through `3` . If the starting range value is omitted ( `:n` ), then `0` is used to start the range. If the ending range value is omitted ( `n:` ), then the highest possible node index is used to end the range. Your cumulative node ranges must account for all nodes ( `0:n` ). You can nest node ranges (for example, `0:10` and `4:5` ). In this case, the `4:5` range properties override the `0:10` properties.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-noderangeproperty.html#cfn-batch-jobdefinition-noderangeproperty-targetnodes
	//
	TargetNodes *string `field:"required" json:"targetNodes" yaml:"targetNodes"`
	// The container details for the node range.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-noderangeproperty.html#cfn-batch-jobdefinition-noderangeproperty-container
	//
	Container interface{} `field:"optional" json:"container" yaml:"container"`
}

An object that represents the properties of the node range for a multi-node parallel job.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var options interface{}

nodeRangePropertyProperty := &NodeRangePropertyProperty{
	TargetNodes: jsii.String("targetNodes"),

	// the properties below are optional
	Container: &ContainerPropertiesProperty{
		Image: jsii.String("image"),

		// the properties below are optional
		Command: []*string{
			jsii.String("command"),
		},
		Environment: []interface{}{
			&EnvironmentProperty{
				Name: jsii.String("name"),
				Value: jsii.String("value"),
			},
		},
		EphemeralStorage: &EphemeralStorageProperty{
			SizeInGiB: jsii.Number(123),
		},
		ExecutionRoleArn: jsii.String("executionRoleArn"),
		FargatePlatformConfiguration: &FargatePlatformConfigurationProperty{
			PlatformVersion: jsii.String("platformVersion"),
		},
		InstanceType: jsii.String("instanceType"),
		JobRoleArn: jsii.String("jobRoleArn"),
		LinuxParameters: &LinuxParametersProperty{
			Devices: []interface{}{
				&DeviceProperty{
					ContainerPath: jsii.String("containerPath"),
					HostPath: jsii.String("hostPath"),
					Permissions: []*string{
						jsii.String("permissions"),
					},
				},
			},
			InitProcessEnabled: jsii.Boolean(false),
			MaxSwap: jsii.Number(123),
			SharedMemorySize: jsii.Number(123),
			Swappiness: jsii.Number(123),
			Tmpfs: []interface{}{
				&TmpfsProperty{
					ContainerPath: jsii.String("containerPath"),
					Size: jsii.Number(123),

					// the properties below are optional
					MountOptions: []*string{
						jsii.String("mountOptions"),
					},
				},
			},
		},
		LogConfiguration: &LogConfigurationProperty{
			LogDriver: jsii.String("logDriver"),

			// the properties below are optional
			Options: options,
			SecretOptions: []interface{}{
				&SecretProperty{
					Name: jsii.String("name"),
					ValueFrom: jsii.String("valueFrom"),
				},
			},
		},
		Memory: jsii.Number(123),
		MountPoints: []interface{}{
			&MountPointsProperty{
				ContainerPath: jsii.String("containerPath"),
				ReadOnly: jsii.Boolean(false),
				SourceVolume: jsii.String("sourceVolume"),
			},
		},
		NetworkConfiguration: &NetworkConfigurationProperty{
			AssignPublicIp: jsii.String("assignPublicIp"),
		},
		Privileged: jsii.Boolean(false),
		ReadonlyRootFilesystem: jsii.Boolean(false),
		ResourceRequirements: []interface{}{
			&ResourceRequirementProperty{
				Type: jsii.String("type"),
				Value: jsii.String("value"),
			},
		},
		RuntimePlatform: &RuntimePlatformProperty{
			CpuArchitecture: jsii.String("cpuArchitecture"),
			OperatingSystemFamily: jsii.String("operatingSystemFamily"),
		},
		Secrets: []interface{}{
			&SecretProperty{
				Name: jsii.String("name"),
				ValueFrom: jsii.String("valueFrom"),
			},
		},
		Ulimits: []interface{}{
			&UlimitProperty{
				HardLimit: jsii.Number(123),
				Name: jsii.String("name"),
				SoftLimit: jsii.Number(123),
			},
		},
		User: jsii.String("user"),
		Vcpus: jsii.Number(123),
		Volumes: []interface{}{
			&VolumesProperty{
				EfsVolumeConfiguration: &EfsVolumeConfigurationProperty{
					FileSystemId: jsii.String("fileSystemId"),

					// the properties below are optional
					AuthorizationConfig: &AuthorizationConfigProperty{
						AccessPointId: jsii.String("accessPointId"),
						Iam: jsii.String("iam"),
					},
					RootDirectory: jsii.String("rootDirectory"),
					TransitEncryption: jsii.String("transitEncryption"),
					TransitEncryptionPort: jsii.Number(123),
				},
				Host: &VolumesHostProperty{
					SourcePath: jsii.String("sourcePath"),
				},
				Name: jsii.String("name"),
			},
		},
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-noderangeproperty.html
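The `TargetNodes` semantics above (an omitted start defaults to `0`, an omitted end defaults to the highest node index, and later overlapping ranges override earlier ones) can be sketched in plain Go; `resolveNodeRanges` is a hypothetical helper, not part of this package:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// resolveNodeRanges returns, for each node index, the position (in the input
// slice) of the range that applies to it, with later ranges overriding
// earlier ones. Unassigned nodes are marked -1.
func resolveNodeRanges(ranges []string, numNodes int) ([]int, error) {
	owner := make([]int, numNodes)
	for i := range owner {
		owner[i] = -1
	}
	for idx, r := range ranges {
		parts := strings.SplitN(r, ":", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("invalid range %q", r)
		}
		start, end := 0, numNodes-1 // defaults for ":n" and "n:" forms
		var err error
		if parts[0] != "" {
			if start, err = strconv.Atoi(parts[0]); err != nil {
				return nil, err
			}
		}
		if parts[1] != "" {
			if end, err = strconv.Atoi(parts[1]); err != nil {
				return nil, err
			}
		}
		for n := start; n <= end && n < numNodes; n++ {
			owner[n] = idx
		}
	}
	return owner, nil
}

func main() {
	// "4:5" overrides the overlapping part of "0:10".
	owner, _ := resolveNodeRanges([]string{"0:10", "4:5"}, 11)
	fmt.Println(owner[3], owner[4], owner[5], owner[6]) // 0 1 1 0
}
```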

type CfnJobDefinition_PodPropertiesProperty added in v2.51.0

type CfnJobDefinition_PodPropertiesProperty struct {
	// The properties of the container that's used on the Amazon EKS pod.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-podproperties.html#cfn-batch-jobdefinition-podproperties-containers
	//
	Containers interface{} `field:"optional" json:"containers" yaml:"containers"`
	// The DNS policy for the pod.
	//
	// The default value is `ClusterFirst` . If the `hostNetwork` parameter is not specified, the default is `ClusterFirstWithHostNet` . `ClusterFirst` indicates that any DNS query that does not match the configured cluster domain suffix is forwarded to the upstream nameserver inherited from the node. If no value was specified for `dnsPolicy` in the [RegisterJobDefinition](https://docs.aws.amazon.com/batch/latest/APIReference/API_RegisterJobDefinition.html) API operation, then no value will be returned for `dnsPolicy` by either of [DescribeJobDefinitions](https://docs.aws.amazon.com/batch/latest/APIReference/API_DescribeJobDefinitions.html) or [DescribeJobs](https://docs.aws.amazon.com/batch/latest/APIReference/API_DescribeJobs.html) API operations. The pod spec setting will contain either `ClusterFirst` or `ClusterFirstWithHostNet` , depending on the value of the `hostNetwork` parameter. For more information, see [Pod's DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) in the *Kubernetes documentation* .
	//
	// Valid values: `Default` | `ClusterFirst` | `ClusterFirstWithHostNet`.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-podproperties.html#cfn-batch-jobdefinition-podproperties-dnspolicy
	//
	DnsPolicy *string `field:"optional" json:"dnsPolicy" yaml:"dnsPolicy"`
	// Indicates if the pod uses the host's network IP address.
	//
	// The default value is `true` . Setting this to `false` enables the Kubernetes pod networking model. Most AWS Batch workloads are egress-only and don't require the overhead of IP allocation for each pod for incoming connections. For more information, see [Host namespaces](https://docs.aws.amazon.com/https://kubernetes.io/docs/concepts/security/pod-security-policy/#host-namespaces) and [Pod networking](https://docs.aws.amazon.com/https://kubernetes.io/docs/concepts/workloads/pods/#pod-networking) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-podproperties.html#cfn-batch-jobdefinition-podproperties-hostnetwork
	//
	HostNetwork interface{} `field:"optional" json:"hostNetwork" yaml:"hostNetwork"`
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-podproperties.html#cfn-batch-jobdefinition-podproperties-metadata
	//
	Metadata interface{} `field:"optional" json:"metadata" yaml:"metadata"`
	// The name of the service account that's used to run the pod.
	//
	// For more information, see [Kubernetes service accounts](https://docs.aws.amazon.com/eks/latest/userguide/service-accounts.html) and [Configure a Kubernetes service account to assume an IAM role](https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html) in the *Amazon EKS User Guide* and [Configure service accounts for pods](https://docs.aws.amazon.com/https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) in the *Kubernetes documentation* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-podproperties.html#cfn-batch-jobdefinition-podproperties-serviceaccountname
	//
	ServiceAccountName *string `field:"optional" json:"serviceAccountName" yaml:"serviceAccountName"`
	// Specifies the volumes for a job definition that uses Amazon EKS resources.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-podproperties.html#cfn-batch-jobdefinition-podproperties-volumes
	//
	Volumes interface{} `field:"optional" json:"volumes" yaml:"volumes"`
}

The properties for the pod.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var labels interface{}
var limits interface{}
var requests interface{}

podPropertiesProperty := &PodPropertiesProperty{
	Containers: []interface{}{
		&EksContainerProperty{
			Image: jsii.String("image"),

			// the properties below are optional
			Args: []*string{
				jsii.String("args"),
			},
			Command: []*string{
				jsii.String("command"),
			},
			Env: []interface{}{
				&EksContainerEnvironmentVariableProperty{
					Name: jsii.String("name"),

					// the properties below are optional
					Value: jsii.String("value"),
				},
			},
			ImagePullPolicy: jsii.String("imagePullPolicy"),
			Name: jsii.String("name"),
			Resources: &ResourcesProperty{
				Limits: limits,
				Requests: requests,
			},
			SecurityContext: &SecurityContextProperty{
				Privileged: jsii.Boolean(false),
				ReadOnlyRootFilesystem: jsii.Boolean(false),
				RunAsGroup: jsii.Number(123),
				RunAsNonRoot: jsii.Boolean(false),
				RunAsUser: jsii.Number(123),
			},
			VolumeMounts: []interface{}{
				&EksContainerVolumeMountProperty{
					MountPath: jsii.String("mountPath"),
					Name: jsii.String("name"),
					ReadOnly: jsii.Boolean(false),
				},
			},
		},
	},
	DnsPolicy: jsii.String("dnsPolicy"),
	HostNetwork: jsii.Boolean(false),
	Metadata: &MetadataProperty{
		Labels: labels,
	},
	ServiceAccountName: jsii.String("serviceAccountName"),
	Volumes: []interface{}{
		&EksVolumeProperty{
			Name: jsii.String("name"),

			// the properties below are optional
			EmptyDir: &EmptyDirProperty{
				Medium: jsii.String("medium"),
				SizeLimit: jsii.String("sizeLimit"),
			},
			HostPath: &HostPathProperty{
				Path: jsii.String("path"),
			},
			Secret: &EksSecretProperty{
				SecretName: jsii.String("secretName"),

				// the properties below are optional
				Optional: jsii.Boolean(false),
			},
		},
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-podproperties.html
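The interplay between `DnsPolicy` and `HostNetwork` above can be sketched as a tiny helper. This is one plausible reading of the documented defaults, not code from this module or the Batch service; the function name is hypothetical:

```go
package main

import "fmt"

// effectiveDNSPolicy sketches how the pod's DNS policy resolves, per the
// PodPropertiesProperty docs: an explicit dnsPolicy is used as given, and
// when unset the result follows hostNetwork (which itself defaults to true).
func effectiveDNSPolicy(dnsPolicy string, hostNetwork bool) string {
	if dnsPolicy != "" {
		// Valid explicit values: Default | ClusterFirst | ClusterFirstWithHostNet.
		return dnsPolicy
	}
	if hostNetwork {
		return "ClusterFirstWithHostNet"
	}
	return "ClusterFirst"
}

func main() {
	fmt.Println(effectiveDNSPolicy("", true))         // ClusterFirstWithHostNet
	fmt.Println(effectiveDNSPolicy("", false))        // ClusterFirst
	fmt.Println(effectiveDNSPolicy("Default", false)) // Default
}
```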

type CfnJobDefinition_ResourceRequirementProperty

type CfnJobDefinition_ResourceRequirementProperty struct {
	// The type of resource to assign to a container.
	//
	// The supported resources include `GPU` , `MEMORY` , and `VCPU` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resourcerequirement.html#cfn-batch-jobdefinition-resourcerequirement-type
	//
	Type *string `field:"optional" json:"type" yaml:"type"`
	// The quantity of the specified resource to reserve for the container. The values vary based on the `type` specified.
	//
	// - **type="GPU"** - The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn't exceed the number of available GPUs on the compute resource that the job is launched on.
	//
	// > GPUs aren't available for jobs that are running on Fargate resources.
	// - **type="MEMORY"** - The memory hard limit (in MiB) presented to the container. This parameter is supported for jobs that are running on EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps to `Memory` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--memory` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) . You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs; it must be specified for each node at least once.
	//
	// > If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the *AWS Batch User Guide* .
	//
	// For jobs running on Fargate resources, `value` is the hard limit (in MiB) and must match one of the supported values, and the `VCPU` value must be one of the values supported for that memory value.
	//
	// - **value = 512** - `VCPU` = 0.25
	// - **value = 1024** - `VCPU` = 0.25 or 0.5
	// - **value = 2048** - `VCPU` = 0.25, 0.5, or 1
	// - **value = 3072** - `VCPU` = 0.5, or 1
	// - **value = 4096** - `VCPU` = 0.5, 1, or 2
	// - **value = 5120, 6144, or 7168** - `VCPU` = 1 or 2
	// - **value = 8192** - `VCPU` = 1, 2, or 4
	// - **value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360** - `VCPU` = 2 or 4
	// - **value = 16384** - `VCPU` = 2, 4, or 8
	// - **value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720** - `VCPU` = 4
	// - **value = 20480, 24576, or 28672** - `VCPU` = 4 or 8
	// - **value = 36864, 45056, 53248, or 61440** - `VCPU` = 8
	// - **value = 32768, 40960, 49152, or 57344** - `VCPU` = 8 or 16
	// - **value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880** - `VCPU` = 16
	// - **type="VCPU"** - The number of vCPUs reserved for the container. This parameter maps to `CpuShares` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--cpu-shares` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) . Each vCPU is equivalent to 1,024 CPU shares. For EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once.
	//
	// The default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For more information about Fargate quotas, see [AWS Fargate quotas](https://docs.aws.amazon.com/general/latest/gr/ecs-service.html#service-quotas-fargate) in the *AWS General Reference* .
	//
	// For jobs running on Fargate resources, `value` must match one of the supported values, and the `MEMORY` value must be one of the values supported for that `VCPU` value. The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16.
	//
	// - **value = 0.25** - `MEMORY` = 512, 1024, or 2048
	// - **value = 0.5** - `MEMORY` = 1024, 2048, 3072, or 4096
	// - **value = 1** - `MEMORY` = 2048, 3072, 4096, 5120, 6144, 7168, or 8192
	// - **value = 2** - `MEMORY` = 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384
	// - **value = 4** - `MEMORY` = 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720
	// - **value = 8** - `MEMORY` = 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440
	// - **value = 16** - `MEMORY` = 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resourcerequirement.html#cfn-batch-jobdefinition-resourcerequirement-value
	//
	Value *string `field:"optional" json:"value" yaml:"value"`
}

The type and amount of a resource to assign to a container.

The supported resources include `GPU` , `MEMORY` , and `VCPU` .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

resourceRequirementProperty := &ResourceRequirementProperty{
	Type: jsii.String("type"),
	Value: jsii.String("value"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resourcerequirement.html
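The Fargate vCPU/memory pairings documented above form a fixed table that is easy to validate up front, before a job definition is ever registered. The sketch below encodes that table as a stdlib-only lookup; the names `fargateMemoryForVCPU` and `validFargatePair` are illustrative, not part of this module or the Batch API:

```go
package main

import "fmt"

// fargateMemoryForVCPU encodes the vCPU -> supported MEMORY (MiB) pairings
// from the ResourceRequirementProperty documentation.
var fargateMemoryForVCPU = map[float64][]int{}

func init() {
	// step generates the regular MiB ranges used by most vCPU tiers.
	step := func(lo, hi, inc int) (out []int) {
		for v := lo; v <= hi; v += inc {
			out = append(out, v)
		}
		return
	}
	fargateMemoryForVCPU[0.25] = []int{512, 1024, 2048}
	fargateMemoryForVCPU[0.5] = step(1024, 4096, 1024)
	fargateMemoryForVCPU[1] = step(2048, 8192, 1024)
	fargateMemoryForVCPU[2] = step(4096, 16384, 1024)
	fargateMemoryForVCPU[4] = step(8192, 30720, 1024)
	fargateMemoryForVCPU[8] = step(16384, 61440, 4096)
	fargateMemoryForVCPU[16] = step(32768, 122880, 8192)
}

// validFargatePair reports whether the given MEMORY value is supported for
// the given vCPU count on Fargate, per the table above.
func validFargatePair(vcpu float64, memMiB int) bool {
	for _, m := range fargateMemoryForVCPU[vcpu] {
		if m == memMiB {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(validFargatePair(1, 4096))    // true: 4096 MiB is valid with 1 vCPU
	fmt.Println(validFargatePair(0.25, 4096)) // false: 0.25 vCPU tops out at 2048 MiB
}
```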

type CfnJobDefinition_ResourcesProperty added in v2.51.0

type CfnJobDefinition_ResourcesProperty struct {
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resources.html#cfn-batch-jobdefinition-resources-limits
	//
	Limits interface{} `field:"optional" json:"limits" yaml:"limits"`
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resources.html#cfn-batch-jobdefinition-resources-requests
	//
	Requests interface{} `field:"optional" json:"requests" yaml:"requests"`
}

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var limits interface{}
var requests interface{}

resourcesProperty := &ResourcesProperty{
	Limits: limits,
	Requests: requests,
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-resources.html

type CfnJobDefinition_RetryStrategyProperty

type CfnJobDefinition_RetryStrategyProperty struct {
	// The number of times to move a job to the `RUNNABLE` status.
	//
	// You can specify between 1 and 10 attempts. If `attempts` is greater than one, the job is retried on failure that many times.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-retrystrategy.html#cfn-batch-jobdefinition-retrystrategy-attempts
	//
	Attempts *float64 `field:"optional" json:"attempts" yaml:"attempts"`
	// Array of up to 5 objects that specify the conditions where jobs are retried or failed.
	//
	// If this parameter is specified, then the `attempts` parameter must also be specified. If none of the listed conditions match, then the job is retried.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-retrystrategy.html#cfn-batch-jobdefinition-retrystrategy-evaluateonexit
	//
	EvaluateOnExit interface{} `field:"optional" json:"evaluateOnExit" yaml:"evaluateOnExit"`
}

The retry strategy that's associated with a job.

For more information, see [Automated job retries](https://docs.aws.amazon.com/batch/latest/userguide/job_retries.html) in the *AWS Batch User Guide* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

retryStrategyProperty := &RetryStrategyProperty{
	Attempts: jsii.Number(123),
	EvaluateOnExit: []interface{}{
		&EvaluateOnExitProperty{
			Action: jsii.String("action"),

			// the properties below are optional
			OnExitCode: jsii.String("onExitCode"),
			OnReason: jsii.String("onReason"),
			OnStatusReason: jsii.String("onStatusReason"),
		},
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-retrystrategy.html
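The `EvaluateOnExit` semantics described above (rules evaluated in order, with a retry as the fall-through when no rule matches) can be sketched with plain Go. This is a simplified model of the documented matching behavior, not the Batch scheduler's code; the type and function names are hypothetical, and the matcher only implements the documented trailing-`*` wildcard:

```go
package main

import (
	"fmt"
	"strings"
)

// evaluateOnExitRule mirrors the EvaluateOnExitProperty fields used for
// matching. An empty condition is treated as "match anything".
type evaluateOnExitRule struct {
	Action                               string // "RETRY" or "EXIT"
	OnExitCode, OnReason, OnStatusReason string
}

// matches supports the documented trailing-'*' wildcard.
func matches(pattern, value string) bool {
	if pattern == "" {
		return true
	}
	if strings.HasSuffix(pattern, "*") {
		return strings.HasPrefix(value, strings.TrimSuffix(pattern, "*"))
	}
	return pattern == value
}

// decideRetry applies rules in order; if none match, the job is retried
// (assuming attempts remain), per the RetryStrategyProperty documentation.
func decideRetry(rules []evaluateOnExitRule, exitCode, reason, statusReason string) string {
	for _, r := range rules {
		if matches(r.OnExitCode, exitCode) && matches(r.OnReason, reason) && matches(r.OnStatusReason, statusReason) {
			return r.Action
		}
	}
	return "RETRY"
}

func main() {
	rules := []evaluateOnExitRule{
		{Action: "EXIT", OnExitCode: "1"},              // don't retry plain application failures
		{Action: "RETRY", OnStatusReason: "Host EC2*"}, // retry host-level terminations
	}
	fmt.Println(decideRetry(rules, "1", "", ""))                                 // EXIT
	fmt.Println(decideRetry(rules, "137", "", "Host EC2 instance terminated"))   // RETRY
}
```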

type CfnJobDefinition_RuntimePlatformProperty added in v2.90.0

type CfnJobDefinition_RuntimePlatformProperty struct {
	// The vCPU architecture. The default value is `X86_64` . Valid values are `X86_64` and `ARM64` .
	//
	// > This parameter must be set to `X86_64` for Windows containers.
	// >
	// > Fargate Spot is not supported for `ARM64` and Windows-based containers on Fargate. A job queue will be blocked if a Fargate `ARM64` or Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both `FARGATE` and `FARGATE_SPOT` compute environments to the same job queue.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-runtimeplatform.html#cfn-batch-jobdefinition-runtimeplatform-cpuarchitecture
	//
	CpuArchitecture *string `field:"optional" json:"cpuArchitecture" yaml:"cpuArchitecture"`
	// The operating system for the compute environment.
	//
	// Valid values are: `LINUX` (default), `WINDOWS_SERVER_2019_CORE` , `WINDOWS_SERVER_2019_FULL` , `WINDOWS_SERVER_2022_CORE` , and `WINDOWS_SERVER_2022_FULL` .
	//
	// > The following parameters can’t be set for Windows containers: `linuxParameters` , `privileged` , `user` , `ulimits` , `readonlyRootFilesystem` , and `efsVolumeConfiguration` .
	// >
	// > The AWS Batch Scheduler checks the compute environments that are attached to the job queue before registering a task definition with Fargate. In this scenario, the job queue is where the job is submitted. If the job requires a Windows container and the first compute environment is `LINUX` , the compute environment is skipped and the next compute environment is checked until a Windows-based compute environment is found.
	// >
	// > Fargate Spot is not supported for `ARM64` and Windows-based containers on Fargate. A job queue will be blocked if a Fargate `ARM64` or Windows job is submitted to a job queue with only Fargate Spot compute environments. However, you can attach both `FARGATE` and `FARGATE_SPOT` compute environments to the same job queue.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-runtimeplatform.html#cfn-batch-jobdefinition-runtimeplatform-operatingsystemfamily
	//
	OperatingSystemFamily *string `field:"optional" json:"operatingSystemFamily" yaml:"operatingSystemFamily"`
}

An object that represents the compute environment architecture for AWS Batch jobs on Fargate.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

runtimePlatformProperty := &RuntimePlatformProperty{
	CpuArchitecture: jsii.String("cpuArchitecture"),
	OperatingSystemFamily: jsii.String("operatingSystemFamily"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-runtimeplatform.html

type CfnJobDefinition_SecretProperty

type CfnJobDefinition_SecretProperty struct {
	// The name of the secret.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-secret.html#cfn-batch-jobdefinition-secret-name
	//
	Name *string `field:"required" json:"name" yaml:"name"`
	// The secret to expose to the container.
	//
	// The supported values are either the full Amazon Resource Name (ARN) of the AWS Secrets Manager secret or the full ARN of the parameter in the AWS Systems Manager Parameter Store.
	//
	// > If the AWS Systems Manager Parameter Store parameter exists in the same Region as the job you're launching, then you can use either the full Amazon Resource Name (ARN) or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-secret.html#cfn-batch-jobdefinition-secret-valuefrom
	//
	ValueFrom *string `field:"required" json:"valueFrom" yaml:"valueFrom"`
}

An object that represents the secret to expose to your container.

Secrets can be exposed to a container in the following ways:

- To inject sensitive data into your containers as environment variables, use the `secrets` container definition parameter.
- To reference sensitive information in the log configuration of a container, use the `secretOptions` container definition parameter.

For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html) in the *AWS Batch User Guide* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

secretProperty := &SecretProperty{
	Name: jsii.String("name"),
	ValueFrom: jsii.String("valueFrom"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-secret.html
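The `valueFrom` field above accepts three forms: a Secrets Manager secret ARN, a Systems Manager Parameter Store parameter ARN, or (for same-Region parameters only) a bare parameter name. A minimal classifier sketch, assuming standard ARN formats; the function name is hypothetical and this is not a CDK or Batch API:

```go
package main

import (
	"fmt"
	"strings"
)

// classifyValueFrom distinguishes the three documented forms of
// SecretProperty.ValueFrom by inspecting the ARN service segment.
func classifyValueFrom(valueFrom string) string {
	switch {
	case strings.HasPrefix(valueFrom, "arn:") && strings.Contains(valueFrom, ":secretsmanager:"):
		return "secretsmanager-arn"
	case strings.HasPrefix(valueFrom, "arn:") && strings.Contains(valueFrom, ":ssm:"):
		return "ssm-parameter-arn"
	default:
		// Only valid when the parameter lives in the same Region as the job.
		return "parameter-name"
	}
}

func main() {
	fmt.Println(classifyValueFrom("arn:aws:secretsmanager:us-east-1:111122223333:secret:db-pass-AbCdEf"))
	fmt.Println(classifyValueFrom("/app/db/password"))
}
```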

type CfnJobDefinition_SecurityContextProperty added in v2.51.0

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

securityContextProperty := &SecurityContextProperty{
	Privileged: jsii.Boolean(false),
	ReadOnlyRootFilesystem: jsii.Boolean(false),
	RunAsGroup: jsii.Number(123),
	RunAsNonRoot: jsii.Boolean(false),
	RunAsUser: jsii.Number(123),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-securitycontext.html

type CfnJobDefinition_TimeoutProperty

type CfnJobDefinition_TimeoutProperty struct {
	// The job timeout time (in seconds) that's measured from the job attempt's `startedAt` timestamp.
	//
	// After this time passes, AWS Batch terminates your jobs if they aren't finished. The minimum value for the timeout is 60 seconds.
	//
	// For array jobs, the timeout applies to the child jobs, not to the parent array job.
	//
	// For multi-node parallel (MNP) jobs, the timeout applies to the whole job, not to the individual nodes.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-timeout.html#cfn-batch-jobdefinition-timeout-attemptdurationseconds
	//
	AttemptDurationSeconds *float64 `field:"optional" json:"attemptDurationSeconds" yaml:"attemptDurationSeconds"`
}

An object that represents a job timeout configuration.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

timeoutProperty := &TimeoutProperty{
	AttemptDurationSeconds: jsii.Number(123),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-timeout.html

type CfnJobDefinition_TmpfsProperty

type CfnJobDefinition_TmpfsProperty struct {
	// The absolute file path in the container where the `tmpfs` volume is mounted.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html#cfn-batch-jobdefinition-tmpfs-containerpath
	//
	ContainerPath *string `field:"required" json:"containerPath" yaml:"containerPath"`
	// The size (in MiB) of the `tmpfs` volume.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html#cfn-batch-jobdefinition-tmpfs-size
	//
	Size *float64 `field:"required" json:"size" yaml:"size"`
	// The list of `tmpfs` volume mount options.
	//
	// Valid values: `defaults` | `ro` | `rw` | `suid` | `nosuid` | `dev` | `nodev` | `exec` | `noexec` | `sync` | `async` | `dirsync` | `remount` | `mand` | `nomand` | `atime` | `noatime` | `diratime` | `nodiratime` | `bind` | `rbind` | `unbindable` | `runbindable` | `private` | `rprivate` | `shared` | `rshared` | `slave` | `rslave` | `relatime` | `norelatime` | `strictatime` | `nostrictatime` | `mode` | `uid` | `gid` | `nr_inodes` | `nr_blocks` | `mpol`.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html#cfn-batch-jobdefinition-tmpfs-mountoptions
	//
	MountOptions *[]*string `field:"optional" json:"mountOptions" yaml:"mountOptions"`
}

The container path, mount options, and size of the `tmpfs` mount.

> This object isn't applicable to jobs that are running on Fargate resources.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

tmpfsProperty := &TmpfsProperty{
	ContainerPath: jsii.String("containerPath"),
	Size: jsii.Number(123),

	// the properties below are optional
	MountOptions: []*string{
		jsii.String("mountOptions"),
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-tmpfs.html

type CfnJobDefinition_UlimitProperty

type CfnJobDefinition_UlimitProperty struct {
	// The hard limit for the `ulimit` type.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html#cfn-batch-jobdefinition-ulimit-hardlimit
	//
	HardLimit *float64 `field:"required" json:"hardLimit" yaml:"hardLimit"`
	// The `type` of the `ulimit` .
	//
	// Valid values are: `core` | `cpu` | `data` | `fsize` | `locks` | `memlock` | `msgqueue` | `nice` | `nofile` | `nproc` | `rss` | `rtprio` | `rttime` | `sigpending` | `stack` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html#cfn-batch-jobdefinition-ulimit-name
	//
	Name *string `field:"required" json:"name" yaml:"name"`
	// The soft limit for the `ulimit` type.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html#cfn-batch-jobdefinition-ulimit-softlimit
	//
	SoftLimit *float64 `field:"required" json:"softLimit" yaml:"softLimit"`
}

The `ulimit` settings to pass to the container. For more information, see [Ulimit](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Ulimit.html) .

> This object isn't applicable to jobs that are running on Fargate resources.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

ulimitProperty := &UlimitProperty{
	HardLimit: jsii.Number(123),
	Name: jsii.String("name"),
	SoftLimit: jsii.Number(123),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-ulimit.html

type CfnJobDefinition_VolumesHostProperty

type CfnJobDefinition_VolumesHostProperty struct {
	// The path on the host container instance that's presented to the container.
	//
	// If this parameter is empty, then the Docker daemon assigns a host path for you. If this parameter contains a file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the source path location doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
	//
	// > This parameter isn't applicable to jobs that run on Fargate resources. Don't provide this for these jobs.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumeshost.html#cfn-batch-jobdefinition-volumeshost-sourcepath
	//
	SourcePath *string `field:"optional" json:"sourcePath" yaml:"sourcePath"`
}

Determine whether your data volume persists on the host container instance and where it's stored.

If this parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

volumesHostProperty := &VolumesHostProperty{
	SourcePath: jsii.String("sourcePath"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumeshost.html

type CfnJobDefinition_VolumesProperty

type CfnJobDefinition_VolumesProperty struct {
	// This is used when you're using an Amazon Elastic File System file system for job storage.
	//
	// For more information, see [Amazon EFS Volumes](https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html#cfn-batch-jobdefinition-volumes-efsvolumeconfiguration
	//
	EfsVolumeConfiguration interface{} `field:"optional" json:"efsVolumeConfiguration" yaml:"efsVolumeConfiguration"`
	// The contents of the `host` parameter determine whether your data volume persists on the host container instance and where it's stored.
	//
	// If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running.
	//
	// > This parameter isn't applicable to jobs that are running on Fargate resources and shouldn't be provided.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html#cfn-batch-jobdefinition-volumes-host
	//
	Host interface{} `field:"optional" json:"host" yaml:"host"`
	// The name of the volume.
	//
	// It can be up to 255 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). This name is referenced in the `sourceVolume` parameter of container definition `mountPoints` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html#cfn-batch-jobdefinition-volumes-name
	//
	Name *string `field:"optional" json:"name" yaml:"name"`
}

A list of volumes that are associated with the job.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

volumesProperty := &VolumesProperty{
	EfsVolumeConfiguration: &EfsVolumeConfigurationProperty{
		FileSystemId: jsii.String("fileSystemId"),

		// the properties below are optional
		AuthorizationConfig: &AuthorizationConfigProperty{
			AccessPointId: jsii.String("accessPointId"),
			Iam: jsii.String("iam"),
		},
		RootDirectory: jsii.String("rootDirectory"),
		TransitEncryption: jsii.String("transitEncryption"),
		TransitEncryptionPort: jsii.Number(123),
	},
	Host: &VolumesHostProperty{
		SourcePath: jsii.String("sourcePath"),
	},
	Name: jsii.String("name"),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobdefinition-volumes.html

type CfnJobQueue

type CfnJobQueue interface {
	awscdk.CfnResource
	awscdk.IInspectable
	awscdk.ITaggable
	// Returns the job queue ARN, such as `batch:us-east-1:111122223333:job-queue/JobQueueName` .
	AttrJobQueueArn() *string
	// Options for this resource, such as condition, update policy etc.
	CfnOptions() awscdk.ICfnResourceOptions
	CfnProperties() *map[string]interface{}
	// AWS resource type.
	CfnResourceType() *string
	// The set of compute environments mapped to a job queue and their order relative to each other.
	ComputeEnvironmentOrder() interface{}
	SetComputeEnvironmentOrder(val interface{})
	// Returns: the stack trace of the point where this Resource was created from, sourced
	// from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most
	// node +internal+ entries filtered.
	CreationStack() *[]*string
	// The name of the job queue.
	JobQueueName() *string
	SetJobQueueName(val *string)
	// The logical ID for this CloudFormation stack element.
	//
	// The logical ID of the element
	// is calculated from the path of the resource node in the construct tree.
	//
	// To override this value, use `overrideLogicalId(newLogicalId)`.
	//
	// Returns: the logical ID as a stringified token. This value will only get
	// resolved during synthesis.
	LogicalId() *string
	// The tree node.
	Node() constructs.Node
	// The priority of the job queue.
	Priority() *float64
	SetPriority(val *float64)
	// Return a string that will be resolved to a CloudFormation `{ Ref }` for this element.
	//
	// If, by any chance, the intrinsic reference of a resource is not a string, you could
	// coerce it to an IResolvable through `Lazy.any({ produce: resource.ref })`.
	Ref() *string
	// The Amazon Resource Name (ARN) of the scheduling policy.
	SchedulingPolicyArn() *string
	SetSchedulingPolicyArn(val *string)
	// The stack in which this element is defined.
	//
	// CfnElements must be defined within a stack scope (directly or indirectly).
	Stack() awscdk.Stack
	// The state of the job queue.
	State() *string
	SetState(val *string)
	// Tag Manager which manages the tags for this resource.
	Tags() awscdk.TagManager
	// The tags that are applied to the job queue.
	TagsRaw() *map[string]*string
	SetTagsRaw(val *map[string]*string)
	// Deprecated.
	// Deprecated: use `updatedProperties`
	//
	// Return properties modified after initiation
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperites() *map[string]interface{}
	// Return properties modified after initiation.
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperties() *map[string]interface{}
	// Syntactic sugar for `addOverride(path, undefined)`.
	AddDeletionOverride(path *string)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	//
	// This can be used for resources across stacks (or nested stack) boundaries
	// and the dependency will automatically be transferred to the relevant scope.
	AddDependency(target awscdk.CfnResource)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	// Deprecated: use addDependency.
	AddDependsOn(target awscdk.CfnResource)
	// Add a value to the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	AddMetadata(key *string, value interface{})
	// Adds an override to the synthesized CloudFormation resource.
	//
	// To add a
	// property override, either use `addPropertyOverride` or prefix `path` with
	// "Properties." (i.e. `Properties.TopicName`).
	//
	// If the override is nested, separate each nested level using a dot (.) in the path parameter.
	// If there is an array as part of the nesting, specify the index in the path.
	//
	// To include a literal `.` in the property name, prefix with a `\`. In most
	// programming languages you will need to write this as `"\\."` because the
	// `\` itself will need to be escaped.
	//
	// For example,
	// ```typescript
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes', ['myattribute']);
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.1.ProjectionType', 'INCLUDE');
	// ```
	// would add the overrides
	// ```json
	// "Properties": {
	//   "GlobalSecondaryIndexes": [
	//     {
	//       "Projection": {
	//         "NonKeyAttributes": [ "myattribute" ]
	//         ...
	//       }
	//       ...
	//     },
	//     {
	//       "ProjectionType": "INCLUDE"
	//       ...
	//     },
	//   ]
	//   ...
	// }
	// ```
	//
	// The `value` argument to `addOverride` will not be processed or translated
	// in any way. Pass raw JSON values in here with the correct capitalization
	// for CloudFormation. If you pass CDK classes or structs, they will be
	// rendered with lowercased key names, and CloudFormation will reject the
	// template.
	AddOverride(path *string, value interface{})
	// Adds an override that deletes the value of a property from the resource definition.
	AddPropertyDeletionOverride(propertyPath *string)
	// Adds an override to a resource property.
	//
	// Syntactic sugar for `addOverride("Properties.<...>", value)`.
	AddPropertyOverride(propertyPath *string, value interface{})
	// Sets the deletion policy of the resource based on the removal policy specified.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`). In some
	// cases, a snapshot can be taken of the resource prior to deletion
	// (`RemovalPolicy.SNAPSHOT`). A list of resources that support this policy
	// can be found at the following link.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html#aws-attribute-deletionpolicy-options
	//
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy, options *awscdk.RemovalPolicyOptions)
	// Returns a token for a runtime attribute of this resource.
	//
	// Ideally, use generated attribute accessors (e.g. `resource.arn`), but this can be used for future compatibility
	// in case there is no generated attribute.
	GetAtt(attributeName *string, typeHint awscdk.ResolutionTypeHint) awscdk.Reference
	// Retrieve a value from the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	GetMetadata(key *string) interface{}
	// Examines the CloudFormation resource and discloses attributes.
	Inspect(inspector awscdk.TreeInspector)
	// Retrieves an array of resources this resource depends on.
	//
	// This assembles dependencies on resources across stacks (including nested stacks)
	// automatically.
	ObtainDependencies() *[]interface{}
	// Get a shallow copy of dependencies between this resource and other resources in the same stack.
	ObtainResourceDependencies() *[]awscdk.CfnResource
	// Overrides the auto-generated logical ID with a specific ID.
	OverrideLogicalId(newLogicalId *string)
	// Indicates that this resource no longer depends on another resource.
	//
	// This can be used for resources across stacks (including nested stacks)
	// and the dependency will automatically be removed from the relevant scope.
	RemoveDependency(target awscdk.CfnResource)
	RenderProperties(props *map[string]interface{}) *map[string]interface{}
	// Replaces one dependency with another.
	ReplaceDependency(target awscdk.CfnResource, newTarget awscdk.CfnResource)
	// Can be overridden by subclasses to determine if this resource will be rendered into the CloudFormation template.
	//
	// Returns: `true` if the resource should be included or `false` if the resource
	// should be omitted.
	ShouldSynthesize() *bool
	// Returns a string representation of this construct.
	//
	// Returns: a string representation of this resource.
	ToString() *string
	ValidateProperties(_properties interface{})
}

The `AWS::Batch::JobQueue` resource specifies the parameters for an AWS Batch job queue definition.

For more information, see [Job Queues](https://docs.aws.amazon.com/batch/latest/userguide/job_queues.html) in the *AWS Batch User Guide*.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnJobQueue := awscdk.Aws_batch.NewCfnJobQueue(this, jsii.String("MyCfnJobQueue"), &CfnJobQueueProps{
	ComputeEnvironmentOrder: []interface{}{
		&ComputeEnvironmentOrderProperty{
			ComputeEnvironment: jsii.String("computeEnvironment"),
			Order: jsii.Number(123),
		},
	},
	Priority: jsii.Number(123),

	// the properties below are optional
	JobQueueName: jsii.String("jobQueueName"),
	SchedulingPolicyArn: jsii.String("schedulingPolicyArn"),
	State: jsii.String("state"),
	Tags: map[string]*string{
		"tagsKey": jsii.String("tags"),
	},
})

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html

func NewCfnJobQueue

func NewCfnJobQueue(scope constructs.Construct, id *string, props *CfnJobQueueProps) CfnJobQueue

type CfnJobQueueProps

type CfnJobQueueProps struct {
	// The set of compute environments mapped to a job queue and their order relative to each other.
	//
	// The job scheduler uses this parameter to determine which compute environment runs a specific job. Compute environments must be in the `VALID` state before you can associate them with a job queue. You can associate up to three compute environments with a job queue. All of the compute environments must be either EC2 ( `EC2` or `SPOT` ) or Fargate ( `FARGATE` or `FARGATE_SPOT` ); EC2 and Fargate compute environments can't be mixed.
	//
	// > All compute environments that are associated with a job queue must share the same architecture. AWS Batch doesn't support mixing compute environment architecture types in a single job queue.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-computeenvironmentorder
	//
	ComputeEnvironmentOrder interface{} `field:"required" json:"computeEnvironmentOrder" yaml:"computeEnvironmentOrder"`
	// The priority of the job queue.
	//
	// Job queues with a higher priority (or a higher integer value for the `priority` parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of `10` is given scheduling preference over a job queue with a priority value of `1` . All of the compute environments must be either EC2 ( `EC2` or `SPOT` ) or Fargate ( `FARGATE` or `FARGATE_SPOT` ); EC2 and Fargate compute environments can't be mixed.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-priority
	//
	Priority *float64 `field:"required" json:"priority" yaml:"priority"`
	// The name of the job queue.
	//
	// It can be up to 128 letters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-jobqueuename
	//
	JobQueueName *string `field:"optional" json:"jobQueueName" yaml:"jobQueueName"`
	// The Amazon Resource Name (ARN) of the scheduling policy.
	//
	// The format is `arn:*Partition*:batch:*Region*:*Account*:scheduling-policy/*Name*` . For example, `arn:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-schedulingpolicyarn
	//
	SchedulingPolicyArn *string `field:"optional" json:"schedulingPolicyArn" yaml:"schedulingPolicyArn"`
	// The state of the job queue.
	//
	// If the job queue state is `ENABLED` , it is able to accept jobs. If the job queue state is `DISABLED` , new jobs can't be added to the queue, but jobs already in the queue can finish.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-state
	//
	State *string `field:"optional" json:"state" yaml:"state"`
	// The tags that are applied to the job queue.
	//
	// For more information, see [Tagging your AWS Batch resources](https://docs.aws.amazon.com/batch/latest/userguide/using-tags.html) in the *AWS Batch User Guide* .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html#cfn-batch-jobqueue-tags
	//
	Tags *map[string]*string `field:"optional" json:"tags" yaml:"tags"`
}

Properties for defining a `CfnJobQueue`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnJobQueueProps := &CfnJobQueueProps{
	ComputeEnvironmentOrder: []interface{}{
		&ComputeEnvironmentOrderProperty{
			ComputeEnvironment: jsii.String("computeEnvironment"),
			Order: jsii.Number(123),
		},
	},
	Priority: jsii.Number(123),

	// the properties below are optional
	JobQueueName: jsii.String("jobQueueName"),
	SchedulingPolicyArn: jsii.String("schedulingPolicyArn"),
	State: jsii.String("state"),
	Tags: map[string]*string{
		"tagsKey": jsii.String("tags"),
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-jobqueue.html
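The descending-priority rule for `Priority` can be sketched in plain Go, outside the CDK. The `queue` type and queue names below are illustrative, not part of the CDK API; only the ordering rule comes from the documentation above.

```go
package main

import (
	"fmt"
	"sort"
)

// queue pairs a hypothetical job queue name with its Priority value,
// mirroring the required Priority field on CfnJobQueueProps.
type queue struct {
	Name     string
	Priority float64
}

// evaluationOrder sorts queues attached to the same compute environment
// so that higher Priority values come first, matching the descending
// evaluation order the scheduler uses.
func evaluationOrder(qs []queue) []queue {
	out := append([]queue(nil), qs...)
	sort.Slice(out, func(i, j int) bool { return out[i].Priority > out[j].Priority })
	return out
}

func main() {
	for _, q := range evaluationOrder([]queue{{"batch-low", 1}, {"batch-high", 10}}) {
		fmt.Println(q.Name, q.Priority)
	}
}
```

So a queue with `Priority: 10` is considered before one with `Priority: 1`, as the docs describe.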

type CfnJobQueue_ComputeEnvironmentOrderProperty

type CfnJobQueue_ComputeEnvironmentOrderProperty struct {
	// The Amazon Resource Name (ARN) of the compute environment.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobqueue-computeenvironmentorder.html#cfn-batch-jobqueue-computeenvironmentorder-computeenvironment
	//
	ComputeEnvironment *string `field:"required" json:"computeEnvironment" yaml:"computeEnvironment"`
	// The order of the compute environment.
	//
	// Compute environments are tried in ascending order. For example, if two compute environments are associated with a job queue, the compute environment with a lower `order` integer value is tried for job placement first.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobqueue-computeenvironmentorder.html#cfn-batch-jobqueue-computeenvironmentorder-order
	//
	Order *float64 `field:"required" json:"order" yaml:"order"`
}

The order that compute environments are tried in for job placement within a queue.

Compute environments are tried in ascending order. For example, if two compute environments are associated with a job queue, the compute environment with a lower order integer value is tried for job placement first. Compute environments must be in the `VALID` state before you can associate them with a job queue. All of the compute environments must be either EC2 ( `EC2` or `SPOT` ) or Fargate ( `FARGATE` or `FARGATE_SPOT` ); EC2 and Fargate compute environments can't be mixed.

> All compute environments that are associated with a job queue must share the same architecture. AWS Batch doesn't support mixing compute environment architecture types in a single job queue.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

computeEnvironmentOrderProperty := &ComputeEnvironmentOrderProperty{
	ComputeEnvironment: jsii.String("computeEnvironment"),
	Order: jsii.Number(123),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-jobqueue-computeenvironmentorder.html
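The ascending-order placement rule described above can also be sketched outside the CDK. The `ceOrder` struct and environment names are illustrative; only the "lower `order` is tried first" rule comes from the documentation.

```go
package main

import (
	"fmt"
	"sort"
)

// ceOrder mirrors ComputeEnvironmentOrderProperty: a compute
// environment identifier plus its Order value.
type ceOrder struct {
	ComputeEnvironment string
	Order              float64
}

// placementOrder returns the environments in the sequence the scheduler
// tries them for job placement: ascending Order values first.
func placementOrder(ces []ceOrder) []string {
	out := append([]ceOrder(nil), ces...)
	sort.Slice(out, func(i, j int) bool { return out[i].Order < out[j].Order })
	names := make([]string, len(out))
	for i, ce := range out {
		names[i] = ce.ComputeEnvironment
	}
	return names
}

func main() {
	fmt.Println(placementOrder([]ceOrder{{"spot-ce", 2}, {"ondemand-ce", 1}}))
}
```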

type CfnSchedulingPolicy

type CfnSchedulingPolicy interface {
	awscdk.CfnResource
	awscdk.IInspectable
	awscdk.ITaggable
	// Returns the scheduling policy ARN, such as `arn:aws:batch:*us-east-1*:*111122223333*:scheduling-policy/*HighPriority*` .
	AttrArn() *string
	// Options for this resource, such as condition, update policy etc.
	CfnOptions() awscdk.ICfnResourceOptions
	CfnProperties() *map[string]interface{}
	// AWS resource type.
	CfnResourceType() *string
	// Returns: the stack trace of the point where this Resource was created from, sourced
	// from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most
	// node +internal+ entries filtered.
	CreationStack() *[]*string
	// The fair share policy of the scheduling policy.
	FairsharePolicy() interface{}
	SetFairsharePolicy(val interface{})
	// The logical ID for this CloudFormation stack element.
	//
	// The logical ID of the element
	// is calculated from the path of the resource node in the construct tree.
	//
	// To override this value, use `overrideLogicalId(newLogicalId)`.
	//
	// Returns: the logical ID as a stringified token. This value will only get
	// resolved during synthesis.
	LogicalId() *string
	// The name of the scheduling policy.
	Name() *string
	SetName(val *string)
	// The tree node.
	Node() constructs.Node
	// Return a string that will be resolved to a CloudFormation `{ Ref }` for this element.
	//
	// If, by any chance, the intrinsic reference of a resource is not a string, you could
	// coerce it to an IResolvable through `Lazy.any({ produce: resource.ref })`.
	Ref() *string
	// The stack in which this element is defined.
	//
	// CfnElements must be defined within a stack scope (directly or indirectly).
	Stack() awscdk.Stack
	// Tag Manager which manages the tags for this resource.
	Tags() awscdk.TagManager
	// The tags that you apply to the scheduling policy to help you categorize and organize your resources.
	TagsRaw() *map[string]*string
	SetTagsRaw(val *map[string]*string)
	// Deprecated.
	// Deprecated: use `updatedProperties`
	//
	// Return properties modified after initiation
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperites() *map[string]interface{}
	// Return properties modified after initiation.
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperties() *map[string]interface{}
	// Syntactic sugar for `addOverride(path, undefined)`.
	AddDeletionOverride(path *string)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	//
	// This can be used for resources across stacks (or nested stack) boundaries
	// and the dependency will automatically be transferred to the relevant scope.
	AddDependency(target awscdk.CfnResource)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	// Deprecated: use addDependency.
	AddDependsOn(target awscdk.CfnResource)
	// Add a value to the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	AddMetadata(key *string, value interface{})
	// Adds an override to the synthesized CloudFormation resource.
	//
	// To add a
	// property override, either use `addPropertyOverride` or prefix `path` with
	// "Properties." (i.e. `Properties.TopicName`).
	//
	// If the override is nested, separate each nested level using a dot (.) in the path parameter.
	// If there is an array as part of the nesting, specify the index in the path.
	//
	// To include a literal `.` in the property name, prefix with a `\`. In most
	// programming languages you will need to write this as `"\\."` because the
	// `\` itself will need to be escaped.
	//
	// For example,
	// ```typescript
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes', ['myattribute']);
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.1.ProjectionType', 'INCLUDE');
	// ```
	// would add the overrides
	// ```json
	// "Properties": {
	//   "GlobalSecondaryIndexes": [
	//     {
	//       "Projection": {
	//         "NonKeyAttributes": [ "myattribute" ]
	//         ...
	//       }
	//       ...
	//     },
	//     {
	//       "ProjectionType": "INCLUDE"
	//       ...
	//     },
	//   ]
	//   ...
	// }
	// ```
	//
	// The `value` argument to `addOverride` will not be processed or translated
	// in any way. Pass raw JSON values in here with the correct capitalization
	// for CloudFormation. If you pass CDK classes or structs, they will be
	// rendered with lowercased key names, and CloudFormation will reject the
	// template.
	AddOverride(path *string, value interface{})
	// Adds an override that deletes the value of a property from the resource definition.
	AddPropertyDeletionOverride(propertyPath *string)
	// Adds an override to a resource property.
	//
	// Syntactic sugar for `addOverride("Properties.<...>", value)`.
	AddPropertyOverride(propertyPath *string, value interface{})
	// Sets the deletion policy of the resource based on the removal policy specified.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`). In some
	// cases, a snapshot can be taken of the resource prior to deletion
	// (`RemovalPolicy.SNAPSHOT`). A list of resources that support this policy
	// can be found at the following link.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html#aws-attribute-deletionpolicy-options
	//
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy, options *awscdk.RemovalPolicyOptions)
	// Returns a token for a runtime attribute of this resource.
	//
	// Ideally, use generated attribute accessors (e.g. `resource.arn`), but this can be used for future compatibility
	// in case there is no generated attribute.
	GetAtt(attributeName *string, typeHint awscdk.ResolutionTypeHint) awscdk.Reference
	// Retrieve a value from the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	GetMetadata(key *string) interface{}
	// Examines the CloudFormation resource and discloses attributes.
	Inspect(inspector awscdk.TreeInspector)
	// Retrieves an array of resources this resource depends on.
	//
	// This assembles dependencies on resources across stacks (including nested stacks)
	// automatically.
	ObtainDependencies() *[]interface{}
	// Get a shallow copy of dependencies between this resource and other resources in the same stack.
	ObtainResourceDependencies() *[]awscdk.CfnResource
	// Overrides the auto-generated logical ID with a specific ID.
	OverrideLogicalId(newLogicalId *string)
	// Indicates that this resource no longer depends on another resource.
	//
	// This can be used for resources across stacks (including nested stacks)
	// and the dependency will automatically be removed from the relevant scope.
	RemoveDependency(target awscdk.CfnResource)
	RenderProperties(props *map[string]interface{}) *map[string]interface{}
	// Replaces one dependency with another.
	ReplaceDependency(target awscdk.CfnResource, newTarget awscdk.CfnResource)
	// Can be overridden by subclasses to determine if this resource will be rendered into the CloudFormation template.
	//
	// Returns: `true` if the resource should be included or `false` if the resource
	// should be omitted.
	ShouldSynthesize() *bool
	// Returns a string representation of this construct.
	//
	// Returns: a string representation of this resource.
	ToString() *string
	ValidateProperties(_properties interface{})
}

The `AWS::Batch::SchedulingPolicy` resource specifies the parameters for an AWS Batch scheduling policy.

For more information, see [Scheduling Policies](https://docs.aws.amazon.com/batch/latest/userguide/scheduling_policies.html) in the *AWS Batch User Guide*.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnSchedulingPolicy := awscdk.Aws_batch.NewCfnSchedulingPolicy(this, jsii.String("MyCfnSchedulingPolicy"), &CfnSchedulingPolicyProps{
	FairsharePolicy: &FairsharePolicyProperty{
		ComputeReservation: jsii.Number(123),
		ShareDecaySeconds: jsii.Number(123),
		ShareDistribution: []interface{}{
			&ShareAttributesProperty{
				ShareIdentifier: jsii.String("shareIdentifier"),
				WeightFactor: jsii.Number(123),
			},
		},
	},
	Name: jsii.String("name"),
	Tags: map[string]*string{
		"tagsKey": jsii.String("tags"),
	},
})

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-schedulingpolicy.html

func NewCfnSchedulingPolicy

func NewCfnSchedulingPolicy(scope constructs.Construct, id *string, props *CfnSchedulingPolicyProps) CfnSchedulingPolicy

type CfnSchedulingPolicyProps

type CfnSchedulingPolicyProps struct {
	// The fair share policy of the scheduling policy.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-schedulingpolicy.html#cfn-batch-schedulingpolicy-fairsharepolicy
	//
	FairsharePolicy interface{} `field:"optional" json:"fairsharePolicy" yaml:"fairsharePolicy"`
	// The name of the scheduling policy.
	//
	// It can be up to 128 letters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-schedulingpolicy.html#cfn-batch-schedulingpolicy-name
	//
	Name *string `field:"optional" json:"name" yaml:"name"`
	// The tags that you apply to the scheduling policy to help you categorize and organize your resources.
	//
	// Each tag consists of a key and an optional value. For more information, see [Tagging AWS Resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) in *AWS General Reference* .
	//
	// These tags can be updated or removed using the [TagResource](https://docs.aws.amazon.com/batch/latest/APIReference/API_TagResource.html) and [UntagResource](https://docs.aws.amazon.com/batch/latest/APIReference/API_UntagResource.html) API operations.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-schedulingpolicy.html#cfn-batch-schedulingpolicy-tags
	//
	Tags *map[string]*string `field:"optional" json:"tags" yaml:"tags"`
}

Properties for defining a `CfnSchedulingPolicy`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnSchedulingPolicyProps := &CfnSchedulingPolicyProps{
	FairsharePolicy: &FairsharePolicyProperty{
		ComputeReservation: jsii.Number(123),
		ShareDecaySeconds: jsii.Number(123),
		ShareDistribution: []interface{}{
			&ShareAttributesProperty{
				ShareIdentifier: jsii.String("shareIdentifier"),
				WeightFactor: jsii.Number(123),
			},
		},
	},
	Name: jsii.String("name"),
	Tags: map[string]*string{
		"tagsKey": jsii.String("tags"),
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-batch-schedulingpolicy.html

type CfnSchedulingPolicy_FairsharePolicyProperty

type CfnSchedulingPolicy_FairsharePolicyProperty struct {
	// A value used to reserve some of the available maximum vCPU for fair share identifiers that aren't already used.
	//
	// The reserved ratio is `(computeReservation/100)^ActiveFairShares`, where `ActiveFairShares` is the number of active fair share identifiers.
	//
	// For example, a `computeReservation` value of 50 indicates that AWS Batch reserves 50% of the maximum available vCPU if there's only one fair share identifier. It reserves 25% if there are two fair share identifiers. It reserves 12.5% if there are three fair share identifiers. A `computeReservation` value of 25 indicates that AWS Batch should reserve 25% of the maximum available vCPU if there's only one fair share identifier, 6.25% if there are two fair share identifiers, and 1.56% if there are three fair share identifiers.
	//
	// The minimum value is 0 and the maximum value is 99.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-schedulingpolicy-fairsharepolicy.html#cfn-batch-schedulingpolicy-fairsharepolicy-computereservation
	//
	ComputeReservation *float64 `field:"optional" json:"computeReservation" yaml:"computeReservation"`
	// The amount of time (in seconds) to use to calculate a fair share percentage for each fair share identifier in use.
	//
	// A value of zero (0) indicates that only current usage is measured. The decay allows for more recently run jobs to have more weight than jobs that ran earlier. The maximum supported value is 604800 (1 week).
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-schedulingpolicy-fairsharepolicy.html#cfn-batch-schedulingpolicy-fairsharepolicy-sharedecayseconds
	//
	ShareDecaySeconds *float64 `field:"optional" json:"shareDecaySeconds" yaml:"shareDecaySeconds"`
	// An array of `SharedIdentifier` objects that contain the weights for the fair share identifiers for the fair share policy.
	//
	// Fair share identifiers that aren't included have a default weight of `1.0` .
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-schedulingpolicy-fairsharepolicy.html#cfn-batch-schedulingpolicy-fairsharepolicy-sharedistribution
	//
	ShareDistribution interface{} `field:"optional" json:"shareDistribution" yaml:"shareDistribution"`
}

The fair share policy for a scheduling policy.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

fairsharePolicyProperty := &FairsharePolicyProperty{
	ComputeReservation: jsii.Number(123),
	ShareDecaySeconds: jsii.Number(123),
	ShareDistribution: []interface{}{
		&ShareAttributesProperty{
			ShareIdentifier: jsii.String("shareIdentifier"),
			WeightFactor: jsii.Number(123),
		},
	},
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-schedulingpolicy-fairsharepolicy.html

type CfnSchedulingPolicy_ShareAttributesProperty

type CfnSchedulingPolicy_ShareAttributesProperty struct {
	// A fair share identifier or fair share identifier prefix.
	//
	// If the string ends with an asterisk (*), this entry specifies the weight factor to use for fair share identifiers that start with that prefix. The list of fair share identifiers in a fair share policy can't overlap. For example, you can't have one that specifies a `shareIdentifier` of `UserA*` and another that specifies a `shareIdentifier` of `UserA-1` .
	//
	// There can be no more than 500 fair share identifiers active in a job queue.
	//
	// The string is limited to 255 alphanumeric characters, and can be followed by an asterisk (*).
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-schedulingpolicy-shareattributes.html#cfn-batch-schedulingpolicy-shareattributes-shareidentifier
	//
	ShareIdentifier *string `field:"optional" json:"shareIdentifier" yaml:"shareIdentifier"`
	// The weight factor for the fair share identifier.
	//
	// The default value is 1.0. A lower value has a higher priority for compute resources. For example, jobs that use a share identifier with a weight factor of 0.125 (1/8) get 8 times the compute resources of jobs that use a share identifier with a weight factor of 1.
	//
	// The smallest supported value is 0.0001, and the largest supported value is 999.9999.
	// See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-schedulingpolicy-shareattributes.html#cfn-batch-schedulingpolicy-shareattributes-weightfactor
	//
	WeightFactor *float64 `field:"optional" json:"weightFactor" yaml:"weightFactor"`
}

Specifies the weights for the fair share identifiers for the fair share policy.

Fair share identifiers that aren't included have a default weight of `1.0` .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

shareAttributesProperty := &ShareAttributesProperty{
	ShareIdentifier: jsii.String("shareIdentifier"),
	WeightFactor: jsii.Number(123),
}

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-batch-schedulingpolicy-shareattributes.html
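The inverse relationship between `WeightFactor` and compute share described above (a factor of 0.125 gets 8 times the compute of a factor of 1) can be sketched directly; this is an illustrative calculation, not a CDK call.

```go
package main

import "fmt"

// resourceRatio returns how many times more compute jobs under weight
// factor a receive relative to jobs under weight factor b. Shares are
// inversely proportional to the weight factor, so the ratio is b/a.
func resourceRatio(a, b float64) float64 {
	return b / a
}

func main() {
	// Per the docs, weight factor 0.125 gets 8x the compute of 1.0.
	fmt.Println(resourceRatio(0.125, 1.0))
}
```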

type ComputeEnvironmentProps added in v2.96.0

type ComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
}

Props common to all ComputeEnvironments.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var role role

computeEnvironmentProps := &ComputeEnvironmentProps{
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	ServiceRole: role,
}

type CustomReason added in v2.96.0

type CustomReason struct {
	// A glob string that will match on the job exit code.
	//
	// For example, `'40*'` will match 400, 404, 40123456789012.
	// Default: - will not match on the exit code.
	//
	OnExitCode *string `field:"optional" json:"onExitCode" yaml:"onExitCode"`
	// A glob string that will match on the reason returned by the exiting job. For example, `'CannotPullContainerError*'` indicates that the container needed to start the job could not be pulled.
	// Default: - will not match on the reason.
	//
	OnReason *string `field:"optional" json:"onReason" yaml:"onReason"`
	// A glob string that will match on the statusReason returned by the exiting job.
	//
	// For example, `'Host EC2*'` indicates that the spot instance has been reclaimed.
	// Default: - will not match on the status reason.
	//
	OnStatusReason *string `field:"optional" json:"onStatusReason" yaml:"onStatusReason"`
}

The corresponding Action will only be taken if *all* of the conditions specified here are met.

Example:

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	RetryAttempts: jsii.Number(5),
	RetryStrategies: []RetryStrategy{
		batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()),
	},
})
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_SPOT_INSTANCE_RECLAIMED()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_Custom(&CustomReason{
	OnExitCode: jsii.String("40*"),
	OnReason: jsii.String("some reason"),
})))
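
The `OnExitCode` glob above uses a single `*` wildcard. As a rough illustration (not Batch's actual matcher), Go's standard `path.Match` implements the same single-star semantics for strings that contain no `/`; `matchesExitCode` below is a hypothetical helper:

```go
package main

import (
	"fmt"
	"path"
)

// matchesExitCode mimics the glob matching Batch applies to OnExitCode:
// '*' matches any run of characters, so "40*" matches 400, 404, and
// 40123456789012 but not 500. (Hypothetical helper for illustration.)
func matchesExitCode(glob string, exitCode int) bool {
	ok, err := path.Match(glob, fmt.Sprint(exitCode))
	return err == nil && ok
}

func main() {
	for _, code := range []int{400, 404, 40123456789012, 500} {
		fmt.Printf("40* matches %d: %v\n", code, matchesExitCode("40*", code))
	}
}
```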

type Device added in v2.96.0

type Device struct {
	// The path for the device on the host container instance.
	HostPath *string `field:"required" json:"hostPath" yaml:"hostPath"`
	// The path inside the container at which to expose the host device.
	// Default: Same path as the host.
	//
	ContainerPath *string `field:"optional" json:"containerPath" yaml:"containerPath"`
	// The explicit permissions to provide to the container for the device.
	//
	// By default, the container has permissions for read, write, and mknod for the device.
	// Default: Readonly.
	//
	Permissions *[]DevicePermission `field:"optional" json:"permissions" yaml:"permissions"`
}

A container instance host device.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

device := &Device{
	HostPath: jsii.String("hostPath"),

	// the properties below are optional
	ContainerPath: jsii.String("containerPath"),
	Permissions: []DevicePermission{
		awscdk.Aws_batch.DevicePermission_READ,
	},
}

type DevicePermission added in v2.96.0

type DevicePermission string

Permissions for device access.

const (
	// Read.
	DevicePermission_READ DevicePermission = "READ"
	// Write.
	DevicePermission_WRITE DevicePermission = "WRITE"
	// Make a node.
	DevicePermission_MKNOD DevicePermission = "MKNOD"
)

type DnsPolicy added in v2.96.0

type DnsPolicy string

The DNS Policy for the pod used by the Job Definition. See: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy

const (
	// The Pod inherits the name resolution configuration from the node that the Pods run on.
	DnsPolicy_DEFAULT DnsPolicy = "DEFAULT"
	// Any DNS query that does not match the configured cluster domain suffix, such as `"www.kubernetes.io"`, is forwarded to an upstream nameserver by the DNS server. Cluster administrators may have extra stub-domain and upstream DNS servers configured.
	DnsPolicy_CLUSTER_FIRST DnsPolicy = "CLUSTER_FIRST"
	// For Pods running with `hostNetwork`, you should explicitly set its DNS policy to `CLUSTER_FIRST_WITH_HOST_NET`.
	//
	// Otherwise, Pods running with `hostNetwork` and `CLUSTER_FIRST` will fallback to the behavior of the `DEFAULT` policy.
	DnsPolicy_CLUSTER_FIRST_WITH_HOST_NET DnsPolicy = "CLUSTER_FIRST_WITH_HOST_NET"
)

type EcsContainerDefinitionProps added in v2.96.0

type EcsContainerDefinitionProps struct {
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	Cpu *float64 `field:"required" json:"cpu" yaml:"cpu"`
	// The image that this container will run.
	Image awsecs.ContainerImage `field:"required" json:"image" yaml:"image"`
	// The memory hard limit presented to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	Memory awscdk.Size `field:"required" json:"memory" yaml:"memory"`
	// The command that's passed to the container.
	// See: https://docs.docker.com/engine/reference/builder/#cmd
	//
	// Default: - no command.
	//
	Command *[]*string `field:"optional" json:"command" yaml:"command"`
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// Default: - no environment variables.
	//
	Environment *map[string]*string `field:"optional" json:"environment" yaml:"environment"`
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html
	//
	// Default: - a Role will be created.
	//
	ExecutionRole awsiam.IRole `field:"optional" json:"executionRole" yaml:"executionRole"`
	// The role that the container can assume.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
	//
	// Default: - no job role.
	//
	JobRole awsiam.IRole `field:"optional" json:"jobRole" yaml:"jobRole"`
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// Default: none.
	//
	LinuxParameters LinuxParameters `field:"optional" json:"linuxParameters" yaml:"linuxParameters"`
	// The logging configuration for this Job.
	// Default: - the log configuration of the Docker daemon.
	//
	Logging awsecs.LogDriver `field:"optional" json:"logging" yaml:"logging"`
	// Gives the container readonly access to its root filesystem.
	// Default: false.
	//
	ReadonlyRootFilesystem *bool `field:"optional" json:"readonlyRootFilesystem" yaml:"readonlyRootFilesystem"`
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html
	//
	// Default: - no secrets.
	//
	Secrets *map[string]Secret `field:"optional" json:"secrets" yaml:"secrets"`
	// The user name to use inside the container.
	// Default: - no user.
	//
	User *string `field:"optional" json:"user" yaml:"user"`
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	// Default: - no volumes.
	//
	Volumes *[]EcsVolume `field:"optional" json:"volumes" yaml:"volumes"`
}

Props to configure an EcsContainerDefinition.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var containerImage containerImage
var ecsVolume ecsVolume
var linuxParameters linuxParameters
var logDriver logDriver
var role role
var secret secret
var size size

ecsContainerDefinitionProps := &EcsContainerDefinitionProps{
	Cpu: jsii.Number(123),
	Image: containerImage,
	Memory: size,

	// the properties below are optional
	Command: []*string{
		jsii.String("command"),
	},
	Environment: map[string]*string{
		"environmentKey": jsii.String("environment"),
	},
	ExecutionRole: role,
	JobRole: role,
	LinuxParameters: linuxParameters,
	Logging: logDriver,
	ReadonlyRootFilesystem: jsii.Boolean(false),
	Secrets: map[string]*secret{
		"secretsKey": secret,
	},
	User: jsii.String("user"),
	Volumes: []*ecsVolume{
		ecsVolume,
	},
}
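
Since each vCPU corresponds to 1,024 CPU shares, the `Cpu` value maps to Docker CPU shares as sketched below (`cpuShares` is a hypothetical helper, not part of the CDK API):

```go
package main

import "fmt"

// cpuShares converts the vCPU count from EcsContainerDefinitionProps.Cpu
// into Docker CPU shares: each vCPU is 1,024 shares, so fractional vCPUs
// map to proportionally fewer shares. (Hypothetical helper.)
func cpuShares(vCpus float64) int {
	return int(vCpus * 1024)
}

func main() {
	fmt.Println(cpuShares(1))    // 1024
	fmt.Println(cpuShares(0.25)) // 256
}
```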

type EcsEc2ContainerDefinition added in v2.96.0

type EcsEc2ContainerDefinition interface {
	constructs.Construct
	IEcsContainerDefinition
	IEcsEc2ContainerDefinition
	// The command that's passed to the container.
	Command() *[]*string
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	Cpu() *float64
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	Environment() *map[string]*string
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	ExecutionRole() awsiam.IRole
	// The number of physical GPUs to reserve for the container.
	//
	// Make sure that the number of GPUs reserved for all containers in a job doesn't exceed
	// the number of available GPUs on the compute resource that the job is launched on.
	Gpu() *float64
	// The image that this container will run.
	Image() awsecs.ContainerImage
	// The role that the container can assume.
	JobRole() awsiam.IRole
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	LinuxParameters() LinuxParameters
	// The configuration of the log driver.
	LogDriverConfig() *awsecs.LogDriverConfig
	// The memory hard limit presented to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	Memory() awscdk.Size
	// The tree node.
	Node() constructs.Node
	// When this parameter is true, the container is given elevated permissions on the host container instance (similar to the root user).
	Privileged() *bool
	// Gives the container readonly access to its root filesystem.
	ReadonlyRootFilesystem() *bool
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	Secrets() *map[string]Secret
	// Limits to set for the user this docker container will run as.
	Ulimits() *[]*Ulimit
	// The user name to use inside the container.
	User() *string
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	Volumes() *[]EcsVolume
	// Add a ulimit to this container.
	AddUlimit(ulimit *Ulimit)
	// Add a Volume to this container.
	AddVolume(volume EcsVolume)
	// Returns a string representation of this construct.
	ToString() *string
}

A container orchestrated by ECS that uses EC2 resources.

Example:

var vpc ec2.IVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []*OrderedComputeEnvironment{
		&OrderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

func NewEcsEc2ContainerDefinition added in v2.96.0

func NewEcsEc2ContainerDefinition(scope constructs.Construct, id *string, props *EcsEc2ContainerDefinitionProps) EcsEc2ContainerDefinition

type EcsEc2ContainerDefinitionProps added in v2.96.0

type EcsEc2ContainerDefinitionProps struct {
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	Cpu *float64 `field:"required" json:"cpu" yaml:"cpu"`
	// The image that this container will run.
	Image awsecs.ContainerImage `field:"required" json:"image" yaml:"image"`
	// The memory hard limit presented to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	Memory awscdk.Size `field:"required" json:"memory" yaml:"memory"`
	// The command that's passed to the container.
	// See: https://docs.docker.com/engine/reference/builder/#cmd
	//
	// Default: - no command.
	//
	Command *[]*string `field:"optional" json:"command" yaml:"command"`
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// Default: - no environment variables.
	//
	Environment *map[string]*string `field:"optional" json:"environment" yaml:"environment"`
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html
	//
	// Default: - a Role will be created.
	//
	ExecutionRole awsiam.IRole `field:"optional" json:"executionRole" yaml:"executionRole"`
	// The role that the container can assume.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
	//
	// Default: - no job role.
	//
	JobRole awsiam.IRole `field:"optional" json:"jobRole" yaml:"jobRole"`
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// Default: none.
	//
	LinuxParameters LinuxParameters `field:"optional" json:"linuxParameters" yaml:"linuxParameters"`
	// The logging configuration for this Job.
	// Default: - the log configuration of the Docker daemon.
	//
	Logging awsecs.LogDriver `field:"optional" json:"logging" yaml:"logging"`
	// Gives the container readonly access to its root filesystem.
	// Default: false.
	//
	ReadonlyRootFilesystem *bool `field:"optional" json:"readonlyRootFilesystem" yaml:"readonlyRootFilesystem"`
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html
	//
	// Default: - no secrets.
	//
	Secrets *map[string]Secret `field:"optional" json:"secrets" yaml:"secrets"`
	// The user name to use inside the container.
	// Default: - no user.
	//
	User *string `field:"optional" json:"user" yaml:"user"`
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	// Default: - no volumes.
	//
	Volumes *[]EcsVolume `field:"optional" json:"volumes" yaml:"volumes"`
	// The number of physical GPUs to reserve for the container.
	//
	// Make sure that the number of GPUs reserved for all containers in a job doesn't exceed
	// the number of available GPUs on the compute resource that the job is launched on.
	// Default: - no gpus.
	//
	Gpu *float64 `field:"optional" json:"gpu" yaml:"gpu"`
	// When this parameter is true, the container is given elevated permissions on the host container instance (similar to the root user).
	// Default: false.
	//
	Privileged *bool `field:"optional" json:"privileged" yaml:"privileged"`
	// Limits to set for the user this docker container will run as.
	// Default: - no ulimits.
	//
	Ulimits *[]*Ulimit `field:"optional" json:"ulimits" yaml:"ulimits"`
}

Props to configure an EcsEc2ContainerDefinition.

Example:

var vpc ec2.IVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []*OrderedComputeEnvironment{
		&OrderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

type EcsFargateContainerDefinition added in v2.96.0

type EcsFargateContainerDefinition interface {
	constructs.Construct
	IEcsContainerDefinition
	IEcsFargateContainerDefinition
	// Indicates whether the job has a public IP address.
	//
	// For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet
	// (for example, to pull container images), the private subnet requires a NAT gateway be attached to route requests to the internet.
	AssignPublicIp() *bool
	// The command that's passed to the container.
	Command() *[]*string
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	Cpu() *float64
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	Environment() *map[string]*string
	// The size for ephemeral storage.
	EphemeralStorageSize() awscdk.Size
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	ExecutionRole() awsiam.IRole
	// Which version of Fargate to use when running this container.
	FargatePlatformVersion() awsecs.FargatePlatformVersion
	// The image that this container will run.
	Image() awsecs.ContainerImage
	// The role that the container can assume.
	JobRole() awsiam.IRole
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	LinuxParameters() LinuxParameters
	// The configuration of the log driver.
	LogDriverConfig() *awsecs.LogDriverConfig
	// The memory hard limit presented to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	Memory() awscdk.Size
	// The tree node.
	Node() constructs.Node
	// Gives the container readonly access to its root filesystem.
	ReadonlyRootFilesystem() *bool
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	Secrets() *map[string]Secret
	// The user name to use inside the container.
	User() *string
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	Volumes() *[]EcsVolume
	// Add a Volume to this container.
	AddVolume(volume EcsVolume)
	// Returns a string representation of this construct.
	ToString() *string
}

A container orchestrated by ECS that uses Fargate resources.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var containerImage containerImage
var ecsVolume ecsVolume
var linuxParameters linuxParameters
var logDriver logDriver
var role role
var secret secret
var size size

ecsFargateContainerDefinition := awscdk.Aws_batch.NewEcsFargateContainerDefinition(this, jsii.String("MyEcsFargateContainerDefinition"), &EcsFargateContainerDefinitionProps{
	Cpu: jsii.Number(123),
	Image: containerImage,
	Memory: size,

	// the properties below are optional
	AssignPublicIp: jsii.Boolean(false),
	Command: []*string{
		jsii.String("command"),
	},
	Environment: map[string]*string{
		"environmentKey": jsii.String("environment"),
	},
	EphemeralStorageSize: size,
	ExecutionRole: role,
	FargatePlatformVersion: awscdk.Aws_ecs.FargatePlatformVersion_LATEST,
	JobRole: role,
	LinuxParameters: linuxParameters,
	Logging: logDriver,
	ReadonlyRootFilesystem: jsii.Boolean(false),
	Secrets: map[string]*secret{
		"secretsKey": secret,
	},
	User: jsii.String("user"),
	Volumes: []*ecsVolume{
		ecsVolume,
	},
})

func NewEcsFargateContainerDefinition added in v2.96.0

func NewEcsFargateContainerDefinition(scope constructs.Construct, id *string, props *EcsFargateContainerDefinitionProps) EcsFargateContainerDefinition

type EcsFargateContainerDefinitionProps added in v2.96.0

type EcsFargateContainerDefinitionProps struct {
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	Cpu *float64 `field:"required" json:"cpu" yaml:"cpu"`
	// The image that this container will run.
	Image awsecs.ContainerImage `field:"required" json:"image" yaml:"image"`
	// The memory hard limit presented to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	Memory awscdk.Size `field:"required" json:"memory" yaml:"memory"`
	// The command that's passed to the container.
	// See: https://docs.docker.com/engine/reference/builder/#cmd
	//
	// Default: - no command.
	//
	Command *[]*string `field:"optional" json:"command" yaml:"command"`
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// Default: - no environment variables.
	//
	Environment *map[string]*string `field:"optional" json:"environment" yaml:"environment"`
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html
	//
	// Default: - a Role will be created.
	//
	ExecutionRole awsiam.IRole `field:"optional" json:"executionRole" yaml:"executionRole"`
	// The role that the container can assume.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
	//
	// Default: - no job role.
	//
	JobRole awsiam.IRole `field:"optional" json:"jobRole" yaml:"jobRole"`
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// Default: none.
	//
	LinuxParameters LinuxParameters `field:"optional" json:"linuxParameters" yaml:"linuxParameters"`
	// The logging configuration for this Job.
	// Default: - the log configuration of the Docker daemon.
	//
	Logging awsecs.LogDriver `field:"optional" json:"logging" yaml:"logging"`
	// Gives the container readonly access to its root filesystem.
	// Default: false.
	//
	ReadonlyRootFilesystem *bool `field:"optional" json:"readonlyRootFilesystem" yaml:"readonlyRootFilesystem"`
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html
	//
	// Default: - no secrets.
	//
	Secrets *map[string]Secret `field:"optional" json:"secrets" yaml:"secrets"`
	// The user name to use inside the container.
	// Default: - no user.
	//
	User *string `field:"optional" json:"user" yaml:"user"`
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	// Default: - no volumes.
	//
	Volumes *[]EcsVolume `field:"optional" json:"volumes" yaml:"volumes"`
	// Indicates whether the job has a public IP address.
	//
	// For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet
	// (for example, to pull container images), the private subnet requires a NAT gateway be attached to route requests to the internet.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
	//
	// Default: false.
	//
	AssignPublicIp *bool `field:"optional" json:"assignPublicIp" yaml:"assignPublicIp"`
	// The size for ephemeral storage.
	// Default: - 20 GiB.
	//
	EphemeralStorageSize awscdk.Size `field:"optional" json:"ephemeralStorageSize" yaml:"ephemeralStorageSize"`
	// Which version of Fargate to use when running this container.
	// Default: LATEST.
	//
	FargatePlatformVersion awsecs.FargatePlatformVersion `field:"optional" json:"fargatePlatformVersion" yaml:"fargatePlatformVersion"`
}

Props to configure an EcsFargateContainerDefinition.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var containerImage containerImage
var ecsVolume ecsVolume
var linuxParameters linuxParameters
var logDriver logDriver
var role role
var secret secret
var size size

ecsFargateContainerDefinitionProps := &EcsFargateContainerDefinitionProps{
	Cpu: jsii.Number(123),
	Image: containerImage,
	Memory: size,

	// the properties below are optional
	AssignPublicIp: jsii.Boolean(false),
	Command: []*string{
		jsii.String("command"),
	},
	Environment: map[string]*string{
		"environmentKey": jsii.String("environment"),
	},
	EphemeralStorageSize: size,
	ExecutionRole: role,
	FargatePlatformVersion: awscdk.Aws_ecs.FargatePlatformVersion_LATEST,
	JobRole: role,
	LinuxParameters: linuxParameters,
	Logging: logDriver,
	ReadonlyRootFilesystem: jsii.Boolean(false),
	Secrets: map[string]*secret{
		"secretsKey": secret,
	},
	User: jsii.String("user"),
	Volumes: []*ecsVolume{
		ecsVolume,
	},
}

type EcsJobDefinition added in v2.96.0

type EcsJobDefinition interface {
	awscdk.Resource
	IJobDefinition
	// The container that this job will run.
	Container() IEcsContainerDefinition
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// The ARN of this job definition.
	JobDefinitionArn() *string
	// The name of this job definition.
	JobDefinitionName() *string
	// The tree node.
	Node() constructs.Node
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	Parameters() *map[string]interface{}
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	PhysicalName() *string
	// Whether to propagate tags from the JobDefinition to the ECS task that Batch spawns.
	PropagateTags() *bool
	// The number of times to retry a job.
	//
	// A failed job is retried up to this many times.
	RetryAttempts() *float64
	// Defines the retry behavior for this job.
	RetryStrategies() *[]RetryStrategy
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	SchedulingPriority() *float64
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	Timeout() awscdk.Duration
	// Add a RetryStrategy to this JobDefinition.
	AddRetryStrategy(strategy RetryStrategy)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Grants the `batch:submitJob` permission to the identity on both this job definition and the `queue`.
	GrantSubmitJob(identity awsiam.IGrantable, queue IJobQueue)
	// Returns a string representation of this construct.
	ToString() *string
}

A JobDefinition that uses ECS orchestration.

Example:

var vpc ec2.IVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []*OrderedComputeEnvironment{
		&OrderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

func NewEcsJobDefinition added in v2.96.0

func NewEcsJobDefinition(scope constructs.Construct, id *string, props *EcsJobDefinitionProps) EcsJobDefinition

type EcsJobDefinitionProps added in v2.96.0

type EcsJobDefinitionProps struct {
	// The name of this job definition.
	// Default: - generated by CloudFormation.
	//
	JobDefinitionName *string `field:"optional" json:"jobDefinitionName" yaml:"jobDefinitionName"`
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters
	//
	// Default: none.
	//
	Parameters *map[string]interface{} `field:"optional" json:"parameters" yaml:"parameters"`
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to this number of attempts.
	// Default: 1.
	//
	RetryAttempts *float64 `field:"optional" json:"retryAttempts" yaml:"retryAttempts"`
	// Defines the retry behavior for this job.
	// Default: - no `RetryStrategy`.
	//
	RetryStrategies *[]RetryStrategy `field:"optional" json:"retryStrategies" yaml:"retryStrategies"`
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Default: none.
	//
	SchedulingPriority *float64 `field:"optional" json:"schedulingPriority" yaml:"schedulingPriority"`
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Default: - no timeout.
	//
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
	// The container that this job will run.
	Container IEcsContainerDefinition `field:"required" json:"container" yaml:"container"`
	// Whether to propagate tags from the JobDefinition to the ECS task that Batch spawns.
	// Default: false.
	//
	PropagateTags *bool `field:"optional" json:"propagateTags" yaml:"propagateTags"`
}

Props for EcsJobDefinition.

Example:

var vpc iVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []*OrderedComputeEnvironment{
		&OrderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

type EcsMachineImage added in v2.96.0

type EcsMachineImage struct {
	// The machine image to use.
	// Default: - chosen by Batch.
	//
	Image awsec2.IMachineImage `field:"optional" json:"image" yaml:"image"`
	// Tells Batch which instance type to launch this image on.
	// Default: - 'ECS_AL2' for non-gpu instances, 'ECS_AL2_NVIDIA' for gpu instances.
	//
	ImageType EcsMachineImageType `field:"optional" json:"imageType" yaml:"imageType"`
}

A Batch MachineImage that is compatible with ECS.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var machineImage iMachineImage

ecsMachineImage := &EcsMachineImage{
	Image: machineImage,
	ImageType: awscdk.Aws_batch.EcsMachineImageType_ECS_AL2,
}

type EcsMachineImageType added in v2.96.0

type EcsMachineImageType string

Maps the image to instance types.

const (
	// Tells Batch that this machine image runs on non-GPU instances.
	EcsMachineImageType_ECS_AL2 EcsMachineImageType = "ECS_AL2"
	// Tells Batch that this machine image runs on GPU instances.
	EcsMachineImageType_ECS_AL2_NVIDIA EcsMachineImageType = "ECS_AL2_NVIDIA"
)
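
As a hedged sketch (not part of the generated docs), the image type above is typically supplied through a managed compute environment's machine-image configuration. The `Images` prop name and shape below are assumptions inferred from the `EcsMachineImage` struct; verify them against your CDK version:

```go
var vpc iVpc

// Request the GPU-enabled Amazon Linux 2 ECS image for a managed
// compute environment. The `Images` prop is an assumption here.
gpuCE := batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("GpuCE"), &ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	Images: []*EcsMachineImage{
		&EcsMachineImage{
			ImageType: batch.EcsMachineImageType_ECS_AL2_NVIDIA,
		},
	},
})
```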

type EcsVolume added in v2.96.0

type EcsVolume interface {
	// The path on the container that this volume will be mounted to.
	ContainerPath() *string
	// The name of this volume.
	Name() *string
	// Whether or not the container has readonly access to this volume.
	// Default: false.
	//
	Readonly() *bool
}

Represents a Volume that can be mounted to a container that uses ECS.

Example:

var myFileSystem iFileSystem
var myJobRole role

myFileSystem.GrantRead(myJobRole)

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Volumes: []EcsVolume{
			batch.EcsVolume_Efs(&EfsVolumeOptions{
				Name: jsii.String("myVolume"),
				FileSystem: myFileSystem,
				ContainerPath: jsii.String("/Volumes/myVolume"),
				UseJobRole: jsii.Boolean(true),
			}),
		},
		JobRole: myJobRole,
	}),
})

type EcsVolumeOptions added in v2.96.0

type EcsVolumeOptions struct {
	// the path on the container where this volume is mounted.
	ContainerPath *string `field:"required" json:"containerPath" yaml:"containerPath"`
	// the name of this volume.
	Name *string `field:"required" json:"name" yaml:"name"`
	// if set, the container will have readonly access to the volume.
	// Default: false.
	//
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
}

Options to configure an EcsVolume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

ecsVolumeOptions := &EcsVolumeOptions{
	ContainerPath: jsii.String("containerPath"),
	Name: jsii.String("name"),

	// the properties below are optional
	Readonly: jsii.Boolean(false),
}

type EfsVolume added in v2.96.0

type EfsVolume interface {
	EcsVolume
	// The Amazon EFS access point ID to use.
	//
	// If an access point is specified, `rootDirectory` must either be omitted or set to `/`
	// which enforces the path set on the EFS access point.
	// If an access point is used, `enableTransitEncryption` must be `true`.
	// See: https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html
	//
	// Default: - no accessPointId.
	//
	AccessPointId() *string
	// The path on the container that this volume will be mounted to.
	ContainerPath() *string
	// Enables encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server.
	// See: https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html
	//
	// Default: false.
	//
	EnableTransitEncryption() *bool
	// The EFS File System that supports this volume.
	FileSystem() awsefs.IFileSystem
	// The name of this volume.
	Name() *string
	// Whether or not the container has readonly access to this volume.
	// Default: false.
	//
	Readonly() *bool
	// The directory within the Amazon EFS file system to mount as the root directory inside the host.
	//
	// If this parameter is omitted, the root of the Amazon EFS volume is used instead.
	// Specifying `/` has the same effect as omitting this parameter.
	// The maximum length is 4,096 characters.
	// Default: - root of the EFS File System.
	//
	RootDirectory() *string
	// The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server.
	//
	// The value must be between 0 and 65,535.
	// See: https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html
	//
	// Default: - chosen by the EFS Mount Helper.
	//
	TransitEncryptionPort() *float64
	// Whether or not to use the AWS Batch job IAM role defined in a job definition when mounting the Amazon EFS file system.
	//
	// If specified, `enableTransitEncryption` must be `true`.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html#efs-volume-accesspoints
	//
	// Default: false.
	//
	UseJobRole() *bool
}

A Volume that uses an AWS Elastic File System (EFS);

this volume can grow and shrink as needed.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var fileSystem fileSystem

efsVolume := awscdk.Aws_batch.NewEfsVolume(&EfsVolumeOptions{
	ContainerPath: jsii.String("containerPath"),
	FileSystem: fileSystem,
	Name: jsii.String("name"),

	// the properties below are optional
	AccessPointId: jsii.String("accessPointId"),
	EnableTransitEncryption: jsii.Boolean(false),
	Readonly: jsii.Boolean(false),
	RootDirectory: jsii.String("rootDirectory"),
	TransitEncryptionPort: jsii.Number(123),
	UseJobRole: jsii.Boolean(false),
})

func EcsVolume_Efs added in v2.96.0

func EcsVolume_Efs(options *EfsVolumeOptions) EfsVolume

Creates a Volume that uses an AWS Elastic File System (EFS);

this volume can grow and shrink as needed. See: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html
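
As a hedged illustration (following the example style used elsewhere in these docs), the static factory can build a volume that is then attached to a container definition. The names `myFileSystem` and `sharedVolume` are placeholders:

```go
var myFileSystem iFileSystem

// Build an EFS-backed volume via the static factory. Transit encryption
// is enabled here since it is required when using access points or job roles.
sharedVolume := batch.EcsVolume_Efs(&EfsVolumeOptions{
	Name: jsii.String("sharedVolume"),
	FileSystem: myFileSystem,
	ContainerPath: jsii.String("/mnt/shared"),
	EnableTransitEncryption: jsii.Boolean(true),
})
```

The resulting volume can be passed in a container definition's `Volumes` list, or added after construction via the container's `AddVolume` method.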

func EfsVolume_Efs added in v2.96.0

func EfsVolume_Efs(options *EfsVolumeOptions) EfsVolume

Creates a Volume that uses an AWS Elastic File System (EFS);

this volume can grow and shrink as needed. See: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html

func HostVolume_Efs added in v2.96.0

func HostVolume_Efs(options *EfsVolumeOptions) EfsVolume

Creates a Volume that uses an AWS Elastic File System (EFS);

this volume can grow and shrink as needed. See: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html

func NewEfsVolume added in v2.96.0

func NewEfsVolume(options *EfsVolumeOptions) EfsVolume

type EfsVolumeOptions added in v2.96.0

type EfsVolumeOptions struct {
	// the path on the container where this volume is mounted.
	ContainerPath *string `field:"required" json:"containerPath" yaml:"containerPath"`
	// the name of this volume.
	Name *string `field:"required" json:"name" yaml:"name"`
	// if set, the container will have readonly access to the volume.
	// Default: false.
	//
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
	// The EFS File System that supports this volume.
	FileSystem awsefs.IFileSystem `field:"required" json:"fileSystem" yaml:"fileSystem"`
	// The Amazon EFS access point ID to use.
	//
	// If an access point is specified, `rootDirectory` must either be omitted or set to `/`
	// which enforces the path set on the EFS access point.
	// If an access point is used, `enableTransitEncryption` must be `true`.
	// See: https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html
	//
	// Default: - no accessPointId.
	//
	AccessPointId *string `field:"optional" json:"accessPointId" yaml:"accessPointId"`
	// Enables encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server.
	// See: https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html
	//
	// Default: false.
	//
	EnableTransitEncryption *bool `field:"optional" json:"enableTransitEncryption" yaml:"enableTransitEncryption"`
	// The directory within the Amazon EFS file system to mount as the root directory inside the host.
	//
	// If this parameter is omitted, the root of the Amazon EFS volume is used instead.
	// Specifying `/` has the same effect as omitting this parameter.
	// The maximum length is 4,096 characters.
	// Default: - root of the EFS File System.
	//
	RootDirectory *string `field:"optional" json:"rootDirectory" yaml:"rootDirectory"`
	// The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server.
	//
	// The value must be between 0 and 65,535.
	// See: https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html
	//
	// Default: - chosen by the EFS Mount Helper.
	//
	TransitEncryptionPort *float64 `field:"optional" json:"transitEncryptionPort" yaml:"transitEncryptionPort"`
	// Whether or not to use the AWS Batch job IAM role defined in a job definition when mounting the Amazon EFS file system.
	//
	// If specified, `enableTransitEncryption` must be `true`.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/efs-volumes.html#efs-volume-accesspoints
	//
	// Default: false.
	//
	UseJobRole *bool `field:"optional" json:"useJobRole" yaml:"useJobRole"`
}

Options for configuring an EfsVolume.

Example:

var myFileSystem iFileSystem
var myJobRole role

myFileSystem.GrantRead(myJobRole)

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Volumes: []EcsVolume{
			batch.EcsVolume_Efs(&EfsVolumeOptions{
				Name: jsii.String("myVolume"),
				FileSystem: myFileSystem,
				ContainerPath: jsii.String("/Volumes/myVolume"),
				UseJobRole: jsii.Boolean(true),
			}),
		},
		JobRole: myJobRole,
	}),
})

type EksContainerDefinition added in v2.96.0

type EksContainerDefinition interface {
	constructs.Construct
	IEksContainerDefinition
	// An array of arguments to the entrypoint.
	//
	// If this isn't specified, the CMD of the container image is used.
	// This corresponds to the args member in the Entrypoint portion of the Pod in Kubernetes.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to "$(NAME1)" and the NAME1 environment variable doesn't exist,
	// the command string will remain "$(NAME1)". $$ is replaced with $, and the resulting string isn't expanded.
	// For example, $$(VAR_NAME) is passed as $(VAR_NAME) whether or not the VAR_NAME environment variable exists.
	Args() *[]*string
	// The entrypoint for the container.
	//
	// This isn't run within a shell.
	// If this isn't specified, the `ENTRYPOINT` of the container image is used.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to `"$(NAME1)"` and the `NAME1` environment variable doesn't exist,
	// the command string will remain `"$(NAME1)"`. `$$` is replaced with `$`, and the resulting string isn't expanded.
	// For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists.
	//
	// The entrypoint can't be updated.
	Command() *[]*string
	// The hard limit of CPUs to present to this container. Must be an even multiple of 0.25.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	CpuLimit() *float64
	// The soft limit of CPUs to reserve for the container. Must be an even multiple of 0.25.
	//
	// The container will be given at least this many CPUs, but may consume more.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	CpuReservation() *float64
	// The environment variables to pass to this container.
	//
	// *Note*: Environment variables cannot start with "AWS_BATCH".
	// This naming convention is reserved for variables that AWS Batch sets.
	Env() *map[string]*string
	// The hard limit of GPUs to present to this container.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	GpuLimit() *float64
	// The soft limit of GPUs to reserve for the container.
	//
	// The container will be given at least this many GPUs, but may consume more.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	GpuReservation() *float64
	// The image that this container will run.
	Image() awsecs.ContainerImage
	// The image pull policy for this container.
	ImagePullPolicy() ImagePullPolicy
	// The amount (in MiB) of memory to present to the container.
	//
	// If your container attempts to exceed the allocated memory, it will be terminated.
	//
	// Must be larger than 4 MiB.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	MemoryLimit() awscdk.Size
	// The soft limit (in MiB) of memory to reserve for the container.
	//
	// Your container will be given at least this much memory, but may consume more.
	//
	// Must be larger than 4 MiB.
	//
	// When system memory is under heavy contention, Docker attempts to keep the
	// container memory to this soft limit. However, your container can consume more
	// memory when it needs to, up to either the hard limit specified with the memory
	// parameter (if applicable), or all of the available memory on the container
	// instance, whichever comes first.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required.
	// If both are specified, then `memoryLimit` must be equal to `memoryReservation`
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	MemoryReservation() awscdk.Size
	// The name of this container.
	Name() *string
	// The tree node.
	Node() constructs.Node
	// If specified, gives this container elevated permissions on the host container instance.
	//
	// The level of permissions is similar to the root user's permissions.
	//
	// This parameter maps to `privileged` policy in the Privileged pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	Privileged() *bool
	// If specified, gives this container readonly access to its root file system.
	//
	// This parameter maps to `ReadOnlyRootFilesystem` policy in the Volumes and file systems pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	ReadonlyRootFilesystem() *bool
	// If specified, the container is run as the specified group ID (`gid`).
	//
	// If this parameter isn't specified, the default is the group that's specified in the image metadata.
	// This parameter maps to `RunAsGroup` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	RunAsGroup() *float64
	// If specified, the container is run as a user with a `uid` other than 0.
	//
	// Otherwise, no such rule is enforced.
	// This parameter maps to `RunAsUser` and `MustRunAsNonRoot` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	RunAsRoot() *bool
	// If specified, this container is run as the specified user ID (`uid`).
	//
	// This parameter maps to `RunAsUser` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	RunAsUser() *float64
	// The Volumes to mount to this container.
	//
	// Automatically added to the Pod.
	Volumes() *[]EksVolume
	// Mount a Volume to this container.
	//
	// Automatically added to the Pod.
	AddVolume(volume EksVolume)
	// Returns a string representation of this construct.
	ToString() *string
}

A container that can be run with EKS orchestration on EC2 resources.

Example:

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

func NewEksContainerDefinition added in v2.96.0

func NewEksContainerDefinition(scope constructs.Construct, id *string, props *EksContainerDefinitionProps) EksContainerDefinition

type EksContainerDefinitionProps added in v2.96.0

type EksContainerDefinitionProps struct {
	// The image that this container will run.
	Image awsecs.ContainerImage `field:"required" json:"image" yaml:"image"`
	// An array of arguments to the entrypoint.
	//
	// If this isn't specified, the CMD of the container image is used.
	// This corresponds to the args member in the Entrypoint portion of the Pod in Kubernetes.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to "$(NAME1)" and the NAME1 environment variable doesn't exist,
	// the command string will remain "$(NAME1)". $$ is replaced with $, and the resulting string isn't expanded.
	// For example, $$(VAR_NAME) is passed as $(VAR_NAME) whether or not the VAR_NAME environment variable exists.
	// See: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
	//
	// Default: - no args.
	//
	Args *[]*string `field:"optional" json:"args" yaml:"args"`
	// The entrypoint for the container.
	//
	// This isn't run within a shell.
	// If this isn't specified, the `ENTRYPOINT` of the container image is used.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to `"$(NAME1)"` and the `NAME1` environment variable doesn't exist,
	// the command string will remain `"$(NAME1)"`. `$$` is replaced with `$`, and the resulting string isn't expanded.
	// For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists.
	//
	// The entrypoint can't be updated.
	// See: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint
	//
	// Default: - no command.
	//
	Command *[]*string `field:"optional" json:"command" yaml:"command"`
	// The hard limit of CPUs to present to this container. Must be an even multiple of 0.25.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No CPU limit.
	//
	CpuLimit *float64 `field:"optional" json:"cpuLimit" yaml:"cpuLimit"`
	// The soft limit of CPUs to reserve for the container. Must be an even multiple of 0.25.
	//
	// The container will be given at least this many CPUs, but may consume more.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No CPUs reserved.
	//
	CpuReservation *float64 `field:"optional" json:"cpuReservation" yaml:"cpuReservation"`
	// The environment variables to pass to this container.
	//
	// *Note*: Environment variables cannot start with "AWS_BATCH".
	// This naming convention is reserved for variables that AWS Batch sets.
	// Default: - no environment variables.
	//
	Env *map[string]*string `field:"optional" json:"env" yaml:"env"`
	// The hard limit of GPUs to present to this container.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No GPU limit.
	//
	GpuLimit *float64 `field:"optional" json:"gpuLimit" yaml:"gpuLimit"`
	// The soft limit of GPUs to reserve for the container.
	//
	// The container will be given at least this many GPUs, but may consume more.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No GPUs reserved.
	//
	GpuReservation *float64 `field:"optional" json:"gpuReservation" yaml:"gpuReservation"`
	// The image pull policy for this container.
	// See: https://kubernetes.io/docs/concepts/containers/images/#updating-images
	//
	// Default: - `ALWAYS` if the `:latest` tag is specified, `IF_NOT_PRESENT` otherwise.
	//
	ImagePullPolicy ImagePullPolicy `field:"optional" json:"imagePullPolicy" yaml:"imagePullPolicy"`
	// The amount (in MiB) of memory to present to the container.
	//
	// If your container attempts to exceed the allocated memory, it will be terminated.
	//
	// Must be larger than 4 MiB.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html
	//
	// Default: - No memory limit.
	//
	MemoryLimit awscdk.Size `field:"optional" json:"memoryLimit" yaml:"memoryLimit"`
	// The soft limit (in MiB) of memory to reserve for the container.
	//
	// Your container will be given at least this much memory, but may consume more.
	//
	// Must be larger than 4 MiB.
	//
	// When system memory is under heavy contention, Docker attempts to keep the
	// container memory to this soft limit. However, your container can consume more
	// memory when it needs to, up to either the hard limit specified with the memory
	// parameter (if applicable), or all of the available memory on the container
	// instance, whichever comes first.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required.
	// If both are specified, then `memoryLimit` must be equal to `memoryReservation`
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html
	//
	// Default: - No memory reserved.
	//
	MemoryReservation awscdk.Size `field:"optional" json:"memoryReservation" yaml:"memoryReservation"`
	// The name of this container.
	// Default: `'Default'`.
	//
	Name *string `field:"optional" json:"name" yaml:"name"`
	// If specified, gives this container elevated permissions on the host container instance.
	//
	// The level of permissions is similar to the root user's permissions.
	//
	// This parameter maps to `privileged` policy in the Privileged pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems
	//
	// Default: false.
	//
	Privileged *bool `field:"optional" json:"privileged" yaml:"privileged"`
	// If specified, gives this container readonly access to its root file system.
	//
	// This parameter maps to `ReadOnlyRootFilesystem` policy in the Volumes and file systems pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems
	//
	// Default: false.
	//
	ReadonlyRootFilesystem *bool `field:"optional" json:"readonlyRootFilesystem" yaml:"readonlyRootFilesystem"`
	// If specified, the container is run as the specified group ID (`gid`).
	//
	// If this parameter isn't specified, the default is the group that's specified in the image metadata.
	// This parameter maps to `RunAsGroup` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: none.
	//
	RunAsGroup *float64 `field:"optional" json:"runAsGroup" yaml:"runAsGroup"`
	// If specified, the container is run as a user with a `uid` other than 0.
	//
	// Otherwise, no such rule is enforced.
	// This parameter maps to `RunAsUser` and `MustRunAsNonRoot` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: - the container is *not* required to run as a non-root user.
	//
	RunAsRoot *bool `field:"optional" json:"runAsRoot" yaml:"runAsRoot"`
	// If specified, this container is run as the specified user ID (`uid`).
	//
	// This parameter maps to `RunAsUser` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: - the user that is specified in the image metadata.
	//
	RunAsUser *float64 `field:"optional" json:"runAsUser" yaml:"runAsUser"`
	// The Volumes to mount to this container.
	//
	// Automatically added to the Pod.
	// See: https://kubernetes.io/docs/concepts/storage/volumes/
	//
	// Default: - no volumes.
	//
	Volumes *[]EksVolume `field:"optional" json:"volumes" yaml:"volumes"`
}

Props to configure an EksContainerDefinition.

Example:

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

type EksJobDefinition added in v2.96.0

type EksJobDefinition interface {
	awscdk.Resource
	IEksJobDefinition
	IJobDefinition
	// The container this Job Definition will run.
	Container() EksContainerDefinition
	// The DNS Policy of the pod used by this Job Definition.
	DnsPolicy() DnsPolicy
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// The ARN of this job definition.
	JobDefinitionArn() *string
	// The name of this job definition.
	JobDefinitionName() *string
	// The tree node.
	Node() constructs.Node
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	Parameters() *map[string]interface{}
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	PhysicalName() *string
	// The number of times to retry a job.
	//
	// A failed job is retried up to the number of attempts given by this value.
	RetryAttempts() *float64
	// Defines the retry behavior for this job.
	RetryStrategies() *[]RetryStrategy
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	SchedulingPriority() *float64
	// The name of the service account that's used to run the container.
	//
	// Service accounts are the Kubernetes method of identification and authentication,
	// roughly analogous to IAM users.
	ServiceAccount() *string
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	Timeout() awscdk.Duration
	// If specified, the Pod used by this Job Definition will use the host's network IP address.
	//
	// Otherwise, the Kubernetes pod networking model is enabled.
	// Most AWS Batch workloads are egress-only and don't require the overhead of IP allocation for each pod for incoming connections.
	UseHostNetwork() *bool
	// Add a RetryStrategy to this JobDefinition.
	AddRetryStrategy(strategy RetryStrategy)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	ToString() *string
}

A JobDefinition that uses EKS orchestration.

Example:

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

func NewEksJobDefinition added in v2.96.0

func NewEksJobDefinition(scope constructs.Construct, id *string, props *EksJobDefinitionProps) EksJobDefinition

type EksJobDefinitionProps added in v2.96.0

type EksJobDefinitionProps struct {
	// The name of this job definition.
	// Default: - generated by CloudFormation.
	//
	JobDefinitionName *string `field:"optional" json:"jobDefinitionName" yaml:"jobDefinitionName"`
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters
	//
	// Default: none.
	//
	Parameters *map[string]interface{} `field:"optional" json:"parameters" yaml:"parameters"`
	// The number of times to retry a job.
	//
	// A failed job is retried up to the number of attempts given by this value.
	// Default: 1.
	//
	RetryAttempts *float64 `field:"optional" json:"retryAttempts" yaml:"retryAttempts"`
	// Defines the retry behavior for this job.
	// Default: - no `RetryStrategy`.
	//
	RetryStrategies *[]RetryStrategy `field:"optional" json:"retryStrategies" yaml:"retryStrategies"`
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Default: none.
	//
	SchedulingPriority *float64 `field:"optional" json:"schedulingPriority" yaml:"schedulingPriority"`
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Default: - no timeout.
	//
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
	// The container this Job Definition will run.
	Container EksContainerDefinition `field:"required" json:"container" yaml:"container"`
	// The DNS Policy of the pod used by this Job Definition.
	// See: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
	//
	// Default: `DnsPolicy.CLUSTER_FIRST`
	//
	DnsPolicy DnsPolicy `field:"optional" json:"dnsPolicy" yaml:"dnsPolicy"`
	// The name of the service account that's used to run the container.
	//
	// Service accounts are the Kubernetes method of identification and authentication,
	// roughly analogous to IAM users.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html
	//
	// Default: - the default service account of the container.
	//
	ServiceAccount *string `field:"optional" json:"serviceAccount" yaml:"serviceAccount"`
	// If specified, the Pod used by this Job Definition will use the host's network IP address.
	//
	// Otherwise, the Kubernetes pod networking model is enabled.
	// Most AWS Batch workloads are egress-only and don't require the overhead of IP allocation for each pod for incoming connections.
	// See: https://kubernetes.io/docs/concepts/workloads/pods/#pod-networking
	//
	// Default: true.
	//
	UseHostNetwork *bool `field:"optional" json:"useHostNetwork" yaml:"useHostNetwork"`
}

Props for EksJobDefinition.

Example:

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

type EksMachineImage added in v2.96.0

type EksMachineImage struct {
	// The machine image to use.
	// Default: - chosen by Batch.
	//
	Image awsec2.IMachineImage `field:"optional" json:"image" yaml:"image"`
	// Tells Batch which instance type to launch this image on.
	// Default: - 'EKS_AL2' for non-gpu instances, 'EKS_AL2_NVIDIA' for gpu instances.
	//
	ImageType EksMachineImageType `field:"optional" json:"imageType" yaml:"imageType"`
}

A Batch MachineImage that is compatible with EKS.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var machineImage awsec2.IMachineImage

eksMachineImage := &EksMachineImage{
	Image: machineImage,
	ImageType: awscdk.Aws_batch.EksMachineImageType_EKS_AL2,
}

type EksMachineImageType added in v2.96.0

type EksMachineImageType string

Maps the image to instance types.

const (
	// Tells Batch that this machine image runs on non-GPU instances.
	EksMachineImageType_EKS_AL2 EksMachineImageType = "EKS_AL2"
	// Tells Batch that this machine image runs on GPU instances.
	EksMachineImageType_EKS_AL2_NVIDIA EksMachineImageType = "EKS_AL2_NVIDIA"
)

type EksVolume added in v2.96.0

type EksVolume interface {
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	ContainerPath() *string
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	Name() *string
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	Readonly() *bool
}

A Volume that can be mounted to a container supported by EKS.

Example:

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

type EksVolumeOptions added in v2.96.0

type EksVolumeOptions struct {
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	Name *string `field:"required" json:"name" yaml:"name"`
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	MountPath *string `field:"optional" json:"mountPath" yaml:"mountPath"`
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
}

Options to configure an EksVolume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

eksVolumeOptions := &EksVolumeOptions{
	Name: jsii.String("name"),

	// the properties below are optional
	MountPath: jsii.String("mountPath"),
	Readonly: jsii.Boolean(false),
}
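As noted above, a volume's `Name` must be a valid DNS subdomain name. A minimal standalone validation sketch in plain Go (not part of this module; the regex follows the RFC 1123 rules referenced by the Kubernetes docs, and the per-label 63-character check is omitted for brevity):

```go
package main

import (
	"fmt"
	"regexp"
)

// dnsSubdomain matches RFC 1123 subdomain names: lowercase alphanumeric
// labels that may contain '-', joined by '.'. The overall 253-character
// limit is checked separately; per-label length is not enforced here.
var dnsSubdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

func isDNSSubdomain(name string) bool {
	return len(name) > 0 && len(name) <= 253 && dnsSubdomain.MatchString(name)
}

func main() {
	fmt.Println(isDNSSubdomain("my-empty-dir-volume")) // valid
	fmt.Println(isDNSSubdomain("My_Volume"))           // invalid: uppercase and underscore
}
```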

type EmptyDirMediumType added in v2.96.0

type EmptyDirMediumType string

What medium the volume will live in.

Example:

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

const (
	// Use the disk storage of the node.
	//
	// Items written here will survive node reboots.
	EmptyDirMediumType_DISK EmptyDirMediumType = "DISK"
	// Use the `tmpfs` volume that is backed by RAM of the node.
	//
	// Items written here will *not* survive node reboots.
	EmptyDirMediumType_MEMORY EmptyDirMediumType = "MEMORY"
)

type EmptyDirVolume added in v2.96.0

type EmptyDirVolume interface {
	EksVolume
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	ContainerPath() *string
	// The storage type to use for this Volume.
	// Default: `EmptyDirMediumType.DISK`
	//
	Medium() EmptyDirMediumType
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	Name() *string
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	Readonly() *bool
	// The maximum size for this Volume.
	// Default: - no size limit.
	//
	SizeLimit() awscdk.Size
}

A Kubernetes EmptyDir volume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var size cdk.Size

emptyDirVolume := awscdk.Aws_batch.NewEmptyDirVolume(&EmptyDirVolumeOptions{
	Name: jsii.String("name"),

	// the properties below are optional
	Medium: awscdk.Aws_batch.EmptyDirMediumType_DISK,
	MountPath: jsii.String("mountPath"),
	Readonly: jsii.Boolean(false),
	SizeLimit: size,
})

See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

func EksVolume_EmptyDir added in v2.96.0

func EksVolume_EmptyDir(options *EmptyDirVolumeOptions) EmptyDirVolume

Creates a Kubernetes EmptyDir volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

func EmptyDirVolume_EmptyDir added in v2.96.0

func EmptyDirVolume_EmptyDir(options *EmptyDirVolumeOptions) EmptyDirVolume

Creates a Kubernetes EmptyDir volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

func HostPathVolume_EmptyDir added in v2.96.0

func HostPathVolume_EmptyDir(options *EmptyDirVolumeOptions) EmptyDirVolume

Creates a Kubernetes EmptyDir volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

func NewEmptyDirVolume added in v2.96.0

func NewEmptyDirVolume(options *EmptyDirVolumeOptions) EmptyDirVolume

func SecretPathVolume_EmptyDir added in v2.96.0

func SecretPathVolume_EmptyDir(options *EmptyDirVolumeOptions) EmptyDirVolume

Creates a Kubernetes EmptyDir volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

type EmptyDirVolumeOptions added in v2.96.0

type EmptyDirVolumeOptions struct {
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	Name *string `field:"required" json:"name" yaml:"name"`
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	MountPath *string `field:"optional" json:"mountPath" yaml:"mountPath"`
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
	// The storage type to use for this Volume.
	// Default: `EmptyDirMediumType.DISK`
	//
	Medium EmptyDirMediumType `field:"optional" json:"medium" yaml:"medium"`
	// The maximum size for this Volume.
	// Default: - no size limit.
	//
	SizeLimit awscdk.Size `field:"optional" json:"sizeLimit" yaml:"sizeLimit"`
}

Options for a Kubernetes EmptyDir volume.

Example:

jobDefn := batch.NewEksJobDefinition(this, jsii.String("eksf2"), &EksJobDefinitionProps{
	Container: batch.NewEksContainerDefinition(this, jsii.String("container"), &EksContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Volumes: []batch.EksVolume{
			batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
				Name: jsii.String("myEmptyDirVolume"),
				MountPath: jsii.String("/mount/path"),
				Medium: batch.EmptyDirMediumType_MEMORY,
				Readonly: jsii.Boolean(true),
				SizeLimit: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
		},
	}),
})

See: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

type FairshareSchedulingPolicy added in v2.96.0

type FairshareSchedulingPolicy interface {
	awscdk.Resource
	IFairshareSchedulingPolicy
	ISchedulingPolicy
	// Used to calculate the percentage of the maximum available vCPU to reserve for share identifiers not present in the Queue.
	//
	// The percentage reserved is defined by the Scheduler as:
	// `(computeReservation/100)^ActiveFairShares` where `ActiveFairShares` is the number of active fair share identifiers.
	//
	// For example, a computeReservation value of 50 indicates that AWS Batch reserves 50% of the
	// maximum available vCPU if there's only one fair share identifier.
	// It reserves 25% if there are two fair share identifiers.
	// It reserves 12.5% if there are three fair share identifiers.
	//
	// A computeReservation value of 25 indicates that AWS Batch should reserve 25% of the
	// maximum available vCPU if there's only one fair share identifier,
	// 6.25% if there are two fair share identifiers,
	// and 1.56% if there are three fair share identifiers.
	ComputeReservation() *float64
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// The tree node.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	PhysicalName() *string
	// The ARN of this scheduling policy.
	SchedulingPolicyArn() *string
	// The name of this scheduling policy.
	SchedulingPolicyName() *string
	// The amount of time to use to measure the usage of each job.
	//
	// The usage is used to calculate a fair share percentage for each fair share identifier currently in the Queue.
	// A value of zero (0) indicates that only current usage is measured.
	// The decay is linear and gives preference to newer jobs.
	//
	// The maximum supported value is 604800 seconds (1 week).
	ShareDecay() awscdk.Duration
	// The shares that this Scheduling Policy applies to.
	//
	// *Note*: It is possible to submit Jobs to the queue with Share Identifiers that
	// are not recognized by the Scheduling Policy.
	Shares() *[]*Share
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// Add a share to this Fairshare SchedulingPolicy.
	AddShare(share *Share)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	ToString() *string
}

Represents a Fairshare Scheduling Policy. Instructs the scheduler to allocate ComputeEnvironment vCPUs based on Job shareIdentifiers.

The Fairshare Scheduling Policy ensures that each share gets a certain amount of vCPUs. The scheduler does this by deciding how many Jobs of each share to schedule *relative to how many jobs of each share are currently being executed by the ComputeEnvironment*. The weight factors associated with each share determine the ratio of vCPUs allocated; see the README for a more in-depth discussion of fairshare policies.

Example:

fairsharePolicy := batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"))

fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("A"),
	WeightFactor: jsii.Number(1),
})
fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("B"),
	WeightFactor: jsii.Number(1),
})
batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	SchedulingPolicy: fairsharePolicy,
})
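The `computeReservation` behavior documented above follows the formula `(computeReservation/100)^ActiveFairShares`. A short standalone sketch in plain Go (not part of this module) reproduces the percentages quoted in the docs:

```go
package main

import (
	"fmt"
	"math"
)

// reservedVCPUPercent returns the percentage of the maximum available vCPU
// that the scheduler holds back for share identifiers not present in the
// queue: (computeReservation/100)^activeFairShares, as a percentage.
func reservedVCPUPercent(computeReservation float64, activeFairShares int) float64 {
	return math.Pow(computeReservation/100, float64(activeFairShares)) * 100
}

func main() {
	// computeReservation=50 reserves 50%, 25%, and 12.5% of the maximum
	// vCPU for 1, 2, and 3 active fair share identifiers respectively.
	for n := 1; n <= 3; n++ {
		fmt.Printf("computeReservation=50, activeFairShares=%d -> %.2f%%\n", n, reservedVCPUPercent(50, n))
	}
}
```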

func NewFairshareSchedulingPolicy added in v2.96.0

func NewFairshareSchedulingPolicy(scope constructs.Construct, id *string, props *FairshareSchedulingPolicyProps) FairshareSchedulingPolicy

type FairshareSchedulingPolicyProps added in v2.96.0

type FairshareSchedulingPolicyProps struct {
	// Used to calculate the percentage of the maximum available vCPU to reserve for share identifiers not present in the Queue.
	//
	// The percentage reserved is defined by the Scheduler as:
	// `(computeReservation/100)^ActiveFairShares` where `ActiveFairShares` is the number of active fair share identifiers.
	//
	// For example, a computeReservation value of 50 indicates that AWS Batch reserves 50% of the
	// maximum available vCPU if there's only one fair share identifier.
	// It reserves 25% if there are two fair share identifiers.
	// It reserves 12.5% if there are three fair share identifiers.
	//
	// A computeReservation value of 25 indicates that AWS Batch should reserve 25% of the
	// maximum available vCPU if there's only one fair share identifier,
	// 6.25% if there are two fair share identifiers,
	// and 1.56% if there are three fair share identifiers.
	// Default: - no vCPU is reserved.
	//
	ComputeReservation *float64 `field:"optional" json:"computeReservation" yaml:"computeReservation"`
	// The name of this SchedulingPolicy.
	// Default: - generated by CloudFormation.
	//
	SchedulingPolicyName *string `field:"optional" json:"schedulingPolicyName" yaml:"schedulingPolicyName"`
	// The amount of time to use to measure the usage of each job.
	//
	// The usage is used to calculate a fair share percentage for each fair share identifier currently in the Queue.
	// A value of zero (0) indicates that only current usage is measured.
	// The decay is linear and gives preference to newer jobs.
	//
	// The maximum supported value is 604800 seconds (1 week).
	// Default: - 0: only the current job usage is considered.
	//
	ShareDecay awscdk.Duration `field:"optional" json:"shareDecay" yaml:"shareDecay"`
	// The shares that this Scheduling Policy applies to.
	//
	// *Note*: It is possible to submit Jobs to the queue with Share Identifiers that
	// are not recognized by the Scheduling Policy.
	// Default: - no shares.
	//
	Shares *[]*Share `field:"optional" json:"shares" yaml:"shares"`
}

Fairshare SchedulingPolicy configuration.

Example:

fairsharePolicy := batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"), &FairshareSchedulingPolicyProps{
	ShareDecay: cdk.Duration_Minutes(jsii.Number(5)),
})

type FargateComputeEnvironment added in v2.96.0

type FargateComputeEnvironment interface {
	awscdk.Resource
	IComputeEnvironment
	IFargateComputeEnvironment
	IManagedComputeEnvironment
	// The ARN of this compute environment.
	ComputeEnvironmentArn() *string
	// The name of the ComputeEnvironment.
	ComputeEnvironmentName() *string
	// The network connections associated with this resource.
	Connections() awsec2.Connections
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	Enabled() *bool
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// The scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	MaxvCpus() *float64
	// The tree node.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	PhysicalName() *string
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// Certain property changes require the Compute Environment to be replaced.
	ReplaceComputeEnvironment() *bool
	// The security groups this Compute Environment will launch instances in.
	SecurityGroups() *[]awsec2.ISecurityGroup
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	ServiceRole() awsiam.IRole
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	Spot() *bool
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// TagManager to set, remove and format tags.
	Tags() awscdk.TagManager
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	TerminateOnUpdate() *bool
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// If so,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	UpdateTimeout() awscdk.Duration
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	//
	// Note: the CDK never sets this value by default; when unset, CloudFormation defaults it to `false`.
	// This is to avoid a deployment failure that occurs when this value is set.
	UpdateToLatestImageVersion() *bool
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	ToString() *string
}

A ManagedComputeEnvironment that uses ECS orchestration on Fargate instances.

Example:

var vpc ec2.IVpc

sharedComputeEnv := batch.NewFargateComputeEnvironment(this, jsii.String("spotEnv"), &FargateComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
})
lowPriorityQueue := batch.NewJobQueue(this, jsii.String("lowPriorityQueue"), &JobQueueProps{
	Priority: jsii.Number(1),
})
highPriorityQueue := batch.NewJobQueue(this, jsii.String("highPriorityQueue"), &JobQueueProps{
	Priority: jsii.Number(10),
})
lowPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))
highPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))

func NewFargateComputeEnvironment added in v2.96.0

func NewFargateComputeEnvironment(scope constructs.Construct, id *string, props *FargateComputeEnvironmentProps) FargateComputeEnvironment

type FargateComputeEnvironmentProps added in v2.96.0

type FargateComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// VPC in which this Compute Environment will launch Instances.
	Vpc awsec2.IVpc `field:"required" json:"vpc" yaml:"vpc"`
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to. Each vCPU is equivalent to 1024 CPU shares.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// The scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Default: 256.
	//
	MaxvCpus *float64 `field:"optional" json:"maxvCpus" yaml:"maxvCpus"`
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// See the AWS Batch documentation for the properties which require a replacement of the Compute Environment.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
	//
	// Default: false.
	//
	ReplaceComputeEnvironment *bool `field:"optional" json:"replaceComputeEnvironment" yaml:"replaceComputeEnvironment"`
	// The security groups this Compute Environment will launch instances in.
	// Default: new security groups will be created.
	//
	SecurityGroups *[]awsec2.ISecurityGroup `field:"optional" json:"securityGroups" yaml:"securityGroups"`
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Default: false.
	//
	Spot *bool `field:"optional" json:"spot" yaml:"spot"`
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: false.
	//
	TerminateOnUpdate *bool `field:"optional" json:"terminateOnUpdate" yaml:"terminateOnUpdate"`
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// If so,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: 30 minutes.
	//
	UpdateTimeout awscdk.Duration `field:"optional" json:"updateTimeout" yaml:"updateTimeout"`
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Default: true.
	//
	UpdateToLatestImageVersion *bool `field:"optional" json:"updateToLatestImageVersion" yaml:"updateToLatestImageVersion"`
	// The VPC Subnets this Compute Environment will launch instances in.
	// Default: new subnets will be created.
	//
	VpcSubnets *awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
}

Props for a FargateComputeEnvironment.

Example:

var vpc iVpc

sharedComputeEnv := batch.NewFargateComputeEnvironment(this, jsii.String("spotEnv"), &FargateComputeEnvironmentProps{
	Vpc: vpc,
	Spot: jsii.Boolean(true),
})
lowPriorityQueue := batch.NewJobQueue(this, jsii.String("LowPriorityQueue"), &JobQueueProps{
	Priority: jsii.Number(1),
})
highPriorityQueue := batch.NewJobQueue(this, jsii.String("HighPriorityQueue"), &JobQueueProps{
	Priority: jsii.Number(10),
})
lowPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))
highPriorityQueue.AddComputeEnvironment(sharedComputeEnv, jsii.Number(1))

type HostPathVolume added in v2.96.0

type HostPathVolume interface {
	EksVolume
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	ContainerPath() *string
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	Name() *string
	// The path of the file or directory on the host to mount into containers on the pod.
	//
	// *Note*: HostPath volumes present many security risks, and should be avoided when possible.
	// See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
	//
	Path() *string
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	Readonly() *bool
}

A Kubernetes HostPath volume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

hostPathVolume := awscdk.Aws_batch.NewHostPathVolume(&HostPathVolumeOptions{
	HostPath: jsii.String("hostPath"),
	Name: jsii.String("name"),

	// the properties below are optional
	MountPath: jsii.String("mountPath"),
	Readonly: jsii.Boolean(false),
})

See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

func EksVolume_HostPath added in v2.96.0

func EksVolume_HostPath(options *HostPathVolumeOptions) HostPathVolume

Creates a Kubernetes HostPath volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

func EmptyDirVolume_HostPath added in v2.96.0

func EmptyDirVolume_HostPath(options *HostPathVolumeOptions) HostPathVolume

Creates a Kubernetes HostPath volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

func HostPathVolume_HostPath added in v2.96.0

func HostPathVolume_HostPath(options *HostPathVolumeOptions) HostPathVolume

Creates a Kubernetes HostPath volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

func NewHostPathVolume added in v2.96.0

func NewHostPathVolume(options *HostPathVolumeOptions) HostPathVolume

func SecretPathVolume_HostPath added in v2.96.0

func SecretPathVolume_HostPath(options *HostPathVolumeOptions) HostPathVolume

Creates a Kubernetes HostPath volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

type HostPathVolumeOptions added in v2.96.0

type HostPathVolumeOptions struct {
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	Name *string `field:"required" json:"name" yaml:"name"`
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	MountPath *string `field:"optional" json:"mountPath" yaml:"mountPath"`
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
	// The path of the file or directory on the host to mount into containers on the pod.
	//
	// *Note*: HostPath volumes present many security risks, and should be avoided when possible.
	// See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
	//
	HostPath *string `field:"required" json:"hostPath" yaml:"hostPath"`
}

Options for a kubernetes HostPath volume.

Example:

var jobDefn eksJobDefinition

jobDefn.Container.AddVolume(batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
	Name: jsii.String("emptyDir"),
	MountPath: jsii.String("/Volumes/emptyDir"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_HostPath(&HostPathVolumeOptions{
	Name: jsii.String("hostPath"),
	HostPath: jsii.String("/sys"),
	MountPath: jsii.String("/Volumes/hostPath"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_Secret(&SecretPathVolumeOptions{
	Name: jsii.String("secret"),
	Optional: jsii.Boolean(true),
	MountPath: jsii.String("/Volumes/secret"),
	SecretName: jsii.String("mySecret"),
}))

See: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath

type HostVolume added in v2.96.0

type HostVolume interface {
	EcsVolume
	// The path on the container that this volume will be mounted to.
	ContainerPath() *string
	// The path on the host machine this container will have access to.
	HostPath() *string
	// The name of this volume.
	Name() *string
	// Whether or not the container has readonly access to this volume.
	// Default: false.
	//
	Readonly() *bool
}

Creates a Host volume.

This volume will persist on the host at the specified `hostPath`. If the `hostPath` is not specified, Docker will choose the host path. In this case, the data may not persist after the containers that use it stop running.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

hostVolume := awscdk.Aws_batch.NewHostVolume(&HostVolumeOptions{
	ContainerPath: jsii.String("containerPath"),
	Name: jsii.String("name"),

	// the properties below are optional
	HostPath: jsii.String("hostPath"),
	Readonly: jsii.Boolean(false),
})

func EcsVolume_Host added in v2.96.0

func EcsVolume_Host(options *HostVolumeOptions) HostVolume

Creates a Host volume.

This volume will persist on the host at the specified `hostPath`. If the `hostPath` is not specified, Docker will choose the host path. In this case, the data may not persist after the containers that use it stop running.

func EfsVolume_Host added in v2.96.0

func EfsVolume_Host(options *HostVolumeOptions) HostVolume

Creates a Host volume.

This volume will persist on the host at the specified `hostPath`. If the `hostPath` is not specified, Docker will choose the host path. In this case, the data may not persist after the containers that use it stop running.

func HostVolume_Host added in v2.96.0

func HostVolume_Host(options *HostVolumeOptions) HostVolume

Creates a Host volume.

This volume will persist on the host at the specified `hostPath`. If the `hostPath` is not specified, Docker will choose the host path. In this case, the data may not persist after the containers that use it stop running.

func NewHostVolume added in v2.96.0

func NewHostVolume(options *HostVolumeOptions) HostVolume

type HostVolumeOptions added in v2.96.0

type HostVolumeOptions struct {
	// the path on the container where this volume is mounted.
	ContainerPath *string `field:"required" json:"containerPath" yaml:"containerPath"`
	// the name of this volume.
	Name *string `field:"required" json:"name" yaml:"name"`
	// if set, the container will have readonly access to the volume.
	// Default: false.
	//
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
	// The path on the host machine this container will have access to.
	// Default: - Docker will choose the host path.
	// The data may not persist after the containers that use it stop running.
	//
	HostPath *string `field:"optional" json:"hostPath" yaml:"hostPath"`
}

Options for configuring an ECS HostVolume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

hostVolumeOptions := &HostVolumeOptions{
	ContainerPath: jsii.String("containerPath"),
	Name: jsii.String("name"),

	// the properties below are optional
	HostPath: jsii.String("hostPath"),
	Readonly: jsii.Boolean(false),
}

type IComputeEnvironment added in v2.96.0

type IComputeEnvironment interface {
	awscdk.IResource
	// The ARN of this compute environment.
	ComputeEnvironmentArn() *string
	// The name of the ComputeEnvironment.
	ComputeEnvironmentName() *string
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	Enabled() *bool
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	ServiceRole() awsiam.IRole
}

Represents a ComputeEnvironment.

type IEcsContainerDefinition added in v2.96.0

type IEcsContainerDefinition interface {
	constructs.IConstruct
	// Add a Volume to this container.
	AddVolume(volume EcsVolume)
	// The command that's passed to the container.
	// See: https://docs.docker.com/engine/reference/builder/#cmd
	//
	Command() *[]*string
	// The number of vCPUs reserved for the container.
	//
	// Each vCPU is equivalent to 1,024 CPU shares.
	// For containers running on EC2 resources, you must specify at least one vCPU.
	Cpu() *float64
	// The environment variables to pass to a container.
	//
	// Cannot start with `AWS_BATCH`.
	// We don't recommend using plaintext environment variables for sensitive information, such as credential data.
	// Default: - no environment variables.
	//
	Environment() *map[string]*string
	// The role used by Amazon ECS container and AWS Fargate agents to make AWS API calls on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html
	//
	ExecutionRole() awsiam.IRole
	// The image that this container will run.
	Image() awsecs.ContainerImage
	// The role that the container can assume.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
	//
	// Default: - no jobRole.
	//
	JobRole() awsiam.IRole
	// Linux-specific modifications that are applied to the container, such as details for device mappings.
	// Default: none.
	//
	LinuxParameters() LinuxParameters
	// The configuration of the log driver.
	LogDriverConfig() *awsecs.LogDriverConfig
	// The memory hard limit presented to the container.
	//
	// If your container attempts to exceed the memory specified, the container is terminated.
	// You must specify at least 4 MiB of memory for a job.
	Memory() awscdk.Size
	// Gives the container readonly access to its root filesystem.
	// Default: false.
	//
	ReadonlyRootFilesystem() *bool
	// A map from environment variable names to the secrets for the container.
	//
	// Allows your job definitions
	// to reference the secret by the environment variable name defined in this property.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/specifying-sensitive-data.html
	//
	// Default: - no secrets.
	//
	Secrets() *map[string]Secret
	// The user name to use inside the container.
	// Default: - no user.
	//
	User() *string
	// The volumes to mount to this container.
	//
	// Automatically added to the job definition.
	// Default: - no volumes.
	//
	Volumes() *[]EcsVolume
}

A container that can be run with ECS orchestration.
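The resource rules documented above (at least one vCPU on EC2 resources, at least 4 MiB of memory per job, and each vCPU being equivalent to 1,024 CPU shares) can be sketched as a small plain-Go validation helper. This is an illustration of the documented constraints, not part of the CDK API; `checkEcsResources` is a hypothetical name:

```go
package main

import (
	"errors"
	"fmt"
)

// checkEcsResources mirrors the constraints documented on
// IEcsContainerDefinition: containers running on EC2 resources must
// specify at least one vCPU, every job needs at least 4 MiB of memory,
// and each vCPU is equivalent to 1,024 CPU shares.
func checkEcsResources(vcpus float64, memoryMiB float64, onEc2 bool) (cpuShares int, err error) {
	if onEc2 && vcpus < 1 {
		return 0, errors.New("containers on EC2 resources must specify at least one vCPU")
	}
	if memoryMiB < 4 {
		return 0, errors.New("you must specify at least 4 MiB of memory for a job")
	}
	return int(vcpus * 1024), nil
}

func main() {
	// 2 vCPUs translate to 2048 CPU shares.
	shares, err := checkEcsResources(2, 2048, true)
	fmt.Println(shares, err)
}
```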

type IEcsEc2ContainerDefinition added in v2.96.0

type IEcsEc2ContainerDefinition interface {
	IEcsContainerDefinition
	// Add a ulimit to this container.
	AddUlimit(ulimit *Ulimit)
	// The number of physical GPUs to reserve for the container.
	//
	// Make sure that the number of GPUs reserved for all containers in a job doesn't exceed
	// the number of available GPUs on the compute resource that the job is launched on.
	// Default: - no gpus.
	//
	Gpu() *float64
	// When this parameter is true, the container is given elevated permissions on the host container instance (similar to the root user).
	// Default: false.
	//
	Privileged() *bool
	// Limits to set for the user this docker container will run as.
	Ulimits() *[]*Ulimit
}

A container orchestrated by ECS that uses EC2 resources.

type IEcsFargateContainerDefinition added in v2.96.0

type IEcsFargateContainerDefinition interface {
	IEcsContainerDefinition
	// Indicates whether the job has a public IP address.
	//
	// For a job that's running on Fargate resources in a private subnet to send outbound traffic to the internet
	// (for example, to pull container images), the private subnet requires a NAT gateway be attached to route requests to the internet.
	// See: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
	//
	// Default: false.
	//
	AssignPublicIp() *bool
	// The size for ephemeral storage.
	// Default: - 20 GiB.
	//
	EphemeralStorageSize() awscdk.Size
	// Which version of Fargate to use when running this container.
	// Default: LATEST.
	//
	FargatePlatformVersion() awsecs.FargatePlatformVersion
}

A container orchestrated by ECS that uses Fargate resources.

type IEksContainerDefinition added in v2.96.0

type IEksContainerDefinition interface {
	constructs.IConstruct
	// Mount a Volume to this container.
	//
	// Automatically added to the Pod.
	AddVolume(volume EksVolume)
	// An array of arguments to the entrypoint.
	//
	// If this isn't specified, the CMD of the container image is used.
	// This corresponds to the args member in the Entrypoint portion of the Pod in Kubernetes.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to `"$(NAME1)"` and the `NAME1` environment variable doesn't exist,
	// the command string will remain `"$(NAME1)"`. `$$` is replaced with `$`, and the resulting string isn't expanded.
	// For example, `$$(VAR_NAME)` is passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists.
	// See: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
	//
	Args() *[]*string
	// The entrypoint for the container.
	//
	// This isn't run within a shell.
	// If this isn't specified, the `ENTRYPOINT` of the container image is used.
	// Environment variable references are expanded using the container's environment.
	// If the referenced environment variable doesn't exist, the reference in the command isn't changed.
	// For example, if the reference is to `"$(NAME1)"` and the `NAME1` environment variable doesn't exist,
	// the command string will remain `"$(NAME1)."` `$$` is replaced with `$` and the resulting string isn't expanded.
	// For example, `$$(VAR_NAME)` will be passed as `$(VAR_NAME)` whether or not the `VAR_NAME` environment variable exists.
	//
	// The entrypoint can't be updated.
	// See: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint
	//
	Command() *[]*string
	// The hard limit of CPUs to present to this container. Must be an even multiple of 0.25.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No CPU limit.
	//
	CpuLimit() *float64
	// The soft limit of CPUs to reserve for the container. Must be an even multiple of 0.25.
	//
	// The container will be given at least this many CPUs, but may consume more.
	//
	// At least one of `cpuReservation` and `cpuLimit` is required.
	// If both are specified, then `cpuLimit` must be at least as large as `cpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No CPUs reserved.
	//
	CpuReservation() *float64
	// The environment variables to pass to this container.
	//
	// *Note*: Environment variables cannot start with "AWS_BATCH".
	// This naming convention is reserved for variables that AWS Batch sets.
	Env() *map[string]*string
	// The hard limit of GPUs to present to this container.
	//
	// If your container attempts to exceed this limit, it will be terminated.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No GPU limit.
	//
	GpuLimit() *float64
	// The soft limit of GPUs to reserve for the container.
	//
	// The container will be given at least this many GPUs, but may consume more.
	//
	// If both `gpuReservation` and `gpuLimit` are specified, then `gpuLimit` must be equal to `gpuReservation`.
	// See: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
	//
	// Default: - No GPUs reserved.
	//
	GpuReservation() *float64
	// The image that this container will run.
	Image() awsecs.ContainerImage
	// The image pull policy for this container.
	// See: https://kubernetes.io/docs/concepts/containers/images/#updating-images
	//
	// Default: - `ALWAYS` if the `:latest` tag is specified, `IF_NOT_PRESENT` otherwise.
	//
	ImagePullPolicy() ImagePullPolicy
	// The amount (in MiB) of memory to present to the container.
	//
	// If your container attempts to exceed the allocated memory, it will be terminated.
	//
	// Must be larger than 4 MiB.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required.
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html
	//
	// Default: - No memory limit.
	//
	MemoryLimit() awscdk.Size
	// The soft limit (in MiB) of memory to reserve for the container.
	//
	// Your container will be given at least this much memory, but may consume more.
	//
	// Must be larger than 4 MiB.
	//
	// When system memory is under heavy contention, Docker attempts to keep the
	// container memory to this soft limit. However, your container can consume more
	// memory when it needs to, up to either the hard limit specified with the memory
	// parameter (if applicable), or all of the available memory on the container
	// instance, whichever comes first.
	//
	// At least one of `memoryLimit` and `memoryReservation` is required.
	// If both are specified, then `memoryLimit` must be equal to `memoryReservation`.
	//
	// *Note*: To maximize your resource utilization, provide your jobs with as much memory as possible
	// for the specific instance type that you are using.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html
	//
	// Default: - No memory reserved.
	//
	MemoryReservation() awscdk.Size
	// The name of this container.
	// Default: `'Default'`.
	//
	Name() *string
	// If specified, gives this container elevated permissions on the host container instance.
	//
	// The level of permissions are similar to the root user permissions.
	//
	// This parameter maps to `privileged` policy in the Privileged pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems
	//
	// Default: false.
	//
	Privileged() *bool
	// If specified, gives this container readonly access to its root file system.
	//
	// This parameter maps to `ReadOnlyRootFilesystem` policy in the Volumes and file systems pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#volumes-and-file-systems
	//
	// Default: false.
	//
	ReadonlyRootFilesystem() *bool
	// If specified, the container is run as the specified group ID (`gid`).
	//
	// If this parameter isn't specified, the default is the group that's specified in the image metadata.
	// This parameter maps to `RunAsGroup` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: none.
	//
	RunAsGroup() *float64
	// If specified, the container is run as a user with a `uid` other than 0.
	//
	// Otherwise, no such rule is enforced.
	// This parameter maps to `RunAsUser` and `MustRunAsNonRoot` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: - the container is *not* required to run as a non-root user.
	//
	RunAsRoot() *bool
	// If specified, this container is run as the specified user ID (`uid`).
	//
	// This parameter maps to `RunAsUser` and `MustRunAs` policy in the Users and groups pod security policies in the Kubernetes documentation.
	//
	// *Note*: this is only compatible with Kubernetes < v1.25
	// See: https://kubernetes.io/docs/concepts/security/pod-security-policy/#users-and-groups
	//
	// Default: - the user that is specified in the image metadata.
	//
	RunAsUser() *float64
	// The Volumes to mount to this container.
	//
	// Automatically added to the Pod.
	// See: https://kubernetes.io/docs/concepts/storage/volumes/
	//
	Volumes() *[]EksVolume
}

A container that can be run with EKS orchestration on EC2 resources.
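The environment-variable expansion rules documented above for `Args()` and `Command()` can be sketched in plain Go: `$(NAME)` expands when the variable exists in the container's environment and stays literal otherwise, while `$$` escapes to a literal `$` whose following text is never expanded. This models the documented Kubernetes semantics for illustration; `expandArg` is a hypothetical helper, not a CDK or Kubernetes API:

```go
package main

import (
	"fmt"
	"strings"
)

// expandArg applies the documented expansion rules: "$(NAME)" becomes the
// value of NAME when it exists in env and is left unchanged otherwise;
// "$$" becomes a literal "$" and the reference after it is not expanded,
// so "$$(NAME)" always yields "$(NAME)".
func expandArg(arg string, env map[string]string) string {
	var out strings.Builder
	for i := 0; i < len(arg); {
		switch {
		case strings.HasPrefix(arg[i:], "$$"):
			out.WriteByte('$') // escaped dollar: copy, skip expansion
			i += 2
		case strings.HasPrefix(arg[i:], "$("):
			end := strings.Index(arg[i:], ")")
			if end < 0 {
				out.WriteString(arg[i:]) // unterminated reference: copy as-is
				return out.String()
			}
			if val, ok := env[arg[i+2:i+end]]; ok {
				out.WriteString(val)
			} else {
				out.WriteString(arg[i : i+end+1]) // unknown variable: keep reference
			}
			i += end + 1
		default:
			out.WriteByte(arg[i])
			i++
		}
	}
	return out.String()
}

func main() {
	env := map[string]string{"NAME1": "hello"}
	fmt.Println(expandArg("$(NAME1)", env))   // hello
	fmt.Println(expandArg("$(MISSING)", env)) // $(MISSING)
	fmt.Println(expandArg("$$(NAME1)", env))  // $(NAME1)
}
```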

type IEksJobDefinition added in v2.96.0

type IEksJobDefinition interface {
	IJobDefinition
	// The container this Job Definition will run.
	Container() EksContainerDefinition
	// The DNS Policy of the pod used by this Job Definition.
	// See: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
	//
	// Default: `DnsPolicy.CLUSTER_FIRST`
	//
	DnsPolicy() DnsPolicy
	// The name of the service account that's used to run the container.
	//
	// Service accounts are the Kubernetes method of identification and authentication,
	// roughly analogous to IAM users.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html
	//
	// Default: - the default service account of the container.
	//
	ServiceAccount() *string
	// If specified, the Pod used by this Job Definition will use the host's network IP address.
	//
	// Otherwise, the Kubernetes pod networking model is enabled.
	// Most AWS Batch workloads are egress-only and don't require the overhead of IP allocation for each pod for incoming connections.
	// See: https://kubernetes.io/docs/concepts/workloads/pods/#pod-networking
	//
	// Default: true.
	//
	UseHostNetwork() *bool
}

A JobDefinition that uses EKS orchestration.

func EksJobDefinition_FromEksJobDefinitionArn added in v2.96.0

func EksJobDefinition_FromEksJobDefinitionArn(scope constructs.Construct, id *string, eksJobDefinitionArn *string) IEksJobDefinition

Import an EksJobDefinition by its ARN.

type IFairshareSchedulingPolicy added in v2.96.0

type IFairshareSchedulingPolicy interface {
	ISchedulingPolicy
	// Used to calculate the percentage of the maximum available vCPU to reserve for share identifiers not present in the Queue.
	//
	// The percentage reserved is defined by the Scheduler as:
	// `(computeReservation/100)^ActiveFairShares` where `ActiveFairShares` is the number of active fair share identifiers.
	//
	// For example, a computeReservation value of 50 indicates that AWS Batch reserves 50% of the
	// maximum available vCPU if there's only one fair share identifier.
	// It reserves 25% if there are two fair share identifiers.
	// It reserves 12.5% if there are three fair share identifiers.
	//
	// A computeReservation value of 25 indicates that AWS Batch should reserve 25% of the
	// maximum available vCPU if there's only one fair share identifier,
	// 6.25% if there are two fair share identifiers,
	// and 1.56% if there are three fair share identifiers.
	// Default: - no vCPU is reserved.
	//
	ComputeReservation() *float64
	// The amount of time to use to measure the usage of each job.
	//
	// The usage is used to calculate a fair share percentage for each fair share identifier currently in the Queue.
	// A value of zero (0) indicates that only current usage is measured.
	// The decay is linear and gives preference to newer jobs.
	//
	// The maximum supported value is 604800 seconds (1 week).
	// Default: - 0: only the current job usage is considered.
	//
	ShareDecay() awscdk.Duration
	// The shares that this Scheduling Policy applies to.
	//
	// *Note*: It is possible to submit Jobs to the queue with Share Identifiers that
	// are not recognized by the Scheduling Policy.
	Shares() *[]*Share
}

Represents a Fairshare Scheduling Policy. Instructs the scheduler to allocate ComputeEnvironment vCPUs based on Job shareIdentifiers.

The Fairshare Scheduling Policy ensures that each share gets a certain amount of vCPUs. It does this by deciding how many Jobs of each share to schedule *relative to how many jobs of each share are currently being executed by the ComputeEnvironment*. The weight factors associated with each share determine the ratio of vCPUs allocated; see the readme for a more in-depth discussion of fairshare policies.
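The `computeReservation` formula quoted in the interface above, `(computeReservation/100)^ActiveFairShares`, can be checked with a few lines of Go. This is an illustration of the documented math, not part of the CDK API; `reservedVcpuFraction` is a hypothetical name:

```go
package main

import (
	"fmt"
	"math"
)

// reservedVcpuFraction returns the fraction of the maximum available vCPU
// that the scheduler reserves for share identifiers not present in the
// queue, per the documented formula (computeReservation/100)^activeFairShares.
func reservedVcpuFraction(computeReservation float64, activeFairShares int) float64 {
	return math.Pow(computeReservation/100, float64(activeFairShares))
}

func main() {
	// A computeReservation of 50 reserves 50%, 25%, and 12.5% of the
	// maximum available vCPU for one, two, and three active fair shares.
	for shares := 1; shares <= 3; shares++ {
		fmt.Printf("reservation=50, shares=%d -> %.4f\n", shares, reservedVcpuFraction(50, shares))
	}
}
```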

func FairshareSchedulingPolicy_FromFairshareSchedulingPolicyArn added in v2.96.0

func FairshareSchedulingPolicy_FromFairshareSchedulingPolicyArn(scope constructs.Construct, id *string, fairshareSchedulingPolicyArn *string) IFairshareSchedulingPolicy

Reference an existing Scheduling Policy by its ARN.

type IFargateComputeEnvironment added in v2.96.0

type IFargateComputeEnvironment interface {
	IManagedComputeEnvironment
}

A ManagedComputeEnvironment that uses ECS orchestration on Fargate instances.

func FargateComputeEnvironment_FromFargateComputeEnvironmentArn added in v2.96.0

func FargateComputeEnvironment_FromFargateComputeEnvironmentArn(scope constructs.Construct, id *string, fargateComputeEnvironmentArn *string) IFargateComputeEnvironment

Reference an existing FargateComputeEnvironment by its ARN.

type IJobDefinition added in v2.96.0

type IJobDefinition interface {
	awscdk.IResource
	// Add a RetryStrategy to this JobDefinition.
	AddRetryStrategy(strategy RetryStrategy)
	// The ARN of this job definition.
	JobDefinitionArn() *string
	// The name of this job definition.
	JobDefinitionName() *string
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters
	//
	// Default: none.
	//
	Parameters() *map[string]interface{}
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to this number of times.
	// Default: 1.
	//
	RetryAttempts() *float64
	// Defines the retry behavior for this job.
	// Default: - no `RetryStrategy`.
	//
	RetryStrategies() *[]RetryStrategy
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Default: none.
	//
	SchedulingPriority() *float64
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Default: - no timeout.
	//
	Timeout() awscdk.Duration
}

Represents a JobDefinition.

func EcsJobDefinition_FromJobDefinitionArn added in v2.96.0

func EcsJobDefinition_FromJobDefinitionArn(scope constructs.Construct, id *string, jobDefinitionArn *string) IJobDefinition

Import a JobDefinition by its ARN.

func MultiNodeJobDefinition_FromJobDefinitionArn added in v2.96.0

func MultiNodeJobDefinition_FromJobDefinitionArn(scope constructs.Construct, id *string, jobDefinitionArn *string) IJobDefinition

Refer to an existing JobDefinition by its ARN.

type IJobQueue added in v2.96.0

type IJobQueue interface {
	awscdk.IResource
	// Add a `ComputeEnvironment` to this Queue.
	//
	// The Queue will prefer lower-order `ComputeEnvironment`s.
	AddComputeEnvironment(computeEnvironment IComputeEnvironment, order *float64)
	// The set of compute environments mapped to a job queue and their order relative to each other.
	//
	// The job scheduler uses this parameter to determine which compute environment runs a specific job.
	// Compute environments must be in the VALID state before you can associate them with a job queue.
	// You can associate up to three compute environments with a job queue.
	// All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT);
	// EC2 and Fargate compute environments can't be mixed.
	//
	// *Note*: All compute environments that are associated with a job queue must share the same architecture.
	// AWS Batch doesn't support mixing compute environment architecture types in a single job queue.
	ComputeEnvironments() *[]*OrderedComputeEnvironment
	// If the job queue is enabled, it is able to accept jobs.
	//
	// Otherwise, new jobs can't be added to the queue, but jobs already in the queue can finish.
	// Default: true.
	//
	Enabled() *bool
	// The ARN of this job queue.
	JobQueueArn() *string
	// The name of the job queue.
	//
	// It can be up to 128 characters long.
	// It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
	JobQueueName() *string
	// The priority of the job queue.
	//
	// Job queues with a higher priority are evaluated first when associated with the same compute environment.
	// Priority is determined in descending order.
	// For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1.
	Priority() *float64
	// The SchedulingPolicy for this JobQueue.
	//
	// Instructs the Scheduler how to schedule different jobs.
	// Default: - no scheduling policy.
	//
	SchedulingPolicy() ISchedulingPolicy
}

Represents a JobQueue.

func JobQueue_FromJobQueueArn added in v2.96.0

func JobQueue_FromJobQueueArn(scope constructs.Construct, id *string, jobQueueArn *string) IJobQueue

Refer to an existing JobQueue by its ARN.

type IManagedComputeEnvironment added in v2.96.0

type IManagedComputeEnvironment interface {
	IComputeEnvironment
	awsec2.IConnectable
	awscdk.ITaggable
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	MaxvCpus() *float64
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// The properties that require a replacement of the Compute Environment are listed in the AWS Batch documentation.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
	//
	ReplaceComputeEnvironment() *bool
	// The security groups this Compute Environment will launch instances in.
	SecurityGroups() *[]awsec2.ISecurityGroup
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Default: false.
	//
	Spot() *bool
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: false.
	//
	TerminateOnUpdate() *bool
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// In that case, when an infrastructure update is triggered,
	// any running jobs will be allowed to run until `updateTimeout` has expired.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: 30 minutes.
	//
	UpdateTimeout() awscdk.Duration
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	//
	// Note: the CDK never sets this value by default; CloudFormation then defaults it to `false`.
	// This avoids the deployment failure that occurs when this value is set.
	// See: https://github.com/aws/aws-cdk/issues/27054
	//
	// Default: false.
	//
	UpdateToLatestImageVersion() *bool
	// The VPC Subnets this Compute Environment will launch instances in.
	VpcSubnets() *awsec2.SubnetSelection
}

Represents a Managed ComputeEnvironment.

Batch will provision EC2 Instances to meet the requirements of the jobs executing in this ComputeEnvironment.

type IManagedEc2EcsComputeEnvironment added in v2.96.0

type IManagedEc2EcsComputeEnvironment interface {
	IManagedComputeEnvironment
	// Add an instance class to this compute environment.
	AddInstanceClass(instanceClass awsec2.InstanceClass)
	// Add an instance type to this compute environment.
	AddInstanceType(instanceType awsec2.InstanceType)
	// The allocation strategy to use if not enough instances of the best fitting instance type can be allocated.
	// Default: - `BEST_FIT_PROGRESSIVE` if not using Spot instances,
	// `SPOT_CAPACITY_OPTIMIZED` if using Spot instances.
	//
	AllocationStrategy() AllocationStrategy
	// Configure which AMIs this Compute Environment can launch.
	//
	// Leave this `undefined` to allow Batch to choose the latest AMIs it supports for each instance that it launches.
	// Default: - ECS_AL2 compatible AMI ids for non-GPU instances, ECS_AL2_NVIDIA compatible AMI ids for GPU instances.
	//
	Images() *[]*EcsMachineImage
	// The instance classes that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Batch will automatically choose the size.
	InstanceClasses() *[]awsec2.InstanceClass
	// The execution Role that instances launched by this Compute Environment will use.
	// Default: - a role will be created.
	//
	InstanceRole() awsiam.IRole
	// The instance types that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	InstanceTypes() *[]awsec2.InstanceType
	// The Launch Template that this Compute Environment will use to provision EC2 Instances.
	//
	// *Note*: if `securityGroups` is specified on both your
	// launch template and this Compute Environment, the
	// `securityGroups` on the Compute Environment **override**
	// the ones on the launch template.
	// Default: no launch template.
	//
	LaunchTemplate() awsec2.ILaunchTemplate
	// The minimum vCPUs that an environment should maintain, even if the compute environment is DISABLED.
	// Default: 0.
	//
	MinvCpus() *float64
	// The EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to this Compute Environment,
	// you should consider creating a cluster placement group and associate it with your compute resources.
	// This keeps your multi-node parallel job on a logical grouping of instances
	// within a single Availability Zone with high network flow potential.
	// See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
	//
	// Default: - no placement group.
	//
	PlacementGroup() awsec2.IPlacementGroup
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be
	// less than 20% of the current On-Demand price for that Instance.
	// You always pay the lowest market price and never more than your maximum percentage.
	// For most use cases, Batch recommends leaving this field empty.
	// Default: - 100%.
	//
	SpotBidPercentage() *float64
	// The service-linked role that Spot Fleet needs to launch instances on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html
	//
	// Default: - a new Role will be created.
	//
	SpotFleetRole() awsiam.IRole
	// Whether or not to use Batch's optimal instance type.
	//
	// The optimal instance type is equivalent to adding the
	// C4, M4, and R4 instance classes. You can specify other instance classes
	// (of the same architecture) in addition to the optimal instance classes.
	// Default: true.
	//
	UseOptimalInstanceClasses() *bool
}

A ManagedComputeEnvironment that uses ECS orchestration on EC2 instances.

func ManagedEc2EcsComputeEnvironment_FromManagedEc2EcsComputeEnvironmentArn added in v2.96.0

func ManagedEc2EcsComputeEnvironment_FromManagedEc2EcsComputeEnvironmentArn(scope constructs.Construct, id *string, managedEc2EcsComputeEnvironmentArn *string) IManagedEc2EcsComputeEnvironment

Refer to an existing ManagedEc2EcsComputeEnvironment by its ARN.

type ISchedulingPolicy added in v2.96.0

type ISchedulingPolicy interface {
	awscdk.IResource
	// The ARN of this scheduling policy.
	SchedulingPolicyArn() *string
	// The name of this scheduling policy.
	SchedulingPolicyName() *string
}

Represents a Scheduling Policy.

Scheduling Policies tell the Batch Job Scheduler how to schedule incoming jobs.

type IUnmanagedComputeEnvironment added in v2.96.0

type IUnmanagedComputeEnvironment interface {
	IComputeEnvironment
	// The vCPUs this Compute Environment provides. Used only by the scheduler to schedule jobs in `Queue`s that use `FairshareSchedulingPolicy`s.
	//
	// **If this parameter is not provided on a fairshare queue, no capacity is reserved**;
	// that is, the `FairshareSchedulingPolicy` is ignored.
	UnmanagedvCPUs() *float64
}

Represents an UnmanagedComputeEnvironment.

Batch will not provision instances on your behalf in this ComputeEnvironment.

func UnmanagedComputeEnvironment_FromUnmanagedComputeEnvironmentArn added in v2.96.0

func UnmanagedComputeEnvironment_FromUnmanagedComputeEnvironmentArn(scope constructs.Construct, id *string, unmanagedComputeEnvironmentArn *string) IUnmanagedComputeEnvironment

Import an UnmanagedComputeEnvironment by its ARN.

type ImagePullPolicy added in v2.96.0

type ImagePullPolicy string

Determines when the image is pulled from the registry to launch a container.

const (
	// Every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest.
	//
	// If the kubelet has a container image with that exact digest cached locally,
	// the kubelet uses its cached image; otherwise, the kubelet pulls the image with the resolved digest,
	// and uses that image to launch the container.
	// See: https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier
	//
	ImagePullPolicy_ALWAYS ImagePullPolicy = "ALWAYS"
	// The image is pulled only if it is not already present locally.
	ImagePullPolicy_IF_NOT_PRESENT ImagePullPolicy = "IF_NOT_PRESENT"
	// The kubelet does not try fetching the image.
	//
	// If the image is somehow already present locally,
	// the kubelet attempts to start the container; otherwise, startup fails.
	// See pre-pulled images for more details.
	// See: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
	//
	ImagePullPolicy_NEVER ImagePullPolicy = "NEVER"
)

type JobDefinitionProps added in v2.96.0

type JobDefinitionProps struct {
	// The name of this job definition.
	// Default: - generated by CloudFormation.
	//
	JobDefinitionName *string `field:"optional" json:"jobDefinitionName" yaml:"jobDefinitionName"`
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters
	//
	// Default: none.
	//
	Parameters *map[string]interface{} `field:"optional" json:"parameters" yaml:"parameters"`
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to this number of times.
	// Default: 1.
	//
	RetryAttempts *float64 `field:"optional" json:"retryAttempts" yaml:"retryAttempts"`
	// Defines the retry behavior for this job.
	// Default: - no `RetryStrategy`.
	//
	RetryStrategies *[]RetryStrategy `field:"optional" json:"retryStrategies" yaml:"retryStrategies"`
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Default: none.
	//
	SchedulingPriority *float64 `field:"optional" json:"schedulingPriority" yaml:"schedulingPriority"`
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Default: - no timeout.
	//
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
}

Props common to all JobDefinitions.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk/v2"

var parameters interface{}
var retryStrategy RetryStrategy

jobDefinitionProps := &JobDefinitionProps{
	JobDefinitionName: jsii.String("jobDefinitionName"),
	Parameters: map[string]interface{}{
		"parametersKey": parameters,
	},
	RetryAttempts: jsii.Number(123),
	RetryStrategies: []RetryStrategy{
		retryStrategy,
	},
	SchedulingPriority: jsii.Number(123),
	Timeout: cdk.Duration_Minutes(jsii.Number(30)),
}

type JobQueue added in v2.96.0

type JobQueue interface {
	awscdk.Resource
	IJobQueue
	// The set of compute environments mapped to a job queue and their order relative to each other.
	//
	// The job scheduler uses this parameter to determine which compute environment runs a specific job.
	// Compute environments must be in the VALID state before you can associate them with a job queue.
	// You can associate up to three compute environments with a job queue.
	// All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT);
	// EC2 and Fargate compute environments can't be mixed.
	//
	// *Note*: All compute environments that are associated with a job queue must share the same architecture.
	// AWS Batch doesn't support mixing compute environment architecture types in a single job queue.
	ComputeEnvironments() *[]*OrderedComputeEnvironment
	// If the job queue is enabled, it is able to accept jobs.
	//
	// Otherwise, new jobs can't be added to the queue, but jobs already in the queue can finish.
	Enabled() *bool
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// The ARN of this job queue.
	JobQueueArn() *string
	// The name of the job queue.
	//
	// It can be up to 128 characters long.
	// It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
	JobQueueName() *string
	// The tree node.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	PhysicalName() *string
	// The priority of the job queue.
	//
	// Job queues with a higher priority are evaluated first when associated with the same compute environment.
	// Priority is determined in descending order.
	// For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1.
	Priority() *float64
	// The SchedulingPolicy for this JobQueue.
	//
	// Instructs the Scheduler how to schedule different jobs.
	SchedulingPolicy() ISchedulingPolicy
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// Add a `ComputeEnvironment` to this Queue.
	//
	// The Queue will prefer lower-order `ComputeEnvironment`s.
	AddComputeEnvironment(computeEnvironment IComputeEnvironment, order *float64)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	ToString() *string
}

JobQueues can receive Jobs, which are removed from the queue when sent to the linked ComputeEnvironment(s) to be executed.

Jobs exit the queue in FIFO order unless a `SchedulingPolicy` is linked.

Example:

var vpc ec2.IVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []*OrderedComputeEnvironment{
		&OrderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

func NewJobQueue added in v2.96.0

func NewJobQueue(scope constructs.Construct, id *string, props *JobQueueProps) JobQueue

type JobQueueProps added in v2.96.0

type JobQueueProps struct {
	// The set of compute environments mapped to a job queue and their order relative to each other.
	//
	// The job scheduler uses this parameter to determine which compute environment runs a specific job.
	// Compute environments must be in the VALID state before you can associate them with a job queue.
	// You can associate up to three compute environments with a job queue.
	// All of the compute environments must be either EC2 (EC2 or SPOT) or Fargate (FARGATE or FARGATE_SPOT);
	// EC2 and Fargate compute environments can't be mixed.
	//
	// *Note*: All compute environments that are associated with a job queue must share the same architecture.
	// AWS Batch doesn't support mixing compute environment architecture types in a single job queue.
	// Default: none.
	//
	ComputeEnvironments *[]*OrderedComputeEnvironment `field:"optional" json:"computeEnvironments" yaml:"computeEnvironments"`
	// If the job queue is enabled, it is able to accept jobs.
	//
	// Otherwise, new jobs can't be added to the queue, but jobs already in the queue can finish.
	// Default: true.
	//
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The name of the job queue.
	//
	// It can be up to 128 characters long.
	// It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
	// Default: - no name.
	//
	JobQueueName *string `field:"optional" json:"jobQueueName" yaml:"jobQueueName"`
	// The priority of the job queue.
	//
	// Job queues with a higher priority are evaluated first when associated with the same compute environment.
	// Priority is determined in descending order.
	// For example, a job queue with a priority of 10 is given scheduling preference over a job queue with a priority of 1.
	// Default: 1.
	//
	Priority *float64 `field:"optional" json:"priority" yaml:"priority"`
	// The SchedulingPolicy for this JobQueue.
	//
	// Instructs the Scheduler how to schedule different jobs.
	// Default: - no scheduling policy.
	//
	SchedulingPolicy ISchedulingPolicy `field:"optional" json:"schedulingPolicy" yaml:"schedulingPolicy"`
}

Props to configure a JobQueue.

Example:

var vpc ec2.IVpc

ecsJob := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
})

queue := batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	ComputeEnvironments: []*OrderedComputeEnvironment{
		&OrderedComputeEnvironment{
			ComputeEnvironment: batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("managedEc2CE"), &ManagedEc2EcsComputeEnvironmentProps{
				Vpc: vpc,
			}),
			Order: jsii.Number(1),
		},
	},
	Priority: jsii.Number(10),
})

user := iam.NewUser(this, jsii.String("MyUser"))
ecsJob.GrantSubmitJob(user, queue)

type LinuxParameters added in v2.96.0

type LinuxParameters interface {
	constructs.Construct
	// Device mounts.
	Devices() *[]*Device
	// Whether the init process is enabled.
	InitProcessEnabled() *bool
	// The max swap memory.
	MaxSwap() awscdk.Size
	// The tree node.
	Node() constructs.Node
	// The shared memory size (in MiB).
	//
	// Not valid for Fargate launch type.
	SharedMemorySize() awscdk.Size
	// The swappiness behavior.
	Swappiness() *float64
	// TmpFs mounts.
	Tmpfs() *[]*Tmpfs
	// Adds one or more host devices to a container.
	AddDevices(device ...*Device)
	// Specifies the container path, mount options, and size (in MiB) of the tmpfs mount for a container.
	//
	// Only works with EC2 launch type.
	AddTmpfs(tmpfs ...*Tmpfs)
	// Renders the Linux parameters to the Batch version of this resource, which does not have 'capabilities' and requires tmpfs.containerPath to be defined.
	RenderLinuxParameters() *CfnJobDefinition_LinuxParametersProperty
	// Returns a string representation of this construct.
	ToString() *string
}

Linux-specific options that are applied to the container.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import awscdk "github.com/aws/aws-cdk-go/awscdk/v2"

var size awscdk.Size

linuxParameters := awscdk.Aws_batch.NewLinuxParameters(this, jsii.String("MyLinuxParameters"), &LinuxParametersProps{
	InitProcessEnabled: jsii.Boolean(false),
	MaxSwap: size,
	SharedMemorySize: size,
	Swappiness: jsii.Number(123),
})

func NewLinuxParameters added in v2.96.0

func NewLinuxParameters(scope constructs.Construct, id *string, props *LinuxParametersProps) LinuxParameters

Constructs a new instance of the LinuxParameters class.

type LinuxParametersProps added in v2.96.0

type LinuxParametersProps struct {
	// Specifies whether to run an init process inside the container that forwards signals and reaps processes.
	// Default: false.
	//
	InitProcessEnabled *bool `field:"optional" json:"initProcessEnabled" yaml:"initProcessEnabled"`
	// The total amount of swap memory a container can use.
	//
	// This parameter
	// will be translated to the --memory-swap option to docker run.
	//
	// This parameter is only supported when you are using the EC2 launch type.
	// Accepted values are positive integers.
	// Default: No swap.
	//
	MaxSwap awscdk.Size `field:"optional" json:"maxSwap" yaml:"maxSwap"`
	// The value for the size of the /dev/shm volume.
	// Default: No shared memory.
	//
	SharedMemorySize awscdk.Size `field:"optional" json:"sharedMemorySize" yaml:"sharedMemorySize"`
	// This allows you to tune a container's memory swappiness behavior.
	//
	// This parameter
	// maps to the --memory-swappiness option to docker run. The swappiness relates
	// to the kernel's tendency to swap memory. A value of 0 will cause swapping to
	// not happen unless absolutely necessary. A value of 100 will cause pages to
	// be swapped very aggressively.
	//
	// This parameter is only supported when you are using the EC2 launch type.
	// Accepted values are whole numbers between 0 and 100. If a value is not
	// specified for maxSwap then this parameter is ignored.
	// Default: 60.
	//
	Swappiness *float64 `field:"optional" json:"swappiness" yaml:"swappiness"`
}

The properties for defining Linux-specific options that are applied to the container.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import awscdk "github.com/aws/aws-cdk-go/awscdk/v2"

var size awscdk.Size

linuxParametersProps := &LinuxParametersProps{
	InitProcessEnabled: jsii.Boolean(false),
	MaxSwap: size,
	SharedMemorySize: size,
	Swappiness: jsii.Number(123),
}

type ManagedComputeEnvironmentProps added in v2.96.0

type ManagedComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// VPC in which this Compute Environment will launch Instances.
	Vpc awsec2.IVpc `field:"required" json:"vpc" yaml:"vpc"`
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to. Each vCPU is equivalent to 1024 CPU shares.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Default: 256.
	//
	MaxvCpus *float64 `field:"optional" json:"maxvCpus" yaml:"maxvCpus"`
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// The properties that require a replacement of the Compute Environment are listed in the AWS Batch documentation.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
	//
	// Default: false.
	//
	ReplaceComputeEnvironment *bool `field:"optional" json:"replaceComputeEnvironment" yaml:"replaceComputeEnvironment"`
	// The security groups this Compute Environment will launch instances in.
	// Default: new security groups will be created.
	//
	SecurityGroups *[]awsec2.ISecurityGroup `field:"optional" json:"securityGroups" yaml:"securityGroups"`
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Default: false.
	//
	Spot *bool `field:"optional" json:"spot" yaml:"spot"`
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: false.
	//
	TerminateOnUpdate *bool `field:"optional" json:"terminateOnUpdate" yaml:"terminateOnUpdate"`
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// If so,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: 30 minutes.
	//
	UpdateTimeout awscdk.Duration `field:"optional" json:"updateTimeout" yaml:"updateTimeout"`
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Default: true.
	//
	UpdateToLatestImageVersion *bool `field:"optional" json:"updateToLatestImageVersion" yaml:"updateToLatestImageVersion"`
	// The VPC Subnets this Compute Environment will launch instances in.
	// Default: new subnets will be created.
	//
	VpcSubnets *awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
}

Props for a ManagedComputeEnvironment.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var role role
var securityGroup securityGroup
var subnet subnet
var subnetFilter subnetFilter
var vpc vpc

managedComputeEnvironmentProps := &ManagedComputeEnvironmentProps{
	Vpc: vpc,

	// the properties below are optional
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	MaxvCpus: jsii.Number(123),
	ReplaceComputeEnvironment: jsii.Boolean(false),
	SecurityGroups: []iSecurityGroup{
		securityGroup,
	},
	ServiceRole: role,
	Spot: jsii.Boolean(false),
	TerminateOnUpdate: jsii.Boolean(false),
	UpdateTimeout: cdk.Duration_Minutes(jsii.Number(30)),
	UpdateToLatestImageVersion: jsii.Boolean(false),
	VpcSubnets: &SubnetSelection{
		AvailabilityZones: []*string{
			jsii.String("availabilityZones"),
		},
		OnePerAz: jsii.Boolean(false),
		SubnetFilters: []*subnetFilter{
			subnetFilter,
		},
		SubnetGroupName: jsii.String("subnetGroupName"),
		Subnets: []iSubnet{
			subnet,
		},
		SubnetType: awscdk.Aws_ec2.SubnetType_PRIVATE_ISOLATED,
	},
}
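As a back-of-the-envelope check on `maxvCpus` sizing: each vCPU corresponds to 1024 CPU shares, and with `BEST_FIT_PROGRESSIVE`, `SPOT_CAPACITY_OPTIMIZED`, or `BEST_FIT` with Spot instances the scheduler may overshoot the limit by at most one instance. A minimal sketch of that arithmetic (plain Go; the function names are illustrative, not part of the CDK API):

```go
package main

import "fmt"

// cpuShares converts Batch vCPUs to CPU shares (1 vCPU = 1024 shares).
func cpuShares(vCpus float64) int {
	return int(vCpus * 1024)
}

// worstCaseVcpus bounds the vCPUs a managed Compute Environment may
// actually reach: the scheduler may exceed maxvCpus by at most one of
// the instances specified in instanceTypes or instanceClasses.
func worstCaseVcpus(maxvCpus, largestInstanceVcpus int) int {
	return maxvCpus + largestInstanceVcpus
}

func main() {
	fmt.Println(cpuShares(256))          // the default maxvCpus of 256 -> 262144 shares
	fmt.Println(worstCaseVcpus(256, 96)) // one extra 96-vCPU instance -> 352
}
```

So a `maxvCpus` of 256 is a soft ceiling: budget for up to one additional instance's worth of vCPUs under the listed allocation strategies.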

type ManagedEc2EcsComputeEnvironment added in v2.96.0

type ManagedEc2EcsComputeEnvironment interface {
	awscdk.Resource
	IComputeEnvironment
	IManagedComputeEnvironment
	IManagedEc2EcsComputeEnvironment
	// The allocation strategy to use if not enough instances of the best fitting instance type can be allocated.
	AllocationStrategy() AllocationStrategy
	// The ARN of this compute environment.
	ComputeEnvironmentArn() *string
	// The name of the ComputeEnvironment.
	ComputeEnvironmentName() *string
	// The network connections associated with this resource.
	Connections() awsec2.Connections
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	Enabled() *bool
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// Configure which AMIs this Compute Environment can launch.
	//
	// Leave this `undefined` to allow Batch to choose the latest AMIs it supports for each instance that it launches.
	Images() *[]*EcsMachineImage
	// The instance classes that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Batch will automatically choose the instance size.
	InstanceClasses() *[]awsec2.InstanceClass
	// The execution Role that instances launched by this Compute Environment will use.
	InstanceRole() awsiam.IRole
	// The instance types that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	InstanceTypes() *[]awsec2.InstanceType
	// The Launch Template that this Compute Environment will use to provision EC2 Instances.
	//
	// *Note*: if `securityGroups` is specified on both your
	// launch template and this Compute Environment, **the
	// `securityGroups` on the Compute Environment override the
	// ones on the launch template**.
	LaunchTemplate() awsec2.ILaunchTemplate
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE`,
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	MaxvCpus() *float64
	// The minimum vCPUs that an environment should maintain, even if the compute environment is DISABLED.
	MinvCpus() *float64
	// The tree node.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	PhysicalName() *string
	// The EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to this Compute Environment,
	// you should consider creating a cluster placement group and associate it with your compute resources.
	// This keeps your multi-node parallel job on a logical grouping of instances
	// within a single Availability Zone with high network flow potential.
	PlacementGroup() awsec2.IPlacementGroup
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// Some properties require a replacement of the Compute Environment.
	ReplaceComputeEnvironment() *bool
	// The security groups this Compute Environment will launch instances in.
	SecurityGroups() *[]awsec2.ISecurityGroup
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	ServiceRole() awsiam.IRole
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	Spot() *bool
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be
	// less than 20% of the current On-Demand price for that Instance.
	// You always pay the lowest market price and never more than your maximum percentage.
	// For most use cases, Batch recommends leaving this field empty.
	SpotBidPercentage() *float64
	// The service-linked role that Spot Fleet needs to launch instances on your behalf.
	SpotFleetRole() awsiam.IRole
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// TagManager to set, remove and format tags.
	Tags() awscdk.TagManager
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	TerminateOnUpdate() *bool
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// If so,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	UpdateTimeout() awscdk.Duration
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	//
	// Note: the CDK will never set this value by default; `false` will be set by CFN.
	// This is to avoid a deployment failure that occurs when this value is set.
	UpdateToLatestImageVersion() *bool
	// Add an instance class to this compute environment.
	AddInstanceClass(instanceClass awsec2.InstanceClass)
	// Add an instance type to this compute environment.
	AddInstanceType(instanceType awsec2.InstanceType)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	ToString() *string
}

A ManagedComputeEnvironment that uses ECS orchestration on EC2 instances.

Example:

var computeEnv batch.IManagedEc2EcsComputeEnvironment
vpc := ec2.NewVpc(this, jsii.String("VPC"))
computeEnv.AddInstanceClass(ec2.InstanceClass_M5AD)
// Or, specify it on the constructor:
batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &batch.ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceClasses: &[]ec2.InstanceClass{
		ec2.InstanceClass_R4,
	},
	},
})
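The `spotBidPercentage` rule documented on this interface (instances launch only while the Spot price is below the given percentage of the On-Demand price) can be sketched as follows. Prices are in micro-USD to avoid floating-point rounding; all names are illustrative, not CDK API:

```go
package main

import "fmt"

// maxSpotPriceMicros returns the highest Spot price (micro-USD) accepted
// when spotBidPercentage is bidPct: the Spot price must be below bidPct%
// of the On-Demand price for the instance type.
func maxSpotPriceMicros(onDemandMicros, bidPct int64) int64 {
	return onDemandMicros * bidPct / 100
}

// wouldLaunch reports whether an instance would be launched at the
// current Spot price under the given bid percentage.
func wouldLaunch(spotMicros, onDemandMicros, bidPct int64) bool {
	return spotMicros < maxSpotPriceMicros(onDemandMicros, bidPct)
}

func main() {
	// On-Demand at $0.100000 with a 20% bid -> threshold $0.020000.
	fmt.Println(maxSpotPriceMicros(100000, 20)) // 20000
	fmt.Println(wouldLaunch(15000, 100000, 20)) // true
	fmt.Println(wouldLaunch(25000, 100000, 20)) // false
}
```

Note that you always pay the market Spot price, never the threshold itself; the percentage only gates whether instances launch, which is why Batch recommends leaving it unset for most workloads.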

func NewManagedEc2EcsComputeEnvironment added in v2.96.0

func NewManagedEc2EcsComputeEnvironment(scope constructs.Construct, id *string, props *ManagedEc2EcsComputeEnvironmentProps) ManagedEc2EcsComputeEnvironment

type ManagedEc2EcsComputeEnvironmentProps added in v2.96.0

type ManagedEc2EcsComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// VPC in which this Compute Environment will launch Instances.
	Vpc awsec2.IVpc `field:"required" json:"vpc" yaml:"vpc"`
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to. Each vCPU is equivalent to 1024 CPU shares.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE`,
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Default: 256.
	//
	MaxvCpus *float64 `field:"optional" json:"maxvCpus" yaml:"maxvCpus"`
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// Some properties require a replacement of the Compute Environment.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
	//
	// Default: false.
	//
	ReplaceComputeEnvironment *bool `field:"optional" json:"replaceComputeEnvironment" yaml:"replaceComputeEnvironment"`
	// The security groups this Compute Environment will launch instances in.
	// Default: new security groups will be created.
	//
	SecurityGroups *[]awsec2.ISecurityGroup `field:"optional" json:"securityGroups" yaml:"securityGroups"`
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Default: false.
	//
	Spot *bool `field:"optional" json:"spot" yaml:"spot"`
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: false.
	//
	TerminateOnUpdate *bool `field:"optional" json:"terminateOnUpdate" yaml:"terminateOnUpdate"`
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// If so,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: 30 minutes.
	//
	UpdateTimeout awscdk.Duration `field:"optional" json:"updateTimeout" yaml:"updateTimeout"`
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Default: true.
	//
	UpdateToLatestImageVersion *bool `field:"optional" json:"updateToLatestImageVersion" yaml:"updateToLatestImageVersion"`
	// The VPC Subnets this Compute Environment will launch instances in.
	// Default: new subnets will be created.
	//
	VpcSubnets *awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
	// The allocation strategy to use if not enough instances of the best fitting instance type can be allocated.
	// Default: - `BEST_FIT_PROGRESSIVE` if not using Spot instances,
	// `SPOT_CAPACITY_OPTIMIZED` if using Spot instances.
	//
	AllocationStrategy AllocationStrategy `field:"optional" json:"allocationStrategy" yaml:"allocationStrategy"`
	// Configure which AMIs this Compute Environment can launch.
	//
	// If you specify this property with only `image` specified, then the
	// `imageType` will default to `ECS_AL2`. *If your image needs GPU resources,
	// specify `ECS_AL2_NVIDIA`; otherwise, the instances will not be able to properly
	// join the ComputeEnvironment*.
	// Default: - ECS_AL2 for non-GPU instances, ECS_AL2_NVIDIA for GPU instances.
	//
	Images *[]*EcsMachineImage `field:"optional" json:"images" yaml:"images"`
	// The instance classes that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Batch will automatically choose the instance size.
	// Default: - the instances Batch considers will be used (currently C4, M4, and R4).
	//
	InstanceClasses *[]awsec2.InstanceClass `field:"optional" json:"instanceClasses" yaml:"instanceClasses"`
	// The execution Role that instances launched by this Compute Environment will use.
	// Default: - a role will be created.
	//
	InstanceRole awsiam.IRole `field:"optional" json:"instanceRole" yaml:"instanceRole"`
	// The instance types that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Default: - the instances Batch considers will be used (currently C4, M4, and R4).
	//
	InstanceTypes *[]awsec2.InstanceType `field:"optional" json:"instanceTypes" yaml:"instanceTypes"`
	// The Launch Template that this Compute Environment will use to provision EC2 Instances.
	//
	// *Note*: if `securityGroups` is specified on both your
	// launch template and this Compute Environment, **the
	// `securityGroups` on the Compute Environment override the
	// ones on the launch template**.
	// Default: no launch template.
	//
	LaunchTemplate awsec2.ILaunchTemplate `field:"optional" json:"launchTemplate" yaml:"launchTemplate"`
	// The minimum vCPUs that an environment should maintain, even if the compute environment is DISABLED.
	// Default: 0.
	//
	MinvCpus *float64 `field:"optional" json:"minvCpus" yaml:"minvCpus"`
	// The EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to this Compute Environment,
	// you should consider creating a cluster placement group and associate it with your compute resources.
	// This keeps your multi-node parallel job on a logical grouping of instances
	// within a single Availability Zone with high network flow potential.
	// See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
	//
	// Default: - no placement group.
	//
	PlacementGroup awsec2.IPlacementGroup `field:"optional" json:"placementGroup" yaml:"placementGroup"`
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be
	// less than 20% of the current On-Demand price for that Instance.
	// You always pay the lowest market price and never more than your maximum percentage.
	// For most use cases, Batch recommends leaving this field empty.
	//
	// Implies `spot == true` if set.
	// Default: 100%.
	//
	SpotBidPercentage *float64 `field:"optional" json:"spotBidPercentage" yaml:"spotBidPercentage"`
	// The service-linked role that Spot Fleet needs to launch instances on your behalf.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html
	//
	// Default: - a new role will be created.
	//
	SpotFleetRole awsiam.IRole `field:"optional" json:"spotFleetRole" yaml:"spotFleetRole"`
	// Whether or not to use Batch's optimal instance type.
	//
	// The optimal instance type is equivalent to adding the
	// C4, M4, and R4 instance classes. You can specify other instance classes
	// (of the same architecture) in addition to the optimal instance classes.
	// Default: true.
	//
	UseOptimalInstanceClasses *bool `field:"optional" json:"useOptimalInstanceClasses" yaml:"useOptimalInstanceClasses"`
}

Props for a ManagedEc2EcsComputeEnvironment.

Example:

var computeEnv batch.IManagedEc2EcsComputeEnvironment
vpc := ec2.NewVpc(this, jsii.String("VPC"))
computeEnv.AddInstanceClass(ec2.InstanceClass_M5AD)
// Or, specify it on the constructor:
batch.NewManagedEc2EcsComputeEnvironment(this, jsii.String("myEc2ComputeEnv"), &batch.ManagedEc2EcsComputeEnvironmentProps{
	Vpc: vpc,
	InstanceClasses: &[]ec2.InstanceClass{
		ec2.InstanceClass_R4,
	},
	},
})
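The `useOptimalInstanceClasses` behavior described above (the "optimal" selection is equivalent to adding the C4, M4, and R4 instance classes on top of any classes you specify) can be sketched as a simple set union. This is illustrative only; the CDK performs this merge internally:

```go
package main

import "fmt"

// optimalClasses mirrors the classes Batch's "optimal" selection
// currently expands to, per the documentation above.
var optimalClasses = []string{"c4", "m4", "r4"}

// effectiveClasses returns the instance classes a compute environment may
// launch: the user-specified classes, plus the optimal set when enabled.
func effectiveClasses(userClasses []string, useOptimal bool) []string {
	out := append([]string{}, userClasses...)
	if useOptimal {
		out = append(out, optimalClasses...)
	}
	return out
}

func main() {
	fmt.Println(effectiveClasses([]string{"m5"}, true))  // [m5 c4 m4 r4]
	fmt.Println(effectiveClasses([]string{"m5"}, false)) // [m5]
}
```

This is why specifying `instanceClasses` does not disable the optimal set: set `UseOptimalInstanceClasses: jsii.Boolean(false)` if you want only the classes you list.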

type ManagedEc2EksComputeEnvironment added in v2.96.0

type ManagedEc2EksComputeEnvironment interface {
	awscdk.Resource
	IComputeEnvironment
	IManagedComputeEnvironment
	// The allocation strategy to use if not enough instances of the best fitting instance type can be allocated.
	AllocationStrategy() AllocationStrategy
	// The ARN of this compute environment.
	ComputeEnvironmentArn() *string
	// The name of the ComputeEnvironment.
	ComputeEnvironmentName() *string
	// The network connections associated with this resource.
	Connections() awsec2.Connections
	// The cluster that backs this Compute Environment. Required for Compute Environments running Kubernetes jobs.
	//
	// Please ensure that you have followed the steps at
	//
	// https://docs.aws.amazon.com/batch/latest/userguide/getting-started-eks.html
	//
	// before attempting to deploy a `ManagedEc2EksComputeEnvironment` that uses this cluster.
	// If you do not follow the steps in the link, the deployment will fail with a message that the
	// compute environment did not stabilize.
	EksCluster() awseks.ICluster
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	Enabled() *bool
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// Configure which AMIs this Compute Environment can launch.
	Images() *[]*EksMachineImage
	// The instance classes that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	InstanceClasses() *[]awsec2.InstanceClass
	// The execution Role that instances launched by this Compute Environment will use.
	InstanceRole() awsiam.IRole
	// The instance types that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	InstanceTypes() *[]awsec2.InstanceType
	// The namespace of the Cluster.
	//
	// Cannot be 'default', start with 'kube-', or be longer than 64 characters.
	KubernetesNamespace() *string
	// The Launch Template that this Compute Environment will use to provision EC2 Instances.
	//
	// *Note*: if `securityGroups` is specified on both your
	// launch template and this Compute Environment, **the
	// `securityGroups` on the Compute Environment override the
	// ones on the launch template**.
	LaunchTemplate() awsec2.ILaunchTemplate
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE`,
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	MaxvCpus() *float64
	// The minimum vCPUs that an environment should maintain, even if the compute environment is DISABLED.
	MinvCpus() *float64
	// The tree node.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	PhysicalName() *string
	// The EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to this Compute Environment,
	// you should consider creating a cluster placement group and associate it with your compute resources.
	// This keeps your multi-node parallel job on a logical grouping of instances
	// within a single Availability Zone with high network flow potential.
	PlacementGroup() awsec2.IPlacementGroup
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// Some properties require a replacement of the Compute Environment.
	ReplaceComputeEnvironment() *bool
	// The security groups this Compute Environment will launch instances in.
	SecurityGroups() *[]awsec2.ISecurityGroup
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	ServiceRole() awsiam.IRole
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	Spot() *bool
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be
	// less than 20% of the current On-Demand price for that Instance.
	// You always pay the lowest market price and never more than your maximum percentage.
	// For most use cases, Batch recommends leaving this field empty.
	//
	// Implies `spot == true` if set.
	SpotBidPercentage() *float64
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// TagManager to set, remove and format tags.
	Tags() awscdk.TagManager
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	TerminateOnUpdate() *bool
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// If so,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	UpdateTimeout() awscdk.Duration
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	//
	// Note: the CDK will never set this value by default; `false` will be set by CFN.
	// This is to avoid a deployment failure that occurs when this value is set.
	UpdateToLatestImageVersion() *bool
	// Add an instance class to this compute environment.
	AddInstanceClass(instanceClass awsec2.InstanceClass)
	// Add an instance type to this compute environment.
	AddInstanceType(instanceType awsec2.InstanceType)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	ToString() *string
}

A ManagedComputeEnvironment that uses ECS orchestration on EC2 instances.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster
var instanceType instanceType
var launchTemplate launchTemplate
var machineImage iMachineImage
var placementGroup placementGroup
var role role
var securityGroup securityGroup
var subnet subnet
var subnetFilter subnetFilter
var vpc vpc

managedEc2EksComputeEnvironment := awscdk.Aws_batch.NewManagedEc2EksComputeEnvironment(this, jsii.String("MyManagedEc2EksComputeEnvironment"), &ManagedEc2EksComputeEnvironmentProps{
	EksCluster: cluster,
	KubernetesNamespace: jsii.String("kubernetesNamespace"),
	Vpc: vpc,

	// the properties below are optional
	AllocationStrategy: awscdk.Aws_batch.AllocationStrategy_BEST_FIT,
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	Images: []eksMachineImage{
		&eksMachineImage{
			Image: machineImage,
			ImageType: awscdk.Aws_batch.EksMachineImageType_EKS_AL2,
		},
	},
	InstanceClasses: []instanceClass{
		awscdk.Aws_ec2.InstanceClass_STANDARD3,
	},
	InstanceRole: role,
	InstanceTypes: []*instanceType{
		instanceType,
	},
	LaunchTemplate: launchTemplate,
	MaxvCpus: jsii.Number(123),
	MinvCpus: jsii.Number(123),
	PlacementGroup: placementGroup,
	ReplaceComputeEnvironment: jsii.Boolean(false),
	SecurityGroups: []iSecurityGroup{
		securityGroup,
	},
	ServiceRole: role,
	Spot: jsii.Boolean(false),
	SpotBidPercentage: jsii.Number(123),
	TerminateOnUpdate: jsii.Boolean(false),
	UpdateTimeout: cdk.Duration_Minutes(jsii.Number(30)),
	UpdateToLatestImageVersion: jsii.Boolean(false),
	UseOptimalInstanceClasses: jsii.Boolean(false),
	VpcSubnets: &SubnetSelection{
		AvailabilityZones: []*string{
			jsii.String("availabilityZones"),
		},
		OnePerAz: jsii.Boolean(false),
		SubnetFilters: []*subnetFilter{
			subnetFilter,
		},
		SubnetGroupName: jsii.String("subnetGroupName"),
		Subnets: []iSubnet{
			subnet,
		},
		SubnetType: awscdk.Aws_ec2.SubnetType_PRIVATE_ISOLATED,
	},
})

func NewManagedEc2EksComputeEnvironment added in v2.96.0

func NewManagedEc2EksComputeEnvironment(scope constructs.Construct, id *string, props *ManagedEc2EksComputeEnvironmentProps) ManagedEc2EksComputeEnvironment

type ManagedEc2EksComputeEnvironmentProps added in v2.96.0

type ManagedEc2EksComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// VPC in which this Compute Environment will launch Instances.
	Vpc awsec2.IVpc `field:"required" json:"vpc" yaml:"vpc"`
	// The maximum vCpus this `ManagedComputeEnvironment` can scale up to. Each vCPU is equivalent to 1024 CPU shares.
	//
	// *Note*: if this Compute Environment uses EC2 resources (not Fargate) with either `AllocationStrategy.BEST_FIT_PROGRESSIVE` or
	// `AllocationStrategy.SPOT_CAPACITY_OPTIMIZED`, or `AllocationStrategy.BEST_FIT` with Spot instances,
	// the scheduler may exceed this number by at most one of the instances specified in `instanceTypes`
	// or `instanceClasses`.
	// Default: 256.
	//
	MaxvCpus *float64 `field:"optional" json:"maxvCpus" yaml:"maxvCpus"`
	// Specifies whether this Compute Environment is replaced if an update is made that requires replacing its instances.
	//
	// To enable more properties to be updated,
	// set this property to `false`. When changing the value of this property to false,
	// do not change any other properties at the same time.
	// If other properties are changed at the same time,
	// and the change needs to be rolled back but it can't,
	// it's possible for the stack to go into the UPDATE_ROLLBACK_FAILED state.
	// You can't update a stack that is in the UPDATE_ROLLBACK_FAILED state.
	// However, if you can continue to roll it back,
	// you can return the stack to its original settings and then try to update it again.
	//
	// Some properties require replacement of the Compute Environment; consult the AWS Batch documentation for the full list.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
	//
	// Default: false.
	//
	ReplaceComputeEnvironment *bool `field:"optional" json:"replaceComputeEnvironment" yaml:"replaceComputeEnvironment"`
	// The security groups this Compute Environment will launch instances in.
	// Default: new security groups will be created.
	//
	SecurityGroups *[]awsec2.ISecurityGroup `field:"optional" json:"securityGroups" yaml:"securityGroups"`
	// Whether or not to use spot instances.
	//
	// Spot instances are less expensive EC2 instances that can be
	// reclaimed by EC2 at any time; your job will be given two minutes
	// of notice before reclamation.
	// Default: false.
	//
	Spot *bool `field:"optional" json:"spot" yaml:"spot"`
	// Whether or not any running jobs will be immediately terminated when an infrastructure update occurs.
	//
	// If this is enabled, any terminated jobs may be retried, depending on the job's
	// retry policy.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: false.
	//
	TerminateOnUpdate *bool `field:"optional" json:"terminateOnUpdate" yaml:"terminateOnUpdate"`
	// Only meaningful if `terminateOnUpdate` is `false`.
	//
	// In that case,
	// when an infrastructure update is triggered, any running jobs
	// will be allowed to run until `updateTimeout` has expired.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
	//
	// Default: 30 minutes.
	//
	UpdateTimeout awscdk.Duration `field:"optional" json:"updateTimeout" yaml:"updateTimeout"`
	// Whether or not the AMI is updated to the latest one supported by Batch when an infrastructure update occurs.
	//
	// If you specify a specific AMI, this property will be ignored.
	// Default: true.
	//
	UpdateToLatestImageVersion *bool `field:"optional" json:"updateToLatestImageVersion" yaml:"updateToLatestImageVersion"`
	// The VPC Subnets this Compute Environment will launch instances in.
	// Default: new subnets will be created.
	//
	VpcSubnets *awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
	// The cluster that backs this Compute Environment. Required for Compute Environments running Kubernetes jobs.
	//
	// Please ensure that you have followed the steps at
	//
	// https://docs.aws.amazon.com/batch/latest/userguide/getting-started-eks.html
	//
	// before attempting to deploy a `ManagedEc2EksComputeEnvironment` that uses this cluster.
	// If you do not follow the steps in the link, the deployment will fail with a message that the
	// compute environment did not stabilize.
	EksCluster awseks.ICluster `field:"required" json:"eksCluster" yaml:"eksCluster"`
	// The namespace of the Cluster.
	KubernetesNamespace *string `field:"required" json:"kubernetesNamespace" yaml:"kubernetesNamespace"`
	// The allocation strategy to use if not enough instances of the best fitting instance type can be allocated.
	// Default: - `BEST_FIT_PROGRESSIVE` if not using Spot instances,
	// `SPOT_CAPACITY_OPTIMIZED` if using Spot instances.
	//
	AllocationStrategy AllocationStrategy `field:"optional" json:"allocationStrategy" yaml:"allocationStrategy"`
	// Configure which AMIs this Compute Environment can launch.
	// Default: - If `imageKubernetesVersion` is specified,
	// EKS_AL2 for non-GPU instances and EKS_AL2_NVIDIA for GPU instances;
	// otherwise, ECS_AL2 for non-GPU instances and ECS_AL2_NVIDIA for GPU instances.
	//
	Images *[]*EksMachineImage `field:"optional" json:"images" yaml:"images"`
	// The instance classes that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Batch will automatically choose the instance size.
	// Default: - the instance classes Batch considers optimal will be used (currently C4, M4, and R4).
	//
	InstanceClasses *[]awsec2.InstanceClass `field:"optional" json:"instanceClasses" yaml:"instanceClasses"`
	// The execution Role that instances launched by this Compute Environment will use.
	// Default: - a role will be created.
	//
	InstanceRole awsiam.IRole `field:"optional" json:"instanceRole" yaml:"instanceRole"`
	// The instance types that this Compute Environment can launch.
	//
	// Which one is chosen depends on the `AllocationStrategy` used.
	// Default: - the instance types Batch considers optimal will be used (currently C4, M4, and R4).
	//
	InstanceTypes *[]awsec2.InstanceType `field:"optional" json:"instanceTypes" yaml:"instanceTypes"`
	// The Launch Template that this Compute Environment will use to provision EC2 Instances.
	//
	// *Note*: if `securityGroups` is specified on both your
	// launch template and this Compute Environment, **the
	// `securityGroup`s on the Compute Environment override the
	// ones on the launch template.**
	// Default: - no launch template.
	//
	LaunchTemplate awsec2.ILaunchTemplate `field:"optional" json:"launchTemplate" yaml:"launchTemplate"`
	// The minimum vCPUs that an environment should maintain, even if the compute environment is DISABLED.
	// Default: 0.
	//
	MinvCpus *float64 `field:"optional" json:"minvCpus" yaml:"minvCpus"`
	// The EC2 placement group to associate with your compute resources.
	//
	// If you intend to submit multi-node parallel jobs to this Compute Environment,
	// you should consider creating a cluster placement group and associating it with your compute resources.
	// This keeps your multi-node parallel job on a logical grouping of instances
	// within a single Availability Zone with high network flow potential.
	// See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
	//
	// Default: - no placement group.
	//
	PlacementGroup awsec2.IPlacementGroup `field:"optional" json:"placementGroup" yaml:"placementGroup"`
	// The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched.
	//
	// For example, if your maximum percentage is 20%, the Spot price must be
	// less than 20% of the current On-Demand price for that Instance.
	// You always pay the lowest market price and never more than your maximum percentage.
	// For most use cases, Batch recommends leaving this field empty.
	//
	// Implies `spot == true` if set.
	// Default: - 100%.
	//
	SpotBidPercentage *float64 `field:"optional" json:"spotBidPercentage" yaml:"spotBidPercentage"`
	// Whether or not to use Batch's optimal instance type.
	//
	// The optimal instance type is equivalent to adding the
	// C4, M4, and R4 instance classes. You can specify other instance classes
	// (of the same architecture) in addition to the optimal instance classes.
	// Default: true.
	//
	UseOptimalInstanceClasses *bool `field:"optional" json:"useOptimalInstanceClasses" yaml:"useOptimalInstanceClasses"`
}

Props for a ManagedEc2EksComputeEnvironment.
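The vCPU accounting used by `maxvCpus` and `minvCpus` maps each vCPU to 1024 CPU shares. A minimal conversion sketch (the `cpuShares` helper name is hypothetical and not part of this module):

```go
package main

import "fmt"

// cpuShares converts a vCPU count to CPU shares, using the
// 1 vCPU == 1024 shares relationship noted in the maxvCpus docs.
func cpuShares(vCpus float64) int {
	return int(vCpus * 1024)
}

func main() {
	fmt.Println(cpuShares(0.25)) // 256
	fmt.Println(cpuShares(4))    // 4096
}
```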

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster
var instanceType instanceType
var launchTemplate launchTemplate
var machineImage iMachineImage
var placementGroup placementGroup
var role role
var securityGroup securityGroup
var subnet subnet
var subnetFilter subnetFilter
var vpc vpc

managedEc2EksComputeEnvironmentProps := &ManagedEc2EksComputeEnvironmentProps{
	EksCluster: cluster,
	KubernetesNamespace: jsii.String("kubernetesNamespace"),
	Vpc: vpc,

	// the properties below are optional
	AllocationStrategy: awscdk.Aws_batch.AllocationStrategy_BEST_FIT,
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	Images: []eksMachineImage{
		&eksMachineImage{
			Image: machineImage,
			ImageType: awscdk.Aws_batch.EksMachineImageType_EKS_AL2,
		},
	},
	InstanceClasses: []instanceClass{
		awscdk.Aws_ec2.InstanceClass_STANDARD3,
	},
	InstanceRole: role,
	InstanceTypes: []*instanceType{
		instanceType,
	},
	LaunchTemplate: launchTemplate,
	MaxvCpus: jsii.Number(123),
	MinvCpus: jsii.Number(123),
	PlacementGroup: placementGroup,
	ReplaceComputeEnvironment: jsii.Boolean(false),
	SecurityGroups: []iSecurityGroup{
		securityGroup,
	},
	ServiceRole: role,
	Spot: jsii.Boolean(false),
	SpotBidPercentage: jsii.Number(123),
	TerminateOnUpdate: jsii.Boolean(false),
	UpdateTimeout: cdk.Duration_Minutes(jsii.Number(30)),
	UpdateToLatestImageVersion: jsii.Boolean(false),
	UseOptimalInstanceClasses: jsii.Boolean(false),
	VpcSubnets: &SubnetSelection{
		AvailabilityZones: []*string{
			jsii.String("availabilityZones"),
		},
		OnePerAz: jsii.Boolean(false),
		SubnetFilters: []*subnetFilter{
			subnetFilter,
		},
		SubnetGroupName: jsii.String("subnetGroupName"),
		Subnets: []iSubnet{
			subnet,
		},
		SubnetType: awscdk.Aws_ec2.SubnetType_PRIVATE_ISOLATED,
	},
}
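The `spotBidPercentage` semantics described above amount to a simple percentage cap on the Spot price relative to the On-Demand price. A sketch of the arithmetic, assuming prices in integer cents (illustrative only; Batch performs this comparison for you, and `maxSpotBidCents` is a hypothetical helper):

```go
package main

import "fmt"

// maxSpotBidCents returns the highest Spot price (in cents) that will be
// accepted, given the On-Demand price in cents and a spotBidPercentage.
func maxSpotBidCents(onDemandCents, bidPercentage int) int {
	return onDemandCents * bidPercentage / 100
}

func main() {
	// With spotBidPercentage = 20, a 10-cent/hr On-Demand instance is
	// only launched while the Spot price stays below 2 cents/hr.
	fmt.Println(maxSpotBidCents(10, 20)) // 2
}
```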

type MultiNodeContainer added in v2.96.0

type MultiNodeContainer struct {
	// The container that this node range will run.
	Container IEcsContainerDefinition `field:"required" json:"container" yaml:"container"`
	// The index of the last node to run this container.
	//
	// The container is run on all nodes in the range [startNode, endNode] (inclusive).
	EndNode *float64 `field:"required" json:"endNode" yaml:"endNode"`
	// The index of the first node to run this container.
	//
	// The container is run on all nodes in the range [startNode, endNode] (inclusive).
	StartNode *float64 `field:"required" json:"startNode" yaml:"startNode"`
}

Runs the container on nodes [startNode, endNode].

Example:

multiNodeJob := batch.NewMultiNodeJobDefinition(this, jsii.String("JobDefinition"), &MultiNodeJobDefinitionProps{
	InstanceType: ec2.InstanceType_Of(ec2.InstanceClass_R4, ec2.InstanceSize_LARGE),
	 // optional, omit to let Batch choose the type for you
	Containers: []multiNodeContainer{
		&multiNodeContainer{
			Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("mainMPIContainer"), &EcsEc2ContainerDefinitionProps{
				Image: ecs.ContainerImage_FromRegistry(jsii.String("yourregistry.com/yourMPIImage:latest")),
				Cpu: jsii.Number(256),
				Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
			StartNode: jsii.Number(0),
			EndNode: jsii.Number(5),
		},
	},
})
// convenience method
multiNodeJob.AddContainer(&multiNodeContainer{
	StartNode: jsii.Number(6),
	EndNode: jsii.Number(10),
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("multiContainer"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Cpu: jsii.Number(256),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
	}),
})
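Each `MultiNodeContainer` claims the inclusive node range `[startNode, endNode]`, so ranges assigned to different containers should not overlap. A hypothetical validation helper (not part of the construct library) sketching that check:

```go
package main

import (
	"fmt"
	"sort"
)

// nodeRange mirrors the startNode/endNode fields of MultiNodeContainer;
// both bounds are inclusive.
type nodeRange struct{ start, end int }

// overlaps reports whether any two inclusive node ranges intersect,
// i.e. whether some node would be claimed by more than one container.
func overlaps(ranges []nodeRange) bool {
	sort.Slice(ranges, func(i, j int) bool { return ranges[i].start < ranges[j].start })
	for i := 1; i < len(ranges); i++ {
		// After sorting by start, an overlap means the next range
		// begins at or before the previous range's inclusive end.
		if ranges[i].start <= ranges[i-1].end {
			return true
		}
	}
	return false
}

func main() {
	// The ranges from the example above: nodes 0-5 and 6-10.
	fmt.Println(overlaps([]nodeRange{{0, 5}, {6, 10}})) // false
	fmt.Println(overlaps([]nodeRange{{0, 5}, {5, 10}})) // true: node 5 claimed twice
}
```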

type MultiNodeJobDefinition added in v2.96.0

type MultiNodeJobDefinition interface {
	awscdk.Resource
	IJobDefinition
	// The containers that this multinode job will run.
	Containers() *[]*MultiNodeContainer
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// If the prop `instanceType` is left `undefined`, then this will hold a fake instance type, for backwards compatibility reasons.
	InstanceType() awsec2.InstanceType
	// The ARN of this job definition.
	JobDefinitionArn() *string
	// The name of this job definition.
	JobDefinitionName() *string
	// The index of the main node in this job.
	//
	// The main node is responsible for orchestration.
	MainNode() *float64
	// The tree node.
	Node() constructs.Node
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	Parameters() *map[string]interface{}
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	PhysicalName() *string
	// Whether to propagate tags from the JobDefinition to the ECS task that Batch spawns.
	PropagateTags() *bool
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to this many times.
	RetryAttempts() *float64
	// Defines the retry behavior for this job.
	RetryStrategies() *[]RetryStrategy
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	SchedulingPriority() *float64
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	Timeout() awscdk.Duration
	// Add a container to this multinode job.
	AddContainer(container *MultiNodeContainer)
	// Add a RetryStrategy to this JobDefinition.
	AddRetryStrategy(strategy RetryStrategy)
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	ToString() *string
}

A JobDefinition that uses Ecs orchestration to run multiple containers.

Example:

multiNodeJob := batch.NewMultiNodeJobDefinition(this, jsii.String("JobDefinition"), &MultiNodeJobDefinitionProps{
	InstanceType: ec2.InstanceType_Of(ec2.InstanceClass_R4, ec2.InstanceSize_LARGE),
	 // optional, omit to let Batch choose the type for you
	Containers: []multiNodeContainer{
		&multiNodeContainer{
			Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("mainMPIContainer"), &EcsEc2ContainerDefinitionProps{
				Image: ecs.ContainerImage_FromRegistry(jsii.String("yourregistry.com/yourMPIImage:latest")),
				Cpu: jsii.Number(256),
				Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
			StartNode: jsii.Number(0),
			EndNode: jsii.Number(5),
		},
	},
})
// convenience method
multiNodeJob.AddContainer(&multiNodeContainer{
	StartNode: jsii.Number(6),
	EndNode: jsii.Number(10),
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("multiContainer"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Cpu: jsii.Number(256),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
	}),
})

func NewMultiNodeJobDefinition added in v2.96.0

func NewMultiNodeJobDefinition(scope constructs.Construct, id *string, props *MultiNodeJobDefinitionProps) MultiNodeJobDefinition

type MultiNodeJobDefinitionProps added in v2.96.0

type MultiNodeJobDefinitionProps struct {
	// The name of this job definition.
	// Default: - generated by CloudFormation.
	//
	JobDefinitionName *string `field:"optional" json:"jobDefinitionName" yaml:"jobDefinitionName"`
	// The default parameters passed to the container. These parameters can be referenced in the `command` that you give to the container.
	// See: https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters
	//
	// Default: none.
	//
	Parameters *map[string]interface{} `field:"optional" json:"parameters" yaml:"parameters"`
	// The number of times to retry a job.
	//
	// On failure, the job is retried up to this many times.
	// Default: 1.
	//
	RetryAttempts *float64 `field:"optional" json:"retryAttempts" yaml:"retryAttempts"`
	// Defines the retry behavior for this job.
	// Default: - no `RetryStrategy`.
	//
	RetryStrategies *[]RetryStrategy `field:"optional" json:"retryStrategies" yaml:"retryStrategies"`
	// The priority of this Job.
	//
	// Only used in Fairshare Scheduling
	// to decide which job to run first when there are multiple jobs
	// with the same share identifier.
	// Default: none.
	//
	SchedulingPriority *float64 `field:"optional" json:"schedulingPriority" yaml:"schedulingPriority"`
	// The timeout time for jobs that are submitted with this job definition.
	//
	// After the amount of time you specify passes,
	// Batch terminates your jobs if they aren't finished.
	// Default: - no timeout.
	//
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
	// The containers that this multinode job will run.
	// See: https://aws.amazon.com/blogs/compute/building-a-tightly-coupled-molecular-dynamics-workflow-with-multi-node-parallel-jobs-in-aws-batch/
	//
	// Default: none.
	//
	Containers *[]*MultiNodeContainer `field:"optional" json:"containers" yaml:"containers"`
	// The instance type that this job definition will run.
	// Default: - optimal instance, selected by Batch.
	//
	InstanceType awsec2.InstanceType `field:"optional" json:"instanceType" yaml:"instanceType"`
	// The index of the main node in this job.
	//
	// The main node is responsible for orchestration.
	// Default: 0.
	//
	MainNode *float64 `field:"optional" json:"mainNode" yaml:"mainNode"`
	// Whether to propagate tags from the JobDefinition to the ECS task that Batch spawns.
	// Default: false.
	//
	PropagateTags *bool `field:"optional" json:"propagateTags" yaml:"propagateTags"`
}

Props to configure a MultiNodeJobDefinition.

Example:

multiNodeJob := batch.NewMultiNodeJobDefinition(this, jsii.String("JobDefinition"), &MultiNodeJobDefinitionProps{
	InstanceType: ec2.InstanceType_Of(ec2.InstanceClass_R4, ec2.InstanceSize_LARGE),
	 // optional, omit to let Batch choose the type for you
	Containers: []multiNodeContainer{
		&multiNodeContainer{
			Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("mainMPIContainer"), &EcsEc2ContainerDefinitionProps{
				Image: ecs.ContainerImage_FromRegistry(jsii.String("yourregistry.com/yourMPIImage:latest")),
				Cpu: jsii.Number(256),
				Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
			}),
			StartNode: jsii.Number(0),
			EndNode: jsii.Number(5),
		},
	},
})
// convenience method
multiNodeJob.AddContainer(&multiNodeContainer{
	StartNode: jsii.Number(6),
	EndNode: jsii.Number(10),
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("multiContainer"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("amazon/amazon-ecs-sample")),
		Cpu: jsii.Number(256),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
	}),
})

type OptimalInstanceType added in v2.99.0

type OptimalInstanceType interface {
	awsec2.InstanceType
	// The instance's CPU architecture.
	Architecture() awsec2.InstanceArchitecture
	SameInstanceClassAs(other awsec2.InstanceType) *bool
	// Return the instance type as a dotted string.
	ToString() *string
}

Not a real instance type!

Indicates that Batch will choose one it determines to be optimal for the workload.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

optimalInstanceType := awscdk.Aws_batch.NewOptimalInstanceType()

func NewOptimalInstanceType added in v2.99.0

func NewOptimalInstanceType() OptimalInstanceType

type OrderedComputeEnvironment added in v2.96.0

type OrderedComputeEnvironment struct {
	// The ComputeEnvironment to link to this JobQueue.
	ComputeEnvironment IComputeEnvironment `field:"required" json:"computeEnvironment" yaml:"computeEnvironment"`
	// The order associated with `computeEnvironment`.
	Order *float64 `field:"required" json:"order" yaml:"order"`
}

Assigns an order to a ComputeEnvironment.

The JobQueue will prioritize the lowest-order ComputeEnvironment.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var computeEnvironment iComputeEnvironment

orderedComputeEnvironment := &OrderedComputeEnvironment{
	ComputeEnvironment: computeEnvironment,
	Order: jsii.Number(123),
}
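The queue's selection rule — lowest `order` wins — can be sketched with a plain sort. The `orderedCE` struct below is a stand-in for `OrderedComputeEnvironment` for illustration, not the real type:

```go
package main

import (
	"fmt"
	"sort"
)

// orderedCE is a stand-in for OrderedComputeEnvironment: a name plus
// the order value the JobQueue uses to prioritize it.
type orderedCE struct {
	name  string
	order int
}

// byPriority sorts environments so the queue's first choice comes first.
func byPriority(ces []orderedCE) []orderedCE {
	sort.Slice(ces, func(i, j int) bool { return ces[i].order < ces[j].order })
	return ces
}

func main() {
	ces := byPriority([]orderedCE{
		{"onDemandFallback", 2}, // used only when the lower-order environment is saturated
		{"cheapSpot", 1},
	})
	fmt.Println(ces[0].name) // cheapSpot
}
```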

type Reason added in v2.96.0

type Reason interface {
}

Common job exit reasons.

Example:

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	RetryAttempts: jsii.Number(5),
	RetryStrategies: []retryStrategy{
		batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()),
	},
})
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_SPOT_INSTANCE_RECLAIMED()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_Custom(&CustomReason{
	OnExitCode: jsii.String("40*"),
	OnReason: jsii.String("some reason"),
})))

func NewReason added in v2.96.0

func NewReason() Reason

func Reason_CANNOT_PULL_CONTAINER added in v2.96.0

func Reason_CANNOT_PULL_CONTAINER() Reason

func Reason_Custom added in v2.96.0

func Reason_Custom(customReasonProps *CustomReason) Reason

A custom Reason that can match on multiple conditions.

Note that all specified conditions must be met for this reason to match.
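The matching behavior of a custom `Reason` — trailing-`*` globs on fields like `onExitCode`, and AND semantics across whichever conditions are set — can be sketched as follows. This is an illustration of the semantics, not Batch's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// matchGlob matches the simple trailing-* patterns used by onExitCode,
// e.g. "40*" matches "40", "401", and "404".
func matchGlob(pattern, value string) bool {
	if strings.HasSuffix(pattern, "*") {
		return strings.HasPrefix(value, strings.TrimSuffix(pattern, "*"))
	}
	return pattern == value
}

// customReasonMatches mimics the AND semantics of a custom Reason:
// every condition that is set (non-empty) must match for it to apply.
func customReasonMatches(onExitCode, onReason, exitCode, reason string) bool {
	if onExitCode != "" && !matchGlob(onExitCode, exitCode) {
		return false
	}
	if onReason != "" && !matchGlob(onReason, reason) {
		return false
	}
	return true
}

func main() {
	fmt.Println(customReasonMatches("40*", "some reason", "404", "some reason"))  // true
	fmt.Println(customReasonMatches("40*", "some reason", "404", "other reason")) // false
}
```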

func Reason_NON_ZERO_EXIT_CODE added in v2.96.0

func Reason_NON_ZERO_EXIT_CODE() Reason

func Reason_SPOT_INSTANCE_RECLAIMED added in v2.96.0

func Reason_SPOT_INSTANCE_RECLAIMED() Reason

type RetryStrategy added in v2.96.0

type RetryStrategy interface {
	// The action to take when the job exits with the Reason specified.
	Action() Action
	// If the job exits with this Reason it will trigger the specified Action.
	On() Reason
}

Define how Jobs using this JobDefinition respond to different exit conditions.

Example:

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
	}),
	RetryAttempts: jsii.Number(5),
	RetryStrategies: []retryStrategy{
		batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()),
	},
})
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_SPOT_INSTANCE_RECLAIMED()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_CANNOT_PULL_CONTAINER()))
jobDefn.AddRetryStrategy(batch.RetryStrategy_Of(batch.Action_EXIT, batch.Reason_Custom(&CustomReason{
	OnExitCode: jsii.String("40*"),
	OnReason: jsii.String("some reason"),
})))

func NewRetryStrategy added in v2.96.0

func NewRetryStrategy(action Action, on Reason) RetryStrategy

func RetryStrategy_Of added in v2.96.0

func RetryStrategy_Of(action Action, on Reason) RetryStrategy

Create a new RetryStrategy.

type Secret added in v2.96.0

type Secret interface {
	// The ARN of the secret.
	Arn() *string
	// Whether this secret uses a specific JSON field.
	HasField() *bool
	// Grants reading the secret to a principal.
	GrantRead(grantee awsiam.IGrantable) awsiam.Grant
}

A secret environment variable.

Example:

var mySecret iSecret

jobDefn := batch.NewEcsJobDefinition(this, jsii.String("JobDefn"), &EcsJobDefinitionProps{
	Container: batch.NewEcsEc2ContainerDefinition(this, jsii.String("containerDefn"), &EcsEc2ContainerDefinitionProps{
		Image: ecs.ContainerImage_FromRegistry(jsii.String("public.ecr.aws/amazonlinux/amazonlinux:latest")),
		Memory: cdk.Size_Mebibytes(jsii.Number(2048)),
		Cpu: jsii.Number(256),
		Secrets: map[string]secret{
			"MY_SECRET_ENV_VAR": batch.Secret_FromSecretsManager(mySecret, nil),
		},
	}),
})

func Secret_FromSecretsManager added in v2.96.0

func Secret_FromSecretsManager(secret awssecretsmanager.ISecret, field *string) Secret

Creates an environment variable value from a secret stored in AWS Secrets Manager.

func Secret_FromSecretsManagerVersion added in v2.96.0

func Secret_FromSecretsManagerVersion(secret awssecretsmanager.ISecret, versionInfo *SecretVersionInfo, field *string) Secret

Creates an environment variable value from a specific version of a secret stored in AWS Secrets Manager.

func Secret_FromSsmParameter added in v2.96.0

func Secret_FromSsmParameter(parameter awsssm.IParameter) Secret

Creates an environment variable value from a parameter stored in AWS Systems Manager Parameter Store.

type SecretPathVolume added in v2.96.0

type SecretPathVolume interface {
	EksVolume
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	ContainerPath() *string
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	Name() *string
	// Specifies whether the secret or the secret's keys must be defined.
	// Default: true.
	//
	Optional() *bool
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	Readonly() *bool
	// The name of the secret.
	//
	// Must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	SecretName() *string
}

Specifies the configuration of a Kubernetes secret volume.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

secretPathVolume := awscdk.Aws_batch.NewSecretPathVolume(&SecretPathVolumeOptions{
	Name: jsii.String("name"),
	SecretName: jsii.String("secretName"),

	// the properties below are optional
	MountPath: jsii.String("mountPath"),
	Optional: jsii.Boolean(false),
	Readonly: jsii.Boolean(false),
})

See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

func EksVolume_Secret added in v2.96.0

func EksVolume_Secret(options *SecretPathVolumeOptions) SecretPathVolume

Creates a Kubernetes Secret volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

func EmptyDirVolume_Secret added in v2.96.0

func EmptyDirVolume_Secret(options *SecretPathVolumeOptions) SecretPathVolume

Creates a Kubernetes Secret volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

func HostPathVolume_Secret added in v2.96.0

func HostPathVolume_Secret(options *SecretPathVolumeOptions) SecretPathVolume

Creates a Kubernetes Secret volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

func NewSecretPathVolume added in v2.96.0

func NewSecretPathVolume(options *SecretPathVolumeOptions) SecretPathVolume

func SecretPathVolume_Secret added in v2.96.0

func SecretPathVolume_Secret(options *SecretPathVolumeOptions) SecretPathVolume

Creates a Kubernetes Secret volume. See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

type SecretPathVolumeOptions added in v2.96.0

type SecretPathVolumeOptions struct {
	// The name of this volume.
	//
	// The name must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	Name *string `field:"required" json:"name" yaml:"name"`
	// The path on the container where the volume is mounted.
	// Default: - the volume is not mounted.
	//
	MountPath *string `field:"optional" json:"mountPath" yaml:"mountPath"`
	// If specified, the container has readonly access to the volume.
	//
	// Otherwise, the container has read/write access.
	// Default: false.
	//
	Readonly *bool `field:"optional" json:"readonly" yaml:"readonly"`
	// The name of the secret.
	//
	// Must be a valid DNS subdomain name.
	// See: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
	//
	SecretName *string `field:"required" json:"secretName" yaml:"secretName"`
	// Specifies whether the secret or the secret's keys must be defined.
	// Default: true.
	//
	Optional *bool `field:"optional" json:"optional" yaml:"optional"`
}

Options for a Kubernetes SecretPath Volume.

Example:

var jobDefn eksJobDefinition

jobDefn.Container.AddVolume(batch.EksVolume_EmptyDir(&EmptyDirVolumeOptions{
	Name: jsii.String("emptyDir"),
	MountPath: jsii.String("/Volumes/emptyDir"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_HostPath(&HostPathVolumeOptions{
	Name: jsii.String("hostPath"),
	HostPath: jsii.String("/sys"),
	MountPath: jsii.String("/Volumes/hostPath"),
}))
jobDefn.Container.AddVolume(batch.EksVolume_Secret(&SecretPathVolumeOptions{
	Name: jsii.String("secret"),
	Optional: jsii.Boolean(true),
	MountPath: jsii.String("/Volumes/secret"),
	SecretName: jsii.String("mySecret"),
}))

See: https://kubernetes.io/docs/concepts/storage/volumes/#secret

type SecretVersionInfo added in v2.96.0

type SecretVersionInfo struct {
	// version id of the secret.
	// Default: - use default version id.
	//
	VersionId *string `field:"optional" json:"versionId" yaml:"versionId"`
	// version stage of the secret.
	// Default: - use default version stage.
	//
	VersionStage *string `field:"optional" json:"versionStage" yaml:"versionStage"`
}

Specify the secret's version id or version stage.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

secretVersionInfo := &SecretVersionInfo{
	VersionId: jsii.String("versionId"),
	VersionStage: jsii.String("versionStage"),
}

type Share added in v2.96.0

type Share struct {
	// The identifier of this Share.
	//
	// All jobs that specify this share identifier
	// when submitted to the queue will be considered as part of this Share.
	ShareIdentifier *string `field:"required" json:"shareIdentifier" yaml:"shareIdentifier"`
	// The weight factor given to this Share.
	//
	// The Scheduler decides which jobs to put in the Compute Environment
	// such that the following ratio is equal for each job:
	//
	// `sharevCpu / weightFactor`,
	//
	// where `sharevCpu` is the total amount of vCPU given to that particular share; that is,
	// the sum of the vCPU of each job currently in the Compute Environment for that share.
	//
	// See the readme of this module for a detailed example that shows how these are used,
	// how it relates to `computeReservation`, and how `shareDecay` affects these calculations.
	WeightFactor *float64 `field:"required" json:"weightFactor" yaml:"weightFactor"`
}

Represents a group of Job Definitions.

All Job Definitions that declare a share identifier will be considered members of the Share defined by that share identifier.

The Scheduler divides the maximum available vCPUs of the ComputeEnvironment among Jobs in the Queue based on their shareIdentifier and the weightFactor associated with that shareIdentifier.

Example:

fairsharePolicy := batch.NewFairshareSchedulingPolicy(this, jsii.String("myFairsharePolicy"))

fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("A"),
	WeightFactor: jsii.Number(1),
})
fairsharePolicy.AddShare(&Share{
	ShareIdentifier: jsii.String("B"),
	WeightFactor: jsii.Number(1),
})
batch.NewJobQueue(this, jsii.String("JobQueue"), &JobQueueProps{
	SchedulingPolicy: fairsharePolicy,
})
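The weight arithmetic can be illustrated in plain Go, independent of the CDK. This is a simplified sketch (not the actual Batch scheduler) of the ratio described in `WeightFactor`: if `sharevCpu / weightFactor` is equalized across shares, each share's vCPU allocation is proportional to its weight factor.

```go
package main

import "fmt"

// fairshareSplit divides totalVCpus among shares so that
// allocation[id] / weights[id] is the same for every share,
// i.e. each allocation is proportional to its weight factor.
func fairshareSplit(totalVCpus float64, weights map[string]float64) map[string]float64 {
	var total float64
	for _, w := range weights {
		total += w
	}
	out := make(map[string]float64, len(weights))
	for id, w := range weights {
		out[id] = totalVCpus * w / total
	}
	return out
}

func main() {
	// Shares "A" (weight 1) and "B" (weight 3) competing for 256 vCPUs.
	split := fairshareSplit(256, map[string]float64{"A": 1, "B": 3})
	fmt.Println(split["A"], split["B"]) // 64 192
}
```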

type Tmpfs added in v2.96.0

type Tmpfs struct {
	// The absolute file path where the tmpfs volume is to be mounted.
	ContainerPath *string `field:"required" json:"containerPath" yaml:"containerPath"`
	// The size (in MiB) of the tmpfs volume.
	Size awscdk.Size `field:"required" json:"size" yaml:"size"`
	// The list of tmpfs volume mount options.
	//
	// For more information, see
	// [TmpfsMountOptions](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Tmpfs.html).
	// Default: none.
	//
	MountOptions *[]TmpfsMountOption `field:"optional" json:"mountOptions" yaml:"mountOptions"`
}

The details of a tmpfs mount for a container.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var size size

tmpfs := &Tmpfs{
	ContainerPath: jsii.String("containerPath"),
	Size: size,

	// the properties below are optional
	MountOptions: []tmpfsMountOption{
		awscdk.Aws_batch.TmpfsMountOption_DEFAULTS,
	},
}

type TmpfsMountOption added in v2.96.0

type TmpfsMountOption string

The supported options for a tmpfs mount for a container.

const (
	TmpfsMountOption_DEFAULTS      TmpfsMountOption = "DEFAULTS"
	TmpfsMountOption_RO            TmpfsMountOption = "RO"
	TmpfsMountOption_RW            TmpfsMountOption = "RW"
	TmpfsMountOption_SUID          TmpfsMountOption = "SUID"
	TmpfsMountOption_NOSUID        TmpfsMountOption = "NOSUID"
	TmpfsMountOption_DEV           TmpfsMountOption = "DEV"
	TmpfsMountOption_NODEV         TmpfsMountOption = "NODEV"
	TmpfsMountOption_EXEC          TmpfsMountOption = "EXEC"
	TmpfsMountOption_NOEXEC        TmpfsMountOption = "NOEXEC"
	TmpfsMountOption_SYNC          TmpfsMountOption = "SYNC"
	TmpfsMountOption_ASYNC         TmpfsMountOption = "ASYNC"
	TmpfsMountOption_DIRSYNC       TmpfsMountOption = "DIRSYNC"
	TmpfsMountOption_REMOUNT       TmpfsMountOption = "REMOUNT"
	TmpfsMountOption_MAND          TmpfsMountOption = "MAND"
	TmpfsMountOption_NOMAND        TmpfsMountOption = "NOMAND"
	TmpfsMountOption_ATIME         TmpfsMountOption = "ATIME"
	TmpfsMountOption_NOATIME       TmpfsMountOption = "NOATIME"
	TmpfsMountOption_DIRATIME      TmpfsMountOption = "DIRATIME"
	TmpfsMountOption_NODIRATIME    TmpfsMountOption = "NODIRATIME"
	TmpfsMountOption_BIND          TmpfsMountOption = "BIND"
	TmpfsMountOption_RBIND         TmpfsMountOption = "RBIND"
	TmpfsMountOption_UNBINDABLE    TmpfsMountOption = "UNBINDABLE"
	TmpfsMountOption_RUNBINDABLE   TmpfsMountOption = "RUNBINDABLE"
	TmpfsMountOption_PRIVATE       TmpfsMountOption = "PRIVATE"
	TmpfsMountOption_RPRIVATE      TmpfsMountOption = "RPRIVATE"
	TmpfsMountOption_SHARED        TmpfsMountOption = "SHARED"
	TmpfsMountOption_RSHARED       TmpfsMountOption = "RSHARED"
	TmpfsMountOption_SLAVE         TmpfsMountOption = "SLAVE"
	TmpfsMountOption_RSLAVE        TmpfsMountOption = "RSLAVE"
	TmpfsMountOption_RELATIME      TmpfsMountOption = "RELATIME"
	TmpfsMountOption_NORELATIME    TmpfsMountOption = "NORELATIME"
	TmpfsMountOption_STRICTATIME   TmpfsMountOption = "STRICTATIME"
	TmpfsMountOption_NOSTRICTATIME TmpfsMountOption = "NOSTRICTATIME"
	TmpfsMountOption_MODE          TmpfsMountOption = "MODE"
	TmpfsMountOption_UID           TmpfsMountOption = "UID"
	TmpfsMountOption_GID           TmpfsMountOption = "GID"
	TmpfsMountOption_NR_INODES     TmpfsMountOption = "NR_INODES"
	TmpfsMountOption_NR_BLOCKS     TmpfsMountOption = "NR_BLOCKS"
	TmpfsMountOption_MPOL          TmpfsMountOption = "MPOL"
)

type Ulimit added in v2.96.0

type Ulimit struct {
	// The hard limit for this resource.
	//
	// The container will
	// be terminated if it exceeds this limit.
	HardLimit *float64 `field:"required" json:"hardLimit" yaml:"hardLimit"`
	// The resource to limit.
	Name UlimitName `field:"required" json:"name" yaml:"name"`
	// The reservation for this resource.
	//
	// The container will
	// not be terminated if it exceeds this limit.
	SoftLimit *float64 `field:"required" json:"softLimit" yaml:"softLimit"`
}

Sets limits for a resource with `ulimit` on Linux systems.

Used by the Docker daemon.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

ulimit := &Ulimit{
	HardLimit: jsii.Number(123),
	Name: awscdk.Aws_batch.UlimitName_CORE,
	SoftLimit: jsii.Number(123),
}

type UlimitName added in v2.96.0

type UlimitName string

The resources to be limited.

const (
	// max core dump file size.
	UlimitName_CORE UlimitName = "CORE"
	// max cpu time (seconds) for a process.
	UlimitName_CPU UlimitName = "CPU"
	// max data segment size.
	UlimitName_DATA UlimitName = "DATA"
	// max file size.
	UlimitName_FSIZE UlimitName = "FSIZE"
	// max number of file locks.
	UlimitName_LOCKS UlimitName = "LOCKS"
	// max locked memory.
	UlimitName_MEMLOCK UlimitName = "MEMLOCK"
	// max POSIX message queue size.
	UlimitName_MSGQUEUE UlimitName = "MSGQUEUE"
	// max nice value for any process this user is running.
	UlimitName_NICE UlimitName = "NICE"
	// maximum number of open file descriptors.
	UlimitName_NOFILE UlimitName = "NOFILE"
	// maximum number of processes.
	UlimitName_NPROC UlimitName = "NPROC"
	// size of the process' resident set (in pages).
	UlimitName_RSS UlimitName = "RSS"
	// max realtime priority.
	UlimitName_RTPRIO UlimitName = "RTPRIO"
	// timeout for realtime tasks.
	UlimitName_RTTIME UlimitName = "RTTIME"
	// max number of pending signals.
	UlimitName_SIGPENDING UlimitName = "SIGPENDING"
	// max stack size (in bytes).
	UlimitName_STACK UlimitName = "STACK"
)

type UnmanagedComputeEnvironment added in v2.96.0

type UnmanagedComputeEnvironment interface {
	awscdk.Resource
	IComputeEnvironment
	IUnmanagedComputeEnvironment
	// The ARN of this compute environment.
	ComputeEnvironmentArn() *string
	// The name of the ComputeEnvironment.
	ComputeEnvironmentName() *string
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	Enabled() *bool
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// The tree node.
	Node() constructs.Node
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//   cross-environment scenarios.
	PhysicalName() *string
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	ServiceRole() awsiam.IRole
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// The vCPUs this Compute Environment provides. Used only by the scheduler to schedule jobs in `Queue`s that use `FairshareSchedulingPolicy`s.
	//
	// **If this parameter is not provided on a fairshare queue, no capacity is reserved**;
	// that is, the `FairshareSchedulingPolicy` is ignored.
	UnmanagedvCPUs() *float64
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	ToString() *string
}

Unmanaged ComputeEnvironments do not provision or manage EC2 instances on your behalf.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var role role

unmanagedComputeEnvironment := awscdk.Aws_batch.NewUnmanagedComputeEnvironment(this, jsii.String("MyUnmanagedComputeEnvironment"), &UnmanagedComputeEnvironmentProps{
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	ServiceRole: role,
	UnmanagedvCpus: jsii.Number(123),
})

func NewUnmanagedComputeEnvironment added in v2.96.0

func NewUnmanagedComputeEnvironment(scope constructs.Construct, id *string, props *UnmanagedComputeEnvironmentProps) UnmanagedComputeEnvironment

type UnmanagedComputeEnvironmentProps added in v2.96.0

type UnmanagedComputeEnvironmentProps struct {
	// The name of the ComputeEnvironment.
	// Default: - generated by CloudFormation.
	//
	ComputeEnvironmentName *string `field:"optional" json:"computeEnvironmentName" yaml:"computeEnvironmentName"`
	// Whether or not this ComputeEnvironment can accept jobs from a Queue.
	//
	// Enabled ComputeEnvironments can accept jobs from a Queue and
	// can scale instances up or down.
	// Disabled ComputeEnvironments cannot accept jobs from a Queue or
	// scale instances up or down.
	//
	// If you change a ComputeEnvironment from enabled to disabled while it is executing jobs,
	// Jobs in the `STARTED` or `RUNNING` states will not
	// be interrupted. As jobs complete, the ComputeEnvironment will scale instances down to `minvCpus`.
	//
	// To ensure you aren't billed for unused capacity, set `minvCpus` to `0`.
	// Default: true.
	//
	Enabled *bool `field:"optional" json:"enabled" yaml:"enabled"`
	// The role Batch uses to perform actions on your behalf in your account, such as provision instances to run your jobs.
	// Default: - a serviceRole will be created for managed CEs, none for unmanaged CEs.
	//
	ServiceRole awsiam.IRole `field:"optional" json:"serviceRole" yaml:"serviceRole"`
	// The vCPUs this Compute Environment provides. Used only by the scheduler to schedule jobs in `Queue`s that use `FairshareSchedulingPolicy`s.
	//
	// **If this parameter is not provided on a fairshare queue, no capacity is reserved**;
	// that is, the `FairshareSchedulingPolicy` is ignored.
	// Default: 0.
	//
	UnmanagedvCpus *float64 `field:"optional" json:"unmanagedvCpus" yaml:"unmanagedvCpus"`
}

Represents an UnmanagedComputeEnvironment.

Batch will not provision instances on your behalf in this ComputeEnvironment.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var role role

unmanagedComputeEnvironmentProps := &UnmanagedComputeEnvironmentProps{
	ComputeEnvironmentName: jsii.String("computeEnvironmentName"),
	Enabled: jsii.Boolean(false),
	ServiceRole: role,
	UnmanagedvCpus: jsii.Number(123),
}
