awseks

package
v2.48.0
Published: Oct 27, 2022 License: Apache-2.0 Imports: 14 Imported by: 7

README

Amazon EKS Construct Library

This construct library allows you to define Amazon Elastic Kubernetes Service (EKS) clusters. In addition, the library supports defining Kubernetes resource manifests within EKS clusters.

Quick Start

This example defines an Amazon EKS cluster with the default configuration and applies a sample Kubernetes pod manifest to it:

// provisioning a cluster
cluster := eks.NewCluster(this, jsii.String("hello-eks"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
})

// apply a kubernetes manifest to the cluster
cluster.addManifest(jsii.String("mypod"), map[string]interface{}{
	"apiVersion": jsii.String("v1"),
	"kind": jsii.String("Pod"),
	"metadata": map[string]*string{
		"name": jsii.String("mypod"),
	},
	"spec": map[string][]map[string]interface{}{
		"containers": []map[string]interface{}{
			map[string]interface{}{
				"name": jsii.String("hello"),
				"image": jsii.String("paulbouwer/hello-kubernetes:1.5"),
				"ports": []map[string]*f64{
					map[string]*f64{
						"containerPort": jsii.Number(8080),
					},
				},
			},
		},
	},
})

In order to interact with your cluster through kubectl, you can use the aws eks update-kubeconfig AWS CLI command to configure your local kubeconfig. The EKS module will define a CloudFormation output in your stack which contains the command to run. For example:

Outputs:
ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy

Execute the aws eks update-kubeconfig ... command in your terminal to create or update a local kubeconfig context:

$ aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
Added new context arn:aws:eks:rrrrr:112233445566:cluster/cluster-xxxxx to /home/boom/.kube/config

And now you can simply use kubectl:

$ kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/aws-node-fpmwv             1/1     Running   0          21m
pod/aws-node-m9htf             1/1     Running   0          21m
pod/coredns-5cb4fb54c7-q222j   1/1     Running   0          23m
pod/coredns-5cb4fb54c7-v9nxx   1/1     Running   0          23m
...

Architectural Overview

The following is a qualitative diagram of the various possible components involved in the cluster deployment.

 +-----------------------------------------------+               +-----------------+
 |                 EKS Cluster                   |    kubectl    |                 |
 |-----------------------------------------------|<-------------+| Kubectl Handler |
 |                                               |               |                 |
 |                                               |               +-----------------+
 | +--------------------+    +-----------------+ |
 | |                    |    |                 | |
 | | Managed Node Group |    | Fargate Profile | |               +-----------------+
 | |                    |    |                 | |               |                 |
 | +--------------------+    +-----------------+ |               | Cluster Handler |
 |                                               |               |                 |
 +-----------------------------------------------+               +-----------------+
    ^                                   ^                          +
    |                                   |                          |
    | connect self managed capacity     |                          | aws-sdk
    |                                   | create/update/delete     |
    +                                   |                          v
 +--------------------+                 +              +-------------------+
 |                    |                 --------------+| eks.amazonaws.com |
 | Auto Scaling Group |                                +-------------------+
 |                    |
 +--------------------+

In a nutshell:

  • EKS Cluster - The cluster endpoint created by EKS.
  • Managed Node Group - EC2 worker nodes managed by EKS.
  • Fargate Profile - Fargate worker nodes managed by EKS.
  • Auto Scaling Group - EC2 worker nodes managed by the user.
  • KubectlHandler - Lambda function for invoking kubectl commands on the cluster - created by CDK.
  • ClusterHandler - Lambda function for interacting with EKS API to manage the cluster lifecycle - created by CDK.

A more detailed breakdown of each is provided further down this README.

Provisioning clusters

Creating a new cluster is done using the Cluster or FargateCluster constructs. The only required property is the Kubernetes version.

eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
})

You can also use FargateCluster to provision a cluster that uses only Fargate workers.

eks.NewFargateCluster(this, jsii.String("HelloEKS"), &fargateClusterProps{
	version: eks.kubernetesVersion_V1_21(),
})

NOTE: Only 1 cluster per stack is supported. If you have a use-case for multiple clusters per stack, or would like to understand more about this limitation, see https://github.com/aws/aws-cdk/issues/10073.

Below you'll find a few important cluster configuration options. The first is capacity: the amount and type of worker nodes that are available to the cluster for deploying resources. Amazon EKS offers 3 ways of configuring capacity, which you can combine as you like:

Managed node groups

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters. With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.

For more details visit Amazon EKS Managed Node Groups.

Managed Node Groups are the recommended way to allocate cluster capacity.

By default, this library will allocate a managed node group with 2 m5.large instances (this instance type suits most common use-cases, and is good value for money).

At cluster instantiation time, you can customize the number of instances and their type:

eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	defaultCapacity: jsii.Number(5),
	defaultCapacityInstance: ec2.instanceType.of(ec2.instanceClass_M5, ec2.instanceSize_SMALL),
})

To access the node group that was created on your behalf, you can use cluster.defaultNodegroup.
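
For reference, here is a short sketch in the same style as the examples above that exports the name of that node group; the output name is illustrative, and defaultNodegroup is only set when the default capacity type is a managed node group:

cluster := eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
})

// reference the managed node group created on your behalf
awscdk.NewCfnOutput(this, jsii.String("DefaultNodegroupName"), &cfnOutputProps{
	value: cluster.defaultNodegroup.nodegroupName,
})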

Additional customizations are available post instantiation. To apply them, set the default capacity to 0, and use the cluster.addNodegroupCapacity method:

cluster := eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	defaultCapacity: jsii.Number(0),
})

cluster.addNodegroupCapacity(jsii.String("custom-node-group"), &nodegroupOptions{
	instanceTypes: []instanceType{
		ec2.NewInstanceType(jsii.String("m5.large")),
	},
	minSize: jsii.Number(4),
	diskSize: jsii.Number(100),
	amiType: eks.nodegroupAmiType_AL2_X86_64_GPU,
})

To set node taints, use the taints option.

var cluster cluster

cluster.addNodegroupCapacity(jsii.String("custom-node-group"), &nodegroupOptions{
	instanceTypes: []instanceType{
		ec2.NewInstanceType(jsii.String("m5.large")),
	},
	taints: []taintSpec{
		&taintSpec{
			effect: eks.taintEffect_NO_SCHEDULE,
			key: jsii.String("foo"),
			value: jsii.String("bar"),
		},
	},
})
Spot Instances Support

Use capacityType to create managed node groups comprised of spot instances. To maximize the availability of your applications while using Spot Instances, we recommend that you configure a Spot managed node group to use multiple instance types with the instanceTypes property.

For more details visit Managed node group capacity types.

var cluster cluster

cluster.addNodegroupCapacity(jsii.String("extra-ng-spot"), &nodegroupOptions{
	instanceTypes: []instanceType{
		ec2.NewInstanceType(jsii.String("c5.large")),
		ec2.NewInstanceType(jsii.String("c5a.large")),
		ec2.NewInstanceType(jsii.String("c5d.large")),
	},
	minSize: jsii.Number(3),
	capacityType: eks.capacityType_SPOT,
})
Launch Template Support

You can specify a launch template that the node group will use. For example, this can be useful if you want to use a custom AMI or add custom user data.

When supplying a custom user data script, it must be encoded in the MIME multi-part archive format, since Amazon EKS merges it with its own user data. Visit the Launch Template Docs for more details.

var cluster cluster


userData := "MIME-Version: 1.0\nContent-Type: multipart/mixed; boundary=\"==MYBOUNDARY==\"\n\n--==MYBOUNDARY==\nContent-Type: text/x-shellscript; charset=\"us-ascii\"\n\n#!/bin/bash\necho \"Running custom user data script\"\n\n--==MYBOUNDARY==--\\\n"
lt := ec2.NewCfnLaunchTemplate(this, jsii.String("LaunchTemplate"), &cfnLaunchTemplateProps{
	launchTemplateData: &launchTemplateDataProperty{
		instanceType: jsii.String("t3.small"),
		userData: awscdk.Fn.base64(userData),
	},
})

cluster.addNodegroupCapacity(jsii.String("extra-ng"), &nodegroupOptions{
	launchTemplateSpec: &launchTemplateSpec{
		id: lt.ref,
		version: lt.attrLatestVersionNumber,
	},
})

Note that when using a custom AMI, Amazon EKS doesn't merge any user data. This means you do not need the multi-part encoding, and you are responsible for supplying the bootstrap commands required for nodes to join the cluster. In the following example, /etc/eks/bootstrap.sh from the AMI will be used to bootstrap the node.

var cluster cluster

userData := ec2.userData.forLinux()
userData.addCommands(jsii.String("set -o xtrace"),
fmt.Sprintf("/etc/eks/bootstrap.sh %v", cluster.clusterName))
lt := ec2.NewCfnLaunchTemplate(this, jsii.String("LaunchTemplate"), &cfnLaunchTemplateProps{
	launchTemplateData: &launchTemplateDataProperty{
		imageId: jsii.String("some-ami-id"), // custom AMI
		instanceType: jsii.String("t3.small"),
		userData: awscdk.Fn.base64(userData.render()),
	},
})
cluster.addNodegroupCapacity(jsii.String("extra-ng"), &nodegroupOptions{
	launchTemplateSpec: &launchTemplateSpec{
		id: lt.ref,
		version: lt.attrLatestVersionNumber,
	},
})

You may specify one instanceType in the launch template or multiple instanceTypes in the node group, but not both.

For more details visit Launch Template Support.

Graviton 2 instance types (such as c6g, m6g, r6g and t4g) are supported, as are Graviton 3 instance types (such as c7g).

Fargate profiles

AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers. With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimize cluster packing.

You can control which pods start on Fargate and how they run with Fargate Profiles, which are defined as part of your Amazon EKS cluster.

See Fargate Considerations in the AWS EKS User Guide.

You can add Fargate Profiles to any EKS cluster defined in your CDK app through the addFargateProfile() method. The following example adds a profile that will match all pods from the "default" namespace:

var cluster cluster

cluster.addFargateProfile(jsii.String("MyProfile"), &fargateProfileOptions{
	selectors: []selector{
		&selector{
			namespace: jsii.String("default"),
		},
	},
})

You can also directly use the FargateProfile construct to create profiles under different scopes:

var cluster cluster

eks.NewFargateProfile(this, jsii.String("MyProfile"), &fargateProfileProps{
	cluster: cluster,
	selectors: []selector{
		&selector{
			namespace: jsii.String("default"),
		},
	},
})

To create an EKS cluster that only uses Fargate capacity, you can use FargateCluster. The following code defines an Amazon EKS cluster with a default Fargate Profile that matches all pods from the "kube-system" and "default" namespaces. It is also configured to run CoreDNS on Fargate.

cluster := eks.NewFargateCluster(this, jsii.String("MyCluster"), &fargateClusterProps{
	version: eks.kubernetesVersion_V1_21(),
})

FargateCluster will create a default FargateProfile which can be accessed via the cluster's defaultProfile property. The created profile can also be customized by passing options as with addFargateProfile.
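
For example, a minimal sketch that reads the default profile's name after the cluster is created (the output name is illustrative):

cluster := eks.NewFargateCluster(this, jsii.String("MyCluster"), &fargateClusterProps{
	version: eks.kubernetesVersion_V1_21(),
})

// the default Fargate profile created on your behalf
awscdk.NewCfnOutput(this, jsii.String("DefaultProfileName"), &cfnOutputProps{
	value: cluster.defaultProfile.fargateProfileName,
})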

NOTE: Classic Load Balancers and Network Load Balancers are not supported on pods running on Fargate. For ingress, we recommend that you use the ALB Ingress Controller on Amazon EKS (minimum version v1.1.4).

Self-managed nodes

Another way of allocating capacity to an EKS cluster is by using self-managed nodes. EC2 instances that are part of the auto-scaling group will serve as worker nodes for the cluster. This type of capacity is also commonly referred to as EC2 Capacity or EC2 Nodes.

For a detailed overview please visit Self Managed Nodes.

Creating an auto-scaling group and connecting it to the cluster is done using the cluster.addAutoScalingGroupCapacity method:

var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("frontend-nodes"), &autoScalingGroupCapacityOptions{
	instanceType: ec2.NewInstanceType(jsii.String("t2.medium")),
	minCapacity: jsii.Number(3),
	vpcSubnets: &subnetSelection{
		subnetType: ec2.subnetType_PUBLIC,
	},
})

To connect an already initialized auto-scaling group, use the cluster.connectAutoScalingGroupCapacity() method:

var cluster cluster
var asg autoScalingGroup

cluster.connectAutoScalingGroupCapacity(asg, &autoScalingGroupOptions{
})

To connect a self-managed node group to an imported cluster, use the cluster.connectAutoScalingGroupCapacity() method:

var cluster cluster
var asg autoScalingGroup

importedCluster := eks.cluster.fromClusterAttributes(this, jsii.String("ImportedCluster"), &clusterAttributes{
	clusterName: cluster.clusterName,
	clusterSecurityGroupId: cluster.clusterSecurityGroupId,
})

importedCluster.connectAutoScalingGroupCapacity(asg, &autoScalingGroupOptions{
})

In both cases, the cluster security group will be automatically attached to the auto-scaling group, allowing for traffic to flow freely between managed and self-managed nodes.

Note: The default updateType for auto-scaling groups does not replace existing nodes. Since security groups are determined at launch time, self-managed nodes that were provisioned with version 1.78.0 or lower will not be updated. To apply the new configuration to all your self-managed nodes, you'll need to replace the nodes using the UpdateType.REPLACING_UPDATE policy for the updateType property.

You can customize the /etc/eks/bootstrap.sh script, which is responsible for bootstrapping the node to the EKS cluster. For example, you can use kubeletExtraArgs to add custom node labels or taints.

var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("spot"), &autoScalingGroupCapacityOptions{
	instanceType: ec2.NewInstanceType(jsii.String("t3.large")),
	minCapacity: jsii.Number(2),
	bootstrapOptions: &bootstrapOptions{
		kubeletExtraArgs: jsii.String("--node-labels foo=bar,goo=far"),
		awsApiRetryAttempts: jsii.Number(5),
	},
})

To disable bootstrapping altogether (i.e. to fully customize user-data), set bootstrapEnabled to false. You can also configure the cluster to use an auto-scaling group as the default capacity:

cluster := eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	defaultCapacityType: eks.defaultCapacityType_EC2,
})

This will allocate an auto-scaling group with 2 m5.large instances (this instance type suits most common use-cases, and is good value for money). To access the AutoScalingGroup that was created on your behalf, you can use cluster.defaultCapacity. You can also independently create an AutoScalingGroup and connect it to the cluster using the cluster.connectAutoScalingGroupCapacity method:

var cluster cluster
var asg autoScalingGroup

cluster.connectAutoScalingGroupCapacity(asg, &autoScalingGroupOptions{
})

This will add the necessary user-data to access the apiserver and configure all connections, roles, and tags needed for the instances in the auto-scaling group to properly join the cluster.
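
As noted above, bootstrapping can also be disabled entirely via bootstrapEnabled, in which case nodes will not join the cluster unless you supply your own user data. A brief sketch (the group name and sizes are illustrative):

var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("custom-bootstrap-nodes"), &autoScalingGroupCapacityOptions{
	instanceType: ec2.NewInstanceType(jsii.String("t3.large")),
	minCapacity: jsii.Number(2),
	// skip the default bootstrap script; supply your own user data instead
	bootstrapEnabled: jsii.Boolean(false),
})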

Spot Instances

When using self-managed nodes, you can configure the capacity to use spot instances, greatly reducing capacity cost. To enable spot capacity, use the spotPrice property:

var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("spot"), &autoScalingGroupCapacityOptions{
	spotPrice: jsii.String("0.1094"),
	instanceType: ec2.NewInstanceType(jsii.String("t3.large")),
	maxCapacity: jsii.Number(10),
})

Spot instance nodes will be labeled with lifecycle=Ec2Spot and tainted with PreferNoSchedule.

The AWS Node Termination Handler DaemonSet will be installed from the Amazon EKS Helm chart repository on these nodes. The termination handler ensures that the Kubernetes control plane responds appropriately to events that can cause your EC2 instance to become unavailable, such as EC2 maintenance events and EC2 Spot interruptions, and helps gracefully stop all pods running on spot nodes that are about to be terminated.

Handler Version: 1.7.0

Chart Version: 0.9.5

To disable the installation of the termination handler, set the spotInterruptHandler property to false. This applies both to addAutoScalingGroupCapacity and connectAutoScalingGroupCapacity.
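
A brief sketch of disabling the handler (the group name and values are illustrative):

var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("spot-no-handler"), &autoScalingGroupCapacityOptions{
	instanceType: ec2.NewInstanceType(jsii.String("t3.large")),
	spotPrice: jsii.String("0.1094"),
	// do not install the AWS Node Termination Handler on this capacity
	spotInterruptHandler: jsii.Boolean(false),
})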

Bottlerocket

Bottlerocket is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts.

Bottlerocket is supported when using managed nodegroups or self-managed auto-scaling groups.

To create a Bottlerocket managed nodegroup:

var cluster cluster

cluster.addNodegroupCapacity(jsii.String("BottlerocketNG"), &nodegroupOptions{
	amiType: eks.nodegroupAmiType_BOTTLEROCKET_X86_64,
})

The following example will create an auto-scaling group of 2 t3.small Linux instances running with the Bottlerocket AMI.

var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("BottlerocketNodes"), &autoScalingGroupCapacityOptions{
	instanceType: ec2.NewInstanceType(jsii.String("t3.small")),
	minCapacity: jsii.Number(2),
	machineImageType: eks.machineImageType_BOTTLEROCKET,
})

The specific Bottlerocket AMI variant will be auto-selected according to the k8s version for the x86_64 architecture. For example, if the Amazon EKS cluster version is 1.17, the Bottlerocket AMI variant will be auto-selected as aws-k8s-1.17 behind the scenes.

See Variants for more details.

Please note that Bottlerocket does not support customizing bootstrap options, and the bootstrapOptions property is not supported when you create Bottlerocket capacity.

For more details about Bottlerocket, see Bottlerocket FAQs and Bottlerocket Open Source Blog.

Endpoint Access

When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl).

By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).

You can configure the cluster endpoint access by using the endpointAccess property:

cluster := eks.NewCluster(this, jsii.String("hello-eks"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	endpointAccess: eks.endpointAccess_PRIVATE(),
})

The default value is eks.EndpointAccess.PUBLIC_AND_PRIVATE, which means the cluster endpoint is accessible from outside of your VPC, but worker node traffic and kubectl commands issued by this library stay within your VPC.

Alb Controller

Some Kubernetes resources are commonly implemented on AWS with the help of the ALB Controller.

From the docs:

AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.

  • It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
  • It satisfies Kubernetes Service resources by provisioning Network Load Balancers.

To deploy the controller on your EKS cluster, configure the albController property:

eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	albController: &albControllerOptions{
		version: eks.albControllerVersion_V2_4_1(),
	},
})

Querying the controller pods should look something like this:

❯ kubectl get pods -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-76bd6c7586-d929p   1/1     Running   0          109m
aws-load-balancer-controller-76bd6c7586-fqxph   1/1     Running   0          109m
...
...

Every Kubernetes manifest that utilizes the ALB Controller is effectively dependent on the controller. If the controller is deleted before the manifest, it might result in dangling ELB/ALB resources. Currently, the EKS construct library does not detect such dependencies, so they should be declared explicitly.

For example:

var cluster cluster

manifest := cluster.addManifest(jsii.String("manifest"), map[string]interface{}{
})
if cluster.albController {
	manifest.node.addDependency(cluster.albController)
}
VPC Support

You can specify the VPC of the cluster using the vpc and vpcSubnets properties:

var vpc vpc


eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	vpc: vpc,
	vpcSubnets: []subnetSelection{
		&subnetSelection{
			subnetType: ec2.subnetType_PRIVATE_WITH_EGRESS,
		},
	},
})

Note: Isolated VPCs (i.e. with no internet access) are not currently supported. See https://github.com/aws/aws-cdk/issues/12171

If you do not specify a VPC, one will be created on your behalf, which you can then access via cluster.vpc. The cluster VPC will be associated with any EKS managed capacity (i.e. Managed Node Groups and Fargate Profiles).

Please note that the vpcSubnets property defines the subnets where EKS will place the control plane ENIs. To choose the subnets where EKS will place the worker nodes, please refer to the Provisioning clusters section above.

If you allocate self-managed capacity, you can specify which subnets the auto-scaling group should use:

var vpc vpc
var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("nodes"), &autoScalingGroupCapacityOptions{
	vpcSubnets: &subnetSelection{
		subnets: vpc.privateSubnets,
	},
	instanceType: ec2.NewInstanceType(jsii.String("t2.medium")),
})

There are two additional components you might want to provision within the VPC.

Kubectl Handler

The KubectlHandler is a Lambda function responsible for issuing kubectl and helm commands against the cluster when you add resource manifests to the cluster.

The handler association to the VPC is derived from the endpointAccess configuration. The rule of thumb is: If the cluster VPC can be associated, it will be.

Breaking this down, it means that if the endpoint exposes private access (via EndpointAccess.PRIVATE or EndpointAccess.PUBLIC_AND_PRIVATE), and the VPC contains private subnets, the Lambda function will be provisioned inside the VPC and use the private subnets to interact with the cluster. This is the common use-case.

If the endpoint does not expose private access (via EndpointAccess.PUBLIC) or the VPC does not contain private subnets, the function will not be provisioned within the VPC.

If your use-case requires control over the IAM role that the KubeCtl Handler assumes, a custom role can be passed through the ClusterProps (as kubectlLambdaRole) of the EKS Cluster construct.
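
A minimal sketch of passing such a role (the role itself is assumed to be defined elsewhere in your app):

var kubectlRole role

eks.NewCluster(this, jsii.String("hello-eks"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	// use a custom IAM role for the Kubectl Handler
	kubectlLambdaRole: kubectlRole,
})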

Cluster Handler

The ClusterHandler is a set of Lambda functions (onEventHandler, isCompleteHandler) responsible for interacting with the EKS API in order to control the cluster lifecycle. To provision these functions inside the VPC, set the placeClusterHandlerInVpc property to true. This will place the functions inside the private subnets of the VPC based on the selection strategy specified in the vpcSubnets property.
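
For example, a short sketch that places the Cluster Handler functions inside an existing VPC:

var vpc vpc

eks.NewCluster(this, jsii.String("hello-eks"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	vpc: vpc,
	// provision the cluster lifecycle handlers inside the VPC's private subnets
	placeClusterHandlerInVpc: jsii.Boolean(true),
})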

You can configure the environment of the Cluster Handler functions by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:

var proxyInstanceSecurityGroup securityGroup

cluster := eks.NewCluster(this, jsii.String("hello-eks"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	clusterHandlerEnvironment: map[string]*string{
		"https_proxy": jsii.String("http://proxy.myproxy.com"),
	},
	/**
	   * If the proxy is not open publicly, you can pass a security group to the
	   * Cluster Handler Lambdas so that it can reach the proxy.
	   */
	clusterHandlerSecurityGroup: proxyInstanceSecurityGroup,
})
Kubectl Support

The resources are created in the cluster by running kubectl apply from a Python Lambda function.

By default, the CDK will create a new Python Lambda function to apply your k8s manifests. If you want to use an existing kubectl provider function, for example one with tightly scoped trusted entities on its IAM role, you can import the existing provider and then use the imported provider when importing the cluster:

handlerRole := iam.role.fromRoleArn(this, jsii.String("HandlerRole"), jsii.String("arn:aws:iam::123456789012:role/lambda-role"))
kubectlProvider := eks.kubectlProvider.fromKubectlProviderAttributes(this, jsii.String("KubectlProvider"), &kubectlProviderAttributes{
	functionArn: jsii.String("arn:aws:lambda:us-east-2:123456789012:function:my-function:1"),
	kubectlRoleArn: jsii.String("arn:aws:iam::123456789012:role/kubectl-role"),
	handlerRole: handlerRole,
})

cluster := eks.cluster.fromClusterAttributes(this, jsii.String("Cluster"), &clusterAttributes{
	clusterName: jsii.String("cluster"),
	kubectlProvider: kubectlProvider,
})
Environment

You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:

cluster := eks.NewCluster(this, jsii.String("hello-eks"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	kubectlEnvironment: map[string]*string{
		"http_proxy": jsii.String("http://proxy.myproxy.com"),
	},
})
Runtime

The kubectl handler uses kubectl, helm and the aws CLI in order to interact with the cluster. These are bundled into AWS Lambda layers included in the @aws-cdk/lambda-layer-awscli and @aws-cdk/lambda-layer-kubectl modules.

The version of kubectl used must be compatible with the Kubernetes version of the cluster. kubectl is supported within one minor version (older or newer) of Kubernetes (see the Kubernetes version skew policy). Only version 1.20 of kubectl is available in aws-cdk-lib. If you need a different version, you will need to use one of the @aws-cdk/lambda-layer-kubectlvXY packages.

// Example automatically generated from non-compiling source. May contain errors.
import "github.com/aws-samples/dummy/awscdkliblambdalayerkubectlv22"


cluster := eks.NewCluster(this, jsii.String("hello-eks"), &clusterProps{
	version: eks.kubernetesVersion_V1_22(),
	kubectlLayer: *awscdkliblambdalayerkubectlv22.NewKubectlV22Layer(this, jsii.String("kubectl")),
})

You can also specify a custom lambda.LayerVersion if you wish to use a different version of these tools, or a version not available in any of the @aws-cdk/lambda-layer-kubectlvXY packages. The handler expects the layer to include the following two executables:

helm/helm
kubectl/kubectl

See more information in the Dockerfile for @aws-cdk/lambda-layer-awscli and the Dockerfile for @aws-cdk/lambda-layer-kubectl.

layer := lambda.NewLayerVersion(this, jsii.String("KubectlLayer"), &layerVersionProps{
	code: lambda.code.fromAsset(jsii.String("layer.zip")),
})

Now specify when the cluster is defined:

var layer layerVersion
var vpc vpc


cluster1 := eks.NewCluster(this, jsii.String("MyCluster"), &clusterProps{
	kubectlLayer: layer,
	vpc: vpc,
	clusterName: jsii.String("cluster-name"),
	version: eks.kubernetesVersion_V1_21(),
})

// or
cluster2 := eks.cluster.fromClusterAttributes(this, jsii.String("MyCluster"), &clusterAttributes{
	kubectlLayer: layer,
	vpc: vpc,
	clusterName: jsii.String("cluster-name"),
})
Memory

By default, the kubectl provider is configured with 1024MiB of memory. You can use the kubectlMemory option to specify the memory size for the AWS Lambda function:

var vpc vpc

eks.NewCluster(this, jsii.String("MyCluster"), &clusterProps{
	kubectlMemory: awscdk.Size.gibibytes(jsii.Number(4)),
	version: eks.kubernetesVersion_V1_21(),
})

// or
eks.cluster.fromClusterAttributes(this, jsii.String("MyCluster"), &clusterAttributes{
	kubectlMemory: awscdk.Size.gibibytes(jsii.Number(4)),
	vpc: vpc,
	clusterName: jsii.String("cluster-name"),
})
ARM64 Support

Instance types with ARM64 architecture are supported in both managed nodegroup and self-managed capacity. Simply specify an ARM64 instanceType (such as m6g.medium), and the latest Amazon Linux 2 AMI for ARM64 will be automatically selected.

var cluster cluster

// add a managed ARM64 nodegroup
cluster.addNodegroupCapacity(jsii.String("extra-ng-arm"), &nodegroupOptions{
	instanceTypes: []instanceType{
		ec2.NewInstanceType(jsii.String("m6g.medium")),
	},
	minSize: jsii.Number(2),
})

// add a self-managed ARM64 nodegroup
cluster.addAutoScalingGroupCapacity(jsii.String("self-ng-arm"), &autoScalingGroupCapacityOptions{
	instanceType: ec2.NewInstanceType(jsii.String("m6g.medium")),
	minCapacity: jsii.Number(2),
})
Masters Role

When you create a cluster, you can specify a mastersRole. The Cluster construct will associate this role with the system:masters RBAC group, giving it super-user access to the cluster.

var role role

eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	mastersRole: role,
})

If you do not specify it, a default role will be created on your behalf, which can be assumed by anyone in the account with sts:AssumeRole permissions for this role.

This is the role you see as part of the stack outputs mentioned in the Quick Start.

$ aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
Added new context arn:aws:eks:rrrrr:112233445566:cluster/cluster-xxxxx to /home/boom/.kube/config
Encryption

When you create an Amazon EKS cluster, envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS) can be enabled. The documentation on creating a cluster can provide more details about the customer master key (CMK) that can be used for the encryption.

You can use the secretsEncryptionKey to configure which key the cluster will use to encrypt Kubernetes secrets. By default, an AWS Managed key will be used.

This setting can only be specified when the cluster is created and cannot be updated.

secretsKey := kms.NewKey(this, jsii.String("SecretsKey"))
cluster := eks.NewCluster(this, jsii.String("MyCluster"), &clusterProps{
	secretsEncryptionKey: secretsKey,
	version: eks.kubernetesVersion_V1_21(),
})

You can also use a similar configuration for running a cluster built using the FargateCluster construct.

secretsKey := kms.NewKey(this, jsii.String("SecretsKey"))
cluster := eks.NewFargateCluster(this, jsii.String("MyFargateCluster"), &fargateClusterProps{
	secretsEncryptionKey: secretsKey,
	version: eks.kubernetesVersion_V1_21(),
})

The Amazon Resource Name (ARN) for that CMK can be retrieved.

var cluster cluster

clusterEncryptionConfigKeyArn := cluster.clusterEncryptionConfigKeyArn

Permissions and Security

Amazon EKS provides several mechanisms for securing the cluster and granting permissions to specific IAM users and roles.

AWS IAM Mapping

As described in the Amazon EKS User Guide, you can map AWS IAM users and roles to Kubernetes Role-based access control (RBAC).

The Amazon EKS construct manages the aws-auth ConfigMap Kubernetes resource on your behalf and exposes an API through cluster.awsAuth for mapping users, roles and accounts.

Furthermore, when auto-scaling group capacity is added to the cluster, the IAM instance role of the auto-scaling group will be automatically mapped to RBAC so nodes can connect to the cluster. No manual mapping is required.

For example, let's say you want to grant an IAM user administrative privileges on your cluster:

var cluster cluster

adminUser := iam.NewUser(this, jsii.String("Admin"))
cluster.awsAuth.addUserMapping(adminUser, &awsAuthMapping{
	groups: []*string{
		jsii.String("system:masters"),
	},
})

A convenience method for mapping a role to the system:masters group is also available:

var cluster cluster
var role role

cluster.awsAuth.addMastersRole(role)
Cluster Security Group

When you create an Amazon EKS cluster, a cluster security group is automatically created as well. This security group is designed to allow all traffic from the control plane and managed node groups to flow freely between each other.

The ID for that security group can be retrieved after creating the cluster.

var cluster cluster

clusterSecurityGroupId := cluster.clusterSecurityGroupId
Node SSH Access

If you want to be able to SSH into your worker nodes, you must already have an SSH key in the region you're connecting to and pass it when you add capacity to the cluster. You must also be able to connect to the hosts (meaning they must have a public IP and you should be allowed to connect to them on port 22):

See SSH into nodes for a code example.
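
As a rough sketch, the key pair is passed via keyName when adding capacity (the key pair name and sizes below are placeholders):

var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("ssh-nodes"), &autoScalingGroupCapacityOptions{
	instanceType: ec2.NewInstanceType(jsii.String("t3.large")),
	minCapacity: jsii.Number(2),
	// name of an existing EC2 key pair in this region
	keyName: jsii.String("my-key-pair"),
	vpcSubnets: &subnetSelection{
		subnetType: ec2.subnetType_PUBLIC,
	},
})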

If you want to SSH into nodes in a private subnet, you should set up a bastion host in a public subnet. That setup is recommended, but is unfortunately beyond the scope of this documentation.

Service Accounts

With service accounts you can provide Kubernetes Pods access to AWS resources.

var cluster cluster

// add service account
serviceAccount := cluster.addServiceAccount(jsii.String("MyServiceAccount"))

bucket := s3.NewBucket(this, jsii.String("Bucket"))
bucket.grantReadWrite(serviceAccount)

mypod := cluster.addManifest(jsii.String("mypod"), map[string]interface{}{
	"apiVersion": jsii.String("v1"),
	"kind": jsii.String("Pod"),
	"metadata": map[string]*string{
		"name": jsii.String("mypod"),
	},
	"spec": map[string]interface{}{
		"serviceAccountName": serviceAccount.serviceAccountName,
		"containers": []map[string]interface{}{
			map[string]interface{}{
				"name": jsii.String("hello"),
				"image": jsii.String("paulbouwer/hello-kubernetes:1.5"),
				"ports": []map[string]*f64{
					map[string]*f64{
						"containerPort": jsii.Number(8080),
					},
				},
			},
		},
	},
})

// create the resource after the service account.
mypod.node.addDependency(serviceAccount)

// print the IAM role arn for this service account
awscdk.NewCfnOutput(this, jsii.String("ServiceAccountIamRole"), &cfnOutputProps{
	value: serviceAccount.role.roleArn,
})

Note that using serviceAccount.serviceAccountName above does not translate into a resource dependency. This is why an explicit dependency is needed. See https://github.com/aws/aws-cdk/issues/9910 for more details.

It is possible to pass annotations and labels to the service account.

var cluster cluster

// add service account with annotations and labels
serviceAccount := cluster.addServiceAccount(jsii.String("MyServiceAccount"), &serviceAccountOptions{
	annotations: map[string]*string{
		"eks.amazonaws.com/sts-regional-endpoints": jsii.String("false"),
	},
	labels: map[string]*string{
		"some-label": jsii.String("with-some-value"),
	},
})

You can also add service accounts to existing clusters. To do so, pass the openIdConnectProvider property when you import the cluster into the application.

// you can import an existing provider
provider := eks.openIdConnectProvider.fromOpenIdConnectProviderArn(this, jsii.String("Provider"), jsii.String("arn:aws:iam::123456:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/AB123456ABC"))

// or create a new one using an existing issuer url
var issuerUrl string
provider2 := eks.NewOpenIdConnectProvider(this, jsii.String("Provider2"), &openIdConnectProviderProps{
	url: issuerUrl,
})

cluster := eks.cluster.fromClusterAttributes(this, jsii.String("MyCluster"), &clusterAttributes{
	clusterName: jsii.String("Cluster"),
	openIdConnectProvider: provider,
	kubectlRoleArn: jsii.String("arn:aws:iam::123456:role/service-role/k8sservicerole"),
})

serviceAccount := cluster.addServiceAccount(jsii.String("MyServiceAccount"))

bucket := s3.NewBucket(this, jsii.String("Bucket"))
bucket.grantReadWrite(serviceAccount)

Note that adding service accounts requires running kubectl commands against the cluster. This means you must also pass the kubectlRoleArn when importing the cluster. See Using existing Clusters.

Applying Kubernetes Resources

The library supports several popular resource deployment mechanisms, described in the sections below.

Kubernetes Manifests

The KubernetesManifest construct or cluster.addManifest method can be used to apply Kubernetes resource manifests to this cluster.

When using cluster.addManifest, the manifest construct is defined within the cluster's stack scope. If the manifest contains attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error. To avoid this, directly use new KubernetesManifest to create the manifest in the scope of the other stack.

The following examples will deploy the paulbouwer/hello-kubernetes service on the cluster:

var cluster cluster

appLabel := map[string]*string{
	"app": jsii.String("hello-kubernetes"),
}

deployment := map[string]interface{}{
	"apiVersion": jsii.String("apps/v1"),
	"kind": jsii.String("Deployment"),
	"metadata": map[string]*string{
		"name": jsii.String("hello-kubernetes"),
	},
	"spec": map[string]interface{}{
		"replicas": jsii.Number(3),
		"selector": map[string]map[string]*string{
			"matchLabels": appLabel,
		},
		"template": map[string]map[string]map[string]*string{
			"metadata": map[string]map[string]*string{
				"labels": appLabel,
			},
			"spec": map[string][]map[string]interface{}{
				"containers": []map[string]interface{}{
					map[string]interface{}{
						"name": jsii.String("hello-kubernetes"),
						"image": jsii.String("paulbouwer/hello-kubernetes:1.5"),
						"ports": []map[string]*f64{
							map[string]*f64{
								"containerPort": jsii.Number(8080),
							},
						},
					},
				},
			},
		},
	},
}

service := map[string]interface{}{
	"apiVersion": jsii.String("v1"),
	"kind": jsii.String("Service"),
	"metadata": map[string]*string{
		"name": jsii.String("hello-kubernetes"),
	},
	"spec": map[string]interface{}{
		"type": jsii.String("LoadBalancer"),
		"ports": []map[string]*f64{
			map[string]*f64{
				"port": jsii.Number(80),
				"targetPort": jsii.Number(8080),
			},
		},
		"selector": appLabel,
	},
}

// option 1: use a construct
eks.NewKubernetesManifest(this, jsii.String("hello-kub"), &kubernetesManifestProps{
	cluster: cluster,
	manifest: []map[string]interface{}{
		deployment,
		service,
	},
})

// or, option2: use `addManifest`
cluster.addManifest(jsii.String("hello-kub"), service, deployment)
ALB Controller Integration

The KubernetesManifest construct can detect ingress resources inside your manifest and automatically add the necessary annotations so they are picked up by the ALB Controller.

See Alb Controller

To that end, it offers the following properties (see the sketch after this list):

  • ingressAlb - Signal that the ingress detection should be done.
  • ingressAlbScheme - Which ALB scheme should be applied. Defaults to internal.
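
A brief sketch of both options (the manifest variable and construct ID are placeholders):

var cluster cluster
var ingressManifest map[string]interface{}

eks.NewKubernetesManifest(this, jsii.String("IngressManifest"), &kubernetesManifestProps{
	cluster: cluster,
	manifest: []map[string]interface{}{
		ingressManifest,
	},
	// detect ingress resources and annotate them for the ALB Controller
	ingressAlb: jsii.Boolean(true),
	ingressAlbScheme: eks.albScheme_INTERNET_FACING,
})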
Adding resources from a URL

The following example will deploy a resource manifest hosted on a remote server:

// This example is only available in TypeScript

import * as yaml from 'js-yaml';
import * as request from 'sync-request';

declare const cluster: eks.Cluster;
const manifestUrl = 'https://url/of/manifest.yaml';
const manifest = yaml.safeLoadAll(request('GET', manifestUrl).getBody());
cluster.addManifest('my-resource', manifest);
Dependencies

There are cases where Kubernetes resources must be deployed in a specific order. For example, you cannot define a resource in a Kubernetes namespace before the namespace was created.

You can represent dependencies between KubernetesManifests using resource.node.addDependency():

var cluster cluster

namespace := cluster.addManifest(jsii.String("my-namespace"), map[string]interface{}{
	"apiVersion": jsii.String("v1"),
	"kind": jsii.String("Namespace"),
	"metadata": map[string]*string{
		"name": jsii.String("my-app"),
	},
})

service := cluster.addManifest(jsii.String("my-service"), map[string]interface{}{
	"metadata": map[string]*string{
		"name": jsii.String("myservice"),
		"namespace": jsii.String("my-app"),
	},
	"spec": map[string]interface{}{
	},
})

service.node.addDependency(namespace)

NOTE: when a KubernetesManifest includes multiple resources (either directly or through cluster.addManifest()) (e.g. cluster.addManifest('foo', r1, r2, r3,...)), these resources will be applied as a single manifest via kubectl and will be applied sequentially (the standard behavior in kubectl).


Kubernetes manifests are implemented as CloudFormation resources in the CDK. This means that if the manifest is deleted from your code (or the stack is deleted), the next cdk deploy will issue a kubectl delete command and the Kubernetes resources in that manifest will be deleted.

Resource Pruning

When a resource is deleted from a Kubernetes manifest, the EKS module will automatically delete it from the cluster. This is done by injecting a prune label into all manifest resources, which is then passed to kubectl apply --prune.

Pruning is enabled by default but can be disabled through the prune option when a cluster is defined:

eks.NewCluster(this, jsii.String("MyCluster"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	prune: jsii.Boolean(false),
})
Manifests Validation

The kubectl CLI supports applying a manifest by skipping the validation. This can be accomplished by setting the skipValidation flag to true in the KubernetesManifest props.

var cluster cluster

eks.NewKubernetesManifest(this, jsii.String("HelloAppWithoutValidation"), &kubernetesManifestProps{
	cluster: cluster,
	manifest: []map[string]interface{}{
		map[string]interface{}{
			"foo": jsii.String("bar"),
		},
	},
	skipValidation: jsii.Boolean(true),
})
Helm Charts

The HelmChart construct or cluster.addHelmChart method can be used to add Kubernetes resources to this cluster using Helm.

When using cluster.addHelmChart, the manifest construct is defined within the cluster's stack scope. If the manifest contains attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error. To avoid this, directly use new HelmChart to create the chart in the scope of the other stack.

The following example will install the NGINX Ingress Controller to your cluster using Helm.

var cluster cluster

// option 1: use a construct
eks.NewHelmChart(this, jsii.String("NginxIngress"), &helmChartProps{
	cluster: cluster,
	chart: jsii.String("nginx-ingress"),
	repository: jsii.String("https://helm.nginx.com/stable"),
	namespace: jsii.String("kube-system"),
})

// or, option2: use `addHelmChart`
cluster.addHelmChart(jsii.String("NginxIngress"), &helmChartOptions{
	chart: jsii.String("nginx-ingress"),
	repository: jsii.String("https://helm.nginx.com/stable"),
	namespace: jsii.String("kube-system"),
})

Helm charts will be installed and updated using helm upgrade --install, where a few parameters are being passed down (such as repo, values, version, namespace, wait, timeout, etc). This means that if the chart is added to CDK with the same release name, it will try to update the chart in the cluster.

Additionally, the chartAsset property can be an aws-s3-assets.Asset. This allows the use of local, private helm charts.

import s3Assets "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster

chartAsset := s3Assets.NewAsset(this, jsii.String("ChartAsset"), &assetProps{
	path: jsii.String("/path/to/asset"),
})

cluster.addHelmChart(jsii.String("test-chart"), &helmChartOptions{
	chartAsset: chartAsset,
})

Nested values passed to the values parameter should be provided as a nested dictionary:

// Example automatically generated from non-compiling source. May contain errors.
cluster.addHelmChart(jsii.String("ExternalSecretsOperator"), map[string]interface{}{
	"chart": jsii.String("external-secrets"),
	"release": jsii.String("external-secrets"),
	"repository": jsii.String("https://charts.external-secrets.io"),
	"namespace": jsii.String("external-secrets"),
	"values": map[string]interface{}{
		"installCRDs": jsii.Boolean(true),
		"webhook": map[string]*f64{
			"port": jsii.Number(9443),
		},
	},
})
OCI Charts

OCI charts are also supported. Replace the ${VARS} in the following example with appropriate values.

var cluster cluster

// option 1: use a construct
eks.NewHelmChart(this, jsii.String("MyOCIChart"), &helmChartProps{
	cluster: cluster,
	chart: jsii.String("some-chart"),
	repository: jsii.String("oci://${ACCOUNT_ID}.dkr.ecr.${ACCOUNT_REGION}.amazonaws.com/${REPO_NAME}"),
	namespace: jsii.String("oci"),
	version: jsii.String("0.0.1"),
})

Helm charts are implemented as CloudFormation resources in CDK. This means that if the chart is deleted from your code (or the stack is deleted), the next cdk deploy will issue a helm uninstall command and the Helm chart will be deleted.

When there is no release defined, a unique ID will be allocated for the release based on the construct path.

By default, all Helm charts will be installed concurrently. In some cases, this could cause race conditions where two Helm charts attempt to deploy the same resource or if Helm charts depend on each other. You can use chart.node.addDependency() in order to declare a dependency order between charts:

var cluster cluster

chart1 := cluster.addHelmChart(jsii.String("MyChart"), &helmChartOptions{
	chart: jsii.String("foo"),
})
chart2 := cluster.addHelmChart(jsii.String("MyOtherChart"), &helmChartOptions{
	chart: jsii.String("bar"),
})

chart2.node.addDependency(chart1)
CDK8s Charts

CDK8s is an open-source library that enables Kubernetes manifest authoring using familiar programming languages. It is founded on the same technologies as the AWS CDK, such as constructs and jsii.

To learn more about cdk8s, visit the Getting Started tutorials.

The EKS module natively integrates with cdk8s and allows you to apply cdk8s charts on AWS EKS clusters via the cluster.addCdk8sChart method.

In addition to cdk8s, you can also use cdk8s+, which provides higher-level abstractions for the core Kubernetes API objects. You can think of it like the L2 constructs for Kubernetes. Any other cdk8s based libraries are also supported, for example cdk8s-debore.

To get started, add the following dependencies to your package.json file:

"dependencies": {
  "cdk8s": "^2.0.0",
  "cdk8s-plus-22": "^2.0.0-rc.30",
  "constructs": "^10.0.0"
}

Note that here we are using cdk8s-plus-22 as we are targeting Kubernetes version 1.22.0. If you operate a different kubernetes version, you should use the corresponding cdk8s-plus-XX library. See Select the appropriate cdk8s+ library for more details.

Similarly to how you would create a stack by extending aws-cdk-lib.Stack, we recommend you create a chart of your own that extends cdk8s.Chart, and add your kubernetes resources to it. You can use aws-cdk construct attributes and properties inside your cdk8s construct freely.

In this example we create a chart that accepts an s3.Bucket and passes its name to a kubernetes pod as an environment variable.

my-chart.ts

// Example automatically generated from non-compiling source. May contain errors.
import "github.com/aws/aws-cdk-go/awscdk"
import constructs "github.com/aws/constructs-go/constructs"
import cdk8s "github.com/cdk8s-team/cdk8s-core-go/cdk8s"
import kplus "github.com/aws-samples/dummy/cdk8splus22"

type myChartProps struct {
	bucket bucket
}

type MyChart struct {
	chart
}

func NewMyChart(scope construct, id *string, props myChartProps) *MyChart {
	this := &MyChart{}
	cdk8s.NewChart_Override(this, scope, id)

	kplus.NewPod(this, jsii.String("Pod"), map[string][]map[string]interface{}{
		"containers": []map[string]interface{}{
			map[string]interface{}{
				"image": jsii.String("my-image"),
				"env": map[string]interface{}{
					"BUCKET_NAME": kplus.EnvValue_fromValue(*props.bucket.bucketName),
				},
			},
		},
	})
	return this
}

Then, in your AWS CDK app:

// Example automatically generated from non-compiling source. May contain errors.
var cluster cluster


// some bucket..
bucket := s3.NewBucket(this, jsii.String("Bucket"))

// create a cdk8s chart and use `cdk8s.App` as the scope.
myChart := NewMyChart(cdk8s.NewApp(), jsii.String("MyChart"), &myChartProps{
	bucket: bucket,
})

// add the cdk8s chart to the cluster
cluster.addCdk8sChart(jsii.String("my-chart"), myChart)
Custom CDK8s Constructs

You can also compose a few stock cdk8s+ constructs into your own custom construct. However, since mixing scopes between aws-cdk and cdk8s is currently not supported, the Construct class you'll need to use is the one from the constructs module, and not from @aws-cdk/core like you normally would. This is why we used new cdk8s.App() as the scope of the chart above.

import constructs "github.com/aws/constructs-go/constructs"
import cdk8s "github.com/cdk8s-team/cdk8s-core-go/cdk8s"
import kplus "github.com/cdk8s-team/cdk8s-plus-go/cdk8splus21"

type loadBalancedWebServiceProps struct {
	port *f64
	image *string
	replicas *f64
}

app := cdk8s.NewApp()
chart := cdk8s.NewChart(app, jsii.String("my-chart"))

type loadBalancedWebService struct {
	construct
}

func NewLoadBalancedWebService(scope construct, id *string, props loadBalancedWebServiceProps) *loadBalancedWebService {
	this := &loadBalancedWebService{}
	constructs.NewConstruct_Override(this, scope, id)

	deployment := kplus.NewDeployment(chart, jsii.String("Deployment"), &deploymentProps{
		replicas: props.replicas,
		containers: []containerProps{
			kplus.NewContainer(&containerProps{
				image: props.image,
			}),
		},
	})

	deployment.exposeViaService(&exposeDeploymentViaServiceOptions{
		port: props.port,
		serviceType: kplus.serviceType_LOAD_BALANCER,
	})
	return this
}
Manually importing k8s specs and CRDs

If you find yourself unable to use cdk8s+, or would just like to directly use the k8s native objects or CRDs, you can do so by manually importing them using the cdk8s-cli.

See Importing kubernetes objects for detailed instructions.

Patching Kubernetes Resources

The KubernetesPatch construct can be used to update existing kubernetes resources. The following example can be used to patch the hello-kubernetes deployment from the example above with 5 replicas.

var cluster cluster

eks.NewKubernetesPatch(this, jsii.String("hello-kub-deployment-label"), &kubernetesPatchProps{
	cluster: cluster,
	resourceName: jsii.String("deployment/hello-kubernetes"),
	applyPatch: map[string]interface{}{
		"spec": map[string]*f64{
			"replicas": jsii.Number(5),
		},
	},
	restorePatch: map[string]interface{}{
		"spec": map[string]*f64{
			"replicas": jsii.Number(3),
		},
	},
})

Querying Kubernetes Resources

The KubernetesObjectValue construct can be used to query for information about kubernetes objects, and use that as part of your CDK application.

For example, you can fetch the address of a LoadBalancer type service:

var cluster cluster

// query the load balancer address
myServiceAddress := eks.NewKubernetesObjectValue(this, jsii.String("LoadBalancerAttribute"), &kubernetesObjectValueProps{
	cluster: cluster,
	objectType: jsii.String("service"),
	objectName: jsii.String("my-service"),
	jsonPath: jsii.String(".status.loadBalancer.ingress[0].hostname"),
})

// pass the address to a lambda function
proxyFunction := lambda.NewFunction(this, jsii.String("ProxyFunction"), &functionProps{
	handler: jsii.String("index.handler"),
	code: lambda.code.fromInline(jsii.String("my-code")),
	runtime: lambda.runtime_NODEJS_14_X(),
	environment: map[string]*string{
		"myServiceAddress": myServiceAddress.value,
	},
})

Specifically, since the above use-case is quite common, there is an easier way to access that information:

var cluster cluster

loadBalancerAddress := cluster.getServiceLoadBalancerAddress(jsii.String("my-service"))

Using existing clusters

The Amazon EKS library allows defining Kubernetes resources such as Kubernetes manifests and Helm charts on clusters that are not defined as part of your CDK app.

First, you'll need to "import" a cluster to your CDK app. To do that, use the eks.Cluster.fromClusterAttributes() static method:

cluster := eks.cluster.fromClusterAttributes(this, jsii.String("MyCluster"), &clusterAttributes{
	clusterName: jsii.String("my-cluster-name"),
	kubectlRoleArn: jsii.String("arn:aws:iam::1111111:role/iam-role-that-has-masters-access"),
})

Then, you can use addManifest or addHelmChart to define resources inside your Kubernetes cluster. For example:

var cluster cluster

cluster.addManifest(jsii.String("Test"), map[string]interface{}{
	"apiVersion": jsii.String("v1"),
	"kind": jsii.String("ConfigMap"),
	"metadata": map[string]*string{
		"name": jsii.String("myconfigmap"),
	},
	"data": map[string]*string{
		"Key": jsii.String("value"),
		"Another": jsii.String("123454"),
	},
})

At a minimum, when importing clusters for kubectl management, you will need to specify:

  • clusterName - the name of the cluster.
  • kubectlRoleArn - the ARN of an IAM role mapped to the system:masters RBAC role. If the cluster you are importing was created using the AWS CDK, the CloudFormation stack has an output that includes an IAM role that can be used. Otherwise, you can create an IAM role and map it to system:masters manually. The trust policy of this role should include the arn:aws:iam::${accountId}:root principal in order to allow the execution role of the kubectl resource to assume it, as in the sketch after this list.
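
A minimal sketch of such a role, created in the account that owns the cluster (the construct ID is illustrative; mapping it to system:masters is done separately, e.g. via awsAuth.addMastersRole):

// a role the kubectl resource's execution role can assume,
// suitable for mapping to system:masters
kubectlRole := iam.NewRole(this, jsii.String("KubectlRole"), &roleProps{
	assumedBy: iam.NewAccountRootPrincipal(),
})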

If the cluster is configured with private-only or private and restricted public Kubernetes endpoint access, you must also specify the following (see the sketch after this list):

  • kubectlSecurityGroupId - the ID of an EC2 security group that is allowed connections to the cluster's control security group. For example, the EKS managed cluster security group.
  • kubectlPrivateSubnetIds - a list of private VPC subnets IDs that will be used to access the Kubernetes endpoint.
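
A short sketch of importing such a cluster (all names and IDs below are placeholders):

cluster := eks.cluster.fromClusterAttributes(this, jsii.String("MyPrivateCluster"), &clusterAttributes{
	clusterName: jsii.String("my-cluster-name"),
	kubectlRoleArn: jsii.String("arn:aws:iam::1111111:role/iam-role-that-has-masters-access"),
	// a security group allowed to connect to the cluster's control plane security group
	kubectlSecurityGroupId: jsii.String("sg-1234567890abcdef0"),
	// private subnets with connectivity to the Kubernetes endpoint
	kubectlPrivateSubnetIds: []*string{
		jsii.String("subnet-11111111"),
		jsii.String("subnet-22222222"),
	},
})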

Logging

EKS supports cluster logging for 5 different types of events:

  • API requests to the cluster.
  • Cluster access via the Kubernetes API.
  • Authentication requests into the cluster.
  • State of cluster controllers.
  • Scheduling decisions.

You can enable logging for each one separately using the clusterLogging property. For example:

cluster := eks.NewCluster(this, jsii.String("Cluster"), &clusterProps{
	// ...
	version: eks.kubernetesVersion_V1_21(),
	clusterLogging: []clusterLoggingTypes{
		eks.*clusterLoggingTypes_API,
		eks.*clusterLoggingTypes_AUTHENTICATOR,
		eks.*clusterLoggingTypes_SCHEDULER,
	},
})

Known Issues and Limitations

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func AlbController_IsConstruct

func AlbController_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func AwsAuth_IsConstruct

func AwsAuth_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func CfnAddon_CFN_RESOURCE_TYPE_NAME

func CfnAddon_CFN_RESOURCE_TYPE_NAME() *string

func CfnAddon_IsCfnElement

func CfnAddon_IsCfnElement(x interface{}) *bool

Returns `true` if a construct is a stack element (i.e. part of the synthesized cloudformation template).

Uses duck-typing instead of `instanceof` to allow stack elements from different versions of this library to be included in the same stack.

Returns: The construct as a stack element or undefined if it is not a stack element.

func CfnAddon_IsCfnResource

func CfnAddon_IsCfnResource(construct constructs.IConstruct) *bool

Check whether the given construct is a CfnResource.

func CfnAddon_IsConstruct

func CfnAddon_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func CfnCluster_CFN_RESOURCE_TYPE_NAME

func CfnCluster_CFN_RESOURCE_TYPE_NAME() *string

func CfnCluster_IsCfnElement

func CfnCluster_IsCfnElement(x interface{}) *bool

Returns `true` if a construct is a stack element (i.e. part of the synthesized cloudformation template).

Uses duck-typing instead of `instanceof` to allow stack elements from different versions of this library to be included in the same stack.

Returns: The construct as a stack element or undefined if it is not a stack element.

func CfnCluster_IsCfnResource

func CfnCluster_IsCfnResource(construct constructs.IConstruct) *bool

Check whether the given construct is a CfnResource.

func CfnCluster_IsConstruct

func CfnCluster_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func CfnFargateProfile_CFN_RESOURCE_TYPE_NAME

func CfnFargateProfile_CFN_RESOURCE_TYPE_NAME() *string

func CfnFargateProfile_IsCfnElement

func CfnFargateProfile_IsCfnElement(x interface{}) *bool

Returns `true` if a construct is a stack element (i.e. part of the synthesized cloudformation template).

Uses duck-typing instead of `instanceof` to allow stack elements from different versions of this library to be included in the same stack.

Returns: The construct as a stack element or undefined if it is not a stack element.

func CfnFargateProfile_IsCfnResource

func CfnFargateProfile_IsCfnResource(construct constructs.IConstruct) *bool

Check whether the given construct is a CfnResource.

func CfnFargateProfile_IsConstruct

func CfnFargateProfile_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func CfnIdentityProviderConfig_CFN_RESOURCE_TYPE_NAME added in v2.16.0

func CfnIdentityProviderConfig_CFN_RESOURCE_TYPE_NAME() *string

func CfnIdentityProviderConfig_IsCfnElement added in v2.16.0

func CfnIdentityProviderConfig_IsCfnElement(x interface{}) *bool

Returns `true` if a construct is a stack element (i.e. part of the synthesized cloudformation template).

Uses duck-typing instead of `instanceof` to allow stack elements from different versions of this library to be included in the same stack.

Returns: The construct as a stack element or undefined if it is not a stack element.

func CfnIdentityProviderConfig_IsCfnResource added in v2.16.0

func CfnIdentityProviderConfig_IsCfnResource(construct constructs.IConstruct) *bool

Check whether the given construct is a CfnResource.

func CfnIdentityProviderConfig_IsConstruct added in v2.16.0

func CfnIdentityProviderConfig_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func CfnNodegroup_CFN_RESOURCE_TYPE_NAME

func CfnNodegroup_CFN_RESOURCE_TYPE_NAME() *string

func CfnNodegroup_IsCfnElement

func CfnNodegroup_IsCfnElement(x interface{}) *bool

Returns `true` if a construct is a stack element (i.e. part of the synthesized cloudformation template).

Uses duck-typing instead of `instanceof` to allow stack elements from different versions of this library to be included in the same stack.

Returns: The construct as a stack element or undefined if it is not a stack element.

func CfnNodegroup_IsCfnResource

func CfnNodegroup_IsCfnResource(construct constructs.IConstruct) *bool

Check whether the given construct is a CfnResource.

func CfnNodegroup_IsConstruct

func CfnNodegroup_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func Cluster_IsConstruct

func Cluster_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func Cluster_IsOwnedResource added in v2.32.0

func Cluster_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func Cluster_IsResource

func Cluster_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func FargateCluster_IsConstruct

func FargateCluster_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func FargateCluster_IsOwnedResource added in v2.32.0

func FargateCluster_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func FargateCluster_IsResource

func FargateCluster_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func FargateProfile_IsConstruct

func FargateProfile_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func HelmChart_IsConstruct

func HelmChart_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func HelmChart_RESOURCE_TYPE

func HelmChart_RESOURCE_TYPE() *string

func KubectlProvider_IsConstruct added in v2.4.0

func KubectlProvider_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func KubectlProvider_IsNestedStack added in v2.4.0

func KubectlProvider_IsNestedStack(x interface{}) *bool

Checks if `x` is an object of type `NestedStack`.

func KubectlProvider_IsStack added in v2.4.0

func KubectlProvider_IsStack(x interface{}) *bool

Return whether the given object is a Stack.

We do attribute detection since we can't reliably use 'instanceof'.

func KubectlProvider_Of added in v2.4.0

func KubectlProvider_Of(construct constructs.IConstruct) awscdk.Stack

Looks up the first stack scope in which `construct` is defined.

Fails if there is no stack up the tree.

func KubernetesManifest_IsConstruct

func KubernetesManifest_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func KubernetesManifest_RESOURCE_TYPE

func KubernetesManifest_RESOURCE_TYPE() *string

func KubernetesObjectValue_IsConstruct

func KubernetesObjectValue_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func KubernetesObjectValue_RESOURCE_TYPE

func KubernetesObjectValue_RESOURCE_TYPE() *string

func KubernetesPatch_IsConstruct

func KubernetesPatch_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func NewAlbController_Override

func NewAlbController_Override(a AlbController, scope constructs.Construct, id *string, props *AlbControllerProps)

func NewAwsAuth_Override

func NewAwsAuth_Override(a AwsAuth, scope constructs.Construct, id *string, props *AwsAuthProps)

func NewCfnAddon_Override

func NewCfnAddon_Override(c CfnAddon, scope constructs.Construct, id *string, props *CfnAddonProps)

Create a new `AWS::EKS::Addon`.

func NewCfnCluster_Override

func NewCfnCluster_Override(c CfnCluster, scope constructs.Construct, id *string, props *CfnClusterProps)

Create a new `AWS::EKS::Cluster`.

func NewCfnFargateProfile_Override

func NewCfnFargateProfile_Override(c CfnFargateProfile, scope constructs.Construct, id *string, props *CfnFargateProfileProps)

Create a new `AWS::EKS::FargateProfile`.

func NewCfnIdentityProviderConfig_Override added in v2.16.0

func NewCfnIdentityProviderConfig_Override(c CfnIdentityProviderConfig, scope constructs.Construct, id *string, props *CfnIdentityProviderConfigProps)

Create a new `AWS::EKS::IdentityProviderConfig`.

func NewCfnNodegroup_Override

func NewCfnNodegroup_Override(c CfnNodegroup, scope constructs.Construct, id *string, props *CfnNodegroupProps)

Create a new `AWS::EKS::Nodegroup`.

func NewCluster_Override

func NewCluster_Override(c Cluster, scope constructs.Construct, id *string, props *ClusterProps)

Initiates an EKS Cluster with the supplied arguments.

func NewEksOptimizedImage_Override

func NewEksOptimizedImage_Override(e EksOptimizedImage, props *EksOptimizedImageProps)

Constructs a new instance of the EksOptimizedImage class.

func NewFargateCluster_Override

func NewFargateCluster_Override(f FargateCluster, scope constructs.Construct, id *string, props *FargateClusterProps)

func NewFargateProfile_Override

func NewFargateProfile_Override(f FargateProfile, scope constructs.Construct, id *string, props *FargateProfileProps)

func NewHelmChart_Override

func NewHelmChart_Override(h HelmChart, scope constructs.Construct, id *string, props *HelmChartProps)

func NewKubectlProvider_Override added in v2.4.0

func NewKubectlProvider_Override(k KubectlProvider, scope constructs.Construct, id *string, props *KubectlProviderProps)

func NewKubernetesManifest_Override

func NewKubernetesManifest_Override(k KubernetesManifest, scope constructs.Construct, id *string, props *KubernetesManifestProps)

func NewKubernetesObjectValue_Override

func NewKubernetesObjectValue_Override(k KubernetesObjectValue, scope constructs.Construct, id *string, props *KubernetesObjectValueProps)

func NewKubernetesPatch_Override

func NewKubernetesPatch_Override(k KubernetesPatch, scope constructs.Construct, id *string, props *KubernetesPatchProps)

func NewNodegroup_Override

func NewNodegroup_Override(n Nodegroup, scope constructs.Construct, id *string, props *NodegroupProps)

func NewOpenIdConnectProvider_Override

func NewOpenIdConnectProvider_Override(o OpenIdConnectProvider, scope constructs.Construct, id *string, props *OpenIdConnectProviderProps)

Defines an OpenID Connect provider.

func NewServiceAccount_Override

func NewServiceAccount_Override(s ServiceAccount, scope constructs.Construct, id *string, props *ServiceAccountProps)

func Nodegroup_IsConstruct

func Nodegroup_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func Nodegroup_IsOwnedResource added in v2.32.0

func Nodegroup_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func Nodegroup_IsResource

func Nodegroup_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func OpenIdConnectProvider_FromOpenIdConnectProviderArn

func OpenIdConnectProvider_FromOpenIdConnectProviderArn(scope constructs.Construct, id *string, openIdConnectProviderArn *string) awsiam.IOpenIdConnectProvider

Imports an Open ID connect provider from an ARN.

func OpenIdConnectProvider_IsConstruct

func OpenIdConnectProvider_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

func OpenIdConnectProvider_IsOwnedResource added in v2.32.0

func OpenIdConnectProvider_IsOwnedResource(construct constructs.IConstruct) *bool

Returns true if the construct was created by CDK, and false otherwise.

func OpenIdConnectProvider_IsResource

func OpenIdConnectProvider_IsResource(construct constructs.IConstruct) *bool

Check whether the given construct is a Resource.

func ServiceAccount_IsConstruct

func ServiceAccount_IsConstruct(x interface{}) *bool

Checks if `x` is a construct.

Use this method instead of `instanceof` to properly detect `Construct` instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the `constructs` library on disk are seen as independent, completely different libraries. As a consequence, the class `Construct` in each copy of the `constructs` library is seen as a different class, and an instance of one class will not test as `instanceof` the other class. `npm install` will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the `constructs` library can be accidentally installed, and `instanceof` will behave unpredictably. It is safest to avoid using `instanceof`, and using this type-testing method instead.

Returns: true if `x` is an object created from a class which extends `Construct`.

Types

type AlbController

type AlbController interface {
	constructs.Construct
	// The tree node.
	Node() constructs.Node
	// Returns a string representation of this construct.
	ToString() *string
}

Construct for installing the AWS ALB Controller on EKS clusters.

Use the factory functions `get` and `getOrCreate` to obtain/create instances of this controller.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var albControllerVersion albControllerVersion
var cluster cluster
var policy interface{}

albController := awscdk.Aws_eks.NewAlbController(this, jsii.String("MyAlbController"), &albControllerProps{
	cluster: cluster,
	version: albControllerVersion,

	// the properties below are optional
	policy: policy,
	repository: jsii.String("repository"),
})

See: https://kubernetes-sigs.github.io/aws-load-balancer-controller

func AlbController_Create

func AlbController_Create(scope constructs.Construct, props *AlbControllerProps) AlbController

Create the controller construct associated with this cluster and scope.

Singleton per stack/cluster.
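
A minimal usage sketch, following the conventions of the other examples in this section (the variables below are placeholders you should change):

import "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster
var albControllerVersion albControllerVersion

albController := awscdk.Aws_eks.AlbController_Create(this, &albControllerProps{
	cluster: cluster,
	version: albControllerVersion,
})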

func NewAlbController

func NewAlbController(scope constructs.Construct, id *string, props *AlbControllerProps) AlbController

type AlbControllerOptions

type AlbControllerOptions struct {
	// Version of the controller.
	Version AlbControllerVersion `field:"required" json:"version" yaml:"version"`
	// The IAM policy to apply to the service account.
	//
	// If you're using one of the built-in versions, this is not required since
	// CDK ships with the appropriate policies for those versions.
	//
	// However, if you are using a custom version, this is required (and validated).
	Policy interface{} `field:"optional" json:"policy" yaml:"policy"`
	// The repository to pull the controller image from.
	//
	// Note that the default repository works for most regions, but not all.
	// If the repository is not applicable to your region, use a custom repository
	// according to the information here: https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases.
	Repository *string `field:"optional" json:"repository" yaml:"repository"`
}

Options for `AlbController`.

Example:

eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	albController: &albControllerOptions{
		version: eks.albControllerVersion_V2_4_1(),
	},
})

type AlbControllerProps

type AlbControllerProps struct {
	// Version of the controller.
	Version AlbControllerVersion `field:"required" json:"version" yaml:"version"`
	// The IAM policy to apply to the service account.
	//
	// If you're using one of the built-in versions, this is not required since
	// CDK ships with the appropriate policies for those versions.
	//
	// However, if you are using a custom version, this is required (and validated).
	Policy interface{} `field:"optional" json:"policy" yaml:"policy"`
	// The repository to pull the controller image from.
	//
	// Note that the default repository works for most regions, but not all.
	// If the repository is not applicable to your region, use a custom repository
	// according to the information here: https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases.
	Repository *string `field:"optional" json:"repository" yaml:"repository"`
	// [disable-awslint:ref-via-interface] Cluster to install the controller onto.
	Cluster Cluster `field:"required" json:"cluster" yaml:"cluster"`
}

Properties for `AlbController`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var albControllerVersion albControllerVersion
var cluster cluster
var policy interface{}

albControllerProps := &albControllerProps{
	cluster: cluster,
	version: albControllerVersion,

	// the properties below are optional
	policy: policy,
	repository: jsii.String("repository"),
}

type AlbControllerVersion

type AlbControllerVersion interface {
	// Whether or not it's a custom version.
	Custom() *bool
	// The version string.
	Version() *string
}

Controller version.

Corresponds to the image tag of the 'amazon/aws-load-balancer-controller' image.

Example:

eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	albController: &albControllerOptions{
		version: eks.albControllerVersion_V2_4_1(),
	},
})

func AlbControllerVersion_Of

func AlbControllerVersion_Of(version *string) AlbControllerVersion

Specify a custom version.

Use this if the version you need is not available in one of the predefined versions. Note that in this case, you will also need to provide an IAM policy in the controller options.
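
For instance, a sketch of pinning a custom controller version; the version string is a placeholder, and the policy document is assumed to be the IAM policy JSON published with that controller release:

var customPolicy interface{}

eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	albController: &albControllerOptions{
		version: eks.albControllerVersion_Of(jsii.String("v2.4.2")),
		policy: customPolicy,
	},
})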

func AlbControllerVersion_V2_0_0

func AlbControllerVersion_V2_0_0() AlbControllerVersion

func AlbControllerVersion_V2_0_1

func AlbControllerVersion_V2_0_1() AlbControllerVersion

func AlbControllerVersion_V2_1_0

func AlbControllerVersion_V2_1_0() AlbControllerVersion

func AlbControllerVersion_V2_1_1

func AlbControllerVersion_V2_1_1() AlbControllerVersion

func AlbControllerVersion_V2_1_2

func AlbControllerVersion_V2_1_2() AlbControllerVersion

func AlbControllerVersion_V2_1_3

func AlbControllerVersion_V2_1_3() AlbControllerVersion

func AlbControllerVersion_V2_2_0

func AlbControllerVersion_V2_2_0() AlbControllerVersion

func AlbControllerVersion_V2_2_1

func AlbControllerVersion_V2_2_1() AlbControllerVersion

func AlbControllerVersion_V2_2_2

func AlbControllerVersion_V2_2_2() AlbControllerVersion

func AlbControllerVersion_V2_2_3

func AlbControllerVersion_V2_2_3() AlbControllerVersion

func AlbControllerVersion_V2_2_4

func AlbControllerVersion_V2_2_4() AlbControllerVersion

func AlbControllerVersion_V2_3_0

func AlbControllerVersion_V2_3_0() AlbControllerVersion

func AlbControllerVersion_V2_3_1 added in v2.4.0

func AlbControllerVersion_V2_3_1() AlbControllerVersion

func AlbControllerVersion_V2_4_1 added in v2.20.0

func AlbControllerVersion_V2_4_1() AlbControllerVersion

type AlbScheme

type AlbScheme string

ALB Scheme. See: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/annotations/#scheme

const (
	// The nodes of an internal load balancer have only private IP addresses.
	//
	// The DNS name of an internal load balancer is publicly resolvable to the private IP addresses of the nodes.
	// Therefore, internal load balancers can only route requests from clients with access to the VPC for the load balancer.
	AlbScheme_INTERNAL AlbScheme = "INTERNAL"
	// An internet-facing load balancer has a publicly resolvable DNS name, so it can route requests from clients over the internet to the EC2 instances that are registered with the load balancer.
	AlbScheme_INTERNET_FACING AlbScheme = "INTERNET_FACING"
)

type AutoScalingGroupCapacityOptions

type AutoScalingGroupCapacityOptions struct {
	// Whether the instances can initiate connections to anywhere by default.
	AllowAllOutbound *bool `field:"optional" json:"allowAllOutbound" yaml:"allowAllOutbound"`
	// Whether instances in the Auto Scaling Group should have public IP addresses associated with them.
	//
	// `launchTemplate` and `mixedInstancesPolicy` must not be specified when this property is specified.
	AssociatePublicIpAddress *bool `field:"optional" json:"associatePublicIpAddress" yaml:"associatePublicIpAddress"`
	// The name of the Auto Scaling group.
	//
	// This name must be unique per Region per account.
	AutoScalingGroupName *string `field:"optional" json:"autoScalingGroupName" yaml:"autoScalingGroupName"`
	// Specifies how block devices are exposed to the instance. You can specify virtual devices and EBS volumes.
	//
	// Each instance that is launched has an associated root device volume,
	// either an Amazon EBS volume or an instance store volume.
	// You can use block device mappings to specify additional EBS volumes or
	// instance store volumes to attach to an instance when it is launched.
	//
	// `launchTemplate` and `mixedInstancesPolicy` must not be specified when this property is specified.
	// See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
	//
	BlockDevices *[]*awsautoscaling.BlockDevice `field:"optional" json:"blockDevices" yaml:"blockDevices"`
	// Default scaling cooldown for this AutoScalingGroup.
	Cooldown awscdk.Duration `field:"optional" json:"cooldown" yaml:"cooldown"`
	// Initial amount of instances in the fleet.
	//
	// If this is set to a number, every deployment will reset the amount of
	// instances to this number. It is recommended to leave this value blank.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html#cfn-as-group-desiredcapacity
	//
	DesiredCapacity *float64 `field:"optional" json:"desiredCapacity" yaml:"desiredCapacity"`
	// Enable monitoring for group metrics, these metrics describe the group rather than any of its instances.
	//
	// To report all group metrics use `GroupMetrics.all()`
	// Group metrics are reported in a granularity of 1 minute at no additional charge.
	GroupMetrics *[]awsautoscaling.GroupMetrics `field:"optional" json:"groupMetrics" yaml:"groupMetrics"`
	// Configuration for health checks.
	HealthCheck awsautoscaling.HealthCheck `field:"optional" json:"healthCheck" yaml:"healthCheck"`
	// If the ASG has scheduled actions, don't reset unchanged group sizes.
	//
	// Only used if the ASG has scheduled actions (which may scale your ASG up
	// or down regardless of cdk deployments). If true, the size of the group
	// will only be reset if it has been changed in the CDK app. If false, the
	// sizes will always be changed back to what they were in the CDK app
	// on deployment.
	IgnoreUnmodifiedSizeProperties *bool `field:"optional" json:"ignoreUnmodifiedSizeProperties" yaml:"ignoreUnmodifiedSizeProperties"`
	// Controls whether instances in this group are launched with detailed or basic monitoring.
	//
	// When detailed monitoring is enabled, Amazon CloudWatch generates metrics every minute and your account
	// is charged a fee. When you disable detailed monitoring, CloudWatch generates metrics every 5 minutes.
	//
	// `launchTemplate` and `mixedInstancesPolicy` must not be specified when this property is specified.
	// See: https://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-monitoring.html#enable-as-instance-metrics
	//
	InstanceMonitoring awsautoscaling.Monitoring `field:"optional" json:"instanceMonitoring" yaml:"instanceMonitoring"`
	// Name of SSH keypair to grant access to instances.
	//
	// `launchTemplate` and `mixedInstancesPolicy` must not be specified when this property is specified.
	KeyName *string `field:"optional" json:"keyName" yaml:"keyName"`
	// Maximum number of instances in the fleet.
	MaxCapacity *float64 `field:"optional" json:"maxCapacity" yaml:"maxCapacity"`
	// The maximum amount of time that an instance can be in service.
	//
	// The maximum duration applies
	// to all current and future instances in the group. As an instance approaches its maximum duration,
	// it is terminated and replaced, and cannot be used again.
	//
	// You must specify a value of at least 604,800 seconds (7 days). To clear a previously set value,
	// leave this property undefined.
	// See: https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-max-instance-lifetime.html
	//
	MaxInstanceLifetime awscdk.Duration `field:"optional" json:"maxInstanceLifetime" yaml:"maxInstanceLifetime"`
	// Minimum number of instances in the fleet.
	MinCapacity *float64 `field:"optional" json:"minCapacity" yaml:"minCapacity"`
	// Whether newly-launched instances are protected from termination by Amazon EC2 Auto Scaling when scaling in.
	//
	// By default, Auto Scaling can terminate an instance at any time after launch
	// when scaling in an Auto Scaling Group, subject to the group's termination
	// policy. However, you may wish to protect newly-launched instances from
	// being scaled in if they are going to run critical applications that should
	// not be prematurely terminated.
	//
	// This flag must be enabled if the Auto Scaling Group will be associated with
	// an ECS Capacity Provider with managed termination protection.
	NewInstancesProtectedFromScaleIn *bool `field:"optional" json:"newInstancesProtectedFromScaleIn" yaml:"newInstancesProtectedFromScaleIn"`
	// Configure autoscaling group to send notifications about fleet changes to an SNS topic(s).
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html#cfn-as-group-notificationconfigurations
	//
	Notifications *[]*awsautoscaling.NotificationConfiguration `field:"optional" json:"notifications" yaml:"notifications"`
	// Configure waiting for signals during deployment.
	//
	// Use this to pause the CloudFormation deployment to wait for the instances
	// in the AutoScalingGroup to report successful startup during
	// creation and updates. The UserData script needs to invoke `cfn-signal`
	// with a success or failure code after it is done setting up the instance.
	//
	// Without waiting for signals, the CloudFormation deployment will proceed as
	// soon as the AutoScalingGroup has been created or updated but before the
	// instances in the group have been started.
	//
	// For example, to have instances wait for an Elastic Load Balancing health check before
	// they signal success, add a health-check verification by using the
	// cfn-init helper script. For an example, see the verify_instance_health
	// command in the Auto Scaling rolling updates sample template:
	//
	// https://github.com/awslabs/aws-cloudformation-templates/blob/master/aws/services/AutoScaling/AutoScalingRollingUpdates.yaml
	Signals awsautoscaling.Signals `field:"optional" json:"signals" yaml:"signals"`
	// The maximum hourly price (in USD) to be paid for any Spot Instance launched to fulfill the request.
	//
	// Spot Instances are
	// launched when the price you specify exceeds the current Spot market price.
	//
	// `launchTemplate` and `mixedInstancesPolicy` must not be specified when this property is specified.
	SpotPrice *string `field:"optional" json:"spotPrice" yaml:"spotPrice"`
	// A policy or a list of policies that are used to select the instances to terminate.
	//
	// The policies are executed in the order that you list them.
	// See: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html
	//
	TerminationPolicies *[]awsautoscaling.TerminationPolicy `field:"optional" json:"terminationPolicies" yaml:"terminationPolicies"`
	// What to do when an AutoScalingGroup's instance configuration is changed.
	//
	// This is applied when any of the settings on the ASG are changed that
	// affect how the instances should be created (VPC, instance type, startup
	// scripts, etc.). It indicates how the existing instances should be
	// replaced with new instances matching the new config. By default, nothing
	// is done and only new instances are launched with the new config.
	UpdatePolicy awsautoscaling.UpdatePolicy `field:"optional" json:"updatePolicy" yaml:"updatePolicy"`
	// Where to place instances within the VPC.
	VpcSubnets *awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
	// Instance type of the instances to start.
	InstanceType awsec2.InstanceType `field:"required" json:"instanceType" yaml:"instanceType"`
	// Configures the EC2 user-data script for instances in this autoscaling group to bootstrap the node (invoke `/etc/eks/bootstrap.sh`) and associate it with the EKS cluster.
	//
	// If you wish to provide a custom user data script, set this to `false` and
	// manually invoke `autoscalingGroup.addUserData()`.
	BootstrapEnabled *bool `field:"optional" json:"bootstrapEnabled" yaml:"bootstrapEnabled"`
	// EKS node bootstrapping options.
	BootstrapOptions *BootstrapOptions `field:"optional" json:"bootstrapOptions" yaml:"bootstrapOptions"`
	// Machine image type.
	MachineImageType MachineImageType `field:"optional" json:"machineImageType" yaml:"machineImageType"`
	// Will automatically update the aws-auth ConfigMap to map the IAM instance role to RBAC.
	//
	// This cannot be explicitly set to `true` if the cluster has kubectl disabled.
	MapRole *bool `field:"optional" json:"mapRole" yaml:"mapRole"`
	// Installs the AWS spot instance interrupt handler on the cluster if it's not already added.
	//
	// Only relevant if `spotPrice` is used.
	SpotInterruptHandler *bool `field:"optional" json:"spotInterruptHandler" yaml:"spotInterruptHandler"`
}

Options for adding worker nodes.

Example:

var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("BottlerocketNodes"), &autoScalingGroupCapacityOptions{
	instanceType: ec2.NewInstanceType(jsii.String("t3.small")),
	minCapacity: jsii.Number(2),
	machineImageType: eks.machineImageType_BOTTLEROCKET,
})

type AutoScalingGroupOptions

type AutoScalingGroupOptions struct {
	// Configures the EC2 user-data script for instances in this autoscaling group to bootstrap the node (invoke `/etc/eks/bootstrap.sh`) and associate it with the EKS cluster.
	//
	// If you wish to provide a custom user data script, set this to `false` and
	// manually invoke `autoscalingGroup.addUserData()`.
	BootstrapEnabled *bool `field:"optional" json:"bootstrapEnabled" yaml:"bootstrapEnabled"`
	// Allows options for node bootstrapping through EC2 user data.
	BootstrapOptions *BootstrapOptions `field:"optional" json:"bootstrapOptions" yaml:"bootstrapOptions"`
	// Allow options to specify different machine image type.
	MachineImageType MachineImageType `field:"optional" json:"machineImageType" yaml:"machineImageType"`
	// Will automatically update the aws-auth ConfigMap to map the IAM instance role to RBAC.
	//
	// This cannot be explicitly set to `true` if the cluster has kubectl disabled.
	MapRole *bool `field:"optional" json:"mapRole" yaml:"mapRole"`
	// Installs the AWS spot instance interrupt handler on the cluster if it's not already added.
	//
	// Only relevant if `spotPrice` is configured on the auto-scaling group.
	SpotInterruptHandler *bool `field:"optional" json:"spotInterruptHandler" yaml:"spotInterruptHandler"`
}

Options for adding an AutoScalingGroup as capacity.

Example:

var cluster cluster
var asg autoScalingGroup

cluster.connectAutoScalingGroupCapacity(asg, &autoScalingGroupOptions{
})

type AwsAuth

type AwsAuth interface {
	constructs.Construct
	// The tree node.
	Node() constructs.Node
	// Additional AWS account to add to the aws-auth configmap.
	AddAccount(accountId *string)
	// Adds the specified IAM role to the `system:masters` RBAC group, which means that anyone that can assume it will be able to administer this Kubernetes system.
	AddMastersRole(role awsiam.IRole, username *string)
	// Adds a mapping between an IAM role to a Kubernetes user and groups.
	AddRoleMapping(role awsiam.IRole, mapping *AwsAuthMapping)
	// Adds a mapping between an IAM user to a Kubernetes user and groups.
	AddUserMapping(user awsiam.IUser, mapping *AwsAuthMapping)
	// Returns a string representation of this construct.
	ToString() *string
}

Manages mapping between IAM users and roles to Kubernetes RBAC configuration.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster

awsAuth := awscdk.Aws_eks.NewAwsAuth(this, jsii.String("MyAwsAuth"), &awsAuthProps{
	cluster: cluster,
})

See: https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html
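
In most cases this construct is reached through the cluster's awsAuth property rather than instantiated directly. A sketch of mapping an existing IAM role (the role variable, group, and user names are placeholders):

var cluster cluster
var role role

// grant the role cluster-admin permissions via the system:masters RBAC group
cluster.awsAuth.addMastersRole(role, jsii.String("admin-role"))

// or map it to arbitrary Kubernetes users and groups
cluster.awsAuth.addRoleMapping(role, &awsAuthMapping{
	groups: []*string{
		jsii.String("system:nodes"),
	},
	username: jsii.String("node-role"),
})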

func NewAwsAuth

func NewAwsAuth(scope constructs.Construct, id *string, props *AwsAuthProps) AwsAuth

type AwsAuthMapping

type AwsAuthMapping struct {
	// A list of groups within Kubernetes to which the role is mapped.
	// See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings
	//
	Groups *[]*string `field:"required" json:"groups" yaml:"groups"`
	// The user name within Kubernetes to map to the IAM role.
	Username *string `field:"optional" json:"username" yaml:"username"`
}

AwsAuth mapping.

Example:

var cluster cluster

adminUser := iam.NewUser(this, jsii.String("Admin"))
cluster.awsAuth.addUserMapping(adminUser, &awsAuthMapping{
	groups: []*string{
		jsii.String("system:masters"),
	},
})

type AwsAuthProps

type AwsAuthProps struct {
	// The EKS cluster to apply this configuration to.
	//
	// [disable-awslint:ref-via-interface].
	Cluster Cluster `field:"required" json:"cluster" yaml:"cluster"`
}

Configuration props for the AwsAuth construct.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster

awsAuthProps := &awsAuthProps{
	cluster: cluster,
}

type BootstrapOptions

type BootstrapOptions struct {
	// Additional command line arguments to pass to the `/etc/eks/bootstrap.sh` command.
	// See: https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh
	//
	AdditionalArgs *string `field:"optional" json:"additionalArgs" yaml:"additionalArgs"`
	// Number of retry attempts for AWS API call (DescribeCluster).
	AwsApiRetryAttempts *float64 `field:"optional" json:"awsApiRetryAttempts" yaml:"awsApiRetryAttempts"`
	// Overrides the IP address to use for DNS queries within the cluster.
	DnsClusterIp *string `field:"optional" json:"dnsClusterIp" yaml:"dnsClusterIp"`
	// The contents of the `/etc/docker/daemon.json` file. Useful if you want a custom config differing from the default one in the EKS AMI.
	DockerConfigJson *string `field:"optional" json:"dockerConfigJson" yaml:"dockerConfigJson"`
	// Restores the docker default bridge network.
	EnableDockerBridge *bool `field:"optional" json:"enableDockerBridge" yaml:"enableDockerBridge"`
	// Extra arguments to add to the kubelet. Useful for adding labels or taints.
	//
	// For example, `--node-labels foo=bar,goo=far`.
	KubeletExtraArgs *string `field:"optional" json:"kubeletExtraArgs" yaml:"kubeletExtraArgs"`
	// Sets `--max-pods` for the kubelet based on the capacity of the EC2 instance.
	UseMaxPods *bool `field:"optional" json:"useMaxPods" yaml:"useMaxPods"`
}

EKS node bootstrapping options.

Example:

var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("spot"), &autoScalingGroupCapacityOptions{
	instanceType: ec2.NewInstanceType(jsii.String("t3.large")),
	minCapacity: jsii.Number(2),
	bootstrapOptions: &bootstrapOptions{
		kubeletExtraArgs: jsii.String("--node-labels foo=bar,goo=far"),
		awsApiRetryAttempts: jsii.Number(5),
	},
})

type CapacityType

type CapacityType string

Capacity type of the managed node group.

Example:

var cluster cluster

cluster.addNodegroupCapacity(jsii.String("extra-ng-spot"), &nodegroupOptions{
	instanceTypes: []instanceType{
		ec2.NewInstanceType(jsii.String("c5.large")),
		ec2.NewInstanceType(jsii.String("c5a.large")),
		ec2.NewInstanceType(jsii.String("c5d.large")),
	},
	minSize: jsii.Number(3),
	capacityType: eks.capacityType_SPOT,
})

const (
	// spot instances.
	CapacityType_SPOT CapacityType = "SPOT"
	// on-demand instances.
	CapacityType_ON_DEMAND CapacityType = "ON_DEMAND"
)

type CfnAddon

type CfnAddon interface {
	awscdk.CfnResource
	awscdk.IInspectable
	// The name of the add-on.
	AddonName() *string
	SetAddonName(val *string)
	// The version of the add-on.
	AddonVersion() *string
	SetAddonVersion(val *string)
	// The ARN of the add-on, such as `arn:aws:eks:us-west-2:111122223333:addon/1-19/vpc-cni/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` .
	AttrArn() *string
	// Options for this resource, such as condition, update policy etc.
	CfnOptions() awscdk.ICfnResourceOptions
	CfnProperties() *map[string]interface{}
	// AWS resource type.
	CfnResourceType() *string
	// The name of the cluster.
	ClusterName() *string
	SetClusterName(val *string)
	// Returns: the stack trace of the point where this Resource was created from, sourced
	// from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most
	// node +internal+ entries filtered.
	CreationStack() *[]*string
	// The logical ID for this CloudFormation stack element.
	//
	// The logical ID of the element
	// is calculated from the path of the resource node in the construct tree.
	//
	// To override this value, use `overrideLogicalId(newLogicalId)`.
	//
	// Returns: the logical ID as a stringified token. This value will only get
	// resolved during synthesis.
	LogicalId() *string
	// The tree node.
	Node() constructs.Node
	// Return a string that will be resolved to a CloudFormation `{ Ref }` for this element.
	//
	// If, by any chance, the intrinsic reference of a resource is not a string, you could
	// coerce it to an IResolvable through `Lazy.any({ produce: resource.ref })`.
	Ref() *string
	// How to resolve parameter value conflicts when migrating an existing add-on to an Amazon EKS add-on.
	ResolveConflicts() *string
	SetResolveConflicts(val *string)
	// The Amazon Resource Name (ARN) of an existing IAM role to bind to the add-on's service account.
	//
	// The role must be assigned the IAM permissions required by the add-on. If you don't specify an existing IAM role, then the add-on uses the permissions assigned to the node IAM role. For more information, see [Amazon EKS node IAM role](https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html) in the *Amazon EKS User Guide* .
	//
	// > To specify an existing IAM role, you must have an IAM OpenID Connect (OIDC) provider created for your cluster. For more information, see [Enabling IAM roles for service accounts on your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) in the *Amazon EKS User Guide* .
	ServiceAccountRoleArn() *string
	SetServiceAccountRoleArn(val *string)
	// The stack in which this element is defined.
	//
	// CfnElements must be defined within a stack scope (directly or indirectly).
	Stack() awscdk.Stack
	// The metadata that you apply to the add-on to assist with categorization and organization.
	//
	// Each tag consists of a key and an optional value, both of which you define. Add-on tags do not propagate to any other resources associated with the cluster.
	Tags() awscdk.TagManager
	// Deprecated.
	// Deprecated: use `updatedProperties`
	//
	// Return properties modified after initiation
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperites() *map[string]interface{}
	// Return properties modified after initiation.
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperties() *map[string]interface{}
	// Syntactic sugar for `addOverride(path, undefined)`.
	AddDeletionOverride(path *string)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	//
	// This can be used for resources across stacks (or nested stack) boundaries
	// and the dependency will automatically be transferred to the relevant scope.
	AddDependsOn(target awscdk.CfnResource)
	// Add a value to the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	AddMetadata(key *string, value interface{})
	// Adds an override to the synthesized CloudFormation resource.
	//
	// To add a
	// property override, either use `addPropertyOverride` or prefix `path` with
	// "Properties." (i.e. `Properties.TopicName`).
	//
	// If the override is nested, separate each nested level using a dot (.) in the path parameter.
	// If there is an array as part of the nesting, specify the index in the path.
	//
	// To include a literal `.` in the property name, prefix with a `\`. In most
	// programming languages you will need to write this as `"\\."` because the
	// `\` itself will need to be escaped.
	//
	// For example,
	// ```typescript
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes', ['myattribute']);
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.1.ProjectionType', 'INCLUDE');
	// ```
	// would add the overrides
	// ```json
	// "Properties": {
	//    "GlobalSecondaryIndexes": [
	//      {
	//        "Projection": {
	//          "NonKeyAttributes": [ "myattribute" ]
	//          ...
	//        }
	//        ...
	//      },
	//      {
	//        "ProjectionType": "INCLUDE"
	//        ...
	//      },
	//    ]
	//    ...
	// }
	// ```
	//
	// The `value` argument to `addOverride` will not be processed or translated
	// in any way. Pass raw JSON values in here with the correct capitalization
	// for CloudFormation. If you pass CDK classes or structs, they will be
	// rendered with lowercased key names, and CloudFormation will reject the
	// template.
	AddOverride(path *string, value interface{})
	// Adds an override that deletes the value of a property from the resource definition.
	AddPropertyDeletionOverride(propertyPath *string)
	// Adds an override to a resource property.
	//
	// Syntactic sugar for `addOverride("Properties.<...>", value)`.
	AddPropertyOverride(propertyPath *string, value interface{})
	// Sets the deletion policy of the resource based on the removal policy specified.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`). In some
	// cases, a snapshot can be taken of the resource prior to deletion
	// (`RemovalPolicy.SNAPSHOT`). A list of resources that support this policy
	// can be found in the following link:
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html#aws-attribute-deletionpolicy-options
	//
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy, options *awscdk.RemovalPolicyOptions)
	// Returns a token for a runtime attribute of this resource.
	//
	// Ideally, use generated attribute accessors (e.g. `resource.arn`), but this can be used for future compatibility
	// in case there is no generated attribute.
	GetAtt(attributeName *string) awscdk.Reference
	// Retrieve a value from the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	GetMetadata(key *string) interface{}
	// Examines the CloudFormation resource and discloses attributes.
	Inspect(inspector awscdk.TreeInspector)
	// Overrides the auto-generated logical ID with a specific ID.
	OverrideLogicalId(newLogicalId *string)
	RenderProperties(props *map[string]interface{}) *map[string]interface{}
	// Can be overridden by subclasses to determine if this resource will be rendered into the CloudFormation template.
	//
	// Returns: `true` if the resource should be included or `false` if the resource
	// should be omitted.
	ShouldSynthesize() *bool
	// Returns a string representation of this construct.
	//
	// Returns: a string representation of this resource.
	ToString() *string
	ValidateProperties(_properties interface{})
}

A CloudFormation `AWS::EKS::Addon`.

Creates an Amazon EKS add-on.

Amazon EKS add-ons help to automate the provisioning and lifecycle management of common operational software for Amazon EKS clusters. Amazon EKS add-ons require clusters running version 1.18 or later because Amazon EKS add-ons rely on the Server-side Apply Kubernetes feature, which is only available in Kubernetes 1.18 and later. For more information, see [Amazon EKS add-ons](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html) in the *Amazon EKS User Guide* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnAddon := awscdk.Aws_eks.NewCfnAddon(this, jsii.String("MyCfnAddon"), &cfnAddonProps{
	addonName: jsii.String("addonName"),
	clusterName: jsii.String("clusterName"),

	// the properties below are optional
	addonVersion: jsii.String("addonVersion"),
	resolveConflicts: jsii.String("resolveConflicts"),
	serviceAccountRoleArn: jsii.String("serviceAccountRoleArn"),
	tags: []cfnTag{
		&cfnTag{
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
})

func NewCfnAddon

func NewCfnAddon(scope constructs.Construct, id *string, props *CfnAddonProps) CfnAddon

Create a new `AWS::EKS::Addon`.
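As a more concrete variation on the placeholder example above, the sketch below creates an add-on and retains it on stack deletion via the `ApplyRemovalPolicy` method documented on the interface. The cluster name, add-on name, and version string are hypothetical assumptions, not values taken from this module.

// A minimal sketch; "my-cluster", "vpc-cni" and the version string are placeholders.
addon := awscdk.Aws_eks.NewCfnAddon(this, jsii.String("VpcCniAddon"), &cfnAddonProps{
	addonName: jsii.String("vpc-cni"),
	clusterName: jsii.String("my-cluster"),

	// the properties below are optional
	addonVersion: jsii.String("v1.11.4-eksbuild.1"),
	resolveConflicts: jsii.String("OVERWRITE"),
})

// Keep the add-on in the account if the stack is deleted.
addon.ApplyRemovalPolicy(awscdk.RemovalPolicy_RETAIN, nil)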

type CfnAddonProps

type CfnAddonProps struct {
	// The name of the add-on.
	AddonName *string `field:"required" json:"addonName" yaml:"addonName"`
	// The name of the cluster.
	ClusterName *string `field:"required" json:"clusterName" yaml:"clusterName"`
	// The version of the add-on.
	AddonVersion *string `field:"optional" json:"addonVersion" yaml:"addonVersion"`
	// How to resolve parameter value conflicts when migrating an existing add-on to an Amazon EKS add-on.
	ResolveConflicts *string `field:"optional" json:"resolveConflicts" yaml:"resolveConflicts"`
	// The Amazon Resource Name (ARN) of an existing IAM role to bind to the add-on's service account.
	//
	// The role must be assigned the IAM permissions required by the add-on. If you don't specify an existing IAM role, then the add-on uses the permissions assigned to the node IAM role. For more information, see [Amazon EKS node IAM role](https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html) in the *Amazon EKS User Guide* .
	//
	// > To specify an existing IAM role, you must have an IAM OpenID Connect (OIDC) provider created for your cluster. For more information, see [Enabling IAM roles for service accounts on your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) in the *Amazon EKS User Guide* .
	ServiceAccountRoleArn *string `field:"optional" json:"serviceAccountRoleArn" yaml:"serviceAccountRoleArn"`
	// The metadata that you apply to the add-on to assist with categorization and organization.
	//
	// Each tag consists of a key and an optional value, both of which you define. Add-on tags do not propagate to any other resources associated with the cluster.
	Tags *[]*awscdk.CfnTag `field:"optional" json:"tags" yaml:"tags"`
}

Properties for defining a `CfnAddon`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnAddonProps := &cfnAddonProps{
	addonName: jsii.String("addonName"),
	clusterName: jsii.String("clusterName"),

	// the properties below are optional
	addonVersion: jsii.String("addonVersion"),
	resolveConflicts: jsii.String("resolveConflicts"),
	serviceAccountRoleArn: jsii.String("serviceAccountRoleArn"),
	tags: []cfnTag{
		&cfnTag{
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
}

type CfnCluster

type CfnCluster interface {
	awscdk.CfnResource
	awscdk.IInspectable
	// The ARN of the cluster, such as `arn:aws:eks:us-west-2:666666666666:cluster/prod` .
	AttrArn() *string
	// The `certificate-authority-data` for your cluster.
	AttrCertificateAuthorityData() *string
	// The cluster security group that was created by Amazon EKS for the cluster.
	//
	// Managed node groups use this security group for control plane to data plane communication.
	//
	// This parameter is only returned by Amazon EKS clusters that support managed node groups. For more information, see [Managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) in the *Amazon EKS User Guide* .
	AttrClusterSecurityGroupId() *string
	// Amazon Resource Name (ARN) or alias of the customer master key (CMK).
	AttrEncryptionConfigKeyArn() *string
	// The endpoint for your Kubernetes API server, such as `https://5E1D0CEXAMPLEA591B746AFC5AB30262.yl4.us-west-2.eks.amazonaws.com` .
	AttrEndpoint() *string
	AttrId() *string
	// The CIDR block that Kubernetes Service IP addresses are assigned from if you created a 1.21 or later cluster with version 1.10.1 or later of the Amazon VPC CNI add-on and specified `ipv6` for *ipFamily* when you created the cluster. Kubernetes assigns Service addresses from the unique local address range ( `fc00::/7` ) because you can't specify a custom IPv6 CIDR block when you create the cluster.
	AttrKubernetesNetworkConfigServiceIpv6Cidr() *string
	// The issuer URL for the OIDC identity provider.
	AttrOpenIdConnectIssuerUrl() *string
	// Options for this resource, such as condition, update policy etc.
	CfnOptions() awscdk.ICfnResourceOptions
	CfnProperties() *map[string]interface{}
	// AWS resource type.
	CfnResourceType() *string
	// Returns: the stack trace of the point where this Resource was created from, sourced
	// from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most
	// node +internal+ entries filtered.
	CreationStack() *[]*string
	// The encryption configuration for the cluster.
	EncryptionConfig() interface{}
	SetEncryptionConfig(val interface{})
	// The Kubernetes network configuration for the cluster.
	KubernetesNetworkConfig() interface{}
	SetKubernetesNetworkConfig(val interface{})
	// The logging configuration for your cluster.
	Logging() interface{}
	SetLogging(val interface{})
	// The logical ID for this CloudFormation stack element.
	//
	// The logical ID of the element
	// is calculated from the path of the resource node in the construct tree.
	//
	// To override this value, use `overrideLogicalId(newLogicalId)`.
	//
	// Returns: the logical ID as a stringified token. This value will only get
	// resolved during synthesis.
	LogicalId() *string
	// The unique name to give to your cluster.
	Name() *string
	SetName(val *string)
	// The tree node.
	Node() constructs.Node
	// `AWS::EKS::Cluster.OutpostConfig`.
	OutpostConfig() interface{}
	SetOutpostConfig(val interface{})
	// Return a string that will be resolved to a CloudFormation `{ Ref }` for this element.
	//
	// If, by any chance, the intrinsic reference of a resource is not a string, you could
	// coerce it to an IResolvable through `Lazy.any({ produce: resource.ref })`.
	Ref() *string
	// The VPC configuration that's used by the cluster control plane.
	//
	// Amazon EKS VPC resources have specific requirements to work properly with Kubernetes. For more information, see [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) and [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in the *Amazon EKS User Guide* . You must specify at least two subnets. You can specify up to five security groups, but we recommend that you use a dedicated security group for your cluster control plane.
	//
	// > Updates require replacement of the `SecurityGroupIds` and `SubnetIds` sub-properties.
	ResourcesVpcConfig() interface{}
	SetResourcesVpcConfig(val interface{})
	// The Amazon Resource Name (ARN) of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
	//
	// For more information, see [Amazon EKS Service IAM Role](https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html) in the **Amazon EKS User Guide** .
	RoleArn() *string
	SetRoleArn(val *string)
	// The stack in which this element is defined.
	//
	// CfnElements must be defined within a stack scope (directly or indirectly).
	Stack() awscdk.Stack
	// The metadata that you apply to the cluster to assist with categorization and organization.
	//
	// Each tag consists of a key and an optional value, both of which you define. Cluster tags don't propagate to any other resources associated with the cluster.
	//
	// > You must have the `eks:TagResource` and `eks:UntagResource` permissions in your IAM user or IAM role used to manage the CloudFormation stack. If you don't have these permissions, there might be unexpected behavior with stack-level tags propagating to the resource during resource creation and update.
	Tags() awscdk.TagManager
	// Deprecated.
	// Deprecated: use `updatedProperties`
	//
	// Return properties modified after initiation
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperites() *map[string]interface{}
	// Return properties modified after initiation.
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperties() *map[string]interface{}
	// The desired Kubernetes version for your cluster.
	//
	// If you don't specify a value here, the latest version available in Amazon EKS is used.
	Version() *string
	SetVersion(val *string)
	// Syntactic sugar for `addOverride(path, undefined)`.
	AddDeletionOverride(path *string)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	//
	// This can be used for resources across stacks (or nested stack) boundaries
	// and the dependency will automatically be transferred to the relevant scope.
	AddDependsOn(target awscdk.CfnResource)
	// Add a value to the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	AddMetadata(key *string, value interface{})
	// Adds an override to the synthesized CloudFormation resource.
	//
	// To add a
	// property override, either use `addPropertyOverride` or prefix `path` with
	// "Properties." (i.e. `Properties.TopicName`).
	//
	// If the override is nested, separate each nested level using a dot (.) in the path parameter.
	// If there is an array as part of the nesting, specify the index in the path.
	//
	// To include a literal `.` in the property name, prefix with a `\`. In most
	// programming languages you will need to write this as `"\\."` because the
	// `\` itself will need to be escaped.
	//
	// For example,
	// ```typescript
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes', ['myattribute']);
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.1.ProjectionType', 'INCLUDE');
	// ```
	// would add the overrides
	// ```json
	// "Properties": {
	//    "GlobalSecondaryIndexes": [
	//      {
	//        "Projection": {
	//          "NonKeyAttributes": [ "myattribute" ]
	//          ...
	//        }
	//        ...
	//      },
	//      {
	//        "ProjectionType": "INCLUDE"
	//        ...
	//      },
	//    ]
	//    ...
	// }
	// ```
	//
	// The `value` argument to `addOverride` will not be processed or translated
	// in any way. Pass raw JSON values in here with the correct capitalization
	// for CloudFormation. If you pass CDK classes or structs, they will be
	// rendered with lowercased key names, and CloudFormation will reject the
	// template.
	AddOverride(path *string, value interface{})
	// Adds an override that deletes the value of a property from the resource definition.
	AddPropertyDeletionOverride(propertyPath *string)
	// Adds an override to a resource property.
	//
	// Syntactic sugar for `addOverride("Properties.<...>", value)`.
	AddPropertyOverride(propertyPath *string, value interface{})
	// Sets the deletion policy of the resource based on the removal policy specified.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`). In some
	// cases, a snapshot can be taken of the resource prior to deletion
	// (`RemovalPolicy.SNAPSHOT`). A list of resources that support this policy
	// can be found at the following link:
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html#aws-attribute-deletionpolicy-options
	//
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy, options *awscdk.RemovalPolicyOptions)
	// Returns a token for a runtime attribute of this resource.
	//
	// Ideally, use generated attribute accessors (e.g. `resource.arn`), but this can be used for future compatibility
	// in case there is no generated attribute.
	GetAtt(attributeName *string) awscdk.Reference
	// Retrieve a value from the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	GetMetadata(key *string) interface{}
	// Examines the CloudFormation resource and discloses attributes.
	Inspect(inspector awscdk.TreeInspector)
	// Overrides the auto-generated logical ID with a specific ID.
	OverrideLogicalId(newLogicalId *string)
	RenderProperties(props *map[string]interface{}) *map[string]interface{}
	// Can be overridden by subclasses to determine if this resource will be rendered into the CloudFormation template.
	//
	// Returns: `true` if the resource should be included or `false` if the resource
	// should be omitted.
	ShouldSynthesize() *bool
	// Returns a string representation of this construct.
	//
	// Returns: a string representation of this resource.
	ToString() *string
	ValidateProperties(_properties interface{})
}

A CloudFormation `AWS::EKS::Cluster`.

Creates an Amazon EKS control plane.

The Amazon EKS control plane consists of control plane instances that run the Kubernetes software, such as `etcd` and the API server. The control plane runs in an account managed by AWS , and the Kubernetes API is exposed by the Amazon EKS API server endpoint. Each Amazon EKS cluster control plane is single tenant and unique. It runs on its own set of Amazon EC2 instances.

The cluster control plane is provisioned across multiple Availability Zones and fronted by an Elastic Load Balancing Network Load Balancer. Amazon EKS also provisions elastic network interfaces in your VPC subnets to provide connectivity from the control plane instances to the nodes (for example, to support `kubectl exec` , `logs` , and `proxy` data flows).

Amazon EKS nodes run in your AWS account and connect to your cluster's control plane over the Kubernetes API server endpoint and a certificate file that is created for your cluster.

In most cases, it takes several minutes to create a cluster. After you create an Amazon EKS cluster, you must configure your Kubernetes tooling to communicate with the API server and launch nodes into your cluster. For more information, see [Managing Cluster Authentication](https://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html) and [Launching Amazon EKS nodes](https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html) in the *Amazon EKS User Guide* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnCluster := awscdk.Aws_eks.NewCfnCluster(this, jsii.String("MyCfnCluster"), &cfnClusterProps{
	resourcesVpcConfig: &resourcesVpcConfigProperty{
		subnetIds: []*string{
			jsii.String("subnetIds"),
		},

		// the properties below are optional
		endpointPrivateAccess: jsii.Boolean(false),
		endpointPublicAccess: jsii.Boolean(false),
		publicAccessCidrs: []*string{
			jsii.String("publicAccessCidrs"),
		},
		securityGroupIds: []*string{
			jsii.String("securityGroupIds"),
		},
	},
	roleArn: jsii.String("roleArn"),

	// the properties below are optional
	encryptionConfig: []interface{}{
		&encryptionConfigProperty{
			provider: &providerProperty{
				keyArn: jsii.String("keyArn"),
			},
			resources: []*string{
				jsii.String("resources"),
			},
		},
	},
	kubernetesNetworkConfig: &kubernetesNetworkConfigProperty{
		ipFamily: jsii.String("ipFamily"),
		serviceIpv4Cidr: jsii.String("serviceIpv4Cidr"),
		serviceIpv6Cidr: jsii.String("serviceIpv6Cidr"),
	},
	logging: &loggingProperty{
		clusterLogging: &clusterLoggingProperty{
			enabledTypes: []interface{}{
				&loggingTypeConfigProperty{
					type: jsii.String("type"),
				},
			},
		},
	},
	name: jsii.String("name"),
	outpostConfig: &outpostConfigProperty{
		controlPlaneInstanceType: jsii.String("controlPlaneInstanceType"),
		outpostArns: []*string{
			jsii.String("outpostArns"),
		},
	},
	tags: []cfnTag{
		&cfnTag{
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
	version: jsii.String("version"),
})

func NewCfnCluster

func NewCfnCluster(scope constructs.Construct, id *string, props *CfnClusterProps) CfnCluster

Create a new `AWS::EKS::Cluster`.
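A minimal, hedged sketch of the constructor with only the required properties plus a version string. The subnet IDs and role ARN are hypothetical placeholders; at least two subnets must be supplied, per the `ResourcesVpcConfig` notes above.

// A minimal sketch; subnet IDs and the role ARN are placeholders.
cluster := awscdk.Aws_eks.NewCfnCluster(this, jsii.String("MyCluster"), &cfnClusterProps{
	resourcesVpcConfig: &resourcesVpcConfigProperty{
		subnetIds: []*string{
			jsii.String("subnet-0123456789abcdef0"),
			jsii.String("subnet-0fedcba9876543210"),
		},
	},
	roleArn: jsii.String("arn:aws:iam::111122223333:role/eksClusterRole"),

	// the properties below are optional
	version: jsii.String("1.21"),
})

// The API server endpoint is then available as a CloudFormation attribute.
clusterEndpoint := cluster.AttrEndpoint()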

type CfnClusterProps

type CfnClusterProps struct {
	// The VPC configuration that's used by the cluster control plane.
	//
	// Amazon EKS VPC resources have specific requirements to work properly with Kubernetes. For more information, see [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) and [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in the *Amazon EKS User Guide* . You must specify at least two subnets. You can specify up to five security groups, but we recommend that you use a dedicated security group for your cluster control plane.
	//
	// > Updates require replacement of the `SecurityGroupIds` and `SubnetIds` sub-properties.
	ResourcesVpcConfig interface{} `field:"required" json:"resourcesVpcConfig" yaml:"resourcesVpcConfig"`
	// The Amazon Resource Name (ARN) of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
	//
	// For more information, see [Amazon EKS Service IAM Role](https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html) in the **Amazon EKS User Guide** .
	RoleArn *string `field:"required" json:"roleArn" yaml:"roleArn"`
	// The encryption configuration for the cluster.
	EncryptionConfig interface{} `field:"optional" json:"encryptionConfig" yaml:"encryptionConfig"`
	// The Kubernetes network configuration for the cluster.
	KubernetesNetworkConfig interface{} `field:"optional" json:"kubernetesNetworkConfig" yaml:"kubernetesNetworkConfig"`
	// The logging configuration for your cluster.
	Logging interface{} `field:"optional" json:"logging" yaml:"logging"`
	// The unique name to give to your cluster.
	Name *string `field:"optional" json:"name" yaml:"name"`
	// `AWS::EKS::Cluster.OutpostConfig`.
	OutpostConfig interface{} `field:"optional" json:"outpostConfig" yaml:"outpostConfig"`
	// The metadata that you apply to the cluster to assist with categorization and organization.
	//
	// Each tag consists of a key and an optional value, both of which you define. Cluster tags don't propagate to any other resources associated with the cluster.
	//
	// > You must have the `eks:TagResource` and `eks:UntagResource` permissions in your IAM user or IAM role used to manage the CloudFormation stack. If you don't have these permissions, there might be unexpected behavior with stack-level tags propagating to the resource during resource creation and update.
	Tags *[]*awscdk.CfnTag `field:"optional" json:"tags" yaml:"tags"`
	// The desired Kubernetes version for your cluster.
	//
	// If you don't specify a value here, the latest version available in Amazon EKS is used.
	Version *string `field:"optional" json:"version" yaml:"version"`
}

Properties for defining a `CfnCluster`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnClusterProps := &cfnClusterProps{
	resourcesVpcConfig: &resourcesVpcConfigProperty{
		subnetIds: []*string{
			jsii.String("subnetIds"),
		},

		// the properties below are optional
		endpointPrivateAccess: jsii.Boolean(false),
		endpointPublicAccess: jsii.Boolean(false),
		publicAccessCidrs: []*string{
			jsii.String("publicAccessCidrs"),
		},
		securityGroupIds: []*string{
			jsii.String("securityGroupIds"),
		},
	},
	roleArn: jsii.String("roleArn"),

	// the properties below are optional
	encryptionConfig: []interface{}{
		&encryptionConfigProperty{
			provider: &providerProperty{
				keyArn: jsii.String("keyArn"),
			},
			resources: []*string{
				jsii.String("resources"),
			},
		},
	},
	kubernetesNetworkConfig: &kubernetesNetworkConfigProperty{
		ipFamily: jsii.String("ipFamily"),
		serviceIpv4Cidr: jsii.String("serviceIpv4Cidr"),
		serviceIpv6Cidr: jsii.String("serviceIpv6Cidr"),
	},
	logging: &loggingProperty{
		clusterLogging: &clusterLoggingProperty{
			enabledTypes: []interface{}{
				&loggingTypeConfigProperty{
					type: jsii.String("type"),
				},
			},
		},
	},
	name: jsii.String("name"),
	outpostConfig: &outpostConfigProperty{
		controlPlaneInstanceType: jsii.String("controlPlaneInstanceType"),
		outpostArns: []*string{
			jsii.String("outpostArns"),
		},
	},
	tags: []cfnTag{
		&cfnTag{
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
	version: jsii.String("version"),
}

type CfnCluster_ClusterLoggingProperty

type CfnCluster_ClusterLoggingProperty struct {
	// The enabled control plane logs for your cluster. All log types are disabled if the array is empty.
	//
	// > When updating a resource, you must include this `EnabledTypes` property if the previous CloudFormation template of the resource had it.
	EnabledTypes interface{} `field:"optional" json:"enabledTypes" yaml:"enabledTypes"`
}

The cluster control plane logging configuration for your cluster.

> When updating a resource, you must include this `ClusterLogging` property if the previous CloudFormation template of the resource had it.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

clusterLoggingProperty := &clusterLoggingProperty{
	enabledTypes: []interface{}{
		&loggingTypeConfigProperty{
			type: jsii.String("type"),
		},
	},
}

type CfnCluster_EncryptionConfigProperty

type CfnCluster_EncryptionConfigProperty struct {
	// The encryption provider for the cluster.
	Provider interface{} `field:"optional" json:"provider" yaml:"provider"`
	// Specifies the resources to be encrypted.
	//
	// The only supported value is "secrets".
	Resources *[]*string `field:"optional" json:"resources" yaml:"resources"`
}

The encryption configuration for the cluster.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

encryptionConfigProperty := &encryptionConfigProperty{
	provider: &providerProperty{
		keyArn: jsii.String("keyArn"),
	},
	resources: []*string{
		jsii.String("resources"),
	},
}
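For reference, a filled-in sketch of the same structure. The KMS key ARN is a hypothetical placeholder; `"secrets"` is the only value currently supported for `resources`.

secretsEncryption := &encryptionConfigProperty{
	provider: &providerProperty{
		// hypothetical KMS key ARN
		keyArn: jsii.String("arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"),
	},
	resources: []*string{
		jsii.String("secrets"),
	},
}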

type CfnCluster_KubernetesNetworkConfigProperty

type CfnCluster_KubernetesNetworkConfigProperty struct {
	// Specify which IP family is used to assign Kubernetes pod and service IP addresses.
	//
	// If you don't specify a value, `ipv4` is used by default. You can only specify an IP family when you create a cluster and can't change this value once the cluster is created. If you specify `ipv6` , the VPC and subnets that you specify for cluster creation must have both IPv4 and IPv6 CIDR blocks assigned to them. You can't specify `ipv6` for clusters in China Regions.
	//
	// You can only specify `ipv6` for 1.21 and later clusters that use version 1.10.1 or later of the Amazon VPC CNI add-on. If you specify `ipv6` , then ensure that your VPC meets the requirements listed in the considerations listed in [Assigning IPv6 addresses to pods and services](https://docs.aws.amazon.com/eks/latest/userguide/cni-ipv6.html) in the Amazon EKS User Guide. Kubernetes assigns services IPv6 addresses from the unique local address range (fc00::/7). You can't specify a custom IPv6 CIDR block. Pod addresses are assigned from the subnet's IPv6 CIDR.
	IpFamily *string `field:"optional" json:"ipFamily" yaml:"ipFamily"`
	// Don't specify a value if you select `ipv6` for *ipFamily* .
	//
	// The CIDR block to assign Kubernetes service IP addresses from. If you don't specify a block, Kubernetes assigns addresses from either the 10.100.0.0/16 or 172.20.0.0/16 CIDR blocks. We recommend that you specify a block that does not overlap with resources in other networks that are peered or connected to your VPC. The block must meet the following requirements:
	//
	// - Within one of the following private IP address blocks: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
	// - Doesn't overlap with any CIDR block assigned to the VPC that you selected for the cluster.
	// - Between /24 and /12.
	//
	// > You can only specify a custom CIDR block when you create a cluster and can't change this value once the cluster is created.
	ServiceIpv4Cidr *string `field:"optional" json:"serviceIpv4Cidr" yaml:"serviceIpv4Cidr"`
	// The CIDR block that Kubernetes pod and service IP addresses are assigned from if you created a 1.21 or later cluster with version 1.10.1 or later of the Amazon VPC CNI add-on and specified `ipv6` for *ipFamily* when you created the cluster. Kubernetes assigns service addresses from the unique local address range ( `fc00::/7` ) because you can't specify a custom IPv6 CIDR block when you create the cluster.
	ServiceIpv6Cidr *string `field:"optional" json:"serviceIpv6Cidr" yaml:"serviceIpv6Cidr"`
}

The Kubernetes network configuration for the cluster.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

kubernetesNetworkConfigProperty := &kubernetesNetworkConfigProperty{
	ipFamily: jsii.String("ipFamily"),
	serviceIpv4Cidr: jsii.String("serviceIpv4Cidr"),
	serviceIpv6Cidr: jsii.String("serviceIpv6Cidr"),
}
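As an illustration of the constraints above, a sketch of an IPv4 configuration with a custom service CIDR. The CIDR shown is a hypothetical example that sits inside 172.16.0.0/12 and between /24 and /12.

ipv4NetworkConfig := &kubernetesNetworkConfigProperty{
	ipFamily: jsii.String("ipv4"),
	// must not overlap with the cluster VPC or any peered/connected network
	serviceIpv4Cidr: jsii.String("172.20.0.0/16"),
}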

type CfnCluster_LoggingProperty

type CfnCluster_LoggingProperty struct {
	// The cluster control plane logging configuration for your cluster.
	ClusterLogging interface{} `field:"optional" json:"clusterLogging" yaml:"clusterLogging"`
}

Enable or disable exporting the Kubernetes control plane logs for your cluster to CloudWatch Logs.

By default, cluster control plane logs aren't exported to CloudWatch Logs. For more information, see [Amazon EKS Cluster control plane logs](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) in the **Amazon EKS User Guide** .

> When updating a resource, you must include this `Logging` property if the previous CloudFormation template of the resource had it.
>
> CloudWatch Logs ingestion, archive storage, and data scanning rates apply to exported control plane logs. For more information, see [CloudWatch Pricing](https://docs.aws.amazon.com/cloudwatch/pricing/) .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

loggingProperty := &loggingProperty{
	clusterLogging: &clusterLoggingProperty{
		enabledTypes: []interface{}{
			&loggingTypeConfigProperty{
				type: jsii.String("type"),
			},
		},
	},
}
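A filled-in sketch that enables two control plane log types. The names `"api"` and `"audit"` follow the `LogSetup` types linked under `LoggingTypeConfigProperty` below; treat them as an assumption and check that reference for the full list of valid names.

controlPlaneLogging := &loggingProperty{
	clusterLogging: &clusterLoggingProperty{
		enabledTypes: []interface{}{
			&loggingTypeConfigProperty{
				type: jsii.String("api"),
			},
			&loggingTypeConfigProperty{
				type: jsii.String("audit"),
			},
		},
	},
}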

type CfnCluster_LoggingTypeConfigProperty

type CfnCluster_LoggingTypeConfigProperty struct {
	// The name of the log type.
	Type *string `field:"optional" json:"type" yaml:"type"`
}

The enabled logging type.

For a list of the valid logging types, see the [`types` property of `LogSetup`](https://docs.aws.amazon.com/eks/latest/APIReference/API_LogSetup.html#AmazonEKS-Type-LogSetup-types) in the *Amazon EKS API Reference* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

loggingTypeConfigProperty := &loggingTypeConfigProperty{
	type: jsii.String("type"),
}

type CfnCluster_OutpostConfigProperty added in v2.42.0

type CfnCluster_OutpostConfigProperty struct {
	// `CfnCluster.OutpostConfigProperty.ControlPlaneInstanceType`.
	ControlPlaneInstanceType *string `field:"required" json:"controlPlaneInstanceType" yaml:"controlPlaneInstanceType"`
	// `CfnCluster.OutpostConfigProperty.OutpostArns`.
	OutpostArns *[]*string `field:"required" json:"outpostArns" yaml:"outpostArns"`
}

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

outpostConfigProperty := &outpostConfigProperty{
	controlPlaneInstanceType: jsii.String("controlPlaneInstanceType"),
	outpostArns: []*string{
		jsii.String("outpostArns"),
	},
}

type CfnCluster_ProviderProperty added in v2.20.0

type CfnCluster_ProviderProperty struct {
	// Amazon Resource Name (ARN) or alias of the KMS key.
	//
	// The KMS key must be symmetric, created in the same region as the cluster, and if the KMS key was created in a different account, the user must have access to the KMS key. For more information, see [Allowing Users in Other Accounts to Use a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the *AWS Key Management Service Developer Guide* .
	KeyArn *string `field:"optional" json:"keyArn" yaml:"keyArn"`
}

Identifies the AWS Key Management Service ( AWS KMS ) key used to encrypt the secrets.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

providerProperty := &providerProperty{
	keyArn: jsii.String("keyArn"),
}

type CfnCluster_ResourcesVpcConfigProperty

type CfnCluster_ResourcesVpcConfigProperty struct {
	// Specify subnets for your Amazon EKS nodes.
	//
	// Amazon EKS creates cross-account elastic network interfaces in these subnets to allow communication between your nodes and the Kubernetes control plane.
	SubnetIds *[]*string `field:"required" json:"subnetIds" yaml:"subnetIds"`
	// Set this value to `true` to enable private access for your cluster's Kubernetes API server endpoint.
	//
	// If you enable private access, Kubernetes API requests from within your cluster's VPC use the private VPC endpoint. The default value for this parameter is `false` , which disables private access for your Kubernetes API server. If you disable private access and you have nodes or AWS Fargate pods in the cluster, then ensure that `publicAccessCidrs` includes the necessary CIDR blocks for communication with the nodes or Fargate pods. For more information, see [Amazon EKS cluster endpoint access control](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) in the **Amazon EKS User Guide** .
	EndpointPrivateAccess interface{} `field:"optional" json:"endpointPrivateAccess" yaml:"endpointPrivateAccess"`
	// Set this value to `false` to disable public access to your cluster's Kubernetes API server endpoint.
	//
	// If you disable public access, your cluster's Kubernetes API server can only receive requests from within the cluster VPC. The default value for this parameter is `true` , which enables public access for your Kubernetes API server. For more information, see [Amazon EKS cluster endpoint access control](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) in the **Amazon EKS User Guide** .
	EndpointPublicAccess interface{} `field:"optional" json:"endpointPublicAccess" yaml:"endpointPublicAccess"`
	// The CIDR blocks that are allowed access to your cluster's public Kubernetes API server endpoint.
	//
	// Communication to the endpoint from addresses outside of the CIDR blocks that you specify is denied. The default value is `0.0.0.0/0` . If you've disabled private endpoint access and you have nodes or AWS Fargate pods in the cluster, then ensure that you specify the necessary CIDR blocks. For more information, see [Amazon EKS cluster endpoint access control](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) in the **Amazon EKS User Guide** .
	PublicAccessCidrs *[]*string `field:"optional" json:"publicAccessCidrs" yaml:"publicAccessCidrs"`
	// Specify one or more security groups for the cross-account elastic network interfaces that Amazon EKS creates, to allow communication between your nodes and the Kubernetes control plane.
	//
	// If you don't specify any security groups, then familiarize yourself with the difference between Amazon EKS defaults for clusters deployed with Kubernetes:
	//
	// - 1.14 Amazon EKS platform version `eks.2` and earlier
	// - 1.14 Amazon EKS platform version `eks.3` and later
	//
	// For more information, see [Amazon EKS security group considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in the **Amazon EKS User Guide** .
	SecurityGroupIds *[]*string `field:"optional" json:"securityGroupIds" yaml:"securityGroupIds"`
}

An object representing the VPC configuration to use for an Amazon EKS cluster.

> When updating a resource, you must include these properties if the previous CloudFormation template of the resource had them:
>
> - `EndpointPublicAccess`
> - `EndpointPrivateAccess`
> - `PublicAccessCidrs`

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

resourcesVpcConfigProperty := &resourcesVpcConfigProperty{
	subnetIds: []*string{
		jsii.String("subnetIds"),
	},

	// the properties below are optional
	endpointPrivateAccess: jsii.Boolean(false),
	endpointPublicAccess: jsii.Boolean(false),
	publicAccessCidrs: []*string{
		jsii.String("publicAccessCidrs"),
	},
	securityGroupIds: []*string{
		jsii.String("securityGroupIds"),
	},
}
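A filled-in sketch that satisfies the requirements above: two subnets, private endpoint access enabled, and public access restricted to a single CIDR. The subnet, security group, and CIDR values are hypothetical placeholders (the CIDR uses the 203.0.113.0/24 documentation range).

restrictedVpcConfig := &resourcesVpcConfigProperty{
	subnetIds: []*string{
		jsii.String("subnet-0123456789abcdef0"),
		jsii.String("subnet-0fedcba9876543210"),
	},

	// the properties below are optional
	endpointPrivateAccess: jsii.Boolean(true),
	endpointPublicAccess: jsii.Boolean(true),
	publicAccessCidrs: []*string{
		jsii.String("203.0.113.0/24"),
	},
	securityGroupIds: []*string{
		jsii.String("sg-0123456789abcdef0"),
	},
}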

type CfnFargateProfile

type CfnFargateProfile interface {
	awscdk.CfnResource
	awscdk.IInspectable
	// The ARN of the cluster, such as `arn:aws:eks:us-west-2:666666666666:fargateprofile/myCluster/myFargateProfile/1cb1a11a-1dc1-1d11-cf11-1111f11fa111` .
	AttrArn() *string
	// Options for this resource, such as condition, update policy etc.
	CfnOptions() awscdk.ICfnResourceOptions
	CfnProperties() *map[string]interface{}
	// AWS resource type.
	CfnResourceType() *string
	// The name of the Amazon EKS cluster to apply the Fargate profile to.
	ClusterName() *string
	SetClusterName(val *string)
	// Returns: the stack trace of the point where this Resource was created from, sourced
	// from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most
	// node +internal+ entries filtered.
	CreationStack() *[]*string
	// The name of the Fargate profile.
	FargateProfileName() *string
	SetFargateProfileName(val *string)
	// The logical ID for this CloudFormation stack element.
	//
	// The logical ID of the element
	// is calculated from the path of the resource node in the construct tree.
	//
	// To override this value, use `overrideLogicalId(newLogicalId)`.
	//
	// Returns: the logical ID as a stringified token. This value will only get
	// resolved during synthesis.
	LogicalId() *string
	// The tree node.
	Node() constructs.Node
	// The Amazon Resource Name (ARN) of the pod execution role to use for pods that match the selectors in the Fargate profile.
	//
	// The pod execution role allows Fargate infrastructure to register with your cluster as a node, and it provides read access to Amazon ECR image repositories. For more information, see [Pod Execution Role](https://docs.aws.amazon.com/eks/latest/userguide/pod-execution-role.html) in the *Amazon EKS User Guide* .
	PodExecutionRoleArn() *string
	SetPodExecutionRoleArn(val *string)
	// Return a string that will be resolved to a CloudFormation `{ Ref }` for this element.
	//
	// If, by any chance, the intrinsic reference of a resource is not a string, you could
	// coerce it to an IResolvable through `Lazy.any({ produce: resource.ref })`.
	Ref() *string
	// The selectors to match for pods to use this Fargate profile.
	//
	// Each selector must have an associated namespace. Optionally, you can also specify labels for a namespace. You may specify up to five selectors in a Fargate profile.
	Selectors() interface{}
	SetSelectors(val interface{})
	// The stack in which this element is defined.
	//
	// CfnElements must be defined within a stack scope (directly or indirectly).
	Stack() awscdk.Stack
	// The IDs of subnets to launch your pods into.
	//
	// At this time, pods running on Fargate are not assigned public IP addresses, so only private subnets (with no direct route to an Internet Gateway) are accepted for this parameter.
	Subnets() *[]*string
	SetSubnets(val *[]*string)
	// The metadata to apply to the Fargate profile to assist with categorization and organization.
	//
	// Each tag consists of a key and an optional value. You define both. Fargate profile tags do not propagate to any other resources associated with the Fargate profile, such as the pods that are scheduled with it.
	Tags() awscdk.TagManager
	// Deprecated.
	// Deprecated: use `updatedProperties`
	//
	// Return properties modified after initiation
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperites() *map[string]interface{}
	// Return properties modified after initiation.
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperties() *map[string]interface{}
	// Syntactic sugar for `addOverride(path, undefined)`.
	AddDeletionOverride(path *string)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	//
	// This can be used for resources across stacks (or nested stack) boundaries
	// and the dependency will automatically be transferred to the relevant scope.
	AddDependsOn(target awscdk.CfnResource)
	// Add a value to the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	AddMetadata(key *string, value interface{})
	// Adds an override to the synthesized CloudFormation resource.
	//
	// To add a
	// property override, either use `addPropertyOverride` or prefix `path` with
	// "Properties." (i.e. `Properties.TopicName`).
	//
	// If the override is nested, separate each nested level using a dot (.) in the path parameter.
	// If there is an array as part of the nesting, specify the index in the path.
	//
	// To include a literal `.` in the property name, prefix with a `\`. In most
	// programming languages you will need to write this as `"\\."` because the
	// `\` itself will need to be escaped.
	//
	// For example,
	// ```typescript
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes', ['myattribute']);
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.1.ProjectionType', 'INCLUDE');
	// ```
	// would add the overrides
	// ```json
	// "Properties": {
	//    "GlobalSecondaryIndexes": [
	//      {
	//        "Projection": {
	//          "NonKeyAttributes": [ "myattribute" ]
	//          ...
	//        }
	//        ...
	//      },
	//      {
	//        "ProjectionType": "INCLUDE"
	//        ...
	//      },
	//    ]
	//    ...
	// }
	// ```
	//
	// The `value` argument to `addOverride` will not be processed or translated
	// in any way. Pass raw JSON values in here with the correct capitalization
	// for CloudFormation. If you pass CDK classes or structs, they will be
	// rendered with lowercased key names, and CloudFormation will reject the
	// template.
	AddOverride(path *string, value interface{})
	// Adds an override that deletes the value of a property from the resource definition.
	AddPropertyDeletionOverride(propertyPath *string)
	// Adds an override to a resource property.
	//
	// Syntactic sugar for `addOverride("Properties.<...>", value)`.
	AddPropertyOverride(propertyPath *string, value interface{})
	// Sets the deletion policy of the resource based on the removal policy specified.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`). In some
	// cases, a snapshot can be taken of the resource prior to deletion
	// (`RemovalPolicy.SNAPSHOT`). A list of resources that support this policy
	// can be found at the following link:
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html#aws-attribute-deletionpolicy-options
	//
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy, options *awscdk.RemovalPolicyOptions)
	// Returns a token for a runtime attribute of this resource.
	//
	// Ideally, use generated attribute accessors (e.g. `resource.arn`), but this can be used for future compatibility
	// in case there is no generated attribute.
	GetAtt(attributeName *string) awscdk.Reference
	// Retrieve a value from the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	GetMetadata(key *string) interface{}
	// Examines the CloudFormation resource and discloses attributes.
	Inspect(inspector awscdk.TreeInspector)
	// Overrides the auto-generated logical ID with a specific ID.
	OverrideLogicalId(newLogicalId *string)
	RenderProperties(props *map[string]interface{}) *map[string]interface{}
	// Can be overridden by subclasses to determine if this resource will be rendered into the CloudFormation template.
	//
	// Returns: `true` if the resource should be included or `false` if the resource
	// should be omitted.
	ShouldSynthesize() *bool
	// Returns a string representation of this construct.
	//
	// Returns: a string representation of this resource.
	ToString() *string
	ValidateProperties(_properties interface{})
}

A CloudFormation `AWS::EKS::FargateProfile`.

Creates an AWS Fargate profile for your Amazon EKS cluster. You must have at least one Fargate profile in a cluster to be able to run pods on Fargate.

The Fargate profile allows an administrator to declare which pods run on Fargate and specify which pods run on which Fargate profile. This declaration is done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace and labels. A namespace is required for every selector. The label field consists of multiple optional key-value pairs. Pods that match the selectors are scheduled on Fargate. If a to-be-scheduled pod matches any of the selectors in the Fargate profile, then that pod is run on Fargate.

When you create a Fargate profile, you must specify a pod execution role to use with the pods that are scheduled with the profile. This role is added to the cluster's Kubernetes [Role Based Access Control](https://kubernetes.io/docs/admin/authorization/rbac/) (RBAC) for authorization so that the `kubelet` that is running on the Fargate infrastructure can register with your Amazon EKS cluster and appear in your cluster as a node. The pod execution role also provides IAM permissions to the Fargate infrastructure to allow read access to Amazon ECR image repositories. For more information, see [Pod Execution Role](https://docs.aws.amazon.com/eks/latest/userguide/pod-execution-role.html) in the *Amazon EKS User Guide* .

Fargate profiles are immutable. However, you can create a new updated profile to replace an existing profile and then delete the original after the updated profile has finished creating.

If any Fargate profiles in a cluster are in the `DELETING` status, you must wait for that Fargate profile to finish deleting before you can create any other profiles in that cluster.

For more information, see [AWS Fargate Profile](https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html) in the *Amazon EKS User Guide* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnFargateProfile := awscdk.Aws_eks.NewCfnFargateProfile(this, jsii.String("MyCfnFargateProfile"), &cfnFargateProfileProps{
	clusterName: jsii.String("clusterName"),
	podExecutionRoleArn: jsii.String("podExecutionRoleArn"),
	selectors: []interface{}{
		&selectorProperty{
			namespace: jsii.String("namespace"),

			// the properties below are optional
			labels: []interface{}{
				&labelProperty{
					key: jsii.String("key"),
					value: jsii.String("value"),
				},
			},
		},
	},

	// the properties below are optional
	fargateProfileName: jsii.String("fargateProfileName"),
	subnets: []*string{
		jsii.String("subnets"),
	},
	tags: []cfnTag{
		&cfnTag{
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
})

func NewCfnFargateProfile

func NewCfnFargateProfile(scope constructs.Construct, id *string, props *CfnFargateProfileProps) CfnFargateProfile

Create a new `AWS::EKS::FargateProfile`.
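A hedged sketch that schedules pods labelled `fargate: "yes"` in the `default` namespace onto Fargate. The cluster name, pod execution role ARN, and subnet IDs are hypothetical placeholders.

fargateProfile := awscdk.Aws_eks.NewCfnFargateProfile(this, jsii.String("DefaultFargateProfile"), &cfnFargateProfileProps{
	clusterName: jsii.String("my-cluster"),
	podExecutionRoleArn: jsii.String("arn:aws:iam::111122223333:role/eksFargatePodExecutionRole"),
	selectors: []interface{}{
		&selectorProperty{
			namespace: jsii.String("default"),

			// the properties below are optional
			labels: []interface{}{
				&labelProperty{
					key: jsii.String("fargate"),
					value: jsii.String("yes"),
				},
			},
		},
	},

	// the properties below are optional
	subnets: []*string{
		jsii.String("subnet-0123456789abcdef0"),
		jsii.String("subnet-0fedcba9876543210"),
	},
})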

type CfnFargateProfileProps

type CfnFargateProfileProps struct {
	// The name of the Amazon EKS cluster to apply the Fargate profile to.
	ClusterName *string `field:"required" json:"clusterName" yaml:"clusterName"`
	// The Amazon Resource Name (ARN) of the pod execution role to use for pods that match the selectors in the Fargate profile.
	//
	// The pod execution role allows Fargate infrastructure to register with your cluster as a node, and it provides read access to Amazon ECR image repositories. For more information, see [Pod Execution Role](https://docs.aws.amazon.com/eks/latest/userguide/pod-execution-role.html) in the *Amazon EKS User Guide* .
	PodExecutionRoleArn *string `field:"required" json:"podExecutionRoleArn" yaml:"podExecutionRoleArn"`
	// The selectors to match for pods to use this Fargate profile.
	//
	// Each selector must have an associated namespace. Optionally, you can also specify labels for a namespace. You may specify up to five selectors in a Fargate profile.
	Selectors interface{} `field:"required" json:"selectors" yaml:"selectors"`
	// The name of the Fargate profile.
	FargateProfileName *string `field:"optional" json:"fargateProfileName" yaml:"fargateProfileName"`
	// The IDs of subnets to launch your pods into.
	//
	// At this time, pods running on Fargate are not assigned public IP addresses, so only private subnets (with no direct route to an Internet Gateway) are accepted for this parameter.
	Subnets *[]*string `field:"optional" json:"subnets" yaml:"subnets"`
	// The metadata to apply to the Fargate profile to assist with categorization and organization.
	//
	// Each tag consists of a key and an optional value. You define both. Fargate profile tags do not propagate to any other resources associated with the Fargate profile, such as the pods that are scheduled with it.
	Tags *[]*awscdk.CfnTag `field:"optional" json:"tags" yaml:"tags"`
}

Properties for defining a `CfnFargateProfile`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnFargateProfileProps := &cfnFargateProfileProps{
	clusterName: jsii.String("clusterName"),
	podExecutionRoleArn: jsii.String("podExecutionRoleArn"),
	selectors: []interface{}{
		&selectorProperty{
			namespace: jsii.String("namespace"),

			// the properties below are optional
			labels: []interface{}{
				&labelProperty{
					key: jsii.String("key"),
					value: jsii.String("value"),
				},
			},
		},
	},

	// the properties below are optional
	fargateProfileName: jsii.String("fargateProfileName"),
	subnets: []*string{
		jsii.String("subnets"),
	},
	tags: []cfnTag{
		&cfnTag{
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
}

type CfnFargateProfile_LabelProperty

type CfnFargateProfile_LabelProperty struct {
	// Enter a key.
	Key *string `field:"required" json:"key" yaml:"key"`
	// Enter a value.
	Value *string `field:"required" json:"value" yaml:"value"`
}

A key-value pair.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

labelProperty := &labelProperty{
	key: jsii.String("key"),
	value: jsii.String("value"),
}

type CfnFargateProfile_SelectorProperty

type CfnFargateProfile_SelectorProperty struct {
	// The Kubernetes namespace that the selector should match.
	Namespace *string `field:"required" json:"namespace" yaml:"namespace"`
	// The Kubernetes labels that the selector should match.
	//
	// A pod must contain all of the labels that are specified in the selector for it to be considered a match.
	Labels interface{} `field:"optional" json:"labels" yaml:"labels"`
}

An object representing an AWS Fargate profile selector.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

selectorProperty := &selectorProperty{
	namespace: jsii.String("namespace"),

	// the properties below are optional
	labels: []interface{}{
		&labelProperty{
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
}

type CfnIdentityProviderConfig added in v2.16.0

type CfnIdentityProviderConfig interface {
	awscdk.CfnResource
	awscdk.IInspectable
	// The Amazon Resource Name (ARN) associated with the identity provider config.
	AttrIdentityProviderConfigArn() *string
	// Options for this resource, such as condition, update policy etc.
	CfnOptions() awscdk.ICfnResourceOptions
	CfnProperties() *map[string]interface{}
	// AWS resource type.
	CfnResourceType() *string
	// The cluster that the configuration is associated to.
	ClusterName() *string
	SetClusterName(val *string)
	// Returns: the stack trace of the point where this Resource was created from, sourced
	// from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most
	// node +internal+ entries filtered.
	CreationStack() *[]*string
	// The name of the configuration.
	IdentityProviderConfigName() *string
	SetIdentityProviderConfigName(val *string)
	// The logical ID for this CloudFormation stack element.
	//
	// The logical ID of the element
	// is calculated from the path of the resource node in the construct tree.
	//
	// To override this value, use `overrideLogicalId(newLogicalId)`.
	//
	// Returns: the logical ID as a stringified token. This value will only get
	// resolved during synthesis.
	LogicalId() *string
	// The tree node.
	Node() constructs.Node
	// An object that represents an OpenID Connect (OIDC) identity provider configuration.
	Oidc() interface{}
	SetOidc(val interface{})
	// Return a string that will be resolved to a CloudFormation `{ Ref }` for this element.
	//
	// If, by any chance, the intrinsic reference of a resource is not a string, you could
	// coerce it to an IResolvable through `Lazy.any({ produce: resource.ref })`.
	Ref() *string
	// The stack in which this element is defined.
	//
	// CfnElements must be defined within a stack scope (directly or indirectly).
	Stack() awscdk.Stack
	// The metadata to apply to the provider configuration to assist with categorization and organization.
	//
	// Each tag consists of a key and an optional value. You define both.
	Tags() awscdk.TagManager
	// The type of the identity provider configuration.
	//
	// The only type available is `oidc` .
	Type() *string
	SetType(val *string)
	// Deprecated.
	// Deprecated: use `updatedProperties`
	//
	// Return properties modified after initiation
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperites() *map[string]interface{}
	// Return properties modified after initiation.
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperties() *map[string]interface{}
	// Syntactic sugar for `addOverride(path, undefined)`.
	AddDeletionOverride(path *string)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	//
	// This can be used for resources across stacks (or nested stack) boundaries
	// and the dependency will automatically be transferred to the relevant scope.
	AddDependsOn(target awscdk.CfnResource)
	// Add a value to the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	AddMetadata(key *string, value interface{})
	// Adds an override to the synthesized CloudFormation resource.
	//
	// To add a
	// property override, either use `addPropertyOverride` or prefix `path` with
	// "Properties." (i.e. `Properties.TopicName`).
	//
	// If the override is nested, separate each nested level using a dot (.) in the path parameter.
	// If there is an array as part of the nesting, specify the index in the path.
	//
	// To include a literal `.` in the property name, prefix with a `\`. In most
	// programming languages you will need to write this as `"\\."` because the
	// `\` itself will need to be escaped.
	//
	// For example,
	// ```typescript
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes', ['myattribute']);
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.1.ProjectionType', 'INCLUDE');
	// ```
	// would add the overrides
	// ```json
	// "Properties": {
	//    "GlobalSecondaryIndexes": [
	//      {
	//        "Projection": {
	//          "NonKeyAttributes": [ "myattribute" ]
	//          ...
	//        }
	//        ...
	//      },
	//      {
	//        "ProjectionType": "INCLUDE"
	//        ...
	//      },
	//    ]
	//    ...
	// }
	// ```
	//
	// The `value` argument to `addOverride` will not be processed or translated
	// in any way. Pass raw JSON values in here with the correct capitalization
	// for CloudFormation. If you pass CDK classes or structs, they will be
	// rendered with lowercased key names, and CloudFormation will reject the
	// template.
	AddOverride(path *string, value interface{})
	// Adds an override that deletes the value of a property from the resource definition.
	AddPropertyDeletionOverride(propertyPath *string)
	// Adds an override to a resource property.
	//
	// Syntactic sugar for `addOverride("Properties.<...>", value)`.
	AddPropertyOverride(propertyPath *string, value interface{})
	// Sets the deletion policy of the resource based on the removal policy specified.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`). In some
	// cases, a snapshot can be taken of the resource prior to deletion
	// (`RemovalPolicy.SNAPSHOT`). A list of resources that support this policy
	// can be found at the following link:
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html#aws-attribute-deletionpolicy-options
	//
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy, options *awscdk.RemovalPolicyOptions)
	// Returns a token for a runtime attribute of this resource.
	//
	// Ideally, use generated attribute accessors (e.g. `resource.arn`), but this can be used for future compatibility
	// in case there is no generated attribute.
	GetAtt(attributeName *string) awscdk.Reference
	// Retrieve a value from the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	GetMetadata(key *string) interface{}
	// Examines the CloudFormation resource and discloses attributes.
	Inspect(inspector awscdk.TreeInspector)
	// Overrides the auto-generated logical ID with a specific ID.
	OverrideLogicalId(newLogicalId *string)
	RenderProperties(props *map[string]interface{}) *map[string]interface{}
	// Can be overridden by subclasses to determine if this resource will be rendered into the CloudFormation template.
	//
	// Returns: `true` if the resource should be included or `false` if the resource
	// should be omitted.
	ShouldSynthesize() *bool
	// Returns a string representation of this construct.
	//
	// Returns: a string representation of this resource.
	ToString() *string
	ValidateProperties(_properties interface{})
}

A CloudFormation `AWS::EKS::IdentityProviderConfig`.

Associate an identity provider configuration to a cluster.

If you want to authenticate identities using an identity provider, you can create an identity provider configuration and associate it with your cluster. After configuring authentication to your cluster, you can create Kubernetes `roles` and `clusterroles` to assign permissions, and then bind the roles to identities using Kubernetes `rolebindings` and `clusterrolebindings`. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnIdentityProviderConfig := awscdk.Aws_eks.NewCfnIdentityProviderConfig(this, jsii.String("MyCfnIdentityProviderConfig"), &cfnIdentityProviderConfigProps{
	clusterName: jsii.String("clusterName"),
	type: jsii.String("type"),

	// the properties below are optional
	identityProviderConfigName: jsii.String("identityProviderConfigName"),
	oidc: &oidcIdentityProviderConfigProperty{
		clientId: jsii.String("clientId"),
		issuerUrl: jsii.String("issuerUrl"),

		// the properties below are optional
		groupsClaim: jsii.String("groupsClaim"),
		groupsPrefix: jsii.String("groupsPrefix"),
		requiredClaims: []interface{}{
			&requiredClaimProperty{
				key: jsii.String("key"),
				value: jsii.String("value"),
			},
		},
		usernameClaim: jsii.String("usernameClaim"),
		usernamePrefix: jsii.String("usernamePrefix"),
	},
	tags: []cfnTag{
		&cfnTag{
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
})
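
Once the provider configuration is associated, the role-binding step described above can be expressed as a Kubernetes manifest. The sketch below is illustrative only: it assumes a kubectl-enabled `Cluster` construct named `cluster` is already defined in the stack and that the provider's `groupsPrefix` is `oidc:`; the group and binding names are placeholders.

// A sketch: bind the (assumed) provider group "oidc:engineering" to the built-in "view" ClusterRole.
cluster.addManifest(jsii.String("oidc-engineering-view"), map[string]interface{}{
	"apiVersion": jsii.String("rbac.authorization.k8s.io/v1"),
	"kind": jsii.String("ClusterRoleBinding"),
	"metadata": map[string]*string{
		"name": jsii.String("oidc-engineering-view"),
	},
	"roleRef": map[string]*string{
		"apiGroup": jsii.String("rbac.authorization.k8s.io"),
		"kind": jsii.String("ClusterRole"),
		"name": jsii.String("view"),
	},
	"subjects": []map[string]*string{
		map[string]*string{
			"apiGroup": jsii.String("rbac.authorization.k8s.io"),
			"kind": jsii.String("Group"),
			"name": jsii.String("oidc:engineering"),
		},
	},
})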

func NewCfnIdentityProviderConfig added in v2.16.0

func NewCfnIdentityProviderConfig(scope constructs.Construct, id *string, props *CfnIdentityProviderConfigProps) CfnIdentityProviderConfig

Create a new `AWS::EKS::IdentityProviderConfig`.
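
A minimal sketch using only the required properties plus the OIDC block; the cluster name, client ID, and issuer URL are placeholders (the only supported `type` is `oidc`):

// Sketch with placeholder values, not a canonical example.
minimalIdpConfig := awscdk.Aws_eks.NewCfnIdentityProviderConfig(this, jsii.String("OidcConfig"), &cfnIdentityProviderConfigProps{
	clusterName: jsii.String("my-cluster"),
	type: jsii.String("oidc"),
	oidc: &oidcIdentityProviderConfigProperty{
		clientId: jsii.String("my-client-id"),
		issuerUrl: jsii.String("https://oidc.example.com"),
	},
})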

type CfnIdentityProviderConfigProps added in v2.16.0

type CfnIdentityProviderConfigProps struct {
	// The cluster that the configuration is associated to.
	ClusterName *string `field:"required" json:"clusterName" yaml:"clusterName"`
	// The type of the identity provider configuration.
	//
	// The only type available is `oidc` .
	Type *string `field:"required" json:"type" yaml:"type"`
	// The name of the configuration.
	IdentityProviderConfigName *string `field:"optional" json:"identityProviderConfigName" yaml:"identityProviderConfigName"`
	// An object that represents an OpenID Connect (OIDC) identity provider configuration.
	Oidc interface{} `field:"optional" json:"oidc" yaml:"oidc"`
	// The metadata to apply to the provider configuration to assist with categorization and organization.
	//
	// Each tag consists of a key and an optional value. You define both.
	Tags *[]*awscdk.CfnTag `field:"optional" json:"tags" yaml:"tags"`
}

Properties for defining a `CfnIdentityProviderConfig`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

cfnIdentityProviderConfigProps := &cfnIdentityProviderConfigProps{
	clusterName: jsii.String("clusterName"),
	type: jsii.String("type"),

	// the properties below are optional
	identityProviderConfigName: jsii.String("identityProviderConfigName"),
	oidc: &oidcIdentityProviderConfigProperty{
		clientId: jsii.String("clientId"),
		issuerUrl: jsii.String("issuerUrl"),

		// the properties below are optional
		groupsClaim: jsii.String("groupsClaim"),
		groupsPrefix: jsii.String("groupsPrefix"),
		requiredClaims: []interface{}{
			&requiredClaimProperty{
				key: jsii.String("key"),
				value: jsii.String("value"),
			},
		},
		usernameClaim: jsii.String("usernameClaim"),
		usernamePrefix: jsii.String("usernamePrefix"),
	},
	tags: []cfnTag{
		&cfnTag{
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
}

type CfnIdentityProviderConfig_OidcIdentityProviderConfigProperty added in v2.16.0

type CfnIdentityProviderConfig_OidcIdentityProviderConfigProperty struct {
	// This is also known as *audience* .
	//
	// The ID of the client application that makes authentication requests to the OIDC identity provider.
	ClientId *string `field:"required" json:"clientId" yaml:"clientId"`
	// The URL of the OIDC identity provider that allows the API server to discover public signing keys for verifying tokens.
	IssuerUrl *string `field:"required" json:"issuerUrl" yaml:"issuerUrl"`
	// The JSON web token (JWT) claim that the provider uses to return your groups.
	GroupsClaim *string `field:"optional" json:"groupsClaim" yaml:"groupsClaim"`
	// The prefix that is prepended to group claims to prevent clashes with existing names (such as `system:` groups).
	//
	// For example, the value `oidc:` creates group names like `oidc:engineering` and `oidc:infra`. The prefix can't contain `system:`.
	GroupsPrefix *string `field:"optional" json:"groupsPrefix" yaml:"groupsPrefix"`
	// The key-value pairs that describe required claims in the identity token.
	//
	// If set, each claim is verified to be present in the token with a matching value.
	RequiredClaims interface{} `field:"optional" json:"requiredClaims" yaml:"requiredClaims"`
	// The JSON Web token (JWT) claim that is used as the username.
	UsernameClaim *string `field:"optional" json:"usernameClaim" yaml:"usernameClaim"`
	// The prefix that is prepended to username claims to prevent clashes with existing names.
	//
	// The prefix can't contain `system:`.
	UsernamePrefix *string `field:"optional" json:"usernamePrefix" yaml:"usernamePrefix"`
}

An object that represents the configuration for an OpenID Connect (OIDC) identity provider.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

oidcIdentityProviderConfigProperty := &oidcIdentityProviderConfigProperty{
	clientId: jsii.String("clientId"),
	issuerUrl: jsii.String("issuerUrl"),

	// the properties below are optional
	groupsClaim: jsii.String("groupsClaim"),
	groupsPrefix: jsii.String("groupsPrefix"),
	requiredClaims: []interface{}{
		&requiredClaimProperty{
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
	usernameClaim: jsii.String("usernameClaim"),
	usernamePrefix: jsii.String("usernamePrefix"),
}
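
Tying the optional fields together: with `groupsClaim` pointing at the provider's groups claim and `groupsPrefix` set to `oidc:`, a provider group named `engineering` appears in the cluster as `oidc:engineering`. A sketch with a placeholder client ID and issuer URL:

// Sketch only; clientId and issuerUrl are assumed values.
oidcWithGroups := &oidcIdentityProviderConfigProperty{
	clientId: jsii.String("my-client-id"),
	issuerUrl: jsii.String("https://oidc.example.com"),

	groupsClaim: jsii.String("groups"),
	groupsPrefix: jsii.String("oidc:"),
}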

type CfnIdentityProviderConfig_RequiredClaimProperty added in v2.16.0

type CfnIdentityProviderConfig_RequiredClaimProperty struct {
	// The key to match from the token.
	Key *string `field:"required" json:"key" yaml:"key"`
	// The value for the key from the token.
	Value *string `field:"required" json:"value" yaml:"value"`
}

A key-value pair that describes a required claim in the identity token.

If set, each claim is verified to be present in the token with a matching value.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

requiredClaimProperty := &requiredClaimProperty{
	key: jsii.String("key"),
	value: jsii.String("value"),
}

type CfnNodegroup

type CfnNodegroup interface {
	awscdk.CfnResource
	awscdk.IInspectable
	// The AMI type for your node group.
	//
	// GPU instance types should use the `AL2_x86_64_GPU` AMI type. Non-GPU instances should use the `AL2_x86_64` AMI type. Arm instances should use the `AL2_ARM_64` AMI type. All types use the Amazon EKS optimized Amazon Linux 2 AMI. If you specify `launchTemplate` , and your launch template uses a custom AMI, then don't specify `amiType` , or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	AmiType() *string
	SetAmiType(val *string)
	// The Amazon Resource Name (ARN) associated with the managed node group.
	AttrArn() *string
	// The name of the cluster that the managed node group resides in.
	AttrClusterName() *string
	AttrId() *string
	// The name associated with an Amazon EKS managed node group.
	AttrNodegroupName() *string
	// The capacity type of your managed node group.
	CapacityType() *string
	SetCapacityType(val *string)
	// Options for this resource, such as condition, update policy etc.
	CfnOptions() awscdk.ICfnResourceOptions
	CfnProperties() *map[string]interface{}
	// AWS resource type.
	CfnResourceType() *string
	// The name of the cluster to create the node group in.
	ClusterName() *string
	SetClusterName(val *string)
	// Returns: the stack trace of the point where this Resource was created from, sourced
	// from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most
	// node +internal+ entries filtered.
	CreationStack() *[]*string
	// The root device disk size (in GiB) for your node group instances.
	//
	// The default disk size is 20 GiB. If you specify `launchTemplate` , then don't specify `diskSize` , or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	DiskSize() *float64
	SetDiskSize(val *float64)
	// Force the update if the existing node group's pods are unable to be drained due to a pod disruption budget issue.
	//
	// If an update fails because pods could not be drained, you can force the update after it fails to terminate the old node whether or not any pods are running on the node.
	ForceUpdateEnabled() interface{}
	SetForceUpdateEnabled(val interface{})
	// Specify the instance types for a node group.
	//
	// If you specify a GPU instance type, be sure to specify `AL2_x86_64_GPU` with the `amiType` parameter. If you specify `launchTemplate` , then you can specify zero or one instance type in your launch template *or* you can specify 0-20 instance types for `instanceTypes` . If however, you specify an instance type in your launch template *and* specify any `instanceTypes` , the node group deployment will fail. If you don't specify an instance type in a launch template or for `instanceTypes` , then `t3.medium` is used, by default. If you specify `Spot` for `capacityType` , then we recommend specifying multiple values for `instanceTypes` . For more information, see [Managed node group capacity types](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html#managed-node-group-capacity-types) and [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	InstanceTypes() *[]*string
	SetInstanceTypes(val *[]*string)
	// The Kubernetes labels to be applied to the nodes in the node group when they are created.
	Labels() interface{}
	SetLabels(val interface{})
	// An object representing a node group's launch template specification.
	//
	// If specified, then do not specify `instanceTypes` , `diskSize` , or `remoteAccess` and make sure that the launch template meets the requirements in `launchTemplateSpecification` .
	LaunchTemplate() interface{}
	SetLaunchTemplate(val interface{})
	// The logical ID for this CloudFormation stack element.
	//
	// The logical ID of the element
	// is calculated from the path of the resource node in the construct tree.
	//
	// To override this value, use `overrideLogicalId(newLogicalId)`.
	//
	// Returns: the logical ID as a stringified token. This value will only get
	// resolved during synthesis.
	LogicalId() *string
	// The tree node.
	Node() constructs.Node
	// The unique name to give your node group.
	NodegroupName() *string
	SetNodegroupName(val *string)
	// The Amazon Resource Name (ARN) of the IAM role to associate with your node group.
	//
	// The Amazon EKS worker node `kubelet` daemon makes calls to AWS APIs on your behalf. Nodes receive permissions for these API calls through an IAM instance profile and associated policies. Before you can launch nodes and register them into a cluster, you must create an IAM role for those nodes to use when they are launched. For more information, see [Amazon EKS node IAM role](https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html) in the **Amazon EKS User Guide** . If you specify `launchTemplate` , then don't specify [`IamInstanceProfile`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_IamInstanceProfile.html) in your launch template, or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	NodeRole() *string
	SetNodeRole(val *string)
	// Return a string that will be resolved to a CloudFormation `{ Ref }` for this element.
	//
	// If, by any chance, the intrinsic reference of a resource is not a string, you could
	// coerce it to an IResolvable through `Lazy.any({ produce: resource.ref })`.
	Ref() *string
	// The AMI version of the Amazon EKS optimized AMI to use with your node group (for example, `1.14.7- *YYYYMMDD*` ). By default, the latest available AMI version for the node group's current Kubernetes version is used. For more information, see [Amazon EKS optimized Linux AMI Versions](https://docs.aws.amazon.com/eks/latest/userguide/eks-linux-ami-versions.html) in the *Amazon EKS User Guide* .
	//
	// > Changing this value triggers an update of the node group if one is available. However, only the latest available AMI release version is valid as an input. You cannot roll back to a previous AMI release version.
	ReleaseVersion() *string
	SetReleaseVersion(val *string)
	// The remote access (SSH) configuration to use with your node group.
	//
	// If you specify `launchTemplate` , then don't specify `remoteAccess` , or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	RemoteAccess() interface{}
	SetRemoteAccess(val interface{})
	// The scaling configuration details for the Auto Scaling group that is created for your node group.
	ScalingConfig() interface{}
	SetScalingConfig(val interface{})
	// The stack in which this element is defined.
	//
	// CfnElements must be defined within a stack scope (directly or indirectly).
	Stack() awscdk.Stack
	// The subnets to use for the Auto Scaling group that is created for your node group.
	//
	// If you specify `launchTemplate` , then don't specify [`SubnetId`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateNetworkInterface.html) in your launch template, or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	Subnets() *[]*string
	SetSubnets(val *[]*string)
	// The metadata to apply to the node group to assist with categorization and organization.
	//
	// Each tag consists of a key and an optional value. You define both. Node group tags do not propagate to any other resources associated with the node group, such as the Amazon EC2 instances or subnets.
	Tags() awscdk.TagManager
	// The Kubernetes taints to be applied to the nodes in the node group when they are created.
	//
	// Effect is one of `No_Schedule` , `Prefer_No_Schedule` , or `No_Execute` . Kubernetes taints can be used together with tolerations to control how workloads are scheduled to your nodes. For more information, see [Node taints on managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/node-taints-managed-node-groups.html) .
	Taints() interface{}
	SetTaints(val interface{})
	// The node group update configuration.
	UpdateConfig() interface{}
	SetUpdateConfig(val interface{})
	// Deprecated.
	// Deprecated: use `updatedProperties`
	//
	// Return properties modified after initiation
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperites() *map[string]interface{}
	// Return properties modified after initiation.
	//
	// Resources that expose mutable properties should override this function to
	// collect and return the properties object for this resource.
	UpdatedProperties() *map[string]interface{}
	// The Kubernetes version to use for your managed nodes.
	//
	// By default, the Kubernetes version of the cluster is used, and this is the only accepted specified value. If you specify `launchTemplate` , and your launch template uses a custom AMI, then don't specify `version` , or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	Version() *string
	SetVersion(val *string)
	// Syntactic sugar for `addOverride(path, undefined)`.
	AddDeletionOverride(path *string)
	// Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
	//
	// This can be used for resources across stacks (or nested stack) boundaries
	// and the dependency will automatically be transferred to the relevant scope.
	AddDependsOn(target awscdk.CfnResource)
	// Add a value to the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	AddMetadata(key *string, value interface{})
	// Adds an override to the synthesized CloudFormation resource.
	//
	// To add a
	// property override, either use `addPropertyOverride` or prefix `path` with
	// "Properties." (i.e. `Properties.TopicName`).
	//
	// If the override is nested, separate each nested level using a dot (.) in the path parameter.
	// If there is an array as part of the nesting, specify the index in the path.
	//
	// To include a literal `.` in the property name, prefix with a `\`. In most
	// programming languages you will need to write this as `"\\."` because the
	// `\` itself will need to be escaped.
	//
	// For example,
	// ```typescript
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes', ['myattribute']);
	// cfnResource.addOverride('Properties.GlobalSecondaryIndexes.1.ProjectionType', 'INCLUDE');
	// ```
	// would add the overrides
	// ```json
	// "Properties": {
	//    "GlobalSecondaryIndexes": [
	//      {
	//        "Projection": {
	//          "NonKeyAttributes": [ "myattribute" ]
	//          ...
	//        }
	//        ...
	//      },
	//      {
	//        "ProjectionType": "INCLUDE"
	//        ...
	//      },
	//    ]
	//    ...
	// }
	// ```
	//
	// The `value` argument to `addOverride` will not be processed or translated
	// in any way. Pass raw JSON values in here with the correct capitalization
	// for CloudFormation. If you pass CDK classes or structs, they will be
	// rendered with lowercased key names, and CloudFormation will reject the
	// template.
	AddOverride(path *string, value interface{})
	// Adds an override that deletes the value of a property from the resource definition.
	AddPropertyDeletionOverride(propertyPath *string)
	// Adds an override to a resource property.
	//
	// Syntactic sugar for `addOverride("Properties.<...>", value)`.
	AddPropertyOverride(propertyPath *string, value interface{})
	// Sets the deletion policy of the resource based on the removal policy specified.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`). In some
	// cases, a snapshot can be taken of the resource prior to deletion
	// (`RemovalPolicy.SNAPSHOT`). A list of resources that support this policy
	// can be found at the following link:
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html#aws-attribute-deletionpolicy-options
	//
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy, options *awscdk.RemovalPolicyOptions)
	// Returns a token for a runtime attribute of this resource.
	//
	// Ideally, use generated attribute accessors (e.g. `resource.arn`), but this can be used for future compatibility
	// in case there is no generated attribute.
	GetAtt(attributeName *string) awscdk.Reference
	// Retrieve a value from the CloudFormation Resource Metadata.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	// Note that this is a different set of metadata from CDK node metadata; this
	// metadata ends up in the stack template under the resource, whereas CDK
	// node metadata ends up in the Cloud Assembly.
	//
	GetMetadata(key *string) interface{}
	// Examines the CloudFormation resource and discloses attributes.
	Inspect(inspector awscdk.TreeInspector)
	// Overrides the auto-generated logical ID with a specific ID.
	OverrideLogicalId(newLogicalId *string)
	RenderProperties(props *map[string]interface{}) *map[string]interface{}
	// Can be overridden by subclasses to determine if this resource will be rendered into the CloudFormation template.
	//
	// Returns: `true` if the resource should be included or `false` if the resource
	// should be omitted.
	ShouldSynthesize() *bool
	// Returns a string representation of this construct.
	//
	// Returns: a string representation of this resource.
	ToString() *string
	ValidateProperties(_properties interface{})
}

A CloudFormation `AWS::EKS::Nodegroup`.

Creates a managed node group for an Amazon EKS cluster. You can only create a node group for your cluster that is equal to the current Kubernetes version for the cluster. All node groups are created with the latest AMI release version for the respective minor Kubernetes version of the cluster, unless you deploy a custom AMI using a launch template. For more information about using launch templates, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) .

An Amazon EKS managed node group is an Amazon EC2 Auto Scaling group and associated Amazon EC2 instances that are managed by AWS for an Amazon EKS cluster. Each node group uses a version of the Amazon EKS optimized Amazon Linux 2 AMI. For more information, see [Managed Node Groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) in the *Amazon EKS User Guide* .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var labels interface{}
var tags interface{}

cfnNodegroup := awscdk.Aws_eks.NewCfnNodegroup(this, jsii.String("MyCfnNodegroup"), &cfnNodegroupProps{
	clusterName: jsii.String("clusterName"),
	nodeRole: jsii.String("nodeRole"),
	subnets: []*string{
		jsii.String("subnets"),
	},

	// the properties below are optional
	amiType: jsii.String("amiType"),
	capacityType: jsii.String("capacityType"),
	diskSize: jsii.Number(123),
	forceUpdateEnabled: jsii.Boolean(false),
	instanceTypes: []*string{
		jsii.String("instanceTypes"),
	},
	labels: labels,
	launchTemplate: &launchTemplateSpecificationProperty{
		id: jsii.String("id"),
		name: jsii.String("name"),
		version: jsii.String("version"),
	},
	nodegroupName: jsii.String("nodegroupName"),
	releaseVersion: jsii.String("releaseVersion"),
	remoteAccess: &remoteAccessProperty{
		ec2SshKey: jsii.String("ec2SshKey"),

		// the properties below are optional
		sourceSecurityGroups: []*string{
			jsii.String("sourceSecurityGroups"),
		},
	},
	scalingConfig: &scalingConfigProperty{
		desiredSize: jsii.Number(123),
		maxSize: jsii.Number(123),
		minSize: jsii.Number(123),
	},
	tags: tags,
	taints: []interface{}{
		&taintProperty{
			effect: jsii.String("effect"),
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
	updateConfig: &updateConfigProperty{
		maxUnavailable: jsii.Number(123),
		maxUnavailablePercentage: jsii.Number(123),
	},
	version: jsii.String("version"),
})

func NewCfnNodegroup

func NewCfnNodegroup(scope constructs.Construct, id *string, props *CfnNodegroupProps) CfnNodegroup

Create a new `AWS::EKS::Nodegroup`.
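
Only `clusterName`, `nodeRole`, and `subnets` are required (see the props below); a node group that accepts the defaults for everything else could look like the following sketch, where the role ARN and subnet IDs are placeholders:

// Sketch with placeholder ARN and subnet IDs.
minimalNodegroup := awscdk.Aws_eks.NewCfnNodegroup(this, jsii.String("MinimalNodegroup"), &cfnNodegroupProps{
	clusterName: jsii.String("my-cluster"),
	nodeRole: jsii.String("arn:aws:iam::111122223333:role/my-node-role"),
	subnets: []*string{
		jsii.String("subnet-1111"),
		jsii.String("subnet-2222"),
	},
})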

type CfnNodegroupProps

type CfnNodegroupProps struct {
	// The name of the cluster to create the node group in.
	ClusterName *string `field:"required" json:"clusterName" yaml:"clusterName"`
	// The Amazon Resource Name (ARN) of the IAM role to associate with your node group.
	//
	// The Amazon EKS worker node `kubelet` daemon makes calls to AWS APIs on your behalf. Nodes receive permissions for these API calls through an IAM instance profile and associated policies. Before you can launch nodes and register them into a cluster, you must create an IAM role for those nodes to use when they are launched. For more information, see [Amazon EKS node IAM role](https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html) in the **Amazon EKS User Guide** . If you specify `launchTemplate` , then don't specify [`IamInstanceProfile`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_IamInstanceProfile.html) in your launch template, or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	NodeRole *string `field:"required" json:"nodeRole" yaml:"nodeRole"`
	// The subnets to use for the Auto Scaling group that is created for your node group.
	//
	// If you specify `launchTemplate` , then don't specify [`SubnetId`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateNetworkInterface.html) in your launch template, or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	Subnets *[]*string `field:"required" json:"subnets" yaml:"subnets"`
	// The AMI type for your node group.
	//
	// GPU instance types should use the `AL2_x86_64_GPU` AMI type. Non-GPU instances should use the `AL2_x86_64` AMI type. Arm instances should use the `AL2_ARM_64` AMI type. All types use the Amazon EKS optimized Amazon Linux 2 AMI. If you specify `launchTemplate` , and your launch template uses a custom AMI, then don't specify `amiType` , or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	AmiType *string `field:"optional" json:"amiType" yaml:"amiType"`
	// The capacity type of your managed node group.
	CapacityType *string `field:"optional" json:"capacityType" yaml:"capacityType"`
	// The root device disk size (in GiB) for your node group instances.
	//
	// The default disk size is 20 GiB. If you specify `launchTemplate` , then don't specify `diskSize` , or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	DiskSize *float64 `field:"optional" json:"diskSize" yaml:"diskSize"`
	// Force the update if the existing node group's pods are unable to be drained due to a pod disruption budget issue.
	//
	// If an update fails because pods could not be drained, you can force the update after it fails to terminate the old node whether or not any pods are running on the node.
	ForceUpdateEnabled interface{} `field:"optional" json:"forceUpdateEnabled" yaml:"forceUpdateEnabled"`
	// Specify the instance types for a node group.
	//
	// If you specify a GPU instance type, be sure to specify `AL2_x86_64_GPU` with the `amiType` parameter. If you specify `launchTemplate` , then you can specify zero or one instance type in your launch template *or* you can specify 0-20 instance types for `instanceTypes` . If however, you specify an instance type in your launch template *and* specify any `instanceTypes` , the node group deployment will fail. If you don't specify an instance type in a launch template or for `instanceTypes` , then `t3.medium` is used, by default. If you specify `Spot` for `capacityType` , then we recommend specifying multiple values for `instanceTypes` . For more information, see [Managed node group capacity types](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html#managed-node-group-capacity-types) and [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	InstanceTypes *[]*string `field:"optional" json:"instanceTypes" yaml:"instanceTypes"`
	// The Kubernetes labels to be applied to the nodes in the node group when they are created.
	Labels interface{} `field:"optional" json:"labels" yaml:"labels"`
	// An object representing a node group's launch template specification.
	//
	// If specified, then do not specify `instanceTypes` , `diskSize` , or `remoteAccess` and make sure that the launch template meets the requirements in `launchTemplateSpecification` .
	LaunchTemplate interface{} `field:"optional" json:"launchTemplate" yaml:"launchTemplate"`
	// The unique name to give your node group.
	NodegroupName *string `field:"optional" json:"nodegroupName" yaml:"nodegroupName"`
	// The AMI version of the Amazon EKS optimized AMI to use with your node group (for example, `1.14.7- *YYYYMMDD*` ). By default, the latest available AMI version for the node group's current Kubernetes version is used. For more information, see [Amazon EKS optimized Linux AMI Versions](https://docs.aws.amazon.com/eks/latest/userguide/eks-linux-ami-versions.html) in the *Amazon EKS User Guide* .
	//
	// > Changing this value triggers an update of the node group if one is available. However, only the latest available AMI release version is valid as an input. You cannot roll back to a previous AMI release version.
	ReleaseVersion *string `field:"optional" json:"releaseVersion" yaml:"releaseVersion"`
	// The remote access (SSH) configuration to use with your node group.
	//
	// If you specify `launchTemplate` , then don't specify `remoteAccess` , or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	RemoteAccess interface{} `field:"optional" json:"remoteAccess" yaml:"remoteAccess"`
	// The scaling configuration details for the Auto Scaling group that is created for your node group.
	ScalingConfig interface{} `field:"optional" json:"scalingConfig" yaml:"scalingConfig"`
	// The metadata to apply to the node group to assist with categorization and organization.
	//
	// Each tag consists of a key and an optional value. You define both. Node group tags do not propagate to any other resources associated with the node group, such as the Amazon EC2 instances or subnets.
	Tags interface{} `field:"optional" json:"tags" yaml:"tags"`
	// The Kubernetes taints to be applied to the nodes in the node group when they are created.
	//
	// Effect is one of `No_Schedule` , `Prefer_No_Schedule` , or `No_Execute` . Kubernetes taints can be used together with tolerations to control how workloads are scheduled to your nodes. For more information, see [Node taints on managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/node-taints-managed-node-groups.html) .
	Taints interface{} `field:"optional" json:"taints" yaml:"taints"`
	// The node group update configuration.
	UpdateConfig interface{} `field:"optional" json:"updateConfig" yaml:"updateConfig"`
	// The Kubernetes version to use for your managed nodes.
	//
	// By default, the Kubernetes version of the cluster is used, and this is the only accepted specified value. If you specify `launchTemplate` , and your launch template uses a custom AMI, then don't specify `version` , or the node group deployment will fail. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .
	Version *string `field:"optional" json:"version" yaml:"version"`
}

Properties for defining a `CfnNodegroup`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var labels interface{}
var tags interface{}

cfnNodegroupProps := &cfnNodegroupProps{
	clusterName: jsii.String("clusterName"),
	nodeRole: jsii.String("nodeRole"),
	subnets: []*string{
		jsii.String("subnets"),
	},

	// the properties below are optional
	amiType: jsii.String("amiType"),
	capacityType: jsii.String("capacityType"),
	diskSize: jsii.Number(123),
	forceUpdateEnabled: jsii.Boolean(false),
	instanceTypes: []*string{
		jsii.String("instanceTypes"),
	},
	labels: labels,
	launchTemplate: &launchTemplateSpecificationProperty{
		id: jsii.String("id"),
		name: jsii.String("name"),
		version: jsii.String("version"),
	},
	nodegroupName: jsii.String("nodegroupName"),
	releaseVersion: jsii.String("releaseVersion"),
	remoteAccess: &remoteAccessProperty{
		ec2SshKey: jsii.String("ec2SshKey"),

		// the properties below are optional
		sourceSecurityGroups: []*string{
			jsii.String("sourceSecurityGroups"),
		},
	},
	scalingConfig: &scalingConfigProperty{
		desiredSize: jsii.Number(123),
		maxSize: jsii.Number(123),
		minSize: jsii.Number(123),
	},
	tags: tags,
	taints: []interface{}{
		&taintProperty{
			effect: jsii.String("effect"),
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
	updateConfig: &updateConfigProperty{
		maxUnavailable: jsii.Number(123),
		maxUnavailablePercentage: jsii.Number(123),
	},
	version: jsii.String("version"),
}

type CfnNodegroup_LaunchTemplateSpecificationProperty

type CfnNodegroup_LaunchTemplateSpecificationProperty struct {
	// The ID of the launch template.
	Id *string `field:"optional" json:"id" yaml:"id"`
	// The name of the launch template.
	Name *string `field:"optional" json:"name" yaml:"name"`
	// The version of the launch template to use.
	//
	// If no version is specified, then the template's default version is used.
	Version *string `field:"optional" json:"version" yaml:"version"`
}

An object representing a node group launch template specification.

The launch template cannot include [`SubnetId`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateNetworkInterface.html) , [`IamInstanceProfile`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_IamInstanceProfile.html) , [`RequestSpotInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RequestSpotInstances.html) , [`HibernationOptions`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_HibernationOptionsRequest.html) , or [`TerminateInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TerminateInstances.html) , or the node group deployment or update will fail. For more information about launch templates, see [`CreateLaunchTemplate`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateLaunchTemplate.html) in the Amazon EC2 API Reference. For more information about using launch templates with Amazon EKS, see [Launch template support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html) in the *Amazon EKS User Guide* .

Specify either `name` or `id` , but not both.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

launchTemplateSpecificationProperty := &launchTemplateSpecificationProperty{
	id: jsii.String("id"),
	name: jsii.String("name"),
	version: jsii.String("version"),
}
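
Because only one of `id` or `name` should be set, a specification used in practice typically pins a single identifier together with a version, as in this sketch (both values are placeholders):

// Sketch: reference a launch template by ID and pin a specific version.
launchTemplateById := &launchTemplateSpecificationProperty{
	id: jsii.String("lt-1234567890abcdef0"),
	version: jsii.String("2"),
}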

type CfnNodegroup_RemoteAccessProperty

type CfnNodegroup_RemoteAccessProperty struct {
	// The Amazon EC2 SSH key that provides access for SSH communication with the nodes in the managed node group.
	//
	// For more information, see [Amazon EC2 key pairs and Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon Elastic Compute Cloud User Guide for Linux Instances* .
	Ec2SshKey *string `field:"required" json:"ec2SshKey" yaml:"ec2SshKey"`
	// The security groups that are allowed SSH access (port 22) to the nodes.
	//
	// If you specify an Amazon EC2 SSH key but do not specify a source security group when you create a managed node group, then port 22 on the nodes is opened to the internet (0.0.0.0/0). For more information, see [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon Virtual Private Cloud User Guide* .
	SourceSecurityGroups *[]*string `field:"optional" json:"sourceSecurityGroups" yaml:"sourceSecurityGroups"`
}

An object representing the remote access configuration for the managed node group.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

remoteAccessProperty := &remoteAccessProperty{
	ec2SshKey: jsii.String("ec2SshKey"),

	// the properties below are optional
	sourceSecurityGroups: []*string{
		jsii.String("sourceSecurityGroups"),
	},
}

type CfnNodegroup_ScalingConfigProperty

type CfnNodegroup_ScalingConfigProperty struct {
	// The current number of nodes that the managed node group should maintain.
	//
	// > If you use Cluster Autoscaler, you shouldn't change the desiredSize value directly, as this can cause the Cluster Autoscaler to suddenly scale up or scale down.
	//
	// Whenever this parameter changes, the number of worker nodes in the node group is updated to the specified size. If this parameter is given a value that is smaller than the current number of running worker nodes, the necessary number of worker nodes are terminated to match the given value. When using CloudFormation, no action occurs if you remove this parameter from your CFN template.
	//
	// This parameter can be different from minSize in some cases, such as when starting with extra hosts for testing. This parameter can also be different when you want to start with an estimated number of needed hosts, but let Cluster Autoscaler reduce the number if there are too many. When Cluster Autoscaler is used, the desiredSize parameter is altered by Cluster Autoscaler (but can be out-of-date for short periods of time). Cluster Autoscaler doesn't scale a managed node group lower than minSize or higher than maxSize.
	DesiredSize *float64 `field:"optional" json:"desiredSize" yaml:"desiredSize"`
	// The maximum number of nodes that the managed node group can scale out to.
	//
	// For information about the maximum number that you can specify, see [Amazon EKS service quotas](https://docs.aws.amazon.com/eks/latest/userguide/service-quotas.html) in the *Amazon EKS User Guide* .
	MaxSize *float64 `field:"optional" json:"maxSize" yaml:"maxSize"`
	// The minimum number of nodes that the managed node group can scale in to.
	MinSize *float64 `field:"optional" json:"minSize" yaml:"minSize"`
}

An object representing the scaling configuration details for the Auto Scaling group that is associated with your node group.

When creating a node group, you must specify all or none of the properties. When updating a node group, you can specify any or none of the properties.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

scalingConfigProperty := &scalingConfigProperty{
	desiredSize: jsii.Number(123),
	maxSize: jsii.Number(123),
	minSize: jsii.Number(123),
}
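
Concretely, a configuration that satisfies the all-or-none rule and keeps `minSize <= desiredSize <= maxSize` might look like this sketch (the numbers are illustrative):

// Sketch: all three properties set, with illustrative sizes.
scalingConfig := &scalingConfigProperty{
	desiredSize: jsii.Number(2),
	maxSize: jsii.Number(4),
	minSize: jsii.Number(1),
}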

type CfnNodegroup_TaintProperty

type CfnNodegroup_TaintProperty struct {
	// The effect of the taint.
	Effect *string `field:"optional" json:"effect" yaml:"effect"`
	// The key of the taint.
	Key *string `field:"optional" json:"key" yaml:"key"`
	// The value of the taint.
	Value *string `field:"optional" json:"value" yaml:"value"`
}

A property that allows a node to repel a set of pods.

For more information, see [Node taints on managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/node-taints-managed-node-groups.html) .

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

taintProperty := &taintProperty{
	effect: jsii.String("effect"),
	key: jsii.String("key"),
	value: jsii.String("value"),
}

type CfnNodegroup_UpdateConfigProperty

type CfnNodegroup_UpdateConfigProperty struct {
	// The maximum number of nodes unavailable at once during a version update.
	//
	// Nodes will be updated in parallel. This value or `maxUnavailablePercentage` is required to have a value. The maximum number is 100.
	MaxUnavailable *float64 `field:"optional" json:"maxUnavailable" yaml:"maxUnavailable"`
	// The maximum percentage of nodes unavailable during a version update.
	//
	// This percentage of nodes will be updated in parallel, up to 100 nodes at once. This value or `maxUnavailable` is required to have a value.
	MaxUnavailablePercentage *float64 `field:"optional" json:"maxUnavailablePercentage" yaml:"maxUnavailablePercentage"`
}

The update configuration for the node group.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

updateConfigProperty := &updateConfigProperty{
	maxUnavailable: jsii.Number(123),
	maxUnavailablePercentage: jsii.Number(123),
}
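
Since `maxUnavailable` and `maxUnavailablePercentage` are alternatives, a realistic configuration sets only one of them, for example (the number is illustrative):

// Sketch: limit the rolling update to two unavailable nodes at a time.
updateConfigByCount := &updateConfigProperty{
	maxUnavailable: jsii.Number(2),
}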

type Cluster

type Cluster interface {
	awscdk.Resource
	ICluster
	// An IAM role with administrative permissions to create or update the cluster.
	//
	// This role also has `system:masters` permissions.
	AdminRole() awsiam.Role
	// The ALB Controller construct defined for this cluster.
	//
	// Will be undefined if `albController` wasn't configured.
	AlbController() AlbController
	// Lazily creates the AwsAuth resource, which manages AWS authentication mapping.
	AwsAuth() AwsAuth
	// An AWS Lambda layer that contains the `aws` CLI.
	//
	// If not defined, a default layer will be used containing the AWS CLI 1.x.
	AwscliLayer() awslambda.ILayerVersion
	// The AWS generated ARN for the Cluster resource.
	//
	// For example, `arn:aws:eks:us-west-2:666666666666:cluster/prod`.
	ClusterArn() *string
	// The certificate-authority-data for your cluster.
	ClusterCertificateAuthorityData() *string
	// Amazon Resource Name (ARN) or alias of the customer master key (CMK).
	ClusterEncryptionConfigKeyArn() *string
	// The endpoint URL for the Cluster.
	//
	// This is the URL inside the kubeconfig file to use with kubectl
	//
	// For example, `https://5E1D0CEXAMPLEA591B746AFC5AB30262.yl4.us-west-2.eks.amazonaws.com`
	ClusterEndpoint() *string
	// A security group to associate with the Cluster Handler's Lambdas.
	//
	// The Cluster Handler's Lambdas are responsible for calling AWS's EKS API.
	//
	// Requires `placeClusterHandlerInVpc` to be set to true.
	ClusterHandlerSecurityGroup() awsec2.ISecurityGroup
	// The Name of the created EKS Cluster.
	ClusterName() *string
	// If this cluster is kubectl-enabled, returns the OpenID Connect issuer.
	//
	// This is because the value can only be retrieved by the API and is not exposed
	// by CloudFormation. If this cluster is not kubectl-enabled (i.e. uses the
	// stock `CfnCluster`), this is `undefined`.
	ClusterOpenIdConnectIssuer() *string
	// If this cluster is kubectl-enabled, returns the OpenID Connect issuer url.
	//
	// This is because the value can only be retrieved by the API and is not exposed
	// by CloudFormation. If this cluster is not kubectl-enabled (i.e. uses the
	// stock `CfnCluster`), this is `undefined`.
	ClusterOpenIdConnectIssuerUrl() *string
	// The cluster security group that was created by Amazon EKS for the cluster.
	ClusterSecurityGroup() awsec2.ISecurityGroup
	// The id of the cluster security group that was created by Amazon EKS for the cluster.
	ClusterSecurityGroupId() *string
	// Manages connection rules (Security Group Rules) for the cluster.
	Connections() awsec2.Connections
	// The auto scaling group that hosts the default capacity for this cluster.
	//
	// This will be `undefined` if the `defaultCapacityType` is not `EC2` or
	// `defaultCapacityType` is `EC2` but default capacity is set to 0.
	DefaultCapacity() awsautoscaling.AutoScalingGroup
	// The node group that hosts the default capacity for this cluster.
	//
	// This will be `undefined` if the `defaultCapacityType` is `EC2` or
	// `defaultCapacityType` is `NODEGROUP` but default capacity is set to 0.
	DefaultNodegroup() Nodegroup
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// Custom environment variables when running `kubectl` against this cluster.
	KubectlEnvironment() *map[string]*string
	// An IAM role that can perform kubectl operations against this cluster.
	//
	// The role should be mapped to the `system:masters` Kubernetes RBAC role.
	//
	// This role is directly passed to the lambda handler that sends kubectl commands to the cluster.
	KubectlLambdaRole() awsiam.IRole
	// An AWS Lambda layer that includes `kubectl` and `helm`.
	//
	// If not defined, a default layer will be used containing kubectl 1.20 and Helm 3.8.
	KubectlLayer() awslambda.ILayerVersion
	// The amount of memory allocated to the kubectl provider's lambda function.
	KubectlMemory() awscdk.Size
	// Subnets to host the `kubectl` compute resources.
	KubectlPrivateSubnets() *[]awsec2.ISubnet
	// An IAM role that can perform kubectl operations against this cluster.
	//
	// The role should be mapped to the `system:masters` Kubernetes RBAC role.
	KubectlRole() awsiam.IRole
	// A security group to use for `kubectl` execution.
	KubectlSecurityGroup() awsec2.ISecurityGroup
	// The tree node.
	Node() constructs.Node
	// The AWS Lambda layer that contains the NPM dependency `proxy-agent`.
	//
	// If
	// undefined, a SAR app that contains this layer will be used.
	OnEventLayer() awslambda.ILayerVersion
	// An `OpenIdConnectProvider` resource associated with this cluster, and which can be used to link this cluster to AWS IAM.
	//
	// A provider will only be defined if this property is accessed (lazy initialization).
	OpenIdConnectProvider() awsiam.IOpenIdConnectProvider
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//    cross-environment scenarios.
	PhysicalName() *string
	// Determines if Kubernetes resources can be pruned automatically.
	Prune() *bool
	// IAM role assumed by the EKS Control Plane.
	Role() awsiam.IRole
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// The VPC in which this Cluster was created.
	Vpc() awsec2.IVpc
	// Add nodes to this EKS cluster.
	//
	// The nodes will automatically be configured with the right VPC and AMI
	// for the instance type and Kubernetes version.
	//
	// Note that if you specify `updateType: RollingUpdate` or `updateType: ReplacingUpdate`, your nodes might be replaced at deploy
	// time without notice in case the recommended AMI for your machine image type has been updated by AWS.
	// The default behavior for `updateType` is `None`, which means only new instances will be launched using the new AMI.
	//
	// Spot instances will be labeled `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.
	// In addition, the [spot interrupt handler](https://github.com/awslabs/ec2-spot-labs/tree/master/ec2-spot-eks-solution/spot-termination-handler)
	// daemon will be installed on all spot instances to handle
	// [EC2 Spot Instance Termination Notices](https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/).
	AddAutoScalingGroupCapacity(id *string, options *AutoScalingGroupCapacityOptions) awsautoscaling.AutoScalingGroup
	// Defines a CDK8s chart in this cluster.
	//
	// Returns: a `KubernetesManifest` construct representing the chart.
	AddCdk8sChart(id *string, chart constructs.Construct, options *KubernetesManifestOptions) KubernetesManifest
	// Adds a Fargate profile to this cluster.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html
	//
	AddFargateProfile(id *string, options *FargateProfileOptions) FargateProfile
	// Defines a Helm chart in this cluster.
	//
	// Returns: a `HelmChart` construct.
	AddHelmChart(id *string, options *HelmChartOptions) HelmChart
	// Defines a Kubernetes resource in this cluster.
	//
	// The manifest will be applied/deleted using kubectl as needed.
	//
	// Returns: a `KubernetesResource` object.
	AddManifest(id *string, manifest ...*map[string]interface{}) KubernetesManifest
	// Add managed nodegroup to this Amazon EKS cluster.
	//
	// This method will create a new managed nodegroup and add into the capacity.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html
	//
	AddNodegroupCapacity(id *string, options *NodegroupOptions) Nodegroup
	// Creates a new service account with corresponding IAM Role (IRSA).
	AddServiceAccount(id *string, options *ServiceAccountOptions) ServiceAccount
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Connect capacity in the form of an existing AutoScalingGroup to the EKS cluster.
	//
	// The AutoScalingGroup must be running an EKS-optimized AMI containing the
	// /etc/eks/bootstrap.sh script. This method will configure Security Groups,
	// add the right policies to the instance role, apply the right tags, and add
	// the required user data to the instance's launch configuration.
	//
	// Spot instances will be labeled `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.
	// If kubectl is enabled, the
	// [spot interrupt handler](https://github.com/awslabs/ec2-spot-labs/tree/master/ec2-spot-eks-solution/spot-termination-handler)
	// daemon will be installed on all spot instances to handle
	// [EC2 Spot Instance Termination Notices](https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/).
	//
	// Prefer to use `addAutoScalingGroupCapacity` if possible.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html
	//
	ConnectAutoScalingGroupCapacity(autoScalingGroup awsautoscaling.AutoScalingGroup, options *AutoScalingGroupOptions)
	GeneratePhysicalName() *string
	// Fetch the load balancer address of an ingress backed by a load balancer.
	GetIngressLoadBalancerAddress(ingressName *string, options *IngressLoadBalancerAddressOptions) *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Fetch the load balancer address of a service of type 'LoadBalancer'.
	GetServiceLoadBalancerAddress(serviceName *string, options *ServiceLoadBalancerAddressOptions) *string
	// Returns a string representation of this construct.
	ToString() *string
}

A Cluster represents a managed Kubernetes Service (EKS).

This is a fully managed cluster of API servers (control plane). The user is still required to create the worker nodes.

Example:

var vpc vpc

eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	vpc: vpc,
	vpcSubnets: []subnetSelection{
		&subnetSelection{
			subnetType: ec2.subnetType_PRIVATE_WITH_EGRESS,
		},
	},
})
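
The `addServiceAccount` method listed in the interface above creates a Kubernetes service account backed by an IAM role (IRSA). The following is a minimal sketch; the service-account name and namespace are illustrative assumptions, not values taken from this module's documentation:

var cluster cluster

// illustrative names; adjust them to your workload
serviceAccount := cluster.addServiceAccount(jsii.String("MyServiceAccount"), &serviceAccountOptions{
	name: jsii.String("my-service-account"),
	namespace: jsii.String("default"),
})

// serviceAccount.role is the IAM role bound to the Kubernetes service account
// and can be granted permissions like any other IAM role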

func NewCluster

func NewCluster(scope constructs.Construct, id *string, props *ClusterProps) Cluster

Initiates an EKS Cluster with the supplied arguments.

type ClusterAttributes

type ClusterAttributes struct {
	// The physical name of the Cluster.
	ClusterName *string `field:"required" json:"clusterName" yaml:"clusterName"`
	// An AWS Lambda layer that contains the `aws` CLI.
	//
	// The handler expects the layer to include the following executables:
	//
	// ```
	// /opt/awscli/aws
	// ```
	AwscliLayer awslambda.ILayerVersion `field:"optional" json:"awscliLayer" yaml:"awscliLayer"`
	// The certificate-authority-data for your cluster.
	ClusterCertificateAuthorityData *string `field:"optional" json:"clusterCertificateAuthorityData" yaml:"clusterCertificateAuthorityData"`
	// Amazon Resource Name (ARN) or alias of the customer master key (CMK).
	ClusterEncryptionConfigKeyArn *string `field:"optional" json:"clusterEncryptionConfigKeyArn" yaml:"clusterEncryptionConfigKeyArn"`
	// The API Server endpoint URL.
	ClusterEndpoint *string `field:"optional" json:"clusterEndpoint" yaml:"clusterEndpoint"`
	// A security group id to associate with the Cluster Handler's Lambdas.
	//
	// The Cluster Handler's Lambdas are responsible for calling AWS's EKS API.
	ClusterHandlerSecurityGroupId *string `field:"optional" json:"clusterHandlerSecurityGroupId" yaml:"clusterHandlerSecurityGroupId"`
	// The cluster security group that was created by Amazon EKS for the cluster.
	ClusterSecurityGroupId *string `field:"optional" json:"clusterSecurityGroupId" yaml:"clusterSecurityGroupId"`
	// Environment variables to use when running `kubectl` against this cluster.
	KubectlEnvironment *map[string]*string `field:"optional" json:"kubectlEnvironment" yaml:"kubectlEnvironment"`
	// An IAM role that can perform kubectl operations against this cluster.
	//
	// The role should be mapped to the `system:masters` Kubernetes RBAC role.
	//
	// This role is directly passed to the lambda handler that sends kubectl commands
	// to the cluster.
	KubectlLambdaRole awsiam.IRole `field:"optional" json:"kubectlLambdaRole" yaml:"kubectlLambdaRole"`
	// An AWS Lambda Layer which includes `kubectl` and Helm.
	//
	// This layer is used by the kubectl handler to apply manifests and install
	// helm charts. You must pick an appropriate release of one of the
	// `@aws-cdk/layer-kubectl-vXX` packages that works with the version of
	// Kubernetes you have chosen. If you don't supply this value, `kubectl`
	// 1.20 will be used, but that version is most likely too old.
	//
	// The handler expects the layer to include the following executables:
	//
	// ```
	// /opt/helm/helm
	// /opt/kubectl/kubectl
	// ```
	KubectlLayer awslambda.ILayerVersion `field:"optional" json:"kubectlLayer" yaml:"kubectlLayer"`
	// Amount of memory to allocate to the provider's lambda function.
	KubectlMemory awscdk.Size `field:"optional" json:"kubectlMemory" yaml:"kubectlMemory"`
	// Subnets to host the `kubectl` compute resources.
	//
	// If not specified, the k8s
	// endpoint is expected to be accessible publicly.
	KubectlPrivateSubnetIds *[]*string `field:"optional" json:"kubectlPrivateSubnetIds" yaml:"kubectlPrivateSubnetIds"`
	// KubectlProvider for issuing kubectl commands.
	KubectlProvider IKubectlProvider `field:"optional" json:"kubectlProvider" yaml:"kubectlProvider"`
	// An IAM role with cluster administrator and "system:masters" permissions.
	KubectlRoleArn *string `field:"optional" json:"kubectlRoleArn" yaml:"kubectlRoleArn"`
	// A security group to use for `kubectl` execution.
	//
	// If not specified, the k8s
	// endpoint is expected to be accessible publicly.
	KubectlSecurityGroupId *string `field:"optional" json:"kubectlSecurityGroupId" yaml:"kubectlSecurityGroupId"`
	// An AWS Lambda Layer which includes the NPM dependency `proxy-agent`.
	//
	// This layer
	// is used by the onEvent handler to route AWS SDK requests through a proxy.
	//
	// The handler expects the layer to include the following node_modules:
	//
	// proxy-agent.
	OnEventLayer awslambda.ILayerVersion `field:"optional" json:"onEventLayer" yaml:"onEventLayer"`
	// An Open ID Connect provider for this cluster that can be used to configure service accounts.
	//
	// You can either import an existing provider using `iam.OpenIdConnectProvider.fromProviderArn`,
	// or create a new provider using `new eks.OpenIdConnectProvider`
	OpenIdConnectProvider awsiam.IOpenIdConnectProvider `field:"optional" json:"openIdConnectProvider" yaml:"openIdConnectProvider"`
	// Indicates whether Kubernetes resources added through `addManifest()` can be automatically pruned.
	//
	// When this is enabled (default), prune labels will be
	// allocated and injected to each resource. These labels will then be used
	// when issuing the `kubectl apply` operation with the `--prune` switch.
	Prune *bool `field:"optional" json:"prune" yaml:"prune"`
	// Additional security groups associated with this cluster.
	SecurityGroupIds *[]*string `field:"optional" json:"securityGroupIds" yaml:"securityGroupIds"`
	// The VPC in which this Cluster was created.
	Vpc awsec2.IVpc `field:"optional" json:"vpc" yaml:"vpc"`
}

Attributes for EKS clusters.

Example:

var cluster cluster
var asg autoScalingGroup

importedCluster := eks.cluster.fromClusterAttributes(this, jsii.String("ImportedCluster"), &clusterAttributes{
	clusterName: cluster.clusterName,
	clusterSecurityGroupId: cluster.clusterSecurityGroupId,
})

importedCluster.connectAutoScalingGroupCapacity(asg, &autoScalingGroupOptions{
})
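
If manifests or Helm charts need to be applied to an imported cluster, kubectl-related attributes such as `kubectlRoleArn` must also be supplied. The following is a hedged sketch; the role ARN is a placeholder:

var cluster cluster

kubectlCluster := eks.cluster.fromClusterAttributes(this, jsii.String("KubectlCluster"), &clusterAttributes{
	clusterName: cluster.clusterName,
	kubectlRoleArn: jsii.String("arn:aws:iam::123456789012:role/kubectl-role"),
	openIdConnectProvider: cluster.openIdConnectProvider,
})

// kubectl-backed operations are now available on the imported cluster
kubectlCluster.addManifest(jsii.String("MyNamespace"), map[string]interface{}{
	"apiVersion": jsii.String("v1"),
	"kind": jsii.String("Namespace"),
	"metadata": map[string]*string{
		"name": jsii.String("my-namespace"),
	},
})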

type ClusterLoggingTypes added in v2.10.0

type ClusterLoggingTypes string

EKS cluster logging types.

Example:

cluster := eks.NewCluster(this, jsii.String("Cluster"), &clusterProps{
	// ...
	version: eks.kubernetesVersion_V1_21(),
	clusterLogging: []clusterLoggingTypes{
		eks.*clusterLoggingTypes_API,
		eks.*clusterLoggingTypes_AUTHENTICATOR,
		eks.*clusterLoggingTypes_SCHEDULER,
	},
})
const (
	// Logs pertaining to API requests to the cluster.
	ClusterLoggingTypes_API ClusterLoggingTypes = "API"
	// Logs pertaining to cluster access via the Kubernetes API.
	ClusterLoggingTypes_AUDIT ClusterLoggingTypes = "AUDIT"
	// Logs pertaining to authentication requests into the cluster.
	ClusterLoggingTypes_AUTHENTICATOR ClusterLoggingTypes = "AUTHENTICATOR"
	// Logs pertaining to state of cluster controllers.
	ClusterLoggingTypes_CONTROLLER_MANAGER ClusterLoggingTypes = "CONTROLLER_MANAGER"
	// Logs pertaining to scheduling decisions.
	ClusterLoggingTypes_SCHEDULER ClusterLoggingTypes = "SCHEDULER"
)

type ClusterOptions

type ClusterOptions struct {
	// The Kubernetes version to run in the cluster.
	Version KubernetesVersion `field:"required" json:"version" yaml:"version"`
	// Name for the cluster.
	ClusterName *string `field:"optional" json:"clusterName" yaml:"clusterName"`
	// Determines whether a CloudFormation output with the name of the cluster will be synthesized.
	OutputClusterName *bool `field:"optional" json:"outputClusterName" yaml:"outputClusterName"`
	// Determines whether a CloudFormation output with the `aws eks update-kubeconfig` command will be synthesized.
	//
	// This command will include
	// the cluster name and, if applicable, the ARN of the masters IAM role.
	OutputConfigCommand *bool `field:"optional" json:"outputConfigCommand" yaml:"outputConfigCommand"`
	// Role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
	Role awsiam.IRole `field:"optional" json:"role" yaml:"role"`
	// Security Group to use for Control Plane ENIs.
	SecurityGroup awsec2.ISecurityGroup `field:"optional" json:"securityGroup" yaml:"securityGroup"`
	// The VPC in which to create the Cluster.
	Vpc awsec2.IVpc `field:"optional" json:"vpc" yaml:"vpc"`
	// Where to place EKS Control Plane ENIs.
	//
	// If you want to create public load balancers, this must include public subnets.
	//
	// For example, to only select private subnets, supply the following:
	//
	// `vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }]`
	VpcSubnets *[]*awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
	// Install the AWS Load Balancer Controller onto the cluster.
	// See: https://kubernetes-sigs.github.io/aws-load-balancer-controller
	//
	AlbController *AlbControllerOptions `field:"optional" json:"albController" yaml:"albController"`
	// An AWS Lambda layer that contains the `aws` CLI.
	//
	// The handler expects the layer to include the following executables:
	//
	// ```
	// /opt/awscli/aws
	// ```
	AwscliLayer awslambda.ILayerVersion `field:"optional" json:"awscliLayer" yaml:"awscliLayer"`
	// Custom environment variables when interacting with the EKS endpoint to manage the cluster lifecycle.
	ClusterHandlerEnvironment *map[string]*string `field:"optional" json:"clusterHandlerEnvironment" yaml:"clusterHandlerEnvironment"`
	// A security group to associate with the Cluster Handler's Lambdas.
	//
	// The Cluster Handler's Lambdas are responsible for calling AWS's EKS API.
	//
	// Requires `placeClusterHandlerInVpc` to be set to true.
	ClusterHandlerSecurityGroup awsec2.ISecurityGroup `field:"optional" json:"clusterHandlerSecurityGroup" yaml:"clusterHandlerSecurityGroup"`
	// The cluster log types which you want to enable.
	ClusterLogging *[]ClusterLoggingTypes `field:"optional" json:"clusterLogging" yaml:"clusterLogging"`
	// Controls the "eks.amazonaws.com/compute-type" annotation in the CoreDNS configuration on your cluster to determine which compute type to use for CoreDNS.
	CoreDnsComputeType CoreDnsComputeType `field:"optional" json:"coreDnsComputeType" yaml:"coreDnsComputeType"`
	// Configure access to the Kubernetes API server endpoint.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
	//
	EndpointAccess EndpointAccess `field:"optional" json:"endpointAccess" yaml:"endpointAccess"`
	// Environment variables for the kubectl execution.
	//
	// Only relevant for kubectl enabled clusters.
	KubectlEnvironment *map[string]*string `field:"optional" json:"kubectlEnvironment" yaml:"kubectlEnvironment"`
	// An AWS Lambda Layer which includes `kubectl` and Helm.
	//
	// This layer is used by the kubectl handler to apply manifests and install
	// helm charts. You must pick an appropriate release of one of the
	// `@aws-cdk/layer-kubectl-vXX` packages that works with the version of
	// Kubernetes you have chosen. If you don't supply this value, `kubectl`
	// 1.20 will be used, but that version is most likely too old.
	//
	// The handler expects the layer to include the following executables:
	//
	// ```
	// /opt/helm/helm
	// /opt/kubectl/kubectl
	// ```
	KubectlLayer awslambda.ILayerVersion `field:"optional" json:"kubectlLayer" yaml:"kubectlLayer"`
	// Amount of memory to allocate to the provider's lambda function.
	KubectlMemory awscdk.Size `field:"optional" json:"kubectlMemory" yaml:"kubectlMemory"`
	// An IAM role that will be added to the `system:masters` Kubernetes RBAC group.
	// See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings
	//
	MastersRole awsiam.IRole `field:"optional" json:"mastersRole" yaml:"mastersRole"`
	// An AWS Lambda Layer which includes the NPM dependency `proxy-agent`.
	//
	// This layer
	// is used by the onEvent handler to route AWS SDK requests through a proxy.
	//
	// By default, the provider will use the layer included in the
	// "aws-lambda-layer-node-proxy-agent" SAR application which is available in all
	// commercial regions.
	//
	// To deploy the layer locally define it in your app as follows:
	//
	// ```ts
	// const layer = new lambda.LayerVersion(this, 'proxy-agent-layer', {
	//    code: lambda.Code.fromAsset(`${__dirname}/layer.zip`),
	//    compatibleRuntimes: [lambda.Runtime.NODEJS_14_X],
	// });
	// ```
	OnEventLayer awslambda.ILayerVersion `field:"optional" json:"onEventLayer" yaml:"onEventLayer"`
	// Determines whether a CloudFormation output with the ARN of the "masters" IAM role will be synthesized (if `mastersRole` is specified).
	OutputMastersRoleArn *bool `field:"optional" json:"outputMastersRoleArn" yaml:"outputMastersRoleArn"`
	// If set to true, the cluster handler functions will be placed in the private subnets of the cluster vpc, subject to the `vpcSubnets` selection strategy.
	PlaceClusterHandlerInVpc *bool `field:"optional" json:"placeClusterHandlerInVpc" yaml:"placeClusterHandlerInVpc"`
	// Indicates whether Kubernetes resources added through `addManifest()` can be automatically pruned.
	//
	// When this is enabled (default), prune labels will be
	// allocated and injected to each resource. These labels will then be used
	// when issuing the `kubectl apply` operation with the `--prune` switch.
	Prune *bool `field:"optional" json:"prune" yaml:"prune"`
	// KMS secret for envelope encryption for Kubernetes secrets.
	SecretsEncryptionKey awskms.IKey `field:"optional" json:"secretsEncryptionKey" yaml:"secretsEncryptionKey"`
	// The CIDR block to assign Kubernetes service IP addresses from.
	// See: https://docs.aws.amazon.com/eks/latest/APIReference/API_KubernetesNetworkConfigRequest.html#AmazonEKS-Type-KubernetesNetworkConfigRequest-serviceIpv4Cidr
	//
	ServiceIpv4Cidr *string `field:"optional" json:"serviceIpv4Cidr" yaml:"serviceIpv4Cidr"`
}

Options for EKS clusters.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var albControllerVersion albControllerVersion
var endpointAccess endpointAccess
var key key
var kubernetesVersion kubernetesVersion
var layerVersion layerVersion
var policy interface{}
var role role
var securityGroup securityGroup
var size size
var subnet subnet
var subnetFilter subnetFilter
var vpc vpc

clusterOptions := &clusterOptions{
	version: kubernetesVersion,

	// the properties below are optional
	albController: &albControllerOptions{
		version: albControllerVersion,

		// the properties below are optional
		policy: policy,
		repository: jsii.String("repository"),
	},
	awscliLayer: layerVersion,
	clusterHandlerEnvironment: map[string]*string{
		"clusterHandlerEnvironmentKey": jsii.String("clusterHandlerEnvironment"),
	},
	clusterHandlerSecurityGroup: securityGroup,
	clusterLogging: []clusterLoggingTypes{
		awscdk.Aws_eks.*clusterLoggingTypes_API,
	},
	clusterName: jsii.String("clusterName"),
	coreDnsComputeType: awscdk.*Aws_eks.coreDnsComputeType_EC2,
	endpointAccess: endpointAccess,
	kubectlEnvironment: map[string]*string{
		"kubectlEnvironmentKey": jsii.String("kubectlEnvironment"),
	},
	kubectlLayer: layerVersion,
	kubectlMemory: size,
	mastersRole: role,
	onEventLayer: layerVersion,
	outputClusterName: jsii.Boolean(false),
	outputConfigCommand: jsii.Boolean(false),
	outputMastersRoleArn: jsii.Boolean(false),
	placeClusterHandlerInVpc: jsii.Boolean(false),
	prune: jsii.Boolean(false),
	role: role,
	secretsEncryptionKey: key,
	securityGroup: securityGroup,
	serviceIpv4Cidr: jsii.String("serviceIpv4Cidr"),
	vpc: vpc,
	vpcSubnets: []subnetSelection{
		&subnetSelection{
			availabilityZones: []*string{
				jsii.String("availabilityZones"),
			},
			onePerAz: jsii.Boolean(false),
			subnetFilters: []*subnetFilter{
				subnetFilter,
			},
			subnetGroupName: jsii.String("subnetGroupName"),
			subnets: []iSubnet{
				subnet,
			},
			subnetType: awscdk.Aws_ec2.subnetType_PRIVATE_ISOLATED,
		},
	},
}
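
Because the default bundled `kubectl` is version 1.20, it is usually worth setting the `kubectlLayer` option described above explicitly. A minimal sketch, assuming a compatible layer has already been published and is imported by ARN (the ARN below is a placeholder, and `lambda` refers to the AWS Lambda module of the CDK):

// import an existing, version-matched kubectl/helm layer by ARN (placeholder ARN)
kubectlLayer := lambda.layerVersion_FromLayerVersionArn(this, jsii.String("KubectlLayer"), jsii.String("arn:aws:lambda:us-east-1:123456789012:layer:kubectl:1"))

cluster := eks.NewCluster(this, jsii.String("Cluster"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	kubectlLayer: kubectlLayer,
})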

type ClusterProps

type ClusterProps struct {
	// The Kubernetes version to run in the cluster.
	Version KubernetesVersion `field:"required" json:"version" yaml:"version"`
	// Name for the cluster.
	ClusterName *string `field:"optional" json:"clusterName" yaml:"clusterName"`
	// Determines whether a CloudFormation output with the name of the cluster will be synthesized.
	OutputClusterName *bool `field:"optional" json:"outputClusterName" yaml:"outputClusterName"`
	// Determines whether a CloudFormation output with the `aws eks update-kubeconfig` command will be synthesized.
	//
	// This command will include
	// the cluster name and, if applicable, the ARN of the masters IAM role.
	OutputConfigCommand *bool `field:"optional" json:"outputConfigCommand" yaml:"outputConfigCommand"`
	// Role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
	Role awsiam.IRole `field:"optional" json:"role" yaml:"role"`
	// Security Group to use for Control Plane ENIs.
	SecurityGroup awsec2.ISecurityGroup `field:"optional" json:"securityGroup" yaml:"securityGroup"`
	// The VPC in which to create the Cluster.
	Vpc awsec2.IVpc `field:"optional" json:"vpc" yaml:"vpc"`
	// Where to place EKS Control Plane ENIs.
	//
	// If you want to create public load balancers, this must include public subnets.
	//
	// For example, to only select private subnets, supply the following:
	//
	// `vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }]`
	VpcSubnets *[]*awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
	// Install the AWS Load Balancer Controller onto the cluster.
	// See: https://kubernetes-sigs.github.io/aws-load-balancer-controller
	//
	AlbController *AlbControllerOptions `field:"optional" json:"albController" yaml:"albController"`
	// An AWS Lambda layer that contains the `aws` CLI.
	//
	// The handler expects the layer to include the following executables:
	//
	// ```
	// /opt/awscli/aws
	// ```
	AwscliLayer awslambda.ILayerVersion `field:"optional" json:"awscliLayer" yaml:"awscliLayer"`
	// Custom environment variables when interacting with the EKS endpoint to manage the cluster lifecycle.
	ClusterHandlerEnvironment *map[string]*string `field:"optional" json:"clusterHandlerEnvironment" yaml:"clusterHandlerEnvironment"`
	// A security group to associate with the Cluster Handler's Lambdas.
	//
	// The Cluster Handler's Lambdas are responsible for calling AWS's EKS API.
	//
	// Requires `placeClusterHandlerInVpc` to be set to true.
	ClusterHandlerSecurityGroup awsec2.ISecurityGroup `field:"optional" json:"clusterHandlerSecurityGroup" yaml:"clusterHandlerSecurityGroup"`
	// The cluster log types which you want to enable.
	ClusterLogging *[]ClusterLoggingTypes `field:"optional" json:"clusterLogging" yaml:"clusterLogging"`
	// Controls the "eks.amazonaws.com/compute-type" annotation in the CoreDNS configuration on your cluster to determine which compute type to use for CoreDNS.
	CoreDnsComputeType CoreDnsComputeType `field:"optional" json:"coreDnsComputeType" yaml:"coreDnsComputeType"`
	// Configure access to the Kubernetes API server endpoint.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
	//
	EndpointAccess EndpointAccess `field:"optional" json:"endpointAccess" yaml:"endpointAccess"`
	// Environment variables for the kubectl execution.
	//
	// Only relevant for kubectl enabled clusters.
	KubectlEnvironment *map[string]*string `field:"optional" json:"kubectlEnvironment" yaml:"kubectlEnvironment"`
	// An AWS Lambda Layer which includes `kubectl` and Helm.
	//
	// This layer is used by the kubectl handler to apply manifests and install
	// helm charts. You must pick an appropriate release of one of the
	// `@aws-cdk/layer-kubectl-vXX` packages that works with the version of
	// Kubernetes you have chosen. If you don't supply this value, `kubectl`
	// 1.20 will be used, but that version is most likely too old.
	//
	// The handler expects the layer to include the following executables:
	//
	// ```
	// /opt/helm/helm
	// /opt/kubectl/kubectl
	// ```
	KubectlLayer awslambda.ILayerVersion `field:"optional" json:"kubectlLayer" yaml:"kubectlLayer"`
	// Amount of memory to allocate to the provider's lambda function.
	KubectlMemory awscdk.Size `field:"optional" json:"kubectlMemory" yaml:"kubectlMemory"`
	// An IAM role that will be added to the `system:masters` Kubernetes RBAC group.
	// See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings
	//
	MastersRole awsiam.IRole `field:"optional" json:"mastersRole" yaml:"mastersRole"`
	// An AWS Lambda Layer which includes the NPM dependency `proxy-agent`.
	//
	// This layer
	// is used by the onEvent handler to route AWS SDK requests through a proxy.
	//
	// By default, the provider will use the layer included in the
	// "aws-lambda-layer-node-proxy-agent" SAR application which is available in all
	// commercial regions.
	//
	// To deploy the layer locally define it in your app as follows:
	//
	// ```ts
	// const layer = new lambda.LayerVersion(this, 'proxy-agent-layer', {
	//    code: lambda.Code.fromAsset(`${__dirname}/layer.zip`),
	//    compatibleRuntimes: [lambda.Runtime.NODEJS_14_X],
	// });
	// ```
	OnEventLayer awslambda.ILayerVersion `field:"optional" json:"onEventLayer" yaml:"onEventLayer"`
	// Determines whether a CloudFormation output with the ARN of the "masters" IAM role will be synthesized (if `mastersRole` is specified).
	OutputMastersRoleArn *bool `field:"optional" json:"outputMastersRoleArn" yaml:"outputMastersRoleArn"`
	// If set to true, the cluster handler functions will be placed in the private subnets of the cluster vpc, subject to the `vpcSubnets` selection strategy.
	PlaceClusterHandlerInVpc *bool `field:"optional" json:"placeClusterHandlerInVpc" yaml:"placeClusterHandlerInVpc"`
	// Indicates whether Kubernetes resources added through `addManifest()` can be automatically pruned.
	//
	// When this is enabled (default), prune labels will be
	// allocated and injected to each resource. These labels will then be used
	// when issuing the `kubectl apply` operation with the `--prune` switch.
	Prune *bool `field:"optional" json:"prune" yaml:"prune"`
	// KMS secret for envelope encryption for Kubernetes secrets.
	SecretsEncryptionKey awskms.IKey `field:"optional" json:"secretsEncryptionKey" yaml:"secretsEncryptionKey"`
	// The CIDR block to assign Kubernetes service IP addresses from.
	// See: https://docs.aws.amazon.com/eks/latest/APIReference/API_KubernetesNetworkConfigRequest.html#AmazonEKS-Type-KubernetesNetworkConfigRequest-serviceIpv4Cidr
	//
	ServiceIpv4Cidr *string `field:"optional" json:"serviceIpv4Cidr" yaml:"serviceIpv4Cidr"`
	// Number of instances to allocate as an initial capacity for this cluster.
	//
	// Instance type can be configured through `defaultCapacityInstanceType`,
	// which defaults to `m5.large`.
	//
	// Use `cluster.addAutoScalingGroupCapacity` to add additional customized capacity. Set this
	// to `0` if you wish to avoid the initial capacity allocation.
	DefaultCapacity *float64 `field:"optional" json:"defaultCapacity" yaml:"defaultCapacity"`
	// The instance type to use for the default capacity.
	//
	// This will only be taken
	// into account if `defaultCapacity` is > 0.
	DefaultCapacityInstance awsec2.InstanceType `field:"optional" json:"defaultCapacityInstance" yaml:"defaultCapacityInstance"`
	// The default capacity type for the cluster.
	DefaultCapacityType DefaultCapacityType `field:"optional" json:"defaultCapacityType" yaml:"defaultCapacityType"`
	// The IAM role to pass to the Kubectl Lambda Handler.
	KubectlLambdaRole awsiam.IRole `field:"optional" json:"kubectlLambdaRole" yaml:"kubectlLambdaRole"`
	// The tags assigned to the EKS cluster.
	Tags *map[string]*string `field:"optional" json:"tags" yaml:"tags"`
}

Common configuration props for EKS clusters.

Example:

var vpc vpc

eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	vpc: vpc,
	vpcSubnets: []subnetSelection{
		&subnetSelection{
			subnetType: ec2.subnetType_PRIVATE_WITH_EGRESS,
		},
	},
})
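
The `defaultCapacity`, `defaultCapacityInstance`, and `defaultCapacityType` properties above control the capacity provisioned together with the cluster. A hedged sketch; the instance type and counts are illustrative:

cluster := eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	defaultCapacity: jsii.Number(3),
	defaultCapacityInstance: ec2.NewInstanceType(jsii.String("m5.large")),
})

// alternatively, opt out of the initial capacity and add a managed nodegroup later
clusterWithoutCapacity := eks.NewCluster(this, jsii.String("NoInitialCapacityEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	defaultCapacity: jsii.Number(0),
})
clusterWithoutCapacity.addNodegroupCapacity(jsii.String("custom-node-group"), &nodegroupOptions{
	minSize: jsii.Number(2),
})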

type CommonClusterOptions

type CommonClusterOptions struct {
	// The Kubernetes version to run in the cluster.
	Version KubernetesVersion `field:"required" json:"version" yaml:"version"`
	// Name for the cluster.
	ClusterName *string `field:"optional" json:"clusterName" yaml:"clusterName"`
	// Determines whether a CloudFormation output with the name of the cluster will be synthesized.
	OutputClusterName *bool `field:"optional" json:"outputClusterName" yaml:"outputClusterName"`
	// Determines whether a CloudFormation output with the `aws eks update-kubeconfig` command will be synthesized.
	//
	// This command will include
	// the cluster name and, if applicable, the ARN of the masters IAM role.
	OutputConfigCommand *bool `field:"optional" json:"outputConfigCommand" yaml:"outputConfigCommand"`
	// Role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
	Role awsiam.IRole `field:"optional" json:"role" yaml:"role"`
	// Security Group to use for Control Plane ENIs.
	SecurityGroup awsec2.ISecurityGroup `field:"optional" json:"securityGroup" yaml:"securityGroup"`
	// The VPC in which to create the Cluster.
	Vpc awsec2.IVpc `field:"optional" json:"vpc" yaml:"vpc"`
	// Where to place EKS Control Plane ENIs.
	//
	// If you want to create public load balancers, this must include public subnets.
	//
	// For example, to only select private subnets, supply the following:
	//
	// `vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }]`
	VpcSubnets *[]*awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
}

Options for configuring an EKS cluster.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var kubernetesVersion kubernetesVersion
var role role
var securityGroup securityGroup
var subnet subnet
var subnetFilter subnetFilter
var vpc vpc

commonClusterOptions := &commonClusterOptions{
	version: kubernetesVersion,

	// the properties below are optional
	clusterName: jsii.String("clusterName"),
	outputClusterName: jsii.Boolean(false),
	outputConfigCommand: jsii.Boolean(false),
	role: role,
	securityGroup: securityGroup,
	vpc: vpc,
	vpcSubnets: []subnetSelection{
		&subnetSelection{
			availabilityZones: []*string{
				jsii.String("availabilityZones"),
			},
			onePerAz: jsii.Boolean(false),
			subnetFilters: []*subnetFilter{
				subnetFilter,
			},
			subnetGroupName: jsii.String("subnetGroupName"),
			subnets: []iSubnet{
				subnet,
			},
			subnetType: awscdk.Aws_ec2.subnetType_PRIVATE_ISOLATED,
		},
	},
}

type CoreDnsComputeType

type CoreDnsComputeType string

The type of compute resources to use for CoreDNS.

const (
	// Deploy CoreDNS on EC2 instances.
	CoreDnsComputeType_EC2 CoreDnsComputeType = "EC2"
	// Deploy CoreDNS on Fargate-managed instances.
	CoreDnsComputeType_FARGATE CoreDnsComputeType = "FARGATE"
)

type CpuArch

type CpuArch string

CPU architecture.

const (
	// arm64 CPU type.
	CpuArch_ARM_64 CpuArch = "ARM_64"
	// x86_64 CPU type.
	CpuArch_X86_64 CpuArch = "X86_64"
)

type DefaultCapacityType

type DefaultCapacityType string

The default capacity type for the cluster.

Example:

cluster := eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	defaultCapacityType: eks.defaultCapacityType_EC2,
})
const (
	// managed node group.
	DefaultCapacityType_NODEGROUP DefaultCapacityType = "NODEGROUP"
	// EC2 autoscaling group.
	DefaultCapacityType_EC2 DefaultCapacityType = "EC2"
)

type EksOptimizedImage

type EksOptimizedImage interface {
	awsec2.IMachineImage
	// Return the correct image.
	GetImage(scope constructs.Construct) *awsec2.MachineImageConfig
}

Construct an Amazon Linux 2 image from the latest EKS Optimized AMI published in SSM.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

eksOptimizedImage := awscdk.Aws_eks.NewEksOptimizedImage(&eksOptimizedImageProps{
	cpuArch: awscdk.*Aws_eks.cpuArch_ARM_64,
	kubernetesVersion: jsii.String("kubernetesVersion"),
	nodeType: awscdk.*Aws_eks.nodeType_STANDARD,
})

func NewEksOptimizedImage

func NewEksOptimizedImage(props *EksOptimizedImageProps) EksOptimizedImage

Constructs a new instance of the EksOptimizedImage class.
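
As a hedged sketch of how the image can be consumed, the following builds a self-managed auto scaling group from the EKS-optimized AMI and connects it to an existing cluster; the instance type and capacity values are illustrative:

var cluster cluster
var vpc vpc

asg := autoscaling.NewAutoScalingGroup(this, jsii.String("SelfManagedNodes"), &autoScalingGroupProps{
	vpc: vpc,
	instanceType: ec2.NewInstanceType(jsii.String("t3.large")),
	machineImage: eks.NewEksOptimizedImage(&eksOptimizedImageProps{
		kubernetesVersion: jsii.String("1.21"),
	}),
	minCapacity: jsii.Number(1),
	maxCapacity: jsii.Number(4),
})

// wires up security groups, IAM policies, tags and bootstrap user data
cluster.connectAutoScalingGroupCapacity(asg, &autoScalingGroupOptions{
})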

type EksOptimizedImageProps

type EksOptimizedImageProps struct {
	// What cpu architecture to retrieve the image for (arm64 or x86_64).
	CpuArch CpuArch `field:"optional" json:"cpuArch" yaml:"cpuArch"`
	// The Kubernetes version to use.
	KubernetesVersion *string `field:"optional" json:"kubernetesVersion" yaml:"kubernetesVersion"`
	// What instance type to retrieve the image for (standard or GPU-optimized).
	NodeType NodeType `field:"optional" json:"nodeType" yaml:"nodeType"`
}

Properties for EksOptimizedImage.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

eksOptimizedImageProps := &eksOptimizedImageProps{
	cpuArch: awscdk.Aws_eks.cpuArch_ARM_64,
	kubernetesVersion: jsii.String("kubernetesVersion"),
	nodeType: awscdk.*Aws_eks.nodeType_STANDARD,
}

type EndpointAccess

type EndpointAccess interface {
	// Restrict public access to specific CIDR blocks.
	//
	// If public access is disabled, this method will result in an error.
	OnlyFrom(cidr ...*string) EndpointAccess
}

Endpoint access characteristics.

Example:

cluster := eks.NewCluster(this, jsii.String("hello-eks"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	endpointAccess: eks.endpointAccess_PRIVATE(),
})

func EndpointAccess_PRIVATE

func EndpointAccess_PRIVATE() EndpointAccess

func EndpointAccess_PUBLIC

func EndpointAccess_PUBLIC() EndpointAccess

func EndpointAccess_PUBLIC_AND_PRIVATE

func EndpointAccess_PUBLIC_AND_PRIVATE() EndpointAccess
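
The `onlyFrom` method shown in the interface restricts public access to specific CIDR blocks. A minimal sketch; the CIDR range is a placeholder:

cluster := eks.NewCluster(this, jsii.String("hello-eks"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	// public endpoint reachable only from the given CIDR, private access from within the VPC
	endpointAccess: eks.endpointAccess_PUBLIC_AND_PRIVATE().onlyFrom(jsii.String("203.0.113.0/24")),
})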

type FargateCluster

type FargateCluster interface {
	Cluster
	// An IAM role with administrative permissions to create or update the cluster.
	//
	// This role also has `system:masters` permissions.
	AdminRole() awsiam.Role
	// The ALB Controller construct defined for this cluster.
	//
	// Will be undefined if `albController` wasn't configured.
	AlbController() AlbController
	// Lazily creates the AwsAuth resource, which manages AWS authentication mapping.
	AwsAuth() AwsAuth
	// An AWS Lambda layer that contains the `aws` CLI.
	//
	// If not defined, a default layer will be used containing the AWS CLI 1.x.
	AwscliLayer() awslambda.ILayerVersion
	// The AWS generated ARN for the Cluster resource.
	//
	// For example, `arn:aws:eks:us-west-2:666666666666:cluster/prod`.
	ClusterArn() *string
	// The certificate-authority-data for your cluster.
	ClusterCertificateAuthorityData() *string
	// Amazon Resource Name (ARN) or alias of the customer master key (CMK).
	ClusterEncryptionConfigKeyArn() *string
	// The endpoint URL for the Cluster.
	//
	// This is the URL inside the kubeconfig file to use with kubectl
	//
	// For example, `https://5E1D0CEXAMPLEA591B746AFC5AB30262.yl4.us-west-2.eks.amazonaws.com`
	ClusterEndpoint() *string
	// A security group to associate with the Cluster Handler's Lambdas.
	//
	// The Cluster Handler's Lambdas are responsible for calling AWS's EKS API.
	//
	// Requires `placeClusterHandlerInVpc` to be set to true.
	ClusterHandlerSecurityGroup() awsec2.ISecurityGroup
	// The Name of the created EKS Cluster.
	ClusterName() *string
	// If this cluster is kubectl-enabled, returns the OpenID Connect issuer.
	//
	// This is because the value can only be retrieved by the API and is not exposed
	// by CloudFormation. If this cluster is not kubectl-enabled (i.e. uses the
	// stock `CfnCluster`), this is `undefined`.
	ClusterOpenIdConnectIssuer() *string
	// If this cluster is kubectl-enabled, returns the OpenID Connect issuer url.
	//
	// This is because the value can only be retrieved by the API and is not exposed
	// by CloudFormation. If this cluster is not kubectl-enabled (i.e. uses the
	// stock `CfnCluster`), this is `undefined`.
	ClusterOpenIdConnectIssuerUrl() *string
	// The cluster security group that was created by Amazon EKS for the cluster.
	ClusterSecurityGroup() awsec2.ISecurityGroup
	// The id of the cluster security group that was created by Amazon EKS for the cluster.
	ClusterSecurityGroupId() *string
	// Manages connection rules (Security Group Rules) for the cluster.
	Connections() awsec2.Connections
	// The auto scaling group that hosts the default capacity for this cluster.
	//
	// This will be `undefined` if the `defaultCapacityType` is not `EC2` or
	// `defaultCapacityType` is `EC2` but default capacity is set to 0.
	DefaultCapacity() awsautoscaling.AutoScalingGroup
	// The node group that hosts the default capacity for this cluster.
	//
	// This will be `undefined` if the `defaultCapacityType` is `EC2` or
	// `defaultCapacityType` is `NODEGROUP` but default capacity is set to 0.
	DefaultNodegroup() Nodegroup
	// Fargate Profile that was created with the cluster.
	DefaultProfile() FargateProfile
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// Custom environment variables when running `kubectl` against this cluster.
	KubectlEnvironment() *map[string]*string
	// An IAM role that can perform kubectl operations against this cluster.
	//
	// The role should be mapped to the `system:masters` Kubernetes RBAC role.
	//
	// This role is directly passed to the lambda handler that sends kubectl commands to the cluster.
	KubectlLambdaRole() awsiam.IRole
	// An AWS Lambda layer that includes `kubectl` and `helm`.
	//
	// If not defined, a default layer will be used containing kubectl 1.20 and Helm 3.8.
	KubectlLayer() awslambda.ILayerVersion
	// The amount of memory allocated to the kubectl provider's lambda function.
	KubectlMemory() awscdk.Size
	// Subnets to host the `kubectl` compute resources.
	KubectlPrivateSubnets() *[]awsec2.ISubnet
	// An IAM role that can perform kubectl operations against this cluster.
	//
	// The role should be mapped to the `system:masters` Kubernetes RBAC role.
	KubectlRole() awsiam.IRole
	// A security group to use for `kubectl` execution.
	KubectlSecurityGroup() awsec2.ISecurityGroup
	// The tree node.
	Node() constructs.Node
	// The AWS Lambda layer that contains the NPM dependency `proxy-agent`.
	//
	// If
	// undefined, a SAR app that contains this layer will be used.
	OnEventLayer() awslambda.ILayerVersion
	// An `OpenIdConnectProvider` resource associated with this cluster, and which can be used to link this cluster to AWS IAM.
	//
	// A provider will only be defined if this property is accessed (lazy initialization).
	OpenIdConnectProvider() awsiam.IOpenIdConnectProvider
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//    cross-environment scenarios.
	PhysicalName() *string
	// Determines if Kubernetes resources can be pruned automatically.
	Prune() *bool
	// IAM role assumed by the EKS Control Plane.
	Role() awsiam.IRole
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// The VPC in which this Cluster was created.
	Vpc() awsec2.IVpc
	// Add nodes to this EKS cluster.
	//
	// The nodes will automatically be configured with the right VPC and AMI
	// for the instance type and Kubernetes version.
	//
	// Note that if you specify `updateType: RollingUpdate` or `updateType: ReplacingUpdate`, your nodes might be replaced at deploy
	// time without notice in case the recommended AMI for your machine image type has been updated by AWS.
	// The default behavior for `updateType` is `None`, which means only new instances will be launched using the new AMI.
	//
	// Spot instances will be labeled `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.
	// In addition, the [spot interrupt handler](https://github.com/awslabs/ec2-spot-labs/tree/master/ec2-spot-eks-solution/spot-termination-handler)
	// daemon will be installed on all spot instances to handle
	// [EC2 Spot Instance Termination Notices](https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/).
	AddAutoScalingGroupCapacity(id *string, options *AutoScalingGroupCapacityOptions) awsautoscaling.AutoScalingGroup
	// Defines a CDK8s chart in this cluster.
	//
	// Returns: a `KubernetesManifest` construct representing the chart.
	AddCdk8sChart(id *string, chart constructs.Construct, options *KubernetesManifestOptions) KubernetesManifest
	// Adds a Fargate profile to this cluster.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html
	//
	AddFargateProfile(id *string, options *FargateProfileOptions) FargateProfile
	// Defines a Helm chart in this cluster.
	//
	// Returns: a `HelmChart` construct.
	AddHelmChart(id *string, options *HelmChartOptions) HelmChart
	// Defines a Kubernetes resource in this cluster.
	//
	// The manifest will be applied/deleted using kubectl as needed.
	//
	// Returns: a `KubernetesResource` object.
	AddManifest(id *string, manifest ...*map[string]interface{}) KubernetesManifest
	// Add a managed nodegroup to this Amazon EKS cluster.
	//
	// This method creates a new managed nodegroup and adds it to the cluster's capacity.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html
	//
	AddNodegroupCapacity(id *string, options *NodegroupOptions) Nodegroup
	// Creates a new service account with corresponding IAM Role (IRSA).
	AddServiceAccount(id *string, options *ServiceAccountOptions) ServiceAccount
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	// Connect capacity in the form of an existing AutoScalingGroup to the EKS cluster.
	//
	// The AutoScalingGroup must be running an EKS-optimized AMI containing the
	// /etc/eks/bootstrap.sh script. This method will configure Security Groups,
	// add the right policies to the instance role, apply the right tags, and add
	// the required user data to the instance's launch configuration.
	//
	// Spot instances will be labeled `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.
	// If kubectl is enabled, the
	// [spot interrupt handler](https://github.com/awslabs/ec2-spot-labs/tree/master/ec2-spot-eks-solution/spot-termination-handler)
	// daemon will be installed on all spot instances to handle
	// [EC2 Spot Instance Termination Notices](https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/).
	//
	// Prefer to use `addAutoScalingGroupCapacity` if possible.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html
	//
	ConnectAutoScalingGroupCapacity(autoScalingGroup awsautoscaling.AutoScalingGroup, options *AutoScalingGroupOptions)
	GeneratePhysicalName() *string
	// Fetch the load balancer address of an ingress backed by a load balancer.
	GetIngressLoadBalancerAddress(ingressName *string, options *IngressLoadBalancerAddressOptions) *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Fetch the load balancer address of a service of type 'LoadBalancer'.
	GetServiceLoadBalancerAddress(serviceName *string, options *ServiceLoadBalancerAddressOptions) *string
	// Returns a string representation of this construct.
	ToString() *string
}

Defines an EKS cluster that runs entirely on AWS Fargate.

The cluster is created with a default Fargate Profile that matches the "default" and "kube-system" namespaces. You can add additional profiles using `addFargateProfile`.

Example:

cluster := eks.NewFargateCluster(this, jsii.String("MyCluster"), &fargateClusterProps{
	version: eks.kubernetesVersion_V1_21(),
})

func NewFargateCluster

func NewFargateCluster(scope constructs.Construct, id *string, props *FargateClusterProps) FargateCluster

type FargateClusterProps

type FargateClusterProps struct {
	// The Kubernetes version to run in the cluster.
	Version KubernetesVersion `field:"required" json:"version" yaml:"version"`
	// Name for the cluster.
	ClusterName *string `field:"optional" json:"clusterName" yaml:"clusterName"`
	// Determines whether a CloudFormation output with the name of the cluster will be synthesized.
	OutputClusterName *bool `field:"optional" json:"outputClusterName" yaml:"outputClusterName"`
	// Determines whether a CloudFormation output with the `aws eks update-kubeconfig` command will be synthesized.
	//
	// This command will include
	// the cluster name and, if applicable, the ARN of the masters IAM role.
	OutputConfigCommand *bool `field:"optional" json:"outputConfigCommand" yaml:"outputConfigCommand"`
	// Role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
	Role awsiam.IRole `field:"optional" json:"role" yaml:"role"`
	// Security Group to use for Control Plane ENIs.
	SecurityGroup awsec2.ISecurityGroup `field:"optional" json:"securityGroup" yaml:"securityGroup"`
	// The VPC in which to create the Cluster.
	Vpc awsec2.IVpc `field:"optional" json:"vpc" yaml:"vpc"`
	// Where to place EKS Control Plane ENIs.
	//
	// If you want to create public load balancers, this must include public subnets.
	//
	// For example, to only select private subnets, supply the following:
	//
	// `vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }]`
	VpcSubnets *[]*awsec2.SubnetSelection `field:"optional" json:"vpcSubnets" yaml:"vpcSubnets"`
	// Install the AWS Load Balancer Controller onto the cluster.
	// See: https://kubernetes-sigs.github.io/aws-load-balancer-controller
	//
	AlbController *AlbControllerOptions `field:"optional" json:"albController" yaml:"albController"`
	// An AWS Lambda layer that contains the `aws` CLI.
	//
	// The handler expects the layer to include the following executables:
	//
	// ```
	// /opt/awscli/aws
	// ```
	AwscliLayer awslambda.ILayerVersion `field:"optional" json:"awscliLayer" yaml:"awscliLayer"`
	// Custom environment variables when interacting with the EKS endpoint to manage the cluster lifecycle.
	ClusterHandlerEnvironment *map[string]*string `field:"optional" json:"clusterHandlerEnvironment" yaml:"clusterHandlerEnvironment"`
	// A security group to associate with the Cluster Handler's Lambdas.
	//
	// The Cluster Handler's Lambdas are responsible for calling AWS's EKS API.
	//
	// Requires `placeClusterHandlerInVpc` to be set to true.
	ClusterHandlerSecurityGroup awsec2.ISecurityGroup `field:"optional" json:"clusterHandlerSecurityGroup" yaml:"clusterHandlerSecurityGroup"`
	// The cluster log types which you want to enable.
	ClusterLogging *[]ClusterLoggingTypes `field:"optional" json:"clusterLogging" yaml:"clusterLogging"`
	// Controls the "eks.amazonaws.com/compute-type" annotation in the CoreDNS configuration on your cluster to determine which compute type to use for CoreDNS.
	CoreDnsComputeType CoreDnsComputeType `field:"optional" json:"coreDnsComputeType" yaml:"coreDnsComputeType"`
	// Configure access to the Kubernetes API server endpoint.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
	//
	EndpointAccess EndpointAccess `field:"optional" json:"endpointAccess" yaml:"endpointAccess"`
	// Environment variables for the kubectl execution.
	//
	// Only relevant for kubectl enabled clusters.
	KubectlEnvironment *map[string]*string `field:"optional" json:"kubectlEnvironment" yaml:"kubectlEnvironment"`
	// An AWS Lambda Layer which includes `kubectl` and Helm.
	//
	// This layer is used by the kubectl handler to apply manifests and install
	// helm charts. You must pick an appropriate release of one of the
	// `@aws-cdk/layer-kubectl-vXX` packages that works with the version of
	// Kubernetes you have chosen. If you don't supply this value, `kubectl`
	// 1.20 will be used, but that version is most likely too old.
	//
	// The handler expects the layer to include the following executables:
	//
	// ```
	// /opt/helm/helm
	// /opt/kubectl/kubectl
	// ```
	KubectlLayer awslambda.ILayerVersion `field:"optional" json:"kubectlLayer" yaml:"kubectlLayer"`
	// Amount of memory to allocate to the provider's lambda function.
	KubectlMemory awscdk.Size `field:"optional" json:"kubectlMemory" yaml:"kubectlMemory"`
	// An IAM role that will be added to the `system:masters` Kubernetes RBAC group.
	// See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings
	//
	MastersRole awsiam.IRole `field:"optional" json:"mastersRole" yaml:"mastersRole"`
	// An AWS Lambda Layer which includes the NPM dependency `proxy-agent`.
	//
	// This layer
	// is used by the onEvent handler to route AWS SDK requests through a proxy.
	//
	// By default, the provider will use the layer included in the
	// "aws-lambda-layer-node-proxy-agent" SAR application which is available in all
	// commercial regions.
	//
	// To deploy the layer locally define it in your app as follows:
	//
	// ```ts
	// const layer = new lambda.LayerVersion(this, 'proxy-agent-layer', {
	//    code: lambda.Code.fromAsset(`${__dirname}/layer.zip`),
	//    compatibleRuntimes: [lambda.Runtime.NODEJS_14_X],
	// });
	// ```
	OnEventLayer awslambda.ILayerVersion `field:"optional" json:"onEventLayer" yaml:"onEventLayer"`
	// Determines whether a CloudFormation output with the ARN of the "masters" IAM role will be synthesized (if `mastersRole` is specified).
	OutputMastersRoleArn *bool `field:"optional" json:"outputMastersRoleArn" yaml:"outputMastersRoleArn"`
	// If set to true, the cluster handler functions will be placed in the private subnets of the cluster vpc, subject to the `vpcSubnets` selection strategy.
	PlaceClusterHandlerInVpc *bool `field:"optional" json:"placeClusterHandlerInVpc" yaml:"placeClusterHandlerInVpc"`
	// Indicates whether Kubernetes resources added through `addManifest()` can be automatically pruned.
	//
	// When this is enabled (default), prune labels will be
	// allocated and injected to each resource. These labels will then be used
	// when issuing the `kubectl apply` operation with the `--prune` switch.
	Prune *bool `field:"optional" json:"prune" yaml:"prune"`
	// KMS secret for envelope encryption for Kubernetes secrets.
	SecretsEncryptionKey awskms.IKey `field:"optional" json:"secretsEncryptionKey" yaml:"secretsEncryptionKey"`
	// The CIDR block to assign Kubernetes service IP addresses from.
	// See: https://docs.aws.amazon.com/eks/latest/APIReference/API_KubernetesNetworkConfigRequest.html#AmazonEKS-Type-KubernetesNetworkConfigRequest-serviceIpv4Cidr
	//
	ServiceIpv4Cidr *string `field:"optional" json:"serviceIpv4Cidr" yaml:"serviceIpv4Cidr"`
	// Fargate Profile to create along with the cluster.
	DefaultProfile *FargateProfileOptions `field:"optional" json:"defaultProfile" yaml:"defaultProfile"`
}

Configuration props for EKS Fargate.

Example:

cluster := eks.NewFargateCluster(this, jsii.String("MyCluster"), &fargateClusterProps{
	version: eks.kubernetesVersion_V1_21(),
})
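
The `defaultProfile` property above customizes the Fargate profile that is created along with the cluster. A hedged sketch in which the profile name is illustrative and the selectors spell out the usual `default` and `kube-system` namespaces:

cluster := eks.NewFargateCluster(this, jsii.String("MyCluster"), &fargateClusterProps{
	version: eks.kubernetesVersion_V1_21(),
	defaultProfile: &fargateProfileOptions{
		fargateProfileName: jsii.String("my-default-profile"),
		selectors: []selector{
			&selector{
				namespace: jsii.String("default"),
			},
			&selector{
				namespace: jsii.String("kube-system"),
			},
		},
	},
})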

type FargateProfile

type FargateProfile interface {
	constructs.Construct
	awscdk.ITaggable
	// The full Amazon Resource Name (ARN) of the Fargate profile.
	FargateProfileArn() *string
	// The name of the Fargate profile.
	FargateProfileName() *string
	// The tree node.
	Node() constructs.Node
	// The pod execution role to use for pods that match the selectors in the Fargate profile.
	//
	// The pod execution role allows Fargate infrastructure to
	// register with your cluster as a node, and it provides read access to Amazon
	// ECR image repositories.
	PodExecutionRole() awsiam.IRole
	// Resource tags.
	Tags() awscdk.TagManager
	// Returns a string representation of this construct.
	ToString() *string
}

Fargate profiles allow an administrator to declare which pods run on Fargate.

This declaration is done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace and optional labels. You must define a namespace for every selector. The label field consists of multiple optional key-value pairs. Pods that match a selector (by matching a namespace for the selector and all of the labels specified in the selector) are scheduled on Fargate. If a namespace selector is defined without any labels, Amazon EKS will attempt to schedule all pods that run in that namespace onto Fargate using the profile. If a to-be-scheduled pod matches any of the selectors in the Fargate profile, then that pod is scheduled on Fargate.

If a pod matches multiple Fargate profiles, Amazon EKS picks one of the matches at random. In this case, you can specify which profile a pod should use by adding the following Kubernetes label to the pod specification: eks.amazonaws.com/fargate-profile: profile_name. However, the pod must still match a selector in that profile in order to be scheduled onto Fargate.

Example:

var cluster cluster

eks.NewFargateProfile(this, jsii.String("MyProfile"), &fargateProfileProps{
	cluster: cluster,
	selectors: []selector{
		&selector{
			namespace: jsii.String("default"),
		},
	},
})
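
Selectors can also match on pod labels in addition to a namespace, as described above. A hedged sketch; the label key and value are illustrative:

var cluster cluster

eks.NewFargateProfile(this, jsii.String("LabelledProfile"), &fargateProfileProps{
	cluster: cluster,
	selectors: []selector{
		&selector{
			namespace: jsii.String("default"),
			labels: map[string]*string{
				"app": jsii.String("my-app"),
			},
		},
	},
})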

func NewFargateProfile

func NewFargateProfile(scope constructs.Construct, id *string, props *FargateProfileProps) FargateProfile

type FargateProfileOptions

type FargateProfileOptions struct {
	// The selectors to match for pods to use this Fargate profile.
	//
	// Each selector
	// must have an associated namespace. Optionally, you can also specify labels
	// for a namespace.
	//
	// At least one selector is required and you may specify up to five selectors.
	Selectors *[]*Selector `field:"required" json:"selectors" yaml:"selectors"`
	// The name of the Fargate profile.
	FargateProfileName *string `field:"optional" json:"fargateProfileName" yaml:"fargateProfileName"`
	// The pod execution role to use for pods that match the selectors in the Fargate profile.
	//
	// The pod execution role allows Fargate infrastructure to
	// register with your cluster as a node, and it provides read access to Amazon
	// ECR image repositories.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/pod-execution-role.html
	//
	PodExecutionRole awsiam.IRole `field:"optional" json:"podExecutionRole" yaml:"podExecutionRole"`
	// Select which subnets to launch your pods into.
	//
	// At this time, pods running
	// on Fargate are not assigned public IP addresses, so only private subnets
	// (with no direct route to an Internet Gateway) are allowed.
	//
	// You must specify the VPC to customize the subnet selection.
	SubnetSelection *awsec2.SubnetSelection `field:"optional" json:"subnetSelection" yaml:"subnetSelection"`
	// The VPC from which to select subnets to launch your pods into.
	//
	// By default, all private subnets are selected. You can customize this using
	// `subnetSelection`.
	Vpc awsec2.IVpc `field:"optional" json:"vpc" yaml:"vpc"`
}

Options for defining EKS Fargate Profiles.

Example:

var cluster cluster

cluster.addFargateProfile(jsii.String("MyProfile"), &fargateProfileOptions{
	selectors: []selector{
		&selector{
			namespace: jsii.String("default"),
		},
	},
})

type FargateProfileProps

type FargateProfileProps struct {
	// The selectors to match for pods to use this Fargate profile.
	//
	// Each selector
	// must have an associated namespace. Optionally, you can also specify labels
	// for a namespace.
	//
	// At least one selector is required and you may specify up to five selectors.
	Selectors *[]*Selector `field:"required" json:"selectors" yaml:"selectors"`
	// The name of the Fargate profile.
	FargateProfileName *string `field:"optional" json:"fargateProfileName" yaml:"fargateProfileName"`
	// The pod execution role to use for pods that match the selectors in the Fargate profile.
	//
	// The pod execution role allows Fargate infrastructure to
	// register with your cluster as a node, and it provides read access to Amazon
	// ECR image repositories.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/pod-execution-role.html
	//
	PodExecutionRole awsiam.IRole `field:"optional" json:"podExecutionRole" yaml:"podExecutionRole"`
	// Select which subnets to launch your pods into.
	//
	// At this time, pods running
	// on Fargate are not assigned public IP addresses, so only private subnets
	// (with no direct route to an Internet Gateway) are allowed.
	//
	// You must specify the VPC to customize the subnet selection.
	SubnetSelection *awsec2.SubnetSelection `field:"optional" json:"subnetSelection" yaml:"subnetSelection"`
	// The VPC from which to select subnets to launch your pods into.
	//
	// By default, all private subnets are selected. You can customize this using
	// `subnetSelection`.
	Vpc awsec2.IVpc `field:"optional" json:"vpc" yaml:"vpc"`
	// The EKS cluster to apply the Fargate profile to.
	//
	// [disable-awslint:ref-via-interface].
	Cluster Cluster `field:"required" json:"cluster" yaml:"cluster"`
}

Configuration props for EKS Fargate Profiles.

Example:

var cluster cluster

eks.NewFargateProfile(this, jsii.String("MyProfile"), &fargateProfileProps{
	cluster: cluster,
	selectors: []selector{
		&selector{
			namespace: jsii.String("default"),
		},
	},
})

type HelmChart

type HelmChart interface {
	constructs.Construct
	// The tree node.
	Node() constructs.Node
	// Returns a string representation of this construct.
	ToString() *string
}

Represents a helm chart within the Kubernetes system.

Applies/deletes the resources using `kubectl` in sync with the resource.

Example:

var cluster cluster

// option 1: use a construct
eks.NewHelmChart(this, jsii.String("NginxIngress"), &helmChartProps{
	cluster: cluster,
	chart: jsii.String("nginx-ingress"),
	repository: jsii.String("https://helm.nginx.com/stable"),
	namespace: jsii.String("kube-system"),
})

// or, option 2: use `addHelmChart`
cluster.addHelmChart(jsii.String("NginxIngress"), &helmChartOptions{
	chart: jsii.String("nginx-ingress"),
	repository: jsii.String("https://helm.nginx.com/stable"),
	namespace: jsii.String("kube-system"),
})

func NewHelmChart

func NewHelmChart(scope constructs.Construct, id *string, props *HelmChartProps) HelmChart

type HelmChartOptions

type HelmChartOptions struct {
	// The name of the chart.
	//
	// Either this or `chartAsset` must be specified.
	Chart *string `field:"optional" json:"chart" yaml:"chart"`
	// The chart in the form of an asset.
	//
	// Either this or `chart` must be specified.
	ChartAsset awss3assets.Asset `field:"optional" json:"chartAsset" yaml:"chartAsset"`
	// Create the namespace if it does not exist.
	CreateNamespace *bool `field:"optional" json:"createNamespace" yaml:"createNamespace"`
	// The Kubernetes namespace scope of the requests.
	Namespace *string `field:"optional" json:"namespace" yaml:"namespace"`
	// The name of the release.
	Release *string `field:"optional" json:"release" yaml:"release"`
	// The repository which contains the chart.
	//
	// For example: https://kubernetes-charts.storage.googleapis.com/
	Repository *string `field:"optional" json:"repository" yaml:"repository"`
	// Amount of time to wait for any individual Kubernetes operation.
	//
	// Maximum 15 minutes.
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
	// The values to be used by the chart.
	//
	// For nested values use a nested dictionary. For example:
	// values: {
	//   installationCRDs: true,
	//   webhook: { port: 9443 }
	// }.
	Values *map[string]interface{} `field:"optional" json:"values" yaml:"values"`
	// The chart version to install.
	Version *string `field:"optional" json:"version" yaml:"version"`
	// Whether or not Helm should wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful.
	Wait *bool `field:"optional" json:"wait" yaml:"wait"`
}

Helm Chart options.

Example:

import s3Assets "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster

chartAsset := s3Assets.NewAsset(this, jsii.String("ChartAsset"), &assetProps{
	path: jsii.String("/path/to/asset"),
})

cluster.addHelmChart(jsii.String("test-chart"), &helmChartOptions{
	chartAsset: chartAsset,
})

type HelmChartProps

type HelmChartProps struct {
	// The name of the chart.
	//
	// Either this or `chartAsset` must be specified.
	Chart *string `field:"optional" json:"chart" yaml:"chart"`
	// The chart in the form of an asset.
	//
	// Either this or `chart` must be specified.
	ChartAsset awss3assets.Asset `field:"optional" json:"chartAsset" yaml:"chartAsset"`
	// Create the namespace if it does not exist.
	CreateNamespace *bool `field:"optional" json:"createNamespace" yaml:"createNamespace"`
	// The Kubernetes namespace scope of the requests.
	Namespace *string `field:"optional" json:"namespace" yaml:"namespace"`
	// The name of the release.
	Release *string `field:"optional" json:"release" yaml:"release"`
	// The repository which contains the chart.
	//
	// For example: https://kubernetes-charts.storage.googleapis.com/
	Repository *string `field:"optional" json:"repository" yaml:"repository"`
	// Amount of time to wait for any individual Kubernetes operation.
	//
	// Maximum 15 minutes.
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
	// The values to be used by the chart.
	//
	// For nested values use a nested dictionary. For example:
	// values: {
	//   installationCRDs: true,
	//   webhook: { port: 9443 }
	// }.
	Values *map[string]interface{} `field:"optional" json:"values" yaml:"values"`
	// The chart version to install.
	Version *string `field:"optional" json:"version" yaml:"version"`
	// Whether or not Helm should wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful.
	Wait *bool `field:"optional" json:"wait" yaml:"wait"`
	// The EKS cluster to apply this configuration to.
	//
	// [disable-awslint:ref-via-interface].
	Cluster ICluster `field:"required" json:"cluster" yaml:"cluster"`
}

Helm Chart properties.

Example:

var cluster cluster

// option 1: use a construct
eks.NewHelmChart(this, jsii.String("NginxIngress"), &helmChartProps{
	cluster: cluster,
	chart: jsii.String("nginx-ingress"),
	repository: jsii.String("https://helm.nginx.com/stable"),
	namespace: jsii.String("kube-system"),
})

// or, option 2: use `addHelmChart`
cluster.addHelmChart(jsii.String("NginxIngress"), &helmChartOptions{
	chart: jsii.String("nginx-ingress"),
	repository: jsii.String("https://helm.nginx.com/stable"),
	namespace: jsii.String("kube-system"),
})

type ICluster

type ICluster interface {
	awsec2.IConnectable
	awscdk.IResource
	// Defines a CDK8s chart in this cluster.
	//
	// Returns: a `KubernetesManifest` construct representing the chart.
	AddCdk8sChart(id *string, chart constructs.Construct, options *KubernetesManifestOptions) KubernetesManifest
	// Defines a Helm chart in this cluster.
	//
	// Returns: a `HelmChart` construct.
	AddHelmChart(id *string, options *HelmChartOptions) HelmChart
	// Defines a Kubernetes resource in this cluster.
	//
	// The manifest will be applied/deleted using kubectl as needed.
	//
	// Returns: a `KubernetesManifest` object.
	AddManifest(id *string, manifest ...*map[string]interface{}) KubernetesManifest
	// Creates a new service account with corresponding IAM Role (IRSA).
	AddServiceAccount(id *string, options *ServiceAccountOptions) ServiceAccount
	// Connect capacity in the form of an existing AutoScalingGroup to the EKS cluster.
	//
	// The AutoScalingGroup must be running an EKS-optimized AMI containing the
	// /etc/eks/bootstrap.sh script. This method will configure Security Groups,
	// add the right policies to the instance role, apply the right tags, and add
	// the required user data to the instance's launch configuration.
	//
	// Spot instances will be labeled `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.
	// If kubectl is enabled, the
	// [spot interrupt handler](https://github.com/awslabs/ec2-spot-labs/tree/master/ec2-spot-eks-solution/spot-termination-handler)
	// daemon will be installed on all spot instances to handle
	// [EC2 Spot Instance Termination Notices](https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/).
	//
	// Prefer to use `addAutoScalingGroupCapacity` if possible.
	// See: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html
	//
	ConnectAutoScalingGroupCapacity(autoScalingGroup awsautoscaling.AutoScalingGroup, options *AutoScalingGroupOptions)
	// An AWS Lambda layer that contains the `aws` CLI.
	//
	// If not defined, a default layer will be used containing the AWS CLI 1.x.
	AwscliLayer() awslambda.ILayerVersion
	// The unique ARN assigned to the service by AWS in the form of arn:aws:eks:.
	ClusterArn() *string
	// The certificate-authority-data for your cluster.
	ClusterCertificateAuthorityData() *string
	// Amazon Resource Name (ARN) or alias of the customer master key (CMK).
	ClusterEncryptionConfigKeyArn() *string
	// The API Server endpoint URL.
	ClusterEndpoint() *string
	// A security group to associate with the Cluster Handler's Lambdas.
	//
	// The Cluster Handler's Lambdas are responsible for calling AWS's EKS API.
	//
	// Requires `placeClusterHandlerInVpc` to be set to true.
	ClusterHandlerSecurityGroup() awsec2.ISecurityGroup
	// The physical name of the Cluster.
	ClusterName() *string
	// The cluster security group that was created by Amazon EKS for the cluster.
	ClusterSecurityGroup() awsec2.ISecurityGroup
	// The id of the cluster security group that was created by Amazon EKS for the cluster.
	ClusterSecurityGroupId() *string
	// Custom environment variables when running `kubectl` against this cluster.
	KubectlEnvironment() *map[string]*string
	// An IAM role that can perform kubectl operations against this cluster.
	//
	// The role should be mapped to the `system:masters` Kubernetes RBAC role.
	//
	// This role is directly passed to the lambda handler that sends Kube Ctl commands to the cluster.
	KubectlLambdaRole() awsiam.IRole
	// An AWS Lambda layer that includes `kubectl` and `helm`.
	//
	// If not defined, a default layer will be used containing Kubectl 1.20 and Helm 3.8.
	KubectlLayer() awslambda.ILayerVersion
	// Amount of memory to allocate to the provider's lambda function.
	KubectlMemory() awscdk.Size
	// Subnets to host the `kubectl` compute resources.
	//
	// If this is undefined, the k8s endpoint is expected to be accessible
	// publicly.
	KubectlPrivateSubnets() *[]awsec2.ISubnet
	// Kubectl Provider for issuing kubectl commands against this cluster.
	//
	// If not defined, a default provider will be used.
	KubectlProvider() IKubectlProvider
	// An IAM role that can perform kubectl operations against this cluster.
	//
	// The role should be mapped to the `system:masters` Kubernetes RBAC role.
	KubectlRole() awsiam.IRole
	// A security group to use for `kubectl` execution.
	//
	// If this is undefined, the k8s endpoint is expected to be accessible
	// publicly.
	KubectlSecurityGroup() awsec2.ISecurityGroup
	// An AWS Lambda layer that includes the NPM dependency `proxy-agent`.
	//
	// If not defined, a default layer will be used.
	OnEventLayer() awslambda.ILayerVersion
	// The Open ID Connect Provider of the cluster used to configure Service Accounts.
	OpenIdConnectProvider() awsiam.IOpenIdConnectProvider
	// Indicates whether Kubernetes resources can be automatically pruned.
	//
	// When
	// this is enabled (default), prune labels will be allocated and injected to
	// each resource. These labels will then be used when issuing the `kubectl
	// apply` operation with the `--prune` switch.
	Prune() *bool
	// The VPC in which this Cluster was created.
	Vpc() awsec2.IVpc
}

An EKS cluster.

func Cluster_FromClusterAttributes

func Cluster_FromClusterAttributes(scope constructs.Construct, id *string, attrs *ClusterAttributes) ICluster

Import an existing cluster.
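
Example (a minimal sketch; "my-cluster" is a placeholder name, and additional attributes such as a kubectl provider can be supplied as in the `KubectlProviderAttributes` example later in this document):

cluster := eks.Cluster_FromClusterAttributes(this, jsii.String("ImportedCluster"), &clusterAttributes{
	clusterName: jsii.String("my-cluster"),
})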

func FargateCluster_FromClusterAttributes

func FargateCluster_FromClusterAttributes(scope constructs.Construct, id *string, attrs *ClusterAttributes) ICluster

Import an existing cluster.

type IKubectlProvider added in v2.4.0

type IKubectlProvider interface {
	constructs.IConstruct
	// The IAM execution role of the handler.
	HandlerRole() awsiam.IRole
	// The IAM role to assume in order to perform kubectl operations against this cluster.
	RoleArn() *string
	// The custom resource provider's service token.
	ServiceToken() *string
}

Imported KubectlProvider that can be used in place of the default one created by CDK.

func KubectlProvider_FromKubectlProviderAttributes added in v2.4.0

func KubectlProvider_FromKubectlProviderAttributes(scope constructs.Construct, id *string, attrs *KubectlProviderAttributes) IKubectlProvider

Import an existing provider.

func KubectlProvider_GetOrCreate added in v2.4.0

func KubectlProvider_GetOrCreate(scope constructs.Construct, cluster ICluster) IKubectlProvider

Takes an existing provider or creates a new one based on the cluster.
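
Example (a minimal sketch based on the signature above; `cluster` is assumed to be an existing cluster in scope):

var cluster cluster

provider := eks.KubectlProvider_GetOrCreate(this, cluster)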

type INodegroup

type INodegroup interface {
	awscdk.IResource
	// Name of the nodegroup.
	NodegroupName() *string
}

NodeGroup interface.

func Nodegroup_FromNodegroupName

func Nodegroup_FromNodegroupName(scope constructs.Construct, id *string, nodegroupName *string) INodegroup

Import an existing Nodegroup by name.
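
Example (a minimal sketch based on the signature above; "my-nodegroup" is a placeholder for the name of an existing nodegroup):

nodegroup := eks.Nodegroup_FromNodegroupName(this, jsii.String("ImportedNodegroup"), jsii.String("my-nodegroup"))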

type IngressLoadBalancerAddressOptions

type IngressLoadBalancerAddressOptions struct {
	// The namespace the service belongs to.
	Namespace *string `field:"optional" json:"namespace" yaml:"namespace"`
	// Timeout for waiting on the load balancer address.
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
}

Options for fetching an IngressLoadBalancerAddress.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

ingressLoadBalancerAddressOptions := &ingressLoadBalancerAddressOptions{
	namespace: jsii.String("namespace"),
	timeout: cdk.duration.minutes(jsii.Number(30)),
}

type KubectlProvider added in v2.4.0

type KubectlProvider interface {
	awscdk.NestedStack
	IKubectlProvider
	// The AWS account into which this stack will be deployed.
	//
	// This value is resolved according to the following rules:
	//
	// 1. The value provided to `env.account` when the stack is defined. This can
	//     either be a concrete account (e.g. `585695031111`) or the
	//     `Aws.ACCOUNT_ID` token.
	// 2. `Aws.ACCOUNT_ID`, which represents the CloudFormation intrinsic reference
	//     `{ "Ref": "AWS::AccountId" }` encoded as a string token.
	//
	// Preferably, you should use the return value as an opaque string and not
	// attempt to parse it to implement your logic. If you do, you must first
	// check that it is a concrete value and not an unresolved token. If this
	// value is an unresolved token (`Token.isUnresolved(stack.account)` returns
	// `true`), this implies that the user intends this stack to synthesize
	// into an **account-agnostic template**. In this case, your code should either
	// fail (throw an error, emit a synth error using `Annotations.of(construct).addError()`) or
	// implement some other account-agnostic behavior.
	Account() *string
	// The ID of the cloud assembly artifact for this stack.
	ArtifactId() *string
	// Returns the list of AZs that are available in the AWS environment (account/region) associated with this stack.
	//
	// If the stack is environment-agnostic (either account and/or region are
	// tokens), this property will return an array with 2 tokens that will resolve
	// at deploy-time to the first two availability zones returned from CloudFormation's
	// `Fn::GetAZs` intrinsic function.
	//
	// If they are not available in the context, returns a set of dummy values and
	// reports them as missing, and lets the CLI resolve them by calling EC2
	// `DescribeAvailabilityZones` on the target environment.
	//
	// To specify a different strategy for selecting availability zones, override this method.
	AvailabilityZones() *[]*string
	// Indicates whether the stack requires bundling or not.
	BundlingRequired() *bool
	// Return the stacks this stack depends on.
	Dependencies() *[]awscdk.Stack
	// The environment coordinates in which this stack is deployed.
	//
	// In the form
	// `aws://account/region`. Use `stack.account` and `stack.region` to obtain
	// the specific values, no need to parse.
	//
	// You can use this value to determine if two stacks are targeting the same
	// environment.
	//
	// If either `stack.account` or `stack.region` are not concrete values (e.g.
	// `Aws.ACCOUNT_ID` or `Aws.REGION`) the special strings `unknown-account` and/or
	// `unknown-region` will be used respectively to indicate this stack is
	// region/account-agnostic.
	Environment() *string
	// The IAM execution role of the handler.
	HandlerRole() awsiam.IRole
	// Indicates if this is a nested stack, in which case `parentStack` will include a reference to its parent.
	Nested() *bool
	// If this is a nested stack, returns its parent stack.
	NestedStackParent() awscdk.Stack
	// If this is a nested stack, this represents its `AWS::CloudFormation::Stack` resource.
	//
	// `undefined` for top-level (non-nested) stacks.
	NestedStackResource() awscdk.CfnResource
	// The tree node.
	Node() constructs.Node
	// Returns the list of notification Amazon Resource Names (ARNs) for the current stack.
	NotificationArns() *[]*string
	// The partition in which this stack is defined.
	Partition() *string
	// The AWS region into which this stack will be deployed (e.g. `us-west-2`).
	//
	// This value is resolved according to the following rules:
	//
	// 1. The value provided to `env.region` when the stack is defined. This can
	//     either be a concrete region (e.g. `us-west-2`) or the `Aws.REGION`
	//     token.
	// 2. `Aws.REGION`, which represents the CloudFormation intrinsic reference
	//     `{ "Ref": "AWS::Region" }` encoded as a string token.
	//
	// Preferably, you should use the return value as an opaque string and not
	// attempt to parse it to implement your logic. If you do, you must first
	// check that it is a concrete value and not an unresolved token. If this
	// value is an unresolved token (`Token.isUnresolved(stack.region)` returns
	// `true`), this implies that the user intends this stack to synthesize
	// into a **region-agnostic template**. In this case, your code should either
	// fail (throw an error, emit a synth error using `Annotations.of(construct).addError()`) or
	// implement some other region-agnostic behavior.
	Region() *string
	// The IAM role to assume in order to perform kubectl operations against this cluster.
	RoleArn() *string
	// The custom resource provider's service token.
	ServiceToken() *string
	// An attribute that represents the ID of the stack.
	//
	// This is a context aware attribute:
	// - If this is referenced from the parent stack, it will return `{ "Ref": "LogicalIdOfNestedStackResource" }`.
	// - If this is referenced from the context of the nested stack, it will return `{ "Ref": "AWS::StackId" }`
	//
	// Example value: `arn:aws:cloudformation:us-east-2:123456789012:stack/mystack-mynestedstack-sggfrhxhum7w/f449b250-b969-11e0-a185-5081d0136786`.
	StackId() *string
	// An attribute that represents the name of the nested stack.
	//
	// This is a context aware attribute:
	// - If this is referenced from the parent stack, it will return a token that parses the name from the stack ID.
	// - If this is referenced from the context of the nested stack, it will return `{ "Ref": "AWS::StackName" }`
	//
	// Example value: `mystack-mynestedstack-sggfrhxhum7w`.
	StackName() *string
	// Synthesis method for this stack.
	Synthesizer() awscdk.IStackSynthesizer
	// Tags to be applied to the stack.
	Tags() awscdk.TagManager
	// The name of the CloudFormation template file emitted to the output directory during synthesis.
	//
	// Example value: `MyStack.template.json`
	TemplateFile() *string
	// Options for CloudFormation template (like version, transform, description).
	TemplateOptions() awscdk.ITemplateOptions
	// Whether termination protection is enabled for this stack.
	TerminationProtection() *bool
	// The Amazon domain suffix for the region in which this stack is defined.
	UrlSuffix() *string
	// Add a dependency between this stack and another stack.
	//
	// This can be used to define dependencies between any two stacks within an
	// app, and also supports nested stacks.
	AddDependency(target awscdk.Stack, reason *string)
	// Adds an arbitrary key-value pair with information you want to record about the stack.
	//
	// These get translated to the Metadata section of the generated template.
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
	//
	AddMetadata(key *string, value interface{})
	// Add a Transform to this stack. A Transform is a macro that AWS CloudFormation uses to process your template.
	//
	// Duplicate values are removed when stack is synthesized.
	//
	// Example:
	//   declare const stack: Stack;
	//
	//   stack.addTransform('AWS::Serverless-2016-10-31')
	//
	// See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-section-structure.html
	//
	AddTransform(transform *string)
	// Returns the naming scheme used to allocate logical IDs.
	//
	// By default, uses
	// the `HashedAddressingScheme` but this method can be overridden to customize
	// this behavior.
	//
	// In order to make sure logical IDs are unique and stable, we hash the resource
	// construct tree path (i.e. toplevel/secondlevel/.../myresource) and add it as
	// a suffix to the path components joined without a separator (CloudFormation
	// IDs only allow alphanumeric characters).
	//
	// The result will be:
	//
	//    <path.join('')><md5(path.join('/'))>
	//      "human"      "hash"
	//
	// If the "human" part of the ID exceeds 240 characters, we simply trim it so
	// the total ID doesn't exceed CloudFormation's 255 character limit.
	//
	// We only take 8 characters from the md5 hash (0.000005 chance of collision).
	//
	// Special cases:
	//
	// - If the path only contains a single component (i.e. it's a top-level
	//    resource), we won't add the hash to it. The hash is not needed for
	//    disambiguation and it also allows for a more straightforward migration of an
	//    existing CloudFormation template to a CDK stack without logical ID changes
	//    (or renames).
	// - For aesthetic reasons, if the last components of the path are the same
	//    (i.e. `L1/L2/Pipeline/Pipeline`), they will be de-duplicated to make the
	//    resulting human portion of the ID more pleasing: `L1L2Pipeline<HASH>`
	//    instead of `L1L2PipelinePipeline<HASH>`
	// - If a component is named "Default" it will be omitted from the path. This
	//    allows refactoring higher level abstractions around constructs without affecting
	//    the IDs of already deployed resources.
	// - If a component is named "Resource" it will be omitted from the user-visible
	//    path, but included in the hash. This reduces visual noise in the human readable
	//    part of the identifier.
	AllocateLogicalId(cfnElement awscdk.CfnElement) *string
	// Create a CloudFormation Export for a value.
	//
	// Returns a string representing the corresponding `Fn.importValue()`
	// expression for this Export. You can control the name for the export by
	// passing the `name` option.
	//
	// If you don't supply a value for `name`, the value you're exporting must be
	// a Resource attribute (for example: `bucket.bucketName`) and it will be
	// given the same name as the automatic cross-stack reference that would be created
	// if you used the attribute in another Stack.
	//
	// One of the uses for this method is to *remove* the relationship between
	// two Stacks established by automatic cross-stack references. It will
	// temporarily ensure that the CloudFormation Export still exists while you
	// remove the reference from the consuming stack. After that, you can remove
	// the resource and the manual export.
	//
	// ## Example
	//
	// Here is how the process works. Let's say there are two stacks,
	// `producerStack` and `consumerStack`, and `producerStack` has a bucket
	// called `bucket`, which is referenced by `consumerStack` (perhaps because
	// an AWS Lambda Function writes into it, or something like that).
	//
	// It is not safe to remove `producerStack.bucket` because as the bucket is being
	// deleted, `consumerStack` might still be using it.
	//
	// Instead, the process takes two deployments:
	//
	// ### Deployment 1: break the relationship
	//
	// - Make sure `consumerStack` no longer references `bucket.bucketName` (maybe the consumer
	//    stack now uses its own bucket, or it writes to an AWS DynamoDB table, or maybe you just
	//    remove the Lambda Function altogether).
	// - In the `ProducerStack` class, call `this.exportValue(this.bucket.bucketName)`. This
	//    will make sure the CloudFormation Export continues to exist while the relationship
	//    between the two stacks is being broken.
	// - Deploy (this will effectively only change the `consumerStack`, but it's safe to deploy both).
	//
	// ### Deployment 2: remove the bucket resource
	//
	// - You are now free to remove the `bucket` resource from `producerStack`.
	// - Don't forget to remove the `exportValue()` call as well.
	// - Deploy again (this time only the `producerStack` will be changed -- the bucket will be deleted).
	ExportValue(exportedValue interface{}, options *awscdk.ExportValueOptions) *string
	// Creates an ARN from components.
	//
	// If `partition`, `region` or `account` are not specified, the stack's
	// partition, region and account will be used.
	//
	// If any component is the empty string, an empty string will be inserted
	// into the generated ARN at the location that component corresponds to.
	//
	// The ARN will be formatted as follows:
	//
	//    arn:{partition}:{service}:{region}:{account}:{resource}{sep}{resource-name}
	//
	// The required ARN pieces that are omitted will be taken from the stack that
	// the 'scope' is attached to. If all ARN pieces are supplied, the supplied scope
	// can be 'undefined'.
	FormatArn(components *awscdk.ArnComponents) *string
	// Allocates a stack-unique CloudFormation-compatible logical identity for a specific resource.
	//
	// This method is called when a `CfnElement` is created and used to render the
	// initial logical identity of resources. Logical ID renames are applied at
	// this stage.
	//
	// This method uses the protected method `allocateLogicalId` to render the
	// logical ID for an element. To modify the naming scheme, extend the `Stack`
	// class and override this method.
	GetLogicalId(element awscdk.CfnElement) *string
	// Look up a fact value for the given fact for the region of this stack.
	//
	// Will return a definite value only if the region of the current stack is resolved.
	// If not, a lookup map will be added to the stack and the lookup will be done at
	// CDK deployment time.
	//
	// What regions will be included in the lookup map is controlled by the
	// `@aws-cdk/core:target-partitions` context value: it must be set to a list
	// of partitions, and only regions from the given partitions will be included.
	// If no such context key is set, all regions will be included.
	//
	// This function is intended to be used by construct library authors. Application
	// builders can rely on the abstractions offered by construct libraries and do
	// not have to worry about regional facts.
	//
	// If `defaultValue` is not given, it is an error if the fact is unknown for
	// the given region.
	RegionalFact(factName *string, defaultValue *string) *string
	// Rename a generated logical identity.
	//
	// To modify the naming scheme strategy, extend the `Stack` class and
	// override the `allocateLogicalId` method.
	RenameLogicalId(oldId *string, newId *string)
	// Indicate that a context key was expected.
	//
	// Contains instructions which will be emitted into the cloud assembly on how
	// the key should be supplied.
	ReportMissingContextKey(report *cloudassemblyschema.MissingContext)
	// Resolve a tokenized value in the context of the current stack.
	Resolve(obj interface{}) interface{}
	// Assign a value to one of the nested stack parameters.
	SetParameter(name *string, value *string)
	// Splits the provided ARN into its components.
	//
	// Works both if 'arn' is a string like 'arn:aws:s3:::bucket',
	// and a Token representing a dynamic CloudFormation expression
	// (in which case the returned components will also be dynamic CloudFormation expressions,
	// encoded as Tokens).
	SplitArn(arn *string, arnFormat awscdk.ArnFormat) *awscdk.ArnComponents
	// Convert an object, potentially containing tokens, to a JSON string.
	ToJsonString(obj interface{}, space *float64) *string
	// Returns a string representation of this construct.
	ToString() *string
}

Implementation of Kubectl Lambda.

Example:

handlerRole := iam.role.fromRoleArn(this, jsii.String("HandlerRole"), jsii.String("arn:aws:iam::123456789012:role/lambda-role"))
kubectlProvider := eks.kubectlProvider.fromKubectlProviderAttributes(this, jsii.String("KubectlProvider"), &kubectlProviderAttributes{
	functionArn: jsii.String("arn:aws:lambda:us-east-2:123456789012:function:my-function:1"),
	kubectlRoleArn: jsii.String("arn:aws:iam::123456789012:role/kubectl-role"),
	handlerRole: handlerRole,
})

cluster := eks.cluster.fromClusterAttributes(this, jsii.String("Cluster"), &clusterAttributes{
	clusterName: jsii.String("cluster"),
	kubectlProvider: kubectlProvider,
})

func NewKubectlProvider added in v2.4.0

func NewKubectlProvider(scope constructs.Construct, id *string, props *KubectlProviderProps) KubectlProvider
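
Example (a minimal sketch based on the signature above; `cluster` is assumed to be an existing cluster, mirroring the `KubectlProviderProps` example further below):

var cluster cluster

kubectlProvider := eks.NewKubectlProvider(this, jsii.String("KubectlProvider"), &kubectlProviderProps{
	cluster: cluster,
})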

type KubectlProviderAttributes added in v2.4.0

type KubectlProviderAttributes struct {
	// The kubectl provider lambda arn.
	FunctionArn *string `field:"required" json:"functionArn" yaml:"functionArn"`
	// The IAM execution role of the handler.
	//
	// This role must be able to assume kubectlRoleArn.
	HandlerRole awsiam.IRole `field:"required" json:"handlerRole" yaml:"handlerRole"`
	// The IAM role to assume in order to perform kubectl operations against this cluster.
	KubectlRoleArn *string `field:"required" json:"kubectlRoleArn" yaml:"kubectlRoleArn"`
}

Kubectl Provider Attributes.

Example:

handlerRole := iam.role.fromRoleArn(this, jsii.String("HandlerRole"), jsii.String("arn:aws:iam::123456789012:role/lambda-role"))
kubectlProvider := eks.kubectlProvider.fromKubectlProviderAttributes(this, jsii.String("KubectlProvider"), &kubectlProviderAttributes{
	functionArn: jsii.String("arn:aws:lambda:us-east-2:123456789012:function:my-function:1"),
	kubectlRoleArn: jsii.String("arn:aws:iam::123456789012:role/kubectl-role"),
	handlerRole: handlerRole,
})

cluster := eks.cluster.fromClusterAttributes(this, jsii.String("Cluster"), &clusterAttributes{
	clusterName: jsii.String("cluster"),
	kubectlProvider: kubectlProvider,
})

type KubectlProviderProps added in v2.4.0

type KubectlProviderProps struct {
	// The cluster to control.
	Cluster ICluster `field:"required" json:"cluster" yaml:"cluster"`
}

Properties for a KubectlProvider.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster

kubectlProviderProps := &kubectlProviderProps{
	cluster: cluster,
}

type KubernetesManifest

type KubernetesManifest interface {
	constructs.Construct
	// The tree node.
	Node() constructs.Node
	// Returns a string representation of this construct.
	ToString() *string
}

Represents a manifest within the Kubernetes system.

Alternatively, you can use `cluster.addManifest(id, resource[, resource, ...])` to define resources on this cluster.

Applies/deletes the manifest using `kubectl`.

Example:

var cluster cluster

namespace := cluster.addManifest(jsii.String("my-namespace"), map[string]interface{}{
	"apiVersion": jsii.String("v1"),
	"kind": jsii.String("Namespace"),
	"metadata": map[string]*string{
		"name": jsii.String("my-app"),
	},
})

service := cluster.addManifest(jsii.String("my-service"), map[string]interface{}{
	"metadata": map[string]*string{
		"name": jsii.String("myservice"),
		"namespace": jsii.String("my-app"),
	},
	"spec": map[string]interface{}{
	},
})

service.node.addDependency(namespace)

func NewKubernetesManifest

func NewKubernetesManifest(scope constructs.Construct, id *string, props *KubernetesManifestProps) KubernetesManifest

type KubernetesManifestOptions

type KubernetesManifestOptions struct {
	// Automatically detect `Ingress` resources in the manifest and annotate them so they are picked up by an ALB Ingress Controller.
	IngressAlb *bool `field:"optional" json:"ingressAlb" yaml:"ingressAlb"`
	// Specify the ALB scheme that should be applied to `Ingress` resources.
	//
	// Only applicable if `ingressAlb` is set to `true`.
	IngressAlbScheme AlbScheme `field:"optional" json:"ingressAlbScheme" yaml:"ingressAlbScheme"`
	// When a resource is removed from a Kubernetes manifest, it no longer appears in the manifest, and there is no way to know that this resource needs to be deleted.
	//
	// To address this, `kubectl apply` has a `--prune` option which will
	// query the cluster for all resources with a specific label and will remove
	// all the labeled resources that are not part of the applied manifest. If this
	// option is disabled and a resource is removed, it will become "orphaned" and
	// will not be deleted from the cluster.
	//
	// When this option is enabled (default), the construct will inject a label to
	// all Kubernetes resources included in this manifest which will be used to
	// prune resources when the manifest changes via `kubectl apply --prune`.
	//
	// The label name will be `aws.cdk.eks/prune-<ADDR>` where `<ADDR>` is the
	// 42-char unique address of this construct in the construct tree. Value is
	// empty.
	// See: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune-l-your-label
	//
	Prune *bool `field:"optional" json:"prune" yaml:"prune"`
	// A flag to signify if the manifest validation should be skipped.
	SkipValidation *bool `field:"optional" json:"skipValidation" yaml:"skipValidation"`
}

Options for `KubernetesManifest`.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

kubernetesManifestOptions := &kubernetesManifestOptions{
	ingressAlb: jsii.Boolean(false),
	ingressAlbScheme: awscdk.Aws_eks.albScheme_INTERNAL,
	prune: jsii.Boolean(false),
	skipValidation: jsii.Boolean(false),
}

type KubernetesManifestProps

type KubernetesManifestProps struct {
	// Automatically detect `Ingress` resources in the manifest and annotate them so they are picked up by an ALB Ingress Controller.
	IngressAlb *bool `field:"optional" json:"ingressAlb" yaml:"ingressAlb"`
	// Specify the ALB scheme that should be applied to `Ingress` resources.
	//
	// Only applicable if `ingressAlb` is set to `true`.
	IngressAlbScheme AlbScheme `field:"optional" json:"ingressAlbScheme" yaml:"ingressAlbScheme"`
	// When a resource is removed from a Kubernetes manifest, it no longer appears in the manifest, and there is no way to know that this resource needs to be deleted.
	//
	// To address this, `kubectl apply` has a `--prune` option which will
	// query the cluster for all resources with a specific label and will remove
	// all the labeled resources that are not part of the applied manifest. If this
	// option is disabled and a resource is removed, it will become "orphaned" and
	// will not be deleted from the cluster.
	//
	// When this option is enabled (default), the construct will inject a label to
	// all Kubernetes resources included in this manifest which will be used to
	// prune resources when the manifest changes via `kubectl apply --prune`.
	//
	// The label name will be `aws.cdk.eks/prune-<ADDR>` where `<ADDR>` is the
	// 42-char unique address of this construct in the construct tree. Value is
	// empty.
	// See: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune-l-your-label
	//
	Prune *bool `field:"optional" json:"prune" yaml:"prune"`
	// A flag to signify if the manifest validation should be skipped.
	SkipValidation *bool `field:"optional" json:"skipValidation" yaml:"skipValidation"`
	// The EKS cluster to apply this manifest to.
	//
	// [disable-awslint:ref-via-interface].
	Cluster ICluster `field:"required" json:"cluster" yaml:"cluster"`
	// The manifest to apply.
	//
	// Consists of any number of child resources.
	//
	// When the resources are created/updated, this manifest will be applied to the
	// cluster through `kubectl apply` and when the resources or the stack is
	// deleted, the resources in the manifest will be deleted through `kubectl delete`.
	//
	// Example:
	//   []map[string]interface{}{
	//   	map[string]interface{}{
	//   		"apiVersion": jsii.String("v1"),
	//   		"kind": jsii.String("Pod"),
	//   		"metadata": map[string]*string{
	//   			"name": jsii.String("mypod"),
	//   		},
	//   		"spec": map[string][]map[string]interface{}{
	//   			"containers": []map[string]interface{}{
	//   				map[string]interface{}{
	//   					"name": jsii.String("hello"),
	//   					"image": jsii.String("paulbouwer/hello-kubernetes:1.5"),
	//   					"ports": []map[string]*f64{
	//   						map[string]*f64{
	//   							"containerPort": jsii.Number(8080),
	//   						},
	//   					},
	//   				},
	//   			},
	//   		},
	//   	},
	//   }
	//
	Manifest *[]*map[string]interface{} `field:"required" json:"manifest" yaml:"manifest"`
	// Overwrite any existing resources.
	//
	// If this is set, we will use `kubectl apply` instead of `kubectl create`
	// when the resource is created. Otherwise, if there is already a resource
	// in the cluster with the same name, the operation will fail.
	Overwrite *bool `field:"optional" json:"overwrite" yaml:"overwrite"`
}

Properties for KubernetesManifest.

Example:

var cluster cluster

appLabel := map[string]*string{
	"app": jsii.String("hello-kubernetes"),
}

deployment := map[string]interface{}{
	"apiVersion": jsii.String("apps/v1"),
	"kind": jsii.String("Deployment"),
	"metadata": map[string]*string{
		"name": jsii.String("hello-kubernetes"),
	},
	"spec": map[string]interface{}{
		"replicas": jsii.Number(3),
		"selector": map[string]map[string]*string{
			"matchLabels": appLabel,
		},
		"template": map[string]map[string]map[string]*string{
			"metadata": map[string]map[string]*string{
				"labels": appLabel,
			},
			"spec": map[string][]map[string]interface{}{
				"containers": []map[string]interface{}{
					map[string]interface{}{
						"name": jsii.String("hello-kubernetes"),
						"image": jsii.String("paulbouwer/hello-kubernetes:1.5"),
						"ports": []map[string]*f64{
							map[string]*f64{
								"containerPort": jsii.Number(8080),
							},
						},
					},
				},
			},
		},
	},
}

service := map[string]interface{}{
	"apiVersion": jsii.String("v1"),
	"kind": jsii.String("Service"),
	"metadata": map[string]*string{
		"name": jsii.String("hello-kubernetes"),
	},
	"spec": map[string]interface{}{
		"type": jsii.String("LoadBalancer"),
		"ports": []map[string]*f64{
			map[string]*f64{
				"port": jsii.Number(80),
				"targetPort": jsii.Number(8080),
			},
		},
		"selector": appLabel,
	},
}

// option 1: use a construct
eks.NewKubernetesManifest(this, jsii.String("hello-kub"), &kubernetesManifestProps{
	cluster: cluster,
	manifest: []map[string]interface{}{
		deployment,
		service,
	},
})

// or, option 2: use `addManifest`
cluster.addManifest(jsii.String("hello-kub"), service, deployment)

type KubernetesObjectValue

type KubernetesObjectValue interface {
	constructs.Construct
	// The tree node.
	Node() constructs.Node
	// The value as a string token.
	Value() *string
	// Returns a string representation of this construct.
	ToString() *string
}

Represents a value of a specific object deployed in the cluster.

Use this to fetch any information available by the `kubectl get` command.

Example:

var cluster cluster

// query the load balancer address
myServiceAddress := eks.NewKubernetesObjectValue(this, jsii.String("LoadBalancerAttribute"), &kubernetesObjectValueProps{
	cluster: cluster,
	objectType: jsii.String("service"),
	objectName: jsii.String("my-service"),
	jsonPath: jsii.String(".status.loadBalancer.ingress[0].hostname"),
})

// pass the address to a lambda function
proxyFunction := lambda.NewFunction(this, jsii.String("ProxyFunction"), &functionProps{
	handler: jsii.String("index.handler"),
	code: lambda.code.fromInline(jsii.String("my-code")),
	runtime: lambda.runtime_NODEJS_14_X(),
	environment: map[string]*string{
		"myServiceAddress": myServiceAddress.value,
	},
})

func NewKubernetesObjectValue

func NewKubernetesObjectValue(scope constructs.Construct, id *string, props *KubernetesObjectValueProps) KubernetesObjectValue

type KubernetesObjectValueProps

type KubernetesObjectValueProps struct {
	// The EKS cluster to fetch attributes from.
	//
	// [disable-awslint:ref-via-interface].
	Cluster ICluster `field:"required" json:"cluster" yaml:"cluster"`
	// JSONPath to the specific value.
	// See: https://kubernetes.io/docs/reference/kubectl/jsonpath/
	//
	JsonPath *string `field:"required" json:"jsonPath" yaml:"jsonPath"`
	// The name of the object to query.
	ObjectName *string `field:"required" json:"objectName" yaml:"objectName"`
	// The object type to query.
	//
	// (e.g. 'service', 'pod', ...)
	ObjectType *string `field:"required" json:"objectType" yaml:"objectType"`
	// The namespace the object belongs to.
	ObjectNamespace *string `field:"optional" json:"objectNamespace" yaml:"objectNamespace"`
	// Timeout for waiting on a value.
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
}

Properties for KubernetesObjectValue.

Example:

var cluster cluster

// query the load balancer address
myServiceAddress := eks.NewKubernetesObjectValue(this, jsii.String("LoadBalancerAttribute"), &kubernetesObjectValueProps{
	cluster: cluster,
	objectType: jsii.String("service"),
	objectName: jsii.String("my-service"),
	jsonPath: jsii.String(".status.loadBalancer.ingress[0].hostname"),
})

// pass the address to a lambda function
proxyFunction := lambda.NewFunction(this, jsii.String("ProxyFunction"), &functionProps{
	handler: jsii.String("index.handler"),
	code: lambda.code.fromInline(jsii.String("my-code")),
	runtime: lambda.runtime_NODEJS_14_X(),
	environment: map[string]*string{
		"myServiceAddress": myServiceAddress.value,
	},
})

type KubernetesPatch

type KubernetesPatch interface {
	constructs.Construct
	// The tree node.
	Node() constructs.Node
	// Returns a string representation of this construct.
	ToString() *string
}

A CloudFormation resource which applies/restores a JSON patch into a Kubernetes resource.

Example:

var cluster cluster

eks.NewKubernetesPatch(this, jsii.String("hello-kub-deployment-label"), &kubernetesPatchProps{
	cluster: cluster,
	resourceName: jsii.String("deployment/hello-kubernetes"),
	applyPatch: map[string]interface{}{
		"spec": map[string]*f64{
			"replicas": jsii.Number(5),
		},
	},
	restorePatch: map[string]interface{}{
		"spec": map[string]*f64{
			"replicas": jsii.Number(3),
		},
	},
})

See: https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/

func NewKubernetesPatch

func NewKubernetesPatch(scope constructs.Construct, id *string, props *KubernetesPatchProps) KubernetesPatch

type KubernetesPatchProps

type KubernetesPatchProps struct {
	// The JSON object to pass to `kubectl patch` when the resource is created/updated.
	ApplyPatch *map[string]interface{} `field:"required" json:"applyPatch" yaml:"applyPatch"`
	// The cluster to apply the patch to.
	//
	// [disable-awslint:ref-via-interface].
	Cluster ICluster `field:"required" json:"cluster" yaml:"cluster"`
	// The full name of the resource to patch (e.g. `deployment/coredns`).
	ResourceName *string `field:"required" json:"resourceName" yaml:"resourceName"`
	// The JSON object to pass to `kubectl patch` when the resource is removed.
	RestorePatch *map[string]interface{} `field:"required" json:"restorePatch" yaml:"restorePatch"`
	// The patch type to pass to `kubectl patch`.
	//
	// The default type used by `kubectl patch` is "strategic".
	PatchType PatchType `field:"optional" json:"patchType" yaml:"patchType"`
	// The kubernetes API namespace.
	ResourceNamespace *string `field:"optional" json:"resourceNamespace" yaml:"resourceNamespace"`
}

Properties for KubernetesPatch.

Example:

var cluster cluster

eks.NewKubernetesPatch(this, jsii.String("hello-kub-deployment-label"), &kubernetesPatchProps{
	cluster: cluster,
	resourceName: jsii.String("deployment/hello-kubernetes"),
	applyPatch: map[string]interface{}{
		"spec": map[string]*f64{
			"replicas": jsii.Number(5),
		},
	},
	restorePatch: map[string]interface{}{
		"spec": map[string]*f64{
			"replicas": jsii.Number(3),
		},
	},
})

type KubernetesVersion

type KubernetesVersion interface {
	// Cluster version number.
	Version() *string
}

Kubernetes cluster version.

Example:

cluster := eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	defaultCapacityType: eks.defaultCapacityType_EC2,
})

func KubernetesVersion_Of

func KubernetesVersion_Of(version *string) KubernetesVersion

Custom cluster version.
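
Example (a minimal sketch based on the signature above; the version string is a placeholder for any version without a predefined constant):

version := eks.KubernetesVersion_Of(jsii.String("1.23"))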

func KubernetesVersion_V1_14

func KubernetesVersion_V1_14() KubernetesVersion

func KubernetesVersion_V1_15

func KubernetesVersion_V1_15() KubernetesVersion

func KubernetesVersion_V1_16

func KubernetesVersion_V1_16() KubernetesVersion

func KubernetesVersion_V1_17

func KubernetesVersion_V1_17() KubernetesVersion

func KubernetesVersion_V1_18

func KubernetesVersion_V1_18() KubernetesVersion

func KubernetesVersion_V1_19

func KubernetesVersion_V1_19() KubernetesVersion

func KubernetesVersion_V1_20

func KubernetesVersion_V1_20() KubernetesVersion

func KubernetesVersion_V1_21

func KubernetesVersion_V1_21() KubernetesVersion

func KubernetesVersion_V1_22 added in v2.20.0

func KubernetesVersion_V1_22() KubernetesVersion

type LaunchTemplateSpec

type LaunchTemplateSpec struct {
	// The Launch template ID.
	Id *string `field:"required" json:"id" yaml:"id"`
	// The launch template version to be used (optional).
	Version *string `field:"optional" json:"version" yaml:"version"`
}

Launch template property specification.

Example:

var cluster cluster

userData := "MIME-Version: 1.0\nContent-Type: multipart/mixed; boundary=\"==MYBOUNDARY==\"\n\n--==MYBOUNDARY==\nContent-Type: text/x-shellscript; charset=\"us-ascii\"\n\n#!/bin/bash\necho \"Running custom user data script\"\n\n--==MYBOUNDARY==--\\\n"
lt := ec2.NewCfnLaunchTemplate(this, jsii.String("LaunchTemplate"), &cfnLaunchTemplateProps{
	launchTemplateData: &launchTemplateDataProperty{
		instanceType: jsii.String("t3.small"),
		userData: awscdk.Fn.base64(userData),
	},
})

cluster.addNodegroupCapacity(jsii.String("extra-ng"), &nodegroupOptions{
	launchTemplateSpec: &launchTemplateSpec{
		id: lt.ref,
		version: lt.attrLatestVersionNumber,
	},
})

type MachineImageType

type MachineImageType string

The machine image type.

Example:

var cluster cluster

cluster.addAutoScalingGroupCapacity(jsii.String("BottlerocketNodes"), &autoScalingGroupCapacityOptions{
	instanceType: ec2.NewInstanceType(jsii.String("t3.small")),
	minCapacity: jsii.Number(2),
	machineImageType: eks.machineImageType_BOTTLEROCKET,
})
const (
	// Amazon EKS-optimized Linux AMI.
	MachineImageType_AMAZON_LINUX_2 MachineImageType = "AMAZON_LINUX_2"
	// Bottlerocket AMI.
	MachineImageType_BOTTLEROCKET MachineImageType = "BOTTLEROCKET"
)

type NodeType

type NodeType string

Whether the worker nodes should support GPU or just standard instances.

const (
	// Standard instances.
	NodeType_STANDARD NodeType = "STANDARD"
	// GPU instances.
	NodeType_GPU NodeType = "GPU"
	// Inferentia instances.
	NodeType_INFERENTIA NodeType = "INFERENTIA"
)

type Nodegroup

type Nodegroup interface {
	awscdk.Resource
	INodegroup
	// The Amazon EKS cluster resource.
	Cluster() ICluster
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// The tree node.
	Node() constructs.Node
	// ARN of the nodegroup.
	NodegroupArn() *string
	// Nodegroup name.
	NodegroupName() *string
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//    cross-environment scenarios.
	PhysicalName() *string
	// IAM role of the instance profile for the nodegroup.
	Role() awsiam.IRole
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	ToString() *string
}

The Nodegroup resource class.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster
var instanceType instanceType
var role role
var securityGroup securityGroup
var subnet subnet
var subnetFilter subnetFilter

nodegroup := awscdk.Aws_eks.NewNodegroup(this, jsii.String("MyNodegroup"), &nodegroupProps{
	cluster: cluster,

	// the properties below are optional
	amiType: awscdk.*Aws_eks.nodegroupAmiType_AL2_X86_64,
	capacityType: awscdk.*Aws_eks.capacityType_SPOT,
	desiredSize: jsii.Number(123),
	diskSize: jsii.Number(123),
	forceUpdate: jsii.Boolean(false),
	instanceTypes: []*instanceType{
		instanceType,
	},
	labels: map[string]*string{
		"labelsKey": jsii.String("labels"),
	},
	launchTemplateSpec: &launchTemplateSpec{
		id: jsii.String("id"),

		// the properties below are optional
		version: jsii.String("version"),
	},
	maxSize: jsii.Number(123),
	minSize: jsii.Number(123),
	nodegroupName: jsii.String("nodegroupName"),
	nodeRole: role,
	releaseVersion: jsii.String("releaseVersion"),
	remoteAccess: &nodegroupRemoteAccess{
		sshKeyName: jsii.String("sshKeyName"),

		// the properties below are optional
		sourceSecurityGroups: []iSecurityGroup{
			securityGroup,
		},
	},
	subnets: &subnetSelection{
		availabilityZones: []*string{
			jsii.String("availabilityZones"),
		},
		onePerAz: jsii.Boolean(false),
		subnetFilters: []*subnetFilter{
			subnetFilter,
		},
		subnetGroupName: jsii.String("subnetGroupName"),
		subnets: []iSubnet{
			subnet,
		},
		subnetType: awscdk.Aws_ec2.subnetType_PRIVATE_ISOLATED,
	},
	tags: map[string]*string{
		"tagsKey": jsii.String("tags"),
	},
	taints: []taintSpec{
		&taintSpec{
			effect: awscdk.*Aws_eks.taintEffect_NO_SCHEDULE,
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
})

func NewNodegroup

func NewNodegroup(scope constructs.Construct, id *string, props *NodegroupProps) Nodegroup

type NodegroupAmiType

type NodegroupAmiType string

The AMI type for your node group.

GPU instance types should use the `AL2_x86_64_GPU` AMI type, which uses the Amazon EKS-optimized Linux AMI with GPU support. Non-GPU instances should use the `AL2_x86_64` AMI type, which uses the Amazon EKS-optimized Linux AMI.

Example:

cluster := eks.NewCluster(this, jsii.String("HelloEKS"), &clusterProps{
	version: eks.kubernetesVersion_V1_21(),
	defaultCapacity: jsii.Number(0),
})

cluster.addNodegroupCapacity(jsii.String("custom-node-group"), &nodegroupOptions{
	instanceTypes: []instanceType{
		ec2.NewInstanceType(jsii.String("m5.large")),
	},
	minSize: jsii.Number(4),
	diskSize: jsii.Number(100),
	amiType: eks.nodegroupAmiType_AL2_X86_64_GPU,
})
const (
	// Amazon Linux 2 (x86-64).
	NodegroupAmiType_AL2_X86_64 NodegroupAmiType = "AL2_X86_64"
	// Amazon Linux 2 with GPU support.
	NodegroupAmiType_AL2_X86_64_GPU NodegroupAmiType = "AL2_X86_64_GPU"
	// Amazon Linux 2 (ARM-64).
	NodegroupAmiType_AL2_ARM_64 NodegroupAmiType = "AL2_ARM_64"
	// Bottlerocket Linux(ARM-64).
	NodegroupAmiType_BOTTLEROCKET_ARM_64 NodegroupAmiType = "BOTTLEROCKET_ARM_64"
	// Bottlerocket(x86-64).
	NodegroupAmiType_BOTTLEROCKET_X86_64 NodegroupAmiType = "BOTTLEROCKET_X86_64"
)

type NodegroupOptions

type NodegroupOptions struct {
	// The AMI type for your node group.
	//
	// If you explicitly specify the launchTemplate with custom AMI, do not specify this property, or
	// the node group deployment will fail. In other cases, you will need to specify the correct amiType for the nodegroup.
	AmiType NodegroupAmiType `field:"optional" json:"amiType" yaml:"amiType"`
	// The capacity type of the nodegroup.
	CapacityType CapacityType `field:"optional" json:"capacityType" yaml:"capacityType"`
	// The current number of worker nodes that the managed node group should maintain.
	//
	// If not specified,
	// the nodegroup will initially create `minSize` instances.
	DesiredSize *float64 `field:"optional" json:"desiredSize" yaml:"desiredSize"`
	// The root device disk size (in GiB) for your node group instances.
	DiskSize *float64 `field:"optional" json:"diskSize" yaml:"diskSize"`
	// Force the update if the existing node group's pods are unable to be drained due to a pod disruption budget issue.
	//
	// If an update fails because pods could not be drained, you can force the update
	// so that the old node is terminated whether or not any pods are
	// running on it.
	ForceUpdate *bool `field:"optional" json:"forceUpdate" yaml:"forceUpdate"`
	// The instance types to use for your node group.
	// See: - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-nodegroup.html#cfn-eks-nodegroup-instancetypes
	//
	InstanceTypes *[]awsec2.InstanceType `field:"optional" json:"instanceTypes" yaml:"instanceTypes"`
	// The Kubernetes labels to be applied to the nodes in the node group when they are created.
	Labels *map[string]*string `field:"optional" json:"labels" yaml:"labels"`
	// Launch template specification used for the nodegroup.
	// See: - https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html
	//
	LaunchTemplateSpec *LaunchTemplateSpec `field:"optional" json:"launchTemplateSpec" yaml:"launchTemplateSpec"`
	// The maximum number of worker nodes that the managed node group can scale out to.
	//
	// Managed node groups can support up to 100 nodes by default.
	MaxSize *float64 `field:"optional" json:"maxSize" yaml:"maxSize"`
	// The minimum number of worker nodes that the managed node group can scale in to.
	//
	// This number must be greater than or equal to zero.
	MinSize *float64 `field:"optional" json:"minSize" yaml:"minSize"`
	// Name of the Nodegroup.
	NodegroupName *string `field:"optional" json:"nodegroupName" yaml:"nodegroupName"`
	// The IAM role to associate with your node group.
	//
	// The Amazon EKS worker node kubelet daemon
	// makes calls to AWS APIs on your behalf. Worker nodes receive permissions for these API calls through
	// an IAM instance profile and associated policies. Before you can launch worker nodes and register them
	// into a cluster, you must create an IAM role for those worker nodes to use when they are launched.
	NodeRole awsiam.IRole `field:"optional" json:"nodeRole" yaml:"nodeRole"`
	// The AMI version of the Amazon EKS-optimized AMI to use with your node group (for example, `1.14.7-YYYYMMDD`).
	ReleaseVersion *string `field:"optional" json:"releaseVersion" yaml:"releaseVersion"`
	// The remote access (SSH) configuration to use with your node group.
	//
	// Disabled by default. However, if you
	// specify an Amazon EC2 SSH key but do not specify a source security group when you create a managed node group,
	// then port 22 on the worker nodes is opened to the internet (0.0.0.0/0).
	RemoteAccess *NodegroupRemoteAccess `field:"optional" json:"remoteAccess" yaml:"remoteAccess"`
	// The subnets to use for the Auto Scaling group that is created for your node group.
	//
	// When you specify a
	// SubnetSelection, the required tags are automatically applied to the selected subnets, i.e.
	// `kubernetes.io/cluster/CLUSTER_NAME` with a value of `shared`, where `CLUSTER_NAME` is replaced with
	// the name of your cluster.
	Subnets *awsec2.SubnetSelection `field:"optional" json:"subnets" yaml:"subnets"`
	// The metadata to apply to the node group to assist with categorization and organization.
	//
	// Each tag consists of
	// a key and an optional value, both of which you define. Node group tags do not propagate to any other resources
	// associated with the node group, such as the Amazon EC2 instances or subnets.
	Tags *map[string]*string `field:"optional" json:"tags" yaml:"tags"`
	// The Kubernetes taints to be applied to the nodes in the node group when they are created.
	Taints *[]*TaintSpec `field:"optional" json:"taints" yaml:"taints"`
}

The Nodegroup options for the addNodegroupCapacity() method.

Example:

var cluster cluster

cluster.addNodegroupCapacity(jsii.String("extra-ng-spot"), &nodegroupOptions{
	instanceTypes: []instanceType{
		ec2.NewInstanceType(jsii.String("c5.large")),
		ec2.NewInstanceType(jsii.String("c5a.large")),
		ec2.NewInstanceType(jsii.String("c5d.large")),
	},
	minSize: jsii.Number(3),
	capacityType: eks.capacityType_SPOT,
})
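
Where a custom launch template is required, the `launchTemplateSpec` option described above can point the node group at it by id and version. A minimal sketch, assuming an existing cluster and a `CfnLaunchTemplate` (here `lt`) defined elsewhere in the stack:

var cluster cluster
// assumed: an ec2.CfnLaunchTemplate defined elsewhere in the stack
var lt cfnLaunchTemplate

cluster.addNodegroupCapacity(jsii.String("extra-ng-launch-template"), &nodegroupOptions{
	launchTemplateSpec: &launchTemplateSpec{
		// reference the launch template by its id and (optionally) a specific version
		id: lt.ref,
		version: lt.attrLatestVersionNumber,
	},
})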

type NodegroupProps

type NodegroupProps struct {
	// The AMI type for your node group.
	//
	// If you explicitly specify the launchTemplate with a custom AMI, do not specify this property, or
	// the node group deployment will fail. In all other cases, you will need to specify the correct amiType for the node group.
	AmiType NodegroupAmiType `field:"optional" json:"amiType" yaml:"amiType"`
	// The capacity type of the nodegroup.
	CapacityType CapacityType `field:"optional" json:"capacityType" yaml:"capacityType"`
	// The current number of worker nodes that the managed node group should maintain.
	//
	// If not specified,
	// the nodegroup will initially create `minSize` instances.
	DesiredSize *float64 `field:"optional" json:"desiredSize" yaml:"desiredSize"`
	// The root device disk size (in GiB) for your node group instances.
	DiskSize *float64 `field:"optional" json:"diskSize" yaml:"diskSize"`
	// Force the update if the existing node group's pods are unable to be drained due to a pod disruption budget issue.
	//
	// If an update fails because pods could not be drained, you can force the update after it fails to terminate the old
	// node whether or not any pods are
	// running on the node.
	ForceUpdate *bool `field:"optional" json:"forceUpdate" yaml:"forceUpdate"`
	// The instance types to use for your node group.
	// See: - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-nodegroup.html#cfn-eks-nodegroup-instancetypes
	//
	InstanceTypes *[]awsec2.InstanceType `field:"optional" json:"instanceTypes" yaml:"instanceTypes"`
	// The Kubernetes labels to be applied to the nodes in the node group when they are created.
	Labels *map[string]*string `field:"optional" json:"labels" yaml:"labels"`
	// Launch template specification used for the nodegroup.
	// See: - https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html
	//
	LaunchTemplateSpec *LaunchTemplateSpec `field:"optional" json:"launchTemplateSpec" yaml:"launchTemplateSpec"`
	// The maximum number of worker nodes that the managed node group can scale out to.
	//
	// Managed node groups can support up to 100 nodes by default.
	MaxSize *float64 `field:"optional" json:"maxSize" yaml:"maxSize"`
	// The minimum number of worker nodes that the managed node group can scale in to.
	//
	// This number must be greater than or equal to zero.
	MinSize *float64 `field:"optional" json:"minSize" yaml:"minSize"`
	// Name of the Nodegroup.
	NodegroupName *string `field:"optional" json:"nodegroupName" yaml:"nodegroupName"`
	// The IAM role to associate with your node group.
	//
	// The Amazon EKS worker node kubelet daemon
	// makes calls to AWS APIs on your behalf. Worker nodes receive permissions for these API calls through
	// an IAM instance profile and associated policies. Before you can launch worker nodes and register them
	// into a cluster, you must create an IAM role for those worker nodes to use when they are launched.
	NodeRole awsiam.IRole `field:"optional" json:"nodeRole" yaml:"nodeRole"`
	// The AMI version of the Amazon EKS-optimized AMI to use with your node group (for example, `1.14.7-YYYYMMDD`).
	ReleaseVersion *string `field:"optional" json:"releaseVersion" yaml:"releaseVersion"`
	// The remote access (SSH) configuration to use with your node group.
	//
	// Disabled by default. However, if you specify an Amazon EC2 SSH key but do not specify
	// a source security group when you create a managed node group, then port 22 on the
	// worker nodes is opened to the internet (0.0.0.0/0).
	RemoteAccess *NodegroupRemoteAccess `field:"optional" json:"remoteAccess" yaml:"remoteAccess"`
	// The subnets to use for the Auto Scaling group that is created for your node group.
	//
	// When a SubnetSelection is specified, the selected subnets will automatically be tagged
	// with the required tag, i.e. `kubernetes.io/cluster/CLUSTER_NAME` with a value of `shared`,
	// where `CLUSTER_NAME` is replaced with the name of your cluster.
	Subnets *awsec2.SubnetSelection `field:"optional" json:"subnets" yaml:"subnets"`
	// The metadata to apply to the node group to assist with categorization and organization.
	//
	// Each tag consists of
	// a key and an optional value, both of which you define. Node group tags do not propagate to any other resources
	// associated with the node group, such as the Amazon EC2 instances or subnets.
	Tags *map[string]*string `field:"optional" json:"tags" yaml:"tags"`
	// The Kubernetes taints to be applied to the nodes in the node group when they are created.
	Taints *[]*TaintSpec `field:"optional" json:"taints" yaml:"taints"`
	// Cluster resource.
	Cluster ICluster `field:"required" json:"cluster" yaml:"cluster"`
}

NodeGroup properties interface.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster
var instanceType instanceType
var role role
var securityGroup securityGroup
var subnet subnet
var subnetFilter subnetFilter

nodegroupProps := &nodegroupProps{
	cluster: cluster,

	// the properties below are optional
	amiType: awscdk.Aws_eks.nodegroupAmiType_AL2_X86_64,
	capacityType: awscdk.Aws_eks.capacityType_SPOT,
	desiredSize: jsii.Number(123),
	diskSize: jsii.Number(123),
	forceUpdate: jsii.Boolean(false),
	instanceTypes: []*instanceType{
		instanceType,
	},
	labels: map[string]*string{
		"labelsKey": jsii.String("labels"),
	},
	launchTemplateSpec: &launchTemplateSpec{
		id: jsii.String("id"),

		// the properties below are optional
		version: jsii.String("version"),
	},
	maxSize: jsii.Number(123),
	minSize: jsii.Number(123),
	nodegroupName: jsii.String("nodegroupName"),
	nodeRole: role,
	releaseVersion: jsii.String("releaseVersion"),
	remoteAccess: &nodegroupRemoteAccess{
		sshKeyName: jsii.String("sshKeyName"),

		// the properties below are optional
		sourceSecurityGroups: []iSecurityGroup{
			securityGroup,
		},
	},
	subnets: &subnetSelection{
		availabilityZones: []*string{
			jsii.String("availabilityZones"),
		},
		onePerAz: jsii.Boolean(false),
		subnetFilters: []*subnetFilter{
			subnetFilter,
		},
		subnetGroupName: jsii.String("subnetGroupName"),
		subnets: []iSubnet{
			subnet,
		},
		subnetType: awscdk.Aws_ec2.subnetType_PRIVATE_ISOLATED,
	},
	tags: map[string]*string{
		"tagsKey": jsii.String("tags"),
	},
	taints: []taintSpec{
		&taintSpec{
			effect: awscdk.Aws_eks.taintEffect_NO_SCHEDULE,
			key: jsii.String("key"),
			value: jsii.String("value"),
		},
	},
}
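
`NodegroupProps` is the full property set used when a `Nodegroup` is instantiated directly, rather than through `cluster.addNodegroupCapacity()` (which supplies the cluster for you). A minimal sketch of direct instantiation, assuming an existing cluster; the construct id, instance type and sizes are placeholders:

var cluster cluster

eks.NewNodegroup(this, jsii.String("MyNodegroup"), &nodegroupProps{
	cluster: cluster,

	// the properties below are optional
	instanceTypes: []instanceType{
		ec2.NewInstanceType(jsii.String("t3.medium")),
	},
	minSize: jsii.Number(2),
	maxSize: jsii.Number(4),
})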

type NodegroupRemoteAccess

type NodegroupRemoteAccess struct {
	// The Amazon EC2 SSH key that provides access for SSH communication with the worker nodes in the managed node group.
	SshKeyName *string `field:"required" json:"sshKeyName" yaml:"sshKeyName"`
	// The security groups that are allowed SSH access (port 22) to the worker nodes.
	//
	// If you specify an Amazon EC2 SSH
	// key but do not specify a source security group when you create a managed node group, then port 22 on the worker
	// nodes is opened to the internet (0.0.0.0/0).
	SourceSecurityGroups *[]awsec2.ISecurityGroup `field:"optional" json:"sourceSecurityGroups" yaml:"sourceSecurityGroups"`
}

The remote access (SSH) configuration to use with your node group.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

var securityGroup securityGroup

nodegroupRemoteAccess := &nodegroupRemoteAccess{
	sshKeyName: jsii.String("sshKeyName"),

	// the properties below are optional
	sourceSecurityGroups: []iSecurityGroup{
		securityGroup,
	},
}

See: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-eks-nodegroup-remoteaccess.html
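
Because omitting `sourceSecurityGroups` opens port 22 to 0.0.0.0/0, SSH access is usually restricted to a known security group. A minimal sketch, assuming an existing cluster, an EC2 key pair named "my-key-pair", and a security group defined elsewhere in the stack:

var cluster cluster
// assumed: a security group defined elsewhere in the stack
var bastionSecurityGroup securityGroup

cluster.addNodegroupCapacity(jsii.String("ng-with-ssh"), &nodegroupOptions{
	remoteAccess: &nodegroupRemoteAccess{
		sshKeyName: jsii.String("my-key-pair"),
		// limit SSH access to this security group instead of 0.0.0.0/0
		sourceSecurityGroups: []iSecurityGroup{
			bastionSecurityGroup,
		},
	},
})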

type OpenIdConnectProvider

type OpenIdConnectProvider interface {
	awsiam.OpenIdConnectProvider
	// The environment this resource belongs to.
	//
	// For resources that are created and managed by the CDK
	// (generally, those created by creating new class instances like Role, Bucket, etc.),
	// this is always the same as the environment of the stack they belong to;
	// however, for imported resources
	// (those obtained from static methods like fromRoleArn, fromBucketName, etc.),
	// that might be different than the stack they were imported into.
	Env() *awscdk.ResourceEnvironment
	// The tree node.
	Node() constructs.Node
	// The Amazon Resource Name (ARN) of the IAM OpenID Connect provider.
	OpenIdConnectProviderArn() *string
	// The issuer for OIDC Provider.
	OpenIdConnectProviderIssuer() *string
	// Returns a string-encoded token that resolves to the physical name that should be passed to the CloudFormation resource.
	//
	// This value will resolve to one of the following:
	// - a concrete value (e.g. `"my-awesome-bucket"`)
	// - `undefined`, when a name should be generated by CloudFormation
	// - a concrete name generated automatically during synthesis, in
	//    cross-environment scenarios.
	PhysicalName() *string
	// The stack in which this resource is defined.
	Stack() awscdk.Stack
	// Apply the given removal policy to this resource.
	//
	// The Removal Policy controls what happens to this resource when it stops
	// being managed by CloudFormation, either because you've removed it from the
	// CDK application or because you've made a change that requires the resource
	// to be replaced.
	//
	// The resource can be deleted (`RemovalPolicy.DESTROY`), or left in your AWS
	// account for data recovery and cleanup later (`RemovalPolicy.RETAIN`).
	ApplyRemovalPolicy(policy awscdk.RemovalPolicy)
	GeneratePhysicalName() *string
	// Returns an environment-sensitive token that should be used for the resource's "ARN" attribute (e.g. `bucket.bucketArn`).
	//
	// Normally, this token will resolve to `arnAttr`, but if the resource is
	// referenced across environments, `arnComponents` will be used to synthesize
	// a concrete ARN with the resource's physical name. Make sure to reference
	// `this.physicalName` in `arnComponents`.
	GetResourceArnAttribute(arnAttr *string, arnComponents *awscdk.ArnComponents) *string
	// Returns an environment-sensitive token that should be used for the resource's "name" attribute (e.g. `bucket.bucketName`).
	//
	// Normally, this token will resolve to `nameAttr`, but if the resource is
	// referenced across environments, it will be resolved to `this.physicalName`,
	// which will be a concrete name.
	GetResourceNameAttribute(nameAttr *string) *string
	// Returns a string representation of this construct.
	ToString() *string
}

IAM OIDC identity providers are entities in IAM that describe an external identity provider (IdP) service that supports the OpenID Connect (OIDC) standard, such as Google or Salesforce.

You use an IAM OIDC identity provider when you want to establish trust between an OIDC-compatible IdP and your AWS account.

This implementation provides default values for the thumbprints and clientIds props that are compatible with the EKS cluster.

Example:

var issuerUrl string

// you can import an existing provider
provider := eks.openIdConnectProvider.fromOpenIdConnectProviderArn(this, jsii.String("Provider"), jsii.String("arn:aws:iam::123456:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/AB123456ABC"))
// or create a new one using an existing issuer url
provider2 := eks.NewOpenIdConnectProvider(this, jsii.String("Provider2"), &openIdConnectProviderProps{
	url: issuerUrl,
})

cluster := eks.cluster.fromClusterAttributes(this, jsii.String("MyCluster"), &clusterAttributes{
	clusterName: jsii.String("Cluster"),
	openIdConnectProvider: provider,
	kubectlRoleArn: jsii.String("arn:aws:iam::123456:role/service-role/k8sservicerole"),
})

serviceAccount := cluster.addServiceAccount(jsii.String("MyServiceAccount"))

bucket := s3.NewBucket(this, jsii.String("Bucket"))
bucket.grantReadWrite(serviceAccount)

See: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html

func NewOpenIdConnectProvider

func NewOpenIdConnectProvider(scope constructs.Construct, id *string, props *OpenIdConnectProviderProps) OpenIdConnectProvider

Defines an OpenID Connect provider.

type OpenIdConnectProviderProps

type OpenIdConnectProviderProps struct {
	// The URL of the identity provider.
	//
	// The URL must begin with https:// and
	// should correspond to the iss claim in the provider's OpenID Connect ID
	// tokens. Per the OIDC standard, path components are allowed but query
	// parameters are not. Typically the URL consists of only a hostname, like
	// https://server.example.org or https://example.com.
	//
	// You can find your OIDC issuer URL by running:
	// aws eks describe-cluster --name %cluster_name% --query "cluster.identity.oidc.issuer" --output text
	Url *string `field:"required" json:"url" yaml:"url"`
}

Initialization properties for `OpenIdConnectProvider`.

Example:

var issuerUrl string

// you can import an existing provider
provider := eks.openIdConnectProvider.fromOpenIdConnectProviderArn(this, jsii.String("Provider"), jsii.String("arn:aws:iam::123456:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/AB123456ABC"))
// or create a new one using an existing issuer url
provider2 := eks.NewOpenIdConnectProvider(this, jsii.String("Provider2"), &openIdConnectProviderProps{
	url: issuerUrl,
})

cluster := eks.cluster.fromClusterAttributes(this, jsii.String("MyCluster"), &clusterAttributes{
	clusterName: jsii.String("Cluster"),
	openIdConnectProvider: provider,
	kubectlRoleArn: jsii.String("arn:aws:iam::123456:role/service-role/k8sservicerole"),
})

serviceAccount := cluster.addServiceAccount(jsii.String("MyServiceAccount"))

bucket := s3.NewBucket(this, jsii.String("Bucket"))
bucket.grantReadWrite(serviceAccount)

type PatchType

type PatchType string

Values for the `--type` argument of `kubectl patch`.

const (
	// JSON Patch, RFC 6902.
	PatchType_JSON PatchType = "JSON"
	// JSON Merge patch.
	PatchType_MERGE PatchType = "MERGE"
	// Strategic merge patch.
	PatchType_STRATEGIC PatchType = "STRATEGIC"
)
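
These values are typically passed as the `patchType` property of a `KubernetesPatch` (defined elsewhere in this module). A minimal sketch, assuming an existing cluster and a `deployment/hello-kubernetes` resource, that scales the deployment with a strategic merge patch:

var cluster cluster

eks.NewKubernetesPatch(this, jsii.String("scale-deployment"), &kubernetesPatchProps{
	cluster: cluster,
	resourceName: jsii.String("deployment/hello-kubernetes"),
	applyPatch: map[string]interface{}{
		"spec": map[string]interface{}{
			"replicas": jsii.Number(5),
		},
	},
	restorePatch: map[string]interface{}{
		"spec": map[string]interface{}{
			"replicas": jsii.Number(3),
		},
	},
	// STRATEGIC is also the default patch type
	patchType: eks.patchType_STRATEGIC,
})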

type Selector

type Selector struct {
	// The Kubernetes namespace that the selector should match.
	//
	// You must specify a namespace for a selector. The selector only matches pods
	// that are created in this namespace, but you can create multiple selectors
	// to target multiple namespaces.
	Namespace *string `field:"required" json:"namespace" yaml:"namespace"`
	// The Kubernetes labels that the selector should match.
	//
	// A pod must contain
	// all of the labels that are specified in the selector for it to be
	// considered a match.
	Labels *map[string]*string `field:"optional" json:"labels" yaml:"labels"`
}

Fargate profile selector.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

selector := &selector{
	namespace: jsii.String("namespace"),

	// the properties below are optional
	labels: map[string]*string{
		"labelsKey": jsii.String("labels"),
	},
}
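
Selectors are normally supplied to `cluster.addFargateProfile()`, which schedules any pod matching one of them onto Fargate. A minimal sketch, assuming an existing cluster; the profile name, namespace and label are placeholders:

var cluster cluster

cluster.addFargateProfile(jsii.String("MyProfile"), &fargateProfileOptions{
	selectors: []selector{
		&selector{
			namespace: jsii.String("default"),
			// only pods carrying this label are matched
			labels: map[string]*string{
				"fargate": jsii.String("enabled"),
			},
		},
	},
})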

type ServiceAccount

type ServiceAccount interface {
	constructs.Construct
	awsiam.IPrincipal
	// When this Principal is used in an AssumeRole policy, the action to use.
	AssumeRoleAction() *string
	// The principal to grant permissions to.
	GrantPrincipal() awsiam.IPrincipal
	// The tree node.
	Node() constructs.Node
	// Return the policy fragment that identifies this principal in a Policy.
	PolicyFragment() awsiam.PrincipalPolicyFragment
	// The role which is linked to the service account.
	Role() awsiam.IRole
	// The name of the service account.
	ServiceAccountName() *string
	// The namespace where the service account is located in.
	ServiceAccountNamespace() *string
	// Add to the policy of this principal.
	AddToPrincipalPolicy(statement awsiam.PolicyStatement) *awsiam.AddToPrincipalPolicyResult
	// Returns a string representation of this construct.
	ToString() *string
}

Service Account.

Example:

var issuerUrl string

// you can import an existing provider
provider := eks.openIdConnectProvider.fromOpenIdConnectProviderArn(this, jsii.String("Provider"), jsii.String("arn:aws:iam::123456:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/AB123456ABC"))
// or create a new one using an existing issuer url
provider2 := eks.NewOpenIdConnectProvider(this, jsii.String("Provider2"), &openIdConnectProviderProps{
	url: issuerUrl,
})

cluster := eks.cluster.fromClusterAttributes(this, jsii.String("MyCluster"), &clusterAttributes{
	clusterName: jsii.String("Cluster"),
	openIdConnectProvider: provider,
	kubectlRoleArn: jsii.String("arn:aws:iam::123456:role/service-role/k8sservicerole"),
})

serviceAccount := cluster.addServiceAccount(jsii.String("MyServiceAccount"))

bucket := s3.NewBucket(this, jsii.String("Bucket"))
bucket.grantReadWrite(serviceAccount)
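
Beyond the `grant*` helpers shown above, policy statements can be attached directly through `addToPrincipalPolicy` (listed in the interface). A minimal sketch, reusing `serviceAccount` from the example above; the actions and resources are illustrative placeholders, and the aws-iam module is assumed to be imported as `iam`:

serviceAccount.addToPrincipalPolicy(iam.NewPolicyStatement(&policyStatementProps{
	// illustrative read-only S3 permissions
	actions: []*string{
		jsii.String("s3:GetObject"),
		jsii.String("s3:ListBucket"),
	},
	resources: []*string{
		jsii.String("*"),
	},
}))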

func NewServiceAccount

func NewServiceAccount(scope constructs.Construct, id *string, props *ServiceAccountProps) ServiceAccount

type ServiceAccountOptions

type ServiceAccountOptions struct {
	// Additional annotations of the service account.
	Annotations *map[string]*string `field:"optional" json:"annotations" yaml:"annotations"`
	// Additional labels of the service account.
	Labels *map[string]*string `field:"optional" json:"labels" yaml:"labels"`
	// The name of the service account.
	//
	// The name of a ServiceAccount object must be a valid DNS subdomain name.
	// https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
	Name *string `field:"optional" json:"name" yaml:"name"`
	// The namespace of the service account.
	//
	// All namespace names must be valid RFC 1123 DNS labels.
	// https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#namespaces-and-dns
	Namespace *string `field:"optional" json:"namespace" yaml:"namespace"`
}

Options for `ServiceAccount`.

Example:

var cluster cluster

// add service account with annotations and labels
serviceAccount := cluster.addServiceAccount(jsii.String("MyServiceAccount"), &serviceAccountOptions{
	annotations: map[string]*string{
		"eks.amazonaws.com/sts-regional-endpoints": jsii.String("false"),
	},
	labels: map[string]*string{
		"some-label": jsii.String("with-some-value"),
	},
})

type ServiceAccountProps

type ServiceAccountProps struct {
	// Additional annotations of the service account.
	Annotations *map[string]*string `field:"optional" json:"annotations" yaml:"annotations"`
	// Additional labels of the service account.
	Labels *map[string]*string `field:"optional" json:"labels" yaml:"labels"`
	// The name of the service account.
	//
	// The name of a ServiceAccount object must be a valid DNS subdomain name.
	// https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
	Name *string `field:"optional" json:"name" yaml:"name"`
	// The namespace of the service account.
	//
	// All namespace names must be valid RFC 1123 DNS labels.
	// https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#namespaces-and-dns
	Namespace *string `field:"optional" json:"namespace" yaml:"namespace"`
	// The cluster in which to create the service account.
	Cluster ICluster `field:"required" json:"cluster" yaml:"cluster"`
}

Properties for defining service accounts.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

var cluster cluster

serviceAccountProps := &serviceAccountProps{
	cluster: cluster,

	// the properties below are optional
	annotations: map[string]*string{
		"annotationsKey": jsii.String("annotations"),
	},
	labels: map[string]*string{
		"labelsKey": jsii.String("labels"),
	},
	name: jsii.String("name"),
	namespace: jsii.String("namespace"),
}

type ServiceLoadBalancerAddressOptions

type ServiceLoadBalancerAddressOptions struct {
	// The namespace the service belongs to.
	Namespace *string `field:"optional" json:"namespace" yaml:"namespace"`
	// Timeout for waiting on the load balancer address.
	Timeout awscdk.Duration `field:"optional" json:"timeout" yaml:"timeout"`
}

Options for fetching a ServiceLoadBalancerAddress.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import cdk "github.com/aws/aws-cdk-go/awscdk"
import "github.com/aws/aws-cdk-go/awscdk"

serviceLoadBalancerAddressOptions := &serviceLoadBalancerAddressOptions{
	namespace: jsii.String("namespace"),
	timeout: cdk.duration.minutes(jsii.Number(30)),
}
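
These options are passed to `cluster.getServiceLoadBalancerAddress()`, which returns a deploy-time token that resolves to the hostname of the service's load balancer. A minimal sketch, assuming an existing cluster and a Kubernetes service named "my-service"; the `cdk` alias is the one used in the example above:

var cluster cluster

// resolves at deploy time to the load balancer hostname of the service
serviceAddress := cluster.getServiceLoadBalancerAddress(jsii.String("my-service"), &serviceLoadBalancerAddressOptions{
	namespace: jsii.String("default"),
	timeout: cdk.duration.minutes(jsii.Number(10)),
})

// e.g. surface the address as a stack output
cdk.NewCfnOutput(this, jsii.String("ServiceAddress"), &cfnOutputProps{
	value: serviceAddress,
})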

type TaintEffect

type TaintEffect string

Effect types of a Kubernetes node taint.

Example:

var cluster cluster

cluster.addNodegroupCapacity(jsii.String("custom-node-group"), &nodegroupOptions{
	instanceTypes: []instanceType{
		ec2.NewInstanceType(jsii.String("m5.large")),
	},
	taints: []taintSpec{
		&taintSpec{
			effect: eks.taintEffect_NO_SCHEDULE,
			key: jsii.String("foo"),
			value: jsii.String("bar"),
		},
	},
})

const (
	// NoSchedule.
	TaintEffect_NO_SCHEDULE TaintEffect = "NO_SCHEDULE"
	// PreferNoSchedule.
	TaintEffect_PREFER_NO_SCHEDULE TaintEffect = "PREFER_NO_SCHEDULE"
	// NoExecute.
	TaintEffect_NO_EXECUTE TaintEffect = "NO_EXECUTE"
)

type TaintSpec

type TaintSpec struct {
	// Effect type.
	Effect TaintEffect `field:"optional" json:"effect" yaml:"effect"`
	// Taint key.
	Key *string `field:"optional" json:"key" yaml:"key"`
	// Taint value.
	Value *string `field:"optional" json:"value" yaml:"value"`
}

Taint interface.

Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import "github.com/aws/aws-cdk-go/awscdk"

taintSpec := &taintSpec{
	effect: awscdk.Aws_eks.taintEffect_NO_SCHEDULE,
	key: jsii.String("key"),
	value: jsii.String("value"),
}
