Package types

Documentation

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type AssignPublicIp

type AssignPublicIp string
const (
	AssignPublicIpEnabled  AssignPublicIp = "ENABLED"
	AssignPublicIpDisabled AssignPublicIp = "DISABLED"
)

Enum values for AssignPublicIp

func (AssignPublicIp) Values

func (AssignPublicIp) Values() []AssignPublicIp

Values returns all known values for AssignPublicIp. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type AwsVpcConfiguration

type AwsVpcConfiguration struct {

	// Specifies the subnets associated with the task. These subnets must all be in
	// the same VPC. You can specify as many as 16 subnets.
	//
	// This member is required.
	Subnets []string

	// Specifies whether the task's elastic network interface receives a public IP
	// address. You can specify ENABLED only when LaunchType in EcsParameters is set
	// to FARGATE .
	AssignPublicIp AssignPublicIp

	// Specifies the security groups associated with the task. These security groups
	// must all be in the same VPC. You can specify as many as five security groups. If
	// you do not specify a security group, the default security group for the VPC is
	// used.
	SecurityGroups []string
	// contains filtered or unexported fields
}

This structure specifies the VPC subnets and security groups for the task, and whether a public IP address is to be used. This structure is relevant only for ECS tasks that use the awsvpc network mode.
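
As an illustrative sketch, the snippet below wires an AwsVpcConfiguration into the NetworkConfiguration structure used by an ECS target; the subnet and security group IDs are hypothetical placeholders.

package main

import (
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	// AssignPublicIpEnabled is valid only when the ECS launch type is FARGATE.
	netCfg := types.NetworkConfiguration{
		AwsvpcConfiguration: &types.AwsVpcConfiguration{
			Subnets:        []string{"subnet-0123456789abcdef0"}, // hypothetical subnet ID
			SecurityGroups: []string{"sg-0123456789abcdef0"},     // hypothetical security group ID
			AssignPublicIp: types.AssignPublicIpEnabled,
		},
	}
	_ = netCfg
}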

type BatchArrayProperties

type BatchArrayProperties struct {

	// The size of the array, if this is an array batch job.
	Size *int32
	// contains filtered or unexported fields
}

The array properties for the submitted job, such as the size of the array. The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. This parameter is used only if the target is a Batch job.

type BatchContainerOverrides

type BatchContainerOverrides struct {

	// The command to send to the container that overrides the default command from
	// the Docker image or the task definition.
	Command []string

	// The environment variables to send to the container. You can add new environment
	// variables, which are added to the container at launch, or you can override the
	// existing environment variables from the Docker image or the task definition.
	//
	// Environment variables cannot start with "Batch". This naming convention is
	// reserved for variables that Batch sets.
	Environment []BatchEnvironmentVariable

	// The instance type to use for a multi-node parallel job.
	//
	// This parameter isn't applicable to single-node container jobs or jobs that run
	// on Fargate resources, and shouldn't be provided.
	InstanceType *string

	// The type and amount of resources to assign to a container. This overrides the
	// settings in the job definition. The supported resources include GPU, MEMORY,
	// and VCPU.
	ResourceRequirements []BatchResourceRequirement
	// contains filtered or unexported fields
}

The overrides that are sent to a container.

type BatchEnvironmentVariable

type BatchEnvironmentVariable struct {

	// The name of the key-value pair. For environment variables, this is the name of
	// the environment variable.
	Name *string

	// The value of the key-value pair. For environment variables, this is the value
	// of the environment variable.
	Value *string
	// contains filtered or unexported fields
}

The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition.

Environment variables cannot start with "Batch". This naming convention is reserved for variables that Batch sets.

type BatchJobDependency

type BatchJobDependency struct {

	// The job ID of the Batch job that's associated with this dependency.
	JobId *string

	// The type of the job dependency.
	Type BatchJobDependencyType
	// contains filtered or unexported fields
}

An object that represents a Batch job dependency.

type BatchJobDependencyType

type BatchJobDependencyType string
const (
	BatchJobDependencyTypeNToN       BatchJobDependencyType = "N_TO_N"
	BatchJobDependencyTypeSequential BatchJobDependencyType = "SEQUENTIAL"
)

Enum values for BatchJobDependencyType

func (BatchJobDependencyType) Values

func (BatchJobDependencyType) Values() []BatchJobDependencyType

Values returns all known values for BatchJobDependencyType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type BatchResourceRequirement

type BatchResourceRequirement struct {

	// The type of resource to assign to a container. The supported resources include
	// GPU , MEMORY , and VCPU .
	//
	// This member is required.
	Type BatchResourceRequirementType

	// The quantity of the specified resource to reserve for the container. The values
	// vary based on the type specified.
	//
	// type="GPU" The number of physical GPUs to reserve for the container. Make sure
	// that the number of GPUs reserved for all containers in a job doesn't exceed the
	// number of available GPUs on the compute resource that the job is launched on.
	//
	// GPUs aren't available for jobs that are running on Fargate resources.
	//
	// type="MEMORY" The memory hard limit (in MiB) present to the container. This
	// parameter is supported for jobs that are running on EC2 resources. If your
	// container attempts to exceed the memory specified, the container is terminated.
	// This parameter maps to Memory in the [Create a container] section of the [Docker Remote API] and the --memory option
	// to [docker run]. You must specify at least 4 MiB of memory for a job. This is required but
	// can be specified in several places for multi-node parallel (MNP) jobs. It must
	// be specified for each node at least once. This parameter maps to Memory in the [Create a container]
	// section of the [Docker Remote API]and the --memory option to [docker run].
	//
	// If you're trying to maximize your resource utilization by providing your jobs
	// as much memory as possible for a particular instance type, see [Memory management] in the Batch
	// User Guide.
	//
	// For jobs that are running on Fargate resources, the value is the hard limit
	// (in MiB) and must match one of the supported values, and the VCPU value must
	// be one of the values supported for that memory value.
	//
	// value = 512 VCPU = 0.25
	//
	// value = 1024 VCPU = 0.25 or 0.5
	//
	// value = 2048 VCPU = 0.25, 0.5, or 1
	//
	// value = 3072 VCPU = 0.5, or 1
	//
	// value = 4096 VCPU = 0.5, 1, or 2
	//
	// value = 5120, 6144, or 7168 VCPU = 1 or 2
	//
	// value = 8192 VCPU = 1, 2, 4, or 8
	//
	// value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360 VCPU = 2 or 4
	//
	// value = 16384 VCPU = 2, 4, or 8
	//
	// value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696,
	// or 30720 VCPU = 4
	//
	// value = 20480, 24576, or 28672 VCPU = 4 or 8
	//
	// value = 36864, 45056, 53248, or 61440 VCPU = 8
	//
	// value = 32768, 40960, 49152, or 57344 VCPU = 8 or 16
	//
	// value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880 VCPU = 16
	//
	// type="VCPU" The number of vCPUs reserved for the container. This parameter maps
	// to CpuShares in the [Create a container] section of the [Docker Remote API] and the --cpu-shares option to [docker run]. Each vCPU
	// is equivalent to 1,024 CPU shares. For EC2 resources, you must specify at least
	// one vCPU. This is required but can be specified in several places; it must be
	// specified for each node at least once.
	//
	// The default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For
	// more information about Fargate quotas, see [Fargate quotas] in the Amazon Web Services General
	// Reference.
	//
	// For jobs that are running on Fargate resources, the value must match one of
	// the supported values, and the MEMORY value must be one of the values
	// supported for that VCPU value. The supported values are 0.25, 0.5, 1, 2, 4,
	// 8, and 16.
	//
	// value = 0.25 MEMORY = 512, 1024, or 2048
	//
	// value = 0.5 MEMORY = 1024, 2048, 3072, or 4096
	//
	// value = 1 MEMORY = 2048, 3072, 4096, 5120, 6144, 7168, or 8192
	//
	// value = 2 MEMORY = 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288,
	// 13312, 14336, 15360, or 16384
	//
	// value = 4 MEMORY = 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384,
	// 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648,
	// 28672, 29696, or 30720
	//
	// value = 8 MEMORY = 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056,
	// 49152, 53248, 57344, or 61440
	//
	// value = 16 MEMORY = 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112,
	// 98304, 106496, 114688, or 122880
	//
	// [docker run]: https://docs.docker.com/engine/reference/run/
	// [Create a container]: https://docs.docker.com/engine/api/v1.23/#create-a-container
	// [Memory management]: https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html
	// [Docker Remote API]: https://docs.docker.com/engine/api/v1.23/
	// [Fargate quotas]: https://docs.aws.amazon.com/general/latest/gr/ecs-service.html#service-quotas-fargate
	//
	// This member is required.
	Value *string
	// contains filtered or unexported fields
}

The type and amount of a resource to assign to a container. The supported resources include GPU, MEMORY, and VCPU.
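
To make the Fargate pairing rules concrete, here is a minimal sketch that reserves one of the documented combinations (1 vCPU with 2048 MiB); aws.String is the pointer helper from the SDK's aws package.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	// A valid Fargate pairing per the table above: VCPU 1 with MEMORY 2048.
	reqs := []types.BatchResourceRequirement{
		{Type: types.BatchResourceRequirementTypeVcpu, Value: aws.String("1")},
		{Type: types.BatchResourceRequirementTypeMemory, Value: aws.String("2048")},
	}
	_ = reqs
}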

type BatchResourceRequirementType

type BatchResourceRequirementType string
const (
	BatchResourceRequirementTypeGpu    BatchResourceRequirementType = "GPU"
	BatchResourceRequirementTypeMemory BatchResourceRequirementType = "MEMORY"
	BatchResourceRequirementTypeVcpu   BatchResourceRequirementType = "VCPU"
)

Enum values for BatchResourceRequirementType

func (BatchResourceRequirementType) Values

func (BatchResourceRequirementType) Values() []BatchResourceRequirementType

Values returns all known values for BatchResourceRequirementType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type BatchRetryStrategy

type BatchRetryStrategy struct {

	// The number of times to move a job to the RUNNABLE status. If the value of
	// attempts is greater than one, the job is retried on failure that many times.
	Attempts *int32
	// contains filtered or unexported fields
}

The retry strategy that's associated with a job. For more information, see Automated job retries in the Batch User Guide.

type CapacityProviderStrategyItem

type CapacityProviderStrategyItem struct {

	// The short name of the capacity provider.
	//
	// This member is required.
	CapacityProvider *string

	// The base value designates how many tasks, at a minimum, to run on the specified
	// capacity provider. Only one capacity provider in a capacity provider strategy
	// can have a base defined. If no value is specified, the default value of 0 is
	// used.
	Base int32

	// The weight value designates the relative percentage of the total number of
	// tasks launched that should use the specified capacity provider. The weight value
	// is taken into consideration after the base value, if defined, is satisfied.
	Weight int32
	// contains filtered or unexported fields
}

The details of a capacity provider strategy. To learn more, see CapacityProviderStrategyItem in the Amazon ECS API Reference.
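
A minimal sketch of a two-provider strategy; FARGATE and FARGATE_SPOT are the standard Amazon ECS capacity provider names, used here only as an example.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	strategy := []types.CapacityProviderStrategyItem{
		// Run at least two tasks on FARGATE; only one item may define a base.
		{CapacityProvider: aws.String("FARGATE"), Base: 2, Weight: 1},
		// Remaining tasks are split 1:3 in favor of FARGATE_SPOT.
		{CapacityProvider: aws.String("FARGATE_SPOT"), Weight: 3},
	}
	_ = strategy
}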

type CloudwatchLogsLogDestination added in v1.7.0

type CloudwatchLogsLogDestination struct {

	// The Amazon Resource Name (ARN) for the CloudWatch log group to
	// which EventBridge sends the log records.
	LogGroupArn *string
	// contains filtered or unexported fields
}

The Amazon CloudWatch Logs logging configuration settings for the pipe.

type CloudwatchLogsLogDestinationParameters added in v1.7.0

type CloudwatchLogsLogDestinationParameters struct {

	// The Amazon Resource Name (ARN) for the CloudWatch log group to
	// which EventBridge sends the log records.
	//
	// This member is required.
	LogGroupArn *string
	// contains filtered or unexported fields
}

The Amazon CloudWatch Logs logging configuration settings for the pipe.

type ConflictException

type ConflictException struct {
	Message *string

	ErrorCodeOverride *string

	ResourceId   *string
	ResourceType *string
	// contains filtered or unexported fields
}

An action you attempted resulted in an exception.

func (*ConflictException) Error

func (e *ConflictException) Error() string

func (*ConflictException) ErrorCode

func (e *ConflictException) ErrorCode() string

func (*ConflictException) ErrorFault

func (e *ConflictException) ErrorFault() smithy.ErrorFault

func (*ConflictException) ErrorMessage

func (e *ConflictException) ErrorMessage() string

type DeadLetterConfig

type DeadLetterConfig struct {

	// The ARN of the specified target for the dead-letter queue.
	//
	// For Amazon Kinesis stream and Amazon DynamoDB stream sources, specify either an
	// Amazon SNS topic or Amazon SQS queue ARN.
	Arn *string
	// contains filtered or unexported fields
}

A DeadLetterConfig object that contains information about a dead-letter queue configuration.

type DimensionMapping added in v1.12.0

type DimensionMapping struct {

	// The metadata attributes of the time series. For example, the name and
	// Availability Zone of an Amazon EC2 instance or the name of the manufacturer of a
	// wind turbine are dimensions.
	//
	// This member is required.
	DimensionName *string

	// Dynamic path to the dimension value in the source event.
	//
	// This member is required.
	DimensionValue *string

	// The data type of the dimension for the time-series data.
	//
	// This member is required.
	DimensionValueType DimensionValueType
	// contains filtered or unexported fields
}

Maps source data to a dimension in the target Timestream for LiveAnalytics table.

For more information, see Amazon Timestream for LiveAnalytics concepts.

type DimensionValueType added in v1.12.0

type DimensionValueType string
const (
	DimensionValueTypeVarchar DimensionValueType = "VARCHAR"
)

Enum values for DimensionValueType

func (DimensionValueType) Values added in v1.12.0

func (DimensionValueType) Values() []DimensionValueType

Values returns all known values for DimensionValueType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type DynamoDBStreamStartPosition

type DynamoDBStreamStartPosition string
const (
	DynamoDBStreamStartPositionTrimHorizon DynamoDBStreamStartPosition = "TRIM_HORIZON"
	DynamoDBStreamStartPositionLatest      DynamoDBStreamStartPosition = "LATEST"
)

Enum values for DynamoDBStreamStartPosition

func (DynamoDBStreamStartPosition) Values

func (DynamoDBStreamStartPosition) Values() []DynamoDBStreamStartPosition

Values returns all known values for DynamoDBStreamStartPosition. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type EcsContainerOverride

type EcsContainerOverride struct {

	// The command to send to the container that overrides the default command from
	// the Docker image or the task definition. You must also specify a container name.
	Command []string

	// The number of cpu units reserved for the container, instead of the default
	// value from the task definition. You must also specify a container name.
	Cpu *int32

	// The environment variables to send to the container. You can add new environment
	// variables, which are added to the container at launch, or you can override the
	// existing environment variables from the Docker image or the task definition. You
	// must also specify a container name.
	Environment []EcsEnvironmentVariable

	// A list of files containing the environment variables to pass to a container,
	// instead of the value from the container definition.
	EnvironmentFiles []EcsEnvironmentFile

	// The hard limit (in MiB) of memory to present to the container, instead of the
	// default value from the task definition. If your container attempts to exceed the
	// memory specified here, the container is killed. You must also specify a
	// container name.
	Memory *int32

	// The soft limit (in MiB) of memory to reserve for the container, instead of the
	// default value from the task definition. You must also specify a container name.
	MemoryReservation *int32

	// The name of the container that receives the override. This parameter is
	// required if any override is specified.
	Name *string

	// The type and amount of a resource to assign to a container, instead of the
	// default value from the task definition. The only supported resource is a GPU.
	ResourceRequirements []EcsResourceRequirement
	// contains filtered or unexported fields
}

The overrides that are sent to a container. An empty container override can be passed in. An example of an empty container override is {"containerOverrides": []}. If a non-empty container override is specified, the name parameter must be included.

type EcsEnvironmentFile

type EcsEnvironmentFile struct {

	// The file type to use. The only supported value is s3 .
	//
	// This member is required.
	Type EcsEnvironmentFileType

	// The Amazon Resource Name (ARN) of the Amazon S3 object containing the
	// environment variable file.
	//
	// This member is required.
	Value *string
	// contains filtered or unexported fields
}

A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored. For more information about the environment variable file syntax, see Declare default environment variables in file.

If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Specifying environment variables in the Amazon Elastic Container Service Developer Guide.

This parameter is only supported for tasks hosted on Fargate using the following platform versions:

  • Linux platform version 1.4.0 or later.

  • Windows platform version 1.0.0 or later.
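
A minimal sketch of an environment file reference; the S3 object ARN is a hypothetical placeholder and must point to a file with a .env extension.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	envFiles := []types.EcsEnvironmentFile{{
		Type:  types.EcsEnvironmentFileTypeS3,                             // s3 is the only supported type
		Value: aws.String("arn:aws:s3:::example-bucket/config/app.env"), // hypothetical object ARN
	}}
	_ = envFiles
}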

type EcsEnvironmentFileType

type EcsEnvironmentFileType string
const (
	EcsEnvironmentFileTypeS3 EcsEnvironmentFileType = "s3"
)

Enum values for EcsEnvironmentFileType

func (EcsEnvironmentFileType) Values

func (EcsEnvironmentFileType) Values() []EcsEnvironmentFileType

Values returns all known values for EcsEnvironmentFileType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type EcsEnvironmentVariable

type EcsEnvironmentVariable struct {

	// The name of the key-value pair. For environment variables, this is the name of
	// the environment variable.
	Name *string

	// The value of the key-value pair. For environment variables, this is the value
	// of the environment variable.
	Value *string
	// contains filtered or unexported fields
}

The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.

type EcsEphemeralStorage

type EcsEphemeralStorage struct {

	// The total amount, in GiB, of ephemeral storage to set for the task. The minimum
	// supported value is 21 GiB and the maximum supported value is 200 GiB.
	//
	// This member is required.
	SizeInGiB *int32
	// contains filtered or unexported fields
}

The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate. For more information, see Fargate task storage in the Amazon ECS User Guide for Fargate.

This parameter is only supported for tasks hosted on Fargate using Linux platform version 1.4.0 or later. This parameter is not supported for Windows containers on Fargate.

type EcsInferenceAcceleratorOverride

type EcsInferenceAcceleratorOverride struct {

	// The Elastic Inference accelerator device name to override for the task. This
	// parameter must match a deviceName specified in the task definition.
	DeviceName *string

	// The Elastic Inference accelerator type to use.
	DeviceType *string
	// contains filtered or unexported fields
}

Details on an Elastic Inference accelerator task override. This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.

type EcsResourceRequirement

type EcsResourceRequirement struct {

	// The type of resource to assign to a container. The supported values are GPU or
	// InferenceAccelerator .
	//
	// This member is required.
	Type EcsResourceRequirementType

	// The value for the specified resource type.
	//
	// If the GPU type is used, the value is the number of physical GPUs the Amazon
	// ECS container agent reserves for the container. The number of GPUs that's
	// reserved for all containers in a task can't exceed the number of available GPUs
	// on the container instance that the task is launched on.
	//
	// If the InferenceAccelerator type is used, the value matches the deviceName for
	// an InferenceAccelerator specified in a task definition.
	//
	// This member is required.
	Value *string
	// contains filtered or unexported fields
}

The type and amount of a resource to assign to a container. The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide.

type EcsResourceRequirementType

type EcsResourceRequirementType string
const (
	EcsResourceRequirementTypeGpu                  EcsResourceRequirementType = "GPU"
	EcsResourceRequirementTypeInferenceAccelerator EcsResourceRequirementType = "InferenceAccelerator"
)

Enum values for EcsResourceRequirementType

func (EcsResourceRequirementType) Values

func (EcsResourceRequirementType) Values() []EcsResourceRequirementType

Values returns all known values for EcsResourceRequirementType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type EcsTaskOverride

type EcsTaskOverride struct {

	// One or more container overrides that are sent to a task.
	ContainerOverrides []EcsContainerOverride

	// The cpu override for the task.
	Cpu *string

	// The ephemeral storage setting override for the task.
	//
	// This parameter is only supported for tasks hosted on Fargate that use the
	// following platform versions:
	//
	//   - Linux platform version 1.4.0 or later.
	//
	//   - Windows platform version 1.0.0 or later.
	EphemeralStorage *EcsEphemeralStorage

	// The Amazon Resource Name (ARN) of the task execution IAM role override for the
	// task. For more information, see [Amazon ECS task execution IAM role] in the Amazon Elastic Container Service
	// Developer Guide.
	//
	// [Amazon ECS task execution IAM role]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html
	ExecutionRoleArn *string

	// The Elastic Inference accelerator override for the task.
	InferenceAcceleratorOverrides []EcsInferenceAcceleratorOverride

	// The memory override for the task.
	Memory *string

	// The Amazon Resource Name (ARN) of the IAM role that containers in this task can
	// assume. All containers in this task are granted the permissions that are
	// specified in this role. For more information, see [IAM Role for Tasks] in the Amazon Elastic
	// Container Service Developer Guide.
	//
	// [IAM Role for Tasks]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
	TaskRoleArn *string
	// contains filtered or unexported fields
}

The overrides that are associated with a task.
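
A minimal sketch of a task override under hypothetical names: one container-level override (Name is required once any container field is set) plus a task-level memory override.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	override := types.EcsTaskOverride{
		ContainerOverrides: []types.EcsContainerOverride{{
			Name:    aws.String("app"), // required when any container override is set
			Command: []string{"/bin/worker", "--once"},
			Environment: []types.EcsEnvironmentVariable{
				{Name: aws.String("MODE"), Value: aws.String("pipe")},
			},
		}},
		Memory: aws.String("1024"), // task-level memory override, in MiB
	}
	_ = override
}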

type EpochTimeUnit added in v1.12.0

type EpochTimeUnit string
const (
	EpochTimeUnitMilliseconds EpochTimeUnit = "MILLISECONDS"
	EpochTimeUnitSeconds      EpochTimeUnit = "SECONDS"
	EpochTimeUnitMicroseconds EpochTimeUnit = "MICROSECONDS"
	EpochTimeUnitNanoseconds  EpochTimeUnit = "NANOSECONDS"
)

Enum values for EpochTimeUnit

func (EpochTimeUnit) Values added in v1.12.0

func (EpochTimeUnit) Values() []EpochTimeUnit

Values returns all known values for EpochTimeUnit. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type Filter

type Filter struct {

	// The event pattern.
	Pattern *string
	// contains filtered or unexported fields
}

Filter events using an event pattern. For more information, see Events and Event Patterns in the Amazon EventBridge User Guide.

type FilterCriteria

type FilterCriteria struct {

	// The event patterns.
	Filters []Filter
	// contains filtered or unexported fields
}

The collection of event patterns used to filter events.

To remove a filter, specify a FilterCriteria object with an empty array of Filter objects.

For more information, see Events and Event Patterns in the Amazon EventBridge User Guide.
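
A minimal sketch of filter criteria; the pattern is a hypothetical EventBridge event pattern matching a field in the event body.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	criteria := types.FilterCriteria{
		Filters: []types.Filter{
			// Only events whose body.eventType is ORDER_CREATED pass the filter.
			{Pattern: aws.String(`{"body":{"eventType":["ORDER_CREATED"]}}`)},
		},
	}
	_ = criteria
}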

type FirehoseLogDestination added in v1.7.0

type FirehoseLogDestination struct {

	// The Amazon Resource Name (ARN) of the Firehose delivery stream to which
	// EventBridge delivers the pipe log records.
	DeliveryStreamArn *string
	// contains filtered or unexported fields
}

The Amazon Data Firehose logging configuration settings for the pipe.

type FirehoseLogDestinationParameters added in v1.7.0

type FirehoseLogDestinationParameters struct {

	// Specifies the Amazon Resource Name (ARN) of the Firehose delivery stream to
	// which EventBridge delivers the pipe log records.
	//
	// This member is required.
	DeliveryStreamArn *string
	// contains filtered or unexported fields
}

The Amazon Data Firehose logging configuration settings for the pipe.

type IncludeExecutionDataOption added in v1.7.0

type IncludeExecutionDataOption string
const (
	IncludeExecutionDataOptionAll IncludeExecutionDataOption = "ALL"
)

Enum values for IncludeExecutionDataOption

func (IncludeExecutionDataOption) Values added in v1.7.0

func (IncludeExecutionDataOption) Values() []IncludeExecutionDataOption

Values returns all known values for IncludeExecutionDataOption. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type InternalException

type InternalException struct {
	Message *string

	ErrorCodeOverride *string

	RetryAfterSeconds *int32
	// contains filtered or unexported fields
}

This exception occurs due to unexpected causes.

func (*InternalException) Error

func (e *InternalException) Error() string

func (*InternalException) ErrorCode

func (e *InternalException) ErrorCode() string

func (*InternalException) ErrorFault

func (e *InternalException) ErrorFault() smithy.ErrorFault

func (*InternalException) ErrorMessage

func (e *InternalException) ErrorMessage() string

type KinesisStreamStartPosition

type KinesisStreamStartPosition string
const (
	KinesisStreamStartPositionTrimHorizon KinesisStreamStartPosition = "TRIM_HORIZON"
	KinesisStreamStartPositionLatest      KinesisStreamStartPosition = "LATEST"
	KinesisStreamStartPositionAtTimestamp KinesisStreamStartPosition = "AT_TIMESTAMP"
)

Enum values for KinesisStreamStartPosition

func (KinesisStreamStartPosition) Values

func (KinesisStreamStartPosition) Values() []KinesisStreamStartPosition

Values returns all known values for KinesisStreamStartPosition. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type LaunchType

type LaunchType string
const (
	LaunchTypeEc2      LaunchType = "EC2"
	LaunchTypeFargate  LaunchType = "FARGATE"
	LaunchTypeExternal LaunchType = "EXTERNAL"
)

Enum values for LaunchType

func (LaunchType) Values

func (LaunchType) Values() []LaunchType

Values returns all known values for LaunchType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type LogLevel added in v1.7.0

type LogLevel string
const (
	LogLevelOff   LogLevel = "OFF"
	LogLevelError LogLevel = "ERROR"
	LogLevelInfo  LogLevel = "INFO"
	LogLevelTrace LogLevel = "TRACE"
)

Enum values for LogLevel

func (LogLevel) Values added in v1.7.0

func (LogLevel) Values() []LogLevel

Values returns all known values for LogLevel. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type MQBrokerAccessCredentials

type MQBrokerAccessCredentials interface {
	// contains filtered or unexported methods
}

The Secrets Manager secret that stores your broker credentials.

The following types satisfy this interface:

MQBrokerAccessCredentialsMemberBasicAuth
Example (OutputUsage)
package main

import (
	"fmt"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	var union types.MQBrokerAccessCredentials
	// type switches can be used to check the union value
	switch v := union.(type) {
	case *types.MQBrokerAccessCredentialsMemberBasicAuth:
		_ = v.Value // Value is string

	case *types.UnknownUnionMember:
		fmt.Println("unknown tag:", v.Tag)

	default:
		fmt.Println("union is nil or unknown type")

	}
}
Output:

type MQBrokerAccessCredentialsMemberBasicAuth

type MQBrokerAccessCredentialsMemberBasicAuth struct {
	Value string
	// contains filtered or unexported fields
}

The ARN of the Secrets Manager secret.
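
The example above shows reading the union; constructing it for input is the mirror image. A minimal sketch, with a hypothetical Secrets Manager ARN:

package main

import (
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	// The BasicAuth member wraps the ARN of the secret holding the broker
	// credentials (hypothetical ARN shown).
	var creds types.MQBrokerAccessCredentials = &types.MQBrokerAccessCredentialsMemberBasicAuth{
		Value: "arn:aws:secretsmanager:us-east-1:111122223333:secret:mq-credentials",
	}
	_ = creds
}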

type MSKAccessCredentials

type MSKAccessCredentials interface {
	// contains filtered or unexported methods
}

The Secrets Manager secret that stores your stream credentials.

The following types satisfy this interface:

MSKAccessCredentialsMemberClientCertificateTlsAuth
MSKAccessCredentialsMemberSaslScram512Auth
Example (OutputUsage)
package main

import (
	"fmt"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	var union types.MSKAccessCredentials
	// type switches can be used to check the union value
	switch v := union.(type) {
	case *types.MSKAccessCredentialsMemberClientCertificateTlsAuth:
		_ = v.Value // Value is string

	case *types.MSKAccessCredentialsMemberSaslScram512Auth:
		_ = v.Value // Value is string

	case *types.UnknownUnionMember:
		fmt.Println("unknown tag:", v.Tag)

	default:
		fmt.Println("union is nil or unknown type")

	}
}
Output:

type MSKAccessCredentialsMemberClientCertificateTlsAuth

type MSKAccessCredentialsMemberClientCertificateTlsAuth struct {
	Value string
	// contains filtered or unexported fields
}

The ARN of the Secrets Manager secret.

type MSKAccessCredentialsMemberSaslScram512Auth

type MSKAccessCredentialsMemberSaslScram512Auth struct {
	Value string
	// contains filtered or unexported fields
}

The ARN of the Secrets Manager secret.

type MSKStartPosition

type MSKStartPosition string
const (
	MSKStartPositionTrimHorizon MSKStartPosition = "TRIM_HORIZON"
	MSKStartPositionLatest      MSKStartPosition = "LATEST"
)

Enum values for MSKStartPosition

func (MSKStartPosition) Values

func (MSKStartPosition) Values() []MSKStartPosition

Values returns all known values for MSKStartPosition. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type MeasureValueType added in v1.12.0

type MeasureValueType string
const (
	MeasureValueTypeDouble    MeasureValueType = "DOUBLE"
	MeasureValueTypeBigint    MeasureValueType = "BIGINT"
	MeasureValueTypeVarchar   MeasureValueType = "VARCHAR"
	MeasureValueTypeBoolean   MeasureValueType = "BOOLEAN"
	MeasureValueTypeTimestamp MeasureValueType = "TIMESTAMP"
)

Enum values for MeasureValueType

func (MeasureValueType) Values added in v1.12.0

func (MeasureValueType) Values() []MeasureValueType

Values returns all known values for MeasureValueType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type MultiMeasureAttributeMapping added in v1.12.0

type MultiMeasureAttributeMapping struct {

	// Dynamic path to the measurement attribute in the source event.
	//
	// This member is required.
	MeasureValue *string

	// Data type of the measurement attribute in the source event.
	//
	// This member is required.
	MeasureValueType MeasureValueType

	// Target measure name to be used.
	//
	// This member is required.
	MultiMeasureAttributeName *string
	// contains filtered or unexported fields
}

A mapping of a source event data field to a measure in a Timestream for LiveAnalytics record.

type MultiMeasureMapping added in v1.12.0

type MultiMeasureMapping struct {

	// Mappings that represent multiple source event fields mapped to measures in the
	// same Timestream for LiveAnalytics record.
	//
	// This member is required.
	MultiMeasureAttributeMappings []MultiMeasureAttributeMapping

	// The name of the multiple measurements per record (multi-measure).
	//
	// This member is required.
	MultiMeasureName *string
	// contains filtered or unexported fields
}

Maps multiple measures from the source event to the same Timestream for LiveAnalytics record.

For more information, see Amazon Timestream for LiveAnalytics concepts.
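
A minimal sketch mapping one source field to a multi-measure record; the JSON path and names are hypothetical.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	mapping := types.MultiMeasureMapping{
		MultiMeasureName: aws.String("turbine_metrics"), // hypothetical measure name
		MultiMeasureAttributeMappings: []types.MultiMeasureAttributeMapping{{
			MeasureValue:              aws.String("$.detail.rpm"), // hypothetical source path
			MeasureValueType:          types.MeasureValueTypeDouble,
			MultiMeasureAttributeName: aws.String("rpm"),
		}},
	}
	_ = mapping
}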

type NetworkConfiguration

type NetworkConfiguration struct {

	// Use this structure to specify the VPC subnets and security groups for the task,
	// and whether a public IP address is to be used. This structure is relevant only
	// for ECS tasks that use the awsvpc network mode.
	AwsvpcConfiguration *AwsVpcConfiguration
	// contains filtered or unexported fields
}

This structure specifies the network configuration for an Amazon ECS task.

type NotFoundException

type NotFoundException struct {
	Message *string

	ErrorCodeOverride *string
	// contains filtered or unexported fields
}

An entity that you specified does not exist.

func (*NotFoundException) Error

func (e *NotFoundException) Error() string

func (*NotFoundException) ErrorCode

func (e *NotFoundException) ErrorCode() string

func (*NotFoundException) ErrorFault

func (e *NotFoundException) ErrorFault() smithy.ErrorFault

func (*NotFoundException) ErrorMessage

func (e *NotFoundException) ErrorMessage() string

type OnPartialBatchItemFailureStreams

type OnPartialBatchItemFailureStreams string
const (
	OnPartialBatchItemFailureStreamsAutomaticBisect OnPartialBatchItemFailureStreams = "AUTOMATIC_BISECT"
)

Enum values for OnPartialBatchItemFailureStreams

func (OnPartialBatchItemFailureStreams) Values

func (OnPartialBatchItemFailureStreams) Values() []OnPartialBatchItemFailureStreams

Values returns all known values for OnPartialBatchItemFailureStreams. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type Pipe

type Pipe struct {

	// The ARN of the pipe.
	Arn *string

	// The time the pipe was created.
	CreationTime *time.Time

	// The state the pipe is in.
	CurrentState PipeState

	// The state the pipe should be in.
	DesiredState RequestedPipeState

	// The ARN of the enrichment resource.
	Enrichment *string

	// When the pipe was last updated, in [ISO-8601 format] (YYYY-MM-DDThh:mm:ss.sTZD).
	//
	// [ISO-8601 format]: https://www.w3.org/TR/NOTE-datetime
	LastModifiedTime *time.Time

	// The name of the pipe.
	Name *string

	// The ARN of the source resource.
	Source *string

	// The reason the pipe is in its current state.
	StateReason *string

	// The ARN of the target resource.
	Target *string
	// contains filtered or unexported fields
}

An object that represents a pipe. Amazon EventBridge Pipes connects event sources to targets and reduces the need for specialized knowledge and integration code.

type PipeEnrichmentHttpParameters

type PipeEnrichmentHttpParameters struct {

	// The headers that need to be sent as part of the request invoking the API Gateway
	// REST API or EventBridge ApiDestination.
	HeaderParameters map[string]string

	// The path parameter values to be used to populate API Gateway REST API or
	// EventBridge ApiDestination path wildcards ("*").
	PathParameterValues []string

	// The query string keys/values that need to be sent as part of the request invoking
	// the API Gateway REST API or EventBridge ApiDestination.
	QueryStringParameters map[string]string
	// contains filtered or unexported fields
}

These are custom parameters to be used when the target is an API Gateway REST API or an EventBridge ApiDestination. In the latter case, these are merged with any InvocationParameters specified on the Connection, with any values from the Connection taking precedence.

type PipeEnrichmentParameters

type PipeEnrichmentParameters struct {

	// Contains the HTTP parameters to use when the target is an API Gateway REST
	// endpoint or EventBridge ApiDestination.
	//
	// If you specify an API Gateway REST API or EventBridge ApiDestination as a
	// target, you can use this parameter to specify headers, path parameters, and
	// query string keys/values as part of your target invoking request. If you're
	// using ApiDestinations, the corresponding Connection can also have these values
	// configured. In case of any conflicting keys, values from the Connection take
	// precedence.
	HttpParameters *PipeEnrichmentHttpParameters

	// Valid JSON text passed to the enrichment. In this case, nothing from the event
	// itself is passed to the enrichment. For more information, see [The JavaScript Object Notation (JSON) Data Interchange Format].
	//
	// To remove an input template, specify an empty string.
	//
	// [The JavaScript Object Notation (JSON) Data Interchange Format]: http://www.rfc-editor.org/rfc/rfc7159.txt
	InputTemplate *string
	// contains filtered or unexported fields
}

The parameters required to set up enrichment on your pipe.
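
A minimal sketch of enrichment parameters; the header value and static input template are hypothetical.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	enrichment := types.PipeEnrichmentParameters{
		HttpParameters: &types.PipeEnrichmentHttpParameters{
			HeaderParameters:    map[string]string{"X-Api-Key": "example-key"}, // hypothetical header
			PathParameterValues: []string{"orders"},                            // fills a "*" path wildcard
		},
		// Static JSON sent to the enrichment; nothing from the event is passed.
		InputTemplate: aws.String(`{"mode":"enrich-only"}`),
	}
	_ = enrichment
}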

type PipeLogConfiguration added in v1.7.0

type PipeLogConfiguration struct {

	// The Amazon CloudWatch Logs logging configuration settings for the pipe.
	CloudwatchLogsLogDestination *CloudwatchLogsLogDestination

	// The Amazon Data Firehose logging configuration settings for the pipe.
	FirehoseLogDestination *FirehoseLogDestination

	// Whether the execution data (specifically, the payload , awsRequest , and
	// awsResponse fields) is included in the log messages for this pipe.
	//
	// This applies to all log destinations for the pipe.
	//
	// For more information, see [Including execution data in logs] in the Amazon EventBridge User Guide.
	//
	// [Including execution data in logs]: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-logs.html#eb-pipes-logs-execution-data
	IncludeExecutionData []IncludeExecutionDataOption

	// The level of logging detail to include. This applies to all log destinations
	// for the pipe.
	Level LogLevel

	// The Amazon S3 logging configuration settings for the pipe.
	S3LogDestination *S3LogDestination
	// contains filtered or unexported fields
}

The logging configuration settings for the pipe.

type PipeLogConfigurationParameters added in v1.7.0

type PipeLogConfigurationParameters struct {

	// The level of logging detail to include. This applies to all log destinations
	// for the pipe.
	//
	// For more information, see [Specifying EventBridge Pipes log level] in the Amazon EventBridge User Guide.
	//
	// [Specifying EventBridge Pipes log level]: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-logs.html#eb-pipes-logs-level
	//
	// This member is required.
	Level LogLevel

	// The Amazon CloudWatch Logs logging configuration settings for the pipe.
	CloudwatchLogsLogDestination *CloudwatchLogsLogDestinationParameters

	// The Amazon Data Firehose logging configuration settings for the pipe.
	FirehoseLogDestination *FirehoseLogDestinationParameters

	// Specify ALL to include the execution data (specifically, the payload ,
	// awsRequest , and awsResponse fields) in the log messages for this pipe.
	//
	// This applies to all log destinations for the pipe.
	//
	// For more information, see [Including execution data in logs] in the Amazon EventBridge User Guide.
	//
	// By default, execution data is not included.
	//
	// [Including execution data in logs]: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-logs.html#eb-pipes-logs-execution-data
	IncludeExecutionData []IncludeExecutionDataOption

	// The Amazon S3 logging configuration settings for the pipe.
	S3LogDestination *S3LogDestinationParameters
	// contains filtered or unexported fields
}

Specifies the logging configuration settings for the pipe.

When you call UpdatePipe , EventBridge updates the fields in the PipeLogConfigurationParameters object atomically as one and overrides existing values. This is by design. If you don't specify an optional field in any of the Amazon Web Services service parameters objects ( CloudwatchLogsLogDestinationParameters , FirehoseLogDestinationParameters , or S3LogDestinationParameters ), EventBridge sets that field to its system-default value during the update.

For example, suppose when you created the pipe you specified a Firehose stream log destination. You then update the pipe to add an Amazon S3 log destination. In addition to specifying the S3LogDestinationParameters for the new log destination, you must also specify the fields in the FirehoseLogDestinationParameters object in order to retain the Firehose stream log destination.

For more information on generating pipe log records, see Log EventBridge Pipes in the Amazon EventBridge User Guide.
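
To illustrate the atomic-update behavior described above, here is a sketch of an UpdatePipe log configuration that adds an S3 destination while restating the existing Firehose destination so it is retained. The ARNs are hypothetical, and the S3LogDestinationParameters fields (documented elsewhere in this package) are assumptions here.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	logCfg := types.PipeLogConfigurationParameters{
		Level: types.LogLevelError,
		// Restate the existing Firehose destination, or its settings are reset
		// to system defaults during the update.
		FirehoseLogDestination: &types.FirehoseLogDestinationParameters{
			DeliveryStreamArn: aws.String("arn:aws:firehose:us-east-1:111122223333:deliverystream/pipe-logs"),
		},
		// The newly added S3 destination (field names assumed).
		S3LogDestination: &types.S3LogDestinationParameters{
			BucketName:  aws.String("example-pipe-logs"),
			BucketOwner: aws.String("111122223333"),
		},
	}
	_ = logCfg
}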

type PipeSourceActiveMQBrokerParameters

type PipeSourceActiveMQBrokerParameters struct {

	// The credentials needed to access the resource.
	//
	// This member is required.
	Credentials MQBrokerAccessCredentials

	// The name of the destination queue to consume.
	//
	// This member is required.
	QueueName *string

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32
	// contains filtered or unexported fields
}

The parameters for using an ActiveMQ broker as a source.

type PipeSourceDynamoDBStreamParameters

type PipeSourceDynamoDBStreamParameters struct {

	// The position in a stream from which to start reading.
	//
	// This member is required.
	StartingPosition DynamoDBStreamStartPosition

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// Define the target queue to send dead-letter queue events to.
	DeadLetterConfig *DeadLetterConfig

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32

	// Discard records older than the specified age. The default value is -1, which
	// sets the maximum age to infinite. When the value is set to infinite, EventBridge
	// never discards old records.
	MaximumRecordAgeInSeconds *int32

	// Discard records after the specified number of retries. The default value is -1,
	// which sets the maximum number of retries to infinite. When MaximumRetryAttempts
	// is infinite, EventBridge retries failed records until the record expires in the
	// event source.
	MaximumRetryAttempts *int32

	// Define how to handle item process failures. AUTOMATIC_BISECT halves each
	// batch and retries each half until all the records are processed or there is
	// one failed message left in the batch.
	OnPartialBatchItemFailure OnPartialBatchItemFailureStreams

	// The number of batches to process concurrently from each shard. The default
	// value is 1.
	ParallelizationFactor *int32
	// contains filtered or unexported fields
}

The parameters for using a DynamoDB stream as a source.

type PipeSourceKinesisStreamParameters

type PipeSourceKinesisStreamParameters struct {

	// The position in a stream from which to start reading.
	//
	// This member is required.
	StartingPosition KinesisStreamStartPosition

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// Define the target queue to send dead-letter queue events to.
	DeadLetterConfig *DeadLetterConfig

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32

	// Discard records older than the specified age. The default value is -1, which
	// sets the maximum age to infinite. When the value is set to infinite, EventBridge
	// never discards old records.
	MaximumRecordAgeInSeconds *int32

	// Discard records after the specified number of retries. The default value is -1,
	// which sets the maximum number of retries to infinite. When MaximumRetryAttempts
	// is infinite, EventBridge retries failed records until the record expires in the
	// event source.
	MaximumRetryAttempts *int32

	// Define how to handle item process failures. AUTOMATIC_BISECT halves each
	// batch and retries each half until all the records are processed or there is
	// one failed message left in the batch.
	OnPartialBatchItemFailure OnPartialBatchItemFailureStreams

	// The number of batches to process concurrently from each shard. The default
	// value is 1.
	ParallelizationFactor *int32

	// With StartingPosition set to AT_TIMESTAMP , the time from which to start
	// reading, in Unix time seconds.
	StartingPositionTimestamp *time.Time
	// contains filtered or unexported fields
}

The parameters for using a Kinesis stream as a source.
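
A minimal sketch of a Kinesis source that starts reading at a fixed timestamp, the one starting position that also uses StartingPositionTimestamp; the dead-letter queue ARN is hypothetical.

package main

import (
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	src := types.PipeSourceKinesisStreamParameters{
		StartingPosition:          types.KinesisStreamStartPositionAtTimestamp,
		StartingPositionTimestamp: aws.Time(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)),
		BatchSize:                 aws.Int32(100),
		MaximumRetryAttempts:      aws.Int32(3),
		DeadLetterConfig: &types.DeadLetterConfig{
			Arn: aws.String("arn:aws:sqs:us-east-1:111122223333:pipe-dlq"), // hypothetical queue
		},
	}
	_ = src
}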

type PipeSourceManagedStreamingKafkaParameters

type PipeSourceManagedStreamingKafkaParameters struct {

	// The name of the topic that the pipe will read from.
	//
	// This member is required.
	TopicName *string

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// The ID of the consumer group to use.
	ConsumerGroupID *string

	// The credentials needed to access the resource.
	Credentials MSKAccessCredentials

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32

	// The position in a stream from which to start reading.
	StartingPosition MSKStartPosition
	// contains filtered or unexported fields
}

The parameters for using an MSK stream as a source.

type PipeSourceParameters

type PipeSourceParameters struct {

	// The parameters for using an ActiveMQ broker as a source.
	ActiveMQBrokerParameters *PipeSourceActiveMQBrokerParameters

	// The parameters for using a DynamoDB stream as a source.
	DynamoDBStreamParameters *PipeSourceDynamoDBStreamParameters

	// The collection of event patterns used to filter events.
	//
	// To remove a filter, specify a FilterCriteria object with an empty array of
	// Filter objects.
	//
	// For more information, see [Events and Event Patterns] in the Amazon EventBridge User Guide.
	//
	// [Events and Event Patterns]: https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html
	FilterCriteria *FilterCriteria

	// The parameters for using a Kinesis stream as a source.
	KinesisStreamParameters *PipeSourceKinesisStreamParameters

	// The parameters for using an MSK stream as a source.
	ManagedStreamingKafkaParameters *PipeSourceManagedStreamingKafkaParameters

	// The parameters for using a RabbitMQ broker as a source.
	RabbitMQBrokerParameters *PipeSourceRabbitMQBrokerParameters

	// The parameters for using a self-managed Apache Kafka stream as a source.
	//
	// A self-managed cluster refers to any Apache Kafka cluster not hosted by Amazon
	// Web Services. This includes both clusters you manage yourself, as well as those
	// hosted by a third-party provider, such as [Confluent Cloud], [CloudKarafka], or [Redpanda]. For more information, see [Apache Kafka streams as a source]
	// in the Amazon EventBridge User Guide.
	//
	// [CloudKarafka]: https://www.cloudkarafka.com/
	// [Apache Kafka streams as a source]: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-kafka.html
	// [Confluent Cloud]: https://www.confluent.io/
	// [Redpanda]: https://redpanda.com/
	SelfManagedKafkaParameters *PipeSourceSelfManagedKafkaParameters

	// The parameters for using an Amazon SQS queue as a source.
	SqsQueueParameters *PipeSourceSqsQueueParameters
	// contains filtered or unexported fields
}

The parameters required to set up a source for your pipe.

type PipeSourceRabbitMQBrokerParameters

type PipeSourceRabbitMQBrokerParameters struct {

	// The credentials needed to access the resource.
	//
	// This member is required.
	Credentials MQBrokerAccessCredentials

	// The name of the destination queue to consume.
	//
	// This member is required.
	QueueName *string

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32

	// The name of the virtual host associated with the source broker.
	VirtualHost *string
	// contains filtered or unexported fields
}

The parameters for using a RabbitMQ broker as a source.

type PipeSourceSelfManagedKafkaParameters

type PipeSourceSelfManagedKafkaParameters struct {

	// The name of the topic that the pipe will read from.
	//
	// This member is required.
	TopicName *string

	// An array of server URLs.
	AdditionalBootstrapServers []string

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// The ID of the consumer group to use.
	ConsumerGroupID *string

	// The credentials needed to access the resource.
	Credentials SelfManagedKafkaAccessConfigurationCredentials

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32

	// The ARN of the Secrets Manager secret used for certification.
	ServerRootCaCertificate *string

	// The position in a stream from which to start reading.
	StartingPosition SelfManagedKafkaStartPosition

	// This structure specifies the VPC subnets and security groups for the stream,
	// and whether a public IP address is to be used.
	Vpc *SelfManagedKafkaAccessConfigurationVpc
	// contains filtered or unexported fields
}

The parameters for using a self-managed Apache Kafka stream as a source.

A self-managed cluster refers to any Apache Kafka cluster not hosted by Amazon Web Services. This includes both clusters you manage yourself, as well as those hosted by a third-party provider, such as Confluent Cloud, CloudKarafka, or Redpanda. For more information, see Apache Kafka streams as a source in the Amazon EventBridge User Guide.

type PipeSourceSqsQueueParameters

type PipeSourceSqsQueueParameters struct {

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32
	// contains filtered or unexported fields
}

The parameters for using an Amazon SQS queue as a source.

type PipeState

type PipeState string
const (
	PipeStateRunning              PipeState = "RUNNING"
	PipeStateStopped              PipeState = "STOPPED"
	PipeStateCreating             PipeState = "CREATING"
	PipeStateUpdating             PipeState = "UPDATING"
	PipeStateDeleting             PipeState = "DELETING"
	PipeStateStarting             PipeState = "STARTING"
	PipeStateStopping             PipeState = "STOPPING"
	PipeStateCreateFailed         PipeState = "CREATE_FAILED"
	PipeStateUpdateFailed         PipeState = "UPDATE_FAILED"
	PipeStateStartFailed          PipeState = "START_FAILED"
	PipeStateStopFailed           PipeState = "STOP_FAILED"
	PipeStateDeleteFailed         PipeState = "DELETE_FAILED"
	PipeStateCreateRollbackFailed PipeState = "CREATE_ROLLBACK_FAILED"
	PipeStateDeleteRollbackFailed PipeState = "DELETE_ROLLBACK_FAILED"
	PipeStateUpdateRollbackFailed PipeState = "UPDATE_ROLLBACK_FAILED"
)

Enum values for PipeState

func (PipeState) Values

func (PipeState) Values() []PipeState

Values returns all known values for PipeState. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type PipeTargetBatchJobParameters

type PipeTargetBatchJobParameters struct {

	// The job definition used by this job. This value can be one of name,
	// name:revision, or the Amazon Resource Name (ARN) for the job definition. If
	// name is specified without a revision, then the latest active revision is used.
	//
	// This member is required.
	JobDefinition *string

	// The name of the job. It can be up to 128 characters long. The first
	// character must be alphanumeric, and the name can contain uppercase and
	// lowercase letters, numbers, hyphens (-), and underscores (_).
	//
	// This member is required.
	JobName *string

	// The array properties for the submitted job, such as the size of the array. The
	// array size can be between 2 and 10,000. If you specify array properties for a
	// job, it becomes an array job. This parameter is used only if the target is a
	// Batch job.
	ArrayProperties *BatchArrayProperties

	// The overrides that are sent to a container.
	ContainerOverrides *BatchContainerOverrides

	// A list of dependencies for the job. A job can depend upon a maximum of 20 jobs.
	// You can specify a SEQUENTIAL type dependency without specifying a job ID for
	// array jobs so that each child array job completes sequentially, starting at
	// index 0. You can also specify an N_TO_N type dependency with a job ID for array
	// jobs. In that case, each index child of this job must wait for the corresponding
	// index child of each dependency to complete before it can begin.
	DependsOn []BatchJobDependency

	// Additional parameters passed to the job that replace parameter substitution
	// placeholders that are set in the job definition. Parameters are specified as a
	// key and value pair mapping. Parameters included here override any corresponding
	// parameter defaults from the job definition.
	Parameters map[string]string

	// The retry strategy to use for failed jobs. When a retry strategy is specified
	// here, it overrides the retry strategy defined in the job definition.
	RetryStrategy *BatchRetryStrategy
	// contains filtered or unexported fields
}

The parameters for using a Batch job as a target.
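
A minimal sketch of a Batch job target submitting a ten-element array job with two retry attempts; the job definition and name are hypothetical.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	target := types.PipeTargetBatchJobParameters{
		JobDefinition:   aws.String("example-job-def:3"), // hypothetical name:revision
		JobName:         aws.String("pipe-submitted-job"),
		ArrayProperties: &types.BatchArrayProperties{Size: aws.Int32(10)},
		RetryStrategy:   &types.BatchRetryStrategy{Attempts: aws.Int32(2)},
	}
	_ = target
}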

type PipeTargetCloudWatchLogsParameters

type PipeTargetCloudWatchLogsParameters struct {

	// The name of the log stream.
	LogStreamName *string

	// The time the event occurred, expressed as the number of milliseconds after Jan
	// 1, 1970 00:00:00 UTC.
	Timestamp *string
	// contains filtered or unexported fields
}

The parameters for using a CloudWatch Logs log stream as a target.

type PipeTargetEcsTaskParameters

type PipeTargetEcsTaskParameters struct {

	// The ARN of the task definition to use if the event target is an Amazon ECS
	// task.
	//
	// This member is required.
	TaskDefinitionArn *string

	// The capacity provider strategy to use for the task.
	//
	// If a capacityProviderStrategy is specified, the launchType parameter must be
	// omitted. If no capacityProviderStrategy or launchType is specified, the
	// defaultCapacityProviderStrategy for the cluster is used.
	CapacityProviderStrategy []CapacityProviderStrategyItem

	// Specifies whether to enable Amazon ECS managed tags for the task. For more
	// information, see [Tagging Your Amazon ECS Resources] in the Amazon Elastic Container Service Developer Guide.
	//
	// [Tagging Your Amazon ECS Resources]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-using-tags.html
	EnableECSManagedTags bool

	// Whether or not to enable the execute command functionality for the containers
	// in this task. If true, this enables execute command functionality on all
	// containers in the task.
	EnableExecuteCommand bool

	// Specifies an Amazon ECS task group for the task. The maximum length is 255
	// characters.
	Group *string

	// Specifies the launch type on which your task is running. The launch type that
	// you specify here must match one of the launch type (compatibilities) of the
	// target task. The FARGATE value is supported only in the Regions where Fargate
	// with Amazon ECS is supported. For more information, see [Fargate on Amazon ECS] in the Amazon Elastic
	// Container Service Developer Guide.
	//
	// [Fargate on Amazon ECS]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS-Fargate.html
	LaunchType LaunchType

	// Use this structure if the Amazon ECS task uses the awsvpc network mode. This
	// structure specifies the VPC subnets and security groups associated with the
	// task, and whether a public IP address is to be used. This structure is required
	// if LaunchType is FARGATE because the awsvpc mode is required for Fargate tasks.
	//
	// If you specify NetworkConfiguration when the target ECS task does not use the
	// awsvpc network mode, the task fails.
	NetworkConfiguration *NetworkConfiguration

	// The overrides that are associated with a task.
	Overrides *EcsTaskOverride

	// An array of placement constraint objects to use for the task. You can specify
	// up to 10 constraints per task (including constraints in the task definition and
	// those specified at runtime).
	PlacementConstraints []PlacementConstraint

	// The placement strategy objects to use for the task. You can specify a maximum
	// of five strategy rules per task.
	PlacementStrategy []PlacementStrategy

	// Specifies the platform version for the task. Specify only the numeric portion
	// of the platform version, such as 1.1.0 .
	//
	// This structure is used only if LaunchType is FARGATE . For more information
	// about valid platform versions, see [Fargate Platform Versions] in the Amazon Elastic Container Service
	// Developer Guide.
	//
	// [Fargate Platform Versions]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html
	PlatformVersion *string

	// Specifies whether to propagate the tags from the task definition to the task.
	// If no value is specified, the tags are not propagated. Tags can only be
	// propagated to the task during task creation. To add tags to a task after task
	// creation, use the TagResource API action.
	PropagateTags PropagateTags

	// The reference ID to use for the task.
	ReferenceId *string

	// The metadata that you apply to the task to help you categorize and organize
	// it. Each tag consists of a key and an optional value, both of which you
	// define. To learn more, see [RunTask] in the Amazon ECS API Reference.
	//
	// [RunTask]: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html#ECS-RunTask-request-tags
	Tags []Tag

	// The number of tasks to create based on TaskDefinition . The default is 1.
	TaskCount *int32
	// contains filtered or unexported fields
}

The parameters for using an Amazon ECS task as a target.
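
A minimal, hypothetical Fargate sketch: because FARGATE requires the awsvpc network mode, NetworkConfiguration is set. All ARNs and IDs are placeholders, and the AwsvpcConfiguration field name is assumed for this package's NetworkConfiguration type.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	params := &types.PipeTargetEcsTaskParameters{
		// Required: the task definition to run (hypothetical ARN).
		TaskDefinitionArn: aws.String("arn:aws:ecs:us-east-1:111122223333:task-definition/my-task:1"),
		LaunchType:        types.LaunchTypeFargate,
		TaskCount:         aws.Int32(1),
		// awsvpc networking is mandatory for Fargate tasks.
		NetworkConfiguration: &types.NetworkConfiguration{
			AwsvpcConfiguration: &types.AwsVpcConfiguration{
				Subnets:        []string{"subnet-0abc1234"},
				SecurityGroups: []string{"sg-0abc1234"},
				AssignPublicIp: types.AssignPublicIpEnabled,
			},
		},
	}
	_ = params
}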

type PipeTargetEventBridgeEventBusParameters

type PipeTargetEventBridgeEventBusParameters struct {

	// A free-form string, with a maximum of 128 characters, used to decide what
	// fields to expect in the event detail.
	DetailType *string

	// The URL subdomain of the endpoint. For example, if the URL for Endpoint is
	// https://abcde.veo.endpoints.event.amazonaws.com, then the EndpointId is
	// abcde.veo .
	EndpointId *string

	// Amazon Web Services resources, identified by Amazon Resource Name (ARN), which
	// the event primarily concerns. Any number, including zero, may be present.
	Resources []string

	// The source of the event.
	Source *string

	// The time stamp of the event, per [RFC3339]. If no time stamp is provided, the time stamp
	// of the [PutEvents] call is used.
	//
	// [PutEvents]: https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutEvents.html
	// [RFC3339]: https://www.rfc-editor.org/rfc/rfc3339.txt
	Time *string
	// contains filtered or unexported fields
}

The parameters for using an EventBridge event bus as a target.

type PipeTargetHttpParameters

type PipeTargetHttpParameters struct {

	// The headers that need to be sent as part of the request invoking the API
	// Gateway REST API or EventBridge ApiDestination.
	HeaderParameters map[string]string

	// The path parameter values to be used to populate API Gateway REST API or
	// EventBridge ApiDestination path wildcards ("*").
	PathParameterValues []string

	// The query string keys/values that need to be sent as part of the request
	// invoking the API Gateway REST API or EventBridge ApiDestination.
	QueryStringParameters map[string]string
	// contains filtered or unexported fields
}

These are custom parameters to be used when the target is an API Gateway REST API or EventBridge ApiDestination.
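
A small, hypothetical sketch; header, path, and query values are placeholders. Each PathParameterValues entry fills one "*" wildcard in the configured path, in order.

package main

import "github.com/aws/aws-sdk-go-v2/service/pipes/types"

func main() {
	params := &types.PipeTargetHttpParameters{
		HeaderParameters:      map[string]string{"X-Api-Key": "example-key"},
		PathParameterValues:   []string{"orders"}, // fills the "*" wildcard
		QueryStringParameters: map[string]string{"region": "us-east-1"},
	}
	_ = params
}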

type PipeTargetInvocationType

type PipeTargetInvocationType string
const (
	PipeTargetInvocationTypeRequestResponse PipeTargetInvocationType = "REQUEST_RESPONSE"
	PipeTargetInvocationTypeFireAndForget   PipeTargetInvocationType = "FIRE_AND_FORGET"
)

Enum values for PipeTargetInvocationType

func (PipeTargetInvocationType) Values

func (PipeTargetInvocationType) Values() []PipeTargetInvocationType

Values returns all known values for PipeTargetInvocationType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type PipeTargetKinesisStreamParameters

type PipeTargetKinesisStreamParameters struct {

	// Determines which shard in the stream the data record is assigned to. Partition
	// keys are Unicode strings with a maximum length limit of 256 characters for each
	// key. Amazon Kinesis Data Streams uses the partition key as input to a hash
	// function that maps the partition key and associated data to a specific shard.
	// Specifically, an MD5 hash function is used to map partition keys to 128-bit
	// integer values and to map associated data records to shards. As a result of this
	// hashing mechanism, all data records with the same partition key map to the same
	// shard within the stream.
	//
	// This member is required.
	PartitionKey *string
	// contains filtered or unexported fields
}

The parameters for using a Kinesis stream as a target.
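
The MD5 mapping described above can be reproduced in a few lines; this standalone sketch (with a hypothetical key) shows how a partition key becomes the 128-bit integer that Kinesis compares against each shard's hash key range.

package main

import (
	"crypto/md5"
	"fmt"
	"math/big"
)

func main() {
	// MD5 yields 16 bytes, read here as an unsigned 128-bit integer.
	sum := md5.Sum([]byte("customer-42"))
	hashKey := new(big.Int).SetBytes(sum[:])

	// Kinesis routes the record to the shard whose hash key range
	// contains this value, so equal partition keys share a shard.
	fmt.Println(hashKey)
}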

type PipeTargetLambdaFunctionParameters

type PipeTargetLambdaFunctionParameters struct {

	// Specify whether to invoke the function synchronously or asynchronously.
	//
	//   - REQUEST_RESPONSE (default) - Invoke synchronously. This corresponds to the
	//   RequestResponse option in the InvocationType parameter for the Lambda [Invoke] API.
	//
	//   - FIRE_AND_FORGET - Invoke asynchronously. This corresponds to the Event
	//   option in the InvocationType parameter for the Lambda [Invoke] API.
	//
	// For more information, see [Invocation types] in the Amazon EventBridge User Guide.
	//
	// [Invocation types]: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html#pipes-invocation
	// [Invoke]: https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax
	InvocationType PipeTargetInvocationType
	// contains filtered or unexported fields
}

The parameters for using a Lambda function as a target.
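
A one-field sketch: FIRE_AND_FORGET corresponds to the Lambda Event invocation type, so the pipe does not wait for the function's response.

package main

import "github.com/aws/aws-sdk-go-v2/service/pipes/types"

func main() {
	params := &types.PipeTargetLambdaFunctionParameters{
		// Omitting InvocationType defaults to REQUEST_RESPONSE.
		InvocationType: types.PipeTargetInvocationTypeFireAndForget,
	}
	_ = params
}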

type PipeTargetParameters

type PipeTargetParameters struct {

	// The parameters for using a Batch job as a target.
	BatchJobParameters *PipeTargetBatchJobParameters

	// The parameters for using a CloudWatch Logs log stream as a target.
	CloudWatchLogsParameters *PipeTargetCloudWatchLogsParameters

	// The parameters for using an Amazon ECS task as a target.
	EcsTaskParameters *PipeTargetEcsTaskParameters

	// The parameters for using an EventBridge event bus as a target.
	EventBridgeEventBusParameters *PipeTargetEventBridgeEventBusParameters

	// These are custom parameters to be used when the target is an API Gateway
	// REST API or EventBridge ApiDestination.
	HttpParameters *PipeTargetHttpParameters

	// Valid JSON text passed to the target. In this case, nothing from the event
	// itself is passed to the target. For more information, see [The JavaScript Object Notation (JSON) Data Interchange Format].
	//
	// To remove an input template, specify an empty string.
	//
	// [The JavaScript Object Notation (JSON) Data Interchange Format]: http://www.rfc-editor.org/rfc/rfc7159.txt
	InputTemplate *string

	// The parameters for using a Kinesis stream as a target.
	KinesisStreamParameters *PipeTargetKinesisStreamParameters

	// The parameters for using a Lambda function as a target.
	LambdaFunctionParameters *PipeTargetLambdaFunctionParameters

	// These are custom parameters to be used when the target is an Amazon Redshift
	// cluster to invoke the Amazon Redshift Data API BatchExecuteStatement.
	RedshiftDataParameters *PipeTargetRedshiftDataParameters

	// The parameters for using a SageMaker pipeline as a target.
	SageMakerPipelineParameters *PipeTargetSageMakerPipelineParameters

	// The parameters for using an Amazon SQS queue as a target.
	SqsQueueParameters *PipeTargetSqsQueueParameters

	// The parameters for using a Step Functions state machine as a target.
	StepFunctionStateMachineParameters *PipeTargetStateMachineParameters

	// The parameters for using a Timestream for LiveAnalytics table as a target.
	TimestreamParameters *PipeTargetTimestreamParameters
	// contains filtered or unexported fields
}

The parameters required to set up a target for your pipe.

For more information about pipe target parameters, including how to use dynamic path parameters, see Target parameters in the Amazon EventBridge User Guide.
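
A hypothetical sketch combining an input template with one target-specific member; the <$.detail.*> placeholders are dynamic path parameters resolved from each event, and the paths are placeholders.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	// Normally exactly one target-specific member is set, matching the
	// pipe's Target ARN; here the target is a Kinesis stream.
	params := &types.PipeTargetParameters{
		InputTemplate: aws.String(`{"orderId": <$.detail.orderId>}`),
		KinesisStreamParameters: &types.PipeTargetKinesisStreamParameters{
			PartitionKey: aws.String("$.detail.customerId"),
		},
	}
	_ = params
}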

type PipeTargetRedshiftDataParameters

type PipeTargetRedshiftDataParameters struct {

	// The name of the database. Required when authenticating using temporary
	// credentials.
	//
	// This member is required.
	Database *string

	// The SQL statement text to run.
	//
	// This member is required.
	Sqls []string

	// The database user name. Required when authenticating using temporary
	// credentials.
	DbUser *string

	// The name or ARN of the secret that enables access to the database. Required
	// when authenticating using Secrets Manager.
	SecretManagerArn *string

	// The name of the SQL statement. You can name the SQL statement when you create
	// it to identify the query.
	StatementName *string

	// Indicates whether to send an event back to EventBridge after the SQL statement
	// runs.
	WithEvent bool
	// contains filtered or unexported fields
}

These are custom parameters to be used when the target is an Amazon Redshift cluster to invoke the Amazon Redshift Data API BatchExecuteStatement.
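
A minimal, hypothetical sketch using Secrets Manager authentication; the ARN and SQL are placeholders. With temporary credentials, DbUser would be set instead of SecretManagerArn.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	params := &types.PipeTargetRedshiftDataParameters{
		Database:         aws.String("dev"),
		Sqls:             []string{"INSERT INTO events (id) VALUES (1)"},
		SecretManagerArn: aws.String("arn:aws:secretsmanager:us-east-1:111122223333:secret:redshift-abc123"),
		// Emit an event back to EventBridge when the statement finishes.
		WithEvent: true,
	}
	_ = params
}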

type PipeTargetSageMakerPipelineParameters

type PipeTargetSageMakerPipelineParameters struct {

	// List of Parameter names and values for SageMaker Model Building Pipeline
	// execution.
	PipelineParameterList []SageMakerPipelineParameter
	// contains filtered or unexported fields
}

The parameters for using a SageMaker pipeline as a target.

type PipeTargetSqsQueueParameters

type PipeTargetSqsQueueParameters struct {

	// This parameter applies only to FIFO (first-in-first-out) queues.
	//
	// The token used for deduplication of sent messages.
	MessageDeduplicationId *string

	// The FIFO message group ID to use as the target.
	MessageGroupId *string
	// contains filtered or unexported fields
}

The parameters for using an Amazon SQS queue as a target.
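
A short sketch for a FIFO queue; both values are hypothetical. For a standard queue, neither field is set.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	params := &types.PipeTargetSqsQueueParameters{
		MessageGroupId:         aws.String("orders"),
		MessageDeduplicationId: aws.String("order-42-v1"),
	}
	_ = params
}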

type PipeTargetStateMachineParameters

type PipeTargetStateMachineParameters struct {

	// Specify whether to invoke the Step Functions state machine synchronously or
	// asynchronously.
	//
	//   - REQUEST_RESPONSE (default) - Invoke synchronously. For more information, see [StartSyncExecution]
	//   in the Step Functions API Reference.
	//
	// REQUEST_RESPONSE is not supported for STANDARD state machine workflows.
	//
	//   - FIRE_AND_FORGET - Invoke asynchronously. For more information, see [StartExecution] in the
	//   Step Functions API Reference.
	//
	// For more information, see [Invocation types] in the Amazon EventBridge User Guide.
	//
	// [StartExecution]: https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html
	// [StartSyncExecution]: https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartSyncExecution.html
	// [Invocation types]: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html#pipes-invocation
	InvocationType PipeTargetInvocationType
	// contains filtered or unexported fields
}

The parameters for using a Step Functions state machine as a target.
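
A small sketch with a hypothetical helper that encodes the rule above: only EXPRESS workflows may be invoked synchronously, so STANDARD state machines fall back to FIRE_AND_FORGET.

package main

import "github.com/aws/aws-sdk-go-v2/service/pipes/types"

// invocationTypeFor is a hypothetical helper; REQUEST_RESPONSE is not
// supported for STANDARD workflows, per the note above.
func invocationTypeFor(isExpress bool) types.PipeTargetInvocationType {
	if isExpress {
		return types.PipeTargetInvocationTypeRequestResponse
	}
	return types.PipeTargetInvocationTypeFireAndForget
}

func main() {
	params := &types.PipeTargetStateMachineParameters{
		InvocationType: invocationTypeFor(false),
	}
	_ = params
}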

type PipeTargetTimestreamParameters added in v1.12.0

type PipeTargetTimestreamParameters struct {

	// Map source data to dimensions in the target Timestream for LiveAnalytics table.
	//
	// For more information, see [Amazon Timestream for LiveAnalytics concepts]
	//
	// [Amazon Timestream for LiveAnalytics concepts]: https://docs.aws.amazon.com/timestream/latest/developerguide/concepts.html
	//
	// This member is required.
	DimensionMappings []DimensionMapping

	// Dynamic path to the source data field that represents the time value for your
	// data.
	//
	// This member is required.
	TimeValue *string

	// 64 bit version value or source data field that represents the version value
	// for your data.
	//
	// Write requests for duplicate data with a higher version number will update
	// the existing measure values of the record and version. In cases where the
	// measure value is the same, the version will still be updated. The default
	// value is 1.
	//
	// Timestream for LiveAnalytics does not support updating partial measure values
	// in a record.
	//
	// Version must be 1 or greater, or you will receive a ValidationException error.
	//
	// This member is required.
	VersionValue *string

	// The granularity of the time units used. Default is MILLISECONDS .
	//
	// Required if TimeFieldType is specified as EPOCH .
	EpochTimeUnit EpochTimeUnit

	// Maps multiple measures from the source event to the same record in the
	// specified Timestream for LiveAnalytics table.
	MultiMeasureMappings []MultiMeasureMapping

	// Mappings of single source data fields to individual records in the specified
	// Timestream for LiveAnalytics table.
	SingleMeasureMappings []SingleMeasureMapping

	// The type of time value used.
	//
	// The default is EPOCH .
	TimeFieldType TimeFieldType

	// How to format the timestamps. For example, yyyy-MM-dd'T'HH:mm:ss'Z' .
	//
	// Required if TimeFieldType is specified as TIMESTAMP_FORMAT .
	TimestampFormat *string
	// contains filtered or unexported fields
}

The parameters for using a Timestream for LiveAnalytics table as a target.
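
A hypothetical sketch mapping one dimension and one measure; all paths and names are placeholders, and the DimensionMapping field names and the DimensionValueType, EpochTimeUnit, and MeasureValueType constant names are assumed to follow this package's enum naming pattern.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	params := &types.PipeTargetTimestreamParameters{
		// Required: where the record's time comes from, here an epoch
		// value in milliseconds.
		TimeValue:     aws.String("$.detail.timestamp"),
		TimeFieldType: types.TimeFieldTypeEpoch,
		EpochTimeUnit: types.EpochTimeUnitMilliseconds,
		// Required: must be "1" or greater.
		VersionValue: aws.String("1"),
		DimensionMappings: []types.DimensionMapping{{
			DimensionName:      aws.String("deviceId"),
			DimensionValue:     aws.String("$.detail.deviceId"),
			DimensionValueType: types.DimensionValueTypeVarchar,
		}},
		SingleMeasureMappings: []types.SingleMeasureMapping{{
			MeasureName:      aws.String("temperature"),
			MeasureValue:     aws.String("$.detail.temperature"),
			MeasureValueType: types.MeasureValueTypeDouble,
		}},
	}
	_ = params
}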

type PlacementConstraint

type PlacementConstraint struct {

	// A cluster query language expression to apply to the constraint. You cannot
	// specify an expression if the constraint type is distinctInstance . To learn
	// more, see [Cluster Query Language] in the Amazon Elastic Container Service Developer Guide.
	//
	// [Cluster Query Language]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-query-language.html
	Expression *string

	// The type of constraint. Use distinctInstance to ensure that each task in a
	// particular group is running on a different container instance. Use memberOf to
	// restrict the selection to a group of valid candidates.
	Type PlacementConstraintType
	// contains filtered or unexported fields
}

An object representing a constraint on task placement. To learn more, see Task Placement Constraints in the Amazon Elastic Container Service Developer Guide.

type PlacementConstraintType

type PlacementConstraintType string
const (
	PlacementConstraintTypeDistinctInstance PlacementConstraintType = "distinctInstance"
	PlacementConstraintTypeMemberOf         PlacementConstraintType = "memberOf"
)

Enum values for PlacementConstraintType

func (PlacementConstraintType) Values

func (PlacementConstraintType) Values() []PlacementConstraintType

Values returns all known values for PlacementConstraintType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type PlacementStrategy

type PlacementStrategy struct {

	// The field to apply the placement strategy against. For the spread placement
	// strategy, valid values are instanceId (or host, which has the same effect), or
	// any platform or custom attribute that is applied to a container instance, such
	// as attribute:ecs.availability-zone. For the binpack placement strategy, valid
	// values are cpu and memory. For the random placement strategy, this field is not
	// used.
	Field *string

	// The type of placement strategy. The random placement strategy randomly places
	// tasks on available candidates. The spread placement strategy spreads placement
	// across available candidates evenly based on the field parameter. The binpack
	// strategy places tasks on available candidates that have the least available
	// amount of the resource that is specified with the field parameter. For example,
	// if you binpack on memory, a task is placed on the instance with the least amount
	// of remaining memory (but still enough to run the task).
	Type PlacementStrategyType
	// contains filtered or unexported fields
}

The task placement strategy for a task or service. To learn more, see Task Placement Strategies in the Amazon Elastic Container Service Developer Guide.
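
A brief sketch of a common combination: spread tasks across Availability Zones, then binpack on memory; the values follow the field descriptions above.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	strategy := []types.PlacementStrategy{
		{Type: types.PlacementStrategyTypeSpread, Field: aws.String("attribute:ecs.availability-zone")},
		{Type: types.PlacementStrategyTypeBinpack, Field: aws.String("memory")},
	}
	// distinctInstance takes no expression.
	constraints := []types.PlacementConstraint{
		{Type: types.PlacementConstraintTypeDistinctInstance},
	}
	_, _ = strategy, constraints
}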

type PlacementStrategyType

type PlacementStrategyType string
const (
	PlacementStrategyTypeRandom  PlacementStrategyType = "random"
	PlacementStrategyTypeSpread  PlacementStrategyType = "spread"
	PlacementStrategyTypeBinpack PlacementStrategyType = "binpack"
)

Enum values for PlacementStrategyType

func (PlacementStrategyType) Values

func (PlacementStrategyType) Values() []PlacementStrategyType

Values returns all known values for PlacementStrategyType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type PropagateTags

type PropagateTags string
const (
	PropagateTagsTaskDefinition PropagateTags = "TASK_DEFINITION"
)

Enum values for PropagateTags

func (PropagateTags) Values

func (PropagateTags) Values() []PropagateTags

Values returns all known values for PropagateTags. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type RequestedPipeState

type RequestedPipeState string
const (
	RequestedPipeStateRunning RequestedPipeState = "RUNNING"
	RequestedPipeStateStopped RequestedPipeState = "STOPPED"
)

Enum values for RequestedPipeState

func (RequestedPipeState) Values

func (RequestedPipeState) Values() []RequestedPipeState

Values returns all known values for RequestedPipeState. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type RequestedPipeStateDescribeResponse

type RequestedPipeStateDescribeResponse string
const (
	RequestedPipeStateDescribeResponseRunning RequestedPipeStateDescribeResponse = "RUNNING"
	RequestedPipeStateDescribeResponseStopped RequestedPipeStateDescribeResponse = "STOPPED"
	RequestedPipeStateDescribeResponseDeleted RequestedPipeStateDescribeResponse = "DELETED"
)

Enum values for RequestedPipeStateDescribeResponse

func (RequestedPipeStateDescribeResponse) Values

func (RequestedPipeStateDescribeResponse) Values() []RequestedPipeStateDescribeResponse

Values returns all known values for RequestedPipeStateDescribeResponse. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type S3LogDestination added in v1.7.0

type S3LogDestination struct {

	// The name of the Amazon S3 bucket to which EventBridge delivers the log records
	// for the pipe.
	BucketName *string

	// The Amazon Web Services account that owns the Amazon S3 bucket to which
	// EventBridge delivers the log records for the pipe.
	BucketOwner *string

	// The format EventBridge uses for the log records.
	//
	// EventBridge currently only supports json formatting.
	OutputFormat S3OutputFormat

	// The prefix text with which to begin Amazon S3 log object names.
	//
	// For more information, see [Organizing objects using prefixes] in the Amazon Simple Storage Service User Guide.
	//
	// [Organizing objects using prefixes]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html
	Prefix *string
	// contains filtered or unexported fields
}

The Amazon S3 logging configuration settings for the pipe.

type S3LogDestinationParameters added in v1.7.0

type S3LogDestinationParameters struct {

	// Specifies the name of the Amazon S3 bucket to which EventBridge delivers the
	// log records for the pipe.
	//
	// This member is required.
	BucketName *string

	// Specifies the Amazon Web Services account that owns the Amazon S3 bucket to
	// which EventBridge delivers the log records for the pipe.
	//
	// This member is required.
	BucketOwner *string

	// How EventBridge should format the log records.
	//
	// EventBridge currently only supports json formatting.
	OutputFormat S3OutputFormat

	// Specifies any prefix text with which to begin Amazon S3 log object names.
	//
	// You can use prefixes to organize the data that you store in Amazon S3 buckets.
	// A prefix is a string of characters at the beginning of the object key name. A
	// prefix can be any length, subject to the maximum length of the object key name
	// (1,024 bytes). For more information, see [Organizing objects using prefixes] in the Amazon Simple Storage Service
	// User Guide.
	//
	// [Organizing objects using prefixes]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html
	Prefix *string
	// contains filtered or unexported fields
}

The Amazon S3 logging configuration settings for the pipe.

type S3OutputFormat added in v1.7.0

type S3OutputFormat string
const (
	S3OutputFormatJson  S3OutputFormat = "json"
	S3OutputFormatPlain S3OutputFormat = "plain"
	S3OutputFormatW3c   S3OutputFormat = "w3c"
)

Enum values for S3OutputFormat

func (S3OutputFormat) Values added in v1.7.0

func (S3OutputFormat) Values() []S3OutputFormat

Values returns all known values for S3OutputFormat. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type SageMakerPipelineParameter

type SageMakerPipelineParameter struct {

	// Name of parameter to start execution of a SageMaker Model Building Pipeline.
	//
	// This member is required.
	Name *string

	// Value of parameter to start execution of a SageMaker Model Building Pipeline.
	//
	// This member is required.
	Value *string
	// contains filtered or unexported fields
}

Name/Value pair of a parameter to start execution of a SageMaker Model Building Pipeline.

type SelfManagedKafkaAccessConfigurationCredentials

type SelfManagedKafkaAccessConfigurationCredentials interface {
	// contains filtered or unexported methods
}

The Secrets Manager secret that stores your stream credentials.

The following types satisfy this interface:

SelfManagedKafkaAccessConfigurationCredentialsMemberBasicAuth
SelfManagedKafkaAccessConfigurationCredentialsMemberClientCertificateTlsAuth
SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram256Auth
SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram512Auth
Example (OutputUsage)
package main

import (
	"fmt"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	var union types.SelfManagedKafkaAccessConfigurationCredentials
	// type switches can be used to check the union value
	switch v := union.(type) {
	case *types.SelfManagedKafkaAccessConfigurationCredentialsMemberBasicAuth:
		_ = v.Value // Value is string

	case *types.SelfManagedKafkaAccessConfigurationCredentialsMemberClientCertificateTlsAuth:
		_ = v.Value // Value is string

	case *types.SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram256Auth:
		_ = v.Value // Value is string

	case *types.SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram512Auth:
		_ = v.Value // Value is string

	case *types.UnknownUnionMember:
		fmt.Println("unknown tag:", v.Tag)

	default:
		fmt.Println("union is nil or unknown type")

	}
}
Output:

type SelfManagedKafkaAccessConfigurationCredentialsMemberBasicAuth

type SelfManagedKafkaAccessConfigurationCredentialsMemberBasicAuth struct {
	Value string
	// contains filtered or unexported fields
}

The ARN of the Secrets Manager secret.

type SelfManagedKafkaAccessConfigurationCredentialsMemberClientCertificateTlsAuth

type SelfManagedKafkaAccessConfigurationCredentialsMemberClientCertificateTlsAuth struct {
	Value string
	// contains filtered or unexported fields
}

The ARN of the Secrets Manager secret.

type SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram256Auth

type SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram256Auth struct {
	Value string
	// contains filtered or unexported fields
}

The ARN of the Secrets Manager secret.

type SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram512Auth

type SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram512Auth struct {
	Value string
	// contains filtered or unexported fields
}

The ARN of the Secrets Manager secret.

type SelfManagedKafkaAccessConfigurationVpc

type SelfManagedKafkaAccessConfigurationVpc struct {

	// Specifies the security groups associated with the stream. These security groups
	// must all be in the same VPC. You can specify as many as five security groups.
	SecurityGroup []string

	// Specifies the subnets associated with the stream. These subnets must all be in
	// the same VPC. You can specify as many as 16 subnets.
	Subnets []string
	// contains filtered or unexported fields
}

This structure specifies the VPC subnets and security groups for the stream, and whether a public IP address is to be used.

type SelfManagedKafkaStartPosition

type SelfManagedKafkaStartPosition string
const (
	SelfManagedKafkaStartPositionTrimHorizon SelfManagedKafkaStartPosition = "TRIM_HORIZON"
	SelfManagedKafkaStartPositionLatest      SelfManagedKafkaStartPosition = "LATEST"
)

Enum values for SelfManagedKafkaStartPosition

func (SelfManagedKafkaStartPosition) Values

func (SelfManagedKafkaStartPosition) Values() []SelfManagedKafkaStartPosition

Values returns all known values for SelfManagedKafkaStartPosition. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type ServiceQuotaExceededException

type ServiceQuotaExceededException struct {
	Message *string

	ErrorCodeOverride *string

	ResourceId   *string
	ResourceType *string
	ServiceCode  *string
	QuotaCode    *string
	// contains filtered or unexported fields
}

A quota has been exceeded.

func (*ServiceQuotaExceededException) Error

func (e *ServiceQuotaExceededException) Error() string

func (*ServiceQuotaExceededException) ErrorCode

func (e *ServiceQuotaExceededException) ErrorCode() string

func (*ServiceQuotaExceededException) ErrorFault

func (e *ServiceQuotaExceededException) ErrorFault() smithy.ErrorFault

func (*ServiceQuotaExceededException) ErrorMessage

func (e *ServiceQuotaExceededException) ErrorMessage() string

type SingleMeasureMapping added in v1.12.0

type SingleMeasureMapping struct {

	// Target measure name for the measurement attribute in the Timestream table.
	//
	// This member is required.
	MeasureName *string

	// Dynamic path of the source field to map to the measure in the record.
	//
	// This member is required.
	MeasureValue *string

	// Data type of the source field.
	//
	// This member is required.
	MeasureValueType MeasureValueType
	// contains filtered or unexported fields
}

Maps a single source data field to a single record in the specified Timestream for LiveAnalytics table.

For more information, see Amazon Timestream for LiveAnalytics concepts.

type Tag

type Tag struct {

	// A string you can use to assign a value. The combination of tag keys and values
	// can help you organize and categorize your resources.
	//
	// This member is required.
	Key *string

	// The value for the specified tag key.
	//
	// This member is required.
	Value *string
	// contains filtered or unexported fields
}

A key-value pair associated with an Amazon Web Services resource. In EventBridge, rules and event buses support tagging.

type ThrottlingException

type ThrottlingException struct {
	Message *string

	ErrorCodeOverride *string

	ServiceCode       *string
	QuotaCode         *string
	RetryAfterSeconds *int32
	// contains filtered or unexported fields
}

An action was throttled.

func (*ThrottlingException) Error

func (e *ThrottlingException) Error() string

func (*ThrottlingException) ErrorCode

func (e *ThrottlingException) ErrorCode() string

func (*ThrottlingException) ErrorFault

func (e *ThrottlingException) ErrorFault() smithy.ErrorFault

func (*ThrottlingException) ErrorMessage

func (e *ThrottlingException) ErrorMessage() string

type TimeFieldType added in v1.12.0

type TimeFieldType string
const (
	TimeFieldTypeEpoch           TimeFieldType = "EPOCH"
	TimeFieldTypeTimestampFormat TimeFieldType = "TIMESTAMP_FORMAT"
)

Enum values for TimeFieldType

func (TimeFieldType) Values added in v1.12.0

func (TimeFieldType) Values() []TimeFieldType

Values returns all known values for TimeFieldType. Note that this can be expanded in the future, and so it is only as up to date as the client.

The ordering of this slice is not guaranteed to be stable across updates.

type UnknownUnionMember

type UnknownUnionMember struct {
	Tag   string
	Value []byte
	// contains filtered or unexported fields
}

UnknownUnionMember is returned when a union member is returned over the wire, but has an unknown tag.

type UpdatePipeSourceActiveMQBrokerParameters

type UpdatePipeSourceActiveMQBrokerParameters struct {

	// The credentials needed to access the resource.
	//
	// This member is required.
	Credentials MQBrokerAccessCredentials

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32
	// contains filtered or unexported fields
}

The parameters for using an Active MQ broker as a source.

type UpdatePipeSourceDynamoDBStreamParameters

type UpdatePipeSourceDynamoDBStreamParameters struct {

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// Define the target queue to send dead-letter queue events to.
	DeadLetterConfig *DeadLetterConfig

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32

	// Discard records older than the specified age. The default value is -1, which
	// sets the maximum age to infinite. When the value is set to infinite, EventBridge
	// never discards old records.
	MaximumRecordAgeInSeconds *int32

	// Discard records after the specified number of retries. The default value is -1,
	// which sets the maximum number of retries to infinite. When MaximumRetryAttempts
	// is infinite, EventBridge retries failed records until the record expires in the
	// event source.
	MaximumRetryAttempts *int32

	// Define how to handle item process failures. AUTOMATIC_BISECT halves each
	// batch and retries each half until all the records are processed or there is
	// one failed message left in the batch.
	OnPartialBatchItemFailure OnPartialBatchItemFailureStreams

	// The number of batches to process concurrently from each shard. The default
	// value is 1.
	ParallelizationFactor *int32
	// contains filtered or unexported fields
}

The parameters for using a DynamoDB stream as a source.
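
A hypothetical sketch of the failure-handling knobs: records older than an hour are discarded, failed batches are bisected, and up to five retries are attempted. The OnPartialBatchItemFailureStreams constant name is assumed to follow this package's enum naming pattern.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	params := &types.UpdatePipeSourceDynamoDBStreamParameters{
		BatchSize:                 aws.Int32(100),
		MaximumRecordAgeInSeconds: aws.Int32(3600), // -1 would mean infinite
		MaximumRetryAttempts:      aws.Int32(5),    // -1 would mean infinite
		OnPartialBatchItemFailure: types.OnPartialBatchItemFailureStreamsAutomaticBisect,
		ParallelizationFactor:     aws.Int32(1),
	}
	_ = params
}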

type UpdatePipeSourceKinesisStreamParameters

type UpdatePipeSourceKinesisStreamParameters struct {

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// Define the target queue to send dead-letter queue events to.
	DeadLetterConfig *DeadLetterConfig

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32

	// Discard records older than the specified age. The default value is -1, which
	// sets the maximum age to infinite. When the value is set to infinite, EventBridge
	// never discards old records.
	MaximumRecordAgeInSeconds *int32

	// Discard records after the specified number of retries. The default value is -1,
	// which sets the maximum number of retries to infinite. When MaximumRetryAttempts
	// is infinite, EventBridge retries failed records until the record expires in the
	// event source.
	MaximumRetryAttempts *int32

	// Define how to handle item process failures. AUTOMATIC_BISECT halves each
	// batch and retries each half until all the records are processed or there is
	// one failed message left in the batch.
	OnPartialBatchItemFailure OnPartialBatchItemFailureStreams

	// The number of batches to process concurrently from each shard. The default
	// value is 1.
	ParallelizationFactor *int32
	// contains filtered or unexported fields
}

The parameters for using a Kinesis stream as a source.

type UpdatePipeSourceManagedStreamingKafkaParameters

type UpdatePipeSourceManagedStreamingKafkaParameters struct {

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// The credentials needed to access the resource.
	Credentials MSKAccessCredentials

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32
	// contains filtered or unexported fields
}

The parameters for using an MSK stream as a source.

type UpdatePipeSourceParameters

type UpdatePipeSourceParameters struct {

	// The parameters for using an Active MQ broker as a source.
	ActiveMQBrokerParameters *UpdatePipeSourceActiveMQBrokerParameters

	// The parameters for using a DynamoDB stream as a source.
	DynamoDBStreamParameters *UpdatePipeSourceDynamoDBStreamParameters

	// The collection of event patterns used to filter events.
	//
	// To remove a filter, specify a FilterCriteria object with an empty array of
	// Filter objects.
	//
	// For more information, see [Events and Event Patterns] in the Amazon EventBridge User Guide.
	//
	// [Events and Event Patterns]: https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-and-event-patterns.html
	FilterCriteria *FilterCriteria

	// The parameters for using a Kinesis stream as a source.
	KinesisStreamParameters *UpdatePipeSourceKinesisStreamParameters

	// The parameters for using an MSK stream as a source.
	ManagedStreamingKafkaParameters *UpdatePipeSourceManagedStreamingKafkaParameters

	// The parameters for using a Rabbit MQ broker as a source.
	RabbitMQBrokerParameters *UpdatePipeSourceRabbitMQBrokerParameters

	// The parameters for using a self-managed Apache Kafka stream as a source.
	//
	// A self-managed cluster refers to any Apache Kafka cluster not hosted by
	// Amazon Web Services. This includes clusters you manage yourself as well as
	// those hosted by a third-party provider, such as [Confluent Cloud], [CloudKarafka], or [Redpanda]. For more information, see [Apache Kafka streams as a source]
	// in the Amazon EventBridge User Guide.
	//
	// [CloudKarafka]: https://www.cloudkarafka.com/
	// [Apache Kafka streams as a source]: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-kafka.html
	// [Confluent Cloud]: https://www.confluent.io/
	// [Redpanda]: https://redpanda.com/
	SelfManagedKafkaParameters *UpdatePipeSourceSelfManagedKafkaParameters

	// The parameters for using an Amazon SQS queue as a source.
	SqsQueueParameters *UpdatePipeSourceSqsQueueParameters
	// contains filtered or unexported fields
}

The parameters required to set up a source for your pipe.

type UpdatePipeSourceRabbitMQBrokerParameters

type UpdatePipeSourceRabbitMQBrokerParameters struct {

	// The credentials needed to access the resource.
	//
	// This member is required.
	Credentials MQBrokerAccessCredentials

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32
	// contains filtered or unexported fields
}

The parameters for using a Rabbit MQ broker as a source.

type UpdatePipeSourceSelfManagedKafkaParameters

type UpdatePipeSourceSelfManagedKafkaParameters struct {

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// The credentials needed to access the resource.
	Credentials SelfManagedKafkaAccessConfigurationCredentials

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32

	// The ARN of the Secrets Manager secret that stores the server root CA
	// certificate.
	ServerRootCaCertificate *string

	// This structure specifies the VPC subnets and security groups for the stream,
	// and whether a public IP address is to be used.
	Vpc *SelfManagedKafkaAccessConfigurationVpc
	// contains filtered or unexported fields
}

The parameters for using a self-managed Apache Kafka stream as a source.

A self-managed cluster refers to any Apache Kafka cluster not hosted by Amazon Web Services. This includes clusters you manage yourself as well as those hosted by a third-party provider, such as Confluent Cloud, CloudKarafka, or Redpanda. For more information, see Apache Kafka streams as a source in the Amazon EventBridge User Guide.
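
A hypothetical sketch wiring the pieces documented above: the Credentials union uses its SASL/SCRAM-512 member, and the VPC settings mirror SelfManagedKafkaAccessConfigurationVpc; all ARNs and IDs are placeholders.

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

func main() {
	params := &types.UpdatePipeSourceSelfManagedKafkaParameters{
		BatchSize: aws.Int32(100),
		// One member of the credentials union wraps the secret ARN.
		Credentials: &types.SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram512Auth{
			Value: "arn:aws:secretsmanager:us-east-1:111122223333:secret:kafka-abc123",
		},
		Vpc: &types.SelfManagedKafkaAccessConfigurationVpc{
			Subnets:       []string{"subnet-0abc1234"},
			SecurityGroup: []string{"sg-0abc1234"},
		},
	}
	_ = params
}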

type UpdatePipeSourceSqsQueueParameters

type UpdatePipeSourceSqsQueueParameters struct {

	// The maximum number of records to include in each batch.
	BatchSize *int32

	// The maximum length of time to wait for events.
	MaximumBatchingWindowInSeconds *int32
	// contains filtered or unexported fields
}

The parameters for using an Amazon SQS queue as a source.

type ValidationException

type ValidationException struct {
	Message *string

	ErrorCodeOverride *string

	FieldList []ValidationExceptionField
	// contains filtered or unexported fields
}

Indicates that an error has occurred while performing a validate operation.

func (*ValidationException) Error

func (e *ValidationException) Error() string

func (*ValidationException) ErrorCode

func (e *ValidationException) ErrorCode() string

func (*ValidationException) ErrorFault

func (e *ValidationException) ErrorFault() smithy.ErrorFault

func (*ValidationException) ErrorMessage

func (e *ValidationException) ErrorMessage() string

type ValidationExceptionField

type ValidationExceptionField struct {

	// The message of the exception.
	//
	// This member is required.
	Message *string

	// The name of the exception.
	//
	// This member is required.
	Name *string
	// contains filtered or unexported fields
}

Indicates that an error has occurred while performing a validate operation.
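
A hypothetical sketch of unwrapping this error after a failed API call; FieldList pinpoints the offending request fields.

package main

import (
	"errors"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/pipes/types"
)

// inspect is a hypothetical helper; pass it the error returned by any
// pipes API call.
func inspect(err error) {
	var ve *types.ValidationException
	if errors.As(err, &ve) {
		for _, f := range ve.FieldList {
			fmt.Printf("%s: %s\n", *f.Name, *f.Message)
		}
	}
}

func main() {
	inspect(nil) // no-op placeholder
}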
