Documentation ¶
Index ¶
- type AccessDeniedFault
- type AccountQuota
- type AssessmentReportType
- type AuthMechanismValue
- type AuthTypeValue
- type AvailabilityZone
- type BatchStartRecommendationsErrorEntry
- type CannedAclForObjectsValue
- type Certificate
- type CharLengthSemantics
- type CollectorHealthCheck
- type CollectorNotFoundFault
- type CollectorResponse
- type CollectorShortInfoResponse
- type CollectorStatus
- type CompressionTypeValue
- type ComputeConfig
- type Connection
- type DataFormatValue
- type DataMigration
- type DataMigrationSettings
- type DataMigrationStatistics
- type DataProvider
- type DataProviderDescriptor
- type DataProviderDescriptorDefinition
- type DataProviderSettings
- type DataProviderSettingsMemberDocDbSettings
- type DataProviderSettingsMemberMariaDbSettings
- type DataProviderSettingsMemberMicrosoftSqlServerSettings
- type DataProviderSettingsMemberMongoDbSettings
- type DataProviderSettingsMemberMySqlSettings
- type DataProviderSettingsMemberOracleSettings
- type DataProviderSettingsMemberPostgreSqlSettings
- type DataProviderSettingsMemberRedshiftSettings
- type DatabaseInstanceSoftwareDetailsResponse
- type DatabaseMode
- type DatabaseResponse
- type DatabaseShortInfoResponse
- type DatePartitionDelimiterValue
- type DatePartitionSequenceValue
- type DefaultErrorDetails
- type DmsSslModeValue
- type DmsTransferSettings
- type DocDbDataProviderSettings
- type DocDbSettings
- type DynamoDbSettings
- type ElasticsearchSettings
- type EncodingTypeValue
- type EncryptionModeValue
- type Endpoint
- type EndpointSetting
- type EndpointSettingTypeValue
- type EngineVersion
- type ErrorDetails
- type ErrorDetailsMemberDefaultErrorDetails
- type Event
- type EventCategoryGroup
- type EventSubscription
- type ExportMetadataModelAssessmentResultEntry
- type ExportSqlDetails
- type FailedDependencyFault
- type Filter
- type FleetAdvisorLsaAnalysisResponse
- type FleetAdvisorSchemaObjectResponse
- type GcpMySQLSettings
- type IBMDb2Settings
- type InstanceProfile
- type InsufficientResourceCapacityFault
- type InvalidCertificateFault
- type InvalidOperationFault
- type InvalidResourceStateFault
- type InvalidSubnet
- type InventoryData
- type KMSAccessDeniedFault
- type KMSDisabledFault
- type KMSFault
- type KMSInvalidStateFault
- type KMSKeyNotAccessibleFault
- type KMSNotFoundFault
- type KMSThrottlingFault
- type KafkaSaslMechanism
- type KafkaSecurityProtocol
- type KafkaSettings
- type KafkaSslEndpointIdentificationAlgorithm
- type KinesisSettings
- type Limitation
- type LongVarcharMappingType
- type MariaDbDataProviderSettings
- type MessageFormatValue
- type MicrosoftSQLServerSettings
- type MicrosoftSqlServerDataProviderSettings
- type MigrationProject
- type MigrationTypeValue
- type MongoDbDataProviderSettings
- type MongoDbSettings
- type MySQLSettings
- type MySqlDataProviderSettings
- type NeptuneSettings
- type NestingLevelValue
- type OracleDataProviderSettings
- type OracleSettings
- type OrderableReplicationInstance
- type OriginTypeValue
- type ParquetVersionValue
- type PendingMaintenanceAction
- type PluginNameValue
- type PostgreSQLSettings
- type PostgreSqlDataProviderSettings
- type ProvisionData
- type RdsConfiguration
- type RdsRecommendation
- type RdsRequirements
- type Recommendation
- type RecommendationData
- type RecommendationSettings
- type RedisAuthTypeValue
- type RedisSettings
- type RedshiftDataProviderSettings
- type RedshiftSettings
- type RefreshSchemasStatus
- type RefreshSchemasStatusTypeValue
- type ReleaseStatusValues
- type ReloadOptionValue
- type Replication
- type ReplicationConfig
- type ReplicationEndpointTypeValue
- type ReplicationInstance
- type ReplicationInstanceTaskLog
- type ReplicationPendingModifiedValues
- type ReplicationStats
- type ReplicationSubnetGroup
- type ReplicationSubnetGroupDoesNotCoverEnoughAZs
- func (e *ReplicationSubnetGroupDoesNotCoverEnoughAZs) Error() string
- func (e *ReplicationSubnetGroupDoesNotCoverEnoughAZs) ErrorCode() string
- func (e *ReplicationSubnetGroupDoesNotCoverEnoughAZs) ErrorFault() smithy.ErrorFault
- func (e *ReplicationSubnetGroupDoesNotCoverEnoughAZs) ErrorMessage() string
- type ReplicationTask
- type ReplicationTaskAssessmentResult
- type ReplicationTaskAssessmentRun
- type ReplicationTaskAssessmentRunProgress
- type ReplicationTaskAssessmentRunResultStatistic
- type ReplicationTaskIndividualAssessment
- type ReplicationTaskStats
- type ResourceAlreadyExistsFault
- type ResourceNotFoundFault
- type ResourcePendingMaintenanceActions
- type ResourceQuotaExceededFault
- type S3AccessDeniedFault
- type S3ResourceNotFoundFault
- type S3Settings
- type SCApplicationAttributes
- type SNSInvalidTopicFault
- type SNSNoAuthorizationFault
- type SafeguardPolicy
- type SchemaConversionRequest
- type SchemaResponse
- type SchemaShortInfoResponse
- type ServerShortInfoResponse
- type SourceDataSetting
- type SourceType
- type SslSecurityProtocolValue
- type StartRecommendationsRequestEntry
- type StartReplicationMigrationTypeValue
- type StartReplicationTaskTypeValue
- type StorageQuotaExceededFault
- type Subnet
- type SubnetAlreadyInUse
- type SupportedEndpointType
- type SybaseSettings
- type TableStatistics
- type TableToReload
- type Tag
- type TargetDbType
- type TimestreamSettings
- type TlogAccessMode
- type UnknownUnionMember
- type UpgradeDependencyFailureFault
- type VersionStatus
- type VpcSecurityGroupMembership
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AccessDeniedFault ¶
type AccessDeniedFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
DMS was denied access to the endpoint. Check that the role is correctly configured.
func (*AccessDeniedFault) Error ¶
func (e *AccessDeniedFault) Error() string
func (*AccessDeniedFault) ErrorCode ¶
func (e *AccessDeniedFault) ErrorCode() string
func (*AccessDeniedFault) ErrorFault ¶
func (e *AccessDeniedFault) ErrorFault() smithy.ErrorFault
func (*AccessDeniedFault) ErrorMessage ¶
func (e *AccessDeniedFault) ErrorMessage() string
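Because every fault type in this package implements the error interface, callers can detect a specific fault with errors.As. A minimal sketch, assuming the wrapped message below is a placeholder rather than real service output:

package main

import (
	"errors"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// classify reports whether an error returned by a DMS operation is an
// AccessDeniedFault.
func classify(err error) {
	var denied *types.AccessDeniedFault
	if errors.As(err, &denied) {
		fmt.Println("access denied:", denied.ErrorMessage())
		return
	}
	fmt.Println("some other error:", err)
}

func main() {
	// A hand-built fault stands in for an error returned by the service.
	classify(&types.AccessDeniedFault{Message: aws.String("role cannot reach the endpoint")})
}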
type AccountQuota ¶
type AccountQuota struct { // The name of the DMS quota for this Amazon Web Services account. AccountQuotaName *string // The maximum allowed value for the quota. Max int64 // The amount currently used toward the quota maximum. Used int64 // contains filtered or unexported fields }
Describes a quota for an Amazon Web Services account, for example the number of replication instances allowed.
type AssessmentReportType ¶ added in v1.30.0
type AssessmentReportType string
const ( AssessmentReportTypePdf AssessmentReportType = "pdf" AssessmentReportTypeCsv AssessmentReportType = "csv" )
Enum values for AssessmentReportType
func (AssessmentReportType) Values ¶ added in v1.30.0
func (AssessmentReportType) Values() []AssessmentReportType
Values returns all known values for AssessmentReportType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
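Every enum in this package exposes the same Values method, so one sketch covers them all. Values is declared on the type itself and can be called on a zero value:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	// List the assessment report formats known to this client version.
	for _, t := range types.AssessmentReportType("").Values() {
		fmt.Println(t) // "pdf" and "csv" for the current client
	}
}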
type AuthMechanismValue ¶
type AuthMechanismValue string
const ( AuthMechanismValueDefault AuthMechanismValue = "default" AuthMechanismValueMongodbCr AuthMechanismValue = "mongodb_cr" AuthMechanismValueScramSha1 AuthMechanismValue = "scram_sha_1" )
Enum values for AuthMechanismValue
func (AuthMechanismValue) Values ¶ added in v0.29.0
func (AuthMechanismValue) Values() []AuthMechanismValue
Values returns all known values for AuthMechanismValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type AuthTypeValue ¶
type AuthTypeValue string
const ( AuthTypeValueNo AuthTypeValue = "no" AuthTypeValuePassword AuthTypeValue = "password" )
Enum values for AuthTypeValue
func (AuthTypeValue) Values ¶ added in v0.29.0
func (AuthTypeValue) Values() []AuthTypeValue
Values returns all known values for AuthTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type AvailabilityZone ¶
type AvailabilityZone struct { // The name of the Availability Zone. Name *string // contains filtered or unexported fields }
The name of an Availability Zone for use during database migration. AvailabilityZone is an optional parameter to the CreateReplicationInstance operation, and its value relates to the Amazon Web Services Region of an endpoint. For example, the Availability Zone of an endpoint in the us-east-1 region might be us-east-1a, us-east-1b, us-east-1c, or us-east-1d.
type BatchStartRecommendationsErrorEntry ¶ added in v1.24.0
type BatchStartRecommendationsErrorEntry struct { // The code of an error that occurred during the analysis of the source database. Code *string // The identifier of the source database. DatabaseId *string // The information about the error. Message *string // contains filtered or unexported fields }
Provides information about the errors that occurred during the analysis of the source database.
type CannedAclForObjectsValue ¶ added in v1.7.0
type CannedAclForObjectsValue string
const ( CannedAclForObjectsValueNone CannedAclForObjectsValue = "none" CannedAclForObjectsValuePrivate CannedAclForObjectsValue = "private" CannedAclForObjectsValuePublicRead CannedAclForObjectsValue = "public-read" CannedAclForObjectsValuePublicReadWrite CannedAclForObjectsValue = "public-read-write" CannedAclForObjectsValueAuthenticatedRead CannedAclForObjectsValue = "authenticated-read" CannedAclForObjectsValueAwsExecRead CannedAclForObjectsValue = "aws-exec-read" CannedAclForObjectsValueBucketOwnerRead CannedAclForObjectsValue = "bucket-owner-read" CannedAclForObjectsValueBucketOwnerFullControl CannedAclForObjectsValue = "bucket-owner-full-control" )
Enum values for CannedAclForObjectsValue
func (CannedAclForObjectsValue) Values ¶ added in v1.7.0
func (CannedAclForObjectsValue) Values() []CannedAclForObjectsValue
Values returns all known values for CannedAclForObjectsValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type Certificate ¶
type Certificate struct { // The Amazon Resource Name (ARN) for the certificate. CertificateArn *string // The date that the certificate was created. CertificateCreationDate *time.Time // A customer-assigned name for the certificate. Identifiers must begin with a // letter and must contain only ASCII letters, digits, and hyphens. They can't end // with a hyphen or contain two consecutive hyphens. CertificateIdentifier *string // The owner of the certificate. CertificateOwner *string // The contents of a .pem file, which contains an X.509 certificate. CertificatePem *string // The location of an imported Oracle Wallet certificate for use with SSL. // Example: filebase64("${path.root}/rds-ca-2019-root.sso") CertificateWallet []byte // The key length of the cryptographic algorithm being used. KeyLength *int32 // The signing algorithm for the certificate. SigningAlgorithm *string // The beginning date that the certificate is valid. ValidFromDate *time.Time // The final date that the certificate is valid. ValidToDate *time.Time // contains filtered or unexported fields }
The SSL certificate that can be used to encrypt connections between the endpoints and the replication instance.
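Most fields on response structs such as Certificate are pointers and may be nil. A minimal sketch of the nil checks needed before dereferencing; the certificate value would normally come from a describe call, so the zero value used here is only a placeholder:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// describeCertificate prints a few optional fields, guarding each pointer.
func describeCertificate(c types.Certificate) {
	if c.CertificateIdentifier != nil {
		fmt.Println("identifier:", *c.CertificateIdentifier)
	}
	if c.SigningAlgorithm != nil {
		fmt.Println("signing algorithm:", *c.SigningAlgorithm)
	}
	if c.ValidToDate != nil {
		fmt.Println("valid until:", c.ValidToDate.UTC())
	}
}

func main() {
	describeCertificate(types.Certificate{}) // zero value: nothing is printed
}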
type CharLengthSemantics ¶ added in v0.29.0
type CharLengthSemantics string
const ( CharLengthSemanticsDefault CharLengthSemantics = "default" CharLengthSemanticsChar CharLengthSemantics = "char" CharLengthSemanticsByte CharLengthSemantics = "byte" )
Enum values for CharLengthSemantics
func (CharLengthSemantics) Values ¶ added in v0.29.0
func (CharLengthSemantics) Values() []CharLengthSemantics
Values returns all known values for CharLengthSemantics. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type CollectorHealthCheck ¶ added in v1.19.0
type CollectorHealthCheck struct { // The status of the Fleet Advisor collector. CollectorStatus CollectorStatus // Whether the local collector can access its Amazon S3 bucket. LocalCollectorS3Access *bool // Whether the role that you provided when creating the Fleet Advisor collector // has sufficient permissions to access the Fleet Advisor web collector. WebCollectorGrantedRoleBasedAccess *bool // Whether the web collector can access its Amazon S3 bucket. WebCollectorS3Access *bool // contains filtered or unexported fields }
Describes the last Fleet Advisor collector health check.
type CollectorNotFoundFault ¶ added in v1.19.0
type CollectorNotFoundFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The specified collector doesn't exist.
func (*CollectorNotFoundFault) Error ¶ added in v1.19.0
func (e *CollectorNotFoundFault) Error() string
func (*CollectorNotFoundFault) ErrorCode ¶ added in v1.19.0
func (e *CollectorNotFoundFault) ErrorCode() string
func (*CollectorNotFoundFault) ErrorFault ¶ added in v1.19.0
func (e *CollectorNotFoundFault) ErrorFault() smithy.ErrorFault
func (*CollectorNotFoundFault) ErrorMessage ¶ added in v1.19.0
func (e *CollectorNotFoundFault) ErrorMessage() string
type CollectorResponse ¶ added in v1.19.0
type CollectorResponse struct { // Describes the last Fleet Advisor collector health check. CollectorHealthCheck *CollectorHealthCheck // The name of the Fleet Advisor collector . CollectorName *string // The reference ID of the Fleet Advisor collector. CollectorReferencedId *string // The version of your Fleet Advisor collector, in semantic versioning format, for // example 1.0.2 CollectorVersion *string // The timestamp when you created the collector, in the following format: // 2022-01-24T19:04:02.596113Z CreatedDate *string // A summary description of the Fleet Advisor collector. Description *string // Describes a Fleet Advisor collector inventory. InventoryData *InventoryData // The timestamp of the last time the collector received data, in the following // format: 2022-01-24T19:04:02.596113Z LastDataReceived *string // The timestamp when DMS last modified the collector, in the following format: // 2022-01-24T19:04:02.596113Z ModifiedDate *string // The timestamp when DMS registered the collector, in the following format: // 2022-01-24T19:04:02.596113Z RegisteredDate *string // The Amazon S3 bucket that the Fleet Advisor collector uses to store inventory // metadata. S3BucketName *string // The IAM role that grants permissions to access the specified Amazon S3 bucket. ServiceAccessRoleArn *string // Whether the collector version is up to date. VersionStatus VersionStatus // contains filtered or unexported fields }
Describes a Fleet Advisor collector.
type CollectorShortInfoResponse ¶ added in v1.19.0
type CollectorShortInfoResponse struct { // The name of the Fleet Advisor collector. CollectorName *string // The reference ID of the Fleet Advisor collector. CollectorReferencedId *string // contains filtered or unexported fields }
Briefly describes a Fleet Advisor collector.
type CollectorStatus ¶ added in v1.19.0
type CollectorStatus string
const ( CollectorStatusUnregistered CollectorStatus = "UNREGISTERED" CollectorStatusActive CollectorStatus = "ACTIVE" )
Enum values for CollectorStatus
func (CollectorStatus) Values ¶ added in v1.19.0
func (CollectorStatus) Values() []CollectorStatus
Values returns all known values for CollectorStatus. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type CompressionTypeValue ¶
type CompressionTypeValue string
const ( CompressionTypeValueNone CompressionTypeValue = "none" CompressionTypeValueGzip CompressionTypeValue = "gzip" )
Enum values for CompressionTypeValue
func (CompressionTypeValue) Values ¶ added in v0.29.0
func (CompressionTypeValue) Values() []CompressionTypeValue
Values returns all known values for CompressionTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ComputeConfig ¶ added in v1.26.0
type ComputeConfig struct { // The Availability Zone where the DMS Serverless replication using this // configuration will run. The default value is a random, system-chosen // Availability Zone in the configuration's Amazon Web Services Region, for // example, "us-west-2" . You can't set this parameter if the MultiAZ parameter is // set to true . AvailabilityZone *string // A list of custom DNS name servers supported for the DMS Serverless replication // to access your source or target database. This list overrides the default name // servers supported by the DMS Serverless replication. You can specify a // comma-separated list of internet addresses for up to four DNS name servers. For // example: "1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4" DnsNameServers *string // An Key Management Service (KMS) key Amazon Resource Name (ARN) that is used to // encrypt the data during DMS Serverless replication. // // If you don't specify a value for the KmsKeyId parameter, DMS uses your default // encryption key. // // KMS creates the default encryption key for your Amazon Web Services account. // Your Amazon Web Services account has a different default encryption key for each // Amazon Web Services Region. KmsKeyId *string // Specifies the maximum value of the DMS capacity units (DCUs) for which a given // DMS Serverless replication can be provisioned. A single DCU is 2GB of RAM, with // 1 DCU as the minimum value allowed. The list of valid DCU values includes 1, 2, // 4, 8, 16, 32, 64, 128, 192, 256, and 384. So, the maximum value that you can // specify for DMS Serverless is 384. The MaxCapacityUnits parameter is the only // DCU parameter you are required to specify. MaxCapacityUnits *int32 // Specifies the minimum value of the DMS capacity units (DCUs) for which a given // DMS Serverless replication can be provisioned. A single DCU is 2GB of RAM, with // 1 DCU as the minimum value allowed. The list of valid DCU values includes 1, 2, // 4, 8, 16, 32, 64, 128, 192, 256, and 384. So, the minimum DCU value that you can // specify for DMS Serverless is 1. If you don't set this value, DMS sets this // parameter to the minimum DCU value allowed, 1. If there is no current source // activity, DMS scales down your replication until it reaches the value specified // in MinCapacityUnits . MinCapacityUnits *int32 // Specifies whether the DMS Serverless replication is a Multi-AZ deployment. You // can't set the AvailabilityZone parameter if the MultiAZ parameter is set to true // . MultiAZ *bool // The weekly time range during which system maintenance can occur for the DMS // Serverless replication, in Universal Coordinated Time (UTC). The format is // ddd:hh24:mi-ddd:hh24:mi . // // The default is a 30-minute window selected at random from an 8-hour block of // time per Amazon Web Services Region. This maintenance occurs on a random day of // the week. Valid values for days of the week include Mon , Tue , Wed , Thu , Fri // , Sat , and Sun . // // Constraints include a minimum 30-minute window. PreferredMaintenanceWindow *string // Specifies a subnet group identifier to associate with the DMS Serverless // replication. ReplicationSubnetGroupId *string // Specifies the virtual private cloud (VPC) security group to use with the DMS // Serverless replication. The VPC security group must work with the VPC containing // the replication. VpcSecurityGroupIds []string // contains filtered or unexported fields }
Configuration parameters for provisioning a DMS Serverless replication.
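A minimal sketch of a ComputeConfig for a DMS Serverless replication. MaxCapacityUnits is the only DCU field that must be set; the subnet group and security group identifiers are placeholders, and AvailabilityZone is omitted because MultiAZ is enabled:

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	cfg := types.ComputeConfig{
		// MaxCapacityUnits is the only DCU setting that must be specified;
		// valid DCU values come from the documented list (1 up to 384).
		MaxCapacityUnits: aws.Int32(16),
		MinCapacityUnits: aws.Int32(2),
		// MultiAZ and AvailabilityZone are mutually exclusive, so no
		// Availability Zone is set here.
		MultiAZ:                    aws.Bool(true),
		PreferredMaintenanceWindow: aws.String("sun:06:00-sun:06:30"),
		// Placeholder identifiers for an existing subnet group and VPC
		// security group.
		ReplicationSubnetGroupId: aws.String("my-subnet-group"),
		VpcSecurityGroupIds:      []string{"sg-0123456789abcdef0"},
	}
	_ = cfg // passed along when creating or modifying a serverless replication configuration
}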
type Connection ¶
type Connection struct { // The ARN string that uniquely identifies the endpoint. EndpointArn *string // The identifier of the endpoint. Identifiers must begin with a letter and must // contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or // contain two consecutive hyphens. EndpointIdentifier *string // The error message when the connection last failed. LastFailureMessage *string // The ARN of the replication instance. ReplicationInstanceArn *string // The replication instance identifier. This parameter is stored as a lowercase // string. ReplicationInstanceIdentifier *string // The connection status. This parameter can return one of the following values: // // - "successful" // // - "testing" // // - "failed" // // - "deleting" Status *string // contains filtered or unexported fields }
Status of the connection between an endpoint and a replication instance, including Amazon Resource Names (ARNs) and the last error message issued.
type DataFormatValue ¶
type DataFormatValue string
const ( DataFormatValueCsv DataFormatValue = "csv" DataFormatValueParquet DataFormatValue = "parquet" )
Enum values for DataFormatValue
func (DataFormatValue) Values ¶ added in v0.29.0
func (DataFormatValue) Values() []DataFormatValue
Values returns all known values for DataFormatValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type DataMigration ¶ added in v1.43.0
type DataMigration struct { // The Amazon Resource Name (ARN) that identifies this replication. DataMigrationArn *string // The CIDR blocks of the endpoints for the data migration. DataMigrationCidrBlocks []string // The UTC time when DMS created the data migration. DataMigrationCreateTime *time.Time // The UTC time when data migration ended. DataMigrationEndTime *time.Time // The user-friendly name for the data migration. DataMigrationName *string // Specifies CloudWatch settings and selection rules for the data migration. DataMigrationSettings *DataMigrationSettings // The UTC time when DMS started the data migration. DataMigrationStartTime *time.Time // Provides information about the data migration's run, including start and stop // time, latency, and data migration progress. DataMigrationStatistics *DataMigrationStatistics // The current status of the data migration. DataMigrationStatus *string // Specifies whether the data migration is full-load only, change data capture // (CDC) only, or full-load and CDC. DataMigrationType MigrationTypeValue // Information about the data migration's most recent error or failure. LastFailureMessage *string // The Amazon Resource Name (ARN) of the data migration's associated migration // project. MigrationProjectArn *string // The IP addresses of the endpoints for the data migration. PublicIpAddresses []string // The IAM role that the data migration uses to access Amazon Web Services // resources. ServiceAccessRoleArn *string // Specifies information about the data migration's source data provider. SourceDataSettings []SourceDataSetting // The reason the data migration last stopped. StopReason *string // contains filtered or unexported fields }
This object provides information about a DMS data migration.
type DataMigrationSettings ¶ added in v1.43.0
type DataMigrationSettings struct { // Whether to enable CloudWatch logging for the data migration. CloudwatchLogsEnabled *bool // The number of parallel jobs that trigger parallel threads to unload the tables // from the source, and then load them to the target. NumberOfJobs *int32 // A JSON-formatted string that defines what objects to include and exclude from // the migration. SelectionRules *string // contains filtered or unexported fields }
Options for configuring a data migration, including whether to enable CloudWatch logs, and the selection rules to use to include or exclude database objects from the migration.
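A minimal sketch of DataMigrationSettings with CloudWatch logging enabled and a single selection rule. The JSON follows the general shape of DMS selection rules and is illustrative only; the schema name is a placeholder:

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	// An illustrative selection rule that includes every table in the
	// "sales" schema; consult the DMS table-mapping documentation for the
	// full rule syntax.
	rules := `{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1",` +
		`"object-locator":{"schema-name":"sales","table-name":"%"},"rule-action":"include"}]}`

	settings := types.DataMigrationSettings{
		CloudwatchLogsEnabled: aws.Bool(true),
		NumberOfJobs:          aws.Int32(4),
		SelectionRules:        aws.String(rules),
	}
	_ = settings
}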
type DataMigrationStatistics ¶ added in v1.43.0
type DataMigrationStatistics struct { // The current latency of the change data capture (CDC) operation. CDCLatency int32 // The elapsed duration of the data migration run. ElapsedTimeMillis int64 // The data migration's progress in the full-load migration phase. FullLoadPercentage int32 // The time when the migration started. StartTime *time.Time // The time when the migration stopped or failed. StopTime *time.Time // The number of tables that DMS failed to process. TablesErrored int32 // The number of tables loaded in the current data migration run. TablesLoaded int32 // The data migration's table loading progress. TablesLoading int32 // The number of tables that are waiting for processing. TablesQueued int32 // contains filtered or unexported fields }
Information about the data migration run, including start and stop time, latency, and migration progress.
type DataProvider ¶ added in v1.30.0
type DataProvider struct { // The Amazon Resource Name (ARN) string that uniquely identifies the data // provider. DataProviderArn *string // The time the data provider was created. DataProviderCreationTime *time.Time // The name of the data provider. DataProviderName *string // A description of the data provider. Descriptions can have up to 31 characters. // A description can contain only ASCII letters, digits, and hyphens ('-'). Also, // it can't end with a hyphen or contain two consecutive hyphens, and can only // begin with a letter. Description *string // The type of database engine for the data provider. Valid values include "aurora" // , "aurora-postgresql" , "mysql" , "oracle" , "postgres" , "sqlserver" , redshift // , mariadb , mongodb , and docdb . A value of "aurora" represents Amazon Aurora // MySQL-Compatible Edition. Engine *string // The settings in JSON format for a data provider. Settings DataProviderSettings // contains filtered or unexported fields }
Provides information that defines a data provider.
type DataProviderDescriptor ¶ added in v1.30.0
type DataProviderDescriptor struct { // The Amazon Resource Name (ARN) of the data provider. DataProviderArn *string // The user-friendly name of the data provider. DataProviderName *string // The ARN of the role used to access Amazon Web Services Secrets Manager. SecretsManagerAccessRoleArn *string // The identifier of the Amazon Web Services Secrets Manager Secret used to store // access credentials for the data provider. SecretsManagerSecretId *string // contains filtered or unexported fields }
Information about a data provider.
type DataProviderDescriptorDefinition ¶ added in v1.30.0
type DataProviderDescriptorDefinition struct { // The name or Amazon Resource Name (ARN) of the data provider. // // This member is required. DataProviderIdentifier *string // The ARN of the role used to access Amazon Web Services Secrets Manager. SecretsManagerAccessRoleArn *string // The identifier of the Amazon Web Services Secrets Manager Secret used to store // access credentials for the data provider. SecretsManagerSecretId *string // contains filtered or unexported fields }
Information about a data provider.
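A minimal sketch of a descriptor definition. DataProviderIdentifier is the only required member; the role ARN and secret name shown are placeholders:

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	def := types.DataProviderDescriptorDefinition{
		// Required: the name or ARN of an existing data provider.
		DataProviderIdentifier: aws.String("my-postgres-provider"),
		// Optional: credentials stored in Secrets Manager (placeholder values).
		SecretsManagerAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-secrets-access"),
		SecretsManagerSecretId:      aws.String("my-provider-credentials"),
	}
	_ = def // used when describing source or target providers for a migration project
}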
type DataProviderSettings ¶ added in v1.30.0
type DataProviderSettings interface {
// contains filtered or unexported methods
}
Provides information that defines a data provider.
The following types satisfy this interface:
DataProviderSettingsMemberDocDbSettings
DataProviderSettingsMemberMariaDbSettings
DataProviderSettingsMemberMicrosoftSqlServerSettings
DataProviderSettingsMemberMongoDbSettings
DataProviderSettingsMemberMySqlSettings
DataProviderSettingsMemberOracleSettings
DataProviderSettingsMemberPostgreSqlSettings
DataProviderSettingsMemberRedshiftSettings
Example (OutputUsage) ¶
package main import ( "fmt" "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types" ) func main() { var union types.DataProviderSettings // type switches can be used to check the union value switch v := union.(type) { case *types.DataProviderSettingsMemberDocDbSettings: _ = v.Value // Value is types.DocDbDataProviderSettings case *types.DataProviderSettingsMemberMariaDbSettings: _ = v.Value // Value is types.MariaDbDataProviderSettings case *types.DataProviderSettingsMemberMicrosoftSqlServerSettings: _ = v.Value // Value is types.MicrosoftSqlServerDataProviderSettings case *types.DataProviderSettingsMemberMongoDbSettings: _ = v.Value // Value is types.MongoDbDataProviderSettings case *types.DataProviderSettingsMemberMySqlSettings: _ = v.Value // Value is types.MySqlDataProviderSettings case *types.DataProviderSettingsMemberOracleSettings: _ = v.Value // Value is types.OracleDataProviderSettings case *types.DataProviderSettingsMemberPostgreSqlSettings: _ = v.Value // Value is types.PostgreSqlDataProviderSettings case *types.DataProviderSettingsMemberRedshiftSettings: _ = v.Value // Value is types.RedshiftDataProviderSettings case *types.UnknownUnionMember: fmt.Println("unknown tag:", v.Tag) default: fmt.Println("union is nil or unknown type") } }
Output:
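The generated example above covers reading the union; when building a request, wrap a concrete settings struct in the matching member type. A minimal sketch using the DocumentDB member, where the server name and database are placeholders:

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	// Wrapping DocDbDataProviderSettings in its union member lets it be used
	// wherever a DataProviderSettings value is expected.
	var settings types.DataProviderSettings = &types.DataProviderSettingsMemberDocDbSettings{
		Value: types.DocDbDataProviderSettings{
			ServerName:   aws.String("docdb.example.internal"), // placeholder host
			Port:         aws.Int32(27017),
			DatabaseName: aws.String("appdb"),
			SslMode:      types.DmsSslModeValueRequire,
		},
	}
	_ = settings
}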
type DataProviderSettingsMemberDocDbSettings ¶ added in v1.31.0
type DataProviderSettingsMemberDocDbSettings struct { Value DocDbDataProviderSettings // contains filtered or unexported fields }
Provides information that defines a DocumentDB data provider.
type DataProviderSettingsMemberMariaDbSettings ¶ added in v1.31.0
type DataProviderSettingsMemberMariaDbSettings struct { Value MariaDbDataProviderSettings // contains filtered or unexported fields }
Provides information that defines a MariaDB data provider.
type DataProviderSettingsMemberMicrosoftSqlServerSettings ¶ added in v1.30.0
type DataProviderSettingsMemberMicrosoftSqlServerSettings struct { Value MicrosoftSqlServerDataProviderSettings // contains filtered or unexported fields }
Provides information that defines a Microsoft SQL Server data provider.
type DataProviderSettingsMemberMongoDbSettings ¶ added in v1.31.0
type DataProviderSettingsMemberMongoDbSettings struct { Value MongoDbDataProviderSettings // contains filtered or unexported fields }
Provides information that defines a MongoDB data provider.
type DataProviderSettingsMemberMySqlSettings ¶ added in v1.30.0
type DataProviderSettingsMemberMySqlSettings struct { Value MySqlDataProviderSettings // contains filtered or unexported fields }
Provides information that defines a MySQL data provider.
type DataProviderSettingsMemberOracleSettings ¶ added in v1.30.0
type DataProviderSettingsMemberOracleSettings struct { Value OracleDataProviderSettings // contains filtered or unexported fields }
Provides information that defines an Oracle data provider.
type DataProviderSettingsMemberPostgreSqlSettings ¶ added in v1.30.0
type DataProviderSettingsMemberPostgreSqlSettings struct { Value PostgreSqlDataProviderSettings // contains filtered or unexported fields }
Provides information that defines a PostgreSQL data provider.
type DataProviderSettingsMemberRedshiftSettings ¶ added in v1.31.0
type DataProviderSettingsMemberRedshiftSettings struct { Value RedshiftDataProviderSettings // contains filtered or unexported fields }
Provides information that defines an Amazon Redshift data provider.
type DatabaseInstanceSoftwareDetailsResponse ¶ added in v1.19.0
type DatabaseInstanceSoftwareDetailsResponse struct { // The database engine of a database in a Fleet Advisor collector inventory, for // example Microsoft SQL Server . Engine *string // The database engine edition of a database in a Fleet Advisor collector // inventory, for example Express . EngineEdition *string // The database engine version of a database in a Fleet Advisor collector // inventory, for example 2019 . EngineVersion *string // The operating system architecture of the database. OsArchitecture *int32 // The service pack level of the database. ServicePack *string // The support level of the database, for example Mainstream support . SupportLevel *string // Information about the database engine software, for example Mainstream support // ends on November 14th, 2024 . Tooltip *string // contains filtered or unexported fields }
Describes an inventory database instance for a Fleet Advisor collector.
type DatabaseMode ¶ added in v1.27.0
type DatabaseMode string
const ( DatabaseModeDefault DatabaseMode = "default" DatabaseModeBabelfish DatabaseMode = "babelfish" )
Enum values for DatabaseMode
func (DatabaseMode) Values ¶ added in v1.27.0
func (DatabaseMode) Values() []DatabaseMode
Values returns all known values for DatabaseMode. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type DatabaseResponse ¶ added in v1.19.0
type DatabaseResponse struct { // A list of collectors associated with the database. Collectors []CollectorShortInfoResponse // The ID of a database in a Fleet Advisor collector inventory. DatabaseId *string // The name of a database in a Fleet Advisor collector inventory. DatabaseName *string // The IP address of a database in a Fleet Advisor collector inventory. IpAddress *string // The number of schemas in a Fleet Advisor collector inventory database. NumberOfSchemas *int64 // The server name of a database in a Fleet Advisor collector inventory. Server *ServerShortInfoResponse // The software details of a database in a Fleet Advisor collector inventory, such // as database engine and version. SoftwareDetails *DatabaseInstanceSoftwareDetailsResponse // contains filtered or unexported fields }
Describes a database in a Fleet Advisor collector inventory.
type DatabaseShortInfoResponse ¶ added in v1.19.0
type DatabaseShortInfoResponse struct { // The database engine of a database in a Fleet Advisor collector inventory, for // example PostgreSQL . DatabaseEngine *string // The ID of a database in a Fleet Advisor collector inventory. DatabaseId *string // The IP address of a database in a Fleet Advisor collector inventory. DatabaseIpAddress *string // The name of a database in a Fleet Advisor collector inventory. DatabaseName *string // contains filtered or unexported fields }
Describes a database in a Fleet Advisor collector inventory.
type DatePartitionDelimiterValue ¶ added in v0.29.0
type DatePartitionDelimiterValue string
const ( DatePartitionDelimiterValueSlash DatePartitionDelimiterValue = "SLASH" DatePartitionDelimiterValueUnderscore DatePartitionDelimiterValue = "UNDERSCORE" DatePartitionDelimiterValueDash DatePartitionDelimiterValue = "DASH" DatePartitionDelimiterValueNone DatePartitionDelimiterValue = "NONE" )
Enum values for DatePartitionDelimiterValue
func (DatePartitionDelimiterValue) Values ¶ added in v0.29.0
func (DatePartitionDelimiterValue) Values() []DatePartitionDelimiterValue
Values returns all known values for DatePartitionDelimiterValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type DatePartitionSequenceValue ¶ added in v0.29.0
type DatePartitionSequenceValue string
const ( DatePartitionSequenceValueYyyymmdd DatePartitionSequenceValue = "YYYYMMDD" DatePartitionSequenceValueYyyymmddhh DatePartitionSequenceValue = "YYYYMMDDHH" DatePartitionSequenceValueYyyymm DatePartitionSequenceValue = "YYYYMM" DatePartitionSequenceValueMmyyyydd DatePartitionSequenceValue = "MMYYYYDD" DatePartitionSequenceValueDdmmyyyy DatePartitionSequenceValue = "DDMMYYYY" )
Enum values for DatePartitionSequenceValue
func (DatePartitionSequenceValue) Values ¶ added in v0.29.0
func (DatePartitionSequenceValue) Values() []DatePartitionSequenceValue
Values returns all known values for DatePartitionSequenceValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type DefaultErrorDetails ¶ added in v1.30.0
type DefaultErrorDetails struct { // The error message. Message *string // contains filtered or unexported fields }
Provides error information about a schema conversion operation.
type DmsSslModeValue ¶
type DmsSslModeValue string
const ( DmsSslModeValueNone DmsSslModeValue = "none" DmsSslModeValueRequire DmsSslModeValue = "require" DmsSslModeValueVerifyCa DmsSslModeValue = "verify-ca" DmsSslModeValueVerifyFull DmsSslModeValue = "verify-full" )
Enum values for DmsSslModeValue
func (DmsSslModeValue) Values ¶ added in v0.29.0
func (DmsSslModeValue) Values() []DmsSslModeValue
Values returns all known values for DmsSslModeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type DmsTransferSettings ¶
type DmsTransferSettings struct { // The name of the S3 bucket to use. BucketName *string // The Amazon Resource Name (ARN) used by the service access IAM role. The role // must allow the iam:PassRole action. ServiceAccessRoleArn *string // contains filtered or unexported fields }
The settings in JSON format for the DMS Transfer type source endpoint.
type DocDbDataProviderSettings ¶ added in v1.31.0
type DocDbDataProviderSettings struct { // The Amazon Resource Name (ARN) of the certificate used for SSL connection. CertificateArn *string // The database name on the DocumentDB data provider. DatabaseName *string // The port value for the DocumentDB data provider. Port *int32 // The name of the source DocumentDB server. ServerName *string // The SSL mode used to connect to the DocumentDB data provider. The default value // is none . SslMode DmsSslModeValue // contains filtered or unexported fields }
Provides information that defines a DocumentDB data provider.
type DocDbSettings ¶ added in v0.29.0
type DocDbSettings struct { // The database name on the DocumentDB source endpoint. DatabaseName *string // Indicates the number of documents to preview to determine the document // organization. Use this setting when NestingLevel is set to "one" . // // Must be a positive value greater than 0 . Default value is 1000 . DocsToInvestigate *int32 // Specifies the document ID. Use this setting when NestingLevel is set to "none" // . // // Default value is "false" . ExtractDocId *bool // The KMS key identifier that is used to encrypt the content on the replication // instance. If you don't specify a value for the KmsKeyId parameter, then DMS // uses your default encryption key. KMS creates the default encryption key for // your Amazon Web Services account. Your Amazon Web Services account has a // different default encryption key for each Amazon Web Services Region. KmsKeyId *string // Specifies either document or table mode. // // Default value is "none" . Specify "none" to use document mode. Specify "one" to // use table mode. NestingLevel NestingLevelValue // The password for the user account you use to access the DocumentDB source // endpoint. Password *string // The port value for the DocumentDB source endpoint. Port *int32 // If true , DMS replicates data to shard collections. DMS only uses this setting // if the target endpoint is a DocumentDB elastic cluster. // // When this setting is true , note the following: // // - You must set TargetTablePrepMode to nothing . // // - DMS automatically sets useUpdateLookup to false . ReplicateShardCollections *bool // The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the // trusted entity and grants the required permissions to access the value in // SecretsManagerSecret . The role must allow the iam:PassRole action. // SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager // secret that allows access to the DocumentDB endpoint. // // You can specify one of two sets of values for these permissions. You can // specify the values for this setting and SecretsManagerSecretId . Or you can // specify clear-text values for UserName , Password , ServerName , and Port . You // can't specify both. For more information on creating this SecretsManagerSecret // and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to // access it, see [Using secrets to access Database Migration Service resources]in the Database Migration Service User Guide. // // [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager SecretsManagerAccessRoleArn *string // The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that // contains the DocumentDB endpoint connection details. SecretsManagerSecretId *string // The name of the server on the DocumentDB source endpoint. ServerName *string // If true , DMS retrieves the entire document from the DocumentDB source during // migration. This may cause a migration failure if the server response exceeds // bandwidth limits. To fetch only updates and deletes during migration, set this // parameter to false . UseUpdateLookUp *bool // The user name you use to access the DocumentDB source endpoint. Username *string // contains filtered or unexported fields }
Provides information that defines a DocumentDB endpoint.
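A minimal sketch of DocDbSettings for a source endpoint that pulls its credentials from Secrets Manager instead of clear-text UserName, Password, ServerName, and Port values. The role ARN, secret ID, and database name are placeholders, and NestingLevelValueOne is assumed to be the enum constant for the "one" (table mode) value of NestingLevelValue:

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.DocDbSettings{
		DatabaseName: aws.String("appdb"),
		// Table mode, sampling up to 1000 documents to infer the schema.
		NestingLevel:      types.NestingLevelValueOne,
		DocsToInvestigate: aws.Int32(1000),
		// Credentials come from Secrets Manager, so the clear-text connection
		// fields are left unset (placeholder ARN and secret ID).
		SecretsManagerAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-docdb-secret-access"),
		SecretsManagerSecretId:      aws.String("docdb-endpoint-credentials"),
	}
	_ = settings
}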
type DynamoDbSettings ¶
type DynamoDbSettings struct { // The Amazon Resource Name (ARN) used by the service to access the IAM role. The // role must allow the iam:PassRole action. // // This member is required. ServiceAccessRoleArn *string // contains filtered or unexported fields }
Provides the Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role used to define an Amazon DynamoDB target endpoint.
type ElasticsearchSettings ¶
type ElasticsearchSettings struct { // The endpoint for the OpenSearch cluster. DMS uses HTTPS if a transport protocol // (http/https) is not specified. // // This member is required. EndpointUri *string // The Amazon Resource Name (ARN) used by the service to access the IAM role. The // role must allow the iam:PassRole action. // // This member is required. ServiceAccessRoleArn *string // The maximum number of seconds for which DMS retries failed API requests to the // OpenSearch cluster. ErrorRetryDuration *int32 // The maximum percentage of records that can fail to be written before a full // load operation stops. // // To avoid early failure, this counter is only effective after 1000 records are // transferred. OpenSearch also has the concept of error monitoring during the last // 10 minutes of an Observation Window. If transfer of all records fail in the last // 10 minutes, the full load operation stops. FullLoadErrorPercentage *int32 // Set this option to true for DMS to migrate documentation using the // documentation type _doc . OpenSearch and an Elasticsearch cluster only support // the _doc documentation type in versions 7. x and later. The default value is // false . UseNewMappingType *bool // contains filtered or unexported fields }
Provides information that defines an OpenSearch endpoint.
type EncodingTypeValue ¶
type EncodingTypeValue string
const ( EncodingTypeValuePlain EncodingTypeValue = "plain" EncodingTypeValuePlainDictionary EncodingTypeValue = "plain-dictionary" EncodingTypeValueRleDictionary EncodingTypeValue = "rle-dictionary" )
Enum values for EncodingTypeValue
func (EncodingTypeValue) Values ¶ added in v0.29.0
func (EncodingTypeValue) Values() []EncodingTypeValue
Values returns all known values for EncodingTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type EncryptionModeValue ¶
type EncryptionModeValue string
const ( EncryptionModeValueSseS3 EncryptionModeValue = "sse-s3" EncryptionModeValueSseKms EncryptionModeValue = "sse-kms" )
Enum values for EncryptionModeValue
func (EncryptionModeValue) Values ¶ added in v0.29.0
func (EncryptionModeValue) Values() []EncryptionModeValue
Values returns all known values for EncryptionModeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type Endpoint ¶
type Endpoint struct { // The Amazon Resource Name (ARN) used for SSL connection to the endpoint. CertificateArn *string // The name of the database at the endpoint. DatabaseName *string // The settings for the DMS Transfer type source. For more information, see the // DmsTransferSettings structure. DmsTransferSettings *DmsTransferSettings // Provides information that defines a DocumentDB endpoint. DocDbSettings *DocDbSettings // The settings for the DynamoDB target endpoint. For more information, see the // DynamoDBSettings structure. DynamoDbSettings *DynamoDbSettings // The settings for the OpenSearch source endpoint. For more information, see the // ElasticsearchSettings structure. ElasticsearchSettings *ElasticsearchSettings // The Amazon Resource Name (ARN) string that uniquely identifies the endpoint. EndpointArn *string // The database endpoint identifier. Identifiers must begin with a letter and must // contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or // contain two consecutive hyphens. EndpointIdentifier *string // The type of endpoint. Valid values are source and target . EndpointType ReplicationEndpointTypeValue // The expanded name for the engine name. For example, if the EngineName parameter // is "aurora", this value would be "Amazon Aurora MySQL". EngineDisplayName *string // The database engine name. Valid values, depending on the EndpointType, include // "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , // "redshift" , "redshift-serverless" , "s3" , "db2" , "db2-zos" , "azuredb" , // "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , // "documentdb" , "sqlserver" , "neptune" , and "babelfish" . EngineName *string // Value returned by a call to CreateEndpoint that can be used for cross-account // validation. Use it on a subsequent call to CreateEndpoint to create the endpoint // with a cross-account. ExternalId *string // The external table definition. ExternalTableDefinition *string // Additional connection attributes used to connect to the endpoint. ExtraConnectionAttributes *string // Settings in JSON format for the source GCP MySQL endpoint. GcpMySQLSettings *GcpMySQLSettings // The settings for the IBM Db2 LUW source endpoint. For more information, see the // IBMDb2Settings structure. IBMDb2Settings *IBMDb2Settings // The settings for the Apache Kafka target endpoint. For more information, see // the KafkaSettings structure. KafkaSettings *KafkaSettings // The settings for the Amazon Kinesis target endpoint. For more information, see // the KinesisSettings structure. KinesisSettings *KinesisSettings // An KMS key identifier that is used to encrypt the connection parameters for the // endpoint. // // If you don't specify a value for the KmsKeyId parameter, then DMS uses your // default encryption key. // // KMS creates the default encryption key for your Amazon Web Services account. // Your Amazon Web Services account has a different default encryption key for each // Amazon Web Services Region. KmsKeyId *string // The settings for the Microsoft SQL Server source and target endpoint. For more // information, see the MicrosoftSQLServerSettings structure. MicrosoftSQLServerSettings *MicrosoftSQLServerSettings // The settings for the MongoDB source endpoint. For more information, see the // MongoDbSettings structure. MongoDbSettings *MongoDbSettings // The settings for the MySQL source and target endpoint. For more information, // see the MySQLSettings structure. 
MySQLSettings *MySQLSettings // The settings for the Amazon Neptune target endpoint. For more information, see // the NeptuneSettings structure. NeptuneSettings *NeptuneSettings // The settings for the Oracle source and target endpoint. For more information, // see the OracleSettings structure. OracleSettings *OracleSettings // The port value used to access the endpoint. Port *int32 // The settings for the PostgreSQL source and target endpoint. For more // information, see the PostgreSQLSettings structure. PostgreSQLSettings *PostgreSQLSettings // The settings for the Redis target endpoint. For more information, see the // RedisSettings structure. RedisSettings *RedisSettings // Settings for the Amazon Redshift endpoint. RedshiftSettings *RedshiftSettings // The settings for the S3 target endpoint. For more information, see the // S3Settings structure. S3Settings *S3Settings // The name of the server at the endpoint. ServerName *string // The Amazon Resource Name (ARN) used by the service to access the IAM role. The // role must allow the iam:PassRole action. ServiceAccessRoleArn *string // The SSL mode used to connect to the endpoint. The default value is none . SslMode DmsSslModeValue // The status of the endpoint. Status *string // The settings for the SAP ASE source and target endpoint. For more information, // see the SybaseSettings structure. SybaseSettings *SybaseSettings // The settings for the Amazon Timestream target endpoint. For more information, // see the TimestreamSettings structure. TimestreamSettings *TimestreamSettings // The user name used to connect to the endpoint. Username *string // contains filtered or unexported fields }
Describes an endpoint of a database instance in response to operations such as the following:
- CreateEndpoint
- DescribeEndpoint
- ModifyEndpoint
type EndpointSetting ¶ added in v1.3.0
type EndpointSetting struct { // The relevance or validity of an endpoint setting for an engine name and its // endpoint type. Applicability *string // The default value of the endpoint setting if no value is specified using // CreateEndpoint or ModifyEndpoint . DefaultValue *string // Enumerated values to use for this endpoint. EnumValues []string // The maximum value of an endpoint setting that is of type int . IntValueMax *int32 // The minimum value of an endpoint setting that is of type int . IntValueMin *int32 // The name that you want to give the endpoint settings. Name *string // A value that marks this endpoint setting as sensitive. Sensitive *bool // The type of endpoint. Valid values are source and target . Type EndpointSettingTypeValue // The unit of measure for this endpoint setting. Units *string // contains filtered or unexported fields }
Endpoint settings.
type EndpointSettingTypeValue ¶ added in v1.3.0
type EndpointSettingTypeValue string
const ( EndpointSettingTypeValueString EndpointSettingTypeValue = "string" EndpointSettingTypeValueBoolean EndpointSettingTypeValue = "boolean" EndpointSettingTypeValueInteger EndpointSettingTypeValue = "integer" EndpointSettingTypeValueEnum EndpointSettingTypeValue = "enum" )
Enum values for EndpointSettingTypeValue
func (EndpointSettingTypeValue) Values ¶ added in v1.3.0
func (EndpointSettingTypeValue) Values() []EndpointSettingTypeValue
Values returns all known values for EndpointSettingTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type EngineVersion ¶ added in v1.29.0
type EngineVersion struct { // The date when the replication instance will be automatically upgraded. This // setting only applies if the auto-minor-version setting is enabled. AutoUpgradeDate *time.Time // The list of valid replication instance versions that you can upgrade to. AvailableUpgrades []string // The date when the replication instance version will be deprecated and can no // longer be requested. DeprecationDate *time.Time // The date when the replication instance will have a version upgrade forced. ForceUpgradeDate *time.Time // The date when the replication instance version became publicly available. LaunchDate *time.Time // The lifecycle status of the replication instance version. Valid values are // DEPRECATED , DEFAULT_VERSION , and ACTIVE . Lifecycle *string // The release status of the replication instance version. ReleaseStatus ReleaseStatusValues // The version number of the replication instance. Version *string // contains filtered or unexported fields }
Provides information about a replication instance version.
type ErrorDetails ¶ added in v1.30.0
type ErrorDetails interface {
// contains filtered or unexported methods
}
Provides error information about a project.
The following types satisfy this interface:
ErrorDetailsMemberDefaultErrorDetails
Example (OutputUsage) ¶
package main import ( "fmt" "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types" ) func main() { var union types.ErrorDetails // type switches can be used to check the union value switch v := union.(type) { case *types.ErrorDetailsMemberDefaultErrorDetails: _ = v.Value // Value is types.DefaultErrorDetails case *types.UnknownUnionMember: fmt.Println("unknown tag:", v.Tag) default: fmt.Println("union is nil or unknown type") } }
Output:
type ErrorDetailsMemberDefaultErrorDetails ¶ added in v1.30.0
type ErrorDetailsMemberDefaultErrorDetails struct { Value DefaultErrorDetails // contains filtered or unexported fields }
Error information about a project.
type Event ¶
type Event struct { // The date of the event. Date *time.Time // The event categories available for the specified source type. EventCategories []string // The event message. Message *string // The identifier of an event source. SourceIdentifier *string // The type of DMS resource that generates events. // // Valid values: replication-instance | endpoint | replication-task SourceType SourceType // contains filtered or unexported fields }
Describes an identifiable significant activity that affects a replication instance or task. This object can provide the message, the available event categories, the date and source of the event, and the DMS resource type.
type EventCategoryGroup ¶
type EventCategoryGroup struct { // A list of event categories from a source type that you've chosen. EventCategories []string // The type of DMS resource that generates events. // // Valid values: replication-instance | replication-server | security-group | // replication-task SourceType *string // contains filtered or unexported fields }
Lists categories of events subscribed to, and generated by, the applicable DMS resource type. This data type appears in response to the DescribeEventCategories action.
type EventSubscription ¶
type EventSubscription struct { // The DMS event notification subscription Id. CustSubscriptionId *string // The Amazon Web Services customer account associated with the DMS event // notification subscription. CustomerAwsId *string // Boolean value that indicates if the event subscription is enabled. Enabled bool // A lists of event categories. EventCategoriesList []string // The topic ARN of the DMS event notification subscription. SnsTopicArn *string // A list of source Ids for the event subscription. SourceIdsList []string // The type of DMS resource that generates events. // // Valid values: replication-instance | replication-server | security-group | // replication-task SourceType *string // The status of the DMS event notification subscription. // // Constraints: // // Can be one of the following: creating | modifying | deleting | active | // no-permission | topic-not-exist // // The status "no-permission" indicates that DMS no longer has permission to post // to the SNS topic. The status "topic-not-exist" indicates that the topic was // deleted after the subscription was created. Status *string // The time the DMS event notification subscription was created. SubscriptionCreationTime *string // contains filtered or unexported fields }
Describes an event notification subscription created by the CreateEventSubscription operation.
type ExportMetadataModelAssessmentResultEntry ¶ added in v1.30.0
type ExportMetadataModelAssessmentResultEntry struct { // The URL for the object containing the exported metadata model assessment. ObjectURL *string // The object key for the object containing the exported metadata model assessment. S3ObjectKey *string // contains filtered or unexported fields }
Provides information about an exported metadata model assessment.
type ExportSqlDetails ¶ added in v1.30.0
type ExportSqlDetails struct { // The URL for the object containing the exported metadata model assessment. ObjectURL *string // The Amazon S3 object key for the object containing the exported metadata model // assessment. S3ObjectKey *string // contains filtered or unexported fields }
Provides information about a metadata model assessment exported to SQL.
type FailedDependencyFault ¶ added in v1.43.0
type FailedDependencyFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
A dependency threw an exception.
func (*FailedDependencyFault) Error ¶ added in v1.43.0
func (e *FailedDependencyFault) Error() string
func (*FailedDependencyFault) ErrorCode ¶ added in v1.43.0
func (e *FailedDependencyFault) ErrorCode() string
func (*FailedDependencyFault) ErrorFault ¶ added in v1.43.0
func (e *FailedDependencyFault) ErrorFault() smithy.ErrorFault
func (*FailedDependencyFault) ErrorMessage ¶ added in v1.43.0
func (e *FailedDependencyFault) ErrorMessage() string
type Filter ¶
type Filter struct { // The name of the filter as specified for a Describe* or similar operation. // // This member is required. Name *string // The filter value, which can specify one or more values used to narrow the // returned results. // // This member is required. Values []string // contains filtered or unexported fields }
Identifies the name and value of a filter object. This filter is used to limit the number and type of DMS objects that are returned for a particular Describe* call or similar operation. Filters are used as an optional parameter for certain API operations.
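Example (a sketch, not generated documentation): narrowing a Describe* call with Filter values. The DescribeEndpoints operation and its input type come from the parent databasemigrationservice client package rather than this types package, and the filter names shown ("endpoint-type", "engine-name") are illustrative endpoint filters.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	dms "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := dms.NewFromConfig(cfg)

	// Both Name and Values are required members of Filter.
	out, err := client.DescribeEndpoints(context.TODO(), &dms.DescribeEndpointsInput{
		Filters: []types.Filter{
			{Name: aws.String("endpoint-type"), Values: []string{"source"}},
			{Name: aws.String("engine-name"), Values: []string{"mysql"}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, ep := range out.Endpoints {
		fmt.Println(aws.ToString(ep.EndpointIdentifier))
	}
}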
type FleetAdvisorLsaAnalysisResponse ¶ added in v1.19.0
type FleetAdvisorLsaAnalysisResponse struct { // The ID of an LSA analysis run by a Fleet Advisor collector. LsaAnalysisId *string // The status of an LSA analysis run by a Fleet Advisor collector. Status *string // contains filtered or unexported fields }
Describes a large-scale assessment (LSA) analysis run by a Fleet Advisor collector.
type FleetAdvisorSchemaObjectResponse ¶ added in v1.19.0
type FleetAdvisorSchemaObjectResponse struct { // The number of lines of code in a schema object in a Fleet Advisor collector // inventory. CodeLineCount *int64 // The size level of the code in a schema object in a Fleet Advisor collector // inventory. CodeSize *int64 // The number of objects in a schema object in a Fleet Advisor collector inventory. NumberOfObjects *int64 // The type of the schema object, as reported by the database engine. Examples // include the following: // // - function // // - trigger // // - SYSTEM_TABLE // // - QUEUE ObjectType *string // The ID of a schema object in a Fleet Advisor collector inventory. SchemaId *string // contains filtered or unexported fields }
Describes a schema object in a Fleet Advisor collector inventory.
type GcpMySQLSettings ¶ added in v1.12.0
type GcpMySQLSettings struct { // Specifies a script to run immediately after DMS connects to the endpoint. The // migration task continues running regardless if the SQL statement succeeds or // fails. // // For this parameter, provide the code of the script itself, not the name of a // file containing the script. AfterConnectScript *string // Cleans and recreates table metadata information on the replication instance // when a mismatch occurs. For example, in a situation where running an alter DDL // on the table could result in different information about the table cached in the // replication instance. CleanSourceMetadataOnMismatch *bool // Database name for the endpoint. For a MySQL source or target endpoint, don't // explicitly specify the database using the DatabaseName request parameter on // either the CreateEndpoint or ModifyEndpoint API call. Specifying DatabaseName // when you create or modify a MySQL endpoint replicates all the task tables to // this single database. For MySQL endpoints, you specify the database only when // you specify the schema in the table-mapping rules of the DMS task. DatabaseName *string // Specifies how often to check the binary log for new changes/events when the // database is idle. The default is five seconds. // // Example: eventsPollInterval=5; // // In the example, DMS checks for changes in the binary logs every five seconds. EventsPollInterval *int32 // Specifies the maximum size (in KB) of any .csv file used to transfer data to a // MySQL-compatible database. // // Example: maxFileSize=512 MaxFileSize *int32 // Improves performance when loading data into the MySQL-compatible target // database. Specifies how many threads to use to load the data into the // MySQL-compatible target database. Setting a large number of threads can have an // adverse effect on database performance, because a separate connection is // required for each thread. The default is one. // // Example: parallelLoadThreads=1 ParallelLoadThreads *int32 // Endpoint connection password. Password *string // Endpoint TCP port. Port *int32 // The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the // trusted entity and grants the required permissions to access the value in // SecretsManagerSecret. The role must allow the iam:PassRole action. // SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager // secret that allows access to the MySQL endpoint. // // You can specify one of two sets of values for these permissions. You can // specify the values for this setting and SecretsManagerSecretId . Or you can // specify clear-text values for UserName , Password , ServerName , and Port . You // can't specify both. For more information on creating this SecretsManagerSecret // and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to // access it, see [Using secrets to access Database Migration Service resources]in the Database Migration Service User Guide. // // [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager SecretsManagerAccessRoleArn *string // The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that // contains the MySQL endpoint connection details. SecretsManagerSecretId *string // The MySQL host name. ServerName *string // Specifies the time zone for the source MySQL database. // // Example: serverTimezone=US/Pacific; // // Note: Do not enclose time zones in single quotes. 
ServerTimezone *string // Specifies where to migrate source tables on the target, either to a single // database or multiple databases. // // Example: targetDbType=MULTIPLE_DATABASES TargetDbType TargetDbType // Endpoint connection user name. Username *string // contains filtered or unexported fields }
Settings in JSON format for the source GCP MySQL endpoint.
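Example (an illustrative sketch, not from the service documentation): populating GcpMySQLSettings with Secrets Manager-based authentication instead of clear-text UserName, Password, ServerName, and Port, since the two styles are mutually exclusive per the field notes above. The role ARN and secret name are placeholders.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.GcpMySQLSettings{
		// DatabaseName is intentionally omitted; for MySQL endpoints the
		// database is chosen through the task's table-mapping rules.
		EventsPollInterval:  aws.Int32(5),   // check the binary log every 5 seconds when idle
		MaxFileSize:         aws.Int32(512), // cap .csv transfer files at 512 KB
		ParallelLoadThreads: aws.Int32(1),
		ServerTimezone:      aws.String("US/Pacific"),

		// Placeholder ARN and secret name; the role must allow iam:PassRole.
		SecretsManagerAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-secrets-access"),
		SecretsManagerSecretId:      aws.String("my-gcp-mysql-secret"),
	}
	fmt.Printf("%+v\n", settings)
}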
type IBMDb2Settings ¶
type IBMDb2Settings struct { // For ongoing replication (CDC), use CurrentLSN to specify a log sequence number // (LSN) where you want the replication to start. CurrentLsn *string // Database name for the endpoint. DatabaseName *string // If true, DMS saves any .csv files to the Db2 LUW target that were used to // replicate data. DMS uses these files for analysis and troubleshooting. // // The default value is false. KeepCsvFiles *bool // The amount of time (in milliseconds) before DMS times out operations performed // by DMS on the Db2 target. The default value is 1200 (20 minutes). LoadTimeout *int32 // Specifies the maximum size (in KB) of .csv files used to transfer data to Db2 // LUW. MaxFileSize *int32 // Maximum number of bytes per read, as a NUMBER value. The default is 64 KB. MaxKBytesPerRead *int32 // Endpoint connection password. Password *string // Endpoint TCP port. The default value is 50000. Port *int32 // The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the // trusted entity and grants the required permissions to access the value in // SecretsManagerSecret . The role must allow the iam:PassRole action. // SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager // secret that allows access to the Db2 LUW endpoint. // // You can specify one of two sets of values for these permissions. You can // specify the values for this setting and SecretsManagerSecretId . Or you can // specify clear-text values for UserName , Password , ServerName , and Port . You // can't specify both. For more information on creating this SecretsManagerSecret // and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to // access it, see [Using secrets to access Database Migration Service resources]in the Database Migration Service User Guide. // // [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager SecretsManagerAccessRoleArn *string // The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that // contains the Db2 LUW endpoint connection details. SecretsManagerSecretId *string // Fully qualified domain name of the endpoint. ServerName *string // Enables ongoing replication (CDC) as a BOOLEAN value. The default is true. SetDataCaptureChanges *bool // Endpoint connection user name. Username *string // The size (in KB) of the in-memory file write buffer used when generating .csv // files on the local disk on the DMS replication instance. The default value is // 1024 (1 MB). WriteBufferSize *int32 // contains filtered or unexported fields }
Provides information that defines an IBM Db2 LUW endpoint.
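Example (a hedged sketch): an IBMDb2Settings value for a Db2 LUW source with ongoing replication enabled, using only fields documented above; the host and database names are placeholders.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.IBMDb2Settings{
		DatabaseName:          aws.String("SAMPLE"),               // placeholder
		ServerName:            aws.String("db2.example.internal"), // placeholder FQDN
		Port:                  aws.Int32(50000),                   // default Db2 LUW port
		SetDataCaptureChanges: aws.Bool(true),                     // enable CDC (the default)
		KeepCsvFiles:          aws.Bool(false),
		MaxKBytesPerRead:      aws.Int32(64),
		WriteBufferSize:       aws.Int32(1024),
	}
	fmt.Println(aws.ToString(settings.ServerName))
}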
type InstanceProfile ¶ added in v1.30.0
type InstanceProfile struct { // The Availability Zone where the instance profile runs. AvailabilityZone *string // A description of the instance profile. Descriptions can have up to 31 // characters. A description can contain only ASCII letters, digits, and hyphens // ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and // can only begin with a letter. Description *string // The Amazon Resource Name (ARN) string that uniquely identifies the instance // profile. InstanceProfileArn *string // The time the instance profile was created. InstanceProfileCreationTime *time.Time // The user-friendly name for the instance profile. InstanceProfileName *string // The Amazon Resource Name (ARN) of the KMS key that is used to encrypt the // connection parameters for the instance profile. // // If you don't specify a value for the KmsKeyArn parameter, then DMS uses your // default encryption key. // // KMS creates the default encryption key for your Amazon Web Services account. // Your Amazon Web Services account has a different default encryption key for each // Amazon Web Services Region. KmsKeyArn *string // Specifies the network type for the instance profile. A value of IPV4 represents // an instance profile with IPv4 network type and only supports IPv4 addressing. A // value of IPV6 represents an instance profile with IPv6 network type and only // supports IPv6 addressing. A value of DUAL represents an instance profile with // dual network type that supports IPv4 and IPv6 addressing. NetworkType *string // Specifies the accessibility options for the instance profile. A value of true // represents an instance profile with a public IP address. A value of false // represents an instance profile with a private IP address. The default value is // true . PubliclyAccessible *bool // The identifier of the subnet group that is associated with the instance profile. SubnetGroupIdentifier *string // The VPC security groups that are used with the instance profile. The VPC // security group must work with the VPC containing the instance profile. VpcSecurityGroups []string // contains filtered or unexported fields }
Provides information that defines an instance profile.
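Example (illustrative only): summarizing InstanceProfile values such as those returned when you list instance profiles for schema conversion; the sample profile below is fabricated placeholder data.

package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	profiles := []types.InstanceProfile{{
		InstanceProfileName:         aws.String("sc-instance-profile"), // placeholder
		AvailabilityZone:            aws.String("us-east-1a"),
		NetworkType:                 aws.String("IPV4"),
		PubliclyAccessible:          aws.Bool(false),
		InstanceProfileCreationTime: aws.Time(time.Now()),
	}}
	for _, p := range profiles {
		fmt.Printf("%s az=%s network=%s public=%t created=%s\n",
			aws.ToString(p.InstanceProfileName),
			aws.ToString(p.AvailabilityZone),
			aws.ToString(p.NetworkType),
			aws.ToBool(p.PubliclyAccessible),
			aws.ToTime(p.InstanceProfileCreationTime).Format(time.RFC3339))
	}
}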
type InsufficientResourceCapacityFault ¶
type InsufficientResourceCapacityFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
There are not enough resources allocated to the database migration.
func (*InsufficientResourceCapacityFault) Error ¶
func (e *InsufficientResourceCapacityFault) Error() string
func (*InsufficientResourceCapacityFault) ErrorCode ¶
func (e *InsufficientResourceCapacityFault) ErrorCode() string
func (*InsufficientResourceCapacityFault) ErrorFault ¶
func (e *InsufficientResourceCapacityFault) ErrorFault() smithy.ErrorFault
func (*InsufficientResourceCapacityFault) ErrorMessage ¶
func (e *InsufficientResourceCapacityFault) ErrorMessage() string
type InvalidCertificateFault ¶
type InvalidCertificateFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The certificate was not valid.
func (*InvalidCertificateFault) Error ¶
func (e *InvalidCertificateFault) Error() string
func (*InvalidCertificateFault) ErrorCode ¶
func (e *InvalidCertificateFault) ErrorCode() string
func (*InvalidCertificateFault) ErrorFault ¶
func (e *InvalidCertificateFault) ErrorFault() smithy.ErrorFault
func (*InvalidCertificateFault) ErrorMessage ¶
func (e *InvalidCertificateFault) ErrorMessage() string
type InvalidOperationFault ¶ added in v1.19.0
type InvalidOperationFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The action or operation requested isn't valid.
func (*InvalidOperationFault) Error ¶ added in v1.19.0
func (e *InvalidOperationFault) Error() string
func (*InvalidOperationFault) ErrorCode ¶ added in v1.19.0
func (e *InvalidOperationFault) ErrorCode() string
func (*InvalidOperationFault) ErrorFault ¶ added in v1.19.0
func (e *InvalidOperationFault) ErrorFault() smithy.ErrorFault
func (*InvalidOperationFault) ErrorMessage ¶ added in v1.19.0
func (e *InvalidOperationFault) ErrorMessage() string
type InvalidResourceStateFault ¶
type InvalidResourceStateFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The resource is in a state that prevents it from being used for database migration.
func (*InvalidResourceStateFault) Error ¶
func (e *InvalidResourceStateFault) Error() string
func (*InvalidResourceStateFault) ErrorCode ¶
func (e *InvalidResourceStateFault) ErrorCode() string
func (*InvalidResourceStateFault) ErrorFault ¶
func (e *InvalidResourceStateFault) ErrorFault() smithy.ErrorFault
func (*InvalidResourceStateFault) ErrorMessage ¶
func (e *InvalidResourceStateFault) ErrorMessage() string
type InvalidSubnet ¶
type InvalidSubnet struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The subnet provided isn't valid.
func (*InvalidSubnet) Error ¶
func (e *InvalidSubnet) Error() string
func (*InvalidSubnet) ErrorCode ¶
func (e *InvalidSubnet) ErrorCode() string
func (*InvalidSubnet) ErrorFault ¶
func (e *InvalidSubnet) ErrorFault() smithy.ErrorFault
func (*InvalidSubnet) ErrorMessage ¶
func (e *InvalidSubnet) ErrorMessage() string
type InventoryData ¶ added in v1.19.0
type InventoryData struct { // The number of databases in the Fleet Advisor collector inventory. NumberOfDatabases *int32 // The number of schemas in the Fleet Advisor collector inventory. NumberOfSchemas *int32 // contains filtered or unexported fields }
Describes a Fleet Advisor collector inventory.
type KMSAccessDeniedFault ¶
type KMSAccessDeniedFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The ciphertext references a key that doesn't exist or that the DMS account doesn't have access to.
func (*KMSAccessDeniedFault) Error ¶
func (e *KMSAccessDeniedFault) Error() string
func (*KMSAccessDeniedFault) ErrorCode ¶
func (e *KMSAccessDeniedFault) ErrorCode() string
func (*KMSAccessDeniedFault) ErrorFault ¶
func (e *KMSAccessDeniedFault) ErrorFault() smithy.ErrorFault
func (*KMSAccessDeniedFault) ErrorMessage ¶
func (e *KMSAccessDeniedFault) ErrorMessage() string
type KMSDisabledFault ¶
type KMSDisabledFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The specified KMS key isn't enabled.
func (*KMSDisabledFault) Error ¶
func (e *KMSDisabledFault) Error() string
func (*KMSDisabledFault) ErrorCode ¶
func (e *KMSDisabledFault) ErrorCode() string
func (*KMSDisabledFault) ErrorFault ¶
func (e *KMSDisabledFault) ErrorFault() smithy.ErrorFault
func (*KMSDisabledFault) ErrorMessage ¶
func (e *KMSDisabledFault) ErrorMessage() string
type KMSFault ¶
type KMSFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
A Key Management Service (KMS) error is preventing access to KMS.
func (*KMSFault) Error ¶
func (e *KMSFault) Error() string
func (*KMSFault) ErrorCode ¶
func (e *KMSFault) ErrorCode() string
func (*KMSFault) ErrorFault ¶
func (e *KMSFault) ErrorFault() smithy.ErrorFault
func (*KMSFault) ErrorMessage ¶
func (e *KMSFault) ErrorMessage() string
type KMSInvalidStateFault ¶
type KMSInvalidStateFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The state of the specified KMS resource isn't valid for this request.
func (*KMSInvalidStateFault) Error ¶
func (e *KMSInvalidStateFault) Error() string
func (*KMSInvalidStateFault) ErrorCode ¶
func (e *KMSInvalidStateFault) ErrorCode() string
func (*KMSInvalidStateFault) ErrorFault ¶
func (e *KMSInvalidStateFault) ErrorFault() smithy.ErrorFault
func (*KMSInvalidStateFault) ErrorMessage ¶
func (e *KMSInvalidStateFault) ErrorMessage() string
type KMSKeyNotAccessibleFault ¶
type KMSKeyNotAccessibleFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
DMS cannot access the KMS key.
func (*KMSKeyNotAccessibleFault) Error ¶
func (e *KMSKeyNotAccessibleFault) Error() string
func (*KMSKeyNotAccessibleFault) ErrorCode ¶
func (e *KMSKeyNotAccessibleFault) ErrorCode() string
func (*KMSKeyNotAccessibleFault) ErrorFault ¶
func (e *KMSKeyNotAccessibleFault) ErrorFault() smithy.ErrorFault
func (*KMSKeyNotAccessibleFault) ErrorMessage ¶
func (e *KMSKeyNotAccessibleFault) ErrorMessage() string
type KMSNotFoundFault ¶
type KMSNotFoundFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The specified KMS entity or resource can't be found.
func (*KMSNotFoundFault) Error ¶
func (e *KMSNotFoundFault) Error() string
func (*KMSNotFoundFault) ErrorCode ¶
func (e *KMSNotFoundFault) ErrorCode() string
func (*KMSNotFoundFault) ErrorFault ¶
func (e *KMSNotFoundFault) ErrorFault() smithy.ErrorFault
func (*KMSNotFoundFault) ErrorMessage ¶
func (e *KMSNotFoundFault) ErrorMessage() string
type KMSThrottlingFault ¶
type KMSThrottlingFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
This request triggered KMS request throttling.
func (*KMSThrottlingFault) Error ¶
func (e *KMSThrottlingFault) Error() string
func (*KMSThrottlingFault) ErrorCode ¶
func (e *KMSThrottlingFault) ErrorCode() string
func (*KMSThrottlingFault) ErrorFault ¶
func (e *KMSThrottlingFault) ErrorFault() smithy.ErrorFault
func (*KMSThrottlingFault) ErrorMessage ¶
func (e *KMSThrottlingFault) ErrorMessage() string
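Example (a sketch of one possible error-handling policy, not prescribed by the SDK): the fault types above all satisfy the error interface and expose the ErrorCode and ErrorMessage accessors, so errors.As can distinguish them after a client call.

package main

import (
	"errors"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// classifyFault maps a returned error onto a coarse handling decision.
func classifyFault(err error) string {
	var invalidState *types.InvalidResourceStateFault
	var kmsThrottled *types.KMSThrottlingFault
	var kmsKey *types.KMSKeyNotAccessibleFault

	switch {
	case errors.As(err, &invalidState):
		return "wait for the resource to settle, then retry: " + invalidState.ErrorMessage()
	case errors.As(err, &kmsThrottled):
		return "back off and retry: " + kmsThrottled.ErrorMessage()
	case errors.As(err, &kmsKey):
		return "check the KMS key policy and grants: " + kmsKey.ErrorCode()
	default:
		return fmt.Sprintf("unhandled error: %v", err)
	}
}

func main() {
	// In practice err would come from a databasemigrationservice client call.
	fmt.Println(classifyFault(errors.New("placeholder error")))
}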
type KafkaSaslMechanism ¶ added in v1.25.0
type KafkaSaslMechanism string
const ( KafkaSaslMechanismScramSha512 KafkaSaslMechanism = "scram-sha-512" KafkaSaslMechanismPlain KafkaSaslMechanism = "plain" )
Enum values for KafkaSaslMechanism
func (KafkaSaslMechanism) Values ¶ added in v1.25.0
func (KafkaSaslMechanism) Values() []KafkaSaslMechanism
Values returns all known values for KafkaSaslMechanism. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
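Example (illustrative): validating a configured string against the enum's known values with Values(). Because the method has a value receiver, it can be called on a zero value; as noted above, the returned list is only as current as the client.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	candidate := "plain"
	known := false
	for _, v := range types.KafkaSaslMechanism("").Values() {
		if string(v) == candidate {
			known = true
			break
		}
	}
	fmt.Printf("%q is a known KafkaSaslMechanism: %t\n", candidate, known)
}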
type KafkaSecurityProtocol ¶ added in v1.3.0
type KafkaSecurityProtocol string
const ( KafkaSecurityProtocolPlaintext KafkaSecurityProtocol = "plaintext" KafkaSecurityProtocolSslAuthentication KafkaSecurityProtocol = "ssl-authentication" KafkaSecurityProtocolSslEncryption KafkaSecurityProtocol = "ssl-encryption" KafkaSecurityProtocolSaslSsl KafkaSecurityProtocol = "sasl-ssl" )
Enum values for KafkaSecurityProtocol
func (KafkaSecurityProtocol) Values ¶ added in v1.3.0
func (KafkaSecurityProtocol) Values() []KafkaSecurityProtocol
Values returns all known values for KafkaSecurityProtocol. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type KafkaSettings ¶
type KafkaSettings struct { // A comma-separated list of one or more broker locations in your Kafka cluster // that host your Kafka instance. Specify each broker location in the form // broker-hostname-or-ip:port . For example, // "ec2-12-345-678-901.compute-1.amazonaws.com:2345" . For more information and // examples of specifying a list of broker locations, see [Using Apache Kafka as a target for Database Migration Service]in the Database // Migration Service User Guide. // // [Using Apache Kafka as a target for Database Migration Service]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kafka.html Broker *string // Shows detailed control information for table definition, column definition, and // table and column changes in the Kafka message output. The default is false . IncludeControlDetails *bool // Include NULL and empty columns for records migrated to the endpoint. The // default is false . IncludeNullAndEmpty *bool // Shows the partition value within the Kafka message output unless the partition // type is schema-table-type . The default is false . IncludePartitionValue *bool // Includes any data definition language (DDL) operations that change the table in // the control data, such as rename-table , drop-table , add-column , drop-column , // and rename-column . The default is false . IncludeTableAlterOperations *bool // Provides detailed transaction information from the source database. This // information includes a commit timestamp, a log position, and values for // transaction_id , previous transaction_id , and transaction_record_id (the // record offset within a transaction). The default is false . IncludeTransactionDetails *bool // The output format for the records created on the endpoint. The message format // is JSON (default) or JSON_UNFORMATTED (a single line with no tab). MessageFormat MessageFormatValue // The maximum size in bytes for records created on the endpoint The default is // 1,000,000. MessageMaxBytes *int32 // Set this optional parameter to true to avoid adding a '0x' prefix to raw data // in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the // LOB column type in hexadecimal format moving from an Oracle source to a Kafka // target. Use the NoHexPrefix endpoint setting to enable migration of RAW data // type columns without adding the '0x' prefix. NoHexPrefix *bool // Prefixes schema and table names to partition values, when the partition type is // primary-key-type . Doing this increases data distribution among Kafka // partitions. For example, suppose that a SysBench schema has thousands of tables // and each table has only limited range for a primary key. In this case, the same // primary key is sent from thousands of tables to the same partition, which causes // throttling. The default is false . PartitionIncludeSchemaTable *bool // For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by // default. DMS versions 3.5.0 and later also support the PLAIN mechanism. To use // the PLAIN mechanism, set this parameter to PLAIN. SaslMechanism KafkaSaslMechanism // The secure password you created when you first set up your MSK cluster to // validate a client identity and make an encrypted connection between server and // client using SASL-SSL authentication. SaslPassword *string // The secure user name you created when you first set up your MSK cluster to // validate a client identity and make an encrypted connection between server and // client using SASL-SSL authentication. 
SaslUsername *string // Set secure connection to a Kafka target endpoint using Transport Layer Security // (TLS). Options include ssl-encryption , ssl-authentication , and sasl-ssl . // sasl-ssl requires SaslUsername and SaslPassword . SecurityProtocol KafkaSecurityProtocol // The Amazon Resource Name (ARN) for the private certificate authority (CA) cert // that DMS uses to securely connect to your Kafka target endpoint. SslCaCertificateArn *string // The Amazon Resource Name (ARN) of the client certificate used to securely // connect to a Kafka target endpoint. SslClientCertificateArn *string // The Amazon Resource Name (ARN) for the client private key used to securely // connect to a Kafka target endpoint. SslClientKeyArn *string // The password for the client private key used to securely connect to a Kafka // target endpoint. SslClientKeyPassword *string // Sets hostname verification for the certificate. This setting is supported in // DMS version 3.5.1 and later. SslEndpointIdentificationAlgorithm KafkaSslEndpointIdentificationAlgorithm // The topic to which you migrate the data. If you don't specify a topic, DMS // specifies "kafka-default-topic" as the migration topic. Topic *string // contains filtered or unexported fields }
Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data information.
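Example (a hedged sketch for an MSK target using SASL-SSL with SCRAM-SHA-512): all values are placeholders, and only enum constants defined in this package are used.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.KafkaSettings{
		Broker:        aws.String("b-1.example.abc123.kafka.us-east-1.amazonaws.com:9096"), // placeholder broker-hostname:port
		Topic:         aws.String("dms-cdc-topic"),
		MessageFormat: types.MessageFormatValueJsonUnformatted, // one JSON record per line

		SecurityProtocol: types.KafkaSecurityProtocolSaslSsl,
		SaslMechanism:    types.KafkaSaslMechanismScramSha512,
		SaslUsername:     aws.String("msk-scram-user"),     // placeholder
		SaslPassword:     aws.String("msk-scram-password"), // placeholder

		// Verify the broker host name (supported in DMS 3.5.1 and later).
		SslEndpointIdentificationAlgorithm: types.KafkaSslEndpointIdentificationAlgorithmHttps,

		// Spread hot primary keys across partitions by prefixing schema.table.
		PartitionIncludeSchemaTable: aws.Bool(true),
		IncludePartitionValue:       aws.Bool(true),
	}
	fmt.Println(aws.ToString(settings.Topic))
}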
type KafkaSslEndpointIdentificationAlgorithm ¶ added in v1.26.0
type KafkaSslEndpointIdentificationAlgorithm string
const ( KafkaSslEndpointIdentificationAlgorithmNone KafkaSslEndpointIdentificationAlgorithm = "none" KafkaSslEndpointIdentificationAlgorithmHttps KafkaSslEndpointIdentificationAlgorithm = "https" )
Enum values for KafkaSslEndpointIdentificationAlgorithm
func (KafkaSslEndpointIdentificationAlgorithm) Values ¶ added in v1.26.0
func (KafkaSslEndpointIdentificationAlgorithm) Values() []KafkaSslEndpointIdentificationAlgorithm
Values returns all known values for KafkaSslEndpointIdentificationAlgorithm. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type KinesisSettings ¶
type KinesisSettings struct { // Shows detailed control information for table definition, column definition, and // table and column changes in the Kinesis message output. The default is false . IncludeControlDetails *bool // Include NULL and empty columns for records migrated to the endpoint. The // default is false . IncludeNullAndEmpty *bool // Shows the partition value within the Kinesis message output, unless the // partition type is schema-table-type . The default is false . IncludePartitionValue *bool // Includes any data definition language (DDL) operations that change the table in // the control data, such as rename-table , drop-table , add-column , drop-column , // and rename-column . The default is false . IncludeTableAlterOperations *bool // Provides detailed transaction information from the source database. This // information includes a commit timestamp, a log position, and values for // transaction_id , previous transaction_id , and transaction_record_id (the // record offset within a transaction). The default is false . IncludeTransactionDetails *bool // The output format for the records created on the endpoint. The message format // is JSON (default) or JSON_UNFORMATTED (a single line with no tab). MessageFormat MessageFormatValue // Set this optional parameter to true to avoid adding a '0x' prefix to raw data // in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the // LOB column type in hexadecimal format moving from an Oracle source to an Amazon // Kinesis target. Use the NoHexPrefix endpoint setting to enable migration of RAW // data type columns without adding the '0x' prefix. NoHexPrefix *bool // Prefixes schema and table names to partition values, when the partition type is // primary-key-type . Doing this increases data distribution among Kinesis shards. // For example, suppose that a SysBench schema has thousands of tables and each // table has only limited range for a primary key. In this case, the same primary // key is sent from thousands of tables to the same shard, which causes throttling. // The default is false . PartitionIncludeSchemaTable *bool // The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to the // Kinesis data stream. The role must allow the iam:PassRole action. ServiceAccessRoleArn *string // The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint. StreamArn *string // contains filtered or unexported fields }
Provides information that describes an Amazon Kinesis Data Stream endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data information.
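Example (a minimal sketch): a KinesisSettings value for a data stream target with transaction and DDL detail enabled; the stream and role ARNs are placeholders.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.KinesisSettings{
		StreamArn:            aws.String("arn:aws:kinesis:us-east-1:111122223333:stream/dms-cdc"), // placeholder
		ServiceAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-kinesis-access"),     // placeholder; must allow iam:PassRole
		MessageFormat:        types.MessageFormatValueJson,

		IncludeTransactionDetails:   aws.Bool(true),
		IncludeTableAlterOperations: aws.Bool(true),
		PartitionIncludeSchemaTable: aws.Bool(true), // spreads hot primary keys across shards
	}
	fmt.Println(aws.ToString(settings.StreamArn))
}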
type Limitation ¶ added in v1.24.0
type Limitation struct { // The identifier of the source database. DatabaseId *string // A description of the limitation. Provides additional information about the // limitation, and includes recommended actions that you can take to address or // avoid this limitation. Description *string // The name of the target engine that Fleet Advisor should use in the target // engine recommendation. Valid values include "rds-aurora-mysql" , // "rds-aurora-postgresql" , "rds-mysql" , "rds-oracle" , "rds-sql-server" , and // "rds-postgresql" . EngineName *string // The impact of the limitation. You can use this parameter to prioritize // limitations that you want to address. Valid values include "Blocker" , "High" , // "Medium" , and "Low" . Impact *string // The name of the limitation. Describes unsupported database features, migration // action items, and other limitations. Name *string // The type of the limitation, such as action required, upgrade required, and // limited feature. Type *string // contains filtered or unexported fields }
Provides information about the limitations of target Amazon Web Services engines.
Your source database might include features that the target Amazon Web Services engine doesn't support. Fleet Advisor lists these features as limitations. You should consider these limitations during database migration. For each limitation, Fleet Advisor recommends an action that you can take to address or avoid this limitation.
type LongVarcharMappingType ¶ added in v1.26.0
type LongVarcharMappingType string
const ( LongVarcharMappingTypeWstring LongVarcharMappingType = "wstring" LongVarcharMappingTypeClob LongVarcharMappingType = "clob" LongVarcharMappingTypeNclob LongVarcharMappingType = "nclob" )
Enum values for LongVarcharMappingType
func (LongVarcharMappingType) Values ¶ added in v1.26.0
func (LongVarcharMappingType) Values() []LongVarcharMappingType
Values returns all known values for LongVarcharMappingType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MariaDbDataProviderSettings ¶ added in v1.31.0
type MariaDbDataProviderSettings struct { // The Amazon Resource Name (ARN) of the certificate used for SSL connection. CertificateArn *string // The port value for the MariaDB data provider Port *int32 // The name of the MariaDB server. ServerName *string // The SSL mode used to connect to the MariaDB data provider. The default value is // none . SslMode DmsSslModeValue // contains filtered or unexported fields }
Provides information that defines a MariaDB data provider.
type MessageFormatValue ¶
type MessageFormatValue string
const ( MessageFormatValueJson MessageFormatValue = "json" MessageFormatValueJsonUnformatted MessageFormatValue = "json-unformatted" )
Enum values for MessageFormatValue
func (MessageFormatValue) Values ¶ added in v0.29.0
func (MessageFormatValue) Values() []MessageFormatValue
Values returns all known values for MessageFormatValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MicrosoftSQLServerSettings ¶
type MicrosoftSQLServerSettings struct { // The maximum size of the packets (in bytes) used to transfer data using BCP. BcpPacketSize *int32 // Specifies a file group for the DMS internal tables. When the replication task // starts, all the internal DMS control tables (awsdms_ apply_exception, // awsdms_apply, awsdms_changes) are created for the specified file group. ControlTablesFileGroup *string // Database name for the endpoint. DatabaseName *string // Forces LOB lookup on inline LOB. ForceLobLookup *bool // Endpoint connection password. Password *string // Endpoint TCP port. Port *int32 // Cleans and recreates table metadata information on the replication instance // when a mismatch occurs. An example is a situation where running an alter DDL // statement on a table might result in different information about the table // cached in the replication instance. QuerySingleAlwaysOnNode *bool // When this attribute is set to Y , DMS only reads changes from transaction log // backups and doesn't read from the active transaction log file during ongoing // replication. Setting this parameter to Y enables you to control active // transaction log file growth during full load and ongoing replication tasks. // However, it can add some source latency to ongoing replication. ReadBackupOnly *bool // Use this attribute to minimize the need to access the backup log and enable DMS // to prevent truncation using one of the following two methods. // // Start transactions in the database: This is the default method. When this // method is used, DMS prevents TLOG truncation by mimicking a transaction in the // database. As long as such a transaction is open, changes that appear after the // transaction started aren't truncated. If you need Microsoft Replication to be // enabled in your database, then you must choose this method. // // Exclusively use sp_repldone within a single task: When this method is used, DMS // reads the changes and then uses sp_repldone to mark the TLOG transactions as // ready for truncation. Although this method doesn't involve any transactional // activities, it can only be used when Microsoft Replication isn't running. Also, // when using this method, only one DMS task can access the database at any given // time. Therefore, if you need to run parallel DMS tasks against the same // database, use the default method. SafeguardPolicy SafeguardPolicy // The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the // trusted entity and grants the required permissions to access the value in // SecretsManagerSecret . The role must allow the iam:PassRole action. // SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager // secret that allows access to the SQL Server endpoint. // // You can specify one of two sets of values for these permissions. You can // specify the values for this setting and SecretsManagerSecretId . Or you can // specify clear-text values for UserName , Password , ServerName , and Port . You // can't specify both. For more information on creating this SecretsManagerSecret // and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to // access it, see [Using secrets to access Database Migration Service resources]in the Database Migration Service User Guide. 
// // [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager SecretsManagerAccessRoleArn *string // The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that // contains the SQL Server endpoint connection details. SecretsManagerSecretId *string // Fully qualified domain name of the endpoint. For an Amazon RDS SQL Server // instance, this is the output of [DescribeDBInstances], in the [Endpoint].Address field. // // [Endpoint]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Endpoint.html // [DescribeDBInstances]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html ServerName *string // Indicates the mode used to fetch CDC data. TlogAccessMode TlogAccessMode // Use the TrimSpaceInChar source endpoint setting to right-trim data on CHAR and // NCHAR data types during migration. Setting TrimSpaceInChar does not left-trim // data. The default value is true . TrimSpaceInChar *bool // Use this attribute to transfer data for full-load operations using BCP. When // the target table contains an identity column that does not exist in the source // table, you must disable the use BCP for loading table option. UseBcpFullLoad *bool // When this attribute is set to Y , DMS processes third-party transaction log // backups if they are created in native format. UseThirdPartyBackupDevice *bool // Endpoint connection user name. Username *string // contains filtered or unexported fields }
Provides information that defines a Microsoft SQL Server endpoint.
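Example (an illustrative sketch): a SQL Server source configured to read changes from transaction log backups and to authenticate through Secrets Manager; the host name, role ARN, and secret name are placeholders.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.MicrosoftSQLServerSettings{
		DatabaseName:    aws.String("AdventureWorks"),             // placeholder
		ServerName:      aws.String("sqlserver.example.internal"), // placeholder FQDN
		Port:            aws.Int32(1433),
		ReadBackupOnly:  aws.Bool(true), // read changes from transaction log backups only
		TrimSpaceInChar: aws.Bool(true), // right-trim CHAR/NCHAR data (the default)

		// Placeholder ARN and secret name; the role must allow iam:PassRole.
		SecretsManagerAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-secrets-access"),
		SecretsManagerSecretId:      aws.String("my-sqlserver-secret"),
	}
	fmt.Println(aws.ToString(settings.ServerName))
}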
type MicrosoftSqlServerDataProviderSettings ¶ added in v1.30.0
type MicrosoftSqlServerDataProviderSettings struct { // The Amazon Resource Name (ARN) of the certificate used for SSL connection. CertificateArn *string // The database name on the Microsoft SQL Server data provider. DatabaseName *string // The port value for the Microsoft SQL Server data provider. Port *int32 // The name of the Microsoft SQL Server server. ServerName *string // The SSL mode used to connect to the Microsoft SQL Server data provider. The // default value is none . SslMode DmsSslModeValue // contains filtered or unexported fields }
Provides information that defines a Microsoft SQL Server data provider.
type MigrationProject ¶ added in v1.30.0
type MigrationProject struct { // A user-friendly description of the migration project. Description *string // The Amazon Resource Name (ARN) of the instance profile for your migration // project. InstanceProfileArn *string // The name of the associated instance profile. InstanceProfileName *string // The ARN string that uniquely identifies the migration project. MigrationProjectArn *string // The time when the migration project was created. MigrationProjectCreationTime *time.Time // The name of the migration project. MigrationProjectName *string // The schema conversion application attributes, including the Amazon S3 bucket // name and Amazon S3 role ARN. SchemaConversionApplicationAttributes *SCApplicationAttributes // Information about the source data provider, including the name or ARN, and // Secrets Manager parameters. SourceDataProviderDescriptors []DataProviderDescriptor // Information about the target data provider, including the name or ARN, and // Secrets Manager parameters. TargetDataProviderDescriptors []DataProviderDescriptor // The settings in JSON format for migration rules. Migration rules make it // possible for you to change the object names according to the rules that you // specify. For example, you can change an object name to lowercase or uppercase, // add or remove a prefix or suffix, or rename objects. TransformationRules *string // contains filtered or unexported fields }
Provides information that defines a migration project.
type MigrationTypeValue ¶
type MigrationTypeValue string
const ( MigrationTypeValueFullLoad MigrationTypeValue = "full-load" MigrationTypeValueCdc MigrationTypeValue = "cdc" MigrationTypeValueFullLoadAndCdc MigrationTypeValue = "full-load-and-cdc" )
Enum values for MigrationTypeValue
func (MigrationTypeValue) Values ¶ added in v0.29.0
func (MigrationTypeValue) Values() []MigrationTypeValue
Values returns all known values for MigrationTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MongoDbDataProviderSettings ¶ added in v1.31.0
type MongoDbDataProviderSettings struct { // The authentication method for connecting to the data provider. Valid values are // DEFAULT, MONGODB_CR, or SCRAM_SHA_1. AuthMechanism AuthMechanismValue // The MongoDB database name. This setting isn't used when AuthType is set to "no" // . // // The default is "admin" . AuthSource *string // The authentication type for the database connection. Valid values are PASSWORD // or NO. AuthType AuthTypeValue // The Amazon Resource Name (ARN) of the certificate used for SSL connection. CertificateArn *string // The database name on the MongoDB data provider. DatabaseName *string // The port value for the MongoDB data provider. Port *int32 // The name of the MongoDB server. ServerName *string // The SSL mode used to connect to the MongoDB data provider. The default value is // none . SslMode DmsSslModeValue // contains filtered or unexported fields }
Provides information that defines a MongoDB data provider.
type MongoDbSettings ¶
type MongoDbSettings struct { // The authentication mechanism you use to access the MongoDB source endpoint. // // For the default value, in MongoDB version 2.x, "default" is "mongodb_cr" . For // MongoDB version 3.x or later, "default" is "scram_sha_1" . This setting isn't // used when AuthType is set to "no" . AuthMechanism AuthMechanismValue // The MongoDB database name. This setting isn't used when AuthType is set to "no" // . // // The default is "admin" . AuthSource *string // The authentication type you use to access the MongoDB source endpoint. // // When when set to "no" , user name and password parameters are not used and can // be empty. AuthType AuthTypeValue // The database name on the MongoDB source endpoint. DatabaseName *string // Indicates the number of documents to preview to determine the document // organization. Use this setting when NestingLevel is set to "one" . // // Must be a positive value greater than 0 . Default value is 1000 . DocsToInvestigate *string // Specifies the document ID. Use this setting when NestingLevel is set to "none" // . // // Default value is "false" . ExtractDocId *string // The KMS key identifier that is used to encrypt the content on the replication // instance. If you don't specify a value for the KmsKeyId parameter, then DMS // uses your default encryption key. KMS creates the default encryption key for // your Amazon Web Services account. Your Amazon Web Services account has a // different default encryption key for each Amazon Web Services Region. KmsKeyId *string // Specifies either document or table mode. // // Default value is "none" . Specify "none" to use document mode. Specify "one" to // use table mode. NestingLevel NestingLevelValue // The password for the user account you use to access the MongoDB source // endpoint. Password *string // The port value for the MongoDB source endpoint. Port *int32 // If true , DMS replicates data to shard collections. DMS only uses this setting // if the target endpoint is a DocumentDB elastic cluster. // // When this setting is true , note the following: // // - You must set TargetTablePrepMode to nothing . // // - DMS automatically sets useUpdateLookup to false . ReplicateShardCollections *bool // The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the // trusted entity and grants the required permissions to access the value in // SecretsManagerSecret . The role must allow the iam:PassRole action. // SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager // secret that allows access to the MongoDB endpoint. // // You can specify one of two sets of values for these permissions. You can // specify the values for this setting and SecretsManagerSecretId . Or you can // specify clear-text values for UserName , Password , ServerName , and Port . You // can't specify both. For more information on creating this SecretsManagerSecret // and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to // access it, see [Using secrets to access Database Migration Service resources]in the Database Migration Service User Guide. // // [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager SecretsManagerAccessRoleArn *string // The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that // contains the MongoDB endpoint connection details. SecretsManagerSecretId *string // The name of the server on the MongoDB source endpoint. 
For MongoDB Atlas, // provide the server name for any of the servers in the replication set. ServerName *string // If true , DMS retrieves the entire document from the MongoDB source during // migration. This may cause a migration failure if the server response exceeds // bandwidth limits. To fetch only updates and deletes during migration, set this // parameter to false . UseUpdateLookUp *bool // The user name you use to access the MongoDB source endpoint. Username *string // contains filtered or unexported fields }
Provides information that defines a MongoDB endpoint.
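Example (a hedged sketch): a MongoDB source in table mode (NestingLevel "one") with credentials kept in Secrets Manager; the server name, role ARN, and secret name are placeholders.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.MongoDbSettings{
		ServerName:   aws.String("cluster0.example.mongodb.net"), // for Atlas, any server in the replica set
		Port:         aws.Int32(27017),
		DatabaseName: aws.String("appdb"), // placeholder
		AuthSource:   aws.String("admin"), // the default authentication database

		NestingLevel:      types.NestingLevelValueOne, // table mode
		DocsToInvestigate: aws.String("1000"),         // only used when NestingLevel is "one"

		// Placeholder ARN and secret name; the role must allow iam:PassRole.
		SecretsManagerAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-secrets-access"),
		SecretsManagerSecretId:      aws.String("my-mongodb-secret"),
	}
	fmt.Println(aws.ToString(settings.ServerName))
}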
type MySQLSettings ¶
type MySQLSettings struct { // Specifies a script to run immediately after DMS connects to the endpoint. The // migration task continues running regardless if the SQL statement succeeds or // fails. // // For this parameter, provide the code of the script itself, not the name of a // file containing the script. AfterConnectScript *string // Cleans and recreates table metadata information on the replication instance // when a mismatch occurs. For example, in a situation where running an alter DDL // on the table could result in different information about the table cached in the // replication instance. CleanSourceMetadataOnMismatch *bool // Database name for the endpoint. For a MySQL source or target endpoint, don't // explicitly specify the database using the DatabaseName request parameter on // either the CreateEndpoint or ModifyEndpoint API call. Specifying DatabaseName // when you create or modify a MySQL endpoint replicates all the task tables to // this single database. For MySQL endpoints, you specify the database only when // you specify the schema in the table-mapping rules of the DMS task. DatabaseName *string // Specifies how often to check the binary log for new changes/events when the // database is idle. The default is five seconds. // // Example: eventsPollInterval=5; // // In the example, DMS checks for changes in the binary logs every five seconds. EventsPollInterval *int32 // Sets the client statement timeout (in seconds) for a MySQL source endpoint. ExecuteTimeout *int32 // Specifies the maximum size (in KB) of any .csv file used to transfer data to a // MySQL-compatible database. // // Example: maxFileSize=512 MaxFileSize *int32 // Improves performance when loading data into the MySQL-compatible target // database. Specifies how many threads to use to load the data into the // MySQL-compatible target database. Setting a large number of threads can have an // adverse effect on database performance, because a separate connection is // required for each thread. The default is one. // // Example: parallelLoadThreads=1 ParallelLoadThreads *int32 // Endpoint connection password. Password *string // Endpoint TCP port. Port *int32 // The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the // trusted entity and grants the required permissions to access the value in // SecretsManagerSecret . The role must allow the iam:PassRole action. // SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager // secret that allows access to the MySQL endpoint. // // You can specify one of two sets of values for these permissions. You can // specify the values for this setting and SecretsManagerSecretId . Or you can // specify clear-text values for UserName , Password , ServerName , and Port . You // can't specify both. For more information on creating this SecretsManagerSecret // and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to // access it, see [Using secrets to access Database Migration Service resources]in the Database Migration Service User Guide. // // [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager SecretsManagerAccessRoleArn *string // The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that // contains the MySQL endpoint connection details. SecretsManagerSecretId *string // The host name of the endpoint database. 
// // For an Amazon RDS MySQL instance, this is the output of [DescribeDBInstances], in the [Endpoint].Address field. // // For an Aurora MySQL instance, this is the output of [DescribeDBClusters], in the Endpoint field. // // [Endpoint]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Endpoint.html // [DescribeDBInstances]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html // [DescribeDBClusters]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html ServerName *string // Specifies the time zone for the source MySQL database. // // Example: serverTimezone=US/Pacific; // // Note: Do not enclose time zones in single quotes. ServerTimezone *string // Specifies where to migrate source tables on the target, either to a single // database or multiple databases. If you specify SPECIFIC_DATABASE , specify the // database name using the DatabaseName parameter of the Endpoint object. // // Example: targetDbType=MULTIPLE_DATABASES TargetDbType TargetDbType // Endpoint connection user name. Username *string // contains filtered or unexported fields }
Provides information that defines a MySQL endpoint.
type MySqlDataProviderSettings ¶ added in v1.30.0
type MySqlDataProviderSettings struct { // The Amazon Resource Name (ARN) of the certificate used for SSL connection. CertificateArn *string // The port value for the MySQL data provider. Port *int32 // The name of the MySQL server. ServerName *string // The SSL mode used to connect to the MySQL data provider. The default value is // none . SslMode DmsSslModeValue // contains filtered or unexported fields }
Provides information that defines a MySQL data provider.
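Example (a sketch under the assumption that the DataProviderSettingsMemberMySqlSettings wrapper follows the SDK's usual tagged-union layout, carrying the settings in its Value field): wrapping MySqlDataProviderSettings for use wherever a DataProviderSettings union value is expected. The server name and certificate ARN are placeholders.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	var settings types.DataProviderSettings = &types.DataProviderSettingsMemberMySqlSettings{
		Value: types.MySqlDataProviderSettings{
			ServerName:     aws.String("mysql.example.internal"), // placeholder
			Port:           aws.Int32(3306),
			CertificateArn: aws.String("arn:aws:dms:us-east-1:111122223333:cert:EXAMPLE"), // placeholder
			// SslMode defaults to none when left unset.
		},
	}
	fmt.Printf("%T\n", settings)
}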
type NeptuneSettings ¶
type NeptuneSettings struct { // A folder path where you want DMS to store migrated graph data in the S3 bucket // specified by S3BucketName // // This member is required. S3BucketFolder *string // The name of the Amazon S3 bucket where DMS can temporarily store migrated graph // data in .csv files before bulk-loading it to the Neptune target database. DMS // maps the SQL source data to graph data before storing it in these .csv files. // // This member is required. S3BucketName *string // The number of milliseconds for DMS to wait to retry a bulk-load of migrated // graph data to the Neptune target database before raising an error. The default // is 250. ErrorRetryDuration *int32 // If you want Identity and Access Management (IAM) authorization enabled for this // endpoint, set this parameter to true . Then attach the appropriate IAM policy // document to your service role specified by ServiceAccessRoleArn . The default is // false . IamAuthEnabled *bool // The maximum size in kilobytes of migrated graph data stored in a .csv file // before DMS bulk-loads the data to the Neptune target database. The default is // 1,048,576 KB. If the bulk load is successful, DMS clears the bucket, ready to // store the next batch of migrated graph data. MaxFileSize *int32 // The number of times for DMS to retry a bulk load of migrated graph data to the // Neptune target database before raising an error. The default is 5. MaxRetryCount *int32 // The Amazon Resource Name (ARN) of the service role that you created for the // Neptune target endpoint. The role must allow the iam:PassRole action. For more // information, see [Creating an IAM Service Role for Accessing Amazon Neptune as a Target]in the Database Migration Service User Guide. // // [Creating an IAM Service Role for Accessing Amazon Neptune as a Target]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Neptune.html#CHAP_Target.Neptune.ServiceRole ServiceAccessRoleArn *string // contains filtered or unexported fields }
Provides information that defines an Amazon Neptune endpoint.
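Example (a minimal sketch): a Neptune target staging graph data in Amazon S3 with IAM authorization enabled; the bucket, folder, and role ARN are placeholders, and the numeric values repeat the documented defaults.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.NeptuneSettings{
		S3BucketName:         aws.String("my-neptune-staging-bucket"),                        // required; placeholder
		S3BucketFolder:       aws.String("dms/graph-load/"),                                  // required; placeholder
		ServiceAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/neptune-load-role"), // placeholder; must allow iam:PassRole
		IamAuthEnabled:       aws.Bool(true),

		MaxFileSize:        aws.Int32(1048576), // KB of graph data per .csv before each bulk load (default)
		MaxRetryCount:      aws.Int32(5),       // default
		ErrorRetryDuration: aws.Int32(250),     // milliseconds between bulk-load retries (default)
	}
	fmt.Println(aws.ToString(settings.S3BucketName))
}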
type NestingLevelValue ¶
type NestingLevelValue string
const ( NestingLevelValueNone NestingLevelValue = "none" NestingLevelValueOne NestingLevelValue = "one" )
Enum values for NestingLevelValue
func (NestingLevelValue) Values ¶ added in v0.29.0
func (NestingLevelValue) Values() []NestingLevelValue
Values returns all known values for NestingLevelValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type OracleDataProviderSettings ¶ added in v1.30.0
type OracleDataProviderSettings struct { // The address of your Oracle Automatic Storage Management (ASM) server. You can // set this value from the asm_server value. You set asm_server as part of the // extra connection attribute string to access an Oracle server with Binary Reader // that uses ASM. For more information, see [Configuration for change data capture (CDC) on an Oracle source database]. // // [Configuration for change data capture (CDC) on an Oracle source database]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC.Configuration AsmServer *string // The Amazon Resource Name (ARN) of the certificate used for SSL connection. CertificateArn *string // The database name on the Oracle data provider. DatabaseName *string // The port value for the Oracle data provider. Port *int32 // The ARN of the IAM role that provides access to the secret in Secrets Manager // that contains the Oracle ASM connection details. SecretsManagerOracleAsmAccessRoleArn *string // The identifier of the secret in Secrets Manager that contains the Oracle ASM // connection details. // // Required only if your data provider uses the Oracle ASM server. SecretsManagerOracleAsmSecretId *string // The ARN of the IAM role that provides access to the secret in Secrets Manager // that contains the TDE password. SecretsManagerSecurityDbEncryptionAccessRoleArn *string // The identifier of the secret in Secrets Manager that contains the transparent // data encryption (TDE) password. DMS requires this password to access Oracle redo // logs encrypted by TDE using Binary Reader. SecretsManagerSecurityDbEncryptionSecretId *string // The name of the Oracle server. ServerName *string // The SSL mode used to connect to the Oracle data provider. The default value is // none . SslMode DmsSslModeValue // contains filtered or unexported fields }
Provides information that defines an Oracle data provider.
type OracleSettings ¶
type OracleSettings struct { // Set this attribute to false in order to use the Binary Reader to capture change // data for an Amazon RDS for Oracle as the source. This tells the DMS instance to // not access redo logs through any specified path prefix replacement using direct // file access. AccessAlternateDirectly *bool // Set this attribute to set up table-level supplemental logging for the Oracle // database. This attribute enables PRIMARY KEY supplemental logging on all tables // selected for a migration task. // // If you use this option, you still need to enable database-level supplemental // logging. AddSupplementalLogging *bool // Set this attribute with ArchivedLogDestId in a primary/ standby setup. This // attribute is useful in the case of a switchover. In this case, DMS needs to know // which destination to get archive redo logs from to read changes. This need // arises because the previous primary instance is now a standby instance after // switchover. // // Although DMS supports the use of the Oracle RESETLOGS option to open the // database, never use RESETLOGS unless necessary. For additional information // about RESETLOGS , see [RMAN Data Repair Concepts] in the Oracle Database Backup and Recovery User's Guide. // // [RMAN Data Repair Concepts]: https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/rman-data-repair-concepts.html#GUID-1805CCF7-4AF2-482D-B65A-998192F89C2B AdditionalArchivedLogDestId *int32 // Set this attribute to true to enable replication of Oracle tables containing // columns that are nested tables or defined types. AllowSelectNestedTables *bool // Specifies the ID of the destination for the archived redo logs. This value // should be the same as a number in the dest_id column of the v$archived_log // view. If you work with an additional redo log destination, use the // AdditionalArchivedLogDestId option to specify the additional destination ID. // Doing this improves performance by ensuring that the correct logs are accessed // from the outset. ArchivedLogDestId *int32 // When this field is set to Y , DMS only accesses the archived redo logs. If the // archived redo logs are stored on Automatic Storage Management (ASM) only, the // DMS user account needs to be granted ASM privileges. ArchivedLogsOnly *bool // For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) // password. You can set this value from the asm_user_password value. You set // this value as part of the comma-separated value that you set to the Password // request parameter when you create the endpoint to access transaction logs using // Binary Reader. For more information, see [Configuration for change data capture (CDC) on an Oracle source database]. // // [Configuration for change data capture (CDC) on an Oracle source database]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC.Configuration AsmPassword *string // For an Oracle source endpoint, your ASM server address. You can set this value // from the asm_server value. You set asm_server as part of the extra connection // attribute string to access an Oracle server with Binary Reader that uses ASM. // For more information, see [Configuration for change data capture (CDC) on an Oracle source database]. 
// // [Configuration for change data capture (CDC) on an Oracle source database]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC.Configuration AsmServer *string // For an Oracle source endpoint, your ASM user name. You can set this value from // the asm_user value. You set asm_user as part of the extra connection attribute // string to access an Oracle server with Binary Reader that uses ASM. For more // information, see [Configuration for change data capture (CDC) on an Oracle source database]. // // [Configuration for change data capture (CDC) on an Oracle source database]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC.Configuration AsmUser *string // Specifies whether the length of a character column is in bytes or in // characters. To indicate that the character column length is in characters, set // this attribute to CHAR . Otherwise, the character column length is in bytes. // // Example: charLengthSemantics=CHAR; CharLengthSemantics CharLengthSemantics // When true, converts timestamps with the timezone datatype to their UTC value. ConvertTimestampWithZoneToUTC *bool // Database name for the endpoint. DatabaseName *string // When set to true , this attribute helps to increase the commit rate on the // Oracle target database by writing directly to tables and not writing a trail to // database logs. DirectPathNoLog *bool // When set to true , this attribute specifies a parallel load when // useDirectPathFullLoad is set to Y . This attribute also only applies when you // use the DMS parallel load feature. Note that the target table cannot have any // constraints or indexes. DirectPathParallelLoad *bool // Set this attribute to enable homogenous tablespace replication and create // existing tables or indexes under the same tablespace on the target. EnableHomogenousTablespace *bool // Specifies the IDs of one more destinations for one or more archived redo logs. // These IDs are the values of the dest_id column in the v$archived_log view. Use // this setting with the archivedLogDestId extra connection attribute in a // primary-to-single setup or a primary-to-multiple-standby setup. // // This setting is useful in a switchover when you use an Oracle Data Guard // database as a source. In this case, DMS needs information about what destination // to get archive redo logs from to read changes. DMS needs this because after the // switchover the previous primary is a standby instance. For example, in a // primary-to-single standby setup you might apply the following settings. // // archivedLogDestId=1; ExtraArchivedLogDestIds=[2] // // In a primary-to-multiple-standby setup, you might apply the following settings. // // archivedLogDestId=1; ExtraArchivedLogDestIds=[2,3,4] // // Although DMS supports the use of the Oracle RESETLOGS option to open the // database, never use RESETLOGS unless it's necessary. For more information about // RESETLOGS , see [RMAN Data Repair Concepts] in the Oracle Database Backup and Recovery User's Guide. // // [RMAN Data Repair Concepts]: https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/rman-data-repair-concepts.html#GUID-1805CCF7-4AF2-482D-B65A-998192F89C2B ExtraArchivedLogDestIds []int32 // When set to true , this attribute causes a task to fail if the actual size of an // LOB column is greater than the specified LobMaxSize . 
// // If a task is set to limited LOB mode and this option is set to true , the task // fails instead of truncating the LOB data. FailTasksOnLobTruncation *bool // Specifies the number scale. You can select a scale up to 38, or you can select // FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10. // // Example: numberDataTypeScale=12 NumberDatatypeScale *int32 // The timeframe in minutes to check for open transactions for a CDC-only task. // // You can specify an integer value between 0 (the default) and 240 (the maximum). // // This parameter is only valid in DMS version 3.5.0 and later. DMS supports a // window of up to 9.5 hours including the value for OpenTransactionWindow . OpenTransactionWindow *int32 // Set this string attribute to the required value in order to use the Binary // Reader to capture change data for an Amazon RDS for Oracle as the source. This // value specifies the default Oracle root used to access the redo logs. OraclePathPrefix *string // Set this attribute to change the number of threads that DMS configures to // perform a change data capture (CDC) load using Oracle Automatic Storage // Management (ASM). You can specify an integer value between 2 (the default) and 8 // (the maximum). Use this attribute together with the readAheadBlocks attribute. ParallelAsmReadThreads *int32 // Endpoint connection password. Password *string // Endpoint TCP port. Port *int32 // Set this attribute to change the number of read-ahead blocks that DMS // configures to perform a change data capture (CDC) load using Oracle Automatic // Storage Management (ASM). You can specify an integer value between 1000 (the // default) and 200,000 (the maximum). ReadAheadBlocks *int32 // When set to true , this attribute supports tablespace replication. ReadTableSpaceName *bool // Set this attribute to true in order to use the Binary Reader to capture change // data for an Amazon RDS for Oracle as the source. This setting tells DMS instance // to replace the default Oracle root with the specified usePathPrefix setting to // access the redo logs. ReplacePathPrefix *bool // Specifies the number of seconds that the system waits before resending a query. // // Example: retryInterval=6; RetryInterval *int32 // The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the // trusted entity and grants the required permissions to access the value in // SecretsManagerSecret . The role must allow the iam:PassRole action. // SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager // secret that allows access to the Oracle endpoint. // // You can specify one of two sets of values for these permissions. You can // specify the values for this setting and SecretsManagerSecretId . Or you can // specify clear-text values for UserName , Password , ServerName , and Port . You // can't specify both. For more information on creating this SecretsManagerSecret // and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to // access it, see [Using secrets to access Database Migration Service resources]in the Database Migration Service User Guide. // // [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager SecretsManagerAccessRoleArn *string // Required only if your Oracle endpoint uses Automatic Storage Management (ASM). 
// The full ARN of the IAM role that specifies DMS as the trusted entity and grants // the required permissions to access the SecretsManagerOracleAsmSecret . This // SecretsManagerOracleAsmSecret has the secret value that allows access to the // Oracle ASM of the endpoint. // // You can specify one of two sets of values for these permissions. You can // specify the values for this setting and SecretsManagerOracleAsmSecretId . Or you // can specify clear-text values for AsmUser , AsmPassword , and AsmServerName . // You can't specify both. For more information on creating this // SecretsManagerOracleAsmSecret and the SecretsManagerOracleAsmAccessRoleArn and // SecretsManagerOracleAsmSecretId required to access it, see [Using secrets to access Database Migration Service resources] in the Database // Migration Service User Guide. // // [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager SecretsManagerOracleAsmAccessRoleArn *string // Required only if your Oracle endpoint uses Automatic Storage Management (ASM). // The full ARN, partial ARN, or friendly name of the SecretsManagerOracleAsmSecret // that contains the Oracle ASM connection details for the Oracle endpoint. SecretsManagerOracleAsmSecretId *string // The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that // contains the Oracle endpoint connection details. SecretsManagerSecretId *string // For an Oracle source endpoint, the transparent data encryption (TDE) password // required by DMS to access Oracle redo logs encrypted by TDE using Binary // Reader. It is also the TDE_Password part of the comma-separated value you set // to the Password request parameter when you create the endpoint. The // SecurityDbEncryption setting is related to this SecurityDbEncryptionName // setting. For more information, see [Supported encryption methods for using Oracle as a source for DMS]in the Database Migration Service User // Guide. // // [Supported encryption methods for using Oracle as a source for DMS]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.Encryption SecurityDbEncryption *string // For an Oracle source endpoint, the name of a key used for the transparent data // encryption (TDE) of the columns and tablespaces in an Oracle source database // that is encrypted using TDE. The key value is the value of the // SecurityDbEncryption setting. For more information on setting the key name value // of SecurityDbEncryptionName , see the information and example for setting the // securityDbEncryptionName extra connection attribute in [Supported encryption methods for using Oracle as a source for DMS] in the Database // Migration Service User Guide. // // [Supported encryption methods for using Oracle as a source for DMS]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.Encryption SecurityDbEncryptionName *string // Fully qualified domain name of the endpoint. // // For an Amazon RDS Oracle instance, this is the output of [DescribeDBInstances], in the [Endpoint].Address // field. // // [Endpoint]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Endpoint.html // [DescribeDBInstances]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html ServerName *string // Use this attribute to convert SDO_GEOMETRY to GEOJSON format.
By default, DMS // calls the SDO2GEOJSON custom function if present and accessible. Or you can // create your own custom function that mimics the operation of SDOGEOJSON and set // SpatialDataOptionToGeoJsonFunctionName to call it instead. SpatialDataOptionToGeoJsonFunctionName *string // Use this attribute to specify a time in minutes for the delay in standby sync. // If the source is an Oracle Active Data Guard standby database, use this // attribute to specify the time lag between primary and standby databases. // // In DMS, you can create an Oracle CDC task that uses an Active Data Guard // standby instance as a source for replicating ongoing changes. Doing this // eliminates the need to connect to an active database that might be in // production. StandbyDelayTime *int32 // Use the TrimSpaceInChar source endpoint setting to trim data on CHAR and NCHAR // data types during migration. The default value is true . TrimSpaceInChar *bool // Set this attribute to true in order to use the Binary Reader to capture change // data for an Amazon RDS for Oracle as the source. This tells the DMS instance to // use any specified prefix replacement to access all online redo logs. UseAlternateFolderForOnline *bool // Set this attribute to Y to capture change data using the Binary Reader utility. // Set UseLogminerReader to N to set this attribute to Y. To use Binary Reader // with Amazon RDS for Oracle as the source, you set additional attributes. For // more information about using this setting with Oracle Automatic Storage // Management (ASM), see [Using Oracle LogMiner or DMS Binary Reader for CDC]. // // [Using Oracle LogMiner or DMS Binary Reader for CDC]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC UseBFile *bool // Set this attribute to Y to have DMS use a direct path full load. Specify this // value to use the direct path protocol in the Oracle Call Interface (OCI). By // using this OCI protocol, you can bulk-load Oracle target tables during a full // load. UseDirectPathFullLoad *bool // Set this attribute to Y to capture change data using the Oracle LogMiner // utility (the default). Set this attribute to N if you want to access the redo // logs as a binary file. When you set UseLogminerReader to N, also set UseBfile // to Y. For more information on this setting and using Oracle ASM, see [Using Oracle LogMiner or DMS Binary Reader for CDC]in the DMS // User Guide. // // [Using Oracle LogMiner or DMS Binary Reader for CDC]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.CDC UseLogminerReader *bool // Set this string attribute to the required value in order to use the Binary // Reader to capture change data for an Amazon RDS for Oracle as the source. This // value specifies the path prefix used to replace the default Oracle root to // access the redo logs. UsePathPrefix *string // Endpoint connection user name. Username *string // contains filtered or unexported fields }
Provides information that defines an Oracle endpoint.
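To make the Binary Reader attributes above concrete, here is a minimal sketch of an OracleSettings value for a source endpoint that captures changes with Binary Reader instead of LogMiner and resolves both the endpoint and ASM credentials through Secrets Manager. All ARNs, secret identifiers, and tuning numbers are illustrative placeholders, not recommended values:

package dmsexample

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// binaryReaderOracleSettings sketches an Oracle source configured for CDC with
// Binary Reader and ASM, with credentials looked up in Secrets Manager.
func binaryReaderOracleSettings() *types.OracleSettings {
	return &types.OracleSettings{
		DatabaseName: aws.String("ORCL"), // placeholder
		Port:         aws.Int32(1521),

		// Switch from LogMiner (the default) to Binary Reader.
		UseLogminerReader: aws.Bool(false),
		UseBFile:          aws.Bool(true),

		// Optional tuning for an ASM-backed source: read only archived redo
		// logs and raise the ASM read parallelism and read-ahead.
		ArchivedLogsOnly:       aws.Bool(true),
		ParallelAsmReadThreads: aws.Int32(4),
		ReadAheadBlocks:        aws.Int32(100000),

		// Endpoint and ASM credentials come from Secrets Manager (placeholders).
		SecretsManagerAccessRoleArn:          aws.String("arn:aws:iam::111122223333:role/dms-secrets-access"),
		SecretsManagerSecretId:               aws.String("dms/oracle/endpoint"),
		SecretsManagerOracleAsmAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-asm-access"),
		SecretsManagerOracleAsmSecretId:      aws.String("dms/oracle/asm"),
	}
}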
type OrderableReplicationInstance ¶
type OrderableReplicationInstance struct { // List of Availability Zones for this replication instance. AvailabilityZones []string // The default amount of storage (in gigabytes) that is allocated for the // replication instance. DefaultAllocatedStorage int32 // The version of the replication engine. EngineVersion *string // The amount of storage (in gigabytes) that is allocated for the replication // instance. IncludedAllocatedStorage int32 // The maximum amount of storage (in gigabytes) that can be allocated for the // replication instance. MaxAllocatedStorage int32 // The minimum amount of storage (in gigabytes) that can be allocated for the // replication instance. MinAllocatedStorage int32 // The value returned when the specified EngineVersion of the replication instance // is in Beta or test mode. This indicates some features might not work as // expected. // // DMS supports the ReleaseStatus parameter in versions 3.1.4 and later. ReleaseStatus ReleaseStatusValues // The compute and memory capacity of the replication instance as defined for the // specified replication instance class. For example, to specify the instance class // dms.c4.large, set this parameter to "dms.c4.large" . // // For more information on the settings and capacities for the available // replication instance classes, see [Selecting the right DMS replication instance for your migration]. // // [Selecting the right DMS replication instance for your migration]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html#CHAP_ReplicationInstance.InDepth ReplicationInstanceClass *string // The type of storage used by the replication instance. StorageType *string // contains filtered or unexported fields }
In response to the DescribeOrderableReplicationInstances operation, this object describes an available replication instance. This description includes the replication instance's type, engine version, and allocated storage.
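As a small usage sketch, the helper below scans a slice of OrderableReplicationInstance values, for example the result of a DescribeOrderableReplicationInstances call, and keeps the instance classes whose engine version is not in beta and whose storage ceiling meets a minimum. The threshold and selection criteria are just an example:

package dmsexample

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// productionClasses returns the names of orderable replication instance classes
// whose engine version is generally available (not beta) and whose maximum
// allocatable storage meets the caller-supplied minimum, in gigabytes.
func productionClasses(orderable []types.OrderableReplicationInstance, minStorageGB int32) []string {
	var classes []string
	for _, o := range orderable {
		if o.ReleaseStatus == types.ReleaseStatusValuesBeta {
			continue // skip classes whose engine version is in Beta or test mode
		}
		if o.MaxAllocatedStorage < minStorageGB {
			continue
		}
		classes = append(classes, aws.ToString(o.ReplicationInstanceClass))
	}
	return classes
}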
type OriginTypeValue ¶ added in v1.30.0
type OriginTypeValue string
const ( OriginTypeValueSource OriginTypeValue = "SOURCE" OriginTypeValueTarget OriginTypeValue = "TARGET" )
Enum values for OriginTypeValue
func (OriginTypeValue) Values ¶ added in v1.30.0
func (OriginTypeValue) Values() []OriginTypeValue
Values returns all known values for OriginTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
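The Values method has the same shape on every enum type in this package, so it can back a simple input-validation helper like the sketch below, with the caveat above that values added by the service may not yet appear in the client's list:

package dmsexample

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// parseOriginType maps a raw string onto a known OriginTypeValue, returning an
// error for anything the current client build does not recognize.
func parseOriginType(raw string) (types.OriginTypeValue, error) {
	for _, v := range types.OriginTypeValue("").Values() {
		if string(v) == raw {
			return v, nil
		}
	}
	return "", fmt.Errorf("unknown origin type %q", raw)
}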
type ParquetVersionValue ¶
type ParquetVersionValue string
const ( ParquetVersionValueParquet10 ParquetVersionValue = "parquet-1-0" ParquetVersionValueParquet20 ParquetVersionValue = "parquet-2-0" )
Enum values for ParquetVersionValue
func (ParquetVersionValue) Values ¶ added in v0.29.0
func (ParquetVersionValue) Values() []ParquetVersionValue
Values returns all known values for ParquetVersionValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type PendingMaintenanceAction ¶
type PendingMaintenanceAction struct { // The type of pending maintenance action that is available for the resource. Action *string // The date of the maintenance window when the action is to be applied. The // maintenance action is applied to the resource during its first maintenance // window after this date. If this date is specified, any next-maintenance opt-in // requests are ignored. AutoAppliedAfterDate *time.Time // The effective date when the pending maintenance action will be applied to the // resource. This date takes into account opt-in requests received from the // ApplyPendingMaintenanceAction API operation, and also the AutoAppliedAfterDate // and ForcedApplyDate parameter values. This value is blank if an opt-in request // has not been received and nothing has been specified for AutoAppliedAfterDate // or ForcedApplyDate . CurrentApplyDate *time.Time // A description providing more detail about the maintenance action. Description *string // The date when the maintenance action will be automatically applied. The // maintenance action is applied to the resource on this date regardless of the // maintenance window for the resource. If this date is specified, any immediate // opt-in requests are ignored. ForcedApplyDate *time.Time // The type of opt-in request that has been received for the resource. OptInStatus *string // contains filtered or unexported fields }
Describes a maintenance action pending for a DMS resource, including when and how it will be applied. This data type is a response element to the DescribePendingMaintenanceActions operation.
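As an illustrative sketch, the helper below picks out the pending maintenance actions whose effective apply date (CurrentApplyDate) falls inside a caller-chosen window; the printed wording is arbitrary:

package dmsexample

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// dueSoon returns the pending maintenance actions whose effective apply date
// falls within the given window. Actions without a CurrentApplyDate have no
// scheduled or opted-in apply time yet and are skipped.
func dueSoon(actions []types.PendingMaintenanceAction, within time.Duration) []types.PendingMaintenanceAction {
	var due []types.PendingMaintenanceAction
	cutoff := time.Now().Add(within)
	for _, a := range actions {
		if a.CurrentApplyDate == nil {
			continue
		}
		if a.CurrentApplyDate.Before(cutoff) {
			fmt.Printf("%s (%s) applies by %s\n",
				aws.ToString(a.Action), aws.ToString(a.OptInStatus), a.CurrentApplyDate.Format(time.RFC3339))
			due = append(due, a)
		}
	}
	return due
}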
type PluginNameValue ¶ added in v1.6.0
type PluginNameValue string
const ( PluginNameValueNoPreference PluginNameValue = "no-preference" PluginNameValueTestDecoding PluginNameValue = "test-decoding" PluginNameValuePglogical PluginNameValue = "pglogical" )
Enum values for PluginNameValue
func (PluginNameValue) Values ¶ added in v1.6.0
func (PluginNameValue) Values() []PluginNameValue
Values returns all known values for PluginNameValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type PostgreSQLSettings ¶
type PostgreSQLSettings struct { // For use with change data capture (CDC) only, this attribute has DMS bypass // foreign keys and user triggers to reduce the time it takes to bulk load data. // // Example: afterConnectScript=SET session_replication_role='replica' AfterConnectScript *string // The Babelfish for Aurora PostgreSQL database name for the endpoint. BabelfishDatabaseName *string // To capture DDL events, DMS creates various artifacts in the PostgreSQL database // when the task starts. You can later remove these artifacts. // // If this value is set to N , you don't have to create tables or triggers on the // source database. CaptureDdls *bool // Specifies the default behavior of the replication's handling of PostgreSQL- // compatible endpoints that require some additional configuration, such as // Babelfish endpoints. DatabaseMode DatabaseMode // Database name for the endpoint. DatabaseName *string // The schema in which the operational DDL database artifacts are created. // // Example: ddlArtifactsSchema=xyzddlschema; DdlArtifactsSchema *string // Sets the client statement timeout for the PostgreSQL instance, in seconds. The // default value is 60 seconds. // // Example: executeTimeout=100; ExecuteTimeout *int32 // When set to true , this value causes a task to fail if the actual size of a LOB // column is greater than the specified LobMaxSize . // // If task is set to Limited LOB mode and this option is set to true, the task // fails instead of truncating the LOB data. FailTasksOnLobTruncation *bool // The write-ahead log (WAL) heartbeat feature mimics a dummy transaction. By // doing this, it prevents idle logical replication slots from holding onto old WAL // logs, which can result in storage full situations on the source. This heartbeat // keeps restart_lsn moving and prevents storage full scenarios. HeartbeatEnable *bool // Sets the WAL heartbeat frequency (in minutes). HeartbeatFrequency *int32 // Sets the schema in which the heartbeat artifacts are created. HeartbeatSchema *string // When true, lets PostgreSQL migrate the boolean type as boolean. By default, // PostgreSQL migrates booleans as varchar(5) . You must set this setting on both // the source and target endpoints for it to take effect. MapBooleanAsBoolean *bool // When true, DMS migrates JSONB values as CLOB. MapJsonbAsClob *bool // When true, DMS migrates LONG values as VARCHAR. MapLongVarcharAs LongVarcharMappingType // Specifies the maximum size (in KB) of any .csv file used to transfer data to // PostgreSQL. // // Example: maxFileSize=512 MaxFileSize *int32 // Endpoint connection password. Password *string // Specifies the plugin to use to create a replication slot. PluginName PluginNameValue // Endpoint TCP port. The default is 5432. Port *int32 // The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the // trusted entity and grants the required permissions to access the value in // SecretsManagerSecret . The role must allow the iam:PassRole action. // SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager // secret that allows access to the PostgreSQL endpoint. // // You can specify one of two sets of values for these permissions. You can // specify the values for this setting and SecretsManagerSecretId . Or you can // specify clear-text values for UserName , Password , ServerName , and Port . You // can't specify both. 
For more information on creating this SecretsManagerSecret // and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to // access it, see [Using secrets to access Database Migration Service resources]in the Database Migration Service User Guide. // // [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager SecretsManagerAccessRoleArn *string // The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that // contains the PostgreSQL endpoint connection details. SecretsManagerSecretId *string // The host name of the endpoint database. // // For an Amazon RDS PostgreSQL instance, this is the output of [DescribeDBInstances], in the [Endpoint].Address // field. // // For an Aurora PostgreSQL instance, this is the output of [DescribeDBClusters], in the Endpoint // field. // // [Endpoint]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Endpoint.html // [DescribeDBInstances]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html // [DescribeDBClusters]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html ServerName *string // Sets the name of a previously created logical replication slot for a change // data capture (CDC) load of the PostgreSQL source instance. // // When used with the CdcStartPosition request parameter for the DMS API , this // attribute also makes it possible to use native CDC start points. DMS verifies // that the specified logical replication slot exists before starting the CDC load // task. It also verifies that the task was created with a valid setting of // CdcStartPosition . If the specified slot doesn't exist or the task doesn't have // a valid CdcStartPosition setting, DMS raises an error. // // For more information about setting the CdcStartPosition request parameter, see [Determining a CDC native start point] // in the Database Migration Service User Guide. For more information about using // CdcStartPosition , see [CreateReplicationTask], [StartReplicationTask], and [ModifyReplicationTask]. // // [ModifyReplicationTask]: https://docs.aws.amazon.com/dms/latest/APIReference/API_ModifyReplicationTask.html // [Determining a CDC native start point]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html#CHAP_Task.CDC.StartPoint.Native // [CreateReplicationTask]: https://docs.aws.amazon.com/dms/latest/APIReference/API_CreateReplicationTask.html // [StartReplicationTask]: https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTask.html SlotName *string // Use the TrimSpaceInChar source endpoint setting to trim data on CHAR and NCHAR // data types during migration. The default value is true . TrimSpaceInChar *bool // Endpoint connection user name. Username *string // contains filtered or unexported fields }
Provides information that defines a PostgreSQL endpoint.
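A minimal sketch of a PostgreSQL source configured for ongoing replication follows: a pre-created logical replication slot, the pglogical plugin, and WAL heartbeats so an idle slot does not retain old WAL. The slot, schema, ARN, and secret names are placeholders:

package dmsexample

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// cdcPostgresSettings sketches source settings for a CDC task using a named
// logical replication slot and the pglogical decoding plugin, with a WAL
// heartbeat to keep restart_lsn moving.
func cdcPostgresSettings() *types.PostgreSQLSettings {
	return &types.PostgreSQLSettings{
		DatabaseName: aws.String("appdb"), // placeholder

		// CDC plumbing.
		SlotName:           aws.String("dms_slot"), // pre-created replication slot (placeholder name)
		PluginName:         types.PluginNameValuePglogical,
		HeartbeatEnable:    aws.Bool(true),
		HeartbeatFrequency: aws.Int32(5),                // minutes
		HeartbeatSchema:    aws.String("dms_heartbeat"), // placeholder

		// Connection details resolved from Secrets Manager (placeholder identifiers).
		SecretsManagerAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-secrets-access"),
		SecretsManagerSecretId:      aws.String("dms/postgres/endpoint"),
	}
}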
type PostgreSqlDataProviderSettings ¶ added in v1.30.0
type PostgreSqlDataProviderSettings struct { // The Amazon Resource Name (ARN) of the certificate used for SSL connection. CertificateArn *string // The database name on the PostgreSQL data provider. DatabaseName *string // The port value for the PostgreSQL data provider. Port *int32 // The name of the PostgreSQL server. ServerName *string // The SSL mode used to connect to the PostgreSQL data provider. The default value // is none . SslMode DmsSslModeValue // contains filtered or unexported fields }
Provides information that defines a PostgreSQL data provider.
type ProvisionData ¶ added in v1.26.0
type ProvisionData struct { // The timestamp when provisioning became available. DateNewProvisioningDataAvailable *time.Time // The timestamp when DMS provisioned replication resources. DateProvisioned *time.Time // Whether the new provisioning is available to the replication. IsNewProvisioningAvailable bool // The current provisioning state ProvisionState *string // The number of capacity units the replication is using. ProvisionedCapacityUnits int32 // A message describing the reason that DMS provisioned new resources for the // serverless replication. ReasonForNewProvisioningData *string // contains filtered or unexported fields }
Information about provisioning resources for a DMS serverless replication.
type RdsConfiguration ¶ added in v1.24.0
type RdsConfiguration struct { // Describes the deployment option for the recommended Amazon RDS DB instance. The // deployment options include Multi-AZ and Single-AZ deployments. Valid values // include "MULTI_AZ" and "SINGLE_AZ" . DeploymentOption *string // Describes the recommended target Amazon RDS engine edition. EngineEdition *string // Describes the recommended target Amazon RDS engine version. EngineVersion *string // Describes the memory on the recommended Amazon RDS DB instance that meets your // requirements. InstanceMemory *float64 // Describes the recommended target Amazon RDS instance type. InstanceType *string // Describes the number of virtual CPUs (vCPU) on the recommended Amazon RDS DB // instance that meets your requirements. InstanceVcpu *float64 // Describes the number of I/O operations completed each second (IOPS) on the // recommended Amazon RDS DB instance that meets your requirements. StorageIops *int32 // Describes the storage size of the recommended Amazon RDS DB instance that meets // your requirements. StorageSize *int32 // Describes the storage type of the recommended Amazon RDS DB instance that meets // your requirements. // // Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 // and gp3), Provisioned IOPS SSD (also known as io1), and magnetic (also known as // standard). StorageType *string // contains filtered or unexported fields }
Provides information that describes the configuration of the recommended target engine on Amazon RDS.
type RdsRecommendation ¶ added in v1.24.0
type RdsRecommendation struct { // Supplemental information about the requirements to the recommended target // database on Amazon RDS. RequirementsToTarget *RdsRequirements // Supplemental information about the configuration of the recommended target // database on Amazon RDS. TargetConfiguration *RdsConfiguration // contains filtered or unexported fields }
Provides information that describes a recommendation of a target engine on Amazon RDS.
type RdsRequirements ¶ added in v1.24.0
type RdsRequirements struct { // The required deployment option for the Amazon RDS DB instance. Valid values // include "MULTI_AZ" for Multi-AZ deployments and "SINGLE_AZ" for Single-AZ // deployments. DeploymentOption *string // The required target Amazon RDS engine edition. EngineEdition *string // The required target Amazon RDS engine version. EngineVersion *string // The required memory on the Amazon RDS DB instance. InstanceMemory *float64 // The required number of virtual CPUs (vCPU) on the Amazon RDS DB instance. InstanceVcpu *float64 // The required number of I/O operations completed each second (IOPS) on your // Amazon RDS DB instance. StorageIops *int32 // The required Amazon RDS DB instance storage size. StorageSize *int32 // contains filtered or unexported fields }
Provides information that describes the requirements to the target engine on Amazon RDS.
type Recommendation ¶ added in v1.24.0
type Recommendation struct { // The date when Fleet Advisor created the target engine recommendation. CreatedDate *string // The recommendation of a target engine for the specified source database. Data *RecommendationData // The identifier of the source database for which Fleet Advisor provided this // recommendation. DatabaseId *string // The name of the target engine. Valid values include "rds-aurora-mysql" , // "rds-aurora-postgresql" , "rds-mysql" , "rds-oracle" , "rds-sql-server" , and // "rds-postgresql" . EngineName *string // Indicates that this target is the rightsized migration destination. Preferred *bool // The settings in JSON format for the preferred target engine parameters. These // parameters include capacity, resource utilization, and the usage type // (production, development, or testing). Settings *RecommendationSettings // The status of the target engine recommendation. Valid values include "alternate" // , "in-progress" , "not-viable" , and "recommended" . Status *string // contains filtered or unexported fields }
Provides information that describes a recommendation of a target engine.
A recommendation is a set of possible Amazon Web Services target engines to which you can migrate your source on-premises database. In this set, Fleet Advisor suggests a single target engine as the rightsized migration destination. To determine this rightsized destination, Fleet Advisor uses the inventory metadata and metrics from the data collector. You can use recommendations before the start of migration to save costs and reduce risks.
With recommendations, you can explore different target options and compare metrics, so you can make an informed decision when you choose the migration target.
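Because every level of a recommendation is a pointer, reading the nested result safely takes a few nil checks; the sketch below pulls the recommended RDS instance type from the first entry marked Preferred:

package dmsexample

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// preferredInstanceType returns the recommended RDS instance type for the first
// recommendation marked as Preferred, or "" if no entry carries a fully
// populated RDS target configuration.
func preferredInstanceType(recs []types.Recommendation) string {
	for _, r := range recs {
		if !aws.ToBool(r.Preferred) {
			continue
		}
		if r.Data == nil || r.Data.RdsEngine == nil || r.Data.RdsEngine.TargetConfiguration == nil {
			continue
		}
		return aws.ToString(r.Data.RdsEngine.TargetConfiguration.InstanceType)
	}
	return ""
}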
type RecommendationData ¶ added in v1.24.0
type RecommendationData struct { // The recommendation of a target Amazon RDS database engine. RdsEngine *RdsRecommendation // contains filtered or unexported fields }
Provides information about the target engine for the specified source database.
type RecommendationSettings ¶ added in v1.24.0
type RecommendationSettings struct { // The size of your target instance. Fleet Advisor calculates this value based on // your data collection type, such as total capacity and resource utilization. // Valid values include "total-capacity" and "utilization" . // // This member is required. InstanceSizingType *string // The deployment option for your target engine. For production databases, Fleet // Advisor chooses Multi-AZ deployment. For development or test databases, Fleet // Advisor chooses Single-AZ deployment. Valid values include "development" and // "production" . // // This member is required. WorkloadType *string // contains filtered or unexported fields }
Provides information about the required target engine settings.
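For example, settings for a production workload sized from observed utilization can be built directly from the valid values quoted above:

package dmsexample

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// productionRecommendationSettings fills both required members: size the target
// from resource utilization and treat the workload as production, which steers
// Fleet Advisor toward a Multi-AZ deployment.
func productionRecommendationSettings() *types.RecommendationSettings {
	return &types.RecommendationSettings{
		InstanceSizingType: aws.String("utilization"),
		WorkloadType:       aws.String("production"),
	}
}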
type RedisAuthTypeValue ¶ added in v1.7.0
type RedisAuthTypeValue string
const ( RedisAuthTypeValueNone RedisAuthTypeValue = "none" RedisAuthTypeValueAuthRole RedisAuthTypeValue = "auth-role" RedisAuthTypeValueAuthToken RedisAuthTypeValue = "auth-token" )
Enum values for RedisAuthTypeValue
func (RedisAuthTypeValue) Values ¶ added in v1.7.0
func (RedisAuthTypeValue) Values() []RedisAuthTypeValue
Values returns all known values for RedisAuthTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type RedisSettings ¶ added in v1.7.0
type RedisSettings struct { // Transmission Control Protocol (TCP) port for the endpoint. // // This member is required. Port int32 // Fully qualified domain name of the endpoint. // // This member is required. ServerName *string // The password provided with the auth-role and auth-token options of the AuthType // setting for a Redis target endpoint. AuthPassword *string // The type of authentication to perform when connecting to a Redis target. // Options include none , auth-token , and auth-role . The auth-token option // requires an AuthPassword value to be provided. The auth-role option requires // AuthUserName and AuthPassword values to be provided. AuthType RedisAuthTypeValue // The user name provided with the auth-role option of the AuthType setting for a // Redis target endpoint. AuthUserName *string // The Amazon Resource Name (ARN) for the certificate authority (CA) that DMS uses // to connect to your Redis target endpoint. SslCaCertificateArn *string // The connection to a Redis target endpoint using Transport Layer Security (TLS). // Valid values include plaintext and ssl-encryption . The default is // ssl-encryption . The ssl-encryption option makes an encrypted connection. // Optionally, you can identify an Amazon Resource Name (ARN) for an SSL // certificate authority (CA) using the SslCaCertificateArn setting. If an ARN // isn't given for a CA, DMS uses the Amazon root CA. // // The plaintext option doesn't provide Transport Layer Security (TLS) encryption // for traffic between endpoint and database. SslSecurityProtocol SslSecurityProtocolValue // contains filtered or unexported fields }
Provides information that defines a Redis target endpoint.
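As the AuthType documentation above notes, the auth-token option requires an AuthPassword. A minimal sketch of such a target follows; the server name is a placeholder and reading the token from an environment variable is purely illustrative:

package dmsexample

import (
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// redisTargetSettings sketches a Redis target endpoint that authenticates with
// an auth token. ServerName and Port are the two required members.
func redisTargetSettings() *types.RedisSettings {
	return &types.RedisSettings{
		ServerName:   aws.String("redis.example.internal"), // placeholder
		Port:         6379,
		AuthType:     types.RedisAuthTypeValueAuthToken,
		AuthPassword: aws.String(os.Getenv("REDIS_AUTH_TOKEN")), // illustrative token source
		// SslSecurityProtocol is left unset, so the default ssl-encryption applies.
	}
}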
type RedshiftDataProviderSettings ¶ added in v1.31.0
type RedshiftDataProviderSettings struct { // The database name on the Amazon Redshift data provider. DatabaseName *string // The port value for the Amazon Redshift data provider. Port *int32 // The name of the Amazon Redshift server. ServerName *string // contains filtered or unexported fields }
Provides information that defines an Amazon Redshift data provider.
type RedshiftSettings ¶
type RedshiftSettings struct { // A value that indicates to allow any date format, including invalid formats such // as 00/00/00 00:00:00, to be loaded without generating an error. You can choose // true or false (the default). // // This parameter applies only to TIMESTAMP and DATE columns. Always use // ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data // doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value // into that field. AcceptAnyDate *bool // Code to run after connecting. This parameter should contain the code itself, // not the name of a file containing the code. AfterConnectScript *string // An S3 folder where the comma-separated-value (.csv) files are stored before // being uploaded to the target Redshift cluster. // // For full load mode, DMS converts source records into .csv files and loads them // to the BucketFolder/TableID path. DMS uses the Redshift COPY command to upload // the .csv files to the target table. The files are deleted once the COPY // operation has finished. For more information, see [COPY]in the Amazon Redshift // Database Developer Guide. // // For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads // the .csv files to this BucketFolder/NetChangesTableID path. // // [COPY]: https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html BucketFolder *string // The name of the intermediate S3 bucket used to store .csv files before // uploading data to Redshift. BucketName *string // If Amazon Redshift is configured to support case sensitive schema names, set // CaseSensitiveNames to true . The default is false . CaseSensitiveNames *bool // If you set CompUpdate to true Amazon Redshift applies automatic compression if // the table is empty. This applies even if the table columns already have // encodings other than RAW . If you set CompUpdate to false , automatic // compression is disabled and existing column encodings aren't changed. The // default is true . CompUpdate *bool // A value that sets the amount of time to wait (in milliseconds) before timing // out, beginning from when you initially establish a connection. ConnectionTimeout *int32 // The name of the Amazon Redshift data warehouse (service) that you are working // with. DatabaseName *string // The date format that you are using. Valid values are auto (case-sensitive), // your date format string enclosed in quotes, or NULL. If this parameter is left // unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes // most strings, even some that aren't supported when you use a date format string. // // If your date and time values use formats different from each other, set this to // auto . DateFormat *string // A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields // as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The // default is false . EmptyAsNull *bool // The type of server-side encryption that you want to use for your data. This // encryption type is part of the endpoint settings or the extra connections // attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS // . // // For the ModifyEndpoint operation, you can change the existing value of the // EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the // existing value from SSE_S3 to SSE_KMS . 
// // To use SSE_S3 , create an Identity and Access Management (IAM) role with a // policy that allows "arn:aws:s3:::*" to use the following actions: // "s3:PutObject", "s3:ListBucket" EncryptionMode EncryptionModeValue // This setting is only valid for a full-load migration task. Set ExplicitIds to // true to have tables with IDENTITY columns override their auto-generated values // with explicit values loaded from the source data files used to populate the // tables. The default is false . ExplicitIds *bool // The number of threads used to upload a single file. This parameter accepts a // value from 1 through 64. It defaults to 10. // // The number of parallel streams used to upload a single .csv file to an S3 // bucket using S3 Multipart Upload. For more information, see [Multipart upload overview]. // // FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10. // // [Multipart upload overview]: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html FileTransferUploadStreams *int32 // The amount of time to wait (in milliseconds) before timing out of operations // performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, // and UPDATE. LoadTimeout *int32 // When true, lets Redshift migrate the boolean type as boolean. By default, // Redshift migrates booleans as varchar(1) . You must set this setting on both the // source and target endpoints for it to take effect. MapBooleanAsBoolean *bool // The maximum size (in KB) of any .csv file used to load data on an S3 bucket and // transfer data to Amazon Redshift. It defaults to 1048576KB (1 GB). MaxFileSize *int32 // The password for the user named in the username property. Password *string // The port number for Amazon Redshift. The default value is 5439. Port *int32 // A value that specifies to remove surrounding quotation marks from strings in // the incoming data. All characters within the quotation marks, including // delimiters, are retained. Choose true to remove quotation marks. The default is // false . RemoveQuotes *bool // A value that specifies to replaces the invalid characters specified in // ReplaceInvalidChars , substituting the specified characters instead. The default // is "?" . ReplaceChars *string // A list of characters that you want to replace. Use with ReplaceChars . ReplaceInvalidChars *string // The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the // trusted entity and grants the required permissions to access the value in // SecretsManagerSecret . The role must allow the iam:PassRole action. // SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager // secret that allows access to the Amazon Redshift endpoint. // // You can specify one of two sets of values for these permissions. You can // specify the values for this setting and SecretsManagerSecretId . Or you can // specify clear-text values for UserName , Password , ServerName , and Port . You // can't specify both. For more information on creating this SecretsManagerSecret // and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to // access it, see [Using secrets to access Database Migration Service resources]in the Database Migration Service User Guide. 
// // [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager SecretsManagerAccessRoleArn *string // The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that // contains the Amazon Redshift endpoint connection details. SecretsManagerSecretId *string // The name of the Amazon Redshift cluster you are using. ServerName *string // The KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this // key ID. The key that you use needs an attached policy that enables IAM user // permissions and allows use of the key. ServerSideEncryptionKmsKeyId *string // The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon // Redshift service. The role must allow the iam:PassRole action. ServiceAccessRoleArn *string // The time format that you want to use. Valid values are auto (case-sensitive), // 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . It defaults to 10. // Using auto recognizes most strings, even some that aren't supported when you // use a time format string. // // If your date and time values use formats different from each other, set this // parameter to auto . TimeFormat *string // A value that specifies to remove the trailing white space characters from a // VARCHAR string. This parameter applies only to columns with a VARCHAR data type. // Choose true to remove unneeded white space. The default is false . TrimBlanks *bool // A value that specifies to truncate data in columns to the appropriate number of // characters, so that the data fits in the column. This parameter applies only to // columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. // Choose true to truncate data. The default is false . TruncateColumns *bool // An Amazon Redshift user name for a registered user. Username *string // The size (in KB) of the in-memory file write buffer used when generating .csv // files on the local disk at the DMS replication instance. The default value is // 1000 (buffer size is 1000KB). WriteBufferSize *int32 // contains filtered or unexported fields }
Provides information that defines an Amazon Redshift endpoint.
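The sketch below assembles a Redshift target that stages .csv files in an intermediate S3 bucket and resolves connection credentials through Secrets Manager. Every bucket name, ARN, and secret identifier is a placeholder, and the file-size value is only an example:

package dmsexample

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// redshiftTargetSettings sketches a Redshift target endpoint: an intermediate
// S3 staging bucket, a service access role for that bucket, and connection
// details resolved from Secrets Manager.
func redshiftTargetSettings() *types.RedshiftSettings {
	return &types.RedshiftSettings{
		DatabaseName: aws.String("analytics"), // placeholder
		ServerName:   aws.String("example-cluster.abc123.us-east-1.redshift.amazonaws.com"),
		Port:         aws.Int32(5439),

		// Staging area for the .csv files DMS produces before the COPY step.
		BucketName:           aws.String("example-dms-staging"),
		BucketFolder:         aws.String("redshift-load"),
		ServiceAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-redshift-s3-access"),
		MaxFileSize:          aws.Int32(262144), // in KB; 262,144 KB is 256 MB per staged file

		// Endpoint credentials via Secrets Manager (placeholder identifiers).
		SecretsManagerAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-secrets-access"),
		SecretsManagerSecretId:      aws.String("dms/redshift/endpoint"),
	}
}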
type RefreshSchemasStatus ¶
type RefreshSchemasStatus struct { // The Amazon Resource Name (ARN) string that uniquely identifies the endpoint. EndpointArn *string // The last failure message for the schema. LastFailureMessage *string // The date the schema was last refreshed. LastRefreshDate *time.Time // The Amazon Resource Name (ARN) of the replication instance. ReplicationInstanceArn *string // The status of the schema. Status RefreshSchemasStatusTypeValue // contains filtered or unexported fields }
Provides information that describes the status of a schema at an endpoint, as returned by the DescribeRefreshSchemasStatus operation.
type RefreshSchemasStatusTypeValue ¶
type RefreshSchemasStatusTypeValue string
const ( RefreshSchemasStatusTypeValueSuccessful RefreshSchemasStatusTypeValue = "successful" RefreshSchemasStatusTypeValueFailed RefreshSchemasStatusTypeValue = "failed" RefreshSchemasStatusTypeValueRefreshing RefreshSchemasStatusTypeValue = "refreshing" )
Enum values for RefreshSchemasStatusTypeValue
func (RefreshSchemasStatusTypeValue) Values ¶ added in v0.29.0
func (RefreshSchemasStatusTypeValue) Values() []RefreshSchemasStatusTypeValue
Values returns all known values for RefreshSchemasStatusTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ReleaseStatusValues ¶
type ReleaseStatusValues string
const ( ReleaseStatusValuesBeta ReleaseStatusValues = "beta" ReleaseStatusValuesProd ReleaseStatusValues = "prod" )
Enum values for ReleaseStatusValues
func (ReleaseStatusValues) Values ¶ added in v0.29.0
func (ReleaseStatusValues) Values() []ReleaseStatusValues
Values returns all known values for ReleaseStatusValues. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ReloadOptionValue ¶
type ReloadOptionValue string
const ( ReloadOptionValueDataReload ReloadOptionValue = "data-reload" ReloadOptionValueValidateOnly ReloadOptionValue = "validate-only" )
Enum values for ReloadOptionValue
func (ReloadOptionValue) Values ¶ added in v0.29.0
func (ReloadOptionValue) Values() []ReloadOptionValue
Values returns all known values for ReloadOptionValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type Replication ¶ added in v1.26.0
type Replication struct { // Indicates the start time for a change data capture (CDC) operation. Use either // CdcStartTime or CdcStartPosition to specify when you want a CDC operation to // start. Specifying both values results in an error. CdcStartPosition *string // Indicates the start time for a change data capture (CDC) operation. Use either // CdcStartTime or CdcStartPosition to specify when you want a CDC operation to // start. Specifying both values results in an error. CdcStartTime *time.Time // Indicates when you want a change data capture (CDC) operation to stop. The // value can be either server time or commit time. CdcStopPosition *string // Error and other information about why a serverless replication failed. FailureMessages []string // Information about provisioning resources for an DMS serverless replication. ProvisionData *ProvisionData // Indicates the last checkpoint that occurred during a change data capture (CDC) // operation. You can provide this value to the CdcStartPosition parameter to // start a CDC operation that begins at that checkpoint. RecoveryCheckpoint *string // The Amazon Resource Name for the ReplicationConfig associated with the // replication. ReplicationConfigArn *string // The identifier for the ReplicationConfig associated with the replication. ReplicationConfigIdentifier *string // The time the serverless replication was created. ReplicationCreateTime *time.Time // The timestamp when DMS will deprovision the replication. ReplicationDeprovisionTime *time.Time // The timestamp when replication was last stopped. ReplicationLastStopTime *time.Time // This object provides a collection of statistics about a serverless replication. ReplicationStats *ReplicationStats // The type of the serverless replication. ReplicationType MigrationTypeValue // The time the serverless replication was updated. ReplicationUpdateTime *time.Time // The Amazon Resource Name for an existing Endpoint the serverless replication // uses for its data source. SourceEndpointArn *string // The replication type. StartReplicationType *string // The current status of the serverless replication. Status *string // The reason the replication task was stopped. This response parameter can return // one of the following values: // // - "Stop Reason NORMAL" // // - "Stop Reason RECOVERABLE_ERROR" // // - "Stop Reason FATAL_ERROR" // // - "Stop Reason FULL_LOAD_ONLY_FINISHED" // // - "Stop Reason STOPPED_AFTER_FULL_LOAD" – Full load completed, with cached // changes not applied // // - "Stop Reason STOPPED_AFTER_CACHED_EVENTS" – Full load completed, with cached // changes applied // // - "Stop Reason EXPRESS_LICENSE_LIMITS_REACHED" // // - "Stop Reason STOPPED_AFTER_DDL_APPLY" – User-defined stop task after DDL // applied // // - "Stop Reason STOPPED_DUE_TO_LOW_MEMORY" // // - "Stop Reason STOPPED_DUE_TO_LOW_DISK" // // - "Stop Reason STOPPED_AT_SERVER_TIME" – User-defined server time for stopping // task // // - "Stop Reason STOPPED_AT_COMMIT_TIME" – User-defined commit time for stopping // task // // - "Stop Reason RECONFIGURATION_RESTART" // // - "Stop Reason RECYCLE_TASK" StopReason *string // The Amazon Resource Name for an existing Endpoint the serverless replication // uses for its data target. TargetEndpointArn *string // contains filtered or unexported fields }
Provides information that describes a serverless replication created by the CreateReplication operation.
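A small sketch of reading such a Replication value: report the status, summarize full-load progress when statistics are present, and surface the stop reason and any failure messages if the replication has stopped:

package dmsexample

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// summarizeReplication prints a short summary of a serverless replication:
// current status, full-load progress when statistics are available, and the
// stop reason plus failure messages when the replication is no longer running.
func summarizeReplication(r types.Replication) {
	fmt.Printf("replication %s: status=%s\n",
		aws.ToString(r.ReplicationConfigIdentifier), aws.ToString(r.Status))

	if r.ReplicationStats != nil {
		fmt.Printf("  full load %d%% complete, %d tables loaded, %d errored\n",
			r.ReplicationStats.FullLoadProgressPercent,
			r.ReplicationStats.TablesLoaded,
			r.ReplicationStats.TablesErrored)
	}
	if r.StopReason != nil {
		fmt.Printf("  stopped: %s\n", aws.ToString(r.StopReason))
	}
	for _, msg := range r.FailureMessages {
		fmt.Printf("  failure: %s\n", msg)
	}
}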
type ReplicationConfig ¶ added in v1.26.0
type ReplicationConfig struct { // Configuration parameters for provisioning an DMS serverless replication. ComputeConfig *ComputeConfig // The Amazon Resource Name (ARN) of this DMS Serverless replication configuration. ReplicationConfigArn *string // The time the serverless replication config was created. ReplicationConfigCreateTime *time.Time // The identifier for the ReplicationConfig associated with the replication. ReplicationConfigIdentifier *string // The time the serverless replication config was updated. ReplicationConfigUpdateTime *time.Time // Configuration parameters for an DMS serverless replication. ReplicationSettings *string // The type of the replication. ReplicationType MigrationTypeValue // The Amazon Resource Name (ARN) of the source endpoint for this DMS serverless // replication configuration. SourceEndpointArn *string // Additional parameters for an DMS serverless replication. SupplementalSettings *string // Table mappings specified in the replication. TableMappings *string // The Amazon Resource Name (ARN) of the target endpoint for this DMS serverless // replication configuration. TargetEndpointArn *string // contains filtered or unexported fields }
This object provides configuration information about a serverless replication.
type ReplicationEndpointTypeValue ¶
type ReplicationEndpointTypeValue string
const ( ReplicationEndpointTypeValueSource ReplicationEndpointTypeValue = "source" ReplicationEndpointTypeValueTarget ReplicationEndpointTypeValue = "target" )
Enum values for ReplicationEndpointTypeValue
func (ReplicationEndpointTypeValue) Values ¶ added in v0.29.0
func (ReplicationEndpointTypeValue) Values() []ReplicationEndpointTypeValue
Values returns all known values for ReplicationEndpointTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ReplicationInstance ¶
type ReplicationInstance struct { // The amount of storage (in gigabytes) that is allocated for the replication // instance. AllocatedStorage int32 // Boolean value indicating if minor version upgrades will be automatically // applied to the instance. AutoMinorVersionUpgrade bool // The Availability Zone for the instance. AvailabilityZone *string // The DNS name servers supported for the replication instance to access your // on-premise source or target database. DnsNameServers *string // The engine version number of the replication instance. // // If an engine version number is not specified when a replication instance is // created, the default is the latest engine version available. // // When modifying a major engine version of an instance, also set // AllowMajorVersionUpgrade to true . EngineVersion *string // The expiration date of the free replication instance that is part of the Free // DMS program. FreeUntil *time.Time // The time the replication instance was created. InstanceCreateTime *time.Time // An KMS key identifier that is used to encrypt the data on the replication // instance. // // If you don't specify a value for the KmsKeyId parameter, then DMS uses your // default encryption key. // // KMS creates the default encryption key for your Amazon Web Services account. // Your Amazon Web Services account has a different default encryption key for each // Amazon Web Services Region. KmsKeyId *string // Specifies whether the replication instance is a Multi-AZ deployment. You can't // set the AvailabilityZone parameter if the Multi-AZ parameter is set to true . MultiAZ bool // The type of IP address protocol used by a replication instance, such as IPv4 // only or Dual-stack that supports both IPv4 and IPv6 addressing. IPv6 only is not // yet supported. NetworkType *string // The pending modification values. PendingModifiedValues *ReplicationPendingModifiedValues // The maintenance window times for the replication instance. Any pending upgrades // to the replication instance are performed during this time. PreferredMaintenanceWindow *string // Specifies the accessibility options for the replication instance. A value of // true represents an instance with a public IP address. A value of false // represents an instance with a private IP address. The default value is true . PubliclyAccessible bool // The Amazon Resource Name (ARN) of the replication instance. ReplicationInstanceArn *string // The compute and memory capacity of the replication instance as defined for the // specified replication instance class. It is a required parameter, although a // default value is pre-selected in the DMS console. // // For more information on the settings and capacities for the available // replication instance classes, see [Selecting the right DMS replication instance for your migration]. // // [Selecting the right DMS replication instance for your migration]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html#CHAP_ReplicationInstance.InDepth ReplicationInstanceClass *string // The replication instance identifier is a required parameter. This parameter is // stored as a lowercase string. // // Constraints: // // - Must contain 1-63 alphanumeric characters or hyphens. // // - First character must be a letter. // // - Cannot end with a hyphen or contain two consecutive hyphens. // // Example: myrepinstance ReplicationInstanceIdentifier *string // One or more IPv6 addresses for the replication instance. 
ReplicationInstanceIpv6Addresses []string // The private IP address of the replication instance. // // Deprecated: This member has been deprecated. ReplicationInstancePrivateIpAddress *string // One or more private IP addresses for the replication instance. ReplicationInstancePrivateIpAddresses []string // The public IP address of the replication instance. // // Deprecated: This member has been deprecated. ReplicationInstancePublicIpAddress *string // One or more public IP addresses for the replication instance. ReplicationInstancePublicIpAddresses []string // The status of the replication instance. The possible return values include: // // - "available" // // - "creating" // // - "deleted" // // - "deleting" // // - "failed" // // - "modifying" // // - "upgrading" // // - "rebooting" // // - "resetting-master-credentials" // // - "storage-full" // // - "incompatible-credentials" // // - "incompatible-network" // // - "maintenance" ReplicationInstanceStatus *string // The subnet group for the replication instance. ReplicationSubnetGroup *ReplicationSubnetGroup // The Availability Zone of the standby replication instance in a Multi-AZ // deployment. SecondaryAvailabilityZone *string // The VPC security group for the instance. VpcSecurityGroups []VpcSecurityGroupMembership // contains filtered or unexported fields }
Provides information that defines a replication instance.
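The sketch below reads the essentials of a ReplicationInstance and flags modifications that are still pending, preferring the list-valued address fields over the deprecated single-address ones:

package dmsexample

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// describeInstance prints the identifier, class, and status of a replication
// instance, its private addresses, and any instance class or engine version
// change still waiting to be applied.
func describeInstance(ri types.ReplicationInstance) {
	fmt.Printf("%s (%s): %s\n",
		aws.ToString(ri.ReplicationInstanceIdentifier),
		aws.ToString(ri.ReplicationInstanceClass),
		aws.ToString(ri.ReplicationInstanceStatus))

	// Prefer the list-valued address fields; the single-address fields are deprecated.
	for _, addr := range ri.ReplicationInstancePrivateIpAddresses {
		fmt.Println("  private address:", addr)
	}

	if p := ri.PendingModifiedValues; p != nil {
		if p.ReplicationInstanceClass != nil {
			fmt.Println("  pending class change to", aws.ToString(p.ReplicationInstanceClass))
		}
		if p.EngineVersion != nil {
			fmt.Println("  pending engine upgrade to", aws.ToString(p.EngineVersion))
		}
	}
}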
type ReplicationInstanceTaskLog ¶
type ReplicationInstanceTaskLog struct { // The size, in bytes, of the replication task log. ReplicationInstanceTaskLogSize int64 // The Amazon Resource Name (ARN) of the replication task. ReplicationTaskArn *string // The name of the replication task. ReplicationTaskName *string // contains filtered or unexported fields }
Contains metadata for a replication instance task log.
type ReplicationPendingModifiedValues ¶
type ReplicationPendingModifiedValues struct { // The amount of storage (in gigabytes) that is allocated for the replication // instance. AllocatedStorage *int32 // The engine version number of the replication instance. EngineVersion *string // Specifies whether the replication instance is a Multi-AZ deployment. You can't // set the AvailabilityZone parameter if the Multi-AZ parameter is set to true . MultiAZ *bool // The type of IP address protocol used by a replication instance, such as IPv4 // only or Dual-stack that supports both IPv4 and IPv6 addressing. IPv6 only is not // yet supported. NetworkType *string // The compute and memory capacity of the replication instance as defined for the // specified replication instance class. // // For more information on the settings and capacities for the available // replication instance classes, see [Selecting the right DMS replication instance for your migration]. // // [Selecting the right DMS replication instance for your migration]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html#CHAP_ReplicationInstance.InDepth ReplicationInstanceClass *string // contains filtered or unexported fields }
Provides information about the values of pending modifications to a replication instance. This data type is an object of the ReplicationInstance user-defined data type.
type ReplicationStats ¶ added in v1.26.0
type ReplicationStats struct { // The elapsed time of the replication, in milliseconds. ElapsedTimeMillis int64 // The date the replication was started either with a fresh start or a target // reload. FreshStartDate *time.Time // The date the replication full load was finished. FullLoadFinishDate *time.Time // The percent complete for the full load serverless replication. FullLoadProgressPercent int32 // The date the replication full load was started. FullLoadStartDate *time.Time // The date the replication is scheduled to start. StartDate *time.Time // The date the replication was stopped. StopDate *time.Time // The number of errors that have occurred for this replication. TablesErrored int32 // The number of tables loaded for this replication. TablesLoaded int32 // The number of tables currently loading for this replication. TablesLoading int32 // The number of tables queued for this replication. TablesQueued int32 // contains filtered or unexported fields }
This object provides a collection of statistics about a serverless replication.
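Because all of these counters are plain values on the struct, summarizing a serverless replication is just a matter of reading fields. A small illustrative helper follows; the formatting choices are an assumption for this sketch, not part of the API.

    package example

    import (
        "fmt"
        "time"

        "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
    )

    // summarizeReplicationStats renders the counters of a serverless replication
    // in a single line, for example for periodic progress logging.
    func summarizeReplicationStats(s *types.ReplicationStats) string {
        if s == nil {
            return "no statistics reported yet"
        }
        elapsed := time.Duration(s.ElapsedTimeMillis) * time.Millisecond
        return fmt.Sprintf("full load %d%% (loaded=%d loading=%d queued=%d errored=%d) elapsed=%s",
            s.FullLoadProgressPercent, s.TablesLoaded, s.TablesLoading,
            s.TablesQueued, s.TablesErrored, elapsed)
    }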
type ReplicationSubnetGroup ¶
type ReplicationSubnetGroup struct { // A description for the replication subnet group. ReplicationSubnetGroupDescription *string // The identifier of the replication instance subnet group. ReplicationSubnetGroupIdentifier *string // The status of the subnet group. SubnetGroupStatus *string // The subnets that are in the subnet group. Subnets []Subnet // The IP addressing protocol supported by the subnet group. This is used by a // replication instance with values such as IPv4 only or Dual-stack that supports // both IPv4 and IPv6 addressing. IPv6 only is not yet supported. SupportedNetworkTypes []string // The ID of the VPC. VpcId *string // contains filtered or unexported fields }
Describes a subnet group in response to a request by the DescribeReplicationSubnetGroups operation.
type ReplicationSubnetGroupDoesNotCoverEnoughAZs ¶
type ReplicationSubnetGroupDoesNotCoverEnoughAZs struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The replication subnet group does not cover enough Availability Zones (AZs). Edit the replication subnet group and add more AZs.
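All of the fault types in this package implement the error interface (see the methods below), so they can be matched with errors.As. The following is a hedged sketch of handling this fault when creating a subnet group; the CreateReplicationSubnetGroup input shape is taken from the parent client package, and the identifiers are placeholders.

    package example

    import (
        "context"
        "errors"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        dms "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice"
        "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
    )

    func createSubnetGroup(ctx context.Context, client *dms.Client, subnetIDs []string) error {
        _, err := client.CreateReplicationSubnetGroup(ctx, &dms.CreateReplicationSubnetGroupInput{
            ReplicationSubnetGroupIdentifier:  aws.String("my-subnet-group"),
            ReplicationSubnetGroupDescription: aws.String("subnets for DMS replication"),
            SubnetIds:                         subnetIDs,
        })

        // The fault carries no fields beyond the message, so the useful reaction
        // is to add subnets from more Availability Zones and retry.
        var azFault *types.ReplicationSubnetGroupDoesNotCoverEnoughAZs
        if errors.As(err, &azFault) {
            log.Printf("subnet group spans too few AZs: %s", azFault.ErrorMessage())
        }
        return err
    }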
func (*ReplicationSubnetGroupDoesNotCoverEnoughAZs) Error ¶
func (e *ReplicationSubnetGroupDoesNotCoverEnoughAZs) Error() string
func (*ReplicationSubnetGroupDoesNotCoverEnoughAZs) ErrorCode ¶
func (e *ReplicationSubnetGroupDoesNotCoverEnoughAZs) ErrorCode() string
func (*ReplicationSubnetGroupDoesNotCoverEnoughAZs) ErrorFault ¶
func (e *ReplicationSubnetGroupDoesNotCoverEnoughAZs) ErrorFault() smithy.ErrorFault
func (*ReplicationSubnetGroupDoesNotCoverEnoughAZs) ErrorMessage ¶
func (e *ReplicationSubnetGroupDoesNotCoverEnoughAZs) ErrorMessage() string
type ReplicationTask ¶
type ReplicationTask struct { // Indicates when you want a change data capture (CDC) operation to start. Use // either CdcStartPosition or CdcStartTime to specify when you want the CDC // operation to start. Specifying both values results in an error. // // The value can be in date, checkpoint, or LSN/SCN format. // // Date Example: --cdc-start-position “2018-03-08T12:12:12” // // Checkpoint Example: --cdc-start-position // "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93" // // LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373” CdcStartPosition *string // Indicates when you want a change data capture (CDC) operation to stop. The // value can be either server time or commit time. // // Server time example: --cdc-stop-position “server_time:2018-02-09T12:12:12” // // Commit time example: --cdc-stop-position “commit_time:2018-02-09T12:12:12“ CdcStopPosition *string // The last error (failure) message generated for the replication task. LastFailureMessage *string // The type of migration. MigrationType MigrationTypeValue // Indicates the last checkpoint that occurred during a change data capture (CDC) // operation. You can provide this value to the CdcStartPosition parameter to // start a CDC operation that begins at that checkpoint. RecoveryCheckpoint *string // The ARN of the replication instance. ReplicationInstanceArn *string // The Amazon Resource Name (ARN) of the replication task. ReplicationTaskArn *string // The date the replication task was created. ReplicationTaskCreationDate *time.Time // The user-assigned replication task identifier or name. // // Constraints: // // - Must contain 1-255 alphanumeric characters or hyphens. // // - First character must be a letter. // // - Cannot end with a hyphen or contain two consecutive hyphens. ReplicationTaskIdentifier *string // The settings for the replication task. ReplicationTaskSettings *string // The date the replication task is scheduled to start. ReplicationTaskStartDate *time.Time // The statistics for the task, including elapsed time, tables loaded, and table // errors. ReplicationTaskStats *ReplicationTaskStats // The Amazon Resource Name (ARN) that uniquely identifies the endpoint. SourceEndpointArn *string // The status of the replication task. This response parameter can return one of // the following values: // // - "moving" – The task is being moved in response to running the [MoveReplicationTask] // MoveReplicationTask operation. // // - "creating" – The task is being created in response to running the [CreateReplicationTask] // CreateReplicationTask operation. // // - "deleting" – The task is being deleted in response to running the [DeleteReplicationTask] // DeleteReplicationTask operation. // // - "failed" – The task failed to successfully complete the database migration // in response to running the [StartReplicationTask]StartReplicationTask operation. // // - "failed-move" – The task failed to move in response to running the [MoveReplicationTask] // MoveReplicationTask operation. // // - "modifying" – The task definition is being modified in response to running // the [ModifyReplicationTask]ModifyReplicationTask operation. // // - "ready" – The task is in a ready state where it can respond to other task // operations, such as [StartReplicationTask]StartReplicationTask or [DeleteReplicationTask]DeleteReplicationTask . 
// // - "running" – The task is performing a database migration in response to // running the [StartReplicationTask]StartReplicationTask operation. // // - "starting" – The task is preparing to perform a database migration in // response to running the [StartReplicationTask]StartReplicationTask operation. // // - "stopped" – The task has stopped in response to running the [StopReplicationTask] // StopReplicationTask operation. // // - "stopping" – The task is preparing to stop in response to running the [StopReplicationTask] // StopReplicationTask operation. // // - "testing" – The database migration specified for this task is being tested // in response to running either the [StartReplicationTaskAssessmentRun]StartReplicationTaskAssessmentRun or the [StartReplicationTaskAssessment] // StartReplicationTaskAssessment operation. // // [StartReplicationTaskAssessmentRun]StartReplicationTaskAssessmentRun is an improved premigration task assessment // operation. The [StartReplicationTaskAssessment]StartReplicationTaskAssessment operation assesses data type // compatibility only between the source and target database of a given migration // task. In contrast, [StartReplicationTaskAssessmentRun]StartReplicationTaskAssessmentRun enables you to specify a // variety of premigration task assessments in addition to data type compatibility. // These assessments include ones for the validity of primary key definitions and // likely issues with database migration performance, among others. // // [ModifyReplicationTask]: https://docs.aws.amazon.com/dms/latest/APIReference/API_ModifyReplicationTask.html // [DeleteReplicationTask]: https://docs.aws.amazon.com/dms/latest/APIReference/API_DeleteReplicationTask.html // [CreateReplicationTask]: https://docs.aws.amazon.com/dms/latest/APIReference/API_CreateReplicationTask.html // [StartReplicationTask]: https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTask.html // [StopReplicationTask]: https://docs.aws.amazon.com/dms/latest/APIReference/API_StopReplicationTask.html // [StartReplicationTaskAssessment]: https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTaskAssessment.html // [MoveReplicationTask]: https://docs.aws.amazon.com/dms/latest/APIReference/API_MoveReplicationTask.html // [StartReplicationTaskAssessmentRun]: https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTaskAssessmentRun.html Status *string // The reason the replication task was stopped. This response parameter can return // one of the following values: // // - "Stop Reason NORMAL" // // - "Stop Reason RECOVERABLE_ERROR" // // - "Stop Reason FATAL_ERROR" // // - "Stop Reason FULL_LOAD_ONLY_FINISHED" // // - "Stop Reason STOPPED_AFTER_FULL_LOAD" – Full load completed, with cached // changes not applied // // - "Stop Reason STOPPED_AFTER_CACHED_EVENTS" – Full load completed, with cached // changes applied // // - "Stop Reason EXPRESS_LICENSE_LIMITS_REACHED" // // - "Stop Reason STOPPED_AFTER_DDL_APPLY" – User-defined stop task after DDL // applied // // - "Stop Reason STOPPED_DUE_TO_LOW_MEMORY" // // - "Stop Reason STOPPED_DUE_TO_LOW_DISK" // // - "Stop Reason STOPPED_AT_SERVER_TIME" – User-defined server time for stopping // task // // - "Stop Reason STOPPED_AT_COMMIT_TIME" – User-defined commit time for stopping // task // // - "Stop Reason RECONFIGURATION_RESTART" // // - "Stop Reason RECYCLE_TASK" StopReason *string // Table mappings specified in the task. TableMappings *string // The ARN that uniquely identifies the endpoint. 
TargetEndpointArn *string // The ARN of the replication instance to which this task is moved in response to // running the [MoveReplicationTask]MoveReplicationTask operation. Otherwise, this response parameter // isn't a member of the ReplicationTask object. // // [MoveReplicationTask]: https://docs.aws.amazon.com/dms/latest/APIReference/API_MoveReplicationTask.html TargetReplicationInstanceArn *string // Supplemental information that the task requires to migrate the data for certain // source and target endpoints. For more information, see [Specifying Supplemental Data for Task Settings]in the Database // Migration Service User Guide. // // [Specifying Supplemental Data for Task Settings]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.TaskData.html TaskData *string // contains filtered or unexported fields }
Provides information that describes a replication task created by the CreateReplicationTask operation.
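A common pattern is to poll the Status field until it reaches a terminal value such as "stopped" or "failed". The sketch below assumes the DescribeReplicationTasks operation from the parent client package and the replication-task-arn filter name described in the service documentation.

    package example

    import (
        "context"
        "fmt"

        "github.com/aws/aws-sdk-go-v2/aws"
        dms "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice"
        "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
    )

    // taskStatus returns the current status string of the replication task with
    // the given ARN, for example "running" or "stopped".
    func taskStatus(ctx context.Context, client *dms.Client, taskARN string) (string, error) {
        out, err := client.DescribeReplicationTasks(ctx, &dms.DescribeReplicationTasksInput{
            Filters: []types.Filter{{
                Name:   aws.String("replication-task-arn"),
                Values: []string{taskARN},
            }},
        })
        if err != nil {
            return "", err
        }
        if len(out.ReplicationTasks) == 0 {
            return "", fmt.Errorf("replication task %s not found", taskARN)
        }
        return aws.ToString(out.ReplicationTasks[0].Status), nil
    }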
type ReplicationTaskAssessmentResult ¶
type ReplicationTaskAssessmentResult struct { // The task assessment results in JSON format. // // The response object only contains this field if you provide DescribeReplicationTaskAssessmentResultsMessage$ReplicationTaskArn in the request. AssessmentResults *string // The file containing the results of the task assessment. AssessmentResultsFile *string // The status of the task assessment. AssessmentStatus *string // The Amazon Resource Name (ARN) of the replication task. ReplicationTaskArn *string // The replication task identifier of the task on which the task assessment was // run. ReplicationTaskIdentifier *string // The date the task assessment was completed. ReplicationTaskLastAssessmentDate *time.Time // The URL of the S3 object containing the task assessment results. // // The response object only contains this field if you provide DescribeReplicationTaskAssessmentResultsMessage$ReplicationTaskArn in the request. S3ObjectUrl *string // contains filtered or unexported fields }
The task assessment report in JSON format.
type ReplicationTaskAssessmentRun ¶
type ReplicationTaskAssessmentRun struct { // Indication of the completion progress for the individual assessments specified // to run. AssessmentProgress *ReplicationTaskAssessmentRunProgress // Unique name of the assessment run. AssessmentRunName *string // Indicates that the following PreflightAssessmentRun is the latest for the // ReplicationTask. The status is either true or false. IsLatestTaskAssessmentRun bool // Last message generated by an individual assessment failure. LastFailureMessage *string // ARN of the migration task associated with this premigration assessment run. ReplicationTaskArn *string // Amazon Resource Name (ARN) of this assessment run. ReplicationTaskAssessmentRunArn *string // Date on which the assessment run was created using the // StartReplicationTaskAssessmentRun operation. ReplicationTaskAssessmentRunCreationDate *time.Time // Encryption mode used to encrypt the assessment run results. ResultEncryptionMode *string // ARN of the KMS encryption key used to encrypt the assessment run results. ResultKmsKeyArn *string // Amazon S3 bucket where DMS stores the results of this assessment run. ResultLocationBucket *string // Folder in an Amazon S3 bucket where DMS stores the results of this assessment // run. ResultLocationFolder *string // Result statistics for a completed assessment run, showing aggregated // statistics of IndividualAssessments for how many assessments were passed, // failed, or encountered issues such as errors or warnings. ResultStatistic *ReplicationTaskAssessmentRunResultStatistic // ARN of the service role used to start the assessment run using the // StartReplicationTaskAssessmentRun operation. The role must allow the // iam:PassRole action. ServiceAccessRoleArn *string // Assessment run status. // // This status can have one of the following values: // // - "cancelling" – The assessment run was canceled by the // CancelReplicationTaskAssessmentRun operation. // // - "deleting" – The assessment run was deleted by the // DeleteReplicationTaskAssessmentRun operation. // // - "failed" – At least one individual assessment completed with a failed status. // // - "error-provisioning" – An internal error occurred while resources were // provisioned (during provisioning status). // // - "error-executing" – An internal error occurred while individual assessments // ran (during running status). // // - "invalid state" – The assessment run is in an unknown state. // // - "passed" – All individual assessments have completed, and none has a failed // status. // // - "provisioning" – Resources required to run individual assessments are being // provisioned. // // - "running" – Individual assessments are being run. // // - "starting" – The assessment run is starting, but resources are not yet being // provisioned for individual assessments. Status *string // contains filtered or unexported fields }
Provides information that describes a premigration assessment run that you have started using the StartReplicationTaskAssessmentRun operation.
Some of the information appears based on other operations that can return the ReplicationTaskAssessmentRun object.
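The Status string, together with AssessmentProgress and ResultStatistic, is enough to report on a run without further calls. A small illustrative helper; the nil checks reflect that both sub-objects are pointers and may be absent.

    package example

    import (
        "fmt"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
    )

    // describeAssessmentRun renders the state of a premigration assessment run.
    func describeAssessmentRun(run types.ReplicationTaskAssessmentRun) string {
        msg := fmt.Sprintf("assessment run %s: status=%s",
            aws.ToString(run.ReplicationTaskAssessmentRunArn), aws.ToString(run.Status))
        if p := run.AssessmentProgress; p != nil {
            msg += fmt.Sprintf(" completed=%d/%d",
                p.IndividualAssessmentCompletedCount, p.IndividualAssessmentCount)
        }
        if s := run.ResultStatistic; s != nil {
            msg += fmt.Sprintf(" passed=%d failed=%d warning=%d error=%d cancelled=%d",
                s.Passed, s.Failed, s.Warning, s.Error, s.Cancelled)
        }
        return msg
    }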
type ReplicationTaskAssessmentRunProgress ¶
type ReplicationTaskAssessmentRunProgress struct { // The number of individual assessments that have completed, successfully or not. IndividualAssessmentCompletedCount int32 // The number of individual assessments that are specified to run. IndividualAssessmentCount int32 // contains filtered or unexported fields }
The progress values reported by the AssessmentProgress response element.
type ReplicationTaskAssessmentRunResultStatistic ¶ added in v1.44.0
type ReplicationTaskAssessmentRunResultStatistic struct { // The number of individual assessments that were cancelled during the assessment // run. Cancelled int32 // The number of individual assessments that encountered a critical error and // could not complete properly. Error int32 // The number of individual assessments that failed to meet the criteria defined // in the assessment run. Failed int32 // The number of individual assessments that successfully passed all checks in the // assessment run. Passed int32 // Indicates that the recently completed AssessmentRun triggered a warning. Warning int32 // contains filtered or unexported fields }
The object containing the result statistics for a completed assessment run.
type ReplicationTaskIndividualAssessment ¶
type ReplicationTaskIndividualAssessment struct { // Name of this individual assessment. IndividualAssessmentName *string // ARN of the premigration assessment run that is created to run this individual // assessment. ReplicationTaskAssessmentRunArn *string // Amazon Resource Name (ARN) of this individual assessment. ReplicationTaskIndividualAssessmentArn *string // Date when this individual assessment was started as part of running the // StartReplicationTaskAssessmentRun operation. ReplicationTaskIndividualAssessmentStartDate *time.Time // Individual assessment status. // // This status can have one of the following values: // // - "cancelled" // // - "error" // // - "failed" // // - "passed" // // - "pending" // // - "running" Status *string // contains filtered or unexported fields }
Provides information that describes an individual assessment from a premigration assessment run.
type ReplicationTaskStats ¶
type ReplicationTaskStats struct { // The elapsed time of the task, in milliseconds. ElapsedTimeMillis int64 // The date the replication task was started either with a fresh start or a target // reload. FreshStartDate *time.Time // The date the replication task full load was completed. FullLoadFinishDate *time.Time // The percent complete for the full load migration task. FullLoadProgressPercent int32 // The date the replication task full load was started. FullLoadStartDate *time.Time // The date the replication task was started either with a fresh start or a // resume. For more information, see [StartReplicationTaskType]. // // [StartReplicationTaskType]: https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTask.html#DMS-StartReplicationTask-request-StartReplicationTaskType StartDate *time.Time // The date the replication task was stopped. StopDate *time.Time // The number of errors that have occurred during this task. TablesErrored int32 // The number of tables loaded for this task. TablesLoaded int32 // The number of tables currently loading for this task. TablesLoading int32 // The number of tables queued for this task. TablesQueued int32 // contains filtered or unexported fields }
In response to a request by the DescribeReplicationTasks operation, this object provides a collection of statistics about a replication task.
type ResourceAlreadyExistsFault ¶
type ResourceAlreadyExistsFault struct { Message *string ErrorCodeOverride *string ResourceArn *string // contains filtered or unexported fields }
The resource you are attempting to create already exists.
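Like the other faults in this package, this type is returned wrapped in the operation error and can be detected with errors.As, which is useful for making creation steps idempotent. A sketch follows; the creation call itself is elided and only the error inspection is shown.

    package example

    import (
        "errors"

        "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
    )

    // ignoreAlreadyExists treats "already exists" as success so that a creation
    // step can be safely re-run, while still surfacing every other failure.
    func ignoreAlreadyExists(err error) error {
        var exists *types.ResourceAlreadyExistsFault
        if errors.As(err, &exists) {
            // The fault also carries the ARN of the conflicting resource.
            _ = exists.ResourceArn
            return nil
        }
        return err
    }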
func (*ResourceAlreadyExistsFault) Error ¶
func (e *ResourceAlreadyExistsFault) Error() string
func (*ResourceAlreadyExistsFault) ErrorCode ¶
func (e *ResourceAlreadyExistsFault) ErrorCode() string
func (*ResourceAlreadyExistsFault) ErrorFault ¶
func (e *ResourceAlreadyExistsFault) ErrorFault() smithy.ErrorFault
func (*ResourceAlreadyExistsFault) ErrorMessage ¶
func (e *ResourceAlreadyExistsFault) ErrorMessage() string
type ResourceNotFoundFault ¶
type ResourceNotFoundFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The resource could not be found.
func (*ResourceNotFoundFault) Error ¶
func (e *ResourceNotFoundFault) Error() string
func (*ResourceNotFoundFault) ErrorCode ¶
func (e *ResourceNotFoundFault) ErrorCode() string
func (*ResourceNotFoundFault) ErrorFault ¶
func (e *ResourceNotFoundFault) ErrorFault() smithy.ErrorFault
func (*ResourceNotFoundFault) ErrorMessage ¶
func (e *ResourceNotFoundFault) ErrorMessage() string
type ResourcePendingMaintenanceActions ¶
type ResourcePendingMaintenanceActions struct { // Detailed information about the pending maintenance action. PendingMaintenanceActionDetails []PendingMaintenanceAction // The Amazon Resource Name (ARN) of the DMS resource that the pending maintenance // action applies to. For information about creating an ARN, see [Constructing an Amazon Resource Name (ARN) for DMS]in the DMS // documentation. // // [Constructing an Amazon Resource Name (ARN) for DMS]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.AWS.ARN.html ResourceIdentifier *string // contains filtered or unexported fields }
Identifies a DMS resource and any pending actions for it.
type ResourceQuotaExceededFault ¶
type ResourceQuotaExceededFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The quota for this resource has been exceeded.
func (*ResourceQuotaExceededFault) Error ¶
func (e *ResourceQuotaExceededFault) Error() string
func (*ResourceQuotaExceededFault) ErrorCode ¶
func (e *ResourceQuotaExceededFault) ErrorCode() string
func (*ResourceQuotaExceededFault) ErrorFault ¶
func (e *ResourceQuotaExceededFault) ErrorFault() smithy.ErrorFault
func (*ResourceQuotaExceededFault) ErrorMessage ¶
func (e *ResourceQuotaExceededFault) ErrorMessage() string
type S3AccessDeniedFault ¶
type S3AccessDeniedFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
Insufficient privileges are preventing access to an Amazon S3 object.
func (*S3AccessDeniedFault) Error ¶
func (e *S3AccessDeniedFault) Error() string
func (*S3AccessDeniedFault) ErrorCode ¶
func (e *S3AccessDeniedFault) ErrorCode() string
func (*S3AccessDeniedFault) ErrorFault ¶
func (e *S3AccessDeniedFault) ErrorFault() smithy.ErrorFault
func (*S3AccessDeniedFault) ErrorMessage ¶
func (e *S3AccessDeniedFault) ErrorMessage() string
type S3ResourceNotFoundFault ¶
type S3ResourceNotFoundFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
A specified Amazon S3 bucket, bucket folder, or other object can't be found.
func (*S3ResourceNotFoundFault) Error ¶
func (e *S3ResourceNotFoundFault) Error() string
func (*S3ResourceNotFoundFault) ErrorCode ¶
func (e *S3ResourceNotFoundFault) ErrorCode() string
func (*S3ResourceNotFoundFault) ErrorFault ¶
func (e *S3ResourceNotFoundFault) ErrorFault() smithy.ErrorFault
func (*S3ResourceNotFoundFault) ErrorMessage ¶
func (e *S3ResourceNotFoundFault) ErrorMessage() string
type S3Settings ¶
type S3Settings struct { // An optional parameter that, when set to true or y , you can use to add column // name information to the .csv output file. // // The default value is false . Valid values are true , false , y , and n . AddColumnName *bool // Use the S3 target endpoint setting AddTrailingPaddingCharacter to add padding // on string data. The default value is false . AddTrailingPaddingCharacter *bool // An optional parameter to set a folder name in the S3 bucket. If provided, // tables are created in the path bucketFolder/schema_name/table_name/ . If this // parameter isn't specified, then the path used is schema_name/table_name/ . BucketFolder *string // The name of the S3 bucket. BucketName *string // A value that enables DMS to specify a predefined (canned) access control list // for objects created in an Amazon S3 bucket as .csv or .parquet files. For more // information about Amazon S3 canned ACLs, see [Canned ACL]in the Amazon S3 Developer Guide. // // The default value is NONE. Valid values include NONE, PRIVATE, PUBLIC_READ, // PUBLIC_READ_WRITE, AUTHENTICATED_READ, AWS_EXEC_READ, BUCKET_OWNER_READ, and // BUCKET_OWNER_FULL_CONTROL. // // [Canned ACL]: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl CannedAclForObjects CannedAclForObjectsValue // A value that enables a change data capture (CDC) load to write INSERT and // UPDATE operations to .csv or .parquet (columnar storage) output files. The // default setting is false , but when CdcInsertsAndUpdates is set to true or y , // only INSERTs and UPDATEs from the source database are migrated to the .csv or // .parquet file. // // DMS supports the use of the .parquet files in versions 3.4.7 and later. // // How these INSERTs and UPDATEs are recorded depends on the value of the // IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the // first field of every CDC record is set to either I or U to indicate INSERT and // UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , // CDC records are written without an indication of INSERT or UPDATE operations at // the source. For more information about how these settings work together, see [Indicating Source DB Operations in Migrated S3 Data]in // the Database Migration Service User Guide.. // // DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 // and later. // // CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same // endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the // same endpoint, but not both. // // [Indicating Source DB Operations in Migrated S3 Data]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring.InsertOps CdcInsertsAndUpdates *bool // A value that enables a change data capture (CDC) load to write only INSERT // operations to .csv or columnar storage (.parquet) output files. By default (the // false setting), the first field in a .csv or .parquet record contains the letter // I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was // inserted, updated, or deleted at the source database for a CDC load to the // target. // // If CdcInsertsOnly is set to true or y , only INSERTs from the source database // are migrated to the .csv or .parquet file. For .csv format only, how these // INSERTs are recorded depends on the value of IncludeOpForFullLoad . 
If // IncludeOpForFullLoad is set to true , the first field of every CDC record is set // to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is // set to false , every CDC record is written without a first field to indicate the // INSERT operation at the source. For more information about how these settings // work together, see [Indicating Source DB Operations in Migrated S3 Data]in the Database Migration Service User Guide.. // // DMS supports the interaction described preceding between the CdcInsertsOnly and // IncludeOpForFullLoad parameters in versions 3.1.4 and later. // // CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same // endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the // same endpoint, but not both. // // [Indicating Source DB Operations in Migrated S3 Data]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring.InsertOps CdcInsertsOnly *bool // Maximum length of the interval, defined in seconds, after which to output a // file to Amazon S3. // // When CdcMaxBatchInterval and CdcMinFileSize are both specified, the file write // is triggered by whichever parameter condition is met first within an DMS // CloudFormation template. // // The default value is 60 seconds. CdcMaxBatchInterval *int32 // Minimum file size, defined in kilobytes, to reach for a file output to Amazon // S3. // // When CdcMinFileSize and CdcMaxBatchInterval are both specified, the file write // is triggered by whichever parameter condition is met first within an DMS // CloudFormation template. // // The default value is 32 MB. CdcMinFileSize *int32 // Specifies the folder path of CDC files. For an S3 source, this setting is // required if a task captures change data; otherwise, it's optional. If CdcPath // is set, DMS reads CDC files from this path and replicates the data changes to // the target endpoint. For an S3 target if you set [PreserveTransactions]PreserveTransactions to true , // DMS verifies that you have set this parameter to a folder path on your S3 target // where DMS can save the transaction order for the CDC load. DMS creates this CDC // folder path in either your S3 target working directory or the S3 target location // specified by [BucketFolder]BucketFolder and [BucketName]BucketName . // // For example, if you specify CdcPath as MyChangedData , and you specify // BucketName as MyTargetBucket but do not specify BucketFolder , DMS creates the // CDC folder path following: MyTargetBucket/MyChangedData . // // If you specify the same CdcPath , and you specify BucketName as MyTargetBucket // and BucketFolder as MyTargetData , DMS creates the CDC folder path following: // MyTargetBucket/MyTargetData/MyChangedData . // // For more information on CDC including transaction order on an S3 target, see [Capturing data changes (CDC) including transaction order on the S3 target]. // // This setting is supported in DMS versions 3.4.2 and later. 
// // [Capturing data changes (CDC) including transaction order on the S3 target]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.EndpointSettings.CdcPath // [PreserveTransactions]: https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-PreserveTransactions // [BucketName]: https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketName // [BucketFolder]: https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-BucketFolder CdcPath *string // An optional parameter to use GZIP to compress the target files. Set to GZIP to // compress the target files. Either set this parameter to NONE (the default) or // don't use it to leave the files uncompressed. This parameter applies to both // .csv and .parquet file formats. CompressionType CompressionTypeValue // The delimiter used to separate columns in the .csv file for both source and // target. The default is a comma. CsvDelimiter *string // This setting only applies if your Amazon S3 output files during a change data // capture (CDC) load are written in .csv format. If [UseCsvNoSupValue]UseCsvNoSupValue is set to // true, specify a string value that you want DMS to use for all columns not // included in the supplemental log. If you do not specify a string value, DMS uses // the null value for these columns regardless of the UseCsvNoSupValue setting. // // This setting is supported in DMS versions 3.4.1 and later. // // [UseCsvNoSupValue]: https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-UseCsvNoSupValue CsvNoSupValue *string // An optional parameter that specifies how DMS treats null values. While handling // the null value, you can use this parameter to pass a user-defined string as null // when writing to the target. For example, when target columns are nullable, you // can use this option to differentiate between the empty string value and the null // value. So, if you set this parameter value to the empty string ("" or ”), DMS // treats the empty string as the null value instead of NULL . // // The default value is NULL . Valid values include any valid string. CsvNullValue *string // The delimiter used to separate rows in the .csv file for both source and // target. The default is a carriage return ( \n ). CsvRowDelimiter *string // The format of the data that you want to use for output. You can choose one of // the following: // // - csv : This is a row-based file format with comma-separated values (.csv). // // - parquet : Apache Parquet (.parquet) is a columnar storage file format that // features efficient compression and provides faster query response. DataFormat DataFormatValue // The size of one data page in bytes. This parameter defaults to 1024 * 1024 // bytes (1 MiB). This number is used for .parquet file format only. DataPageSize *int32 // Specifies a date separating delimiter to use during folder partitioning. The // default value is SLASH . Use this parameter when DatePartitionedEnabled is set // to true . DatePartitionDelimiter DatePartitionDelimiterValue // When set to true , this parameter partitions S3 bucket folders based on // transaction commit dates. The default value is false . For more information // about date-based folder partitioning, see [Using date-based folder partitioning]. 
// // [Using date-based folder partitioning]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.DatePartitioning DatePartitionEnabled *bool // Identifies the sequence of the date format to use during folder partitioning. // The default value is YYYYMMDD . Use this parameter when DatePartitionedEnabled // is set to true . DatePartitionSequence DatePartitionSequenceValue // When creating an S3 target endpoint, set DatePartitionTimezone to convert the // current UTC time into a specified time zone. The conversion occurs when a date // partition folder is created and a CDC filename is generated. The time zone // format is Area/Location. Use this parameter when DatePartitionedEnabled is set // to true , as shown in the following example. // // s3-settings='{"DatePartitionEnabled": true, "DatePartitionSequence": // "YYYYMMDDHH", "DatePartitionDelimiter": "SLASH", // "DatePartitionTimezone":"Asia/Seoul", "BucketName": "dms-nattarat-test"}' DatePartitionTimezone *string // The maximum size of an encoded dictionary page of a column. If the dictionary // page exceeds this, this column is stored using an encoding type of PLAIN . This // parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a // dictionary page before it reverts to PLAIN encoding. This size is used for // .parquet file format only. DictPageSizeLimit *int32 // A value that enables statistics for Parquet pages and row groups. Choose true // to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX // , and MIN values. This parameter defaults to true . This value is used for // .parquet file format only. EnableStatistics *bool // The type of encoding you are using: // // - RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to // store repeated values more efficiently. This is the default. // // - PLAIN doesn't use encoding at all. Values are stored as they are. // // - PLAIN_DICTIONARY builds a dictionary of the values encountered in a given // column. The dictionary is stored in a dictionary page for each column chunk. EncodingType EncodingTypeValue // The type of server-side encryption that you want to use for your data. This // encryption type is part of the endpoint settings or the extra connections // attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS // . // // For the ModifyEndpoint operation, you can change the existing value of the // EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the // existing value from SSE_S3 to SSE_KMS . // // To use SSE_S3 , you need an Identity and Access Management (IAM) role with // permission to allow "arn:aws:s3:::dms-*" to use the following actions: // // - s3:CreateBucket // // - s3:ListBucket // // - s3:DeleteBucket // // - s3:GetBucketLocation // // - s3:GetObject // // - s3:PutObject // // - s3:DeleteObject // // - s3:GetObjectVersion // // - s3:GetBucketPolicy // // - s3:PutBucketPolicy // // - s3:DeleteBucketPolicy EncryptionMode EncryptionModeValue // To specify a bucket owner and prevent sniping, you can use the // ExpectedBucketOwner endpoint setting. // // Example: --s3-settings='{"ExpectedBucketOwner": "AWS_Account_ID"}' // // When you make a request to test a connection or perform a migration, S3 checks // the account ID of the bucket owner against the specified parameter. ExpectedBucketOwner *string // Specifies how tables are defined in the S3 source files only. 
ExternalTableDefinition *string // When true, allows Glue to catalog your S3 bucket. Creating an Glue catalog lets // you use Athena to query your data. GlueCatalogGeneration *bool // When this value is set to 1, DMS ignores the first row header in a .csv file. A // value of 1 turns on the feature; a value of 0 turns off the feature. // // The default is 0. IgnoreHeaderRows *int32 // A value that enables a full load to write INSERT operations to the // comma-separated value (.csv) or .parquet output files only to indicate how the // rows were added to the source database. // // DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later. // // DMS supports the use of the .parquet files with the IncludeOpForFullLoad // parameter in versions 3.4.7 and later. // // For full load, records can only be inserted. By default (the false setting), no // information is recorded in these output files for a full load to indicate that // the rows were inserted at the source database. If IncludeOpForFullLoad is set // to true or y , the INSERT is recorded as an I annotation in the first field of // the .csv file. This allows the format of your target records from a full load to // be consistent with the target records from a CDC load. // // This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates // parameters for output to .csv files only. For more information about how these // settings work together, see [Indicating Source DB Operations in Migrated S3 Data]in the Database Migration Service User Guide.. // // [Indicating Source DB Operations in Migrated S3 Data]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring.InsertOps IncludeOpForFullLoad *bool // A value that specifies the maximum size (in KB) of any .csv file to be created // while migrating to an S3 target during full load. // // The default value is 1,048,576 KB (1 GB). Valid values include 1 to 1,048,576. MaxFileSize *int32 // A value that specifies the precision of any TIMESTAMP column values that are // written to an Amazon S3 object file in .parquet format. // // DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and // later. // // When ParquetTimestampInMillisecond is set to true or y , DMS writes all // TIMESTAMP columns in a .parquet formatted file with millisecond precision. // Otherwise, DMS writes them with microsecond precision. // // Currently, Amazon Athena and Glue can handle only millisecond precision for // TIMESTAMP values. Set this parameter to true for S3 endpoint object files that // are .parquet formatted only if you plan to query or process the data with Athena // or Glue. // // DMS writes any TIMESTAMP column values written to an S3 file in .csv format // with microsecond precision. // // Setting ParquetTimestampInMillisecond has no effect on the string format of the // timestamp column value that is inserted by setting the TimestampColumnName // parameter. ParquetTimestampInMillisecond *bool // The version of the Apache Parquet format that you want to use: parquet_1_0 (the // default) or parquet_2_0 . ParquetVersion ParquetVersionValue // If set to true , DMS saves the transaction order for a change data capture (CDC) // load on the Amazon S3 target specified by [CdcPath]CdcPath . For more information, see [Capturing data changes (CDC) including transaction order on the S3 target]. // // This setting is supported in DMS versions 3.4.2 and later. 
// // [Capturing data changes (CDC) including transaction order on the S3 target]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.EndpointSettings.CdcPath // [CdcPath]: https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-CdcPath PreserveTransactions *bool // For an S3 source, when this value is set to true or y , each leading double // quotation mark has to be followed by an ending double quotation mark. This // formatting complies with RFC 4180. When this value is set to false or n , string // literals are copied to the target as is. In this case, a delimiter (row or // column) signals the end of the field. Thus, you can't use a delimiter as part of // the string, because it signals the end of the value. // // For an S3 target, an optional parameter used to set behavior to comply with RFC // 4180 for data migrated to Amazon S3 using .csv file format only. When this value // is set to true or y using Amazon S3 as a target, if the data has quotation // marks or newline characters in it, DMS encloses the entire column with an // additional pair of double quotation marks ("). Every quotation mark within the // data is repeated twice. // // The default value is true . Valid values include true , false , y , and n . Rfc4180 *bool // The number of rows in a row group. A smaller row group size provides faster // reads. But as the number of row groups grows, the slower writes become. This // parameter defaults to 10,000 rows. This number is used for .parquet file format // only. // // If you choose a value larger than the maximum, RowGroupLength is set to the max // row group length in bytes (64 * 1024 * 1024). RowGroupLength *int32 // If you are using SSE_KMS for the EncryptionMode , provide the KMS key ID. The // key that you use needs an attached policy that enables Identity and Access // Management (IAM) user permissions and allows use of the key. // // Here is a CLI example: aws dms create-endpoint --endpoint-identifier value // --endpoint-type target --engine-name s3 --s3-settings // ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value ServerSideEncryptionKmsKeyId *string // The Amazon Resource Name (ARN) used by the service to access the IAM role. The // role must allow the iam:PassRole action. It is a required parameter that // enables DMS to write and read objects from an S3 bucket. ServiceAccessRoleArn *string // A value that when nonblank causes DMS to add a column with timestamp // information to the endpoint data for an Amazon S3 target. // // DMS supports the TimestampColumnName parameter in versions 3.1.4 and later. // // DMS includes an additional STRING column in the .csv or .parquet object files // of your migrated data when you set TimestampColumnName to a nonblank value. // // For a full load, each row of this timestamp column contains a timestamp for // when the data was transferred from the source to the target by DMS. // // For a change data capture (CDC) load, each row of the timestamp column contains // the timestamp for the commit of that row in the source database. // // The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS // . By default, the precision of this value is in microseconds. For a CDC load, // the rounding of the precision depends on the commit timestamp supported by DMS // for the source database. 
// // When the AddColumnName parameter is set to true , DMS also includes a name for // the timestamp column that you set with TimestampColumnName . TimestampColumnName *string // This setting applies if the S3 output files during a change data capture (CDC) // load are written in .csv format. If set to true for columns not included in the // supplemental log, DMS uses the value specified by [CsvNoSupValue]CsvNoSupValue . If not set or // set to false , DMS uses the null value for these columns. // // This setting is supported in DMS versions 3.4.1 and later. // // [CsvNoSupValue]: https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html#DMS-Type-S3Settings-CsvNoSupValue UseCsvNoSupValue *bool // When set to true, this parameter uses the task start time as the timestamp // column value instead of the time data is written to target. For full load, when // useTaskStartTimeForFullLoadTimestamp is set to true , each row of the timestamp // column contains the task start time. For CDC loads, each row of the timestamp // column contains the transaction commit time. // // When useTaskStartTimeForFullLoadTimestamp is set to false , the full load // timestamp in the timestamp column increments with the time data arrives at the // target. UseTaskStartTimeForFullLoadTimestamp *bool // contains filtered or unexported fields }
Settings for exporting data to Amazon S3.
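In practice this struct is supplied as the S3Settings member of CreateEndpoint or ModifyEndpoint in the parent client package. The following sketch configures an S3 target that writes GZIP-compressed Parquet files; the bucket name, folder, and role ARN are placeholders, the DataFormat and CompressionType constants correspond to the parquet and GZIP values described above, and the endpoint type constant comes from the ReplicationEndpointTypeValue enum in this package.

    package example

    import (
        "context"

        "github.com/aws/aws-sdk-go-v2/aws"
        dms "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice"
        "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
    )

    func createS3Target(ctx context.Context, client *dms.Client) error {
        _, err := client.CreateEndpoint(ctx, &dms.CreateEndpointInput{
            EndpointIdentifier: aws.String("my-s3-target"),
            EndpointType:       types.ReplicationEndpointTypeValueTarget,
            EngineName:         aws.String("s3"),
            S3Settings: &types.S3Settings{
                BucketName:           aws.String("my-dms-bucket"),
                BucketFolder:         aws.String("migrated"),
                ServiceAccessRoleArn: aws.String("arn:aws:iam::123456789012:role/dms-s3-access"),
                DataFormat:           types.DataFormatValueParquet,
                CompressionType:      types.CompressionTypeValueGzip,
                // Annotate every record with a transfer/commit timestamp column.
                TimestampColumnName: aws.String("dms_ts"),
                // Record full-load INSERTs with an I marker so full-load and CDC output match.
                IncludeOpForFullLoad: aws.Bool(true),
            },
        })
        return err
    }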
type SCApplicationAttributes ¶ added in v1.30.0
type SCApplicationAttributes struct { // The path for the Amazon S3 bucket that the application uses for exporting // assessment reports. S3BucketPath *string // The ARN for the role the application uses to access its Amazon S3 bucket. S3BucketRoleArn *string // contains filtered or unexported fields }
Provides information that defines a schema conversion application.
type SNSInvalidTopicFault ¶
type SNSInvalidTopicFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The SNS topic is invalid.
func (*SNSInvalidTopicFault) Error ¶
func (e *SNSInvalidTopicFault) Error() string
func (*SNSInvalidTopicFault) ErrorCode ¶
func (e *SNSInvalidTopicFault) ErrorCode() string
func (*SNSInvalidTopicFault) ErrorFault ¶
func (e *SNSInvalidTopicFault) ErrorFault() smithy.ErrorFault
func (*SNSInvalidTopicFault) ErrorMessage ¶
func (e *SNSInvalidTopicFault) ErrorMessage() string
type SNSNoAuthorizationFault ¶
type SNSNoAuthorizationFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
You are not authorized for the SNS subscription.
func (*SNSNoAuthorizationFault) Error ¶
func (e *SNSNoAuthorizationFault) Error() string
func (*SNSNoAuthorizationFault) ErrorCode ¶
func (e *SNSNoAuthorizationFault) ErrorCode() string
func (*SNSNoAuthorizationFault) ErrorFault ¶
func (e *SNSNoAuthorizationFault) ErrorFault() smithy.ErrorFault
func (*SNSNoAuthorizationFault) ErrorMessage ¶
func (e *SNSNoAuthorizationFault) ErrorMessage() string
type SafeguardPolicy ¶ added in v0.29.0
type SafeguardPolicy string
const ( SafeguardPolicyRelyOnSqlServerReplicationAgent SafeguardPolicy = "rely-on-sql-server-replication-agent" SafeguardPolicyExclusiveAutomaticTruncation SafeguardPolicy = "exclusive-automatic-truncation" )
Enum values for SafeguardPolicy
func (SafeguardPolicy) Values ¶ added in v0.29.0
func (SafeguardPolicy) Values() []SafeguardPolicy
Values returns all known values for SafeguardPolicy. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type SchemaConversionRequest ¶ added in v1.30.0
type SchemaConversionRequest struct { // Provides error information about a project. Error ErrorDetails // Provides information about a metadata model assessment exported to SQL. ExportSqlDetails *ExportSqlDetails // The migration project ARN. MigrationProjectArn *string // The identifier for the schema conversion action. RequestIdentifier *string // The schema conversion action status. Status *string // contains filtered or unexported fields }
Provides information about a schema conversion action.
type SchemaResponse ¶ added in v1.19.0
type SchemaResponse struct { // The number of lines of code in a schema in a Fleet Advisor collector inventory. CodeLineCount *int64 // The size level of the code in a schema in a Fleet Advisor collector inventory. CodeSize *int64 // The complexity level of the code in a schema in a Fleet Advisor collector // inventory. Complexity *string // The database for a schema in a Fleet Advisor collector inventory. DatabaseInstance *DatabaseShortInfoResponse // Describes a schema in a Fleet Advisor collector inventory. OriginalSchema *SchemaShortInfoResponse // The ID of a schema in a Fleet Advisor collector inventory. SchemaId *string // The name of a schema in a Fleet Advisor collector inventory. SchemaName *string // The database server for a schema in a Fleet Advisor collector inventory. Server *ServerShortInfoResponse // The similarity value for a schema in a Fleet Advisor collector inventory. A // higher similarity value indicates that a schema is likely to be a duplicate. Similarity *float64 // contains filtered or unexported fields }
Describes a schema in a Fleet Advisor collector inventory.
type SchemaShortInfoResponse ¶ added in v1.19.0
type SchemaShortInfoResponse struct { // The ID of a database in a Fleet Advisor collector inventory. DatabaseId *string // The IP address of a database in a Fleet Advisor collector inventory. DatabaseIpAddress *string // The name of a database in a Fleet Advisor collector inventory. DatabaseName *string // The ID of a schema in a Fleet Advisor collector inventory. SchemaId *string // The name of a schema in a Fleet Advisor collector inventory. SchemaName *string // contains filtered or unexported fields }
Describes a schema in a Fleet Advisor collector inventory.
type ServerShortInfoResponse ¶ added in v1.19.0
type ServerShortInfoResponse struct { // The IP address of a server in a Fleet Advisor collector inventory. IpAddress *string // The ID of a server in a Fleet Advisor collector inventory. ServerId *string // The name of a server in a Fleet Advisor collector inventory. ServerName *string // contains filtered or unexported fields }
Describes a server in a Fleet Advisor collector inventory.
type SourceDataSetting ¶ added in v1.43.0
type SourceDataSetting struct { // The change data capture (CDC) start position for the source data provider. CDCStartPosition *string // The change data capture (CDC) start time for the source data provider. CDCStartTime *time.Time // The change data capture (CDC) stop time for the source data provider. CDCStopTime *time.Time // The name of the replication slot on the source data provider. This attribute is // only valid for a PostgreSQL or Aurora PostgreSQL source. SlotName *string // contains filtered or unexported fields }
Defines settings for a source data provider for a data migration.
type SourceType ¶
type SourceType string
const (
SourceTypeReplicationInstance SourceType = "replication-instance"
)
Enum values for SourceType
func (SourceType) Values ¶ added in v0.29.0
func (SourceType) Values() []SourceType
Values returns all known values for SourceType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type SslSecurityProtocolValue ¶ added in v1.7.0
type SslSecurityProtocolValue string
const ( SslSecurityProtocolValuePlaintext SslSecurityProtocolValue = "plaintext" SslSecurityProtocolValueSslEncryption SslSecurityProtocolValue = "ssl-encryption" )
Enum values for SslSecurityProtocolValue
func (SslSecurityProtocolValue) Values ¶ added in v1.7.0
func (SslSecurityProtocolValue) Values() []SslSecurityProtocolValue
Values returns all known values for SslSecurityProtocolValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type StartRecommendationsRequestEntry ¶ added in v1.24.0
type StartRecommendationsRequestEntry struct { // The identifier of the source database. // // This member is required. DatabaseId *string // The required target engine settings. // // This member is required. Settings *RecommendationSettings // contains filtered or unexported fields }
Provides information about the source database to analyze and provide target recommendations according to the specified requirements.
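Entries of this type are batched together for the BatchStartRecommendations operation. A hedged construction sketch follows; the RecommendationSettings member names come from this package, but the literal sizing and workload values are illustrative assumptions rather than an exhaustive list.

    package example

    import (
        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
    )

    // recommendationEntries builds one request entry per source database ID,
    // all sharing the same target-sizing settings.
    func recommendationEntries(databaseIDs []string) []types.StartRecommendationsRequestEntry {
        settings := &types.RecommendationSettings{
            // Both members are required; the values below are example assumptions.
            InstanceSizingType: aws.String("total-storage-size"),
            WorkloadType:       aws.String("oltp"),
        }
        entries := make([]types.StartRecommendationsRequestEntry, 0, len(databaseIDs))
        for _, id := range databaseIDs {
            entries = append(entries, types.StartRecommendationsRequestEntry{
                DatabaseId: aws.String(id),
                Settings:   settings,
            })
        }
        return entries
    }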
type StartReplicationMigrationTypeValue ¶ added in v1.43.0
type StartReplicationMigrationTypeValue string
const ( StartReplicationMigrationTypeValueReloadTarget StartReplicationMigrationTypeValue = "reload-target" StartReplicationMigrationTypeValueResumeProcessing StartReplicationMigrationTypeValue = "resume-processing" StartReplicationMigrationTypeValueStartReplication StartReplicationMigrationTypeValue = "start-replication" )
Enum values for StartReplicationMigrationTypeValue
func (StartReplicationMigrationTypeValue) Values ¶ added in v1.43.0
func (StartReplicationMigrationTypeValue) Values() []StartReplicationMigrationTypeValue
Values returns all known values for StartReplicationMigrationTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type StartReplicationTaskTypeValue ¶
type StartReplicationTaskTypeValue string
const ( StartReplicationTaskTypeValueStartReplication StartReplicationTaskTypeValue = "start-replication" StartReplicationTaskTypeValueResumeProcessing StartReplicationTaskTypeValue = "resume-processing" StartReplicationTaskTypeValueReloadTarget StartReplicationTaskTypeValue = "reload-target" )
Enum values for StartReplicationTaskTypeValue
func (StartReplicationTaskTypeValue) Values ¶ added in v0.29.0
func (StartReplicationTaskTypeValue) Values() []StartReplicationTaskTypeValue
Values returns all known values for StartReplicationTaskTypeValue. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
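These values are passed as the StartReplicationTaskType member of StartReplicationTask in the parent client package. A minimal sketch that resumes an existing task; the ARN is supplied by the caller and the input shape is assumed from the client package.

    package example

    import (
        "context"

        "github.com/aws/aws-sdk-go-v2/aws"
        dms "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice"
        "github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
    )

    // resumeTask restarts a previously stopped replication task from where it
    // left off, rather than reloading the target from scratch.
    func resumeTask(ctx context.Context, client *dms.Client, taskARN string) error {
        _, err := client.StartReplicationTask(ctx, &dms.StartReplicationTaskInput{
            ReplicationTaskArn:       aws.String(taskARN),
            StartReplicationTaskType: types.StartReplicationTaskTypeValueResumeProcessing,
        })
        return err
    }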
type StorageQuotaExceededFault ¶
type StorageQuotaExceededFault struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The storage quota has been exceeded.
func (*StorageQuotaExceededFault) Error ¶
func (e *StorageQuotaExceededFault) Error() string
func (*StorageQuotaExceededFault) ErrorCode ¶
func (e *StorageQuotaExceededFault) ErrorCode() string
func (*StorageQuotaExceededFault) ErrorFault ¶
func (e *StorageQuotaExceededFault) ErrorFault() smithy.ErrorFault
func (*StorageQuotaExceededFault) ErrorMessage ¶
func (e *StorageQuotaExceededFault) ErrorMessage() string
type Subnet ¶
type Subnet struct { // The Availability Zone of the subnet. SubnetAvailabilityZone *AvailabilityZone // The subnet identifier. SubnetIdentifier *string // The status of the subnet. SubnetStatus *string // contains filtered or unexported fields }
In response to a request by the DescribeReplicationSubnetGroups operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
type SubnetAlreadyInUse ¶
type SubnetAlreadyInUse struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The specified subnet is already in use.
func (*SubnetAlreadyInUse) Error ¶
func (e *SubnetAlreadyInUse) Error() string
func (*SubnetAlreadyInUse) ErrorCode ¶
func (e *SubnetAlreadyInUse) ErrorCode() string
func (*SubnetAlreadyInUse) ErrorFault ¶
func (e *SubnetAlreadyInUse) ErrorFault() smithy.ErrorFault
func (*SubnetAlreadyInUse) ErrorMessage ¶
func (e *SubnetAlreadyInUse) ErrorMessage() string
type SupportedEndpointType ¶
type SupportedEndpointType struct { // The type of endpoint. Valid values are source and target . EndpointType ReplicationEndpointTypeValue // The expanded name for the engine name. For example, if the EngineName parameter // is "aurora", this value would be "Amazon Aurora MySQL". EngineDisplayName *string // The database engine name. Valid values, depending on the EndpointType, include // "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , // "redshift" , "s3" , "db2" , "db2-zos" , "azuredb" , "sybase" , "dynamodb" , // "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , "sqlserver" , // "neptune" , and "babelfish" . EngineName *string // The earliest DMS engine version that supports this endpoint engine. Note that // endpoint engines released with DMS versions earlier than 3.1.1 do not return a // value for this parameter. ReplicationInstanceEngineMinimumVersion *string // Indicates if change data capture (CDC) is supported. SupportsCDC bool // contains filtered or unexported fields }
Provides information about types of supported endpoints in response to a request by the DescribeEndpointTypes operation. This information includes the type of endpoint, the database engine name, and whether change data capture (CDC) is supported.
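The sketch below is an illustrative filter over a slice of SupportedEndpointType values, such as the DescribeEndpointTypes operation returns; it assumes the ReplicationEndpointTypeValueTarget and ReplicationEndpointTypeValueSource constants from this package and uses a hand-built sample rather than a live response.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// cdcTargets returns the engine names of target endpoint types that support CDC.
func cdcTargets(endpointTypes []types.SupportedEndpointType) []string {
	var names []string
	for _, et := range endpointTypes {
		if et.EndpointType == types.ReplicationEndpointTypeValueTarget && et.SupportsCDC {
			names = append(names, aws.ToString(et.EngineName))
		}
	}
	return names
}

func main() {
	sample := []types.SupportedEndpointType{
		{EndpointType: types.ReplicationEndpointTypeValueTarget, EngineName: aws.String("kinesis"), SupportsCDC: true},
		{EndpointType: types.ReplicationEndpointTypeValueSource, EngineName: aws.String("mysql"), SupportsCDC: true},
	}
	fmt.Println(cdcTargets(sample)) // [kinesis]
}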
type SybaseSettings ¶
type SybaseSettings struct {

	// Database name for the endpoint.
	DatabaseName *string

	// Endpoint connection password.
	Password *string

	// Endpoint TCP port. The default is 5000.
	Port *int32

	// The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the
	// trusted entity and grants the required permissions to access the value in
	// SecretsManagerSecret. The role must allow the iam:PassRole action.
	// SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager
	// secret that allows access to the SAP ASE endpoint.
	//
	// You can specify one of two sets of values for these permissions. You can
	// specify the values for this setting and SecretsManagerSecretId. Or you can
	// specify clear-text values for UserName, Password, ServerName, and Port. You
	// can't specify both. For more information on creating this SecretsManagerSecret
	// and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to
	// access it, see [Using secrets to access Database Migration Service resources] in the Database Migration Service User Guide.
	//
	// [Using secrets to access Database Migration Service resources]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#security-iam-secretsmanager
	SecretsManagerAccessRoleArn *string

	// The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that
	// contains the SAP ASE endpoint connection details.
	SecretsManagerSecretId *string

	// Fully qualified domain name of the endpoint.
	ServerName *string

	// Endpoint connection user name.
	Username *string
	// contains filtered or unexported fields
}
Provides information that defines a SAP ASE endpoint.
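The sketch below shows one plausible way to populate SybaseSettings using the Secrets Manager option described in the field comments; the ARN and secret name are placeholders, and the clear-text Username/Password/ServerName/Port fields are deliberately left unset because the two approaches are mutually exclusive.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.SybaseSettings{
		DatabaseName: aws.String("sales"), // placeholder
		// Secrets Manager variant: supply the access role and the secret ID
		// instead of clear-text connection values.
		SecretsManagerAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/dms-secrets-access"), // placeholder
		SecretsManagerSecretId:      aws.String("my-sap-ase-endpoint-secret"),                        // placeholder
	}
	fmt.Printf("%+v\n", settings)
}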
type TableStatistics ¶
type TableStatistics struct {

	// The number of data definition language (DDL) statements used to build and
	// modify the structure of your tables applied on the target.
	AppliedDdls *int64

	// The number of delete actions applied on a target table.
	AppliedDeletes *int64

	// The number of insert actions applied on a target table.
	AppliedInserts *int64

	// The number of update actions applied on a target table.
	AppliedUpdates *int64

	// The data definition language (DDL) used to build and modify the structure of
	// your tables.
	Ddls int64

	// The number of delete actions performed on a table.
	Deletes int64

	// The number of rows that failed conditional checks during the full load
	// operation (valid only for migrations where DynamoDB is the target).
	FullLoadCondtnlChkFailedRows int64

	// The time when the full load operation completed.
	FullLoadEndTime *time.Time

	// The number of rows that failed to load during the full load operation (valid
	// only for migrations where DynamoDB is the target).
	FullLoadErrorRows int64

	// A value that indicates if the table was reloaded ( true ) or loaded as part of
	// a new full load operation ( false ).
	FullLoadReloaded *bool

	// The number of rows added during the full load operation.
	FullLoadRows int64

	// The time when the full load operation started.
	FullLoadStartTime *time.Time

	// The number of insert actions performed on a table.
	Inserts int64

	// The last time a table was updated.
	LastUpdateTime *time.Time

	// The schema name.
	SchemaName *string

	// The name of the table.
	TableName *string

	// The state of the tables described.
	//
	// Valid states: Table does not exist | Before load | Full load | Table completed
	// | Table cancelled | Table error | Table is being reloaded
	TableState *string

	// The number of update actions performed on a table.
	Updates int64

	// The number of records that failed validation.
	ValidationFailedRecords int64

	// The number of records that have yet to be validated.
	ValidationPendingRecords int64

	// The validation state of the table.
	//
	// This parameter can have the following values:
	//
	//   - Not enabled – Validation isn't enabled for the table in the migration task.
	//
	//   - Pending records – Some records in the table are waiting for validation.
	//
	//   - Mismatched records – Some records in the table don't match between the
	//     source and target.
	//
	//   - Suspended records – Some records in the table couldn't be validated.
	//
	//   - No primary key – The table couldn't be validated because it has no primary
	//     key.
	//
	//   - Table error – The table wasn't validated because it's in an error state
	//     and some data wasn't migrated.
	//
	//   - Validated – All rows in the table are validated. If the table is updated,
	//     the status can change from Validated.
	//
	//   - Error – The table couldn't be validated because of an unexpected error.
	//
	//   - Pending validation – The table is waiting validation.
	//
	//   - Preparing table – Preparing the table enabled in the migration task for
	//     validation.
	//
	//   - Pending revalidation – All rows in the table are pending validation after
	//     the table was updated.
	ValidationState *string

	// Additional details about the state of validation.
	ValidationStateDetails *string

	// The number of records that couldn't be validated.
	ValidationSuspendedRecords int64
	// contains filtered or unexported fields
}
Provides a collection of table statistics in response to a request by the DescribeTableStatistics operation.
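As an illustration (using a hand-built sample rather than a live DescribeTableStatistics response), the sketch below tallies tables by their ValidationState.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

// validationSummary counts tables per reported validation state.
func validationSummary(stats []types.TableStatistics) map[string]int {
	summary := make(map[string]int)
	for _, ts := range stats {
		state := aws.ToString(ts.ValidationState)
		if state == "" {
			state = "(not reported)" // local convention for a nil ValidationState
		}
		summary[state]++
	}
	return summary
}

func main() {
	sample := []types.TableStatistics{
		{SchemaName: aws.String("public"), TableName: aws.String("orders"), ValidationState: aws.String("Validated")},
		{SchemaName: aws.String("public"), TableName: aws.String("events"), ValidationState: aws.String("Pending records")},
	}
	fmt.Println(validationSummary(sample))
}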
type TableToReload ¶
type TableToReload struct {

	// The schema name of the table to be reloaded.
	//
	// This member is required.
	SchemaName *string

	// The table name of the table to be reloaded.
	//
	// This member is required.
	TableName *string
	// contains filtered or unexported fields
}
Provides the name of the schema and table to be reloaded.
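A minimal sketch of building a slice of TableToReload values, as would be passed to the ReloadTables operation; the schema and table names are placeholders.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	tables := []types.TableToReload{
		{SchemaName: aws.String("public"), TableName: aws.String("orders")},    // placeholder
		{SchemaName: aws.String("public"), TableName: aws.String("customers")}, // placeholder
	}
	fmt.Printf("%d tables queued for reload\n", len(tables))
}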
type Tag ¶
type Tag struct {

	// A key is the required name of the tag. The string value can be 1-128 Unicode
	// characters in length and can't be prefixed with "aws:" or "dms:". The string
	// can contain only the set of Unicode letters, digits, white-space, '_', '.',
	// '/', '=', '+', '-' (Java regular expressions:
	// "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
	Key *string

	// The Amazon Resource Name (ARN) string that uniquely identifies the resource
	// for which the tag is created.
	ResourceArn *string

	// A value is the optional value of the tag. The string value can be 1-256
	// Unicode characters in length and can't be prefixed with "aws:" or "dms:". The
	// string can contain only the set of Unicode letters, digits, white-space, '_',
	// '.', '/', '=', '+', '-' (Java regular expressions:
	// "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
	Value *string
	// contains filtered or unexported fields
}
A user-defined key-value pair that describes metadata added to a DMS resource and that is used by operations such as the following (see the sketch after this list):
AddTagsToResource
ListTagsForResource
RemoveTagsFromResource
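A minimal sketch of assembling a Tag list such as AddTagsToResource accepts; the keys and values are illustrative only.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	tags := []types.Tag{
		{Key: aws.String("Environment"), Value: aws.String("staging")},
		{Key: aws.String("Team"), Value: aws.String("data-platform")},
	}
	for _, t := range tags {
		fmt.Printf("%s=%s\n", aws.ToString(t.Key), aws.ToString(t.Value))
	}
}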
type TargetDbType ¶ added in v0.29.0
type TargetDbType string
const (
	TargetDbTypeSpecificDatabase  TargetDbType = "specific-database"
	TargetDbTypeMultipleDatabases TargetDbType = "multiple-databases"
)
Enum values for TargetDbType
func (TargetDbType) Values ¶ added in v0.29.0
func (TargetDbType) Values() []TargetDbType
Values returns all known values for TargetDbType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
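As a brief illustration, the sketch below assigns one of the TargetDbType constants; it assumes the TargetDbType field of MySQLSettings from this package as the place where the value is consumed.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.MySQLSettings{
		// multiple-databases keeps each source database in its own database
		// on the target instead of consolidating into one specific database.
		TargetDbType: types.TargetDbTypeMultipleDatabases,
	}
	fmt.Println(settings.TargetDbType)
}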
type TimestreamSettings ¶ added in v1.26.0
type TimestreamSettings struct {

	// Database name for the endpoint.
	//
	// This member is required.
	DatabaseName *string

	// Set this attribute to specify the default magnetic duration applied to the
	// Amazon Timestream tables in days. This is the number of days that records
	// remain in magnetic store before being discarded. For more information, see
	// [Storage] in the [Amazon Timestream Developer Guide].
	//
	// [Storage]: https://docs.aws.amazon.com/timestream/latest/developerguide/storage.html
	// [Amazon Timestream Developer Guide]: https://docs.aws.amazon.com/timestream/latest/developerguide/
	//
	// This member is required.
	MagneticDuration *int32

	// Set this attribute to specify the length of time to store all of the tables
	// in memory that are migrated into Amazon Timestream from the source database.
	// Time is measured in units of hours. When Timestream data comes in, it first
	// resides in memory for the specified duration, which allows quick access to it.
	//
	// This member is required.
	MemoryDuration *int32

	// Set this attribute to true to specify that DMS only applies inserts and
	// updates, and not deletes. Amazon Timestream does not allow deleting records,
	// so if this value is false, DMS nulls out the corresponding record in the
	// Timestream database rather than deleting it.
	CdcInsertsAndUpdates *bool

	// Set this attribute to true to enable memory store writes. When this value is
	// false, DMS does not write records that are older in days than the value
	// specified in MagneticDuration, because Amazon Timestream does not allow memory
	// writes by default. For more information, see [Storage] in the [Amazon Timestream Developer Guide].
	//
	// [Storage]: https://docs.aws.amazon.com/timestream/latest/developerguide/storage.html
	// [Amazon Timestream Developer Guide]: https://docs.aws.amazon.com/timestream/latest/developerguide/
	EnableMagneticStoreWrites *bool
	// contains filtered or unexported fields
}
Provides information that defines an Amazon Timestream endpoint.
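The sketch below fills in the required TimestreamSettings members plus the two optional flags; the database name and retention values are illustrative, not recommendations.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func main() {
	settings := types.TimestreamSettings{
		DatabaseName:     aws.String("metrics"), // placeholder
		MagneticDuration: aws.Int32(365),        // days records stay in magnetic store
		MemoryDuration:   aws.Int32(24),         // hours records stay in memory store

		// Optional behavior flags described above.
		CdcInsertsAndUpdates:      aws.Bool(true),
		EnableMagneticStoreWrites: aws.Bool(true),
	}
	fmt.Printf("%+v\n", settings)
}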
type TlogAccessMode ¶ added in v1.25.0
type TlogAccessMode string
const (
	TlogAccessModeBackupOnly   TlogAccessMode = "BackupOnly"
	TlogAccessModePreferBackup TlogAccessMode = "PreferBackup"
	TlogAccessModePreferTlog   TlogAccessMode = "PreferTlog"
	TlogAccessModeTlogOnly     TlogAccessMode = "TlogOnly"
)
Enum values for TlogAccessMode
func (TlogAccessMode) Values ¶ added in v1.25.0
func (TlogAccessMode) Values() []TlogAccessMode
Values returns all known values for TlogAccessMode. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type UnknownUnionMember ¶ added in v1.30.0
type UnknownUnionMember struct {
	Tag   string
	Value []byte
	// contains filtered or unexported fields
}
UnknownUnionMember is returned when a union member is returned over the wire, but has an unknown tag.
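As an illustration of defensive union handling, the sketch below switches over the DataProviderSettings union from this package and treats *UnknownUnionMember as the fallback for members added after this client version; the function is a sketch, not prescribed usage.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/databasemigrationservice/types"
)

func describeSettings(s types.DataProviderSettings) {
	switch v := s.(type) {
	case *types.DataProviderSettingsMemberPostgreSqlSettings:
		fmt.Printf("PostgreSQL data provider: %+v\n", v.Value)
	case *types.UnknownUnionMember:
		// The response carried a member this client version does not know;
		// Tag holds the wire name and Value the raw bytes.
		fmt.Println("unknown union member with tag:", v.Tag)
	default:
		fmt.Printf("other known member: %T\n", v)
	}
}

func main() {
	describeSettings(&types.DataProviderSettingsMemberPostgreSqlSettings{})
}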
type UpgradeDependencyFailureFault ¶
type UpgradeDependencyFailureFault struct {
	Message *string

	ErrorCodeOverride *string
	// contains filtered or unexported fields
}
An upgrade dependency is preventing the database migration.
func (*UpgradeDependencyFailureFault) Error ¶
func (e *UpgradeDependencyFailureFault) Error() string
func (*UpgradeDependencyFailureFault) ErrorCode ¶
func (e *UpgradeDependencyFailureFault) ErrorCode() string
func (*UpgradeDependencyFailureFault) ErrorFault ¶
func (e *UpgradeDependencyFailureFault) ErrorFault() smithy.ErrorFault
func (*UpgradeDependencyFailureFault) ErrorMessage ¶
func (e *UpgradeDependencyFailureFault) ErrorMessage() string
type VersionStatus ¶ added in v1.19.0
type VersionStatus string
const (
	VersionStatusUpToDate    VersionStatus = "UP_TO_DATE"
	VersionStatusOutdated    VersionStatus = "OUTDATED"
	VersionStatusUnsupported VersionStatus = "UNSUPPORTED"
)
Enum values for VersionStatus
func (VersionStatus) Values ¶ added in v1.19.0
func (VersionStatus) Values() []VersionStatus
Values returns all known values for VersionStatus. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type VpcSecurityGroupMembership ¶
type VpcSecurityGroupMembership struct {

	// The status of the VPC security group.
	Status *string

	// The VPC security group ID.
	VpcSecurityGroupId *string
	// contains filtered or unexported fields
}
Describes the status of a security group associated with the virtual private cloud (VPC) hosting your replication and DB instances.