Documentation ¶
Index ¶
- type AccessDeniedException
- type BatchLoadDataFormat
- type BatchLoadProgressReport
- type BatchLoadStatus
- type BatchLoadTask
- type BatchLoadTaskDescription
- type ConflictException
- type CsvConfiguration
- type DataModel
- type DataModelConfiguration
- type DataModelS3Configuration
- type DataSourceConfiguration
- type DataSourceS3Configuration
- type Database
- type Dimension
- type DimensionMapping
- type DimensionValueType
- type Endpoint
- type InternalServerException
- type InvalidEndpointException
- type MagneticStoreRejectedDataLocation
- type MagneticStoreWriteProperties
- type MeasureValue
- type MeasureValueType
- type MixedMeasureMapping
- type MultiMeasureAttributeMapping
- type MultiMeasureMappings
- type PartitionKey
- type PartitionKeyEnforcementLevel
- type PartitionKeyType
- type Record
- type RecordsIngested
- type RejectedRecord
- type RejectedRecordsException
- type ReportConfiguration
- type ReportS3Configuration
- type ResourceNotFoundException
- type RetentionProperties
- type S3Configuration
- type S3EncryptionOption
- type ScalarMeasureValueType
- type Schema
- type ServiceQuotaExceededException
- type Table
- type TableStatus
- type Tag
- type ThrottlingException
- type TimeUnit
- type ValidationException
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AccessDeniedException ¶
type AccessDeniedException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
You are not authorized to perform this action.
func (*AccessDeniedException) Error ¶
func (e *AccessDeniedException) Error() string
func (*AccessDeniedException) ErrorCode ¶
func (e *AccessDeniedException) ErrorCode() string
func (*AccessDeniedException) ErrorFault ¶
func (e *AccessDeniedException) ErrorFault() smithy.ErrorFault
func (*AccessDeniedException) ErrorMessage ¶
func (e *AccessDeniedException) ErrorMessage() string
type BatchLoadDataFormat ¶
type BatchLoadDataFormat string
const (
BatchLoadDataFormatCsv BatchLoadDataFormat = "CSV"
)
Enum values for BatchLoadDataFormat
func (BatchLoadDataFormat) Values ¶
func (BatchLoadDataFormat) Values() []BatchLoadDataFormat
Values returns all known values for BatchLoadDataFormat. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
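Because the set of known values can grow with newer service and client versions, checking input against Values before building a request is a reasonable defensive pattern. A minimal sketch (the helper name is illustrative, not part of the SDK):

import "github.com/aws/aws-sdk-go-v2/service/timestreamwrite/types"

// isKnownDataFormat reports whether s matches a BatchLoadDataFormat value
// known to this version of the client.
func isKnownDataFormat(s string) bool {
    for _, v := range types.BatchLoadDataFormat("").Values() {
        if string(v) == s {
            return true
        }
    }
    return false
}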
type BatchLoadProgressReport ¶
type BatchLoadProgressReport struct {
    BytesMetered int64

    FileFailures int64

    ParseFailures int64

    RecordIngestionFailures int64

    RecordsIngested int64

    RecordsProcessed int64
    // contains filtered or unexported fields
}
Details about the progress of a batch load task.
type BatchLoadStatus ¶
type BatchLoadStatus string
const (
    BatchLoadStatusCreated         BatchLoadStatus = "CREATED"
    BatchLoadStatusInProgress      BatchLoadStatus = "IN_PROGRESS"
    BatchLoadStatusFailed          BatchLoadStatus = "FAILED"
    BatchLoadStatusSucceeded       BatchLoadStatus = "SUCCEEDED"
    BatchLoadStatusProgressStopped BatchLoadStatus = "PROGRESS_STOPPED"
    BatchLoadStatusPendingResume   BatchLoadStatus = "PENDING_RESUME"
)
Enum values for BatchLoadStatus
func (BatchLoadStatus) Values ¶
func (BatchLoadStatus) Values() []BatchLoadStatus
Values returns all known values for BatchLoadStatus. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type BatchLoadTask ¶
type BatchLoadTask struct {
    // The time when the Timestream batch load task was created.
    CreationTime *time.Time

    // Database name for the database into which a batch load task loads data.
    DatabaseName *string

    // The time when the Timestream batch load task was last updated.
    LastUpdatedTime *time.Time

    ResumableUntil *time.Time

    // Table name for the table into which a batch load task loads data.
    TableName *string

    // The ID of the batch load task.
    TaskId *string

    // Status of the batch load task.
    TaskStatus BatchLoadStatus
    // contains filtered or unexported fields
}
Details about a batch load task.
type BatchLoadTaskDescription ¶
type BatchLoadTaskDescription struct {
    // The time when the Timestream batch load task was created.
    CreationTime *time.Time

    // Data model configuration for a batch load task. This contains details about
    // where a data model for a batch load task is stored.
    DataModelConfiguration *DataModelConfiguration

    // Configuration details about the data source for a batch load task.
    DataSourceConfiguration *DataSourceConfiguration

    ErrorMessage *string

    // The time when the Timestream batch load task was last updated.
    LastUpdatedTime *time.Time

    ProgressReport *BatchLoadProgressReport

    RecordVersion int64

    // Report configuration for a batch load task. This contains details about where
    // error reports are stored.
    ReportConfiguration *ReportConfiguration

    ResumableUntil *time.Time

    TargetDatabaseName *string

    TargetTableName *string

    // The ID of the batch load task.
    TaskId *string

    // Status of the batch load task.
    TaskStatus BatchLoadStatus
    // contains filtered or unexported fields
}
Details about a batch load task.
type ConflictException ¶
type ConflictException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
Timestream was unable to process this request because it contains a resource that already exists.
func (*ConflictException) Error ¶
func (e *ConflictException) Error() string
func (*ConflictException) ErrorCode ¶
func (e *ConflictException) ErrorCode() string
func (*ConflictException) ErrorFault ¶
func (e *ConflictException) ErrorFault() smithy.ErrorFault
func (*ConflictException) ErrorMessage ¶
func (e *ConflictException) ErrorMessage() string
type CsvConfiguration ¶
type CsvConfiguration struct {
    // Column separator can be one of comma (','), pipe ('|'), semicolon (';'),
    // tab ('\t'), or blank space (' ').
    ColumnSeparator *string

    // Escape character for the CSV data.
    EscapeChar *string

    // Value used to represent null; can be blank space (' ').
    NullValue *string

    // Can be single quote (') or double quote (").
    QuoteChar *string

    // Specifies whether to trim leading and trailing white space.
    TrimWhiteSpace *bool
    // contains filtered or unexported fields
}
A delimited data format where the column separator can be a comma and the record separator is a newline character.
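A minimal sketch of a CsvConfiguration for a comma-separated source file; the separator, quote character, and null marker shown here are illustrative choices, not defaults. aws.String and aws.Bool are the pointer helpers from github.com/aws/aws-sdk-go-v2/aws.

import (
    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/timestreamwrite/types"
)

// csvCfg describes a comma-separated file with double-quoted fields,
// "NULL" as the null marker, and leading/trailing white space trimmed.
var csvCfg = types.CsvConfiguration{
    ColumnSeparator: aws.String(","),
    QuoteChar:       aws.String(`"`),
    NullValue:       aws.String("NULL"),
    TrimWhiteSpace:  aws.Bool(true),
}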
type DataModel ¶
type DataModel struct {
    // Source to target mappings for dimensions.
    //
    // This member is required.
    DimensionMappings []DimensionMapping

    MeasureNameColumn *string

    // Source to target mappings for measures.
    MixedMeasureMappings []MixedMeasureMapping

    // Source to target mappings for multi-measure records.
    MultiMeasureMappings *MultiMeasureMappings

    // Source column to be mapped to time.
    TimeColumn *string

    // The granularity of the timestamp unit. It indicates if the time value is in
    // seconds, milliseconds, nanoseconds, or other supported values. Default is
    // MILLISECONDS.
    TimeUnit TimeUnit
    // contains filtered or unexported fields
}
Data model for a batch load task.
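As a sketch of how these mappings fit together, the DataModel below maps a hypothetical CSV with region, timestamp, cpu_utilization, and memory_utilization columns into one dimension and two multi-measure attributes. The column and target names are assumptions, as are the DimensionMapping SourceColumn and DestinationColumn fields (not shown in this section but exposed by the SDK); imports are as in the earlier sketches.

// A data model for a hypothetical CSV source. TimeUnit is omitted, so the
// default (MILLISECONDS) applies to the values in the timestamp column.
var dataModel = types.DataModel{
    TimeColumn: aws.String("timestamp"),
    DimensionMappings: []types.DimensionMapping{
        {SourceColumn: aws.String("region"), DestinationColumn: aws.String("region")},
    },
    MultiMeasureMappings: &types.MultiMeasureMappings{
        TargetMultiMeasureName: aws.String("metrics"),
        MultiMeasureAttributeMappings: []types.MultiMeasureAttributeMapping{
            {SourceColumn: aws.String("cpu_utilization"), MeasureValueType: types.ScalarMeasureValueTypeDouble},
            {SourceColumn: aws.String("memory_utilization"), MeasureValueType: types.ScalarMeasureValueTypeDouble},
        },
    },
}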
type DataModelConfiguration ¶
type DataModelConfiguration struct {
    DataModel *DataModel

    DataModelS3Configuration *DataModelS3Configuration
    // contains filtered or unexported fields
}
type DataModelS3Configuration ¶
type DataSourceConfiguration ¶
type DataSourceConfiguration struct {
    // This is currently CSV.
    //
    // This member is required.
    DataFormat BatchLoadDataFormat

    // Configuration of an S3 location for a file which contains data to load.
    //
    // This member is required.
    DataSourceS3Configuration *DataSourceS3Configuration

    // A delimited data format where the column separator can be a comma and the
    // record separator is a newline character.
    CsvConfiguration *CsvConfiguration
    // contains filtered or unexported fields
}
Defines configuration details about the data source.
type DataSourceS3Configuration ¶
type Database ¶
type Database struct {
    // The Amazon Resource Name that uniquely identifies this database.
    Arn *string

    // The time when the database was created, calculated from the Unix epoch time.
    CreationTime *time.Time

    // The name of the Timestream database.
    DatabaseName *string

    // The identifier of the KMS key used to encrypt the data stored in the database.
    KmsKeyId *string

    // The last time that this database was updated.
    LastUpdatedTime *time.Time

    // The total number of tables found within a Timestream database.
    TableCount int64
    // contains filtered or unexported fields
}
A top-level container for a table. Databases and tables are the fundamental management concepts in Amazon Timestream. All tables in a database are encrypted with the same KMS key.
type Dimension ¶
type Dimension struct {
    // Dimension represents the metadata attributes of the time series. For example,
    // the name and Availability Zone of an EC2 instance or the name of the
    // manufacturer of a wind turbine are dimensions.
    //
    // For constraints on dimension names, see [Naming Constraints].
    //
    // [Naming Constraints]: https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html#limits.naming
    //
    // This member is required.
    Name *string

    // The value of the dimension.
    //
    // This member is required.
    Value *string

    // The data type of the dimension for the time-series data point.
    DimensionValueType DimensionValueType
    // contains filtered or unexported fields
}
Represents the metadata attributes of the time series. For example, the name and Availability Zone of an EC2 instance or the name of the manufacturer of a wind turbine are dimensions.
type DimensionMapping ¶
type DimensionValueType ¶
type DimensionValueType string
const (
DimensionValueTypeVarchar DimensionValueType = "VARCHAR"
)
Enum values for DimensionValueType
func (DimensionValueType) Values ¶
func (DimensionValueType) Values() []DimensionValueType
Values returns all known values for DimensionValueType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type Endpoint ¶
type Endpoint struct {
    // An endpoint address.
    //
    // This member is required.
    Address *string

    // The TTL for the endpoint, in minutes.
    //
    // This member is required.
    CachePeriodInMinutes int64
    // contains filtered or unexported fields
}
Represents an available endpoint against which to make API calls, as well as the TTL for that endpoint.
type InternalServerException ¶
type InternalServerException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
Timestream was unable to fully process this request because of an internal server error.
func (*InternalServerException) Error ¶
func (e *InternalServerException) Error() string
func (*InternalServerException) ErrorCode ¶
func (e *InternalServerException) ErrorCode() string
func (*InternalServerException) ErrorFault ¶
func (e *InternalServerException) ErrorFault() smithy.ErrorFault
func (*InternalServerException) ErrorMessage ¶
func (e *InternalServerException) ErrorMessage() string
type InvalidEndpointException ¶
type InvalidEndpointException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
The requested endpoint was not valid.
func (*InvalidEndpointException) Error ¶
func (e *InvalidEndpointException) Error() string
func (*InvalidEndpointException) ErrorCode ¶
func (e *InvalidEndpointException) ErrorCode() string
func (*InvalidEndpointException) ErrorFault ¶
func (e *InvalidEndpointException) ErrorFault() smithy.ErrorFault
func (*InvalidEndpointException) ErrorMessage ¶
func (e *InvalidEndpointException) ErrorMessage() string
type MagneticStoreRejectedDataLocation ¶
type MagneticStoreRejectedDataLocation struct {
    // Configuration of an S3 location to write error reports for records rejected,
    // asynchronously, during magnetic store writes.
    S3Configuration *S3Configuration
    // contains filtered or unexported fields
}
The location to write error reports for records rejected, asynchronously, during magnetic store writes.
type MagneticStoreWriteProperties ¶
type MagneticStoreWriteProperties struct {
    // A flag to enable magnetic store writes.
    //
    // This member is required.
    EnableMagneticStoreWrites *bool

    // The location to write error reports for records rejected asynchronously during
    // magnetic store writes.
    MagneticStoreRejectedDataLocation *MagneticStoreRejectedDataLocation
    // contains filtered or unexported fields
}
The set of properties on a table for configuring magnetic store writes.
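A minimal sketch that enables magnetic store writes and routes asynchronously rejected records to an S3 error-report location; the bucket name and key prefix are placeholders, and imports are as in the earlier sketches.

// Enable magnetic store writes and send asynchronously rejected records
// to an S3 error-report location.
var magneticProps = types.MagneticStoreWriteProperties{
    EnableMagneticStoreWrites: aws.Bool(true),
    MagneticStoreRejectedDataLocation: &types.MagneticStoreRejectedDataLocation{
        S3Configuration: &types.S3Configuration{
            BucketName:       aws.String("example-rejected-records-bucket"), // placeholder bucket
            ObjectKeyPrefix:  aws.String("magnetic-store-rejects/"),
            EncryptionOption: types.S3EncryptionOptionSseS3,
        },
    },
}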
type MeasureValue ¶
type MeasureValue struct {
    // The name of the MeasureValue.
    //
    // For constraints on MeasureValue names, see [Naming Constraints] in the Amazon
    // Timestream Developer Guide.
    //
    // [Naming Constraints]: https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html#limits.naming
    //
    // This member is required.
    Name *string

    // Contains the data type of the MeasureValue for the time-series data point.
    //
    // This member is required.
    Type MeasureValueType

    // The value for the MeasureValue. For information, see [Data types].
    //
    // [Data types]: https://docs.aws.amazon.com/timestream/latest/developerguide/writes.html#writes.data-types
    //
    // This member is required.
    Value *string
    // contains filtered or unexported fields
}
Represents the data attribute of the time series. For example, the CPU utilization of an EC2 instance or the RPM of a wind turbine are measures. MeasureValue has both name and value.
MeasureValue is only allowed for type MULTI. Using MULTI type, you can pass multiple data attributes associated with the same time series in a single record.
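A sketch of a multi-measure record: the record's MeasureValueType is MULTI and the individual attributes are carried in MeasureValues. Dimension, measure, and time values here are illustrative; imports are as in the earlier sketches.

// One record carrying two measures (cpu and memory) for the same time series.
var multiRecord = types.Record{
    Dimensions: []types.Dimension{
        {Name: aws.String("host"), Value: aws.String("host-1")},
    },
    MeasureName:      aws.String("metrics"),
    MeasureValueType: types.MeasureValueTypeMulti,
    MeasureValues: []types.MeasureValue{
        {Name: aws.String("cpu"), Value: aws.String("73.5"), Type: types.MeasureValueTypeDouble},
        {Name: aws.String("memory"), Value: aws.String("40.2"), Type: types.MeasureValueTypeDouble},
    },
    Time: aws.String("1700000000000"), // milliseconds since the epoch (the default TimeUnit)
}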
type MeasureValueType ¶
type MeasureValueType string
const (
    MeasureValueTypeDouble    MeasureValueType = "DOUBLE"
    MeasureValueTypeBigint    MeasureValueType = "BIGINT"
    MeasureValueTypeVarchar   MeasureValueType = "VARCHAR"
    MeasureValueTypeBoolean   MeasureValueType = "BOOLEAN"
    MeasureValueTypeTimestamp MeasureValueType = "TIMESTAMP"
    MeasureValueTypeMulti     MeasureValueType = "MULTI"
)
Enum values for MeasureValueType
func (MeasureValueType) Values ¶
func (MeasureValueType) Values() []MeasureValueType
Values returns all known values for MeasureValueType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MixedMeasureMapping ¶
type MixedMeasureMapping struct {
    // This member is required.
    MeasureValueType MeasureValueType

    MeasureName *string

    MultiMeasureAttributeMappings []MultiMeasureAttributeMapping

    SourceColumn *string

    TargetMeasureName *string
    // contains filtered or unexported fields
}
type MultiMeasureAttributeMapping ¶
type MultiMeasureAttributeMapping struct {
    // This member is required.
    SourceColumn *string

    MeasureValueType ScalarMeasureValueType

    TargetMultiMeasureAttributeName *string
    // contains filtered or unexported fields
}
type MultiMeasureMappings ¶
type MultiMeasureMappings struct {
    // This member is required.
    MultiMeasureAttributeMappings []MultiMeasureAttributeMapping

    TargetMultiMeasureName *string
    // contains filtered or unexported fields
}
type PartitionKey ¶
type PartitionKey struct {
    // The type of the partition key. Options are DIMENSION (dimension key) and
    // MEASURE (measure key).
    //
    // This member is required.
    Type PartitionKeyType

    // The level of enforcement for the specification of a dimension key in ingested
    // records. Options are REQUIRED (dimension key must be specified) and OPTIONAL
    // (dimension key does not have to be specified).
    EnforcementInRecord PartitionKeyEnforcementLevel

    // The name of the attribute used for a dimension key.
    Name *string
    // contains filtered or unexported fields
}
An attribute used in partitioning data in a table. A dimension key partitions data using the values of the dimension specified by the dimension-name as partition key, while a measure key partitions data using measure names (values of the 'measure_name' column).
type PartitionKeyEnforcementLevel ¶
type PartitionKeyEnforcementLevel string
const (
    PartitionKeyEnforcementLevelRequired PartitionKeyEnforcementLevel = "REQUIRED"
    PartitionKeyEnforcementLevelOptional PartitionKeyEnforcementLevel = "OPTIONAL"
)
Enum values for PartitionKeyEnforcementLevel
func (PartitionKeyEnforcementLevel) Values ¶
func (PartitionKeyEnforcementLevel) Values() []PartitionKeyEnforcementLevel
Values returns all known values for PartitionKeyEnforcementLevel. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type PartitionKeyType ¶
type PartitionKeyType string
const (
    PartitionKeyTypeDimension PartitionKeyType = "DIMENSION"
    PartitionKeyTypeMeasure   PartitionKeyType = "MEASURE"
)
Enum values for PartitionKeyType
func (PartitionKeyType) Values ¶
func (PartitionKeyType) Values() []PartitionKeyType
Values returns all known values for PartitionKeyType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type Record ¶
type Record struct {
    // Contains the list of dimensions for time-series data points.
    Dimensions []Dimension

    // Measure represents the data attribute of the time series. For example, the CPU
    // utilization of an EC2 instance or the RPM of a wind turbine are measures.
    MeasureName *string

    // Contains the measure value for the time-series data point.
    MeasureValue *string

    // Contains the data type of the measure value for the time-series data point.
    // Default type is DOUBLE. For more information, see [Data types].
    //
    // [Data types]: https://docs.aws.amazon.com/timestream/latest/developerguide/writes.html#writes.data-types
    MeasureValueType MeasureValueType

    // Contains the list of MeasureValue for time-series data points.
    //
    // This is only allowed for type MULTI. For scalar values, use the MeasureValue
    // attribute of the record directly.
    MeasureValues []MeasureValue

    // Contains the time at which the measure value for the data point was collected.
    // The time value plus the unit provides the time elapsed since the epoch. For
    // example, if the time value is 12345 and the unit is ms, then 12345 ms have
    // elapsed since the epoch.
    Time *string

    // The granularity of the timestamp unit. It indicates if the time value is in
    // seconds, milliseconds, nanoseconds, or other supported values. Default is
    // MILLISECONDS.
    TimeUnit TimeUnit

    // 64-bit attribute used for record updates. Write requests for duplicate data
    // with a higher version number will update the existing measure value and
    // version. In cases where the measure value is the same, Version will still be
    // updated. Default value is 1.
    //
    // Version must be 1 or greater, or you will receive a ValidationException error.
    Version *int64
    // contains filtered or unexported fields
}
Represents a time-series data point being written into Timestream. Each record contains an array of dimensions. Dimensions represent the metadata attributes of a time-series data point, such as the instance name or Availability Zone of an EC2 instance. A record also contains the measure name, which is the name of the measure being collected (for example, the CPU utilization of an EC2 instance). Additionally, a record contains the measure value and the value type, which is the data type of the measure value. Also, the record contains the timestamp of when the measure was collected and the timestamp unit, which represents the granularity of the timestamp.
Records have a Version field, which is a 64-bit long that you can use for updating data points. Writes of a duplicate record with the same dimension, timestamp, and measure name but different measure value will only succeed if the Version attribute of the record in the write request is higher than that of the existing record. Timestream defaults to a Version of 1 for records without the Version field.
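A sketch of a scalar record that uses Version for an upsert: resending the same dimensions, timestamp, and measure name with a higher Version replaces the stored value. Names and values are illustrative; imports are as in the earlier sketches.

// A scalar record; writing it again with Version 2 and a new MeasureValue
// would update the stored data point.
var cpuRecord = types.Record{
    Dimensions: []types.Dimension{
        {Name: aws.String("host"), Value: aws.String("host-1")},
    },
    MeasureName:      aws.String("cpu_utilization"),
    MeasureValue:     aws.String("55.0"),
    MeasureValueType: types.MeasureValueTypeDouble,
    Time:             aws.String("1700000000000"), // milliseconds since the epoch
    Version:          aws.Int64(1),
}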
type RecordsIngested ¶
type RecordsIngested struct {
    // Count of records ingested into the magnetic store.
    MagneticStore int32

    // Count of records ingested into the memory store.
    MemoryStore int32

    // Total count of successfully ingested records.
    Total int32
    // contains filtered or unexported fields
}
Information on the records ingested by this request.
type RejectedRecord ¶
type RejectedRecord struct {
    // The existing version of the record. This value is populated in scenarios where
    // an identical record exists with a higher version than the version in the write
    // request.
    ExistingVersion *int64

    // The reason why a record was not successfully inserted into Timestream.
    // Possible causes of failure include:
    //
    //   - Records with duplicate data where there are multiple records with the same
    //     dimensions, timestamps, and measure names but:
    //
    //   - Measure values are different
    //
    //   - Version is not present in the request, or the value of version in the new
    //     record is equal to or lower than the existing value
    //
    // If Timestream rejects data for this case, the ExistingVersion field in the
    // RejectedRecords response will indicate the current record's version. To force
    // an update, you can resend the request with a version for the record set to a
    // value greater than the ExistingVersion.
    //
    //   - Records with timestamps that lie outside the retention duration of the
    //     memory store.
    //
    // When the retention window is updated, you will receive a RejectedRecords
    // exception if you immediately try to ingest data within the new window. To avoid
    // a RejectedRecords exception, wait until the duration of the new window to
    // ingest new data. For further information, see [Best Practices for Configuring Timestream] and [the explanation of how storage works in Timestream].
    //
    //   - Records with dimensions or measures that exceed the Timestream defined
    //     limits.
    //
    // For more information, see [Access Management] in the Timestream Developer Guide.
    //
    // [Access Management]: https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html
    // [Best Practices for Configuring Timestream]: https://docs.aws.amazon.com/timestream/latest/developerguide/best-practices.html#configuration
    // [the explanation of how storage works in Timestream]: https://docs.aws.amazon.com/timestream/latest/developerguide/storage.html
    Reason *string

    // The index of the record in the input request for WriteRecords. Indexes begin
    // with 0.
    RecordIndex int32
    // contains filtered or unexported fields
}
Represents records that were not successfully inserted into Timestream due to data validation issues that must be resolved before reinserting time-series data into the system.
type RejectedRecordsException ¶
type RejectedRecordsException struct {
    Message *string

    ErrorCodeOverride *string

    RejectedRecords []RejectedRecord
    // contains filtered or unexported fields
}
WriteRecords would throw this exception in the following cases:
- Records with duplicate data where there are multiple records with the same dimensions, timestamps, and measure names but:
  - Measure values are different
  - Version is not present in the request, or the value of version in the new record is equal to or lower than the existing value
  In this case, if Timestream rejects data, the ExistingVersion field in the RejectedRecords response will indicate the current record's version. To force an update, you can resend the request with a version for the record set to a value greater than the ExistingVersion.
- Records with timestamps that lie outside the retention duration of the memory store.
- Records with dimensions or measures that exceed the Timestream defined limits.
For more information, see Quotas in the Amazon Timestream Developer Guide.
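Because the rejected records are attached to the error value, errors.As is the usual way to recover them from a failed WriteRecords call. A sketch (the helper name and logging format are illustrative):

import (
    "errors"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/timestreamwrite/types"
)

// logRejectedRecords reports each rejected record's index, reason, and any
// existing version; the original error is returned unchanged.
func logRejectedRecords(err error) error {
    var rre *types.RejectedRecordsException
    if errors.As(err, &rre) {
        for _, r := range rre.RejectedRecords {
            log.Printf("record %d rejected: %s (existing version: %d)",
                r.RecordIndex, aws.ToString(r.Reason), aws.ToInt64(r.ExistingVersion))
        }
    }
    return err
}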
func (*RejectedRecordsException) Error ¶
func (e *RejectedRecordsException) Error() string
func (*RejectedRecordsException) ErrorCode ¶
func (e *RejectedRecordsException) ErrorCode() string
func (*RejectedRecordsException) ErrorFault ¶
func (e *RejectedRecordsException) ErrorFault() smithy.ErrorFault
func (*RejectedRecordsException) ErrorMessage ¶
func (e *RejectedRecordsException) ErrorMessage() string
type ReportConfiguration ¶
type ReportConfiguration struct {
    // Configuration of an S3 location to write error reports and events for a batch
    // load.
    ReportS3Configuration *ReportS3Configuration
    // contains filtered or unexported fields
}
Report configuration for a batch load task. This contains details about where error reports are stored.
type ReportS3Configuration ¶
type ReportS3Configuration struct {
    // This member is required.
    BucketName *string

    EncryptionOption S3EncryptionOption

    KmsKeyId *string

    ObjectKeyPrefix *string
    // contains filtered or unexported fields
}
type ResourceNotFoundException ¶
type ResourceNotFoundException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
The operation tried to access a nonexistent resource. The resource might not be specified correctly, or its status might not be ACTIVE.
func (*ResourceNotFoundException) Error ¶
func (e *ResourceNotFoundException) Error() string
func (*ResourceNotFoundException) ErrorCode ¶
func (e *ResourceNotFoundException) ErrorCode() string
func (*ResourceNotFoundException) ErrorFault ¶
func (e *ResourceNotFoundException) ErrorFault() smithy.ErrorFault
func (*ResourceNotFoundException) ErrorMessage ¶
func (e *ResourceNotFoundException) ErrorMessage() string
type RetentionProperties ¶
type RetentionProperties struct {
    // The duration for which data must be stored in the magnetic store.
    //
    // This member is required.
    MagneticStoreRetentionPeriodInDays *int64

    // The duration for which data must be stored in the memory store.
    //
    // This member is required.
    MemoryStoreRetentionPeriodInHours *int64
    // contains filtered or unexported fields
}
Retention properties contain the duration for which your time-series data must be stored in the magnetic store and the memory store.
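A minimal sketch keeping one day of data in the memory store and one year in the magnetic store; the durations are illustrative and imports are as in the earlier sketches.

// 24 hours in the memory store, then 365 days in the magnetic store.
var retention = types.RetentionProperties{
    MemoryStoreRetentionPeriodInHours:  aws.Int64(24),
    MagneticStoreRetentionPeriodInDays: aws.Int64(365),
}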
type S3Configuration ¶
type S3Configuration struct {
    // The bucket name of the customer S3 bucket.
    BucketName *string

    // The encryption option for the customer S3 location. Options are S3 server-side
    // encryption with an S3 managed key or Amazon Web Services managed key.
    EncryptionOption S3EncryptionOption

    // The KMS key ID for the customer S3 location when encrypting with an Amazon Web
    // Services managed key.
    KmsKeyId *string

    // The object key prefix for the customer S3 location.
    ObjectKeyPrefix *string
    // contains filtered or unexported fields
}
The configuration that specifies an S3 location.
type S3EncryptionOption ¶
type S3EncryptionOption string
const (
    S3EncryptionOptionSseS3  S3EncryptionOption = "SSE_S3"
    S3EncryptionOptionSseKms S3EncryptionOption = "SSE_KMS"
)
Enum values for S3EncryptionOption
func (S3EncryptionOption) Values ¶
func (S3EncryptionOption) Values() []S3EncryptionOption
Values returns all known values for S3EncryptionOption. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ScalarMeasureValueType ¶
type ScalarMeasureValueType string
const (
    ScalarMeasureValueTypeDouble    ScalarMeasureValueType = "DOUBLE"
    ScalarMeasureValueTypeBigint    ScalarMeasureValueType = "BIGINT"
    ScalarMeasureValueTypeBoolean   ScalarMeasureValueType = "BOOLEAN"
    ScalarMeasureValueTypeVarchar   ScalarMeasureValueType = "VARCHAR"
    ScalarMeasureValueTypeTimestamp ScalarMeasureValueType = "TIMESTAMP"
)
Enum values for ScalarMeasureValueType
func (ScalarMeasureValueType) Values ¶
func (ScalarMeasureValueType) Values() []ScalarMeasureValueType
Values returns all known values for ScalarMeasureValueType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type Schema ¶
type Schema struct {
    // A non-empty list of partition keys defining the attributes used to partition
    // the table data. The order of the list determines the partition hierarchy. The
    // name and type of each partition key as well as the partition key order cannot
    // be changed after the table is created. However, the enforcement level of each
    // partition key can be changed.
    CompositePartitionKey []PartitionKey
    // contains filtered or unexported fields
}
A Schema specifies the expected data model of the table.
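A sketch of a schema with a single customer-defined dimension partition key that every record must supply; the dimension name is a placeholder, and imports are as in the earlier sketches.

// Partition the table on a required "customer_id" dimension.
var schema = types.Schema{
    CompositePartitionKey: []types.PartitionKey{
        {
            Type:                types.PartitionKeyTypeDimension,
            Name:                aws.String("customer_id"), // placeholder dimension name
            EnforcementInRecord: types.PartitionKeyEnforcementLevelRequired,
        },
    },
}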
type ServiceQuotaExceededException ¶
type ServiceQuotaExceededException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
The instance quota for this resource has been exceeded for this account.
func (*ServiceQuotaExceededException) Error ¶
func (e *ServiceQuotaExceededException) Error() string
func (*ServiceQuotaExceededException) ErrorCode ¶
func (e *ServiceQuotaExceededException) ErrorCode() string
func (*ServiceQuotaExceededException) ErrorFault ¶
func (e *ServiceQuotaExceededException) ErrorFault() smithy.ErrorFault
func (*ServiceQuotaExceededException) ErrorMessage ¶
func (e *ServiceQuotaExceededException) ErrorMessage() string
type Table ¶
type Table struct {
    // The Amazon Resource Name that uniquely identifies this table.
    Arn *string

    // The time when the Timestream table was created.
    CreationTime *time.Time

    // The name of the Timestream database that contains this table.
    DatabaseName *string

    // The time when the Timestream table was last updated.
    LastUpdatedTime *time.Time

    // Contains properties to set on the table when enabling magnetic store writes.
    MagneticStoreWriteProperties *MagneticStoreWriteProperties

    // The retention duration for the memory store and magnetic store.
    RetentionProperties *RetentionProperties

    // The schema of the table.
    Schema *Schema

    // The name of the Timestream table.
    TableName *string

    // The current state of the table:
    //
    //   - DELETING - The table is being deleted.
    //
    //   - ACTIVE - The table is ready for use.
    TableStatus TableStatus
    // contains filtered or unexported fields
}
Represents a database table in Timestream. Tables contain one or more related time series. You can modify the retention duration of the memory store and the magnetic store for a table.
type TableStatus ¶
type TableStatus string
const (
    TableStatusActive    TableStatus = "ACTIVE"
    TableStatusDeleting  TableStatus = "DELETING"
    TableStatusRestoring TableStatus = "RESTORING"
)
Enum values for TableStatus
func (TableStatus) Values ¶
func (TableStatus) Values() []TableStatus
Values returns all known values for TableStatus. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type Tag ¶
type Tag struct {
    // The key of the tag. Tag keys are case sensitive.
    //
    // This member is required.
    Key *string

    // The value of the tag. Tag values are case-sensitive and can be null.
    //
    // This member is required.
    Value *string
    // contains filtered or unexported fields
}
A tag is a label that you assign to a Timestream database and/or table. Each tag consists of a key and an optional value, both of which you define. With tags, you can categorize databases and/or tables, for example, by purpose, owner, or environment.
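A short sketch of a tag set, with placeholder keys and values, using the same imports as the earlier sketches:

// Example tags for categorizing a database or table.
var tags = []types.Tag{
    {Key: aws.String("environment"), Value: aws.String("production")},
    {Key: aws.String("owner"), Value: aws.String("data-platform")},
}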
type ThrottlingException ¶
type ThrottlingException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
Too many requests were made by a user and they exceeded the service quotas. The request was throttled.
func (*ThrottlingException) Error ¶
func (e *ThrottlingException) Error() string
func (*ThrottlingException) ErrorCode ¶
func (e *ThrottlingException) ErrorCode() string
func (*ThrottlingException) ErrorFault ¶
func (e *ThrottlingException) ErrorFault() smithy.ErrorFault
func (*ThrottlingException) ErrorMessage ¶
func (e *ThrottlingException) ErrorMessage() string
type TimeUnit ¶
type TimeUnit string
type ValidationException ¶
type ValidationException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
An invalid or malformed request.
func (*ValidationException) Error ¶
func (e *ValidationException) Error() string
func (*ValidationException) ErrorCode ¶
func (e *ValidationException) ErrorCode() string
func (*ValidationException) ErrorFault ¶
func (e *ValidationException) ErrorFault() smithy.ErrorFault
func (*ValidationException) ErrorMessage ¶
func (e *ValidationException) ErrorMessage() string