Documentation ¶
Overview ¶
+kubebuilder:object:generate=true
+groupName=online.mongodbatlas.crossplane.io
+versionName=v1alpha1
Index ¶
- Constants
- Variables
- type Archive
- func (in *Archive) DeepCopy() *Archive
- func (in *Archive) DeepCopyInto(out *Archive)
- func (in *Archive) DeepCopyObject() runtime.Object
- func (tr *Archive) GetConnectionDetailsMapping() map[string]string
- func (tr *Archive) GetID() string
- func (tr *Archive) GetObservation() (map[string]any, error)
- func (tr *Archive) GetParameters() (map[string]any, error)
- func (mg *Archive) GetTerraformResourceType() string
- func (tr *Archive) GetTerraformSchemaVersion() int
- func (tr *Archive) LateInitialize(attrs []byte) (bool, error)
- func (tr *Archive) SetObservation(obs map[string]any) error
- func (tr *Archive) SetParameters(params map[string]any) error
- type ArchiveList
- type ArchiveObservation
- type ArchiveParameters
- type ArchiveSpec
- type ArchiveStatus
- type CriteriaObservation
- type CriteriaParameters
- type PartitionFieldsObservation
- type PartitionFieldsParameters
- type ScheduleObservation
- type ScheduleParameters
Constants ¶
const (
	CRDGroup   = "online.mongodbatlas.crossplane.io"
	CRDVersion = "v1alpha1"
)
Package type metadata.
Variables ¶
var (
	Archive_Kind             = "Archive"
	Archive_GroupKind        = schema.GroupKind{Group: CRDGroup, Kind: Archive_Kind}.String()
	Archive_KindAPIVersion   = Archive_Kind + "." + CRDGroupVersion.String()
	Archive_GroupVersionKind = CRDGroupVersion.WithKind(Archive_Kind)
)
Repository type metadata.
var (
	// CRDGroupVersion is the API Group Version used to register the objects
	CRDGroupVersion = schema.GroupVersion{Group: CRDGroup, Version: CRDVersion}

	// SchemeBuilder is used to add go types to the GroupVersionKind scheme
	SchemeBuilder = &scheme.Builder{GroupVersion: CRDGroupVersion}

	// AddToScheme adds the types in this group-version to the given scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)
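As an illustrative sketch, these package-level variables can be used to register the v1alpha1 types with a runtime scheme. The import path below is an assumption; adjust it to wherever this package lives in your module graph.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"

	// Assumed import path for this package.
	onlinev1alpha1 "github.com/crossplane-contrib/provider-jet-mongodbatlas/apis/online/v1alpha1"
)

func main() {
	// Register the online.mongodbatlas.crossplane.io/v1alpha1 types with a fresh scheme.
	scheme := runtime.NewScheme()
	if err := onlinev1alpha1.AddToScheme(scheme); err != nil {
		panic(err)
	}

	// The package metadata resolves the GroupVersionKind for Archive.
	fmt.Println(onlinev1alpha1.Archive_GroupVersionKind)
}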
Functions ¶
This section is empty.
Types ¶
type Archive ¶
type Archive struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// +kubebuilder:validation:XValidation:rule="self.managementPolicy == 'ObserveOnly' || has(self.forProvider.clusterName)",message="clusterName is a required parameter"
	// +kubebuilder:validation:XValidation:rule="self.managementPolicy == 'ObserveOnly' || has(self.forProvider.collName)",message="collName is a required parameter"
	// +kubebuilder:validation:XValidation:rule="self.managementPolicy == 'ObserveOnly' || has(self.forProvider.criteria)",message="criteria is a required parameter"
	// +kubebuilder:validation:XValidation:rule="self.managementPolicy == 'ObserveOnly' || has(self.forProvider.dbName)",message="dbName is a required parameter"
	// +kubebuilder:validation:XValidation:rule="self.managementPolicy == 'ObserveOnly' || has(self.forProvider.projectId)",message="projectId is a required parameter"
	Spec   ArchiveSpec   `json:"spec"`
	Status ArchiveStatus `json:"status,omitempty"`
}
Archive is the Schema for the Archives API. Provides an Online Archive resource for creation, update, and deletion.
+kubebuilder:printcolumn:name="READY",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].status"
+kubebuilder:printcolumn:name="SYNCED",type="string",JSONPath=".status.conditions[?(@.type=='Synced')].status"
+kubebuilder:printcolumn:name="EXTERNAL-NAME",type="string",JSONPath=".metadata.annotations.crossplane\\.io/external-name"
+kubebuilder:printcolumn:name="AGE",type="date",JSONPath=".metadata.creationTimestamp"
+kubebuilder:subresource:status
+kubebuilder:resource:scope=Cluster,categories={crossplane,managed,mongodbatlas}
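A minimal sketch of building an Archive in Go. The import path and the ptr helper are illustrative assumptions, and the field values are placeholders; the populated forProvider fields are the ones required by the XValidation rules above.

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	// Assumed import path for this package; adjust to your module.
	onlinev1alpha1 "github.com/crossplane-contrib/provider-jet-mongodbatlas/apis/online/v1alpha1"
)

// ptr is an illustrative helper for the pointer-typed parameter fields.
func ptr[T any](v T) *T { return &v }

// exampleArchive returns an Archive whose forProvider block satisfies the
// clusterName, collName, criteria, dbName, and projectId validation rules.
func exampleArchive() *onlinev1alpha1.Archive {
	return &onlinev1alpha1.Archive{
		ObjectMeta: metav1.ObjectMeta{Name: "example-archive"},
		Spec: onlinev1alpha1.ArchiveSpec{
			ForProvider: onlinev1alpha1.ArchiveParameters{
				ProjectID:   ptr("<atlas-project-id>"), // placeholder value
				ClusterName: ptr("cluster-test"),
				DBName:      ptr("sample"),
				CollName:    ptr("orders"),
				Criteria: []onlinev1alpha1.CriteriaParameters{{
					Type:            ptr("DATE"),
					DateField:       ptr("created"),
					ExpireAfterDays: ptr(30.0),
				}},
			},
		},
	}
}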
func (*Archive) DeepCopy ¶
func (in *Archive) DeepCopy() *Archive
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Archive.
func (*Archive) DeepCopyInto ¶
func (in *Archive) DeepCopyInto(out *Archive)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (*Archive) DeepCopyObject ¶
func (in *Archive) DeepCopyObject() runtime.Object
DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (*Archive) GetConnectionDetailsMapping ¶
func (tr *Archive) GetConnectionDetailsMapping() map[string]string
GetConnectionDetailsMapping for this Archive
func (*Archive) GetID ¶
func (tr *Archive) GetID() string
GetID returns ID of underlying Terraform resource of this Archive
func (*Archive) GetObservation ¶
func (tr *Archive) GetObservation() (map[string]any, error)
GetObservation of this Archive
func (*Archive) GetParameters ¶
func (tr *Archive) GetParameters() (map[string]any, error)
GetParameters of this Archive
func (*Archive) GetTerraformResourceType ¶
func (mg *Archive) GetTerraformResourceType() string
GetTerraformResourceType returns the Terraform resource type for this Archive
func (*Archive) GetTerraformSchemaVersion ¶
func (tr *Archive) GetTerraformSchemaVersion() int
GetTerraformSchemaVersion returns the associated Terraform schema version
func (*Archive) LateInitialize ¶
func (tr *Archive) LateInitialize(attrs []byte) (bool, error)
LateInitialize this Archive using its observed tfState. Returns true if there are any spec changes for the resource.
func (*Archive) SetObservation ¶
func (tr *Archive) SetObservation(obs map[string]any) error
SetObservation for this Archive
func (*Archive) SetParameters ¶
func (tr *Archive) SetParameters(params map[string]any) error
SetParameters for this Archive
type ArchiveList ¶
type ArchiveList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Archive `json:"items"`
}
ArchiveList contains a list of Archives
func (*ArchiveList) DeepCopy ¶
func (in *ArchiveList) DeepCopy() *ArchiveList
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ArchiveList.
func (*ArchiveList) DeepCopyInto ¶
func (in *ArchiveList) DeepCopyInto(out *ArchiveList)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (*ArchiveList) DeepCopyObject ¶
func (in *ArchiveList) DeepCopyObject() runtime.Object
DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
type ArchiveObservation ¶
type ArchiveObservation struct {
	// ID of the online archive.
	ArchiveID *string `json:"archiveId,omitempty" tf:"archive_id,omitempty"`

	// Name of the cluster that contains the collection.
	ClusterName *string `json:"clusterName,omitempty" tf:"cluster_name,omitempty"`

	// Name of the collection.
	CollName *string `json:"collName,omitempty" tf:"coll_name,omitempty"`

	// Classification of the MongoDB database collection that you want to return, "TIMESERIES" or "STANDARD". Default is "STANDARD".
	CollectionType *string `json:"collectionType,omitempty" tf:"collection_type,omitempty"`

	// Criteria to use for archiving data.
	Criteria []CriteriaObservation `json:"criteria,omitempty" tf:"criteria,omitempty"`

	// Name of the database that contains the collection.
	DBName *string `json:"dbName,omitempty" tf:"db_name,omitempty"`

	ID *string `json:"id,omitempty" tf:"id,omitempty"`

	// (Recommended) Fields to use to partition data. You can specify up to two frequently queried fields to use for partitioning data. Note that queries that don’t contain the specified fields will require a full collection scan of all archived documents, which will take longer and increase your costs. To learn more about how partitioning improves query performance, see Data Structure in S3. The value of a partition field can be up to a maximum of 700 characters. Documents with values exceeding 700 characters are not archived.
	PartitionFields []PartitionFieldsObservation `json:"partitionFields,omitempty" tf:"partition_fields,omitempty"`

	// State of the online archive. This is required for pausing an active online archive or resuming a paused one. The resume request will fail if the collection has another active online archive.
	Paused *bool `json:"paused,omitempty" tf:"paused,omitempty"`

	// The unique ID for the project.
	ProjectID *string `json:"projectId,omitempty" tf:"project_id,omitempty"`

	Schedule []ScheduleObservation `json:"schedule,omitempty" tf:"schedule,omitempty"`

	// Status of the online archive. Valid values are: Pending, Archiving, Idle, Pausing, Paused, Orphaned and Deleted.
	State *string `json:"state,omitempty" tf:"state,omitempty"`

	SyncCreation *bool `json:"syncCreation,omitempty" tf:"sync_creation,omitempty"`
}
func (*ArchiveObservation) DeepCopy ¶
func (in *ArchiveObservation) DeepCopy() *ArchiveObservation
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ArchiveObservation.
func (*ArchiveObservation) DeepCopyInto ¶
func (in *ArchiveObservation) DeepCopyInto(out *ArchiveObservation)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type ArchiveParameters ¶
type ArchiveParameters struct {
	// Name of the cluster that contains the collection.
	// +kubebuilder:validation:Optional
	ClusterName *string `json:"clusterName,omitempty" tf:"cluster_name,omitempty"`

	// Name of the collection.
	// +kubebuilder:validation:Optional
	CollName *string `json:"collName,omitempty" tf:"coll_name,omitempty"`

	// Classification of the MongoDB database collection that you want to return, "TIMESERIES" or "STANDARD". Default is "STANDARD".
	// +kubebuilder:validation:Optional
	CollectionType *string `json:"collectionType,omitempty" tf:"collection_type,omitempty"`

	// Criteria to use for archiving data.
	// +kubebuilder:validation:Optional
	Criteria []CriteriaParameters `json:"criteria,omitempty" tf:"criteria,omitempty"`

	// Name of the database that contains the collection.
	// +kubebuilder:validation:Optional
	DBName *string `json:"dbName,omitempty" tf:"db_name,omitempty"`

	// (Recommended) Fields to use to partition data. You can specify up to two frequently queried fields to use for partitioning data. Note that queries that don’t contain the specified fields will require a full collection scan of all archived documents, which will take longer and increase your costs. To learn more about how partitioning improves query performance, see Data Structure in S3. The value of a partition field can be up to a maximum of 700 characters. Documents with values exceeding 700 characters are not archived.
	// +kubebuilder:validation:Optional
	PartitionFields []PartitionFieldsParameters `json:"partitionFields,omitempty" tf:"partition_fields,omitempty"`

	// State of the online archive. This is required for pausing an active online archive or resuming a paused one. The resume request will fail if the collection has another active online archive.
	// +kubebuilder:validation:Optional
	Paused *bool `json:"paused,omitempty" tf:"paused,omitempty"`

	// The unique ID for the project.
	// +kubebuilder:validation:Optional
	ProjectID *string `json:"projectId,omitempty" tf:"project_id,omitempty"`

	// +kubebuilder:validation:Optional
	Schedule []ScheduleParameters `json:"schedule,omitempty" tf:"schedule,omitempty"`

	// +kubebuilder:validation:Optional
	SyncCreation *bool `json:"syncCreation,omitempty" tf:"sync_creation,omitempty"`
}
func (*ArchiveParameters) DeepCopy ¶
func (in *ArchiveParameters) DeepCopy() *ArchiveParameters
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ArchiveParameters.
func (*ArchiveParameters) DeepCopyInto ¶
func (in *ArchiveParameters) DeepCopyInto(out *ArchiveParameters)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type ArchiveSpec ¶
type ArchiveSpec struct {
	v1.ResourceSpec `json:",inline"`
	ForProvider     ArchiveParameters `json:"forProvider"`
}
ArchiveSpec defines the desired state of Archive
func (*ArchiveSpec) DeepCopy ¶
func (in *ArchiveSpec) DeepCopy() *ArchiveSpec
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ArchiveSpec.
func (*ArchiveSpec) DeepCopyInto ¶
func (in *ArchiveSpec) DeepCopyInto(out *ArchiveSpec)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type ArchiveStatus ¶
type ArchiveStatus struct {
	v1.ResourceStatus `json:",inline"`
	AtProvider        ArchiveObservation `json:"atProvider,omitempty"`
}
ArchiveStatus defines the observed state of Archive.
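A hedged sketch of reading the observed state from an Archive's status. The import path is an assumption, and the pointer fields are nil until the controller has synced the resource.

package example

import (
	"fmt"

	onlinev1alpha1 "github.com/crossplane-contrib/provider-jet-mongodbatlas/apis/online/v1alpha1" // assumed import path
)

// reportState prints the archive state observed at the provider, if any.
func reportState(archive *onlinev1alpha1.Archive) {
	obs := archive.Status.AtProvider
	if obs.State != nil {
		fmt.Printf("archive %s is %s\n", archive.GetName(), *obs.State)
	} else {
		fmt.Printf("archive %s has not reported a state yet\n", archive.GetName())
	}
}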
func (*ArchiveStatus) DeepCopy ¶
func (in *ArchiveStatus) DeepCopy() *ArchiveStatus
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ArchiveStatus.
func (*ArchiveStatus) DeepCopyInto ¶
func (in *ArchiveStatus) DeepCopyInto(out *ArchiveStatus)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type CriteriaObservation ¶
type CriteriaObservation struct {
	// Indexed database parameter that stores the date that determines when data moves to the online archive. MongoDB Cloud archives the data when the current date exceeds the date in this database parameter plus the number of days specified through the expireAfterDays parameter.
	DateField *string `json:"dateField,omitempty" tf:"date_field,omitempty"`

	// Syntax used to write the date after which data moves to the online archive. Date can be expressed as ISO 8601 or Epoch timestamps. The Epoch timestamp can be expressed as nanoseconds, milliseconds, or seconds. You must set type to DATE if collectionType is TIMESERIES. Valid values: ISODATE (default), EPOCH_SECONDS, EPOCH_MILLIS, EPOCH_NANOSECONDS.
	DateFormat *string `json:"dateFormat,omitempty" tf:"date_format,omitempty"`

	// Number of days after the value in the criteria.dateField when MongoDB Cloud archives data in the specified cluster.
	ExpireAfterDays *float64 `json:"expireAfterDays,omitempty" tf:"expire_after_days,omitempty"`

	// JSON query to use to select documents for archiving. Atlas uses the specified query with the db.collection.find(query) command. Using the empty document {} to return all documents is not supported.
	Query *string `json:"query,omitempty" tf:"query,omitempty"`

	// Type of criteria (DATE, CUSTOM).
	Type *string `json:"type,omitempty" tf:"type,omitempty"`
}
func (*CriteriaObservation) DeepCopy ¶
func (in *CriteriaObservation) DeepCopy() *CriteriaObservation
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CriteriaObservation.
func (*CriteriaObservation) DeepCopyInto ¶
func (in *CriteriaObservation) DeepCopyInto(out *CriteriaObservation)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type CriteriaParameters ¶
type CriteriaParameters struct {
	// Indexed database parameter that stores the date that determines when data moves to the online archive. MongoDB Cloud archives the data when the current date exceeds the date in this database parameter plus the number of days specified through the expireAfterDays parameter.
	// +kubebuilder:validation:Optional
	DateField *string `json:"dateField,omitempty" tf:"date_field,omitempty"`

	// Syntax used to write the date after which data moves to the online archive. Date can be expressed as ISO 8601 or Epoch timestamps. The Epoch timestamp can be expressed as nanoseconds, milliseconds, or seconds. You must set type to DATE if collectionType is TIMESERIES. Valid values: ISODATE (default), EPOCH_SECONDS, EPOCH_MILLIS, EPOCH_NANOSECONDS.
	// +kubebuilder:validation:Optional
	DateFormat *string `json:"dateFormat,omitempty" tf:"date_format,omitempty"`

	// Number of days after the value in the criteria.dateField when MongoDB Cloud archives data in the specified cluster.
	// +kubebuilder:validation:Optional
	ExpireAfterDays *float64 `json:"expireAfterDays,omitempty" tf:"expire_after_days,omitempty"`

	// JSON query to use to select documents for archiving. Atlas uses the specified query with the db.collection.find(query) command. Using the empty document {} to return all documents is not supported.
	// +kubebuilder:validation:Optional
	Query *string `json:"query,omitempty" tf:"query,omitempty"`

	// Type of criteria (DATE, CUSTOM).
	// +kubebuilder:validation:Required
	Type *string `json:"type" tf:"type,omitempty"`
}
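Two hedged examples of filling CriteriaParameters, one DATE-based and one CUSTOM query-based, following the field descriptions above. The import path, the ptr helper, and the literal values are illustrative assumptions.

package example

import (
	onlinev1alpha1 "github.com/crossplane-contrib/provider-jet-mongodbatlas/apis/online/v1alpha1" // assumed import path
)

func ptr[T any](v T) *T { return &v }

// DATE criteria: archive documents once the "created" date is older than 60 days.
var dateCriteria = onlinev1alpha1.CriteriaParameters{
	Type:            ptr("DATE"),
	DateField:       ptr("created"),
	DateFormat:      ptr("ISODATE"),
	ExpireAfterDays: ptr(60.0),
}

// CUSTOM criteria: select documents with a find() query instead of a date field.
var customCriteria = onlinev1alpha1.CriteriaParameters{
	Type:  ptr("CUSTOM"),
	Query: ptr(`{"status": "archived"}`),
}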
func (*CriteriaParameters) DeepCopy ¶
func (in *CriteriaParameters) DeepCopy() *CriteriaParameters
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CriteriaParameters.
func (*CriteriaParameters) DeepCopyInto ¶
func (in *CriteriaParameters) DeepCopyInto(out *CriteriaParameters)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type PartitionFieldsObservation ¶
type PartitionFieldsObservation struct {
	// Human-readable label that identifies the parameter that MongoDB Cloud uses to partition data. To specify a nested parameter, use the dot notation.
	FieldName *string `json:"fieldName,omitempty" tf:"field_name,omitempty"`

	// Data type of the parameter that MongoDB Cloud uses to partition data. Partition parameters of type UUID must be of binary subtype 4. MongoDB Cloud skips partition parameters of type UUID with subtype 3. Valid values: date, int, long, objectId, string, uuid.
	FieldType *string `json:"fieldType,omitempty" tf:"field_type,omitempty"`

	// Sequence in which MongoDB Cloud slices the collection data to create partitions. The resource expresses this sequence starting with zero. The value of the criteria.dateField parameter defaults as the first item in the partition sequence.
	Order *float64 `json:"order,omitempty" tf:"order,omitempty"`
}
func (*PartitionFieldsObservation) DeepCopy ¶
func (in *PartitionFieldsObservation) DeepCopy() *PartitionFieldsObservation
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PartitionFieldsObservation.
func (*PartitionFieldsObservation) DeepCopyInto ¶
func (in *PartitionFieldsObservation) DeepCopyInto(out *PartitionFieldsObservation)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type PartitionFieldsParameters ¶
type PartitionFieldsParameters struct {
	// Human-readable label that identifies the parameter that MongoDB Cloud uses to partition data. To specify a nested parameter, use the dot notation.
	// +kubebuilder:validation:Required
	FieldName *string `json:"fieldName" tf:"field_name,omitempty"`

	// Sequence in which MongoDB Cloud slices the collection data to create partitions. The resource expresses this sequence starting with zero. The value of the criteria.dateField parameter defaults as the first item in the partition sequence.
	// +kubebuilder:validation:Required
	Order *float64 `json:"order" tf:"order,omitempty"`
}
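An illustrative PartitionFieldsParameters slice with two frequently queried fields, ordered from zero as described above. The import path, the ptr helper, and the field names are assumptions.

package example

import (
	onlinev1alpha1 "github.com/crossplane-contrib/provider-jet-mongodbatlas/apis/online/v1alpha1" // assumed import path
)

func ptr[T any](v T) *T { return &v }

// Up to two partition fields; the order sequence starts at zero.
var partitionFields = []onlinev1alpha1.PartitionFieldsParameters{
	{FieldName: ptr("customerId"), Order: ptr(0.0)},
	{FieldName: ptr("region"), Order: ptr(1.0)},
}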
func (*PartitionFieldsParameters) DeepCopy ¶
func (in *PartitionFieldsParameters) DeepCopy() *PartitionFieldsParameters
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PartitionFieldsParameters.
func (*PartitionFieldsParameters) DeepCopyInto ¶
func (in *PartitionFieldsParameters) DeepCopyInto(out *PartitionFieldsParameters)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type ScheduleObservation ¶
type ScheduleObservation struct {
	// Day of the month when the scheduled archive starts. This field should be provided only when the schedule type is MONTHLY.
	DayOfMonth *float64 `json:"dayOfMonth,omitempty" tf:"day_of_month,omitempty"`

	// Day of the week when the scheduled archive starts. The week starts with Monday (1) and ends with Sunday (7). This field should be provided only when the schedule type is WEEKLY.
	DayOfWeek *float64 `json:"dayOfWeek,omitempty" tf:"day_of_week,omitempty"`

	// Hour of the day when the scheduled window to run one online archive ends.
	EndHour *float64 `json:"endHour,omitempty" tf:"end_hour,omitempty"`

	// Minute of the hour when the scheduled window to run one online archive ends.
	EndMinute *float64 `json:"endMinute,omitempty" tf:"end_minute,omitempty"`

	// Hour of the day when the scheduled window to run one online archive starts.
	StartHour *float64 `json:"startHour,omitempty" tf:"start_hour,omitempty"`

	// Minute of the hour when the scheduled window to run one online archive starts.
	StartMinute *float64 `json:"startMinute,omitempty" tf:"start_minute,omitempty"`

	// Type of criteria (DATE, CUSTOM).
	Type *string `json:"type,omitempty" tf:"type,omitempty"`
}
func (*ScheduleObservation) DeepCopy ¶
func (in *ScheduleObservation) DeepCopy() *ScheduleObservation
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduleObservation.
func (*ScheduleObservation) DeepCopyInto ¶
func (in *ScheduleObservation) DeepCopyInto(out *ScheduleObservation)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type ScheduleParameters ¶
type ScheduleParameters struct {
	// Day of the month when the scheduled archive starts. This field should be provided only when the schedule type is MONTHLY.
	// +kubebuilder:validation:Optional
	DayOfMonth *float64 `json:"dayOfMonth,omitempty" tf:"day_of_month,omitempty"`

	// Day of the week when the scheduled archive starts. The week starts with Monday (1) and ends with Sunday (7). This field should be provided only when the schedule type is WEEKLY.
	// +kubebuilder:validation:Optional
	DayOfWeek *float64 `json:"dayOfWeek,omitempty" tf:"day_of_week,omitempty"`

	// Hour of the day when the scheduled window to run one online archive ends.
	// +kubebuilder:validation:Optional
	EndHour *float64 `json:"endHour,omitempty" tf:"end_hour,omitempty"`

	// Minute of the hour when the scheduled window to run one online archive ends.
	// +kubebuilder:validation:Optional
	EndMinute *float64 `json:"endMinute,omitempty" tf:"end_minute,omitempty"`

	// Hour of the day when the scheduled window to run one online archive starts.
	// +kubebuilder:validation:Optional
	StartHour *float64 `json:"startHour,omitempty" tf:"start_hour,omitempty"`

	// Minute of the hour when the scheduled window to run one online archive starts.
	// +kubebuilder:validation:Optional
	StartMinute *float64 `json:"startMinute,omitempty" tf:"start_minute,omitempty"`

	// Type of criteria (DATE, CUSTOM).
	// +kubebuilder:validation:Required
	Type *string `json:"type" tf:"type,omitempty"`
}
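An illustrative weekly ScheduleParameters entry. The WEEKLY type string is an assumption inferred from the DayOfWeek documentation above, and the import path, ptr helper, and window values are likewise illustrative.

package example

import (
	onlinev1alpha1 "github.com/crossplane-contrib/provider-jet-mongodbatlas/apis/online/v1alpha1" // assumed import path
)

func ptr[T any](v T) *T { return &v }

// A weekly window: run the archive job on Sundays (7) between 02:00 and 05:00.
var weeklySchedule = []onlinev1alpha1.ScheduleParameters{{
	Type:        ptr("WEEKLY"), // schedule type value is an assumption, not confirmed by this package's docs
	DayOfWeek:   ptr(7.0),
	StartHour:   ptr(2.0),
	StartMinute: ptr(0.0),
	EndHour:     ptr(5.0),
	EndMinute:   ptr(0.0),
}}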
func (*ScheduleParameters) DeepCopy ¶
func (in *ScheduleParameters) DeepCopy() *ScheduleParameters
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduleParameters.
func (*ScheduleParameters) DeepCopyInto ¶
func (in *ScheduleParameters) DeepCopyInto(out *ScheduleParameters)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.