package v1alpha1

Version: v0.0.0-...-b890ebe
Published: May 6, 2024 License: Apache-2.0 Imports: 14 Imported by: 0

Documentation

Overview

+kubebuilder:object:generate=true +groupName=logpush.upbound.io +versionName=v1alpha1

Index

Constants

const (
	CRDGroup   = "logpush.upbound.io"
	CRDVersion = "v1alpha1"
)

Package type metadata.

Variables

var (
	// CRDGroupVersion is the API Group Version used to register the objects
	CRDGroupVersion = schema.GroupVersion{Group: CRDGroup, Version: CRDVersion}

	// SchemeBuilder is used to add go types to the GroupVersionKind scheme
	SchemeBuilder = &scheme.Builder{GroupVersion: CRDGroupVersion}

	// AddToScheme adds the types in this group-version to the given scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)
var (
	Job_Kind             = "Job"
	Job_GroupKind        = schema.GroupKind{Group: CRDGroup, Kind: Job_Kind}.String()
	Job_KindAPIVersion   = Job_Kind + "." + CRDGroupVersion.String()
	Job_GroupVersionKind = CRDGroupVersion.WithKind(Job_Kind)
)
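As a sanity check, the strings these variables expand to can be reconstructed with nothing but the standard library. The sketch below mirrors the assumed rendering behavior of the apimachinery helpers (GroupVersion.String() produces "group/version", GroupKind.String() produces "Kind.group"); it is an illustration, not the provider's code:

```go
package main

import "fmt"

const (
	crdGroup   = "logpush.upbound.io"
	crdVersion = "v1alpha1"
)

// groupVersionString mirrors schema.GroupVersion.String(): "group/version".
func groupVersionString() string { return crdGroup + "/" + crdVersion }

// groupKindString mirrors schema.GroupKind{...}.String(): "Kind.group".
func groupKindString(kind string) string { return kind + "." + crdGroup }

// kindAPIVersion mirrors Job_KindAPIVersion: Kind + "." + CRDGroupVersion.String().
func kindAPIVersion(kind string) string { return kind + "." + groupVersionString() }

func main() {
	fmt.Println(groupKindString("Job"))  // Job.logpush.upbound.io
	fmt.Println(kindAPIVersion("Job"))   // Job.logpush.upbound.io/v1alpha1
}
```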

Job type metadata.

var (
	OwnershipChallenge_Kind             = "OwnershipChallenge"
	OwnershipChallenge_GroupKind        = schema.GroupKind{Group: CRDGroup, Kind: OwnershipChallenge_Kind}.String()
	OwnershipChallenge_KindAPIVersion   = OwnershipChallenge_Kind + "." + CRDGroupVersion.String()
	OwnershipChallenge_GroupVersionKind = CRDGroupVersion.WithKind(OwnershipChallenge_Kind)
)

OwnershipChallenge type metadata.

Functions

This section is empty.

Types

type Job

type Job struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// +kubebuilder:validation:XValidation:rule="!('*' in self.managementPolicies || 'Create' in self.managementPolicies || 'Update' in self.managementPolicies) || has(self.forProvider.dataset) || (has(self.initProvider) && has(self.initProvider.dataset))",message="spec.forProvider.dataset is a required parameter"
	// +kubebuilder:validation:XValidation:rule="!('*' in self.managementPolicies || 'Create' in self.managementPolicies || 'Update' in self.managementPolicies) || has(self.forProvider.destinationConf) || (has(self.initProvider) && has(self.initProvider.destinationConf))",message="spec.forProvider.destinationConf is a required parameter"
	Spec   JobSpec   `json:"spec"`
	Status JobStatus `json:"status,omitempty"`
}

Job is the Schema for the Jobs API. Provides a resource which manages Cloudflare Logpush jobs. For Logpush jobs pushing to Amazon S3, Google Cloud Storage, Microsoft Azure, or Sumo Logic, this resource cannot be automatically created. In order to automate this, you must have:

cloudflare_logpush_ownership_challenge: configured to generate the challenge to confirm ownership of the destination.

cloudflare_logpush_job: create and manage the Logpush job itself.

+kubebuilder:printcolumn:name="READY",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].status" +kubebuilder:printcolumn:name="SYNCED",type="string",JSONPath=".status.conditions[?(@.type=='Synced')].status" +kubebuilder:printcolumn:name="EXTERNAL-NAME",type="string",JSONPath=".metadata.annotations.crossplane\\.io/external-name" +kubebuilder:printcolumn:name="AGE",type="date",JSONPath=".metadata.creationTimestamp" +kubebuilder:subresource:status +kubebuilder:resource:scope=Cluster,categories={crossplane,managed,cloudflare}
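Taken together, the XValidation rules on the spec mean that any manifest which allows the provider to create or update the job must set dataset and destinationConf in spec.forProvider (or spec.initProvider). A minimal illustrative manifest follows; the resource name, destination bucket, and selector label are all hypothetical placeholders:

```yaml
apiVersion: logpush.upbound.io/v1alpha1
kind: Job
metadata:
  name: example-http-requests   # hypothetical name
spec:
  forProvider:
    dataset: http_requests      # required when Create/Update is permitted
    destinationConf: "s3://example-bucket/logs?region=us-east-1"  # hypothetical destination
    enabled: true
    frequency: high
    zoneIdSelector:             # resolves zoneId from a Zone managed resource
      matchLabels:
        example: "true"         # hypothetical label
```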

func (*Job) DeepCopy

func (in *Job) DeepCopy() *Job

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Job.

func (*Job) DeepCopyInto

func (in *Job) DeepCopyInto(out *Job)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Job) DeepCopyObject

func (in *Job) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

func (*Job) GetCondition

func (mg *Job) GetCondition(ct xpv1.ConditionType) xpv1.Condition

GetCondition of this Job.

func (*Job) GetConnectionDetailsMapping

func (tr *Job) GetConnectionDetailsMapping() map[string]string

GetConnectionDetailsMapping for this Job

func (*Job) GetDeletionPolicy

func (mg *Job) GetDeletionPolicy() xpv1.DeletionPolicy

GetDeletionPolicy of this Job.

func (*Job) GetID

func (tr *Job) GetID() string

GetID returns ID of underlying Terraform resource of this Job

func (*Job) GetInitParameters

func (tr *Job) GetInitParameters() (map[string]any, error)

GetInitParameters of this Job

func (*Job) GetManagementPolicies

func (mg *Job) GetManagementPolicies() xpv1.ManagementPolicies

GetManagementPolicies of this Job.

func (*Job) GetObservation

func (tr *Job) GetObservation() (map[string]any, error)

GetObservation of this Job

func (*Job) GetParameters

func (tr *Job) GetParameters() (map[string]any, error)

GetParameters of this Job

func (*Job) GetProviderConfigReference

func (mg *Job) GetProviderConfigReference() *xpv1.Reference

GetProviderConfigReference of this Job.

func (*Job) GetPublishConnectionDetailsTo

func (mg *Job) GetPublishConnectionDetailsTo() *xpv1.PublishConnectionDetailsTo

GetPublishConnectionDetailsTo of this Job.

func (*Job) GetTerraformResourceType

func (mg *Job) GetTerraformResourceType() string

GetTerraformResourceType returns Terraform resource type for this Job

func (*Job) GetTerraformSchemaVersion

func (tr *Job) GetTerraformSchemaVersion() int

GetTerraformSchemaVersion returns the associated Terraform schema version

func (*Job) GetWriteConnectionSecretToReference

func (mg *Job) GetWriteConnectionSecretToReference() *xpv1.SecretReference

GetWriteConnectionSecretToReference of this Job.

func (*Job) LateInitialize

func (tr *Job) LateInitialize(attrs []byte) (bool, error)

LateInitialize this Job using its observed tfState. Returns true if there are any spec changes for the resource.
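Late initialization, as described above, fills spec fields the user left unset with values observed in the Terraform state, and reports whether the spec changed. A generic stand-in for that behavior (not the provider's actual implementation) might look like:

```go
package main

import "fmt"

// lateInitString copies the observed value into the spec field only when the
// user left it unset, and reports whether the spec changed as a result.
func lateInitString(spec *string, observed string) (*string, bool) {
	if spec != nil || observed == "" {
		return spec, false // user-set values are never overwritten
	}
	return &observed, true
}

func main() {
	var frequency *string // unset in spec
	frequency, changed := lateInitString(frequency, "high")
	fmt.Println(*frequency, changed) // high true
}
```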

func (*Job) ResolveReferences

func (mg *Job) ResolveReferences(ctx context.Context, c client.Reader) error

ResolveReferences of this Job.

func (*Job) SetConditions

func (mg *Job) SetConditions(c ...xpv1.Condition)

SetConditions of this Job.

func (*Job) SetDeletionPolicy

func (mg *Job) SetDeletionPolicy(r xpv1.DeletionPolicy)

SetDeletionPolicy of this Job.

func (*Job) SetManagementPolicies

func (mg *Job) SetManagementPolicies(r xpv1.ManagementPolicies)

SetManagementPolicies of this Job.

func (*Job) SetObservation

func (tr *Job) SetObservation(obs map[string]any) error

SetObservation for this Job

func (*Job) SetParameters

func (tr *Job) SetParameters(params map[string]any) error

SetParameters for this Job

func (*Job) SetProviderConfigReference

func (mg *Job) SetProviderConfigReference(r *xpv1.Reference)

SetProviderConfigReference of this Job.

func (*Job) SetPublishConnectionDetailsTo

func (mg *Job) SetPublishConnectionDetailsTo(r *xpv1.PublishConnectionDetailsTo)

SetPublishConnectionDetailsTo of this Job.

func (*Job) SetWriteConnectionSecretToReference

func (mg *Job) SetWriteConnectionSecretToReference(r *xpv1.SecretReference)

SetWriteConnectionSecretToReference of this Job.

type JobInitParameters

type JobInitParameters struct {

	// (String) The kind of the dataset to use with the logpush job. Available values: access_requests, casb_findings, firewall_events, http_requests, spectrum_events, nel_reports, audit_logs, gateway_dns, gateway_http, gateway_network, dns_logs, network_analytics_logs, workers_trace_events, device_posture_results, zero_trust_network_sessions, magic_ids_detections, page_shield_events.
	// The kind of the dataset to use with the logpush job. Available values: `access_requests`, `casb_findings`, `firewall_events`, `http_requests`, `spectrum_events`, `nel_reports`, `audit_logs`, `gateway_dns`, `gateway_http`, `gateway_network`, `dns_logs`, `network_analytics_logs`, `workers_trace_events`, `device_posture_results`, `zero_trust_network_sessions`, `magic_ids_detections`, `page_shield_events`.
	Dataset *string `json:"dataset,omitempty" tf:"dataset,omitempty"`

	// (String) Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See Logpush destination documentation.
	// Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See [Logpush destination documentation](https://developers.cloudflare.com/logs/reference/logpush-api-configuration#destination).
	DestinationConf *string `json:"destinationConf,omitempty" tf:"destination_conf,omitempty"`

	// (Boolean) Whether to enable the job.
	// Whether to enable the job.
	Enabled *bool `json:"enabled,omitempty" tf:"enabled,omitempty"`

	// (String) Use filters to select the events to include and/or remove from your logs. For more information, refer to Filters.
	// Use filters to select the events to include and/or remove from your logs. For more information, refer to [Filters](https://developers.cloudflare.com/logs/reference/logpush-api-configuration/filters/).
	Filter *string `json:"filter,omitempty" tf:"filter,omitempty"`

	// (String) A higher frequency will result in logs being pushed more often, with smaller files. low frequency will push logs less often, with larger files. Available values: high, low. Defaults to high.
	// A higher frequency will result in logs being pushed more often, with smaller files. `low` frequency will push logs less often, with larger files. Available values: `high`, `low`. Defaults to `high`.
	Frequency *string `json:"frequency,omitempty" tf:"frequency,omitempty"`

	// (String) The kind of logpush job to create. Available values: edge, instant-logs, "".
	// The kind of logpush job to create. Available values: `edge`, `instant-logs`, `""`.
	Kind *string `json:"kind,omitempty" tf:"kind,omitempty"`

	// (String) Configuration string for the Logshare API. It specifies things like requested fields and timestamp formats. See Logpush options documentation.
	// Configuration string for the Logshare API. It specifies things like requested fields and timestamp formats. See [Logpush options documentation](https://developers.cloudflare.com/logs/logpush/logpush-configuration-api/understanding-logpush-api/#options).
	LogpullOptions *string `json:"logpullOptions,omitempty" tf:"logpull_options,omitempty"`

	// (Number) The maximum uncompressed file size of a batch of logs. Value must be between 5MB and 1GB.
	// The maximum uncompressed file size of a batch of logs. Value must be between 5MB and 1GB.
	MaxUploadBytes *float64 `json:"maxUploadBytes,omitempty" tf:"max_upload_bytes,omitempty"`

	// (Number) The maximum interval in seconds for log batches. Value must be between 30 and 300.
	// The maximum interval in seconds for log batches. Value must be between 30 and 300.
	MaxUploadIntervalSeconds *float64 `json:"maxUploadIntervalSeconds,omitempty" tf:"max_upload_interval_seconds,omitempty"`

	// (Number) The maximum number of log lines per batch. Value must be between 1000 and 1,000,000.
	// The maximum number of log lines per batch. Value must be between 1000 and 1,000,000.
	MaxUploadRecords *float64 `json:"maxUploadRecords,omitempty" tf:"max_upload_records,omitempty"`

	// (String) The name of the logpush job to create.
	// The name of the logpush job to create.
	Name *string `json:"name,omitempty" tf:"name,omitempty"`

	// (Block List, Max: 1) Structured replacement for logpull_options. When including this field, the logpull_options field will be ignored. (see below for nested schema)
	// Structured replacement for logpull_options. When including this field, the logpull_options field will be ignored.
	OutputOptions []OutputOptionsInitParameters `json:"outputOptions,omitempty" tf:"output_options,omitempty"`

	// (String) Ownership challenge token to prove destination ownership, required when destination is Amazon S3, Google Cloud Storage, Microsoft Azure or Sumo Logic. See Developer documentation.
	// Ownership challenge token to prove destination ownership, required when destination is Amazon S3, Google Cloud Storage, Microsoft Azure or Sumo Logic. See [Developer documentation](https://developers.cloudflare.com/logs/logpush/logpush-configuration-api/understanding-logpush-api/#usage).
	OwnershipChallenge *string `json:"ownershipChallenge,omitempty" tf:"ownership_challenge,omitempty"`
}

func (*JobInitParameters) DeepCopy

func (in *JobInitParameters) DeepCopy() *JobInitParameters

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobInitParameters.

func (*JobInitParameters) DeepCopyInto

func (in *JobInitParameters) DeepCopyInto(out *JobInitParameters)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type JobList

type JobList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Job `json:"items"`
}

JobList contains a list of Jobs

func (*JobList) DeepCopy

func (in *JobList) DeepCopy() *JobList

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobList.

func (*JobList) DeepCopyInto

func (in *JobList) DeepCopyInto(out *JobList)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*JobList) DeepCopyObject

func (in *JobList) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

func (*JobList) GetItems

func (l *JobList) GetItems() []resource.Managed

GetItems of this JobList.

type JobObservation

type JobObservation struct {

	// (String) The account identifier to target for the resource. Must provide only one of account_id, zone_id.
	// The account identifier to target for the resource. Must provide only one of `account_id`, `zone_id`.
	AccountID *string `json:"accountId,omitempty" tf:"account_id,omitempty"`

	// (String) The kind of the dataset to use with the logpush job. Available values: access_requests, casb_findings, firewall_events, http_requests, spectrum_events, nel_reports, audit_logs, gateway_dns, gateway_http, gateway_network, dns_logs, network_analytics_logs, workers_trace_events, device_posture_results, zero_trust_network_sessions, magic_ids_detections, page_shield_events.
	// The kind of the dataset to use with the logpush job. Available values: `access_requests`, `casb_findings`, `firewall_events`, `http_requests`, `spectrum_events`, `nel_reports`, `audit_logs`, `gateway_dns`, `gateway_http`, `gateway_network`, `dns_logs`, `network_analytics_logs`, `workers_trace_events`, `device_posture_results`, `zero_trust_network_sessions`, `magic_ids_detections`, `page_shield_events`.
	Dataset *string `json:"dataset,omitempty" tf:"dataset,omitempty"`

	// (String) Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See Logpush destination documentation.
	// Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See [Logpush destination documentation](https://developers.cloudflare.com/logs/reference/logpush-api-configuration#destination).
	DestinationConf *string `json:"destinationConf,omitempty" tf:"destination_conf,omitempty"`

	// (Boolean) Whether to enable the job.
	// Whether to enable the job.
	Enabled *bool `json:"enabled,omitempty" tf:"enabled,omitempty"`

	// (String) Use filters to select the events to include and/or remove from your logs. For more information, refer to Filters.
	// Use filters to select the events to include and/or remove from your logs. For more information, refer to [Filters](https://developers.cloudflare.com/logs/reference/logpush-api-configuration/filters/).
	Filter *string `json:"filter,omitempty" tf:"filter,omitempty"`

	// (String) A higher frequency will result in logs being pushed more often, with smaller files. low frequency will push logs less often, with larger files. Available values: high, low. Defaults to high.
	// A higher frequency will result in logs being pushed more often, with smaller files. `low` frequency will push logs less often, with larger files. Available values: `high`, `low`. Defaults to `high`.
	Frequency *string `json:"frequency,omitempty" tf:"frequency,omitempty"`

	// (String) The ID of this resource.
	ID *string `json:"id,omitempty" tf:"id,omitempty"`

	// (String) The kind of logpush job to create. Available values: edge, instant-logs, "".
	// The kind of logpush job to create. Available values: `edge`, `instant-logs`, `""`.
	Kind *string `json:"kind,omitempty" tf:"kind,omitempty"`

	// (String) Configuration string for the Logshare API. It specifies things like requested fields and timestamp formats. See Logpush options documentation.
	// Configuration string for the Logshare API. It specifies things like requested fields and timestamp formats. See [Logpush options documentation](https://developers.cloudflare.com/logs/logpush/logpush-configuration-api/understanding-logpush-api/#options).
	LogpullOptions *string `json:"logpullOptions,omitempty" tf:"logpull_options,omitempty"`

	// (Number) The maximum uncompressed file size of a batch of logs. Value must be between 5MB and 1GB.
	// The maximum uncompressed file size of a batch of logs. Value must be between 5MB and 1GB.
	MaxUploadBytes *float64 `json:"maxUploadBytes,omitempty" tf:"max_upload_bytes,omitempty"`

	// (Number) The maximum interval in seconds for log batches. Value must be between 30 and 300.
	// The maximum interval in seconds for log batches. Value must be between 30 and 300.
	MaxUploadIntervalSeconds *float64 `json:"maxUploadIntervalSeconds,omitempty" tf:"max_upload_interval_seconds,omitempty"`

	// (Number) The maximum number of log lines per batch. Value must be between 1000 and 1,000,000.
	// The maximum number of log lines per batch. Value must be between 1000 and 1,000,000.
	MaxUploadRecords *float64 `json:"maxUploadRecords,omitempty" tf:"max_upload_records,omitempty"`

	// (String) The name of the logpush job to create.
	// The name of the logpush job to create.
	Name *string `json:"name,omitempty" tf:"name,omitempty"`

	// (Block List, Max: 1) Structured replacement for logpull_options. When including this field, the logpull_options field will be ignored. (see below for nested schema)
	// Structured replacement for logpull_options. When including this field, the logpull_options field will be ignored.
	OutputOptions []OutputOptionsObservation `json:"outputOptions,omitempty" tf:"output_options,omitempty"`

	// (String) Ownership challenge token to prove destination ownership, required when destination is Amazon S3, Google Cloud Storage, Microsoft Azure or Sumo Logic. See Developer documentation.
	// Ownership challenge token to prove destination ownership, required when destination is Amazon S3, Google Cloud Storage, Microsoft Azure or Sumo Logic. See [Developer documentation](https://developers.cloudflare.com/logs/logpush/logpush-configuration-api/understanding-logpush-api/#usage).
	OwnershipChallenge *string `json:"ownershipChallenge,omitempty" tf:"ownership_challenge,omitempty"`

	// (String) The zone identifier to target for the resource. Must provide only one of account_id, zone_id.
	// The zone identifier to target for the resource. Must provide only one of `account_id`, `zone_id`.
	ZoneID *string `json:"zoneId,omitempty" tf:"zone_id,omitempty"`
}

func (*JobObservation) DeepCopy

func (in *JobObservation) DeepCopy() *JobObservation

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobObservation.

func (*JobObservation) DeepCopyInto

func (in *JobObservation) DeepCopyInto(out *JobObservation)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type JobParameters

type JobParameters struct {

	// (String) The account identifier to target for the resource. Must provide only one of account_id, zone_id.
	// The account identifier to target for the resource. Must provide only one of `account_id`, `zone_id`.
	// +crossplane:generate:reference:type=github.com/milkpirate/provider-cloudflare/apis/account/v1alpha1.Account
	// +kubebuilder:validation:Optional
	AccountID *string `json:"accountId,omitempty" tf:"account_id,omitempty"`

	// Reference to an Account in account to populate accountId.
	// +kubebuilder:validation:Optional
	AccountIDRef *v1.Reference `json:"accountIdRef,omitempty" tf:"-"`

	// Selector for an Account in account to populate accountId.
	// +kubebuilder:validation:Optional
	AccountIDSelector *v1.Selector `json:"accountIdSelector,omitempty" tf:"-"`

	// (String) The kind of the dataset to use with the logpush job. Available values: access_requests, casb_findings, firewall_events, http_requests, spectrum_events, nel_reports, audit_logs, gateway_dns, gateway_http, gateway_network, dns_logs, network_analytics_logs, workers_trace_events, device_posture_results, zero_trust_network_sessions, magic_ids_detections, page_shield_events.
	// The kind of the dataset to use with the logpush job. Available values: `access_requests`, `casb_findings`, `firewall_events`, `http_requests`, `spectrum_events`, `nel_reports`, `audit_logs`, `gateway_dns`, `gateway_http`, `gateway_network`, `dns_logs`, `network_analytics_logs`, `workers_trace_events`, `device_posture_results`, `zero_trust_network_sessions`, `magic_ids_detections`, `page_shield_events`.
	// +kubebuilder:validation:Optional
	Dataset *string `json:"dataset,omitempty" tf:"dataset,omitempty"`

	// (String) Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See Logpush destination documentation.
	// Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See [Logpush destination documentation](https://developers.cloudflare.com/logs/reference/logpush-api-configuration#destination).
	// +kubebuilder:validation:Optional
	DestinationConf *string `json:"destinationConf,omitempty" tf:"destination_conf,omitempty"`

	// (Boolean) Whether to enable the job.
	// Whether to enable the job.
	// +kubebuilder:validation:Optional
	Enabled *bool `json:"enabled,omitempty" tf:"enabled,omitempty"`

	// (String) Use filters to select the events to include and/or remove from your logs. For more information, refer to Filters.
	// Use filters to select the events to include and/or remove from your logs. For more information, refer to [Filters](https://developers.cloudflare.com/logs/reference/logpush-api-configuration/filters/).
	// +kubebuilder:validation:Optional
	Filter *string `json:"filter,omitempty" tf:"filter,omitempty"`

	// (String) A higher frequency will result in logs being pushed more often, with smaller files. low frequency will push logs less often, with larger files. Available values: high, low. Defaults to high.
	// A higher frequency will result in logs being pushed more often, with smaller files. `low` frequency will push logs less often, with larger files. Available values: `high`, `low`. Defaults to `high`.
	// +kubebuilder:validation:Optional
	Frequency *string `json:"frequency,omitempty" tf:"frequency,omitempty"`

	// (String) The kind of logpush job to create. Available values: edge, instant-logs, "".
	// The kind of logpush job to create. Available values: `edge`, `instant-logs`, `""`.
	// +kubebuilder:validation:Optional
	Kind *string `json:"kind,omitempty" tf:"kind,omitempty"`

	// (String) Configuration string for the Logshare API. It specifies things like requested fields and timestamp formats. See Logpush options documentation.
	// Configuration string for the Logshare API. It specifies things like requested fields and timestamp formats. See [Logpush options documentation](https://developers.cloudflare.com/logs/logpush/logpush-configuration-api/understanding-logpush-api/#options).
	// +kubebuilder:validation:Optional
	LogpullOptions *string `json:"logpullOptions,omitempty" tf:"logpull_options,omitempty"`

	// (Number) The maximum uncompressed file size of a batch of logs. Value must be between 5MB and 1GB.
	// The maximum uncompressed file size of a batch of logs. Value must be between 5MB and 1GB.
	// +kubebuilder:validation:Optional
	MaxUploadBytes *float64 `json:"maxUploadBytes,omitempty" tf:"max_upload_bytes,omitempty"`

	// (Number) The maximum interval in seconds for log batches. Value must be between 30 and 300.
	// The maximum interval in seconds for log batches. Value must be between 30 and 300.
	// +kubebuilder:validation:Optional
	MaxUploadIntervalSeconds *float64 `json:"maxUploadIntervalSeconds,omitempty" tf:"max_upload_interval_seconds,omitempty"`

	// (Number) The maximum number of log lines per batch. Value must be between 1000 and 1,000,000.
	// The maximum number of log lines per batch. Value must be between 1000 and 1,000,000.
	// +kubebuilder:validation:Optional
	MaxUploadRecords *float64 `json:"maxUploadRecords,omitempty" tf:"max_upload_records,omitempty"`

	// (String) The name of the logpush job to create.
	// The name of the logpush job to create.
	// +kubebuilder:validation:Optional
	Name *string `json:"name,omitempty" tf:"name,omitempty"`

	// (Block List, Max: 1) Structured replacement for logpull_options. When including this field, the logpull_options field will be ignored. (see below for nested schema)
	// Structured replacement for logpull_options. When including this field, the logpull_options field will be ignored.
	// +kubebuilder:validation:Optional
	OutputOptions []OutputOptionsParameters `json:"outputOptions,omitempty" tf:"output_options,omitempty"`

	// (String) Ownership challenge token to prove destination ownership, required when destination is Amazon S3, Google Cloud Storage, Microsoft Azure or Sumo Logic. See Developer documentation.
	// Ownership challenge token to prove destination ownership, required when destination is Amazon S3, Google Cloud Storage, Microsoft Azure or Sumo Logic. See [Developer documentation](https://developers.cloudflare.com/logs/logpush/logpush-configuration-api/understanding-logpush-api/#usage).
	// +kubebuilder:validation:Optional
	OwnershipChallenge *string `json:"ownershipChallenge,omitempty" tf:"ownership_challenge,omitempty"`

	// (String) The zone identifier to target for the resource. Must provide only one of account_id, zone_id.
	// The zone identifier to target for the resource. Must provide only one of `account_id`, `zone_id`.
	// +crossplane:generate:reference:type=github.com/milkpirate/provider-cloudflare/apis/zone/v1alpha1.Zone
	// +kubebuilder:validation:Optional
	ZoneID *string `json:"zoneId,omitempty" tf:"zone_id,omitempty"`

	// Reference to a Zone in zone to populate zoneId.
	// +kubebuilder:validation:Optional
	ZoneIDRef *v1.Reference `json:"zoneIdRef,omitempty" tf:"-"`

	// Selector for a Zone in zone to populate zoneId.
	// +kubebuilder:validation:Optional
	ZoneIDSelector *v1.Selector `json:"zoneIdSelector,omitempty" tf:"-"`
}

func (*JobParameters) DeepCopy

func (in *JobParameters) DeepCopy() *JobParameters

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobParameters.

func (*JobParameters) DeepCopyInto

func (in *JobParameters) DeepCopyInto(out *JobParameters)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type JobSpec

type JobSpec struct {
	v1.ResourceSpec `json:",inline"`
	ForProvider     JobParameters `json:"forProvider"`
	// THIS IS A BETA FIELD. It will be honored
	// unless the Management Policies feature flag is disabled.
	// InitProvider holds the same fields as ForProvider, with the exception
	// of Identifier and other resource reference fields. The fields that are
	// in InitProvider are merged into ForProvider when the resource is created.
	// The same fields are also added to the terraform ignore_changes hook, to
	// avoid updating them after creation. This is useful for fields that are
	// required on creation, but we do not desire to update them after creation,
	// for example because an external controller, such as an autoscaler, is
	// managing them.
	InitProvider JobInitParameters `json:"initProvider,omitempty"`
}

JobSpec defines the desired state of Job
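The InitProvider semantics described in the comment above (initProvider fields merged into forProvider at creation, with forProvider taking precedence) can be sketched over plain maps. This illustrates the merge rule only; it is not the provider's code:

```go
package main

import "fmt"

// mergeInitProvider overlays forProvider on top of initProvider: initProvider
// supplies creation-only values, and any field also set in forProvider wins.
func mergeInitProvider(forProvider, initProvider map[string]any) map[string]any {
	merged := make(map[string]any, len(forProvider)+len(initProvider))
	for k, v := range initProvider {
		merged[k] = v
	}
	for k, v := range forProvider {
		merged[k] = v // forProvider takes precedence on conflicts
	}
	return merged
}

func main() {
	merged := mergeInitProvider(
		map[string]any{"dataset": "http_requests"},
		map[string]any{"dataset": "audit_logs", "frequency": "high"},
	)
	fmt.Println(merged["dataset"], merged["frequency"]) // http_requests high
}
```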

func (*JobSpec) DeepCopy

func (in *JobSpec) DeepCopy() *JobSpec

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobSpec.

func (*JobSpec) DeepCopyInto

func (in *JobSpec) DeepCopyInto(out *JobSpec)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type JobStatus

type JobStatus struct {
	v1.ResourceStatus `json:",inline"`
	AtProvider        JobObservation `json:"atProvider,omitempty"`
}

JobStatus defines the observed state of Job.

func (*JobStatus) DeepCopy

func (in *JobStatus) DeepCopy() *JobStatus

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobStatus.

func (*JobStatus) DeepCopyInto

func (in *JobStatus) DeepCopyInto(out *JobStatus)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type OutputOptionsInitParameters

type OutputOptionsInitParameters struct {

	// (String) String to be prepended before each batch.
	// String to be prepended before each batch.
	BatchPrefix *string `json:"batchPrefix,omitempty" tf:"batch_prefix,omitempty"`

	// (String) String to be appended after each batch.
	// String to be appended after each batch.
	BatchSuffix *string `json:"batchSuffix,omitempty" tf:"batch_suffix,omitempty"`

	// (Boolean) Mitigation for CVE-2021-44228. If set to true, will cause all occurrences of ${ in the generated files to be replaced with x{. Defaults to false.
	// Mitigation for CVE-2021-44228. If set to true, will cause all occurrences of ${ in the generated files to be replaced with x{. Defaults to `false`.
	Cve20214428 *bool `json:"cve20214428,omitempty" tf:"cve20214428,omitempty"`

	// (String) String to join fields. This field will be ignored when record_template is set. Defaults to ,.
	// String to join fields. This field will be ignored when record_template is set. Defaults to `,`.
	FieldDelimiter *string `json:"fieldDelimiter,omitempty" tf:"field_delimiter,omitempty"`

	// (List of String) List of field names to be included in the Logpush output.
	// List of field names to be included in the Logpush output.
	FieldNames []*string `json:"fieldNames,omitempty" tf:"field_names,omitempty"`

	// (String) Specifies the output type. Available values: ndjson, csv. Defaults to ndjson.
	// Specifies the output type. Available values: `ndjson`, `csv`. Defaults to `ndjson`.
	OutputType *string `json:"outputType,omitempty" tf:"output_type,omitempty"`

	// (String) String to be inserted in-between the records as separator.
	// String to be inserted in-between the records as separator.
	RecordDelimiter *string `json:"recordDelimiter,omitempty" tf:"record_delimiter,omitempty"`

	// (String) String to be prepended before each record. Defaults to {.
	// String to be prepended before each record. Defaults to `{`.
	RecordPrefix *string `json:"recordPrefix,omitempty" tf:"record_prefix,omitempty"`

	// (String) String to be appended after each record. Defaults to }.
	// String to be appended after each record. Defaults to `}`.
	RecordSuffix *string `json:"recordSuffix,omitempty" tf:"record_suffix,omitempty"`

	// (String) String to use as template for each record instead of the default comma-separated list.
	// String to use as template for each record instead of the default comma-separated list.
	RecordTemplate *string `json:"recordTemplate,omitempty" tf:"record_template,omitempty"`

	// (Number) Specifies the sampling rate. Defaults to 1.
	// Specifies the sampling rate. Defaults to `1`.
	SampleRate *float64 `json:"sampleRate,omitempty" tf:"sample_rate,omitempty"`

	// (String) Specifies the format for timestamps. Available values: unixnano, unix, rfc3339. Defaults to unixnano.
	// Specifies the format for timestamps. Available values: `unixnano`, `unix`, `rfc3339`. Defaults to `unixnano`.
	TimestampFormat *string `json:"timestampFormat,omitempty" tf:"timestamp_format,omitempty"`
}
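
The prefix, suffix, and delimiter fields above jointly determine how each record and batch is assembled. As a rough sketch of how those options interact (formatBatch is a hypothetical illustration, not part of this package):

```go
package main

import (
	"fmt"
	"strings"
)

// formatBatch illustrates how the output options compose: each record's
// fields are joined with fieldDelim, every record is wrapped in
// recPrefix/recSuffix, and records are joined with recDelim.
func formatBatch(records [][]string, fieldDelim, recDelim, recPrefix, recSuffix string) string {
	out := make([]string, 0, len(records))
	for _, fields := range records {
		out = append(out, recPrefix+strings.Join(fields, fieldDelim)+recSuffix)
	}
	return strings.Join(out, recDelim)
}

func main() {
	// With the documented defaults ("," field delimiter, "{" / "}" record
	// prefix and suffix), two records render as:
	fmt.Println(formatBatch(
		[][]string{{"a", "1"}, {"b", "2"}},
		",", "\n", "{", "}",
	))
	// {a,1}
	// {b,2}
}
```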

func (*OutputOptionsInitParameters) DeepCopy

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OutputOptionsInitParameters.

func (*OutputOptionsInitParameters) DeepCopyInto

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type OutputOptionsObservation

type OutputOptionsObservation struct {

	// (String) String to be prepended before each batch.
	// String to be prepended before each batch.
	BatchPrefix *string `json:"batchPrefix,omitempty" tf:"batch_prefix,omitempty"`

	// (String) String to be appended after each batch.
	// String to be appended after each batch.
	BatchSuffix *string `json:"batchSuffix,omitempty" tf:"batch_suffix,omitempty"`

	// (Boolean) Mitigation for CVE-2021-44228. If set to true, will cause all occurrences of ${ in the generated files to be replaced with x{. Defaults to false.
	// Mitigation for CVE-2021-44228. If set to true, will cause all occurrences of ${ in the generated files to be replaced with x{. Defaults to `false`.
	Cve20214428 *bool `json:"cve20214428,omitempty" tf:"cve20214428,omitempty"`

	// (String) String to join fields. This field will be ignored when record_template is set. Defaults to ,.
	// String to join fields. This field will be ignored when record_template is set. Defaults to `,`.
	FieldDelimiter *string `json:"fieldDelimiter,omitempty" tf:"field_delimiter,omitempty"`

	// (List of String) List of field names to be included in the Logpush output.
	// List of field names to be included in the Logpush output.
	FieldNames []*string `json:"fieldNames,omitempty" tf:"field_names,omitempty"`

	// (String) Specifies the output type. Available values: ndjson, csv. Defaults to ndjson.
	// Specifies the output type. Available values: `ndjson`, `csv`. Defaults to `ndjson`.
	OutputType *string `json:"outputType,omitempty" tf:"output_type,omitempty"`

	// (String) String to be inserted in-between the records as separator.
	// String to be inserted in-between the records as separator.
	RecordDelimiter *string `json:"recordDelimiter,omitempty" tf:"record_delimiter,omitempty"`

	// (String) String to be prepended before each record. Defaults to {.
	// String to be prepended before each record. Defaults to `{`.
	RecordPrefix *string `json:"recordPrefix,omitempty" tf:"record_prefix,omitempty"`

	// (String) String to be appended after each record. Defaults to }.
	// String to be appended after each record. Defaults to `}`.
	RecordSuffix *string `json:"recordSuffix,omitempty" tf:"record_suffix,omitempty"`

	// (String) String to use as template for each record instead of the default comma-separated list.
	// String to use as template for each record instead of the default comma-separated list.
	RecordTemplate *string `json:"recordTemplate,omitempty" tf:"record_template,omitempty"`

	// (Number) Specifies the sampling rate. Defaults to 1.
	// Specifies the sampling rate. Defaults to `1`.
	SampleRate *float64 `json:"sampleRate,omitempty" tf:"sample_rate,omitempty"`

	// (String) Specifies the format for timestamps. Available values: unixnano, unix, rfc3339. Defaults to unixnano.
	// Specifies the format for timestamps. Available values: `unixnano`, `unix`, `rfc3339`. Defaults to `unixnano`.
	TimestampFormat *string `json:"timestampFormat,omitempty" tf:"timestamp_format,omitempty"`
}

func (*OutputOptionsObservation) DeepCopy

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OutputOptionsObservation.

func (*OutputOptionsObservation) DeepCopyInto

func (in *OutputOptionsObservation) DeepCopyInto(out *OutputOptionsObservation)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type OutputOptionsParameters

type OutputOptionsParameters struct {

	// (String) String to be prepended before each batch.
	// String to be prepended before each batch.
	// +kubebuilder:validation:Optional
	BatchPrefix *string `json:"batchPrefix,omitempty" tf:"batch_prefix,omitempty"`

	// (String) String to be appended after each batch.
	// String to be appended after each batch.
	// +kubebuilder:validation:Optional
	BatchSuffix *string `json:"batchSuffix,omitempty" tf:"batch_suffix,omitempty"`

	// (Boolean) Mitigation for CVE-2021-44228. If set to true, will cause all occurrences of ${ in the generated files to be replaced with x{. Defaults to false.
	// Mitigation for CVE-2021-44228. If set to true, will cause all occurrences of ${ in the generated files to be replaced with x{. Defaults to `false`.
	// +kubebuilder:validation:Optional
	Cve20214428 *bool `json:"cve20214428,omitempty" tf:"cve20214428,omitempty"`

	// (String) String to join fields. This field will be ignored when record_template is set. Defaults to ,.
	// String to join fields. This field will be ignored when record_template is set. Defaults to `,`.
	// +kubebuilder:validation:Optional
	FieldDelimiter *string `json:"fieldDelimiter,omitempty" tf:"field_delimiter,omitempty"`

	// (List of String) List of field names to be included in the Logpush output.
	// List of field names to be included in the Logpush output.
	// +kubebuilder:validation:Optional
	FieldNames []*string `json:"fieldNames,omitempty" tf:"field_names,omitempty"`

	// (String) Specifies the output type. Available values: ndjson, csv. Defaults to ndjson.
	// Specifies the output type. Available values: `ndjson`, `csv`. Defaults to `ndjson`.
	// +kubebuilder:validation:Optional
	OutputType *string `json:"outputType,omitempty" tf:"output_type,omitempty"`

	// (String) String to be inserted in-between the records as separator.
	// String to be inserted in-between the records as separator.
	// +kubebuilder:validation:Optional
	RecordDelimiter *string `json:"recordDelimiter,omitempty" tf:"record_delimiter,omitempty"`

	// (String) String to be prepended before each record. Defaults to {.
	// String to be prepended before each record. Defaults to `{`.
	// +kubebuilder:validation:Optional
	RecordPrefix *string `json:"recordPrefix,omitempty" tf:"record_prefix,omitempty"`

	// (String) String to be appended after each record. Defaults to }.
	// String to be appended after each record. Defaults to `}`.
	// +kubebuilder:validation:Optional
	RecordSuffix *string `json:"recordSuffix,omitempty" tf:"record_suffix,omitempty"`

	// (String) String to use as template for each record instead of the default comma-separated list.
	// String to use as template for each record instead of the default comma-separated list.
	// +kubebuilder:validation:Optional
	RecordTemplate *string `json:"recordTemplate,omitempty" tf:"record_template,omitempty"`

	// (Number) Specifies the sampling rate. Defaults to 1.
	// Specifies the sampling rate. Defaults to `1`.
	// +kubebuilder:validation:Optional
	SampleRate *float64 `json:"sampleRate,omitempty" tf:"sample_rate,omitempty"`

	// (String) Specifies the format for timestamps. Available values: unixnano, unix, rfc3339. Defaults to unixnano.
	// Specifies the format for timestamps. Available values: `unixnano`, `unix`, `rfc3339`. Defaults to `unixnano`.
	// +kubebuilder:validation:Optional
	TimestampFormat *string `json:"timestampFormat,omitempty" tf:"timestamp_format,omitempty"`
}

func (*OutputOptionsParameters) DeepCopy

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OutputOptionsParameters.

func (*OutputOptionsParameters) DeepCopyInto

func (in *OutputOptionsParameters) DeepCopyInto(out *OutputOptionsParameters)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type OwnershipChallenge

type OwnershipChallenge struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// +kubebuilder:validation:XValidation:rule="!('*' in self.managementPolicies || 'Create' in self.managementPolicies || 'Update' in self.managementPolicies) || has(self.forProvider.destinationConf) || (has(self.initProvider) && has(self.initProvider.destinationConf))",message="spec.forProvider.destinationConf is a required parameter"
	Spec   OwnershipChallengeSpec   `json:"spec"`
	Status OwnershipChallengeStatus `json:"status,omitempty"`
}

OwnershipChallenge is the Schema for the OwnershipChallenges API. Provides a resource which manages Cloudflare Logpush ownership challenges for use in a Logpush Job. On its own this resource doesn't do much; it should be used in conjunction with a Job resource to create Logpush jobs. +kubebuilder:printcolumn:name="READY",type="string",JSONPath=".status.conditions[?(@.type=='Ready')].status" +kubebuilder:printcolumn:name="SYNCED",type="string",JSONPath=".status.conditions[?(@.type=='Synced')].status" +kubebuilder:printcolumn:name="EXTERNAL-NAME",type="string",JSONPath=".metadata.annotations.crossplane\\.io/external-name" +kubebuilder:printcolumn:name="AGE",type="date",JSONPath=".metadata.creationTimestamp" +kubebuilder:subresource:status +kubebuilder:resource:scope=Cluster,categories={crossplane,managed,cloudflare}
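
The XValidation marker on the struct encodes, in CEL, that destinationConf must be set whenever the management policies permit creating or updating the resource. A rough Go restatement of that rule (specValid is a hypothetical helper, not part of this package):

```go
package main

import "fmt"

// specValid mirrors the CEL XValidation rule on OwnershipChallenge:
// destinationConf is only required when the management policies include
// '*', 'Create', or 'Update'; observe-only resources may omit it.
func specValid(policies []string, forProviderSet, initProviderSet bool) bool {
	mutating := false
	for _, p := range policies {
		if p == "*" || p == "Create" || p == "Update" {
			mutating = true
			break
		}
	}
	return !mutating || forProviderSet || initProviderSet
}

func main() {
	fmt.Println(specValid([]string{"Observe"}, false, false)) // true: rule does not apply
	fmt.Println(specValid([]string{"*"}, false, false))       // false: destinationConf required
}
```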

func (*OwnershipChallenge) DeepCopy

func (in *OwnershipChallenge) DeepCopy() *OwnershipChallenge

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OwnershipChallenge.

func (*OwnershipChallenge) DeepCopyInto

func (in *OwnershipChallenge) DeepCopyInto(out *OwnershipChallenge)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*OwnershipChallenge) DeepCopyObject

func (in *OwnershipChallenge) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

func (*OwnershipChallenge) GetCondition

func (mg *OwnershipChallenge) GetCondition(ct xpv1.ConditionType) xpv1.Condition

GetCondition of this OwnershipChallenge.

func (*OwnershipChallenge) GetConnectionDetailsMapping

func (tr *OwnershipChallenge) GetConnectionDetailsMapping() map[string]string

GetConnectionDetailsMapping for this OwnershipChallenge

func (*OwnershipChallenge) GetDeletionPolicy

func (mg *OwnershipChallenge) GetDeletionPolicy() xpv1.DeletionPolicy

GetDeletionPolicy of this OwnershipChallenge.

func (*OwnershipChallenge) GetID

func (tr *OwnershipChallenge) GetID() string

GetID returns ID of underlying Terraform resource of this OwnershipChallenge

func (*OwnershipChallenge) GetInitParameters

func (tr *OwnershipChallenge) GetInitParameters() (map[string]any, error)

GetInitParameters of this OwnershipChallenge

func (*OwnershipChallenge) GetManagementPolicies

func (mg *OwnershipChallenge) GetManagementPolicies() xpv1.ManagementPolicies

GetManagementPolicies of this OwnershipChallenge.

func (*OwnershipChallenge) GetObservation

func (tr *OwnershipChallenge) GetObservation() (map[string]any, error)

GetObservation of this OwnershipChallenge

func (*OwnershipChallenge) GetParameters

func (tr *OwnershipChallenge) GetParameters() (map[string]any, error)

GetParameters of this OwnershipChallenge

func (*OwnershipChallenge) GetProviderConfigReference

func (mg *OwnershipChallenge) GetProviderConfigReference() *xpv1.Reference

GetProviderConfigReference of this OwnershipChallenge.

func (*OwnershipChallenge) GetPublishConnectionDetailsTo

func (mg *OwnershipChallenge) GetPublishConnectionDetailsTo() *xpv1.PublishConnectionDetailsTo

GetPublishConnectionDetailsTo of this OwnershipChallenge.

func (*OwnershipChallenge) GetTerraformResourceType

func (mg *OwnershipChallenge) GetTerraformResourceType() string

GetTerraformResourceType returns Terraform resource type for this OwnershipChallenge

func (*OwnershipChallenge) GetTerraformSchemaVersion

func (tr *OwnershipChallenge) GetTerraformSchemaVersion() int

GetTerraformSchemaVersion returns the associated Terraform schema version

func (*OwnershipChallenge) GetWriteConnectionSecretToReference

func (mg *OwnershipChallenge) GetWriteConnectionSecretToReference() *xpv1.SecretReference

GetWriteConnectionSecretToReference of this OwnershipChallenge.

func (*OwnershipChallenge) LateInitialize

func (tr *OwnershipChallenge) LateInitialize(attrs []byte) (bool, error)

LateInitialize this OwnershipChallenge using its observed tfState. Returns true if there are any spec changes for the resource.

func (*OwnershipChallenge) ResolveReferences

func (mg *OwnershipChallenge) ResolveReferences(ctx context.Context, c client.Reader) error

ResolveReferences of this OwnershipChallenge.

func (*OwnershipChallenge) SetConditions

func (mg *OwnershipChallenge) SetConditions(c ...xpv1.Condition)

SetConditions of this OwnershipChallenge.

func (*OwnershipChallenge) SetDeletionPolicy

func (mg *OwnershipChallenge) SetDeletionPolicy(r xpv1.DeletionPolicy)

SetDeletionPolicy of this OwnershipChallenge.

func (*OwnershipChallenge) SetManagementPolicies

func (mg *OwnershipChallenge) SetManagementPolicies(r xpv1.ManagementPolicies)

SetManagementPolicies of this OwnershipChallenge.

func (*OwnershipChallenge) SetObservation

func (tr *OwnershipChallenge) SetObservation(obs map[string]any) error

SetObservation for this OwnershipChallenge

func (*OwnershipChallenge) SetParameters

func (tr *OwnershipChallenge) SetParameters(params map[string]any) error

SetParameters for this OwnershipChallenge

func (*OwnershipChallenge) SetProviderConfigReference

func (mg *OwnershipChallenge) SetProviderConfigReference(r *xpv1.Reference)

SetProviderConfigReference of this OwnershipChallenge.

func (*OwnershipChallenge) SetPublishConnectionDetailsTo

func (mg *OwnershipChallenge) SetPublishConnectionDetailsTo(r *xpv1.PublishConnectionDetailsTo)

SetPublishConnectionDetailsTo of this OwnershipChallenge.

func (*OwnershipChallenge) SetWriteConnectionSecretToReference

func (mg *OwnershipChallenge) SetWriteConnectionSecretToReference(r *xpv1.SecretReference)

SetWriteConnectionSecretToReference of this OwnershipChallenge.

type OwnershipChallengeInitParameters

type OwnershipChallengeInitParameters struct {

	// (String) Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See Logpush destination documentation. Modifying this attribute will force creation of a new resource.
	// Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See [Logpush destination documentation](https://developers.cloudflare.com/logs/logpush/logpush-configuration-api/understanding-logpush-api/#destination). **Modifying this attribute will force creation of a new resource.**
	DestinationConf *string `json:"destinationConf,omitempty" tf:"destination_conf,omitempty"`
}
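
destination_conf values are URI-like: the scheme selects the destination type and query parameters carry destination-specific settings. A sketch of splitting such a value with the standard library (the bucket name and parameter below are made up for illustration):

```go
package main

import (
	"fmt"
	"net/url"
)

// destinationParts splits a destination_conf value into its scheme (the
// destination type), host (e.g. a bucket name), and query parameters.
// Illustrative only; real values are destination-specific.
func destinationParts(conf string) (scheme, host string, params url.Values, err error) {
	u, err := url.Parse(conf)
	if err != nil {
		return "", "", nil, err
	}
	return u.Scheme, u.Host, u.Query(), nil
}

func main() {
	// Hypothetical S3 destination with a region parameter.
	scheme, host, params, err := destinationParts("s3://my-logpush-bucket/logs?region=us-east-1")
	if err != nil {
		panic(err)
	}
	fmt.Println(scheme, host, params.Get("region")) // s3 my-logpush-bucket us-east-1
}
```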

func (*OwnershipChallengeInitParameters) DeepCopy

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OwnershipChallengeInitParameters.

func (*OwnershipChallengeInitParameters) DeepCopyInto

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type OwnershipChallengeList

type OwnershipChallengeList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []OwnershipChallenge `json:"items"`
}

OwnershipChallengeList contains a list of OwnershipChallenges

func (*OwnershipChallengeList) DeepCopy

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OwnershipChallengeList.

func (*OwnershipChallengeList) DeepCopyInto

func (in *OwnershipChallengeList) DeepCopyInto(out *OwnershipChallengeList)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*OwnershipChallengeList) DeepCopyObject

func (in *OwnershipChallengeList) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

func (*OwnershipChallengeList) GetItems

func (l *OwnershipChallengeList) GetItems() []resource.Managed

GetItems of this OwnershipChallengeList.

type OwnershipChallengeObservation

type OwnershipChallengeObservation struct {

	// (String) The account identifier to target for the resource. Must provide only one of account_id, zone_id.
	// The account identifier to target for the resource. Must provide only one of `account_id`, `zone_id`.
	AccountID *string `json:"accountId,omitempty" tf:"account_id,omitempty"`

	// (String) Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See Logpush destination documentation. Modifying this attribute will force creation of a new resource.
	// Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See [Logpush destination documentation](https://developers.cloudflare.com/logs/logpush/logpush-configuration-api/understanding-logpush-api/#destination). **Modifying this attribute will force creation of a new resource.**
	DestinationConf *string `json:"destinationConf,omitempty" tf:"destination_conf,omitempty"`

	// (String) The ID of this resource.
	ID *string `json:"id,omitempty" tf:"id,omitempty"`

	// (String) The filename of the ownership challenge which contains the contents required for Logpush Job creation.
	// The filename of the ownership challenge which contains the contents required for Logpush Job creation.
	OwnershipChallengeFilename *string `json:"ownershipChallengeFilename,omitempty" tf:"ownership_challenge_filename,omitempty"`

	// (String) The zone identifier to target for the resource. Must provide only one of account_id, zone_id.
	// The zone identifier to target for the resource. Must provide only one of `account_id`, `zone_id`.
	ZoneID *string `json:"zoneId,omitempty" tf:"zone_id,omitempty"`
}

func (*OwnershipChallengeObservation) DeepCopy

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OwnershipChallengeObservation.

func (*OwnershipChallengeObservation) DeepCopyInto

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type OwnershipChallengeParameters

type OwnershipChallengeParameters struct {

	// (String) The account identifier to target for the resource. Must provide only one of account_id, zone_id.
	// The account identifier to target for the resource. Must provide only one of `account_id`, `zone_id`.
	// +crossplane:generate:reference:type=github.com/milkpirate/provider-cloudflare/apis/account/v1alpha1.Account
	// +kubebuilder:validation:Optional
	AccountID *string `json:"accountId,omitempty" tf:"account_id,omitempty"`

	// Reference to an Account in account to populate accountId.
	// +kubebuilder:validation:Optional
	AccountIDRef *v1.Reference `json:"accountIdRef,omitempty" tf:"-"`

	// Selector for an Account in account to populate accountId.
	// +kubebuilder:validation:Optional
	AccountIDSelector *v1.Selector `json:"accountIdSelector,omitempty" tf:"-"`

	// (String) Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See Logpush destination documentation. Modifying this attribute will force creation of a new resource.
	// Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. See [Logpush destination documentation](https://developers.cloudflare.com/logs/logpush/logpush-configuration-api/understanding-logpush-api/#destination). **Modifying this attribute will force creation of a new resource.**
	// +kubebuilder:validation:Optional
	DestinationConf *string `json:"destinationConf,omitempty" tf:"destination_conf,omitempty"`

	// (String) The zone identifier to target for the resource. Must provide only one of account_id, zone_id.
	// The zone identifier to target for the resource. Must provide only one of `account_id`, `zone_id`.
	// +crossplane:generate:reference:type=github.com/milkpirate/provider-cloudflare/apis/zone/v1alpha1.Zone
	// +kubebuilder:validation:Optional
	ZoneID *string `json:"zoneId,omitempty" tf:"zone_id,omitempty"`

	// Reference to a Zone in zone to populate zoneId.
	// +kubebuilder:validation:Optional
	ZoneIDRef *v1.Reference `json:"zoneIdRef,omitempty" tf:"-"`

	// Selector for a Zone in zone to populate zoneId.
	// +kubebuilder:validation:Optional
	ZoneIDSelector *v1.Selector `json:"zoneIdSelector,omitempty" tf:"-"`
}
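
The comments above note that exactly one of account_id and zone_id must be provided. That mutual-exclusivity check can be sketched as (exactlyOne is an illustrative helper, not part of this package):

```go
package main

import "fmt"

// exactlyOne reports whether precisely one of accountID and zoneID is set,
// mirroring the "must provide only one of account_id, zone_id" constraint.
func exactlyOne(accountID, zoneID *string) bool {
	return (accountID != nil) != (zoneID != nil)
}

func main() {
	acc := "abc123"
	fmt.Println(exactlyOne(&acc, nil)) // true: only accountID set
	fmt.Println(exactlyOne(nil, nil))  // false: neither set
}
```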

func (*OwnershipChallengeParameters) DeepCopy

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OwnershipChallengeParameters.

func (*OwnershipChallengeParameters) DeepCopyInto

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type OwnershipChallengeSpec

type OwnershipChallengeSpec struct {
	v1.ResourceSpec `json:",inline"`
	ForProvider     OwnershipChallengeParameters `json:"forProvider"`
	// THIS IS A BETA FIELD. It will be honored
	// unless the Management Policies feature flag is disabled.
	// InitProvider holds the same fields as ForProvider, with the exception
	// of Identifier and other resource reference fields. The fields that are
	// in InitProvider are merged into ForProvider when the resource is created.
	// The same fields are also added to the terraform ignore_changes hook, to
	// avoid updating them after creation. This is useful for fields that are
	// required on creation, but we do not desire to update them after creation,
	// for example because an external controller is managing them, like an
	// autoscaler.
	InitProvider OwnershipChallengeInitParameters `json:"initProvider,omitempty"`
}

OwnershipChallengeSpec defines the desired state of OwnershipChallenge

func (*OwnershipChallengeSpec) DeepCopy

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OwnershipChallengeSpec.

func (*OwnershipChallengeSpec) DeepCopyInto

func (in *OwnershipChallengeSpec) DeepCopyInto(out *OwnershipChallengeSpec)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type OwnershipChallengeStatus

type OwnershipChallengeStatus struct {
	v1.ResourceStatus `json:",inline"`
	AtProvider        OwnershipChallengeObservation `json:"atProvider,omitempty"`
}

OwnershipChallengeStatus defines the observed state of OwnershipChallenge.

func (*OwnershipChallengeStatus) DeepCopy

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OwnershipChallengeStatus.

func (*OwnershipChallengeStatus) DeepCopyInto

func (in *OwnershipChallengeStatus) DeepCopyInto(out *OwnershipChallengeStatus)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
