Package v1beta1

Version: v1.72.0
Published: Jan 25, 2022 License: Apache-2.0 Imports: 6 Imported by: 0

Documentation

Overview

Generate deepcopy object for logging/v1beta1 API group

Package v1beta1 contains API Schema definitions for the logging v1beta1 API group.

+k8s:openapi-gen=true
+k8s:deepcopy-gen=package,register
+k8s:conversion-gen=github.com/GoogleCloudPlatform/k8s-config-connector/pkg/apis/logging
+k8s:defaulter-gen=TypeMeta
+groupName=logging.cnrm.cloud.google.com

Index

Constants

This section is empty.

Variables

View Source
var (
	// SchemeGroupVersion is the group version used to register these objects.
	SchemeGroupVersion = schema.GroupVersion{Group: "logging.cnrm.cloud.google.com", Version: "v1beta1"}

	// SchemeBuilder is used to add go types to the GroupVersionKind scheme.
	SchemeBuilder = &scheme.Builder{GroupVersion: SchemeGroupVersion}

	// AddToScheme is a global function that registers this API group & version to a scheme
	AddToScheme = SchemeBuilder.AddToScheme

	LoggingLogBucketGVK = schema.GroupVersionKind{
		Group:   SchemeGroupVersion.Group,
		Version: SchemeGroupVersion.Version,
		Kind:    reflect.TypeOf(LoggingLogBucket{}).Name(),
	}

	LoggingLogExclusionGVK = schema.GroupVersionKind{
		Group:   SchemeGroupVersion.Group,
		Version: SchemeGroupVersion.Version,
		Kind:    reflect.TypeOf(LoggingLogExclusion{}).Name(),
	}

	LoggingLogMetricGVK = schema.GroupVersionKind{
		Group:   SchemeGroupVersion.Group,
		Version: SchemeGroupVersion.Version,
		Kind:    reflect.TypeOf(LoggingLogMetric{}).Name(),
	}

	LoggingLogSinkGVK = schema.GroupVersionKind{
		Group:   SchemeGroupVersion.Group,
		Version: SchemeGroupVersion.Version,
		Kind:    reflect.TypeOf(LoggingLogSink{}).Name(),
	}
)
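
The GVK variables and AddToScheme can be used to register this API group with a client scheme. A minimal sketch, assuming the package is imported from github.com/GoogleCloudPlatform/k8s-config-connector/pkg/apis/logging/v1beta1 (the path implied by the conversion-gen tag above):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"

	loggingv1beta1 "github.com/GoogleCloudPlatform/k8s-config-connector/pkg/apis/logging/v1beta1"
)

func main() {
	// Register the logging.cnrm.cloud.google.com/v1beta1 types with a new scheme.
	s := runtime.NewScheme()
	if err := loggingv1beta1.AddToScheme(s); err != nil {
		panic(err)
	}

	// The GVK variables identify the registered kinds, e.g. for unstructured clients.
	fmt.Println(loggingv1beta1.LoggingLogBucketGVK.Kind) // LoggingLogBucket
}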

Functions

This section is empty.

Types

type LoggingLogBucket added in v1.72.0

type LoggingLogBucket struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   LoggingLogBucketSpec   `json:"spec,omitempty"`
	Status LoggingLogBucketStatus `json:"status,omitempty"`
}

LoggingLogBucket is the Schema for the logging API +k8s:openapi-gen=true

func (*LoggingLogBucket) DeepCopy added in v1.72.0

func (in *LoggingLogBucket) DeepCopy() *LoggingLogBucket

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogBucket.

func (*LoggingLogBucket) DeepCopyInto added in v1.72.0

func (in *LoggingLogBucket) DeepCopyInto(out *LoggingLogBucket)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*LoggingLogBucket) DeepCopyObject added in v1.72.0

func (in *LoggingLogBucket) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
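
Because objects of these types are often read from shared caches (e.g. controller-runtime informers), a common pattern is to DeepCopy before mutating. A short sketch; the variable names and field values are illustrative only:

// bucket is a *LoggingLogBucket previously read from a cache or informer.
locked := true
updated := bucket.DeepCopy() // never mutate the cached object in place
updated.Spec.Locked = &locked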

type LoggingLogBucketList added in v1.72.0

type LoggingLogBucketList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []LoggingLogBucket `json:"items"`
}

LoggingLogBucketList contains a list of LoggingLogBucket

func (*LoggingLogBucketList) DeepCopy added in v1.72.0

func (in *LoggingLogBucketList) DeepCopy() *LoggingLogBucketList

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogBucketList.

func (*LoggingLogBucketList) DeepCopyInto added in v1.72.0

func (in *LoggingLogBucketList) DeepCopyInto(out *LoggingLogBucketList)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*LoggingLogBucketList) DeepCopyObject added in v1.72.0

func (in *LoggingLogBucketList) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

type LoggingLogBucketSpec added in v1.72.0

type LoggingLogBucketSpec struct {
	/* The BillingAccount that this resource belongs to. Only one of [billingAccountRef, folderRef, organizationRef, projectRef] may be specified. */
	// +optional
	BillingAccountRef *v1alpha1.ResourceRef `json:"billingAccountRef,omitempty"`

	/* Describes this bucket. */
	// +optional
	Description *string `json:"description,omitempty"`

	/* The Folder that this resource belongs to. Only one of [billingAccountRef, folderRef, organizationRef, projectRef] may be specified. */
	// +optional
	FolderRef *v1alpha1.ResourceRef `json:"folderRef,omitempty"`

	/* The location of the resource. The supported locations are: global, us-central1, us-east1, us-west1, asia-east1, europe-west1. */
	Location string `json:"location"`

	/* Whether the bucket has been locked. The retention period on a locked bucket may not be changed. Locked buckets may only be deleted if they are empty. */
	// +optional
	Locked *bool `json:"locked,omitempty"`

	/* The Organization that this resource belongs to. Only one of [billingAccountRef, folderRef, organizationRef, projectRef] may be specified. */
	// +optional
	OrganizationRef *v1alpha1.ResourceRef `json:"organizationRef,omitempty"`

	/* The Project that this resource belongs to. Only one of [billingAccountRef, folderRef, organizationRef, projectRef] may be specified. */
	// +optional
	ProjectRef *v1alpha1.ResourceRef `json:"projectRef,omitempty"`

	/* Immutable. Optional. The name of the resource. Used for creation and acquisition. When unset, the value of `metadata.name` is used as the default. */
	// +optional
	ResourceID *string `json:"resourceID,omitempty"`

	/* Logs will be retained by default for this amount of time, after which they will automatically be deleted. The minimum retention period is 1 day. If this value is set to zero at bucket creation time, the default time of 30 days will be used. */
	// +optional
	RetentionDays *int `json:"retentionDays,omitempty"`
}
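
As an illustration only, a LoggingLogBucket could be constructed in Go roughly as follows. The ResourceRef import path and its Name field are assumptions about the referenced v1alpha1 package, and the project and bucket names are hypothetical:

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/GoogleCloudPlatform/k8s-config-connector/pkg/apis/k8s/v1alpha1"
	loggingv1beta1 "github.com/GoogleCloudPlatform/k8s-config-connector/pkg/apis/logging/v1beta1"
)

func newLogBucket() *loggingv1beta1.LoggingLogBucket {
	description := "Audit log bucket"
	retentionDays := 90
	return &loggingv1beta1.LoggingLogBucket{
		ObjectMeta: metav1.ObjectMeta{Name: "audit-bucket", Namespace: "default"},
		Spec: loggingv1beta1.LoggingLogBucketSpec{
			// Only one of billingAccountRef, folderRef, organizationRef, projectRef may be set.
			ProjectRef:    &v1alpha1.ResourceRef{Name: "my-project"}, // hypothetical project
			Location:      "global",
			Description:   &description,
			RetentionDays: &retentionDays,
		},
	}
}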

func (*LoggingLogBucketSpec) DeepCopy added in v1.72.0

func (in *LoggingLogBucketSpec) DeepCopy() *LoggingLogBucketSpec

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogBucketSpec.

func (*LoggingLogBucketSpec) DeepCopyInto added in v1.72.0

func (in *LoggingLogBucketSpec) DeepCopyInto(out *LoggingLogBucketSpec)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LoggingLogBucketStatus added in v1.72.0

type LoggingLogBucketStatus struct {
	/* Conditions represent the latest available observations of the
	   LoggingLogBucket's current state. */
	Conditions []v1alpha1.Condition `json:"conditions,omitempty"`
	/* Output only. The creation timestamp of the bucket. This is not set for any of the default buckets. */
	CreateTime string `json:"createTime,omitempty"`
	/* Output only. The bucket lifecycle state. Possible values: LIFECYCLE_STATE_UNSPECIFIED, ACTIVE, DELETE_REQUESTED */
	LifecycleState string `json:"lifecycleState,omitempty"`
	/* ObservedGeneration is the generation of the resource that was most recently observed by the Config Connector controller. If this is equal to metadata.generation, then that means that the current reported status reflects the most recent desired state of the resource. */
	ObservedGeneration int `json:"observedGeneration,omitempty"`
	/* Output only. The last update timestamp of the bucket. */
	UpdateTime string `json:"updateTime,omitempty"`
}
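
Consumers typically gate on the status before relying on the underlying resource. A sketch of a readiness check, assuming the v1alpha1 Condition type exposes Type and Status fields and that the Config Connector controller reports a "Ready" condition:

func isReady(b *loggingv1beta1.LoggingLogBucket) bool {
	// The status is only meaningful once it reflects the latest spec.
	if b.Status.ObservedGeneration != int(b.Generation) {
		return false
	}
	for _, c := range b.Status.Conditions {
		if c.Type == "Ready" && c.Status == "True" {
			return true
		}
	}
	return false
}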

func (*LoggingLogBucketStatus) DeepCopy added in v1.72.0

func (in *LoggingLogBucketStatus) DeepCopy() *LoggingLogBucketStatus

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogBucketStatus.

func (*LoggingLogBucketStatus) DeepCopyInto added in v1.72.0

func (in *LoggingLogBucketStatus) DeepCopyInto(out *LoggingLogBucketStatus)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LoggingLogExclusion added in v1.52.0

type LoggingLogExclusion struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   LoggingLogExclusionSpec   `json:"spec,omitempty"`
	Status LoggingLogExclusionStatus `json:"status,omitempty"`
}

LoggingLogExclusion is the Schema for the logging API +k8s:openapi-gen=true

func (*LoggingLogExclusion) DeepCopy added in v1.52.0

func (in *LoggingLogExclusion) DeepCopy() *LoggingLogExclusion

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogExclusion.

func (*LoggingLogExclusion) DeepCopyInto added in v1.52.0

func (in *LoggingLogExclusion) DeepCopyInto(out *LoggingLogExclusion)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*LoggingLogExclusion) DeepCopyObject added in v1.52.0

func (in *LoggingLogExclusion) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

type LoggingLogExclusionList added in v1.52.0

type LoggingLogExclusionList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []LoggingLogExclusion `json:"items"`
}

LoggingLogExclusionList contains a list of LoggingLogExclusion

func (*LoggingLogExclusionList) DeepCopy added in v1.52.0

func (in *LoggingLogExclusionList) DeepCopy() *LoggingLogExclusionList

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogExclusionList.

func (*LoggingLogExclusionList) DeepCopyInto added in v1.52.0

func (in *LoggingLogExclusionList) DeepCopyInto(out *LoggingLogExclusionList)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*LoggingLogExclusionList) DeepCopyObject added in v1.52.0

func (in *LoggingLogExclusionList) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

type LoggingLogExclusionSpec added in v1.52.0

type LoggingLogExclusionSpec struct {
	/* The BillingAccount that this resource belongs to. Only one of [projectRef, folderRef, organizationRef, billingAccountRef] may be specified. */
	// +optional
	BillingAccountRef *v1alpha1.ResourceRef `json:"billingAccountRef,omitempty"`

	/* Optional. A description of this exclusion. */
	// +optional
	Description *string `json:"description,omitempty"`

	/* Optional. If set to True, then this exclusion is disabled and it does not exclude any log entries. You can update an exclusion to change the value of this field. */
	// +optional
	Disabled *bool `json:"disabled,omitempty"`

	/* Required. An [advanced logs filter](https://cloud.google.com/logging/docs/view/advanced-queries) that matches the log entries to be excluded. By using the [sample function](https://cloud.google.com/logging/docs/view/advanced-queries#sample), you can exclude less than 100% of the matching log entries. For example, the following query matches 99% of low-severity log entries from Google Cloud Storage buckets: `"resource.type=gcs_bucket severity<ERROR sample(insertId, 0.99)"` */
	Filter string `json:"filter"`

	/* The Folder that this resource belongs to. Only one of [projectRef, folderRef, organizationRef, billingAccountRef] may be specified. */
	// +optional
	FolderRef *v1alpha1.ResourceRef `json:"folderRef,omitempty"`

	/* The Organization that this resource belongs to. Only one of [projectRef, folderRef, organizationRef, billingAccountRef] may be specified. */
	// +optional
	OrganizationRef *v1alpha1.ResourceRef `json:"organizationRef,omitempty"`

	/* The Project that this resource belongs to. Only one of [projectRef, folderRef, organizationRef, billingAccountRef] may be specified. */
	// +optional
	ProjectRef *v1alpha1.ResourceRef `json:"projectRef,omitempty"`

	/* Immutable. Optional. The name of the resource. Used for creation and acquisition. When unset, the value of `metadata.name` is used as the default. */
	// +optional
	ResourceID *string `json:"resourceID,omitempty"`
}
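
A sketch of a project-scoped exclusion, reusing the sample filter from the field description above; imports follow the LoggingLogBucket sketch and the project name is hypothetical:

exclusion := &loggingv1beta1.LoggingLogExclusion{
	ObjectMeta: metav1.ObjectMeta{Name: "drop-gcs-low-severity", Namespace: "default"},
	Spec: loggingv1beta1.LoggingLogExclusionSpec{
		ProjectRef: &v1alpha1.ResourceRef{Name: "my-project"},
		// Exclude 99% of low-severity log entries from Cloud Storage buckets.
		Filter: `resource.type=gcs_bucket severity<ERROR sample(insertId, 0.99)`,
	},
}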

func (*LoggingLogExclusionSpec) DeepCopy added in v1.52.0

func (in *LoggingLogExclusionSpec) DeepCopy() *LoggingLogExclusionSpec

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogExclusionSpec.

func (*LoggingLogExclusionSpec) DeepCopyInto added in v1.52.0

func (in *LoggingLogExclusionSpec) DeepCopyInto(out *LoggingLogExclusionSpec)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LoggingLogExclusionStatus added in v1.52.0

type LoggingLogExclusionStatus struct {
	/* Conditions represent the latest available observations of the
	   LoggingLogExclusion's current state. */
	Conditions []v1alpha1.Condition `json:"conditions,omitempty"`
	/* Output only. The creation timestamp of the exclusion. This field may not be present for older exclusions. */
	CreateTime string `json:"createTime,omitempty"`
	/* ObservedGeneration is the generation of the resource that was most recently observed by the Config Connector controller. If this is equal to metadata.generation, then that means that the current reported status reflects the most recent desired state of the resource. */
	ObservedGeneration int `json:"observedGeneration,omitempty"`
	/* Output only. The last update timestamp of the exclusion. This field may not be present for older exclusions. */
	UpdateTime string `json:"updateTime,omitempty"`
}

func (*LoggingLogExclusionStatus) DeepCopy added in v1.52.0

func (in *LoggingLogExclusionStatus) DeepCopy() *LoggingLogExclusionStatus

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogExclusionStatus.

func (*LoggingLogExclusionStatus) DeepCopyInto added in v1.52.0

func (in *LoggingLogExclusionStatus) DeepCopyInto(out *LoggingLogExclusionStatus)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LoggingLogMetric added in v1.71.0

type LoggingLogMetric struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   LoggingLogMetricSpec   `json:"spec,omitempty"`
	Status LoggingLogMetricStatus `json:"status,omitempty"`
}

LoggingLogMetric is the Schema for the logging API +k8s:openapi-gen=true

func (*LoggingLogMetric) DeepCopy added in v1.71.0

func (in *LoggingLogMetric) DeepCopy() *LoggingLogMetric

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogMetric.

func (*LoggingLogMetric) DeepCopyInto added in v1.71.0

func (in *LoggingLogMetric) DeepCopyInto(out *LoggingLogMetric)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*LoggingLogMetric) DeepCopyObject added in v1.71.0

func (in *LoggingLogMetric) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

type LoggingLogMetricList added in v1.71.0

type LoggingLogMetricList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []LoggingLogMetric `json:"items"`
}

LoggingLogMetricList contains a list of LoggingLogMetric

func (*LoggingLogMetricList) DeepCopy added in v1.71.0

func (in *LoggingLogMetricList) DeepCopy() *LoggingLogMetricList

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogMetricList.

func (*LoggingLogMetricList) DeepCopyInto added in v1.71.0

func (in *LoggingLogMetricList) DeepCopyInto(out *LoggingLogMetricList)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*LoggingLogMetricList) DeepCopyObject added in v1.71.0

func (in *LoggingLogMetricList) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

type LoggingLogMetricSpec added in v1.71.0

type LoggingLogMetricSpec struct {
	/* Optional. The `bucket_options` are required when the logs-based metric is using a DISTRIBUTION value type and it describes the bucket boundaries used to create a histogram of the extracted values. */
	// +optional
	BucketOptions *LogmetricBucketOptions `json:"bucketOptions,omitempty"`

	/* Optional. A description of this metric, which is used in documentation. The maximum length of the description is 8000 characters. */
	// +optional
	Description *string `json:"description,omitempty"`

	/* Optional. If set to True, then this metric is disabled and it does not generate any points. */
	// +optional
	Disabled *bool `json:"disabled,omitempty"`

	/* Required. An [advanced logs filter](https://cloud.google.com/logging/docs/view/advanced_filters) which is used to match log entries. Example: "resource.type=gae_app AND severity>=ERROR" The maximum length of the filter is 20000 characters. */
	Filter string `json:"filter"`

	/* Optional. A map from a label key string to an extractor expression which is used to extract data from a log entry field and assign as the label value. Each label key specified in the LabelDescriptor must have an associated extractor expression in this map. The syntax of the extractor expression is the same as for the `value_extractor` field. The extracted value is converted to the type defined in the label descriptor. If either the extraction or the type conversion fails, the label will have a default value. The default value for a string label is an empty string, for an integer label it is 0, and for a boolean label it is `false`. Note that there are upper bounds on the maximum number of labels and the number of active time series that are allowed in a project. */
	// +optional
	LabelExtractors map[string]string `json:"labelExtractors,omitempty"`

	/* Optional. The metric descriptor associated with the logs-based metric. If unspecified, it uses a default metric descriptor with a DELTA metric kind, INT64 value type, with no labels and a unit of "1". Such a metric counts the number of log entries matching the `filter` expression. The `name`, `type`, and `description` fields in the `metric_descriptor` are output only, and is constructed using the `name` and `description` field in the LogMetric. To create a logs-based metric that records a distribution of log values, a DELTA metric kind with a DISTRIBUTION value type must be used along with a `value_extractor` expression in the LogMetric. Each label in the metric descriptor must have a matching label name as the key and an extractor expression as the value in the `label_extractors` map. The `metric_kind` and `value_type` fields in the `metric_descriptor` cannot be updated once initially configured. New labels can be added in the `metric_descriptor`, but existing labels cannot be modified except for their description. */
	// +optional
	MetricDescriptor *LogmetricMetricDescriptor `json:"metricDescriptor,omitempty"`

	/* The Project that this resource belongs to. */
	ProjectRef v1alpha1.ResourceRef `json:"projectRef"`

	/* Immutable. Optional. The name of the resource. Used for creation and acquisition. When unset, the value of `metadata.name` is used as the default. */
	// +optional
	ResourceID *string `json:"resourceID,omitempty"`

	/* Optional. A `value_extractor` is required when using a distribution logs-based metric to extract the values to record from a log entry. Two functions are supported for value extraction: `EXTRACT(field)` or `REGEXP_EXTRACT(field, regex)`. The arguments are: 1. field: The name of the log entry field from which the value is to be extracted. 2. regex: A regular expression using the Google RE2 syntax (https://github.com/google/re2/wiki/Syntax) with a single capture group to extract data from the specified log entry field. The value of the field is converted to a string before applying the regex. It is an error to specify a regex that does not include exactly one capture group. The result of the extraction must be convertible to a double type, as the distribution always records double values. If either the extraction or the conversion to double fails, then those values are not recorded in the distribution. Example: `REGEXP_EXTRACT(jsonPayload.request, ".*quantity=(\d+).*")` */
	// +optional
	ValueExtractor *string `json:"valueExtractor,omitempty"`
}
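
A minimal counter metric only needs projectRef and filter; with no metricDescriptor, the default DELTA/INT64 descriptor counts matching log entries. A sketch using the filter example from the field description above (imports as in the LoggingLogBucket sketch, project name hypothetical):

metric := &loggingv1beta1.LoggingLogMetric{
	ObjectMeta: metav1.ObjectMeta{Name: "gae-error-count", Namespace: "default"},
	Spec: loggingv1beta1.LoggingLogMetricSpec{
		// Note that projectRef is a required value here, not a pointer.
		ProjectRef: v1alpha1.ResourceRef{Name: "my-project"},
		Filter:     `resource.type=gae_app AND severity>=ERROR`,
	},
}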

func (*LoggingLogMetricSpec) DeepCopy added in v1.71.0

func (in *LoggingLogMetricSpec) DeepCopy() *LoggingLogMetricSpec

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogMetricSpec.

func (*LoggingLogMetricSpec) DeepCopyInto added in v1.71.0

func (in *LoggingLogMetricSpec) DeepCopyInto(out *LoggingLogMetricSpec)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LoggingLogMetricStatus added in v1.71.0

type LoggingLogMetricStatus struct {
	/* Conditions represent the latest available observations of the
	   LoggingLogMetric's current state. */
	Conditions []v1alpha1.Condition `json:"conditions,omitempty"`
	/* Output only. The creation timestamp of the metric. This field may not be present for older metrics. */
	CreateTime string `json:"createTime,omitempty"`
	/*  */
	MetricDescriptor LogmetricMetricDescriptorStatus `json:"metricDescriptor,omitempty"`
	/* ObservedGeneration is the generation of the resource that was most recently observed by the Config Connector controller. If this is equal to metadata.generation, then that means that the current reported status reflects the most recent desired state of the resource. */
	ObservedGeneration int `json:"observedGeneration,omitempty"`
	/* Output only. The last update timestamp of the metric. This field may not be present for older metrics. */
	UpdateTime string `json:"updateTime,omitempty"`
}

func (*LoggingLogMetricStatus) DeepCopy added in v1.71.0

func (in *LoggingLogMetricStatus) DeepCopy() *LoggingLogMetricStatus

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogMetricStatus.

func (*LoggingLogMetricStatus) DeepCopyInto added in v1.71.0

func (in *LoggingLogMetricStatus) DeepCopyInto(out *LoggingLogMetricStatus)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LoggingLogSink

type LoggingLogSink struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   LoggingLogSinkSpec   `json:"spec,omitempty"`
	Status LoggingLogSinkStatus `json:"status,omitempty"`
}

LoggingLogSink is the Schema for the logging API +k8s:openapi-gen=true

func (*LoggingLogSink) DeepCopy

func (in *LoggingLogSink) DeepCopy() *LoggingLogSink

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogSink.

func (*LoggingLogSink) DeepCopyInto

func (in *LoggingLogSink) DeepCopyInto(out *LoggingLogSink)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*LoggingLogSink) DeepCopyObject

func (in *LoggingLogSink) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

type LoggingLogSinkList

type LoggingLogSinkList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []LoggingLogSink `json:"items"`
}

LoggingLogSinkList contains a list of LoggingLogSink

func (*LoggingLogSinkList) DeepCopy

func (in *LoggingLogSinkList) DeepCopy() *LoggingLogSinkList

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogSinkList.

func (*LoggingLogSinkList) DeepCopyInto

func (in *LoggingLogSinkList) DeepCopyInto(out *LoggingLogSinkList)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*LoggingLogSinkList) DeepCopyObject

func (in *LoggingLogSinkList) DeepCopyObject() runtime.Object

DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

type LoggingLogSinkSpec

type LoggingLogSinkSpec struct {
	/* Options that affect sinks exporting data to BigQuery. */
	// +optional
	BigqueryOptions *LogsinkBigqueryOptions `json:"bigqueryOptions,omitempty"`

	/* A description of this sink. The maximum length of the description is 8000 characters. */
	// +optional
	Description *string `json:"description,omitempty"`

	/*  */
	Destination LogsinkDestination `json:"destination"`

	/* If set to True, then this sink is disabled and it does not export any log entries. */
	// +optional
	Disabled *bool `json:"disabled,omitempty"`

	/* Log entries that match any of the exclusion filters will not be exported. If a log entry is matched by both filter and one of exclusion_filters it will not be exported. */
	// +optional
	Exclusions []LogsinkExclusions `json:"exclusions,omitempty"`

	/* The filter to apply when exporting logs. Only log entries that match the filter are exported. */
	// +optional
	Filter *string `json:"filter,omitempty"`

	/* The folder in which to create the sink. Only one of projectRef,
	folderRef, or organizationRef may be specified. */
	// +optional
	FolderRef *v1alpha1.ResourceRef `json:"folderRef,omitempty"`

	/* Immutable. Whether or not to include children organizations in the sink export. If true, logs associated with child projects are also exported; otherwise only logs relating to the provided organization are included. */
	// +optional
	IncludeChildren *bool `json:"includeChildren,omitempty"`

	/* The organization in which to create the sink. Only one of projectRef,
	folderRef, or organizationRef may be specified. */
	// +optional
	OrganizationRef *v1alpha1.ResourceRef `json:"organizationRef,omitempty"`

	/* The project in which to create the sink. Only one of projectRef,
	folderRef, or organizationRef may be specified. */
	// +optional
	ProjectRef *v1alpha1.ResourceRef `json:"projectRef,omitempty"`

	/* Immutable. Optional. The name of the resource. Used for creation and acquisition. When unset, the value of `metadata.name` is used as the default. */
	// +optional
	ResourceID *string `json:"resourceID,omitempty"`

	/* Immutable. Whether or not to create a unique identity associated with this sink. If false (the default), then the writer_identity used is serviceAccount:cloud-logs@system.gserviceaccount.com. If true, then a unique service account is created and used for this sink. If you wish to publish logs across projects, you must set unique_writer_identity to true. */
	// +optional
	UniqueWriterIdentity *bool `json:"uniqueWriterIdentity,omitempty"`
}
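
A sketch of a project-level sink that exports error entries to a Cloud Storage bucket; the referenced project and bucket names are hypothetical, and imports follow the LoggingLogBucket sketch:

filter := "severity>=ERROR"
uniqueIdentity := true
sink := &loggingv1beta1.LoggingLogSink{
	ObjectMeta: metav1.ObjectMeta{Name: "errors-to-gcs", Namespace: "default"},
	Spec: loggingv1beta1.LoggingLogSinkSpec{
		ProjectRef: &v1alpha1.ResourceRef{Name: "my-project"},
		Destination: loggingv1beta1.LogsinkDestination{
			StorageBucketRef: &v1alpha1.ResourceRef{Name: "my-log-archive"},
		},
		Filter:               &filter,
		UniqueWriterIdentity: &uniqueIdentity, // request a dedicated writer identity
	},
}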

func (*LoggingLogSinkSpec) DeepCopy

func (in *LoggingLogSinkSpec) DeepCopy() *LoggingLogSinkSpec

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogSinkSpec.

func (*LoggingLogSinkSpec) DeepCopyInto

func (in *LoggingLogSinkSpec) DeepCopyInto(out *LoggingLogSinkSpec)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LoggingLogSinkStatus

type LoggingLogSinkStatus struct {
	/* Conditions represent the latest available observations of the
	   LoggingLogSink's current state. */
	Conditions []v1alpha1.Condition `json:"conditions,omitempty"`
	/* ObservedGeneration is the generation of the resource that was most recently observed by the Config Connector controller. If this is equal to metadata.generation, then that means that the current reported status reflects the most recent desired state of the resource. */
	ObservedGeneration int `json:"observedGeneration,omitempty"`
	/* The identity associated with this sink. This identity must be granted write access to the configured destination. */
	WriterIdentity string `json:"writerIdentity,omitempty"`
}

func (*LoggingLogSinkStatus) DeepCopy

func (in *LoggingLogSinkStatus) DeepCopy() *LoggingLogSinkStatus

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LoggingLogSinkStatus.

func (*LoggingLogSinkStatus) DeepCopyInto

func (in *LoggingLogSinkStatus) DeepCopyInto(out *LoggingLogSinkStatus)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogmetricBucketOptions added in v1.71.0

type LogmetricBucketOptions struct {
	/* The explicit buckets. */
	// +optional
	ExplicitBuckets *LogmetricExplicitBuckets `json:"explicitBuckets,omitempty"`

	/* The exponential buckets. */
	// +optional
	ExponentialBuckets *LogmetricExponentialBuckets `json:"exponentialBuckets,omitempty"`

	/* The linear bucket. */
	// +optional
	LinearBuckets *LogmetricLinearBuckets `json:"linearBuckets,omitempty"`
}
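
Normally exactly one of the bucket layouts is set. A sketch of linear buckets for a distribution-valued metric, using the LogmetricLinearBuckets fields defined below:

numFiniteBuckets := 10
offset := 0.0
width := 100.0
bucketOptions := &loggingv1beta1.LogmetricBucketOptions{
	// 10 finite buckets of width 100 starting at 0, plus the implicit underflow/overflow buckets.
	LinearBuckets: &loggingv1beta1.LogmetricLinearBuckets{
		NumFiniteBuckets: &numFiniteBuckets,
		Offset:           &offset,
		Width:            &width,
	},
}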

func (*LogmetricBucketOptions) DeepCopy added in v1.71.0

func (in *LogmetricBucketOptions) DeepCopy() *LogmetricBucketOptions

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogmetricBucketOptions.

func (*LogmetricBucketOptions) DeepCopyInto added in v1.71.0

func (in *LogmetricBucketOptions) DeepCopyInto(out *LogmetricBucketOptions)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogmetricExplicitBuckets added in v1.71.0

type LogmetricExplicitBuckets struct {
	/* The values must be monotonically increasing. */
	// +optional
	Bounds []float64 `json:"bounds,omitempty"`
}

func (*LogmetricExplicitBuckets) DeepCopy added in v1.71.0

func (in *LogmetricExplicitBuckets) DeepCopy() *LogmetricExplicitBuckets

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogmetricExplicitBuckets.

func (*LogmetricExplicitBuckets) DeepCopyInto added in v1.71.0

func (in *LogmetricExplicitBuckets) DeepCopyInto(out *LogmetricExplicitBuckets)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogmetricExponentialBuckets added in v1.71.0

type LogmetricExponentialBuckets struct {
	/* Must be greater than 1. */
	// +optional
	GrowthFactor *float64 `json:"growthFactor,omitempty"`

	/* Must be greater than 0. */
	// +optional
	NumFiniteBuckets *int `json:"numFiniteBuckets,omitempty"`

	/* Must be greater than 0. */
	// +optional
	Scale *float64 `json:"scale,omitempty"`
}

func (*LogmetricExponentialBuckets) DeepCopy added in v1.71.0

func (in *LogmetricExponentialBuckets) DeepCopy() *LogmetricExponentialBuckets

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogmetricExponentialBuckets.

func (*LogmetricExponentialBuckets) DeepCopyInto added in v1.71.0

func (in *LogmetricExponentialBuckets) DeepCopyInto(out *LogmetricExponentialBuckets)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogmetricLabels added in v1.71.0

type LogmetricLabels struct {
	/* A human-readable description for the label. */
	// +optional
	Description *string `json:"description,omitempty"`

	/* The label key. */
	// +optional
	Key *string `json:"key,omitempty"`

	/* The type of data that can be assigned to the label. Possible values: STRING, BOOL, INT64, DOUBLE, DISTRIBUTION, MONEY */
	// +optional
	ValueType *string `json:"valueType,omitempty"`
}

func (*LogmetricLabels) DeepCopy added in v1.71.0

func (in *LogmetricLabels) DeepCopy() *LogmetricLabels

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogmetricLabels.

func (*LogmetricLabels) DeepCopyInto added in v1.71.0

func (in *LogmetricLabels) DeepCopyInto(out *LogmetricLabels)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogmetricLinearBuckets added in v1.71.0

type LogmetricLinearBuckets struct {
	/* Must be greater than 0. */
	// +optional
	NumFiniteBuckets *int `json:"numFiniteBuckets,omitempty"`

	/* Lower bound of the first bucket. */
	// +optional
	Offset *float64 `json:"offset,omitempty"`

	/* Must be greater than 0. */
	// +optional
	Width *float64 `json:"width,omitempty"`
}

func (*LogmetricLinearBuckets) DeepCopy added in v1.71.0

func (in *LogmetricLinearBuckets) DeepCopy() *LogmetricLinearBuckets

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogmetricLinearBuckets.

func (*LogmetricLinearBuckets) DeepCopyInto added in v1.71.0

func (in *LogmetricLinearBuckets) DeepCopyInto(out *LogmetricLinearBuckets)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogmetricMetadata added in v1.71.0

type LogmetricMetadata struct {
	/* The delay of data points caused by ingestion. Data points older than this age are guaranteed to be ingested and available to be read, excluding data loss due to errors. */
	// +optional
	IngestDelay *string `json:"ingestDelay,omitempty"`

	/* The sampling period of metric data points. For metrics which are written periodically, consecutive data points are stored at this time interval, excluding data loss due to errors. Metrics with a higher granularity have a smaller sampling period. */
	// +optional
	SamplePeriod *string `json:"samplePeriod,omitempty"`
}

func (*LogmetricMetadata) DeepCopy added in v1.71.0

func (in *LogmetricMetadata) DeepCopy() *LogmetricMetadata

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogmetricMetadata.

func (*LogmetricMetadata) DeepCopyInto added in v1.71.0

func (in *LogmetricMetadata) DeepCopyInto(out *LogmetricMetadata)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogmetricMetricDescriptor added in v1.71.0

type LogmetricMetricDescriptor struct {
	/* A concise name for the metric, which can be displayed in user interfaces. Use sentence case without an ending period, for example "Request count". This field is optional but it is recommended to be set for any metrics associated with user-visible concepts, such as Quota. */
	// +optional
	DisplayName *string `json:"displayName,omitempty"`

	/* The set of labels that can be used to describe a specific instance of this metric type. For example, the `appengine.googleapis.com/http/server/response_latencies` metric type has a label for the HTTP response code, `response_code`, so you can look at latencies for successful responses or just for responses that failed. */
	// +optional
	Labels []LogmetricLabels `json:"labels,omitempty"`

	/* Optional. The launch stage of the metric definition. Possible values: UNIMPLEMENTED, PRELAUNCH, EARLY_ACCESS, ALPHA, BETA, GA, DEPRECATED */
	// +optional
	LaunchStage *string `json:"launchStage,omitempty"`

	/* Optional. Metadata which can be used to guide usage of the metric. */
	// +optional
	Metadata *LogmetricMetadata `json:"metadata,omitempty"`

	/* Whether the metric records instantaneous values, changes to a value, etc. Some combinations of `metric_kind` and `value_type` might not be supported. Possible values: GAUGE, DELTA, CUMULATIVE */
	// +optional
	MetricKind *string `json:"metricKind,omitempty"`

	/* The units in which the metric value is reported. It is only applicable if the `value_type` is `INT64`, `DOUBLE`, or `DISTRIBUTION`. The `unit` defines the representation of the stored metric values. Different systems might scale the values to be more easily displayed (so a value of `0.02kBy` _might_ be displayed as `20By`, and a value of `3523kBy` _might_ be displayed as `3.5MBy`). However, if the `unit` is `kBy`, then the value of the metric is always in thousands of bytes, no matter how it might be displayed. If you want a custom metric to record the exact number of CPU-seconds used by a job, you can create an `INT64 CUMULATIVE` metric whose `unit` is `s{CPU}` (or equivalently `1s{CPU}` or just `s`). If the job uses 12,005 CPU-seconds, then the value is written as `12005`. Alternatively, if you want a custom metric to record data in a more granular way, you can create a `DOUBLE CUMULATIVE` metric whose `unit` is `ks{CPU}`, and then write the value `12.005` (which is `12005/1000`), or use `Kis{CPU}` and write `11.723` (which is `12005/1024`). The supported units are a subset of [The Unified Code for Units of Measure](https://unitsofmeasure.org/ucum.html) standard: **Basic units (UNIT)** * `bit` bit * `By` byte * `s` second * `min` minute * `h` hour * `d` day * `1` dimensionless **Prefixes (PREFIX)** * `k` kilo (10^3) * `M` mega (10^6) * `G` giga (10^9) * `T` tera (10^12) * `P` peta (10^15) * `E` exa (10^18) * `Z` zetta (10^21) * `Y` yotta (10^24) * `m` milli (10^-3) * `u` micro (10^-6) * `n` nano (10^-9) * `p` pico (10^-12) * `f` femto (10^-15) * `a` atto (10^-18) * `z` zepto (10^-21) * `y` yocto (10^-24) * `Ki` kibi (2^10) * `Mi` mebi (2^20) * `Gi` gibi (2^30) * `Ti` tebi (2^40) * `Pi` pebi (2^50) **Grammar** The grammar also includes these connectors: * `/` division or ratio (as an infix operator). For examples, `kBy/{email}` or `MiBy/10ms` (although you should almost never have `/s` in a metric `unit`; rates should always be computed at query time from the underlying cumulative or delta value). * `.` multiplication or composition (as an infix operator). For examples, `GBy.d` or `k{watt}.h`. The grammar for a unit is as follows: Expression = Component: { "." Component } { "/" Component } ; Component = ( [ PREFIX ] UNIT | "%" ) [ Annotation ] | Annotation | "1" ; Annotation = "{" NAME "}" ; Notes: * `Annotation` is just a comment if it follows a `UNIT`. If the annotation is used alone, then the unit is equivalent to `1`. For examples, `{request}/s == 1/s`, `By{transmitted}/s == By/s`. * `NAME` is a sequence of non-blank printable ASCII characters not containing `{` or `}`. * `1` represents a unitary [dimensionless unit](https://en.wikipedia.org/wiki/Dimensionless_quantity) of 1, such as in `1/s`. It is typically used when none of the basic units are appropriate. For example, "new users per day" can be represented as `1/d` or `{new-users}/d` (and a metric value `5` would mean "5 new users). Alternatively, "thousands of page views per day" would be represented as `1000/d` or `k1/d` or `k{page_views}/d` (and a metric value of `5.3` would mean "5300 page views per day"). * `%` represents dimensionless value of 1/100, and annotates values giving a percentage (so the metric values are typically in the range of 0..100, and a metric value `3` means "3 percent"). * `10^2.%` indicates a metric contains a ratio, typically in the range 0..1, that will be multiplied by 100 and displayed as a percentage (so a metric value `0.03` means "3 percent"). */
	// +optional
	Unit *string `json:"unit,omitempty"`

	/* Whether the measurement is an integer, a floating-point number, etc. Some combinations of `metric_kind` and `value_type` might not be supported. Possible values: STRING, BOOL, INT64, DOUBLE, DISTRIBUTION, MONEY */
	// +optional
	ValueType *string `json:"valueType,omitempty"`
}

func (*LogmetricMetricDescriptor) DeepCopy added in v1.71.0

func (in *LogmetricMetricDescriptor) DeepCopy() *LogmetricMetricDescriptor

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogmetricMetricDescriptor.

func (*LogmetricMetricDescriptor) DeepCopyInto added in v1.71.0

func (in *LogmetricMetricDescriptor) DeepCopyInto(out *LogmetricMetricDescriptor)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogmetricMetricDescriptorStatus added in v1.71.0

type LogmetricMetricDescriptorStatus struct {
	/* A detailed description of the metric, which can be used in documentation. */
	Description string `json:"description,omitempty"`

	/* Read-only. If present, then a time series, which is identified partially by a metric type and a MonitoredResourceDescriptor, that is associated with this metric type can only be associated with one of the monitored resource types listed here. */
	MonitoredResourceTypes []string `json:"monitoredResourceTypes,omitempty"`

	/* The resource name of the metric descriptor. */
	Name string `json:"name,omitempty"`

	/* The metric type, including its DNS name prefix. The type is not URL-encoded. All user-defined metric types have the DNS name `custom.googleapis.com` or `external.googleapis.com`. Metric types should use a natural hierarchical grouping. For example: "custom.googleapis.com/invoice/paid/amount" "external.googleapis.com/prometheus/up" "appengine.googleapis.com/http/server/response_latencies" */
	Type string `json:"type,omitempty"`
}

func (*LogmetricMetricDescriptorStatus) DeepCopy added in v1.71.0

func (in *LogmetricMetricDescriptorStatus) DeepCopy() *LogmetricMetricDescriptorStatus

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogmetricMetricDescriptorStatus.

func (*LogmetricMetricDescriptorStatus) DeepCopyInto added in v1.71.0

func (in *LogmetricMetricDescriptorStatus) DeepCopyInto(out *LogmetricMetricDescriptorStatus)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogsinkBigqueryOptions added in v1.45.0

type LogsinkBigqueryOptions struct {
	/* Whether to use BigQuery's partition tables. By default, Logging creates dated tables based on the log entries' timestamps, e.g. syslog_20170523. With partitioned tables the date suffix is no longer present and special query syntax has to be used instead. In both cases, tables are sharded based on UTC timezone. */
	UsePartitionedTables bool `json:"usePartitionedTables"`
}

func (*LogsinkBigqueryOptions) DeepCopy added in v1.45.0

func (in *LogsinkBigqueryOptions) DeepCopy() *LogsinkBigqueryOptions

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogsinkBigqueryOptions.

func (*LogsinkBigqueryOptions) DeepCopyInto added in v1.45.0

func (in *LogsinkBigqueryOptions) DeepCopyInto(out *LogsinkBigqueryOptions)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogsinkDestination added in v1.45.0

type LogsinkDestination struct {
	/*  */
	// +optional
	BigQueryDatasetRef *v1alpha1.ResourceRef `json:"bigQueryDatasetRef,omitempty"`

	/*  */
	// +optional
	PubSubTopicRef *v1alpha1.ResourceRef `json:"pubSubTopicRef,omitempty"`

	/*  */
	// +optional
	StorageBucketRef *v1alpha1.ResourceRef `json:"storageBucketRef,omitempty"`
}

func (*LogsinkDestination) DeepCopy added in v1.45.0

func (in *LogsinkDestination) DeepCopy() *LogsinkDestination

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogsinkDestination.

func (*LogsinkDestination) DeepCopyInto added in v1.45.0

func (in *LogsinkDestination) DeepCopyInto(out *LogsinkDestination)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type LogsinkExclusions added in v1.45.0

type LogsinkExclusions struct {
	/* A description of this exclusion. */
	// +optional
	Description *string `json:"description,omitempty"`

	/* If set to True, then this exclusion is disabled and it does not exclude any log entries. */
	// +optional
	Disabled *bool `json:"disabled,omitempty"`

	/* An advanced logs filter that matches the log entries to be excluded. By using the sample function, you can exclude less than 100% of the matching log entries. */
	Filter string `json:"filter"`

	/* A client-assigned identifier, such as "load-balancer-exclusion". Identifiers are limited to 100 characters and can include only letters, digits, underscores, hyphens, and periods. First character has to be alphanumeric. */
	Name string `json:"name"`
}

func (*LogsinkExclusions) DeepCopy added in v1.45.0

func (in *LogsinkExclusions) DeepCopy() *LogsinkExclusions

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LogsinkExclusions.

func (*LogsinkExclusions) DeepCopyInto added in v1.45.0

func (in *LogsinkExclusions) DeepCopyInto(out *LogsinkExclusions)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
