package output
v3.0.0-...-cc23366
Published: Nov 29, 2024 License: Apache-2.0 Imports: 5 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type AzureBlob

type AzureBlob struct {
	// Azure Storage account name
	AccountName string `json:"accountName"`
	// Specify the Azure Storage Shared Key to authenticate against the storage account
	SharedKey *plugins.Secret `json:"sharedKey"`
	// Name of the container that will contain the blobs
	ContainerName string `json:"containerName"`
	// Specify the desired blob type. Must be `appendblob` or `blockblob`
	// +kubebuilder:validation:Enum:=appendblob;blockblob
	BlobType string `json:"blobType,omitempty"`
	// Creates the container if it does not already exist on the remote service.
	// +kubebuilder:validation:Enum:=on;off
	AutoCreateContainer string `json:"autoCreateContainer,omitempty"`
	// Optional path to store the blobs.
	Path string `json:"path,omitempty"`
	// Optional toggle to use an Azure emulator
	// +kubebuilder:validation:Enum:=on;off
	EmulatorMode string `json:"emulatorMode,omitempty"`
	// HTTP Service of the endpoint (if using EmulatorMode)
	Endpoint string `json:"endpoint,omitempty"`
	// Enable/Disable TLS Encryption. Azure services require TLS to be enabled.
	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
}

AzureBlob is the Azure Blob output plugin, which allows you to ingest your records into Azure Blob Storage. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/azure_blob**
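
Every output type on this page follows the same pattern: Name returns the Fluent Bit plugin name and Params flattens the struct into the key/value pairs of the rendered [OUTPUT] section, resolving any plugins.Secret fields through the SecretLoader. A minimal sketch follows; the import paths are an assumption based on the module layout, and the SecretLoader is assumed to be supplied by the operator:

package main

import (
	"fmt"

	// Assumed import paths; adjust to the actual module layout.
	"github.com/fluent/fluent-operator/v3/apis/fluentbit/v1alpha2/plugins"
	"github.com/fluent/fluent-operator/v3/apis/fluentbit/v1alpha2/plugins/output"
)

// renderAzureBlob renders an AzureBlob section into key/value parameters.
func renderAzureBlob(sl plugins.SecretLoader) error {
	ab := &output.AzureBlob{
		AccountName:         "mystorageaccount", // example values throughout
		ContainerName:       "fluent-bit-logs",
		BlobType:            "appendblob",
		AutoCreateContainer: "on",
		// SharedKey omitted: it is a *plugins.Secret resolved via sl.
	}
	kvs, err := ab.Params(sl)
	if err != nil {
		return err
	}
	fmt.Println(ab.Name(), kvs) // plugin name plus the rendered parameters
	return nil
}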

func (*AzureBlob) DeepCopy

func (in *AzureBlob) DeepCopy() *AzureBlob

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AzureBlob.

func (*AzureBlob) DeepCopyInto

func (in *AzureBlob) DeepCopyInto(out *AzureBlob)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*AzureBlob) Name

func (_ *AzureBlob) Name() string

Name implements the Section() method.

func (*AzureBlob) Params

func (o *AzureBlob) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type AzureLogAnalytics

type AzureLogAnalytics struct {
	// Customer ID or Workspace ID
	CustomerID *plugins.Secret `json:"customerID"`
	// Specify the primary or the secondary client authentication key
	SharedKey *plugins.Secret `json:"sharedKey"`
	// Name of the event type.
	LogType string `json:"logType,omitempty"`
	// Set a record key that will populate 'logtype'. If the key is found, it takes precedence over the LogType value.
	LogTypeKey string `json:"logTypeKey,omitempty"`
	// Specify the name of the key where the timestamp is stored.
	TimeKey string `json:"timeKey,omitempty"`
	// If set, overrides the timeKey value with the `time-generated-field` HTTP header value.
	TimeGenerated *bool `json:"timeGenerated,omitempty"`
}

AzureLogAnalytics is the Azure Log Analytics output plugin, which allows you to ingest your records into an Azure Log Analytics Workspace. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/azure**

func (*AzureLogAnalytics) DeepCopy

func (in *AzureLogAnalytics) DeepCopy() *AzureLogAnalytics

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AzureLogAnalytics.

func (*AzureLogAnalytics) DeepCopyInto

func (in *AzureLogAnalytics) DeepCopyInto(out *AzureLogAnalytics)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*AzureLogAnalytics) Name

func (_ *AzureLogAnalytics) Name() string

Name implements the Section() method.

func (*AzureLogAnalytics) Params

func (a *AzureLogAnalytics) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type CloudWatch

type CloudWatch struct {
	// AWS Region
	Region string `json:"region"`
	// Name of Cloudwatch Log Group to send log records to
	LogGroupName string `json:"logGroupName,omitempty"`
	// Template for Log Group name, overrides LogGroupName if set.
	LogGroupTemplate string `json:"logGroupTemplate,omitempty"`
	// The name of the CloudWatch Log Stream to send log records to
	LogStreamName string `json:"logStreamName,omitempty"`
	// Prefix for the Log Stream name. Not compatible with LogStreamName setting
	LogStreamPrefix string `json:"logStreamPrefix,omitempty"`
	// Template for Log Stream name. Overrides LogStreamPrefix and LogStreamName if set.
	LogStreamTemplate string `json:"logStreamTemplate,omitempty"`
	// If set, only the value of the key will be sent to CloudWatch
	LogKey string `json:"logKey,omitempty"`
	// Optional parameter to tell CloudWatch the format of the data
	LogFormat string `json:"logFormat,omitempty"`
	// Role ARN to use for cross-account access
	RoleArn string `json:"roleArn,omitempty"`
	// Automatically create the log group. Defaults to False.
	AutoCreateGroup *bool `json:"autoCreateGroup,omitempty"`
	// Number of days logs are retained for
	// +kubebuilder:validation:Enum:=1;3;5;7;14;30;60;90;120;150;180;365;400;545;731;1827;3653
	LogRetentionDays *int32 `json:"logRetentionDays,omitempty"`
	// Custom endpoint for CloudWatch logs API
	Endpoint string `json:"endpoint,omitempty"`
	// Optional string to represent the CloudWatch namespace.
	MetricNamespace string `json:"metricNamespace,omitempty"`
	// Optional lists of lists for dimension keys to be added to all metrics. Use comma-separated strings
	// for one list of dimensions and semicolon-separated strings for a list of lists of dimensions.
	MetricDimensions string `json:"metricDimensions,omitempty"`
	// Specify a custom STS endpoint for the AWS STS API
	StsEndpoint string `json:"stsEndpoint,omitempty"`
	// Automatically retry failed requests to CloudWatch once. Defaults to True.
	AutoRetryRequests *bool `json:"autoRetryRequests,omitempty"`
	// Specify an external ID for the STS API.
	ExternalID string `json:"externalID,omitempty"`
}

CloudWatch is the AWS CloudWatch output plugin, which allows you to ingest your records into AWS CloudWatch. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch**
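
A hedged sketch (example values; imports as in the AzureBlob sketch above). When a template field is set it overrides the corresponding static name, so here each pod gets its own log stream; the record-accessor template value is an assumption:

var (
	cwAutoCreate = true
	cwRetention  = int32(30) // must be one of the allowed retention values
)

var cloudWatchExample = &output.CloudWatch{
	Region:            "us-east-1",
	LogGroupName:      "/fluent-bit/cluster",
	LogStreamTemplate: "$kubernetes['pod_name']", // overrides LogStreamName/Prefix
	AutoCreateGroup:   &cwAutoCreate,
	LogRetentionDays:  &cwRetention,
}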

func (*CloudWatch) DeepCopy

func (in *CloudWatch) DeepCopy() *CloudWatch

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CloudWatch.

func (*CloudWatch) DeepCopyInto

func (in *CloudWatch) DeepCopyInto(out *CloudWatch)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*CloudWatch) Name

func (_ *CloudWatch) Name() string

Name implements the Section() method.

func (*CloudWatch) Params

func (o *CloudWatch) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type DataDog

type DataDog struct {
	// Host is the Datadog server where you are sending your logs.
	Host string `json:"host,omitempty"`
	// TLS controls whether to use the end-to-end security communications (TLS/SSL) protocol.
	// Datadog recommends setting this to on.
	TLS *bool `json:"tls,omitempty"`
	// Compress the payload in GZIP format.
	// Datadog supports and recommends setting this to gzip.
	Compress string `json:"compress,omitempty"`
	// Your Datadog API key.
	APIKey *plugins.Secret `json:"apikey,omitempty"`
	// Specify an HTTP Proxy.
	Proxy string `json:"proxy,omitempty"`
	// To activate the remapping, specify the configuration flag provider.
	Provider string `json:"provider,omitempty"`
	// Date key name for output.
	JSONDateKey string `json:"json_date_key,omitempty"`
	// If enabled, a tag is appended to the output. The key name is set by the tag_key property.
	IncludeTagKey *bool `json:"include_tag_key,omitempty"`
	// The key name of the tag. If include_tag_key is false, this property is ignored.
	TagKey string `json:"tag_key,omitempty"`
	// The human-readable name for your service generating the logs.
	Service string `json:"dd_service,omitempty"`
	// A human-readable name for the underlying technology of your service.
	Source string `json:"dd_source,omitempty"`
	// The tags you want to assign to your logs in Datadog.
	Tags string `json:"dd_tags,omitempty"`
	// By default, the plugin searches for the key 'log' and remaps its value to the key 'message'. If this property is set, the plugin will search for that key instead.
	MessageKey string `json:"dd_message_key,omitempty"`
}

The DataDog output plugin allows you to ingest your logs into Datadog. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/datadog**

func (*DataDog) DeepCopy

func (in *DataDog) DeepCopy() *DataDog

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataDog.

func (*DataDog) DeepCopyInto

func (in *DataDog) DeepCopyInto(out *DataDog)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*DataDog) Name

func (_ *DataDog) Name() string

func (*DataDog) Params

func (s *DataDog) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type Elasticsearch

type Elasticsearch struct {
	// IP address or hostname of the target Elasticsearch instance
	Host string `json:"host,omitempty"`
	// TCP port of the target Elasticsearch instance
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// Elasticsearch accepts new data on HTTP query path "/_bulk".
	// But it is also possible to serve Elasticsearch behind a reverse proxy on a subpath.
	// This option defines such path on the fluent-bit side.
	// It simply adds a path prefix in the indexing HTTP POST URI.
	Path string `json:"path,omitempty"`
	// Set payload compression mechanism. Option available is 'gzip'
	// +kubebuilder:validation:Enum=gzip
	Compress string `json:"compress,omitempty"`
	// Specify the buffer size used to read the response from the Elasticsearch HTTP service.
	// This option is useful for debugging purposes where it is required to read full responses;
	// note that the response size grows depending on the number of records inserted.
	// To set an unlimited amount of memory, set this value to False;
	// otherwise the value must conform to the Unit Size specification.
	// +kubebuilder:validation:Pattern:="^\\d+(k|K|KB|kb|m|M|MB|mb|g|G|GB|gb)?$"
	BufferSize string `json:"bufferSize,omitempty"`
	// Newer versions of Elasticsearch allow setting up filters called pipelines.
	// This option defines which pipeline the database should use.
	// For performance reasons it is strongly suggested to do parsing
	// and filtering on the Fluent Bit side and avoid pipelines.
	Pipeline string `json:"pipeline,omitempty"`
	// Enable AWS Sigv4 Authentication for Amazon ElasticSearch Service.
	AWSAuth string `json:"awsAuth,omitempty"`
	// AWSAuthSecret Enable AWS Sigv4 Authentication for Amazon ElasticSearch Service.
	AWSAuthSecret *plugins.Secret `json:"awsAuthSecret,omitempty"`
	// Specify the AWS region for Amazon ElasticSearch Service.
	AWSRegion string `json:"awsRegion,omitempty"`
	// Specify the custom sts endpoint to be used with STS API for Amazon ElasticSearch Service.
	AWSSTSEndpoint string `json:"awsSTSEndpoint,omitempty"`
	// AWS IAM Role to assume to put records to your Amazon ES cluster.
	AWSRoleARN string `json:"awsRoleARN,omitempty"`
	// External ID for the AWS IAM Role specified with aws_role_arn.
	AWSExternalID string `json:"awsExternalID,omitempty"`
	// If you are using Elastic's Elasticsearch Service you can specify the cloud_id of the cluster running.
	CloudID string `json:"cloudID,omitempty"`
	// Specify the credentials to use to connect to Elastic's Elasticsearch Service running on Elastic Cloud.
	CloudAuth string `json:"cloudAuth,omitempty"`
	// CloudAuthSecret Specify the credentials to use to connect to Elastic's Elasticsearch Service running on Elastic Cloud.
	CloudAuthSecret *plugins.Secret `json:"cloudAuthSecret,omitempty"`
	// Optional username credential for Elastic X-Pack access
	HTTPUser *plugins.Secret `json:"httpUser,omitempty"`
	// Password for user defined in HTTP_User
	HTTPPasswd *plugins.Secret `json:"httpPassword,omitempty"`
	// Index name
	Index string `json:"index,omitempty"`
	// Type name
	Type string `json:"type,omitempty"`
	// Enable Logstash format compatibility.
	// This option takes a boolean value: True/False, On/Off
	LogstashFormat *bool `json:"logstashFormat,omitempty"`
	// When Logstash_Format is enabled, the Index name is composed using a prefix and the date,
	// e.g.: if Logstash_Prefix equals 'mydata', your index will become 'mydata-YYYY.MM.DD'.
	// The last string appended belongs to the date when the data is being generated.
	LogstashPrefix string `json:"logstashPrefix,omitempty"`
	// Time format (based on strftime) to generate the second part of the Index name.
	LogstashDateFormat string `json:"logstashDateFormat,omitempty"`
	// When Logstash_Format is enabled, each record will get a new timestamp field.
	// The Time_Key property defines the name of that field.
	TimeKey string `json:"timeKey,omitempty"`
	// When Logstash_Format is enabled, this property defines the format of the timestamp.
	TimeKeyFormat string `json:"timeKeyFormat,omitempty"`
	// When Logstash_Format is enabled, enabling this property sends nanosecond precision timestamps.
	TimeKeyNanos *bool `json:"timeKeyNanos,omitempty"`
	// When enabled, it appends the Tag name to the record.
	IncludeTagKey *bool `json:"includeTagKey,omitempty"`
	// When Include_Tag_Key is enabled, this property defines the key name for the tag.
	TagKey string `json:"tagKey,omitempty"`
	// When enabled, generate _id for outgoing records.
	// This prevents duplicate records when retrying ES.
	GenerateID *bool `json:"generateID,omitempty"`
	// If set, _id will be the value of the key from incoming record and Generate_ID option is ignored.
	IdKey string `json:"idKey,omitempty"`
	// Operation to use to write in bulk requests.
	WriteOperation string `json:"writeOperation,omitempty"`
	// When enabled, replace field name dots with underscore, required by Elasticsearch 2.0-2.3.
	ReplaceDots *bool `json:"replaceDots,omitempty"`
	// When enabled print the elasticsearch API calls to stdout (for diag only)
	TraceOutput *bool `json:"traceOutput,omitempty"`
	// When enabled print the elasticsearch API calls to stdout when elasticsearch returns an error
	TraceError *bool `json:"traceError,omitempty"`
	// Use current time for index generation instead of message record
	CurrentTimeIndex *bool `json:"currentTimeIndex,omitempty"`
	// Prefix keys with this string
	LogstashPrefixKey string `json:"logstashPrefixKey,omitempty"`
	// When enabled, mapping types are removed and the Type option is ignored. Types were deprecated in APIs in v7.0. This option is for v7.0 or later.
	SuppressTypeName string `json:"suppressTypeName,omitempty"`
	*plugins.TLS     `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
	// Limit the maximum number of Chunks in the filesystem for the current output logical destination.
	TotalLimitSize string `json:"totalLimitSize,omitempty"`
}

Elasticsearch is the es output plugin, which allows you to ingest your records into an Elasticsearch database. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch**
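
As a hedged sketch of the Logstash-format fields working together (example values; imports as in the AzureBlob sketch above), the following section writes to daily indices named mydata-YYYY.MM.DD:

var (
	esPort     = int32(9200)
	esLogstash = true
)

// Example values only; secret-backed fields (HTTPUser, HTTPPasswd) are omitted.
var esExample = &output.Elasticsearch{
	Host:               "elasticsearch.logging.svc",
	Port:               &esPort,
	LogstashFormat:     &esLogstash,
	LogstashPrefix:     "mydata",   // prefix of the generated index name
	LogstashDateFormat: "%Y.%m.%d", // strftime format for the date suffix
}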

func (*Elasticsearch) DeepCopy

func (in *Elasticsearch) DeepCopy() *Elasticsearch

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Elasticsearch.

func (*Elasticsearch) DeepCopyInto

func (in *Elasticsearch) DeepCopyInto(out *Elasticsearch)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Elasticsearch) Name

func (_ *Elasticsearch) Name() string

Name implements the Section() method.

func (*Elasticsearch) Params

func (es *Elasticsearch) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type File

type File struct {
	// Absolute directory path to store files. If not set, Fluent Bit will write the files in its own working directory.
	Path string `json:"path,omitempty"`
	// Set file name to store the records. If not set, the file name will be the tag associated with the records.
	File string `json:"file,omitempty"`
	// The format of the file content. See also Format section. Default: out_file.
	// +kubebuilder:validation:Enum:=out_file;plain;csv;ltsv;template
	Format string `json:"format,omitempty"`
	// The character to separate each pair. Applicable only if format is csv or ltsv.
	Delimiter string `json:"delimiter,omitempty"`
	// The character to separate each pair. Applicable only if format is ltsv.
	LabelDelimiter string `json:"labelDelimiter,omitempty"`
	// The format string. Applicable only if format is template.
	Template string `json:"template,omitempty"`
}

The file output plugin allows writing the data received through the input plugin to a file. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/file**

func (*File) DeepCopy

func (in *File) DeepCopy() *File

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new File.

func (*File) DeepCopyInto

func (in *File) DeepCopyInto(out *File)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*File) Name

func (_ *File) Name() string

func (*File) Params

func (f *File) Params(_ plugins.SecretLoader) (*params.KVs, error)

type Firehose

type Firehose struct {
	// The AWS region.
	Region string `json:"region"`
	// The name of the Kinesis Firehose Delivery stream that you want log records sent to.
	DeliveryStream string `json:"deliveryStream"`
	// Add the timestamp to the record under this key. By default, the timestamp from Fluent Bit will not be added to records sent to Kinesis.
	TimeKey *string `json:"timeKey,omitempty"`
	// strftime compliant format string for the timestamp; for example, %Y-%m-%dT%H:%M:%S. This option is used with time_key. You can also use %L for milliseconds and %f for microseconds. If you are using ECS FireLens, make sure you are running Amazon ECS Container Agent v1.42.0 or later, otherwise the timestamps associated with your container logs will only have second precision.
	TimeKeyFormat *string `json:"timeKeyFormat,omitempty"`
	// By default, the whole log record will be sent to Kinesis. If you specify a key name(s) with this option, then only those keys and values will be sent to Kinesis. For example, if you are using the Fluentd Docker log driver, you can specify data_keys log and only the log message will be sent to Kinesis. If you specify multiple keys, they should be comma delimited.
	DataKeys *string `json:"dataKeys,omitempty"`
	// By default, the whole log record will be sent to Firehose. If you specify a key name with this option, then only the value of that key will be sent to Firehose. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to Firehose.
	LogKey *string `json:"logKey,omitempty"`
	// ARN of an IAM role to assume (for cross account access).
	RoleARN *string `json:"roleARN,omitempty"`
	// Specify a custom endpoint for the Kinesis Firehose API.
	Endpoint *string `json:"endpoint,omitempty"`
	// Specify a custom endpoint for the STS API; used to assume your custom role provided with role_arn.
	STSEndpoint *string `json:"stsEndpoint,omitempty"`
	// Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues.
	AutoRetryRequests *bool `json:"autoRetryRequests,omitempty"`
}

The Firehose output plugin allows you to ingest your records into AWS Firehose. It uses the new high performance kinesis_firehose plugin (written in C) instead of the older firehose plugin (written in Go). The fluent-bit container must have the plugin installed.

https://docs.fluentbit.io/manual/pipeline/outputs/firehose
https://github.com/aws/amazon-kinesis-firehose-for-fluent-bit

func (*Firehose) DeepCopy

func (in *Firehose) DeepCopy() *Firehose

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Firehose.

func (*Firehose) DeepCopyInto

func (in *Firehose) DeepCopyInto(out *Firehose)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Firehose) Name

func (_ *Firehose) Name() string

Name implements the Section() method.

func (*Firehose) Params

func (l *Firehose) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type Forward

type Forward struct {
	// Target host where Fluent-Bit or Fluentd are listening for Forward messages.
	Host string `json:"host,omitempty"`
	// TCP Port of the target service.
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// Overwrite the tag as we transmit. This allows the receiving pipeline to start
	// fresh, or to attribute the source.
	Tag string `json:"tag,omitempty"`
	// Set timestamps in integer format; it enables compatibility mode for the Fluentd v0.12 series.
	TimeAsInteger *bool `json:"timeAsInteger,omitempty"`
	// Always send options (with "size"=count of messages)
	SendOptions *bool `json:"sendOptions,omitempty"`
	// Send "chunk"-option and wait for "ack" response from server.
	// Enables at-least-once and receiving server can control rate of traffic.
	// (Requires Fluentd v0.14.0+ server)
	RequireAckResponse *bool `json:"requireAckResponse,omitempty"`
	// A key string known by the remote Fluentd used for authorization.
	SharedKey string `json:"sharedKey,omitempty"`
	// Use this option to connect to Fluentd with a zero-length secret.
	EmptySharedKey *bool `json:"emptySharedKey,omitempty"`
	// Specify the username to present to a Fluentd server that enables user_auth.
	Username *plugins.Secret `json:"username,omitempty"`
	// Specify the password corresponding to the username.
	Password *plugins.Secret `json:"password,omitempty"`
	// Default value of the auto-generated certificate common name (CN).
	SelfHostname string `json:"selfHostname,omitempty"`
	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
}

Forward is the protocol used by Fluentd to route messages between peers. The forward output plugin provides interoperability between Fluent Bit and Fluentd. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/forward**
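
A hedged sketch of an at-least-once Forward section (example values; imports as above). RequireAckResponse makes Fluent Bit wait for the server's ack, and SharedKey must match the key configured on the receiving Fluentd:

var (
	fwPort = int32(24224)
	fwAck  = true
)

var forwardExample = &output.Forward{
	Host:               "fluentd.logging.svc", // example Fluentd service address
	Port:               &fwPort,
	Tag:                "from-fluent-bit", // overwrite the tag in transit
	RequireAckResponse: &fwAck,
	SharedKey:          "my-shared-key", // must match the remote Fluentd
}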

func (*Forward) DeepCopy

func (in *Forward) DeepCopy() *Forward

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Forward.

func (*Forward) DeepCopyInto

func (in *Forward) DeepCopyInto(out *Forward)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Forward) Name

func (_ *Forward) Name() string

func (*Forward) Params

func (f *Forward) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type Gelf

type Gelf struct {
	// IP address or hostname of the target Graylog server.
	Host string `json:"host,omitempty"`
	// The port that the target Graylog server is listening on.
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// The protocol to use (tls, tcp or udp).
	// +kubebuilder:validation:Enum:=tls;tcp;udp
	Mode string `json:"mode,omitempty"`
	// ShortMessageKey is the key to use as the short message.
	ShortMessageKey string `json:"shortMessageKey,omitempty"`
	// TimestampKey is the key whose value is used as the timestamp of the message.
	TimestampKey string `json:"timestampKey,omitempty"`
	// HostKey is the key whose value is used as the name of the host, source or application that sent this message.
	HostKey string `json:"hostKey,omitempty"`
	// FullMessageKey is the key to use as the long message that can, for example, contain a backtrace.
	FullMessageKey string `json:"fullMessageKey,omitempty"`
	// LevelKey is the key to be used as the log level.
	LevelKey string `json:"levelKey,omitempty"`
	// If transport protocol is udp, it sets the size of packets to be sent.
	PacketSize *int32 `json:"packetSize,omitempty"`
	// If transport protocol is udp, it defines if UDP packets should be compressed.
	Compress     *bool `json:"compress,omitempty"`
	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
}

The Gelf output plugin allows sending logs in GELF format directly to a Graylog input using the TLS, TCP or UDP protocols. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/gelf**

func (*Gelf) DeepCopy

func (in *Gelf) DeepCopy() *Gelf

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Gelf.

func (*Gelf) DeepCopyInto

func (in *Gelf) DeepCopyInto(out *Gelf)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Gelf) Name

func (_ *Gelf) Name() string

func (*Gelf) Params

func (g *Gelf) Params(sl plugins.SecretLoader) (*params.KVs, error)

type HTTP

type HTTP struct {
	// IP address or hostname of the target HTTP Server
	Host string `json:"host,omitempty"`
	// Basic Auth Username
	HTTPUser *plugins.Secret `json:"httpUser,omitempty"`
	// Basic Auth Password. Requires HTTP_User to be set
	HTTPPasswd *plugins.Secret `json:"httpPassword,omitempty"`
	// TCP port of the target HTTP Server
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// Specify an HTTP Proxy. The expected format of this value is http://host:port.
	// Note that https is not supported yet.
	Proxy string `json:"proxy,omitempty"`
	// Specify an optional HTTP URI for the target web server, e.g: /something
	Uri string `json:"uri,omitempty"`
	// Set payload compression mechanism. Option available is 'gzip'
	Compress string `json:"compress,omitempty"`
	// Specify the data format to be used in the HTTP request body; by default it uses msgpack.
	// Other supported formats are json, json_stream, json_lines and gelf.
	// +kubebuilder:validation:Enum:=msgpack;json;json_stream;json_lines;gelf
	Format string `json:"format,omitempty"`
	// Specify if duplicated headers are allowed.
	// If a duplicated header is found, the latest key/value set is preserved.
	AllowDuplicatedHeaders *bool `json:"allowDuplicatedHeaders,omitempty"`
	// Specify an optional HTTP header field for the original message tag.
	HeaderTag string `json:"headerTag,omitempty"`
	// Add an HTTP header key/value pair. Multiple headers can be set.
	Headers map[string]string `json:"headers,omitempty"`
	// Specify the name of the time key in the output record.
	// To disable the time key just set the value to false.
	JsonDateKey string `json:"jsonDateKey,omitempty"`
	// Specify the format of the date. Supported formats are double, epoch
	// and iso8601 (eg: 2018-05-30T09:39:52.000681Z)
	JsonDateFormat string `json:"jsonDateFormat,omitempty"`
	// Specify the key to use for timestamp in gelf format
	GelfTimestampKey string `json:"gelfTimestampKey,omitempty"`
	// Specify the key to use for the host in gelf format
	GelfHostKey string `json:"gelfHostKey,omitempty"`
	// Specify the key to use as the short message in gelf format
	GelfShortMessageKey string `json:"gelfShortMessageKey,omitempty"`
	// Specify the key to use for the full message in gelf format
	GelfFullMessageKey string `json:"gelfFullMessageKey,omitempty"`
	// Specify the key to use for the level in gelf format
	GelfLevelKey string `json:"gelfLevelKey,omitempty"`
	// The HTTP output plugin supports TLS/SSL; for more details about the properties available
	// and general configuration, please refer to the TLS/SSL section.
	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
}

The http output plugin allows you to flush your records into an HTTP endpoint. For now the functionality is pretty basic: it issues a POST request with the data records in MessagePack (or JSON) format. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/http**
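
A hedged sketch of a JSON-lines HTTP section (example values; imports as above):

var httpPort = int32(8080)

var httpExample = &output.HTTP{
	Host:   "collector.example.com", // hypothetical endpoint
	Port:   &httpPort,
	Uri:    "/ingest",
	Format: "json_lines", // one JSON record per line instead of msgpack
	Headers: map[string]string{
		"X-Environment": "prod", // hypothetical custom header
	},
}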

func (*HTTP) DeepCopy

func (in *HTTP) DeepCopy() *HTTP

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HTTP.

func (*HTTP) DeepCopyInto

func (in *HTTP) DeepCopyInto(out *HTTP)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*HTTP) Name

func (*HTTP) Name() string

Name implements the Section() method.

func (*HTTP) Params

func (h *HTTP) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type InfluxDB

type InfluxDB struct {
	// IP address or hostname of the target InfluxDB service.
	Host string `json:"host"`
	// TCP port of the target InfluxDB service.
	//  +kubebuilder:validation:Maximum=65535
	//  +kubebuilder:validation:Minimum=1
	Port *int32 `json:"port,omitempty"`
	// InfluxDB database name where records will be inserted.
	Database string `json:"database,omitempty"`
	// InfluxDB bucket name where records will be inserted - if specified, database is ignored and v2 of API is used
	Bucket string `json:"bucket,omitempty"`
	// InfluxDB organization name where the bucket is (v2 only)
	Org string `json:"org,omitempty"`
	// The name of the tag whose value is incremented for the consecutive simultaneous events.
	SequenceTag string `json:"sequenceTag,omitempty"`
	// Optional username for HTTP Basic Authentication
	HTTPUser *plugins.Secret `json:"httpUser,omitempty"`
	// Password for user defined in HTTP_User
	HTTPPasswd *plugins.Secret `json:"httpPassword,omitempty"`
	// Authentication token used with InfluxDB v2 - if specified, both HTTPUser and HTTPPasswd are ignored
	HTTPToken *plugins.Secret `json:"httpToken,omitempty"`
	// List of keys that need to be tagged.
	TagKeys []string `json:"tagKeys,omitempty"`
	// Automatically tag keys where value is string.
	AutoTags *bool `json:"autoTags,omitempty"`
	// Dynamically tag keys which are in the string array at Tags_List_Key key.
	TagsListEnabled *bool `json:"tagsListEnabled,omitempty"`
	// Key of the string array optionally contained within each log record that contains tag keys for that record
	TagsListKey  string `json:"tagListKey,omitempty"`
	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
}

The influxdb output plugin allows you to flush your records into an InfluxDB time series database. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/influxdb**
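
The v1/v2 selection described above (Bucket takes precedence over Database) can be made concrete with a hedged sketch; example values, imports as above:

var influxPort = int32(8086)

// v2 API: Bucket plus Org (and normally an HTTPToken secret); Database is ignored.
var influxV2 = &output.InfluxDB{
	Host:   "influxdb.monitoring.svc",
	Port:   &influxPort,
	Bucket: "logs",
	Org:    "my-org",
}

// v1 API: only Database is set.
var influxV1 = &output.InfluxDB{
	Host:     "influxdb.monitoring.svc",
	Port:     &influxPort,
	Database: "fluentbit",
}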

func (*InfluxDB) DeepCopy

func (in *InfluxDB) DeepCopy() *InfluxDB

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InfluxDB.

func (*InfluxDB) DeepCopyInto

func (in *InfluxDB) DeepCopyInto(out *InfluxDB)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*InfluxDB) Name

func (_ *InfluxDB) Name() string

Name implements the Section() method.

func (*InfluxDB) Params

func (o *InfluxDB) Params(sl plugins.SecretLoader) (*params.KVs, error)

type Kafka

type Kafka struct {
	// Specify data format, options available: json, msgpack.
	Format string `json:"format,omitempty"`
	// Optional key to store the message
	MessageKey string `json:"messageKey,omitempty"`
	// If set, the value of Message_Key_Field in the record will indicate the message key.
	// If not set nor found in the record, Message_Key will be used (if set).
	MessageKeyField string `json:"messageKeyField,omitempty"`
	// Set the key to store the record timestamp
	TimestampKey string `json:"timestampKey,omitempty"`
	// iso8601 or double
	TimestampFormat string `json:"timestampFormat,omitempty"`
	// Single or multiple list of Kafka Brokers, e.g: 192.168.1.3:9092, 192.168.1.4:9092.
	Brokers string `json:"brokers,omitempty"`
	// Single entry or list of topics separated by comma (,) that Fluent Bit will use to send messages to Kafka.
	// If only one topic is set, that one will be used for all records.
	// If instead multiple topics exist, the one set in the record by Topic_Key will be used.
	Topics string `json:"topics,omitempty"`
	// If multiple Topics exist, the value of Topic_Key in the record will indicate the topic to use.
	// E.g.: if Topic_Key is router and the record is {"key1": 123, "router": "route_2"},
	// Fluent Bit will use the topic route_2. Note that if the value of Topic_Key is not present in Topics,
	// then by default the first topic in the Topics list will be used.
	TopicKey string `json:"topicKey,omitempty"`
	// {property} can be any librdkafka properties
	Rdkafka map[string]string `json:"rdkafka,omitempty"`
	// Adds unknown topics (found in Topic_Key) to Topics, so only a default topic needs to be configured in Topics.
	DynamicTopic *bool `json:"dynamicTopic,omitempty"`
	// Fluent Bit queues data into the rdkafka library;
	// if for some reason the underlying library cannot flush the records, the queue might fill up, blocking new additions of records.
	// The queue_full_retries option sets the number of local retries to enqueue the data.
	// The default value is 10 times, and the interval between each retry is 1 second.
	// Setting the queue_full_retries value to 0 sets an unlimited number of retries.
	QueueFullRetries *int64 `json:"queueFullRetries,omitempty"`
}

The Kafka output plugin allows you to ingest your records into an Apache Kafka service. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/kafka**
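
A hedged sketch of dynamic topic routing via Topic_Key (example values; imports as above). A record carrying {"router": "audit"} would go to the audit topic since it is listed in Topics; otherwise the first topic is used:

var kafkaExample = &output.Kafka{
	Brokers:  "192.168.1.3:9092,192.168.1.4:9092",
	Topics:   "logs,audit", // first entry is the default topic
	TopicKey: "router",     // record key whose value selects the topic
	Format:   "json",
	Rdkafka: map[string]string{ // passed through as rdkafka.* properties
		"request.required.acks": "1",
	},
}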

func (*Kafka) DeepCopy

func (in *Kafka) DeepCopy() *Kafka

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Kafka.

func (*Kafka) DeepCopyInto

func (in *Kafka) DeepCopyInto(out *Kafka)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Kafka) Name

func (*Kafka) Name() string

func (*Kafka) Params

func (k *Kafka) Params(_ plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type Kinesis

type Kinesis struct {
	// The AWS region.
	Region string `json:"region"`
	// The name of the Kinesis Streams Delivery stream that you want log records sent to.
	Stream string `json:"stream"`
	// Add the timestamp to the record under this key. By default the timestamp from Fluent Bit will not be added to records sent to Kinesis.
	TimeKey string `json:"timeKey,omitempty"`
	// strftime compliant format string for the timestamp; for example, the default is '%Y-%m-%dT%H:%M:%S'. Supports millisecond precision with '%3N' and supports nanosecond precision with '%9N' and '%L'; for example, adding '%3N' to support millisecond '%Y-%m-%dT%H:%M:%S.%3N'. This option is used with time_key.
	TimeKeyFormat string `json:"timeKeyFormat,omitempty"`
	// By default, the whole log record will be sent to Kinesis. If you specify a key name with this option, then only the value of that key will be sent to Kinesis. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to Kinesis.
	LogKey string `json:"logKey,omitempty"`
	// ARN of an IAM role to assume (for cross account access).
	RoleARN string `json:"roleARN,omitempty"`
	// Specify a custom endpoint for the Kinesis API.
	Endpoint string `json:"endpoint,omitempty"`
	// Custom endpoint for the STS API.
	STSEndpoint string `json:"stsEndpoint,omitempty"`
	// Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. This option defaults to true.
	AutoRetryRequests *bool `json:"autoRetryRequests,omitempty"`
	// Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID.
	ExternalID string `json:"externalID,omitempty"`
}

The Kinesis output plugin allows you to ingest your records into AWS Kinesis. It uses the new high performance and highly efficient kinesis plugin called kinesis_streams instead of the older Golang Fluent Bit plugin released in 2019.

https://docs.fluentbit.io/manual/pipeline/outputs/kinesis
https://github.com/aws/amazon-kinesis-streams-for-fluent-bit

func (*Kinesis) DeepCopy

func (in *Kinesis) DeepCopy() *Kinesis

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Kinesis.

func (*Kinesis) DeepCopyInto

func (in *Kinesis) DeepCopyInto(out *Kinesis)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Kinesis) Name

func (*Kinesis) Name() string

Name implements the Section() method.

func (*Kinesis) Params

func (k *Kinesis) Params(_ plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type Loki

type Loki struct {
	// Loki hostname or IP address.
	Host string `json:"host"`
	// Loki TCP port
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// Specify a custom HTTP URI. It must start with a forward slash.
	Uri string `json:"uri,omitempty"`
	// Set HTTP basic authentication user name.
	HTTPUser *plugins.Secret `json:"httpUser,omitempty"`
	// Password for user defined in HTTP_User
	// Set HTTP basic authentication password
	HTTPPasswd *plugins.Secret `json:"httpPassword,omitempty"`
	// Set bearer token authentication token value.
	// Can be used as an alternative to HTTP basic authentication.
	BearerToken *plugins.Secret `json:"bearerToken,omitempty"`
	// Tenant ID used by default to push logs to Loki.
	// If omitted or empty it assumes Loki is running in single-tenant mode and no X-Scope-OrgID header is sent.
	TenantID *plugins.Secret `json:"tenantID,omitempty"`
	// Stream labels for the API request. It can be multiple comma-separated strings specifying key=value pairs.
	// In addition to fixed parameters, it also allows adding custom record keys (similar to the label_keys property).
	Labels []string `json:"labels,omitempty"`
	// Optional list of record keys that will be placed as stream labels.
	// This configuration property is for record keys only.
	LabelKeys []string `json:"labelKeys,omitempty"`
	// Specify the label map file path. The file defines how to extract labels from each record.
	LabelMapPath string `json:"labelMapPath,omitempty"`
	// Optional list of keys to remove.
	RemoveKeys []string `json:"removeKeys,omitempty"`
	// If set to true and after extracting labels only a single key remains, the log line sent to Loki will be the value of that key in line_format.
	// +kubebuilder:validation:Enum:=on;off
	DropSingleKey string `json:"dropSingleKey,omitempty"`
	// Format to use when flattening the record to a log line. Valid values are json or key_value.
	// If set to json, the log line sent to Loki will be the Fluent Bit record dumped as JSON.
	// If set to key_value, the log line will be each item in the record concatenated together (separated by a single space) in key=value format.
	// +kubebuilder:validation:Enum:=json;key_value
	LineFormat string `json:"lineFormat,omitempty"`
	// If set to true, it will add all Kubernetes labels to the Stream labels.
	// +kubebuilder:validation:Enum:=on;off
	AutoKubernetesLabels string `json:"autoKubernetesLabels,omitempty"`
	// Specify the name of the key from the original record that contains the Tenant ID.
	// The value of the key is set as the X-Scope-OrgID HTTP header. It is useful for setting the Tenant ID dynamically.
	TenantIDKey  string `json:"tenantIDKey,omitempty"`
	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
}

The loki output plugin allows you to ingest your records into a Loki service. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/loki**
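
A hedged sketch of label handling (example values; imports as above; the record-accessor syntax in LabelKeys is an assumption based on Fluent Bit's loki plugin):

var lokiPort = int32(3100)

var lokiExample = &output.Loki{
	Host:       "loki.logging.svc",
	Port:       &lokiPort,
	Labels:     []string{"job=fluent-bit"},                // fixed stream labels
	LabelKeys:  []string{"$kubernetes['namespace_name']"}, // record keys promoted to labels
	LineFormat: "json",
	RemoveKeys: []string{"kubernetes"}, // drop the key once its labels are extracted
}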

func (*Loki) DeepCopy

func (in *Loki) DeepCopy() *Loki

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Loki.

func (*Loki) DeepCopyInto

func (in *Loki) DeepCopyInto(out *Loki)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Loki) Name

func (_ *Loki) Name() string

Name implements the Section() method.

func (*Loki) Params

func (l *Loki) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type Null

type Null struct{}

The null output plugin just throws away events.

func (*Null) DeepCopy

func (in *Null) DeepCopy() *Null

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Null.

func (*Null) DeepCopyInto

func (in *Null) DeepCopyInto(out *Null)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Null) Name

func (_ *Null) Name() string

func (*Null) Params

func (_ *Null) Params(_ plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type OpenSearch

type OpenSearch struct {
	// IP address or hostname of the target OpenSearch instance, default `127.0.0.1`
	Host string `json:"host,omitempty"`
	// TCP port of the target OpenSearch instance, default `9200`
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// OpenSearch accepts new data on HTTP query path "/_bulk".
	// But it is also possible to serve OpenSearch behind a reverse proxy on a subpath.
	// This option defines such path on the fluent-bit side.
	// It simply adds a path prefix in the indexing HTTP POST URI.
	Path string `json:"path,omitempty"`
	// Specify the buffer size used to read the response from the OpenSearch HTTP service.
	// This option is useful for debugging purposes where it is required to read full responses;
	// note that the response size grows depending on the number of records inserted.
	// To set an unlimited amount of memory, set this value to False;
	// otherwise the value must conform to the Unit Size specification.
	// +kubebuilder:validation:Pattern:="^\\d+(k|K|KB|kb|m|M|MB|mb|g|G|GB|gb)?$"
	BufferSize string `json:"bufferSize,omitempty"`
	// OpenSearch allows setting up filters called pipelines.
	// This option defines which pipeline the database should use.
	// For performance reasons it is strongly suggested to do parsing
	// and filtering on the Fluent Bit side and avoid pipelines.
	Pipeline string `json:"pipeline,omitempty"`
	// Enable AWS Sigv4 Authentication for Amazon OpenSearch Service.
	AWSAuth string `json:"awsAuth,omitempty"`
	// Specify the AWS region for Amazon OpenSearch Service.
	AWSRegion string `json:"awsRegion,omitempty"`
	// Specify the custom sts endpoint to be used with STS API for Amazon OpenSearch Service.
	AWSSTSEndpoint string `json:"awsSTSEndpoint,omitempty"`
	// AWS IAM Role to assume to put records to your Amazon cluster.
	AWSRoleARN string `json:"awsRoleARN,omitempty"`
	// External ID for the AWS IAM Role specified with aws_role_arn.
	AWSExternalID string `json:"awsExternalID,omitempty"`
	// Optional username credential for access
	HTTPUser *plugins.Secret `json:"httpUser,omitempty"`
	// Password for user defined in HTTP_User
	HTTPPasswd *plugins.Secret `json:"httpPassword,omitempty"`
	// Index name
	Index string `json:"index,omitempty"`
	// Type name
	Type string `json:"type,omitempty"`
	// Enable Logstash format compatibility.
	// This option takes a boolean value: True/False, On/Off
	LogstashFormat *bool `json:"logstashFormat,omitempty"`
	// When Logstash_Format is enabled, the Index name is composed using a prefix and the date,
	// e.g.: if Logstash_Prefix equals 'mydata', your index will become 'mydata-YYYY.MM.DD'.
	// The last string appended belongs to the date when the data is being generated.
	LogstashPrefix string `json:"logstashPrefix,omitempty"`
	// Time format (based on strftime) to generate the second part of the Index name.
	LogstashDateFormat string `json:"logstashDateFormat,omitempty"`
	// When Logstash_Format is enabled, each record will get a new timestamp field.
	// The Time_Key property defines the name of that field.
	TimeKey string `json:"timeKey,omitempty"`
	// When Logstash_Format is enabled, this property defines the format of the timestamp.
	TimeKeyFormat string `json:"timeKeyFormat,omitempty"`
	// When Logstash_Format is enabled, enabling this property sends nanosecond precision timestamps.
	TimeKeyNanos *bool `json:"timeKeyNanos,omitempty"`
	// When enabled, it appends the Tag name to the record.
	IncludeTagKey *bool `json:"includeTagKey,omitempty"`
	// When Include_Tag_Key is enabled, this property defines the key name for the tag.
	TagKey string `json:"tagKey,omitempty"`
	// When enabled, generate _id for outgoing records.
	// This prevents duplicate records when retrying OpenSearch.
	GenerateID *bool `json:"generateID,omitempty"`
	// If set, _id will be the value of the key from incoming record and Generate_ID option is ignored.
	IdKey string `json:"idKey,omitempty"`
	// Operation to use to write in bulk requests.
	WriteOperation string `json:"writeOperation,omitempty"`
	// When enabled, replace field name dots with underscore, required by Elasticsearch 2.0-2.3.
	ReplaceDots *bool `json:"replaceDots,omitempty"`
	// When enabled print the elasticsearch API calls to stdout (for diag only)
	TraceOutput *bool `json:"traceOutput,omitempty"`
	// When enabled print the elasticsearch API calls to stdout when elasticsearch returns an error
	TraceError *bool `json:"traceError,omitempty"`
	// Use current time for index generation instead of message record
	CurrentTimeIndex *bool `json:"currentTimeIndex,omitempty"`
	// Prefix keys with this string
	LogstashPrefixKey string `json:"logstashPrefixKey,omitempty"`
	// When enabled, mapping types are removed and the Type option is ignored. Types were deprecated in APIs in v7.0. This option is for v7.0 or later.
	SuppressTypeName *bool `json:"suppressTypeName,omitempty"`
	// Enables dedicated thread(s) for this output. The default value is 2 since version 1.8.13; for previous versions it is 0.
	Workers      *int32 `json:"Workers,omitempty"`
	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
	// Limit the maximum number of Chunks in the filesystem for the current output logical destination.
	TotalLimitSize string `json:"totalLimitSize,omitempty"`
	// +kubebuilder:validation:Enum=gzip
	Compress string `json:"compress,omitempty"`
}

OpenSearch is the opensearch output plugin, which allows you to ingest your records into an OpenSearch database. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/opensearch**

func (*OpenSearch) DeepCopy

func (in *OpenSearch) DeepCopy() *OpenSearch

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OpenSearch.

func (*OpenSearch) DeepCopyInto

func (in *OpenSearch) DeepCopyInto(out *OpenSearch)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*OpenSearch) Name

func (_ *OpenSearch) Name() string

Name implements the Section() method.

func (*OpenSearch) Params

func (o *OpenSearch) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type OpenTelemetry

type OpenTelemetry struct {
	// IP address or hostname of the target HTTP Server, default `127.0.0.1`
	Host string `json:"host,omitempty"`
	// TCP port of the target HTTP server, default `80`
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// Optional username credential for access
	HTTPUser *plugins.Secret `json:"httpUser,omitempty"`
	// Password for user defined in HTTP_User
	HTTPPasswd *plugins.Secret `json:"httpPassword,omitempty"`
	// Specify an HTTP Proxy. The expected format of this value is http://HOST:PORT. Note that HTTPS is not currently supported.
	// It is recommended not to set this and to configure the HTTP proxy environment variables instead as they support both HTTP and HTTPS.
	Proxy string `json:"proxy,omitempty"`
	// Specify an optional HTTP URI for the target web server listening for metrics, e.g: /v1/metrics
	MetricsUri string `json:"metricsUri,omitempty"`
	// Specify an optional HTTP URI for the target web server listening for logs, e.g: /v1/logs
	LogsUri string `json:"logsUri,omitempty"`
	// Specify an optional HTTP URI for the target web server listening for traces, e.g: /v1/traces
	TracesUri string `json:"tracesUri,omitempty"`
	// Add an HTTP header key/value pair. Multiple headers can be set.
	Header map[string]string `json:"header,omitempty"`
	// Log the response payload within the Fluent Bit log.
	LogResponsePayload *bool `json:"logResponsePayload,omitempty"`
	// This allows you to add custom labels to all metrics exposed through the OpenTelemetry exporter. You may have multiple of these fields.
	AddLabel map[string]string `json:"addLabel,omitempty"`
	// If true, remaining unmatched keys are added as attributes.
	LogsBodyKeyAttributes *bool `json:"logsBodyKeyAttributes,omitempty"`
	// The log body key to look up in the log events body/message. Sets the Body field of the OpenTelemetry logs data model.
	LogsBodyKey  string `json:"logsBodyKey,omitempty"`
	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
}

The OpenTelemetry plugin allows you to take logs, metrics, and traces from Fluent Bit and submit them to an OpenTelemetry HTTP endpoint. **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/opentelemetry**
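
A hedged sketch pointing all three signals at an OTLP/HTTP collector (example values; imports as above; 4318 is the conventional OTLP/HTTP port, an assumption rather than a documented default here):

var otelPort = int32(4318)

var otelExample = &output.OpenTelemetry{
	Host:       "otel-collector.observability.svc",
	Port:       &otelPort,
	MetricsUri: "/v1/metrics",
	LogsUri:    "/v1/logs",
	TracesUri:  "/v1/traces",
}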

func (*OpenTelemetry) DeepCopy

func (in *OpenTelemetry) DeepCopy() *OpenTelemetry

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OpenTelemetry.

func (*OpenTelemetry) DeepCopyInto

func (in *OpenTelemetry) DeepCopyInto(out *OpenTelemetry)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*OpenTelemetry) Name

func (_ *OpenTelemetry) Name() string

Name implements the Section() method.

func (*OpenTelemetry) Params

func (o *OpenTelemetry) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type PrometheusExporter

type PrometheusExporter struct {
	// IP address or hostname of the target HTTP Server, default: 0.0.0.0
	Host string `json:"host"`
	// This is the port Fluent Bit will bind to when hosting prometheus metrics.
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// This allows you to add custom labels to all metrics exposed through the Prometheus exporter. You may have multiple of these fields.
	AddLabels map[string]string `json:"addLabels,omitempty"`
}

PrometheusExporter is an output plugin to expose Prometheus metrics. The prometheus exporter allows you to take metrics from Fluent Bit and expose them such that a Prometheus instance can scrape them. **Important Note: The prometheus exporter only works with metric plugins, such as Node Exporter Metrics.** **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/prometheus-exporter**
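
A hedged sketch of an exporter section (example values; imports as above). Fluent Bit binds to the given host/port and Prometheus scrapes it; AddLabels attaches static labels to every exposed metric:

var promPort = int32(2021)

var prometheusExporterExample = &output.PrometheusExporter{
	Host: "0.0.0.0",
	Port: &promPort,
	AddLabels: map[string]string{
		"app": "fluent-bit", // added to all exposed metrics
	},
}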

func (*PrometheusExporter) DeepCopy

func (in *PrometheusExporter) DeepCopy() *PrometheusExporter

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PrometheusExporter.

func (*PrometheusExporter) DeepCopyInto

func (in *PrometheusExporter) DeepCopyInto(out *PrometheusExporter)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*PrometheusExporter) Name

func (_ *PrometheusExporter) Name() string

Name implements the Section() method.

func (*PrometheusExporter) Params

func (p *PrometheusExporter) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type PrometheusRemoteWrite

type PrometheusRemoteWrite struct {
	// IP address or hostname of the target HTTP Server, default: 127.0.0.1
	Host string `json:"host"`
	// Basic Auth Username
	HTTPUser *plugins.Secret `json:"httpUser,omitempty"`
	// Basic Auth Password.
	// Requires HTTP_User to be set.
	HTTPPasswd *plugins.Secret `json:"httpPasswd,omitempty"`
	// TCP port of the target HTTP Server, default: 80
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// Specify an HTTP Proxy. The expected format of this value is http://HOST:PORT.
	Proxy string `json:"proxy,omitempty"`
	// Specify an optional HTTP URI for the target web server, e.g: /something. Default: /
	URI string `json:"uri,omitempty"`
	// Add an HTTP header key/value pair. Multiple headers can be set.
	Headers map[string]string `json:"headers,omitempty"`
	// Log the response payload within the Fluent Bit log. Default: false
	LogResponsePayload *bool `json:"logResponsePayload,omitempty"`
	// This allows you to add custom labels to all metrics exposed through the prometheus exporter. You may have multiple of these fields.
	AddLabels map[string]string `json:"addLabels,omitempty"`
	// Enables dedicated thread(s) for this output. The default value is 2 since version 1.8.13; for previous versions it is 0.
	Workers *int32 `json:"workers,omitempty"`

	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
}

An output plugin to submit Prometheus Metrics using the remote write protocol. The prometheus remote write plugin allows you to take metrics from Fluent Bit and submit them to a Prometheus server through the remote write mechanism. **Important Note: The prometheus remote write plugin only works with metric plugins, such as Node Exporter Metrics.** **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/prometheus-remote-write**

func (*PrometheusRemoteWrite) DeepCopy

func (in *PrometheusRemoteWrite) DeepCopy() *PrometheusRemoteWrite

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PrometheusRemoteWrite.

func (*PrometheusRemoteWrite) DeepCopyInto

func (in *PrometheusRemoteWrite) DeepCopyInto(out *PrometheusRemoteWrite)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*PrometheusRemoteWrite) Name

func (_ *PrometheusRemoteWrite) Name() string

Name implements the Section() method.

func (*PrometheusRemoteWrite) Params

func (p *PrometheusRemoteWrite) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implements the Section() method.

type S3

type S3 struct {
	// The AWS region of your S3 bucket
	Region string `json:"Region"`
	// S3 Bucket name
	Bucket string `json:"Bucket"`
	// Specify the name of the time key in the output record. To disable the time key just set the value to false.
	JsonDateKey string `json:"JsonDateKey,omitempty"`
	// Specify the format of the date. Supported formats are double, epoch, iso8601 (eg: 2018-05-30T09:39:52.000681Z) and java_sql_timestamp (eg: 2018-05-30 09:39:52.000681)
	JsonDateFormat string `json:"JsonDateFormat,omitempty"`
	// Specifies the size of files in S3. Minimum size is 1M. With use_put_object On the maximum size is 1G. With multipart upload mode, the maximum size is 50G.
	TotalFileSize string `json:"TotalFileSize,omitempty"`
	// The size of each 'part' for multipart uploads. Max: 50M
	UploadChunkSize string `json:"UploadChunkSize,omitempty"`
	// Whenever this amount of time has elapsed, Fluent Bit will complete an upload and create a new file in S3. For example, set this value to 60m and you will get a new file every hour.
	UploadTimeout string `json:"UploadTimeout,omitempty"`
	// Directory to locally buffer data before sending.
	StoreDir string `json:"StoreDir,omitempty"`
	// The size limit for disk usage in S3.
	StoreDirLimitSize string `json:"StoreDirLimitSize,omitempty"`
	// Format string for keys in S3.
	S3KeyFormat string `json:"S3KeyFormat,omitempty"`
	// A series of characters which will be used to split the tag into 'parts' for use with the s3_key_format option.
	S3KeyFormatTagDelimiters string `json:"S3KeyFormatTagDelimiters,omitempty"`
	// Disables behavior where UUID string is automatically appended to end of S3 key name when $UUID is not provided in s3_key_format. $UUID, time formatters, $TAG, and other dynamic key formatters all work as expected while this feature is set to true.
	StaticFilePath *bool `json:"StaticFilePath,omitempty"`
	// Use the S3 PutObject API, instead of the multipart upload API.
	UsePutObject *bool `json:"UsePutObject,omitempty"`
	// ARN of an IAM role to assume
	RoleArn string `json:"RoleArn,omitempty"`
	// Custom endpoint for the S3 API.
	Endpoint string `json:"Endpoint,omitempty"`
	// Custom endpoint for the STS API.
	StsEndpoint string `json:"StsEndpoint,omitempty"`
	// Predefined Canned ACL Policy for S3 objects.
	CannedAcl string `json:"CannedAcl,omitempty"`
	// Compression type for S3 objects.
	Compression string `json:"Compression,omitempty"`
	// A standard MIME type for the S3 object; this will be set as the Content-Type HTTP header.
	ContentType string `json:"ContentType,omitempty"`
	// Send the Content-MD5 header with PutObject and UploadPart requests, as is required when Object Lock is enabled.
	SendContentMd5 *bool `json:"SendContentMd5,omitempty"`
	// Immediately retry failed requests to AWS services once.
	AutoRetryRequests *bool `json:"AutoRetryRequests,omitempty"`
	// By default, the whole log record will be sent to S3. If you specify a key name with this option, then only the value of that key will be sent to S3.
	LogKey string `json:"LogKey,omitempty"`
	// Normally, when an upload request fails, there is a high chance for the last received chunk to be swapped with a later chunk, resulting in data shuffling. This feature prevents this shuffling by using a queue logic for uploads.
	PreserveDataOrdering *bool `json:"PreserveDataOrdering,omitempty"`
	// Specify the storage class for S3 objects. If this option is not specified, objects will be stored with the default 'STANDARD' storage class.
	StorageClass string `json:"StorageClass,omitempty"`
	// Integer value to set the maximum number of retries allowed.
	RetryLimit *int32 `json:"RetryLimit,omitempty"`
	// Specify an external ID for the STS API, can be used with the role_arn parameter if your role requires an external ID.
	ExternalId string `json:"ExternalId,omitempty"`
	// Option to specify an AWS Profile for credentials.
	Profile      string `json:"Profile,omitempty"`
	*plugins.TLS `json:"tls,omitempty"`
}

The S3 output plugin allows you to flush your records into an Amazon S3 bucket. <br /> **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/s3**
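A minimal sketch of an S3 configuration in Go, under the same assumed import path as above; the bucket name, sizes, and key format are illustrative:

package main

import (
	"fmt"

	// Assumed import path; adjust to match your module's actual layout.
	"github.com/fluent/fluent-operator/v3/apis/fluentbit/v1alpha2/plugins/output"
)

func main() {
	s3 := &output.S3{
		Region:        "us-east-1",
		Bucket:        "my-log-archive", // hypothetical bucket name
		TotalFileSize: "50M",            // complete an upload at 50M...
		UploadTimeout: "10m",            // ...or after 10 minutes, whichever comes first
		S3KeyFormat:   "/logs/$TAG/%Y/%m/%d",
	}
	fmt.Println(s3.Name())
}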

func (*S3) DeepCopy

func (in *S3) DeepCopy() *S3

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new S3.

func (*S3) DeepCopyInto

func (in *S3) DeepCopyInto(out *S3)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*S3) Name

func (_ *S3) Name() string

Name implement Section() method

func (*S3) Params

func (o *S3) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implement Section() method

type Splunk

type Splunk struct {
	// IP address or hostname of the target Splunk instance, default `127.0.0.1`
	Host string `json:"host,omitempty"`
	// TCP port of the target Splunk instance, default `8088`
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// Specify the Authentication Token for the HTTP Event Collector interface.
	SplunkToken *plugins.Secret `json:"splunkToken,omitempty"`
	// Buffer size used to receive Splunk HTTP responses, default `2M`
	// +kubebuilder:validation:Pattern:="^\\d+(k|K|KB|kb|m|M|MB|mb|g|G|GB|gb)?$"
	HTTPBufferSize string `json:"httpBufferSize,omitempty"`
	// Set payload compression mechanism. The only available option is gzip.
	Compress string `json:"compress,omitempty"`
	// Specify X-Splunk-Request-Channel Header for the HTTP Event Collector interface.
	Channel string `json:"channel,omitempty"`
	// Optional username credential for access
	HTTPUser *plugins.Secret `json:"httpUser,omitempty"`
	// Password for user defined in HTTP_User
	HTTPPasswd *plugins.Secret `json:"httpPassword,omitempty"`
	// If the HTTP server response code is 400 (bad request) and this flag is enabled, it will print the full HTTP request
	// and response to the stdout interface. This feature is available for debugging purposes.
	HTTPDebugBadRequest *bool `json:"httpDebugBadRequest,omitempty"`
	// When enabled, the record keys and values are set in the top level of the map instead of under the event key. Refer to
	// the Sending Raw Events section of the docs for more details on making this option work properly.
	SplunkSendRaw *bool `json:"splunkSendRaw,omitempty"`
	// Specify the key name that will be used to send a single value as part of the record.
	EventKey string `json:"eventKey,omitempty"`
	// Specify the key name that contains the host value. This option allows a record accessor pattern.
	EventHost string `json:"eventHost,omitempty"`
	// Set the source value to assign to the event data.
	EventSource string `json:"eventSource,omitempty"`
	// Set the sourcetype value to assign to the event data.
	EventSourcetype string `json:"eventSourcetype,omitempty"`
	// Set a record key that will populate 'sourcetype'. If the key is found, it will have precedence
	// over the value set in event_sourcetype.
	EventSourcetypeKey string `json:"eventSourcetypeKey,omitempty"`
	// The name of the index by which the event data is to be indexed.
	EventIndex string `json:"eventIndex,omitempty"`
	// Set a record key that will populate the index field. If the key is found, it will have precedence
	// over the value set in event_index.
	EventIndexKey string `json:"eventIndexKey,omitempty"`
	// Set event fields for the record. This option is an array and the format is "key_name record_accessor_pattern".
	EventFields []string `json:"eventFields,omitempty"`

	// Enables dedicated thread(s) for this output. Default value `2` is set since version 1.8.13; for previous versions it is 0.
	Workers      *int32 `json:"Workers,omitempty"`
	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
}

Splunk output plugin allows you to ingest your records into a Splunk Enterprise service through the HTTP Event Collector (HEC) interface. <br /> **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/splunk**
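A minimal sketch wiring the HEC fields above, under the same assumed import path; the host is hypothetical, and pointer-typed options use a small generic helper (Go 1.18+):

package main

import (
	"fmt"

	// Assumed import path; adjust to match your module's actual layout.
	"github.com/fluent/fluent-operator/v3/apis/fluentbit/v1alpha2/plugins/output"
)

// ptr returns a pointer to v, for the plugin's optional pointer-typed fields.
func ptr[T any](v T) *T { return &v }

func main() {
	splunk := &output.Splunk{
		Host:        "splunk.example.com", // hypothetical HEC endpoint
		Port:        ptr(int32(8088)),     // default HEC port
		Compress:    "gzip",               // the only available compression option
		EventSource: "fluent-bit",
	}
	fmt.Println(splunk.Name())
}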

func (*Splunk) DeepCopy

func (in *Splunk) DeepCopy() *Splunk

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Splunk.

func (*Splunk) DeepCopyInto

func (in *Splunk) DeepCopyInto(out *Splunk)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Splunk) Name

func (_ *Splunk) Name() string

Name implement Section() method

func (*Splunk) Params

func (o *Splunk) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implement Section() method

type Stackdriver

type Stackdriver struct {
	// Path to GCP Credentials JSON file
	GoogleServiceCredentials string `json:"googleServiceCredentials,omitempty"`
	// Email associated with the service
	ServiceAccountEmail *plugins.Secret `json:"serviceAccountEmail,omitempty"`
	// Private Key associated with the service
	ServiceAccountSecret *plugins.Secret `json:"serviceAccountSecret,omitempty"`
	// Metadata Server Prefix
	MetadataServer string `json:"metadataServer,omitempty"`
	// GCP/AWS region to store data. Required if Resource is generic_node or generic_task
	Location string `json:"location,omitempty"`
	// Namespace identifier. Required if Resource is generic_node or generic_task
	Namespace string `json:"namespace,omitempty"`
	// Node identifier within the namespace. Required if Resource is generic_node or generic_task
	NodeID string `json:"nodeID,omitempty"`
	// Identifier for a grouping of tasks. Required if Resource is generic_task
	Job string `json:"job,omitempty"`
	// Identifier for a task within a namespace. Required if Resource is generic_task
	TaskID string `json:"taskID,omitempty"`
	// The GCP Project that should receive the logs
	ExportToProjectID string `json:"exportToProjectID,omitempty"`
	// Set resource types of data
	Resource string `json:"resource,omitempty"`
	// Name of the cluster that the pod is running in. Required if Resource is k8s_container, k8s_node, or k8s_pod
	K8sClusterName string `json:"k8sClusterName,omitempty"`
	// Location of the cluster that contains the pods/nodes. Required if Resource is k8s_container, k8s_node, or k8s_pod
	K8sClusterLocation string `json:"k8sClusterLocation,omitempty"`
	// Used by Stackdriver to find related labels and extract them to LogEntry Labels
	LabelsKey string `json:"labelsKey,omitempty"`
	// Optional comma-separated list of strings specifying key/value pairs
	Labels []string `json:"labels,omitempty"`
	// The value of this field is set as the logName field in Stackdriver
	LogNameKey string `json:"logNameKey,omitempty"`
	// Used to validate the log tags when the Resource is k8s_container, k8s_node, or k8s_pod
	TagPrefix string `json:"tagPrefix,omitempty"`
	// Specify the key that contains the severity information for the logs
	SeverityKey string `json:"severityKey,omitempty"`
	// Rewrite the trace field to be formatted for use with GCP Cloud Trace
	AutoformatStackdriverTrace *bool `json:"autoformatStackdriverTrace,omitempty"`
	// Number of dedicated threads for the Stackdriver Output Plugin
	Workers *int32 `json:"workers,omitempty"`
	// A custom regex to extract fields from the local_resource_id of the logs
	CustomK8sRegex string `json:"customK8sRegex,omitempty"`
	// Optional list of comma-separated strings. Setting these fields overrides the Stackdriver monitored resource API values
	ResourceLabels []string `json:"resourceLabels,omitempty"`
}

Stackdriver is the Stackdriver output plugin; it allows you to ingest your records into GCP Stackdriver. <br /> **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/stackdriver**
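A minimal sketch for a generic_node resource, which (per the field comments above) requires Location, Namespace, and NodeID; the import path and values are assumptions:

package main

import (
	"fmt"

	// Assumed import path; adjust to match your module's actual layout.
	"github.com/fluent/fluent-operator/v3/apis/fluentbit/v1alpha2/plugins/output"
)

func main() {
	sd := &output.Stackdriver{
		Resource:  "generic_node",
		Location:  "us-central1", // hypothetical region
		Namespace: "payments",    // hypothetical namespace identifier
		NodeID:    "node-01",     // hypothetical node identifier
	}
	fmt.Println(sd.Name())
}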

func (*Stackdriver) DeepCopy

func (in *Stackdriver) DeepCopy() *Stackdriver

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Stackdriver.

func (*Stackdriver) DeepCopyInto

func (in *Stackdriver) DeepCopyInto(out *Stackdriver)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Stackdriver) Name

func (_ *Stackdriver) Name() string

Name implement Section() method

func (*Stackdriver) Params

func (o *Stackdriver) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implement Section() method

type Stdout

type Stdout struct {
	// Specify the data format to be printed. Supported formats are msgpack, json, json_lines and json_stream.
	// +kubebuilder:validation:Enum:=msgpack;json;json_lines;json_stream
	Format string `json:"format,omitempty"`
	// Specify the name of the date field in output.
	JsonDateKey string `json:"jsonDateKey,omitempty"`
	// Specify the format of the date. Supported formats are double, iso8601 (e.g. 2018-05-30T09:39:52.000681Z) and epoch.
	// +kubebuilder:validation:Enum:=double;iso8601;epoch
	JsonDateFormat string `json:"jsonDateFormat,omitempty"`
}

The stdout output plugin allows you to print the data received through input plugins to the standard output. <br /> **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/standard-output**
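A minimal sketch, useful when debugging a pipeline locally; the import path is assumed as in the earlier sketches:

package main

import (
	"fmt"

	// Assumed import path; adjust to match your module's actual layout.
	"github.com/fluent/fluent-operator/v3/apis/fluentbit/v1alpha2/plugins/output"
)

func main() {
	stdout := &output.Stdout{
		Format:         "json_lines",
		JsonDateKey:    "timestamp",
		JsonDateFormat: "iso8601",
	}
	fmt.Println(stdout.Name())
}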

func (*Stdout) DeepCopy

func (in *Stdout) DeepCopy() *Stdout

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Stdout.

func (*Stdout) DeepCopyInto

func (in *Stdout) DeepCopyInto(out *Stdout)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Stdout) Name

func (_ *Stdout) Name() string

Name implement Section() method

func (*Stdout) Params

func (s *Stdout) Params(_ plugins.SecretLoader) (*params.KVs, error)

Params implement Section() method

type Syslog

type Syslog struct {
	// Host domain or IP address of the remote Syslog server.
	Host string `json:"host,omitempty"`
	// TCP or UDP port of the remote Syslog server.
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// Mode of the desired transport type; the available options are tcp, tls and udp.
	Mode string `json:"mode,omitempty"`
	// Syslog protocol format to use, the available options are rfc3164 and rfc5424.
	SyslogFormat string `json:"syslogFormat,omitempty"`
	// Maximum size allowed per message, in bytes.
	SyslogMaxSize *int32 `json:"syslogMaxSize,omitempty"`
	// Key from the original record that contains the Syslog severity number.
	SyslogSeverityKey string `json:"syslogSeverityKey,omitempty"`
	// Key from the original record that contains the Syslog facility number.
	SyslogFacilityKey string `json:"syslogFacilityKey,omitempty"`
	// Key name from the original record that contains the hostname that generated the message.
	SyslogHostnameKey string `json:"syslogHostnameKey,omitempty"`
	// Key name from the original record that contains the application name that generated the message.
	SyslogAppnameKey string `json:"syslogAppnameKey,omitempty"`
	// Key name from the original record that contains the Process ID that generated the message.
	SyslogProcessIDKey string `json:"syslogProcessIDKey,omitempty"`
	// Key name from the original record that contains the Message ID associated to the message.
	SyslogMessageIDKey string `json:"syslogMessageIDKey,omitempty"`
	// Key name from the original record that contains the Structured Data (SD) content.
	SyslogSDKey string `json:"syslogSDKey,omitempty"`
	// Key name that contains the message to deliver.
	SyslogMessageKey string `json:"syslogMessageKey,omitempty"`
	// Syslog output plugin supports TLS/SSL; for more details about the properties available
	// and general configuration, please refer to the TLS/SSL section.
	*plugins.TLS `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
	// Limit the maximum number of Chunks in the filesystem for the current output logical destination.
	TotalLimitSize string `json:"totalLimitSize,omitempty"`
}

Syslog output plugin allows you to deliver messages to Syslog servers. <br /> **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/syslog**
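A minimal sketch sending RFC 5424 messages over UDP, under the same assumed import path and with a hypothetical server; ptr is the same Go 1.18+ generic helper shown earlier:

package main

import (
	"fmt"

	// Assumed import path; adjust to match your module's actual layout.
	"github.com/fluent/fluent-operator/v3/apis/fluentbit/v1alpha2/plugins/output"
)

func ptr[T any](v T) *T { return &v }

func main() {
	syslog := &output.Syslog{
		Host:              "syslog.example.com", // hypothetical server
		Port:              ptr(int32(514)),
		Mode:              "udp",
		SyslogFormat:      "rfc5424",
		SyslogSeverityKey: "severity", // record key holding the severity number
	}
	fmt.Println(syslog.Name())
}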

func (*Syslog) DeepCopy

func (in *Syslog) DeepCopy() *Syslog

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Syslog.

func (*Syslog) DeepCopyInto

func (in *Syslog) DeepCopyInto(out *Syslog)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*Syslog) Name

func (_ *Syslog) Name() string

Name implement Section() method

func (*Syslog) Params

func (s *Syslog) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implement Section() method

type TCP

type TCP struct {
	// Target host where Fluent-Bit or Fluentd are listening for Forward messages.
	Host string `json:"host,omitempty"`
	// TCP Port of the target service.
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// Specify the data format to be printed. Supported formats are msgpack, json, json_lines and json_stream.
	// +kubebuilder:validation:Enum:=msgpack;json;json_lines;json_stream
	Format string `json:"format,omitempty"`
	// Specify the name of the time key in the output record.
	// To disable the time key just set the value to false.
	JsonDateKey string `json:"jsonDateKey,omitempty"`
	// Specify the format of the date. Supported formats are double, epoch
	// and iso8601 (e.g. 2018-05-30T09:39:52.000681Z)
	// +kubebuilder:validation:Enum:=double;epoch;iso8601
	JsonDateFormat string `json:"jsonDateFormat,omitempty"`
	*plugins.TLS   `json:"tls,omitempty"`
	// Include fluentbit networking options for this output-plugin
	*plugins.Networking `json:"networking,omitempty"`
}

The tcp output plugin allows you to send records to a remote TCP server. <br /> The payload can be formatted in different ways as required. <br /> **For full documentation, refer to https://docs.fluentbit.io/manual/pipeline/outputs/tcp-and-tls**
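A minimal sketch streaming json_lines to a remote listener; as before, the import path, host, and port are assumptions:

package main

import (
	"fmt"

	// Assumed import path; adjust to match your module's actual layout.
	"github.com/fluent/fluent-operator/v3/apis/fluentbit/v1alpha2/plugins/output"
)

func ptr[T any](v T) *T { return &v }

func main() {
	tcp := &output.TCP{
		Host:           "collector.example.com", // hypothetical endpoint
		Port:           ptr(int32(5170)),
		Format:         "json_lines",
		JsonDateFormat: "iso8601",
	}
	fmt.Println(tcp.Name())
}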

func (*TCP) DeepCopy

func (in *TCP) DeepCopy() *TCP

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TCP.

func (*TCP) DeepCopyInto

func (in *TCP) DeepCopyInto(out *TCP)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (*TCP) Name

func (_ *TCP) Name() string

Name implement Section() method

func (*TCP) Params

func (t *TCP) Params(sl plugins.SecretLoader) (*params.KVs, error)

Params implement Section() method
