package clickhousekafkauserconfig

v0.16.1
Published: Dec 15, 2023 License: Apache-2.0 Imports: 0 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type ClickhouseKafkaUserConfig

type ClickhouseKafkaUserConfig struct {
	// +kubebuilder:validation:MaxItems=100
	// Tables to create
	Tables []*Tables `groups:"create,update" json:"tables,omitempty"`
}

Integration user config
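To show how the types in this package compose, here is a minimal sketch that builds a one-table config and prints its JSON form. The structs below are local mirrors of the generated types (trimmed to the fields used), and all names and values ("events", `JSONEachRow`, the column types) are illustrative; in real code you would import the generated `ClickhouseKafkaUserConfig` from this package instead.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local mirrors of the generated types, trimmed for a self-contained example.
type Columns struct {
	Name string `json:"name"`
	Type string `json:"type"`
}

type Topics struct {
	Name string `json:"name"`
}

type Tables struct {
	AutoOffsetReset *string    `json:"auto_offset_reset,omitempty"`
	Columns         []*Columns `json:"columns"`
	DataFormat      string     `json:"data_format"`
	GroupName       string     `json:"group_name"`
	Name            string     `json:"name"`
	NumConsumers    *int       `json:"num_consumers,omitempty"`
	Topics          []*Topics  `json:"topics"`
}

type ClickhouseKafkaUserConfig struct {
	Tables []*Tables `json:"tables,omitempty"`
}

// ptr is a small helper for filling the optional pointer fields.
func ptr[T any](v T) *T { return &v }

// buildConfig assembles a single table that consumes JSON events
// from one Kafka topic. All concrete values are example values.
func buildConfig() *ClickhouseKafkaUserConfig {
	return &ClickhouseKafkaUserConfig{
		Tables: []*Tables{{
			Name:            "events",
			DataFormat:      "JSONEachRow", // must be one of the enum values listed above
			GroupName:       "events-consumer-group",
			AutoOffsetReset: ptr("earliest"),
			NumConsumers:    ptr(1),
			Columns: []*Columns{
				{Name: "id", Type: "UInt64"},
				{Name: "payload", Type: "String"},
			},
			Topics: []*Topics{{Name: "events"}},
		}},
	}
}

func main() {
	b, _ := json.MarshalIndent(buildConfig(), "", "  ")
	fmt.Println(string(b))
}
```

The JSON keys match the `json:"…"` tags on the generated types, which is the wire shape the Aiven API expects for this integration's user config.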

func (*ClickhouseKafkaUserConfig) DeepCopy

func (in *ClickhouseKafkaUserConfig) DeepCopy() *ClickhouseKafkaUserConfig

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClickhouseKafkaUserConfig.

func (*ClickhouseKafkaUserConfig) DeepCopyInto

func (in *ClickhouseKafkaUserConfig) DeepCopyInto(out *ClickhouseKafkaUserConfig)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type Columns

type Columns struct {
	// +kubebuilder:validation:MinLength=1
	// +kubebuilder:validation:MaxLength=40
	// Column name
	Name string `groups:"create,update" json:"name"`

	// +kubebuilder:validation:MinLength=1
	// +kubebuilder:validation:MaxLength=1000
	// Column type
	Type string `groups:"create,update" json:"type"`
}

Table column

func (*Columns) DeepCopy

func (in *Columns) DeepCopy() *Columns

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Columns.

func (*Columns) DeepCopyInto

func (in *Columns) DeepCopyInto(out *Columns)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type Tables

type Tables struct {
	// +kubebuilder:validation:Enum="smallest";"earliest";"beginning";"largest";"latest";"end"
	// Action to take when there is no initial offset in offset store or the desired offset is out of range
	AutoOffsetReset *string `groups:"create,update" json:"auto_offset_reset,omitempty"`

	// +kubebuilder:validation:MaxItems=100
	// Table columns
	Columns []*Columns `groups:"create,update" json:"columns"`

	// +kubebuilder:validation:Enum="Avro";"CSV";"JSONAsString";"JSONCompactEachRow";"JSONCompactStringsEachRow";"JSONEachRow";"JSONStringsEachRow";"MsgPack";"TSKV";"TSV";"TabSeparated";"RawBLOB";"AvroConfluent"
	// Message data format
	DataFormat string `groups:"create,update" json:"data_format"`

	// +kubebuilder:validation:Enum="basic";"best_effort";"best_effort_us"
	// Method to read DateTime from text input formats
	DateTimeInputFormat *string `groups:"create,update" json:"date_time_input_format,omitempty"`

	// +kubebuilder:validation:MinLength=1
	// +kubebuilder:validation:MaxLength=249
	// Kafka consumer group name
	GroupName string `groups:"create,update" json:"group_name"`

	// +kubebuilder:validation:Enum="default";"stream"
	// How to handle errors for Kafka engine
	HandleErrorMode *string `groups:"create,update" json:"handle_error_mode,omitempty"`

	// +kubebuilder:validation:Minimum=0
	// +kubebuilder:validation:Maximum=1000000000
	// Number of rows collected by poll(s) for flushing data from Kafka
	MaxBlockSize *int `groups:"create,update" json:"max_block_size,omitempty"`

	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=1000000000
	// The maximum number of rows produced in one Kafka message for row-based formats
	MaxRowsPerMessage *int `groups:"create,update" json:"max_rows_per_message,omitempty"`

	// +kubebuilder:validation:MinLength=1
	// +kubebuilder:validation:MaxLength=40
	// Name of the table
	Name string `groups:"create,update" json:"name"`

	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=10
	// The number of consumers per table per replica
	NumConsumers *int `groups:"create,update" json:"num_consumers,omitempty"`

	// +kubebuilder:validation:Minimum=0
	// +kubebuilder:validation:Maximum=1000000000
	// Maximum number of messages to be polled in a single Kafka poll
	PollMaxBatchSize *int `groups:"create,update" json:"poll_max_batch_size,omitempty"`

	// +kubebuilder:validation:Minimum=0
	// +kubebuilder:validation:Maximum=1000000000
	// Skip at least this number of broken messages from the Kafka topic per block
	SkipBrokenMessages *int `groups:"create,update" json:"skip_broken_messages,omitempty"`

	// +kubebuilder:validation:MaxItems=100
	// Kafka topics
	Topics []*Topics `groups:"create,update" json:"topics"`
}

Table to create

func (*Tables) DeepCopy

func (in *Tables) DeepCopy() *Tables

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Tables.

func (*Tables) DeepCopyInto

func (in *Tables) DeepCopyInto(out *Tables)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

type Topics

type Topics struct {
	// +kubebuilder:validation:MinLength=1
	// +kubebuilder:validation:MaxLength=249
	// Name of the topic
	Name string `groups:"create,update" json:"name"`
}

Kafka topic

func (*Topics) DeepCopy

func (in *Topics) DeepCopy() *Topics

DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Topics.

func (*Topics) DeepCopyInto

func (in *Topics) DeepCopyInto(out *Topics)

DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
