clickhouse

package v0.40.1

Published: Dec 5, 2024 License: BSD-3-Clause Imports: 23 Imported by: 0

README

clickhouse output

It sends event batches to the Clickhouse database using the Native format over the Native protocol.

File.d uses the low-level Go client ch-go to provide these features.

Config params

addresses []Address required

TCP Clickhouse addresses, e.g.: 127.0.0.1:9000. Check the insert_strategy to find out how File.d will behave with a list of addresses.

Accepts strings or objects, e.g.:

addresses:
  - 127.0.0.1:9000 # the same as {addr:'127.0.0.1:9000',weight:1}
  - addr: 127.0.0.1:9001
    weight: 2

When some addresses have a weight greater than 1 and the round_robin insert strategy is used, it works as classical weighted round robin. Given {(a_1,w_1),(a_2,w_2),...,(a_n,w_n)}, where a_i is the i-th address and w_i is the i-th address' weight, requests are sent in order: w_1 times to a_1, w_2 times to a_2, ..., w_n times to a_n, then w_1 times to a_1 again, and so on.
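The weighted order described above can be sketched by expanding each address into its weight-many slots. This is an illustrative helper mirroring the config's Address shape, not the plugin's actual scheduler:

```go
package main

import "fmt"

// Address mirrors the plugin's config entry: a TCP address and its weight.
type Address struct {
	Addr   string
	Weight int
}

// expandWeighted flattens weighted addresses into the plain round-robin
// order described above: w_1 requests to a_1, then w_2 to a_2, and so on.
func expandWeighted(addrs []Address) []string {
	var order []string
	for _, a := range addrs {
		w := a.Weight
		if w < 1 {
			w = 1 // the plain string form implies weight 1
		}
		for i := 0; i < w; i++ {
			order = append(order, a.Addr)
		}
	}
	return order
}

func main() {
	order := expandWeighted([]Address{
		{Addr: "127.0.0.1:9000", Weight: 1},
		{Addr: "127.0.0.1:9001", Weight: 2},
	})
	fmt.Println(order) // [127.0.0.1:9000 127.0.0.1:9001 127.0.0.1:9001]
}
```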


insert_strategy string default=round_robin options=round_robin|in_order

If more than one address is set, File.d will insert batches depending on the strategy: round_robin - File.d will send requests in round-robin order. in_order - File.d will send requests starting from the first address, advancing through the list on each retry.


ca_cert string

CA certificate in PEM encoding. This can be a path or the content of the certificate.
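Since ca_cert accepts either a path or the certificate content, a loader has to tell the two apart. A minimal sketch, assuming inline content always starts with a PEM header (caCertPEM is a hypothetical helper, not the plugin's code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// caCertPEM resolves a ca_cert value that may be either a file path or
// the literal PEM content of the certificate.
func caCertPEM(v string) ([]byte, error) {
	if strings.HasPrefix(v, "-----BEGIN") {
		return []byte(v), nil // literal certificate content
	}
	return os.ReadFile(v) // otherwise treat the value as a path
}

func main() {
	pem, err := caCertPEM("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----")
	fmt.Println(len(pem) > 0, err) // true <nil>
}
```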


database string default=default

The Clickhouse database that contains the target table.


user string default=default

Clickhouse database user.


password string

Clickhouse database password.


quota_key string

Clickhouse quota key. https://clickhouse.com/docs/en/operations/quotas


table string required

Clickhouse target table.


columns []Column required

Clickhouse table columns. Each column must contain name and type. File.d supports the following data types:

  • Signed and unsigned integers from 8 to 64 bits. If you use 128- or 256-bit integers, File.d will cast the number to int64.
  • DateTime, DateTime64
  • String
  • Enum8, Enum16
  • Bool
  • Nullable
  • IPv4, IPv6
  • LowCardinality(String)
  • Array(String)

If you need more types, please create an issue.


strict_types bool default=false

If true, file.d fails when types are mismatched.

If false, file.d will cast any JSON type to the column type.

For example, if strict_types is false and an event value is a Number but the column type is a Bool, the Number is converted to true if its value is "1". But if the value is an Object and the column is an Int, File.d converts the Object to 0 to prevent a crash.

In the non-strict mode, for String and Array(String) columns the value will be encoded to JSON.

If strict mode is enabled, file.d fails (exits with code 1) in the examples above.
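The non-strict casting rule for a Bool column can be sketched as follows. castToBool is a hypothetical helper written only to illustrate the rule; in the plugin this behavior is implemented via NonStrictNode and ZeroValueNode:

```go
package main

import "fmt"

// castToBool sketches the non-strict rule: a Number maps to true only
// when it equals 1, and a type the column cannot hold (e.g. an Object)
// falls back to the zero value instead of failing.
func castToBool(v any) bool {
	switch t := v.(type) {
	case bool:
		return t
	case float64: // JSON numbers decode as float64
		return t == 1
	case string:
		return t == "true" || t == "1"
	default:
		return false // zero value for incompatible types, e.g. objects
	}
}

func main() {
	fmt.Println(castToBool(float64(1)))       // true
	fmt.Println(castToBool(map[string]any{})) // false
}
```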


retry int default=10

Retries of insertion. If File.d cannot insert after this number of attempts, File.d will exit with a non-zero code or skip the message (see fatal_on_failed_insert).


fatal_on_failed_insert bool default=false

Whether to exit with a non-zero code after an insert error. Experimental feature.


clickhouse_settings Settings

Additional Clickhouse settings. Settings list: https://clickhouse.com/docs/en/operations/settings/settings


retention cfg.Duration default=50ms

Delay before retrying an insert to the DB.


retention_exponentially_multiplier int default=2

Multiplier for the exponential increase of retention between retries.


insert_timeout cfg.Duration default=10s

Timeout for each insert request.


max_conns cfg.Expression default=gomaxprocs*4

Max connections in the connection pool.


min_conns cfg.Expression default=gomaxprocs*1

Min connections in the connection pool.


max_conn_lifetime cfg.Duration default=30m

How long a connection lives before it is killed and recreated.


max_conn_idle_time cfg.Duration default=5m

How long an unused connection lives before it is killed.


health_check_period cfg.Duration default=1m

How often to check whether idle connections should be closed.


workers_count cfg.Expression default=gomaxprocs*4

How many workers will be instantiated to send batches. It also bounds the minimum and maximum number of database connections.


batch_size cfg.Expression default=capacity/4

Maximum quantity of events to pack into one batch.


batch_size_bytes cfg.Expression default=0

A minimum size of events in a batch to send. If both batch_size and batch_size_bytes are set, they will work together.


batch_flush_timeout cfg.Duration default=200ms

After this timeout, the batch will be sent even if it isn't full.



Generated using insane-doc

Documentation

Index

Constants

This section is empty.

Variables

var (
	ErrInvalidTimeType = errors.New("invalid node type for the time")
)
var (
	ErrNodeIsNil = errors.New("node is nil, but column is not")
)

Functions

func Factory

func Factory() (pipeline.AnyPlugin, pipeline.AnyConfig)

Types

type Address added in v0.17.0

type Address struct {
	Addr   string `json:"addr"`
	Weight int    `json:"weight"`
}

func (*Address) UnmarshalJSON added in v0.17.0

func (a *Address) UnmarshalJSON(b []byte) error

type Clickhouse

type Clickhouse interface {
	Close()
	Do(ctx context.Context, query ch.Query) error
}

type ColBool

type ColBool struct {
	// contains filtered or unexported fields
}

ColBool represents Clickhouse Bool type.

func NewColBool

func NewColBool(nullable bool) *ColBool

func (*ColBool) Append

func (t *ColBool) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColBool) EncodeColumn

func (t *ColBool) EncodeColumn(buffer *proto.Buffer)

func (*ColBool) Reset

func (t *ColBool) Reset()

func (*ColBool) Rows

func (t *ColBool) Rows() int

func (*ColBool) Type

func (t *ColBool) Type() proto.ColumnType

type ColDateTime

type ColDateTime struct {
	// contains filtered or unexported fields
}

ColDateTime represents Clickhouse DateTime type.

func NewColDateTime

func NewColDateTime(col *proto.ColDateTime) *ColDateTime

func (*ColDateTime) Append

func (t *ColDateTime) Append(node InsaneNode) error

func (*ColDateTime) EncodeColumn

func (t *ColDateTime) EncodeColumn(buffer *proto.Buffer)

func (*ColDateTime) Reset

func (t *ColDateTime) Reset()

func (*ColDateTime) Rows

func (t *ColDateTime) Rows() int

func (*ColDateTime) Type

func (t *ColDateTime) Type() proto.ColumnType

type ColDateTime64

type ColDateTime64 struct {
	// contains filtered or unexported fields
}

ColDateTime64 represents Clickhouse DateTime64 type.

func NewColDateTime64

func NewColDateTime64(col *proto.ColDateTime64, scale int64) *ColDateTime64

func (*ColDateTime64) Append

func (t *ColDateTime64) Append(node InsaneNode) error

func (*ColDateTime64) EncodeColumn

func (t *ColDateTime64) EncodeColumn(buffer *proto.Buffer)

func (*ColDateTime64) Reset

func (t *ColDateTime64) Reset()

func (*ColDateTime64) Rows

func (t *ColDateTime64) Rows() int

func (*ColDateTime64) Type

func (t *ColDateTime64) Type() proto.ColumnType

type ColEnum16

type ColEnum16 struct {
	// contains filtered or unexported fields
}

ColEnum16 represents Clickhouse Enum16 type.

func (*ColEnum16) Append

func (t *ColEnum16) Append(node InsaneNode) error

func (*ColEnum16) EncodeColumn

func (t *ColEnum16) EncodeColumn(buffer *proto.Buffer)

func (*ColEnum16) Prepare

func (t *ColEnum16) Prepare() error

Prepare the column before sending.

func (*ColEnum16) Reset

func (t *ColEnum16) Reset()

func (*ColEnum16) Rows

func (t *ColEnum16) Rows() int

func (*ColEnum16) Type

func (t *ColEnum16) Type() proto.ColumnType

type ColEnum8

type ColEnum8 struct {
	// contains filtered or unexported fields
}

ColEnum8 represents Clickhouse Enum8 type.

func NewColEnum16

func NewColEnum16(col *proto.ColEnum) *ColEnum8

func NewColEnum8

func NewColEnum8(col *proto.ColEnum) *ColEnum8

func (*ColEnum8) Append

func (t *ColEnum8) Append(node InsaneNode) error

func (*ColEnum8) EncodeColumn

func (t *ColEnum8) EncodeColumn(buffer *proto.Buffer)

func (*ColEnum8) Prepare

func (t *ColEnum8) Prepare() error

Prepare the column before sending.

func (*ColEnum8) Reset

func (t *ColEnum8) Reset()

func (*ColEnum8) Rows

func (t *ColEnum8) Rows() int

func (*ColEnum8) Type

func (t *ColEnum8) Type() proto.ColumnType

type ColFloat32

type ColFloat32 struct {
	// contains filtered or unexported fields
}

ColFloat32 represents Clickhouse Float32 type.

func NewColFloat32

func NewColFloat32(nullable bool) *ColFloat32

func (*ColFloat32) Append

func (t *ColFloat32) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColFloat32) EncodeColumn

func (t *ColFloat32) EncodeColumn(buffer *proto.Buffer)

func (*ColFloat32) Reset

func (t *ColFloat32) Reset()

func (*ColFloat32) Rows

func (t *ColFloat32) Rows() int

func (*ColFloat32) Type

func (t *ColFloat32) Type() proto.ColumnType

type ColFloat64

type ColFloat64 struct {
	// contains filtered or unexported fields
}

ColFloat64 represents Clickhouse Float64 type.

func NewColFloat64

func NewColFloat64(nullable bool) *ColFloat64

func (*ColFloat64) Append

func (t *ColFloat64) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColFloat64) EncodeColumn

func (t *ColFloat64) EncodeColumn(buffer *proto.Buffer)

func (*ColFloat64) Reset

func (t *ColFloat64) Reset()

func (*ColFloat64) Rows

func (t *ColFloat64) Rows() int

func (*ColFloat64) Type

func (t *ColFloat64) Type() proto.ColumnType

type ColIPv4

type ColIPv4 struct {
	// contains filtered or unexported fields
}

ColIPv4 represents Clickhouse IPv4 type.

func NewColIPv4

func NewColIPv4(nullable bool) *ColIPv4

func (*ColIPv4) Append

func (t *ColIPv4) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColIPv4) EncodeColumn

func (t *ColIPv4) EncodeColumn(buffer *proto.Buffer)

func (*ColIPv4) Reset

func (t *ColIPv4) Reset()

func (*ColIPv4) Rows

func (t *ColIPv4) Rows() int

func (*ColIPv4) Type

func (t *ColIPv4) Type() proto.ColumnType

type ColIPv6

type ColIPv6 struct {
	// contains filtered or unexported fields
}

ColIPv6 represents Clickhouse IPv6 type.

func NewColIPv6

func NewColIPv6(nullable bool) *ColIPv6

func (*ColIPv6) Append

func (t *ColIPv6) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColIPv6) EncodeColumn

func (t *ColIPv6) EncodeColumn(buffer *proto.Buffer)

func (*ColIPv6) Reset

func (t *ColIPv6) Reset()

func (*ColIPv6) Rows

func (t *ColIPv6) Rows() int

func (*ColIPv6) Type

func (t *ColIPv6) Type() proto.ColumnType

type ColInt128

type ColInt128 struct {
	// contains filtered or unexported fields
}

ColInt128 represents Clickhouse Int128 type.

func NewColInt128

func NewColInt128(nullable bool) *ColInt128

func (*ColInt128) Append

func (t *ColInt128) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColInt128) EncodeColumn

func (t *ColInt128) EncodeColumn(buffer *proto.Buffer)

func (*ColInt128) Reset

func (t *ColInt128) Reset()

func (*ColInt128) Rows

func (t *ColInt128) Rows() int

func (*ColInt128) Type

func (t *ColInt128) Type() proto.ColumnType

type ColInt16

type ColInt16 struct {
	// contains filtered or unexported fields
}

ColInt16 represents Clickhouse Int16 type.

func NewColInt16

func NewColInt16(nullable bool) *ColInt16

func (*ColInt16) Append

func (t *ColInt16) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColInt16) EncodeColumn

func (t *ColInt16) EncodeColumn(buffer *proto.Buffer)

func (*ColInt16) Reset

func (t *ColInt16) Reset()

func (*ColInt16) Rows

func (t *ColInt16) Rows() int

func (*ColInt16) Type

func (t *ColInt16) Type() proto.ColumnType

type ColInt256

type ColInt256 struct {
	// contains filtered or unexported fields
}

ColInt256 represents Clickhouse Int256 type.

func NewColInt256

func NewColInt256(nullable bool) *ColInt256

func (*ColInt256) Append

func (t *ColInt256) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColInt256) EncodeColumn

func (t *ColInt256) EncodeColumn(buffer *proto.Buffer)

func (*ColInt256) Reset

func (t *ColInt256) Reset()

func (*ColInt256) Rows

func (t *ColInt256) Rows() int

func (*ColInt256) Type

func (t *ColInt256) Type() proto.ColumnType

type ColInt32

type ColInt32 struct {
	// contains filtered or unexported fields
}

ColInt32 represents Clickhouse Int32 type.

func NewColInt32

func NewColInt32(nullable bool) *ColInt32

func (*ColInt32) Append

func (t *ColInt32) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColInt32) EncodeColumn

func (t *ColInt32) EncodeColumn(buffer *proto.Buffer)

func (*ColInt32) Reset

func (t *ColInt32) Reset()

func (*ColInt32) Rows

func (t *ColInt32) Rows() int

func (*ColInt32) Type

func (t *ColInt32) Type() proto.ColumnType

type ColInt64

type ColInt64 struct {
	// contains filtered or unexported fields
}

ColInt64 represents Clickhouse Int64 type.

func NewColInt64

func NewColInt64(nullable bool) *ColInt64

func (*ColInt64) Append

func (t *ColInt64) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColInt64) EncodeColumn

func (t *ColInt64) EncodeColumn(buffer *proto.Buffer)

func (*ColInt64) Reset

func (t *ColInt64) Reset()

func (*ColInt64) Rows

func (t *ColInt64) Rows() int

func (*ColInt64) Type

func (t *ColInt64) Type() proto.ColumnType

type ColInt8

type ColInt8 struct {
	// contains filtered or unexported fields
}

ColInt8 represents Clickhouse Int8 type.

func NewColInt8

func NewColInt8(nullable bool) *ColInt8

func (*ColInt8) Append

func (t *ColInt8) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColInt8) EncodeColumn

func (t *ColInt8) EncodeColumn(buffer *proto.Buffer)

func (*ColInt8) Reset

func (t *ColInt8) Reset()

func (*ColInt8) Rows

func (t *ColInt8) Rows() int

func (*ColInt8) Type

func (t *ColInt8) Type() proto.ColumnType

type ColString

type ColString struct {
	// contains filtered or unexported fields
}

ColString represents Clickhouse String type.

func NewColString

func NewColString(nullable, lowCardinality bool) *ColString

func (*ColString) Append

func (t *ColString) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColString) EncodeColumn

func (t *ColString) EncodeColumn(buffer *proto.Buffer)

func (*ColString) EncodeState added in v0.11.0

func (t *ColString) EncodeState(b *proto.Buffer)

func (*ColString) Prepare added in v0.11.0

func (t *ColString) Prepare() error

Prepare the column before sending.

func (*ColString) Reset

func (t *ColString) Reset()

func (*ColString) Rows

func (t *ColString) Rows() int

func (*ColString) Type

func (t *ColString) Type() proto.ColumnType

type ColStringArray added in v0.11.0

type ColStringArray struct {
	*proto.ColArr[string]
}

func NewColStringArray added in v0.11.0

func NewColStringArray() *ColStringArray

func (*ColStringArray) Append added in v0.11.0

func (t *ColStringArray) Append(array InsaneNode) error

type ColUInt128

type ColUInt128 struct {
	// contains filtered or unexported fields
}

ColUInt128 represents Clickhouse UInt128 type.

func NewColUInt128

func NewColUInt128(nullable bool) *ColUInt128

func (*ColUInt128) Append

func (t *ColUInt128) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColUInt128) EncodeColumn

func (t *ColUInt128) EncodeColumn(buffer *proto.Buffer)

func (*ColUInt128) Reset

func (t *ColUInt128) Reset()

func (*ColUInt128) Rows

func (t *ColUInt128) Rows() int

func (*ColUInt128) Type

func (t *ColUInt128) Type() proto.ColumnType

type ColUInt16

type ColUInt16 struct {
	// contains filtered or unexported fields
}

ColUInt16 represents Clickhouse UInt16 type.

func NewColUInt16

func NewColUInt16(nullable bool) *ColUInt16

func (*ColUInt16) Append

func (t *ColUInt16) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColUInt16) EncodeColumn

func (t *ColUInt16) EncodeColumn(buffer *proto.Buffer)

func (*ColUInt16) Reset

func (t *ColUInt16) Reset()

func (*ColUInt16) Rows

func (t *ColUInt16) Rows() int

func (*ColUInt16) Type

func (t *ColUInt16) Type() proto.ColumnType

type ColUInt256

type ColUInt256 struct {
	// contains filtered or unexported fields
}

ColUInt256 represents Clickhouse UInt256 type.

func NewColUInt256

func NewColUInt256(nullable bool) *ColUInt256

func (*ColUInt256) Append

func (t *ColUInt256) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColUInt256) EncodeColumn

func (t *ColUInt256) EncodeColumn(buffer *proto.Buffer)

func (*ColUInt256) Reset

func (t *ColUInt256) Reset()

func (*ColUInt256) Rows

func (t *ColUInt256) Rows() int

func (*ColUInt256) Type

func (t *ColUInt256) Type() proto.ColumnType

type ColUInt32

type ColUInt32 struct {
	// contains filtered or unexported fields
}

ColUInt32 represents Clickhouse UInt32 type.

func NewColUInt32

func NewColUInt32(nullable bool) *ColUInt32

func (*ColUInt32) Append

func (t *ColUInt32) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColUInt32) EncodeColumn

func (t *ColUInt32) EncodeColumn(buffer *proto.Buffer)

func (*ColUInt32) Reset

func (t *ColUInt32) Reset()

func (*ColUInt32) Rows

func (t *ColUInt32) Rows() int

func (*ColUInt32) Type

func (t *ColUInt32) Type() proto.ColumnType

type ColUInt64

type ColUInt64 struct {
	// contains filtered or unexported fields
}

ColUInt64 represents Clickhouse UInt64 type.

func NewColUInt64

func NewColUInt64(nullable bool) *ColUInt64

func (*ColUInt64) Append

func (t *ColUInt64) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColUInt64) EncodeColumn

func (t *ColUInt64) EncodeColumn(buffer *proto.Buffer)

func (*ColUInt64) Reset

func (t *ColUInt64) Reset()

func (*ColUInt64) Rows

func (t *ColUInt64) Rows() int

func (*ColUInt64) Type

func (t *ColUInt64) Type() proto.ColumnType

type ColUInt8

type ColUInt8 struct {
	// contains filtered or unexported fields
}

ColUInt8 represents Clickhouse UInt8 type.

func NewColUInt8

func NewColUInt8(nullable bool) *ColUInt8

func (*ColUInt8) Append

func (t *ColUInt8) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColUInt8) EncodeColumn

func (t *ColUInt8) EncodeColumn(buffer *proto.Buffer)

func (*ColUInt8) Reset

func (t *ColUInt8) Reset()

func (*ColUInt8) Rows

func (t *ColUInt8) Rows() int

func (*ColUInt8) Type

func (t *ColUInt8) Type() proto.ColumnType

type ColUUID added in v0.11.1

type ColUUID struct {
	// contains filtered or unexported fields
}

ColUUID represents Clickhouse UUID type.

func NewColUUID added in v0.11.1

func NewColUUID(nullable bool) *ColUUID

func (*ColUUID) Append added in v0.11.1

func (t *ColUUID) Append(node InsaneNode) error

Append the insaneJSON.Node to the batch.

func (*ColUUID) EncodeColumn added in v0.11.1

func (t *ColUUID) EncodeColumn(buffer *proto.Buffer)

func (*ColUUID) Reset added in v0.11.1

func (t *ColUUID) Reset()

func (*ColUUID) Rows added in v0.11.1

func (t *ColUUID) Rows() int

func (*ColUUID) Type added in v0.11.1

func (t *ColUUID) Type() proto.ColumnType

type Column

type Column struct {
	Name string `json:"name"`
	Type string `json:"type"`
}

type Config

type Config struct {
	// > @3@4@5@6
	// >
	// > TCP Clickhouse addresses, e.g.: 127.0.0.1:9000.
	// > Check the insert_strategy to find out how File.d will behave with a list of addresses.
	// >
	// > Accepts strings or objects, e.g.:
	// > ```yaml
	// > addresses:
	// >   - 127.0.0.1:9000 # the same as {addr:'127.0.0.1:9000',weight:1}
	// >   - addr: 127.0.0.1:9001
	// >     weight: 2
	// > ```
	// >
	// > When some addresses have a weight greater than 1 and the round_robin insert strategy is used,
	// > it works as classical weighted round robin. Given {(a_1,w_1),(a_2,w_2),...,(a_n,w_n)},
	// > where a_i is the i-th address and w_i is the i-th address' weight, requests are sent in order:
	// > w_1 times to a_1, w_2 times to a_2, ..., w_n times to a_n, then w_1 times to a_1 again, and so on.
	Addresses []Address `json:"addresses" required:"true" slice:"true"` // *

	// > @3@4@5@6
	// >
	// > If more than one address is set, File.d will insert batches depending on the strategy:
	// > round_robin - File.d will send requests in round-robin order.
	// > in_order - File.d will send requests starting from the first address, advancing through the list on each retry.
	InsertStrategy  string `json:"insert_strategy" default:"round_robin" options:"round_robin|in_order"` // *
	InsertStrategy_ InsertStrategy

	// > @3@4@5@6
	// >
	// > CA certificate in PEM encoding. This can be a path or the content of the certificate.
	CACert string `json:"ca_cert" default:""` // *

	// > @3@4@5@6
	// >
	// > Clickhouse database name to search the table.
	Database string `json:"database" default:"default"` // *

	// > @3@4@5@6
	// >
	// > Clickhouse database user.
	User string `json:"user" default:"default"` // *

	// > @3@4@5@6
	// >
	// > Clickhouse database password.
	Password string `json:"password" default:""` // *

	// > @3@4@5@6
	// >
	// > Clickhouse quota key.
	// > https://clickhouse.com/docs/en/operations/quotas
	QuotaKey string `json:"quota_key" default:""` // *

	// > @3@4@5@6
	// >
	// > Clickhouse target table.
	Table string `json:"table" required:"true"` // *

	// > @3@4@5@6
	// >
	// > Clickhouse table columns. Each column must contain `name` and `type`.
	// > File.d supports the following data types:
	// > * Signed and unsigned integers from 8 to 64 bits.
	// > If you use 128- or 256-bit integers, File.d will cast the number to int64.
	// > * DateTime, DateTime64
	// > * String
	// > * Enum8, Enum16
	// > * Bool
	// > * Nullable
	// > * IPv4, IPv6
	// > * LowCardinality(String)
	// > * Array(String)
	// >
	// > If you need more types, please create an issue.
	Columns []Column `json:"columns" required:"true"` // *

	// > @3@4@5@6
	// >
	// > If true, file.d fails when types are mismatched.
	// >
	// > If false, file.d will cast any JSON type to the column type.
	// >
	// > For example, if strict_types is false and an event value is a Number
	// > but the column type is a Bool, the Number is converted to true
	// > if its value is "1".
	// > But if the value is an Object and the column is an Int,
	// > File.d converts the Object to 0 to prevent a crash.
	// >
	// > In the non-strict mode, for String and Array(String) columns the value will be encoded to JSON.
	// >
	// > If strict mode is enabled, file.d fails (exits with code 1) in the examples above.
	StrictTypes bool `json:"strict_types" default:"false"` // *

	// > @3@4@5@6
	// >
	// > The level of compression.
	// > Disabled - lowest CPU overhead.
	// > LZ4 - medium CPU overhead.
	// > ZSTD - high CPU overhead.
	// > None - uses no compression but data has checksums.
	Compression string `default:"disabled" options:"disabled|lz4|zstd|none"` // *

	// > @3@4@5@6
	// >
	// > Retries of insertion. If File.d cannot insert after this number of attempts,
	// > File.d will exit with a non-zero code or skip the message (see fatal_on_failed_insert).
	Retry int `json:"retry" default:"10"` // *

	// > @3@4@5@6
	// >
	// > Whether to exit with a non-zero code after an insert error.
	// > **Experimental feature**
	FatalOnFailedInsert bool `json:"fatal_on_failed_insert" default:"false"` // *

	// > @3@4@5@6
	// >
	// > Additional Clickhouse settings.
	// > Settings list: https://clickhouse.com/docs/en/operations/settings/settings
	ClickhouseSettings Settings `json:"clickhouse_settings"` // *

	// > @3@4@5@6
	// >
	// > Delay before retrying an insert to the DB.
	Retention  cfg.Duration `json:"retention" default:"50ms" parse:"duration"` // *
	Retention_ time.Duration

	// > @3@4@5@6
	// >
	// > Multiplier for the exponential increase of retention between retries.
	RetentionExponentMultiplier int `json:"retention_exponentially_multiplier" default:"2"` // *

	// > @3@4@5@6
	// >
	// > Timeout for each insert request.
	InsertTimeout  cfg.Duration `json:"insert_timeout" default:"10s" parse:"duration"` // *
	InsertTimeout_ time.Duration

	// > @3@4@5@6
	// >
	// > Max connections in the connection pool.
	MaxConns  cfg.Expression `json:"max_conns" default:"gomaxprocs*4"  parse:"expression"` // *
	MaxConns_ int32

	// > @3@4@5@6
	// >
	// > Min connections in the connection pool.
	MinConns  cfg.Expression `json:"min_conns" default:"gomaxprocs*1"  parse:"expression"` // *
	MinConns_ int32

	// > @3@4@5@6
	// >
	// > How long a connection lives before it is killed and recreated.
	MaxConnLifetime  cfg.Duration `json:"max_conn_lifetime" default:"30m" parse:"duration"` // *
	MaxConnLifetime_ time.Duration

	// > @3@4@5@6
	// >
	// > How long an unused connection lives before it is killed.
	MaxConnIdleTime  cfg.Duration `json:"max_conn_idle_time" default:"5m" parse:"duration"` // *
	MaxConnIdleTime_ time.Duration

	// > @3@4@5@6
	// >
	// > How often to check whether idle connections should be closed.
	HealthCheckPeriod  cfg.Duration `json:"health_check_period" default:"1m" parse:"duration"` // *
	HealthCheckPeriod_ time.Duration

	// > @3@4@5@6
	// >
	// > How many workers will be instantiated to send batches.
	// > It also bounds the minimum and maximum number of database connections.
	WorkersCount  cfg.Expression `json:"workers_count" default:"gomaxprocs*4" parse:"expression"` // *
	WorkersCount_ int

	// > @3@4@5@6
	// >
	// > Maximum quantity of events to pack into one batch.
	BatchSize  cfg.Expression `json:"batch_size" default:"capacity/4"  parse:"expression"` // *
	BatchSize_ int

	// > @3@4@5@6
	// >
	// > A minimum size of events in a batch to send.
	// > If both batch_size and batch_size_bytes are set, they will work together.
	BatchSizeBytes  cfg.Expression `json:"batch_size_bytes" default:"0" parse:"expression"` // *
	BatchSizeBytes_ int

	// > @3@4@5@6
	// >
	// > After this timeout, the batch will be sent even if it isn't full.
	BatchFlushTimeout  cfg.Duration `json:"batch_flush_timeout" default:"200ms" parse:"duration"` // *
	BatchFlushTimeout_ time.Duration
}

! config-params ^ config-params

type InsaneColInput

type InsaneColInput interface {
	proto.ColInput
	Append(node InsaneNode) error
	Reset()
}

type InsaneColumn

type InsaneColumn struct {
	Name     string
	ColInput InsaneColInput
}

type InsaneNode added in v0.9.5

type InsaneNode interface {
	AsInt() (int, error)
	AsFloat32() (float32, error)
	AsFloat64() (float64, error)
	AsString() (string, error)
	AsBool() (bool, error)
	AsInt64() (int64, error)
	AsStringArray() ([]string, error)
	AsUUID() (uuid.UUID, error)
	AsIPv4() (proto.IPv4, error)
	AsIPv6() (proto.IPv6, error)
	AsTime(scale int64) (time.Time, error)

	IsNull() bool
}

type InsertStrategy

type InsertStrategy byte
const (
	StrategyRoundRobin InsertStrategy = iota
	StrategyInOrder
)

type NonStrictNode added in v0.9.5

type NonStrictNode struct {
	*insaneJSON.Node
}

func (NonStrictNode) AsBool added in v0.9.5

func (n NonStrictNode) AsBool() (bool, error)

func (NonStrictNode) AsFloat32 added in v0.12.0

func (n NonStrictNode) AsFloat32() (float32, error)

func (NonStrictNode) AsFloat64 added in v0.12.0

func (n NonStrictNode) AsFloat64() (float64, error)

func (NonStrictNode) AsIPv4 added in v0.12.0

func (n NonStrictNode) AsIPv4() (proto.IPv4, error)

func (NonStrictNode) AsIPv6 added in v0.12.0

func (n NonStrictNode) AsIPv6() (proto.IPv6, error)

func (NonStrictNode) AsInt added in v0.9.5

func (n NonStrictNode) AsInt() (int, error)

func (NonStrictNode) AsInt64 added in v0.9.5

func (n NonStrictNode) AsInt64() (int64, error)

func (NonStrictNode) AsString added in v0.9.5

func (n NonStrictNode) AsString() (string, error)

func (NonStrictNode) AsStringArray added in v0.11.0

func (n NonStrictNode) AsStringArray() ([]string, error)

func (NonStrictNode) AsTime added in v0.12.0

func (n NonStrictNode) AsTime(scale int64) (time.Time, error)

func (NonStrictNode) AsUUID added in v0.12.0

func (n NonStrictNode) AsUUID() (uuid.UUID, error)

type Plugin

type Plugin struct {
	// contains filtered or unexported fields
}

func (*Plugin) Out

func (p *Plugin) Out(event *pipeline.Event)

func (*Plugin) Start

func (p *Plugin) Start(config pipeline.AnyConfig, params *pipeline.OutputPluginParams)

func (*Plugin) Stop

func (p *Plugin) Stop()

type Setting

type Setting struct {
	Key       string `json:"key"`
	Value     string `json:"value"`
	Important bool   `json:"important"`
}

type Settings

type Settings []Setting

type StrictNode added in v0.11.0

type StrictNode struct {
	*insaneJSON.StrictNode
}

func (StrictNode) AsFloat32 added in v0.12.0

func (s StrictNode) AsFloat32() (float32, error)

func (StrictNode) AsFloat64 added in v0.12.0

func (s StrictNode) AsFloat64() (float64, error)

func (StrictNode) AsIPv4 added in v0.12.0

func (s StrictNode) AsIPv4() (proto.IPv4, error)

func (StrictNode) AsIPv6 added in v0.12.0

func (s StrictNode) AsIPv6() (proto.IPv6, error)

func (StrictNode) AsStringArray added in v0.11.0

func (s StrictNode) AsStringArray() ([]string, error)

func (StrictNode) AsTime added in v0.12.0

func (s StrictNode) AsTime(scale int64) (time.Time, error)

func (StrictNode) AsUUID added in v0.12.0

func (s StrictNode) AsUUID() (uuid.UUID, error)

type ZeroValueNode added in v0.12.0

type ZeroValueNode struct{}

ZeroValueNode returns a null-value for all called methods. It is usually used to insert a zero-value into a column if the field type of the event does not match the column type.

func (ZeroValueNode) AsBool added in v0.12.0

func (z ZeroValueNode) AsBool() (bool, error)

func (ZeroValueNode) AsFloat32 added in v0.12.0

func (z ZeroValueNode) AsFloat32() (float32, error)

func (ZeroValueNode) AsFloat64 added in v0.12.0

func (z ZeroValueNode) AsFloat64() (float64, error)

func (ZeroValueNode) AsIPv4 added in v0.12.0

func (z ZeroValueNode) AsIPv4() (proto.IPv4, error)

func (ZeroValueNode) AsIPv6 added in v0.12.0

func (z ZeroValueNode) AsIPv6() (proto.IPv6, error)

func (ZeroValueNode) AsInt added in v0.12.0

func (z ZeroValueNode) AsInt() (int, error)

func (ZeroValueNode) AsInt64 added in v0.12.0

func (z ZeroValueNode) AsInt64() (int64, error)

func (ZeroValueNode) AsString added in v0.12.0

func (z ZeroValueNode) AsString() (string, error)

func (ZeroValueNode) AsStringArray added in v0.12.0

func (z ZeroValueNode) AsStringArray() ([]string, error)

func (ZeroValueNode) AsTime added in v0.12.0

func (z ZeroValueNode) AsTime(_ int64) (time.Time, error)

func (ZeroValueNode) AsUUID added in v0.12.0

func (z ZeroValueNode) AsUUID() (uuid.UUID, error)

func (ZeroValueNode) IsNull added in v0.12.0

func (z ZeroValueNode) IsNull() bool

Directories

Path Synopsis
Package mock_clickhouse is a generated GoMock package.
Package mock_clickhouse is a generated GoMock package.
