Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Dummy ¶
type Dummy struct {
	// Tag name associated with all records coming from this plugin.
	Tag string `json:"tag,omitempty"`
	// Dummy JSON record.
	Dummy string `json:"dummy,omitempty"`
	// Number of events generated per second.
	Rate *int32 `json:"rate,omitempty"`
	// Sample events to generate.
	Samples *int32 `json:"samples,omitempty"`
}
The Dummy input plugin generates dummy events. It is useful for testing, debugging, benchmarking, and getting started with Fluent Bit.
func (*Dummy) DeepCopy ¶
func (in *Dummy) DeepCopy() *Dummy
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Dummy.
func (*Dummy) DeepCopyInto ¶
func (in *Dummy) DeepCopyInto(out *Dummy)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type FluentbitMetrics ¶
type FluentbitMetrics struct {
	Tag string `json:"tag,omitempty"`
	// The rate at which metrics are collected from the host operating system. Default is 2 seconds.
	ScrapeInterval string `json:"scrapeInterval,omitempty"`
	// Scrape metrics upon start, useful to avoid waiting for 'scrape_interval' for the first round of metrics.
	ScrapeOnStart *bool `json:"scrapeOnStart,omitempty"`
}
func (*FluentbitMetrics) DeepCopy ¶
func (in *FluentbitMetrics) DeepCopy() *FluentbitMetrics
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FluentbitMetrics.
func (*FluentbitMetrics) DeepCopyInto ¶
func (in *FluentbitMetrics) DeepCopyInto(out *FluentbitMetrics)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (*FluentbitMetrics) Name ¶
func (_ *FluentbitMetrics) Name() string
func (*FluentbitMetrics) Params ¶
func (f *FluentbitMetrics) Params(_ plugins.SecretLoader) (*params.KVs, error)
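Each input plugin implements the same Name()/Params() pair: Name() returns the Fluent Bit plugin name, and Params() renders the struct's fields into the key/value pairs of an [INPUT] section. A simplified sketch of that pattern (the KVs type here is an assumption standing in for the real params.KVs, and the SecretLoader argument is dropped):

```go
package main

import "fmt"

// KVs is a simplified stand-in for params.KVs: an ordered list of pairs.
type KVs [][2]string

func (kvs *KVs) Insert(k, v string) { *kvs = append(*kvs, [2]string{k, v}) }

// Local mirror of FluentbitMetrics, for illustration only.
type FluentbitMetrics struct {
	Tag            string
	ScrapeInterval string
	ScrapeOnStart  *bool
}

// Name returns the Fluent Bit plugin name.
func (_ *FluentbitMetrics) Name() string { return "fluentbit_metrics" }

// Params renders only the fields that were actually set.
func (f *FluentbitMetrics) Params() *KVs {
	kvs := KVs{}
	if f.Tag != "" {
		kvs.Insert("Tag", f.Tag)
	}
	if f.ScrapeInterval != "" {
		kvs.Insert("scrape_interval", f.ScrapeInterval)
	}
	if f.ScrapeOnStart != nil {
		kvs.Insert("scrape_on_start", fmt.Sprint(*f.ScrapeOnStart))
	}
	return &kvs
}

func main() {
	on := true
	f := &FluentbitMetrics{Tag: "fb.metrics", ScrapeInterval: "2", ScrapeOnStart: &on}
	fmt.Printf("[INPUT]\n    Name %s\n", f.Name())
	for _, kv := range *f.Params() {
		fmt.Printf("    %s %s\n", kv[0], kv[1])
	}
}
```

Unset fields simply produce no line, which is why nearly every field above carries `omitempty`.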
type NodeExporterMetrics ¶
type NodeExporterMetrics struct {
	// Tag name associated with all records coming from this plugin.
	Tag string `json:"tag,omitempty"`
	// The rate at which metrics are collected from the host operating system. Default is 5 seconds.
	ScrapeInterval string `json:"scrapeInterval,omitempty"`
	Path *Path `json:"path,omitempty"`
}
The NodeExporterMetrics input plugin is based on Prometheus Node Exporter and collects system/host-level metrics.
func (*NodeExporterMetrics) DeepCopy ¶
func (in *NodeExporterMetrics) DeepCopy() *NodeExporterMetrics
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeExporterMetrics.
func (*NodeExporterMetrics) DeepCopyInto ¶
func (in *NodeExporterMetrics) DeepCopyInto(out *NodeExporterMetrics)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (*NodeExporterMetrics) Name ¶
func (_ *NodeExporterMetrics) Name() string
func (*NodeExporterMetrics) Params ¶
func (d *NodeExporterMetrics) Params(_ plugins.SecretLoader) (*params.KVs, error)
Params implements the Section() method.
type PrometheusScrapeMetrics ¶
type PrometheusScrapeMetrics struct {
	// Tag name associated with all records coming from this plugin.
	Tag string `json:"tag,omitempty"`
	// The host of the Prometheus metric endpoint that you want to scrape.
	Host string `json:"host,omitempty"`
	// The port of the Prometheus metric endpoint that you want to scrape.
	// +kubebuilder:validation:Minimum:=1
	// +kubebuilder:validation:Maximum:=65535
	Port *int32 `json:"port,omitempty"`
	// The interval to scrape metrics, default: 10s.
	ScrapeInterval string `json:"scrapeInterval,omitempty"`
	// The metrics URI endpoint, which must start with a forward slash, default: /metrics.
	MetricsPath string `json:"metricsPath,omitempty"`
}
Fluent Bit 1.9 includes additional metrics features to allow you to collect both logs and metrics with the same collector.
The initial release of the Prometheus Scrape metric plugin allows you to collect metrics from a Prometheus-based endpoint at a set interval. These metrics can be routed to metric-capable endpoints such as Prometheus Exporter, InfluxDB, or Prometheus Remote Write.
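The +kubebuilder validation markers on Port above are enforced by the API server, not by Go code, but the check they express is simply a range test. A small sketch of the equivalent check (validPort is a hypothetical helper, not part of this package):

```go
package main

import "fmt"

// validPort mirrors the +kubebuilder:validation markers on Port:
// Minimum=1, Maximum=65535.
func validPort(p int32) bool {
	return p >= 1 && p <= 65535
}

func main() {
	fmt.Println(validPort(9090))  // true
	fmt.Println(validPort(0))     // false
	fmt.Println(validPort(70000)) // false
}
```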
func (*PrometheusScrapeMetrics) DeepCopy ¶
func (in *PrometheusScrapeMetrics) DeepCopy() *PrometheusScrapeMetrics
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PrometheusScrapeMetrics.
func (*PrometheusScrapeMetrics) DeepCopyInto ¶
func (in *PrometheusScrapeMetrics) DeepCopyInto(out *PrometheusScrapeMetrics)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (*PrometheusScrapeMetrics) Name ¶
func (_ *PrometheusScrapeMetrics) Name() string
func (*PrometheusScrapeMetrics) Params ¶
func (p *PrometheusScrapeMetrics) Params(_ plugins.SecretLoader) (*params.KVs, error)
Params implements the Section() method.
type Systemd ¶
type Systemd struct {
	// Optional path to the Systemd journal directory;
	// if not set, the plugin will use default paths to read local-only logs.
	Path string `json:"path,omitempty"`
	// Specify the database file to keep track of monitored files and offsets.
	DB string `json:"db,omitempty"`
	// Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off.
	// This flag affects how the internal SQLite engine does synchronization to disk;
	// for more details about each option please refer to this section.
	// Note: this option was introduced in Fluent Bit v1.4.6.
	// +kubebuilder:validation:Enum:=Extra;Full;Normal;Off
	DBSync string `json:"dbSync,omitempty"`
	// The tag is used to route messages, but the Systemd plugin adds extra functionality:
	// if the tag includes a star/wildcard, it will be expanded with the Systemd unit name (e.g. host.* => host.UNIT_NAME).
	Tag string `json:"tag,omitempty"`
	// Set a maximum number of fields (keys) allowed per record.
	MaxFields int `json:"maxFields,omitempty"`
	// When Fluent Bit starts, the journal might have a high number of logs in the queue.
	// To avoid delays and reduce memory usage, this option allows specifying the maximum number of log entries that can be processed per round.
	// Once the limit is reached, Fluent Bit will continue processing the remaining log entries once Journald performs the notification.
	MaxEntries int `json:"maxEntries,omitempty"`
	// Allows performing a query over logs that contain specific Journald key/value pairs, e.g. _SYSTEMD_UNIT=UNIT.
	// The Systemd_Filter option can be specified multiple times in the input section to apply multiple filters as required.
	SystemdFilter []string `json:"systemdFilter,omitempty"`
	// Define the filter type when Systemd_Filter is specified multiple times. Allowed values are And and Or.
	// With And, a record is matched only when all of the Systemd_Filter entries match.
	// With Or, a record is matched when any of the Systemd_Filter entries matches.
	// +kubebuilder:validation:Enum:=And;Or
	SystemdFilterType string `json:"systemdFilterType,omitempty"`
	// Start reading new entries. Skip entries already stored in Journald.
	// +kubebuilder:validation:Enum:=on;off
	ReadFromTail string `json:"readFromTail,omitempty"`
	// Remove the leading underscore of the Journald field (key). For example, the Journald field _PID becomes the key PID.
	// +kubebuilder:validation:Enum:=on;off
	StripUnderscores string `json:"stripUnderscores,omitempty"`
}
The Systemd input plugin collects log messages from the Journald daemon in Linux environments.
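The Tag field's wildcard expansion (host.* => host.UNIT_NAME) happens inside Fluent Bit itself, but the transformation is just a substitution of the star with the record's systemd unit name. A sketch of the idea (expandTag is a hypothetical helper, not the plugin's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// expandTag illustrates the tag wildcard expansion described above:
// a star in the configured tag is replaced with the systemd unit name.
func expandTag(tag, unit string) string {
	return strings.ReplaceAll(tag, "*", unit)
}

func main() {
	fmt.Println(expandTag("host.*", "sshd.service"))
	// host.sshd.service
}
```

A tag without a wildcard passes through unchanged, so all records share one routing tag in that case.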
func (*Systemd) DeepCopy ¶
func (in *Systemd) DeepCopy() *Systemd
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Systemd.
func (*Systemd) DeepCopyInto ¶
func (in *Systemd) DeepCopyInto(out *Systemd)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
type Tail ¶
type Tail struct {
	// Set the initial buffer size to read file data.
	// This value is also used to increase the buffer size.
	// The value must conform to the Unit Size specification.
	// +kubebuilder:validation:Pattern:="^\\d+(k|K|KB|kb|m|M|MB|mb|g|G|GB|gb)?$"
	BufferChunkSize string `json:"bufferChunkSize,omitempty"`
	// Set the limit of the buffer size per monitored file.
	// When a buffer needs to be increased (e.g. very long lines),
	// this value is used to restrict how much the memory buffer can grow.
	// If reading a file exceeds this limit, the file is removed from the monitored file list.
	// The value must conform to the Unit Size specification.
	// +kubebuilder:validation:Pattern:="^\\d+(k|K|KB|kb|m|M|MB|mb|g|G|GB|gb)?$"
	BufferMaxSize string `json:"bufferMaxSize,omitempty"`
	// Pattern specifying a specific log file or multiple ones through the use of common wildcards.
	Path string `json:"path,omitempty"`
	// If enabled, it appends the name of the monitored file as part of the record.
	// The value assigned becomes the key in the map.
	PathKey string `json:"pathKey,omitempty"`
	// Set one or multiple shell patterns separated by commas to exclude files matching certain criteria,
	// e.g. exclude_path=*.gz,*.zip
	ExcludePath string `json:"excludePath,omitempty"`
	// For newly discovered files on start (without a database offset/position),
	// read the content from the head of the file, not the tail.
	ReadFromHead *bool `json:"readFromHead,omitempty"`
	// The interval of refreshing the list of watched files, in seconds.
	RefreshIntervalSeconds *int64 `json:"refreshIntervalSeconds,omitempty"`
	// Specify the additional time in seconds to monitor a file once it is rotated, in case some pending data is flushed.
	RotateWaitSeconds *int64 `json:"rotateWaitSeconds,omitempty"`
	// Ignores records which are older than this time in seconds.
	// Supports m, h, d (minutes, hours, days) syntax.
	// Default behavior is to read all records from the specified files.
	// Only available when a Parser is specified and it can parse the time of a record.
	// +kubebuilder:validation:Pattern:="^\\d+(m|h|d)?$"
	IgnoreOlder string `json:"ignoredOlder,omitempty"`
	// When a monitored file reaches its buffer capacity due to a very long line (Buffer_Max_Size),
	// the default behavior is to stop monitoring that file.
	// Skip_Long_Lines alters that behavior and instructs Fluent Bit to skip long lines
	// and continue processing other lines that fit into the buffer size.
	SkipLongLines *bool `json:"skipLongLines,omitempty"`
	// Specify the database file to keep track of monitored files and offsets.
	DB string `json:"db,omitempty"`
	// Set a default synchronization (I/O) method. Values: Extra, Full, Normal, Off.
	// +kubebuilder:validation:Enum:=Extra;Full;Normal;Off
	DBSync string `json:"dbSync,omitempty"`
	// Set a limit of memory that the Tail plugin can use when appending data to the engine.
	// If the limit is reached, the plugin is paused; when the data is flushed, it resumes.
	MemBufLimit string `json:"memBufLimit,omitempty"`
	// Specify the name of a parser to interpret the entry as a structured message.
	Parser string `json:"parser,omitempty"`
	// When a message is unstructured (no parser applied), it is appended as a string under the key name log.
	// This option allows defining an alternative name for that key.
	Key string `json:"key,omitempty"`
	// Set a tag (with regex-extract fields) that will be placed on lines read,
	// e.g. kube.<namespace_name>.<pod_name>.<container_name>
	Tag string `json:"tag,omitempty"`
	// Set a regex to extract fields from the file name.
	TagRegex string `json:"tagRegex,omitempty"`
	// If enabled, the plugin will try to discover multiline messages
	// and use the proper parsers to compose the outgoing messages.
	// Note that when this option is enabled the Parser option is not used.
	Multiline *bool `json:"multiline,omitempty"`
	// Wait period, in seconds, to process queued multiline messages.
	MultilineFlushSeconds *int64 `json:"multilineFlushSeconds,omitempty"`
	// Name of the parser that matches the beginning of a multiline message.
	// Note that the regular expression defined in the parser must include a group name (named capture).
	ParserFirstline string `json:"parserFirstline,omitempty"`
	// Optional extra parser to interpret and structure multiline entries.
	// This option can be used to define multiple parsers.
	ParserN []string `json:"parserN,omitempty"`
	// If enabled, the plugin will recombine split Docker log lines before passing them to any parser as configured above.
	// This mode cannot be used at the same time as Multiline.
	DockerMode *bool `json:"dockerMode,omitempty"`
	// Wait period, in seconds, to flush queued unfinished split lines.
	DockerModeFlushSeconds *int64 `json:"dockerModeFlushSeconds,omitempty"`
	// Specify an optional parser for the first line of the Docker multiline mode.
	// The parser name must be registered in the parsers.conf file.
	DockerModeParser string `json:"dockerModeParser,omitempty"`
	// DisableInotifyWatcher will disable inotify and use the file stat watcher instead.
	DisableInotifyWatcher *bool `json:"disableInotifyWatcher,omitempty"`
	// This will help to reassemble multiline messages originally split by Docker or CRI.
	// Specify one or more Multiline Parser definitions to apply to the content.
	MultilineParser string `json:"multilineParser,omitempty"`
}
The Tail input plugin monitors one or several text files. It behaves similarly to the tail -f shell command.
func (*Tail) DeepCopy ¶
func (in *Tail) DeepCopy() *Tail
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Tail.
func (*Tail) DeepCopyInto ¶
func (in *Tail) DeepCopyInto(out *Tail)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.