Output plugins
devnull
It provides an API to test pipelines and other plugins.
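For example, a minimal sketch of a test pipeline that reads events over HTTP (as in the s3 examples below) and discards them:

pipelines:
  test_pipeline:
    input:
      # any input will do; http is used here as in the s3 examples below
      type: http
    output:
      # devnull accepts every event and drops it, which is handy for testing pipelines
      type: devnull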
More details...
elasticsearch
It sends events to Elasticsearch. It uses the _bulk API to send events in batches.
If a network error occurs, delivery of the batch is retried indefinitely, each time against a randomly chosen endpoint.
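A minimal config sketch is shown below; the endpoints and index_format option names and their values are assumptions for illustration, so check the plugin's full option list before use.

pipelines:
  es_pipeline:
    input:
      # any input will do; http is used here as in the s3 examples below
      type: http
    output:
      type: elasticsearch
      # assumed option: list of endpoints; failed batches are retried against a randomly chosen one
      endpoints:
        - http://elasticsearch:9200
      # assumed option: index to send the _bulk batches into
      index_format: "logs-index"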
More details...
gelf
It sends event batches to the GELF endpoint. The transport-level protocol (TCP or UDP) is configurable.
It doesn't support UDP chunking, so don't use UDP if the event size may exceed 8192 bytes.
GELF messages are separated by a null byte. Each message is a JSON object with the following fields:
- version: string=1.1
- host: string
- short_message: string
- full_message: string
- timestamp: number
- level: number
- _extra_field_1: string
- _extra_field_2: string
- _extra_field_3: string
Every field with an underscore prefix _ will be treated as an extra field.
Allowed characters in field names are letters, numbers, underscores, dashes, and dots.
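A minimal config sketch is shown below; the endpoint option name and the target address are assumptions for illustration.

pipelines:
  gelf_pipeline:
    input:
      # any input will do; http is used here as in the s3 examples below
      type: http
    output:
      type: gelf
      # assumed option: host:port of the GELF endpoint (remember the 8192-byte limit if UDP is used)
      endpoint: "graylog.fake_host.org:12201"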
More details...
kafka
It sends event batches to Kafka brokers using the sarama library.
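A minimal config sketch is shown below; the brokers and default_topic option names are assumptions for illustration.

pipelines:
  kafka_pipeline:
    input:
      # any input will do; http is used here as in the s3 examples below
      type: http
    output:
      type: kafka
      # assumed options: broker list and the topic to produce event batches into
      brokers:
        - "kafka1:9092"
        - "kafka2:9092"
      default_topic: "logs"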
More details...
s3
Sends events to one or multiple S3 buckets.
bucket is the default bucket for events. Additional buckets can be described in the multi_buckets section; see the example below.
The bucket_field_event option is the name of a field to look up in each event.
If that field is present, the event is sent to the bucket named by its value instead of the default one.
⚠ Currently, bucket names in bucket and multi_buckets can't intersect.
⚠ If a dynamic bucket is moved to the config, it can leave some unsent data behind.
To send this data to S3, move the bucket directory from /var/log/dynamic_buckets/bucketName to /var/log/static_buckets/bucketName (/var/log is the default path)
and restart file.d.
Example
Standard example:
pipelines:
  mkk:
    settings:
      capacity: 128
    # input plugin is not important in this case, let's emulate http input.
    input:
      type: http
      emulate_mode: "no"
      address: ":9200"
    actions:
      - type: json_decode
        field: message
    output:
      type: s3
      file_plugin:
        retention_interval: 10s
      # endpoint, access_key, secret_key, bucket are required.
      endpoint: "s3.fake_host.org:80"
      access_key: "access_key1"
      secret_key: "secret_key2"
      bucket: "bucket-logs"
      bucket_field_event: "bucket_name"
Example with fan-out buckets:
pipelines:
  mkk:
    settings:
      capacity: 128
    # input plugin is not important in this case, let's emulate http input.
    input:
      type: http
      emulate_mode: "no"
      address: ":9200"
    actions:
      - type: json_decode
        field: message
    output:
      type: s3
      file_plugin:
        retention_interval: 10s
      # endpoint, access_key, secret_key, bucket are required.
      endpoint: "s3.fake_host.org:80"
      access_key: "access_key1"
      secret_key: "secret_key2"
      bucket: "bucket-logs"
      # bucket_field_event - an event with such a field is sent to the bucket named by its value,
      # if such a bucket exists: {"bucket_name": "secret", "message": 123} goes to bucket "secret".
      bucket_field_event: "bucket_name"
      # multi_buckets is optional and contains an array of buckets.
      multi_buckets:
        - endpoint: "otherS3.fake_host.org:80"
          access_key: "access_key2"
          secret_key: "secret_key2"
          bucket: "bucket-logs-2"
        - endpoint: "yet_anotherS3.fake_host.ru:80"
          access_key: "access_key3"
          secret_key: "secret_key3"
          bucket: "bucket-logs-3"
More details...
splunk
It sends events to Splunk.
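A minimal config sketch is shown below, assuming the plugin talks to the Splunk HTTP Event Collector; the endpoint and token option names are assumptions for illustration.

pipelines:
  splunk_pipeline:
    input:
      # any input will do; http is used here as in the s3 examples above
      type: http
    output:
      type: splunk
      # assumed options: HEC endpoint and its access token
      endpoint: "https://splunk.fake_host.org:8088/services/collector"
      token: "fake-hec-token"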
More details...
stdout
It writes events to stdout (also known as the console).
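For example, a minimal debugging pipeline that prints every incoming event to the console:

pipelines:
  debug_pipeline:
    input:
      # any input will do; http is used here as in the s3 examples above
      type: http
    output:
      type: stdout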
More details...
Generated using insane-doc