Elktail & Elkuser
This is a fork that combines several upstream projects. The original README is available here.
Elktail is a command line utility to query and tail Elasticsearch logs. Kibana's web interface is powerful, but using it to search and analyze logs is not always practical. Sometimes you just wish to tail -f
the logs that you normally view in Kibana, to see what's happening right now. Elktail allows you to do just that, and more: tail the logs; search for errors and specific events on the command line; pipe the search results to any of the standard Unix tools; use it in scripts; or redirect the output to a file to effectively download a log from Elasticsearch/Kibana.
Elkuser is a command line utility for managing the Elasticsearch roles, users and tokens used by elktail.
Docker usage
There is a prebuilt OCI image that can be used like:
# find a tag here https://gitlab.com/piersharding/elktail/container_registry/2728088
docker run --rm -it --net host registry.gitlab.com/piersharding/elktail/elktail:<a tag> --url http://<to-your-elasticsearch>:9200 --raw
Installation
Download Binaries for elktail & elkuser
Latest builds can be found in the artefacts of the compile
job step - see https://gitlab.com/piersharding/elktail/-/pipelines .
Alternatively, releases are prepared here (download 'Other'): https://gitlab.com/piersharding/elktail/-/releases .
An example installation for Linux amd64 (you will need jq for this):
# get the latest Linux amd64 release eg: https://gitlab.com/api/v4/projects/33492293/packages/generic/elktail/1.13.0/elktail-linux-amd64-1.13.0
export ELKTAIL_RELEASE=`curl -s https://gitlab.com/api/v4/projects/33492293/releases | jq -r .[0].tag_name | sed s/v//`
curl -q -o elktail-linux-amd64-${ELKTAIL_RELEASE} \
https://gitlab.com/api/v4/projects/33492293/packages/generic/elktail/${ELKTAIL_RELEASE}/elktail-linux-amd64-${ELKTAIL_RELEASE} && \
chmod a+x elktail-linux-amd64-${ELKTAIL_RELEASE} && \
sudo mv elktail-linux-amd64-${ELKTAIL_RELEASE} /usr/local/bin/elktail
curl -q -o elkuser-linux-amd64-${ELKTAIL_RELEASE} \
https://gitlab.com/api/v4/projects/33492293/packages/generic/elkuser/${ELKTAIL_RELEASE}/elkuser-linux-amd64-${ELKTAIL_RELEASE} && \
chmod a+x elkuser-linux-amd64-${ELKTAIL_RELEASE} && \
sudo mv elkuser-linux-amd64-${ELKTAIL_RELEASE} /usr/local/bin/elkuser
Basic Usage - elktail
If elktail is invoked without any parameters, it will attempt to connect to the ES instance at localhost:9200 and tail the logs in the latest logstash index (the index that matches the pattern filebeat-\d+\.\d+\.\d+-.*), displaying the contents of the message field. If your logstash logs do not have a message field, you can change the output format using the -F (--format) parameter. For example:
elktail -F '%@timestamp %log'
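The % field substitution can be sketched as follows. This is a Python stand-in for elktail's Go implementation, not the actual code; the record contents and the helper name render are invented for illustration:

```python
import re

def render(fmt: str, record: dict) -> str:
    """Replace each %field reference with the value from the log record,
    mimicking elktail's --format handling (a simplified sketch)."""
    def lookup(match):
        # dotted names like agent.name walk nested objects
        value = record
        for part in match.group(1).split("."):
            if isinstance(value, dict) and part in value:
                value = value[part]
            else:
                return ""  # a missing field renders as an empty string
        return str(value)
    return re.sub(r"%([@\w.]+)", lookup, fmt)

record = {"@timestamp": "2022-02-22T06:58:57.010Z",
          "agent": {"name": "worker-1"},
          "message": "hello"}
print(render("%@timestamp %agent.name :: %message", record))
# 2022-02-22T06:58:57.010Z worker-1 :: hello
```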
By default, elktail will query, print, and exit. If you want to continuously tail the logs, then specify -f (--follow).
JSON Path Output
The --format string will be automatically interpreted as a JSON Path query, after the initial pass for % formatting options has been processed. The query engine is the https://github.com/PaesslerAG/jsonpath library, which is based on the query language described at https://goessner.net/articles/JsonPath/ .
The JSON Path expressions are applied to each raw JSON log row extracted from Elasticsearch based on the query expression. Each expression is enclosed in curly braces - {...} - and the expression result is evaluated to a string, eg:
elktail --format '{["@timestamp"]}{.agent.name}{.input.type}{.kubernetes.namespace}{.message}' -d "input.type: journald"
NOTE: for magic characters in an element name, use the following syntax style to escape them: .kubernetes.labels[\"app_kubernetes_io/component\"].
This query will find records with an input.type equal to journald and will then output the tab-delimited timestamp, agent name, input type, Kubernetes Namespace and message. The keen observer will note that journald type records never have a Kubernetes Namespace, so the failed JSON Path expression falls back to an empty string:
elktail -n 3 --format '{["@timestamp"]}{.agent.name}{.kubernetes.namespace}' -d "input.type: journald"
2022-02-22T06:58:57.010Z systems-k8s1-worker-1
2022-02-22T06:58:58.137Z systems-k8s1-worker-1
2022-02-22T06:58:58.363Z systems-k8s1-worker-4
To see the failed expression error messages, just turn up the verbosity with --v1:
$ elktail -n 3 --format '{["@timestamp"]}{.agent.name}{.kubernetes.namespace}' -d --v1 "input.type: journald"
INFO: HTTP Client: URL [http://192.168.99.131:9200] ...
INFO: selectIndices: CatIndices took 126.4188ms
INFO: Using indices: [filebeat-7.17.0-2022.02.22-000049]
2022-02-22T06:59:37.010Z systems-k8s1-worker-1 ERR [jsonpath: $.kubernetes.namespace] unknown key kubernetes
2022-02-22T06:59:38.140Z systems-k8s1-worker-1 ERR [jsonpath: $.kubernetes.namespace] unknown key kubernetes
2022-02-22T06:59:38.365Z systems-k8s1-worker-4 ERR [jsonpath: $.kubernetes.namespace] unknown key kubernetes
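The fallback behaviour can be mimicked with a small sketch. This is pure Python, not the actual PaesslerAG/jsonpath engine, and the record below is invented:

```python
def jsonpath_get(record, path, default=""):
    """Resolve a simple dotted JSONPath like '.kubernetes.namespace';
    a missing key yields the default instead of raising (elktail's
    silent fallback unless --v1 is set)."""
    value = record
    for key in path.strip(".").split("."):
        if isinstance(value, dict) and key in value:
            value = value[key]
        else:
            return default
    return value

journald_record = {"@timestamp": "2022-02-22T06:58:57.010Z",
                   "agent": {"name": "systems-k8s1-worker-1"},
                   "input": {"type": "journald"}}

fields = [".agent.name", ".kubernetes.namespace"]
# the namespace is missing on journald records, so it renders as ""
print("\t".join(str(jsonpath_get(journald_record, f)) for f in fields))
```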
If the .message field is actually a serialised JSON value, then it is possible to introspect it by drilling the JSONPath terms down into it. elktail will attempt to unpack .message and drill into it.
For example, if we have a .message with a value of:
"message":"{\"level\":\"info\",\"ts\":\"2022-02-15T05:28:08.400Z\",\"caller\":\"traceutil/trace.go:171\",\"msg\":\"trace[100325659] range\",\"detail\":\"{range_begin:/registry/mutatingwebhookconfigurations/vault-agent-injector-cfg; range_end:; response_count:1; response_revision:14508902; }\",\"duration\":\"146.104307ms\",\"start\":\"2022-02-15T05:28:08.253Z\",\"end\":\"2022-02-15T05:28:08.400Z\",\"steps\":[\"trace[100325659] 'agreement among raft nodes before linearized reading' (duration: 146.002796ms)\"],\"step_count\":1}"
Then drill into it with:
$ elktail -n 1 -a 2022-01-01T00:00 --format "MSG: {.message.level}" vault
MSG: warn
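What elktail does here is roughly the following. This is a Python sketch of the idea; the drill helper and the shortened message are illustrative only:

```python
import json

def drill(record, path):
    """Follow a dotted path; if a string value is hit mid-path, try to
    unpack it as serialised JSON and keep drilling (as elktail does
    with the .message field)."""
    value = record
    for key in path.strip(".").split("."):
        if isinstance(value, str):
            value = json.loads(value)  # unpack the serialised JSON payload
        value = value[key]
    return value

record = {"message": '{"level":"info","msg":"trace[100325659] range"}'}
print(drill(record, ".message.level"))
# info
```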
Further details on JSONPath can be found at the IETF https://tools.ietf.org/id/draft-goessner-dispatch-jsonpath-00.html .
Connecting Through a Socks5 Proxy
By far the easiest way to connect to a remote cluster for which there is no direct network connection is to use an SSH-based SOCKS5 proxy.
Create the proxy using ssh like:
ssh -D 6676 -A -N <remote server with DNS supported access to target cluster>
This will create a local proxy listening on localhost:6676. Then add the socks switch:
elktail --socks localhost:6676 ....
Connecting Through SSH Tunnel
If the ES instance's endpoint is not publicly available over the internet, you can also connect to it through an SSH tunnel. For example, if the ES instance is installed on elastic.example.com but port 9200 is firewalled, you can connect through an SSH tunnel:
elktail --ssh elastic.example.com
Elktail will connect as the current user to elastic.example.com, establish an ssh tunnel to port 9200, and then connect to ES through it.
You can also specify the ssh user, ssh port and tunnel local port (9199 by default) in the following format:
elktail --ssh [localport:][user@]sshhost.tld[:sshport]
If forwarding to an internal host (i.e. using the ssh host as a bastion/jump host), then specify both --ssh and --url, eg:
$ elktail --ssh ubuntu@some.remote --url http://internal-es.from.jumphost:9200 ...
Remember to add your ssh keys to the ssh-agent in your user session - eg:
$ ssh-add /path/to/pem/file.pem
Elktail Remembers Last Successful Connection
Once you successfully connect to ES, elktail will remember the connection parameters for future invocations. You can then invoke elktail without any parameters and it will connect to the last ES server it successfully connected to.
For example, once you successfully connect to ES using:
elktail --url "http://elastic.example.com:9200" --save
You can then invoke elktail without any parameters and it will again attempt to connect to elastic.example.com.
Configuration parameters for the last successful connection are stored in the ~/.elktail/ directory.
You can also specify a configuration file using --config /path/to/config/file.json, so that you can maintain multiple canned queries.
Queries
Elktail also supports ES query string searches as the argument. For example, in order to tail logs from host myhost.example.com that have a log level of ERROR, you could do the following:
elktail 'host:myhost.example.com AND level:error'
A cheatsheet for the KQL syntax can be found here with a pointer to a blog post - https://www.timroes.de/kibana-search-cheatsheet . The official documentation is here https://www.elastic.co/guide/en/kibana/current/kuery-query.html .
Often, when you start out, you do not know which fields you want to output or query by. A good place to start is to print the raw JSON output with the -r option and then find the fields of interest. This is made easier with the help of -p, which will pretty-print the output:
$ elktail -p
which will dump a pretty-printed record.
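When scanning raw output for candidate fields, it can help to flatten a record into dotted names that can go straight into --format or a query. A quick sketch of that idea (Python; the record below is invented):

```python
import json

def field_paths(obj, prefix=""):
    """Flatten a raw JSON record into dotted field names -- handy for
    deciding what to put in --format (a sketch, not part of elktail)."""
    for key, value in obj.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            yield from field_paths(value, path + ".")
        else:
            yield path

record = json.loads('{"@timestamp":"t","agent":{"name":"w1"},"message":"m"}')
print(sorted(field_paths(record)))
# ['@timestamp', 'agent.name', 'message']
```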
Date Ranges and Elastic's Logstash Indices
Logstash stores the logs in Elasticsearch in one-index-per-day indices. When specifying a date range, elktail needs to search through the appropriate indices depending on the dates selected. Currently, this will only work if your index name pattern contains dates in YYYY.MM.dd format (which is logstash's default).
Specifying Date Ranges
Elktail supports specifying a date range in order to query the logs at specific times. You can specify the date range using the after (-a) and before (-b) options followed by a date. When specifying dates, use the following format: YYYY-MM-ddTHH:mm:ss.SSS (e.g. 2016-06-17T15:20:00.000). The time part is optional and you can omit it (e.g. you can leave out seconds, milliseconds, or the whole time part and only specify the date).
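These timestamp shapes are ISO-8601 prefixes, so Python's datetime parses the same forms, which makes the rule easy to check (a sketch of the format, not elktail's own parser):

```python
from datetime import datetime

# Progressively shorter forms are all valid; omitted parts default to zero.
for ts in ("2016-06-17T15:20:00.000", "2016-06-17T15:20", "2016-06-17"):
    print(ts, "->", datetime.fromisoformat(ts))
```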
Examples
Search for errors after 3PM, April 1st, 2016:
elktail -a 2016-04-01T15:00 level:error
Search for errors between 1PM and 3PM on July 1st, 2016:
elktail -a 2016-07-01T13:00 -b 2016-07-01T15:00 level:error
datemath
The -a and -b (after and before) date selector options can accept datemath expressions. These would typically be something like -a 'now-2d' -b 'now-1d' for a one-day window covering the previous day.
Examples:
elktail -a "now-60m" -b "now-15m" level:error
Since tailing the logs does not really make sense when using date ranges, specifying date range options implies list-only mode and following is automatically disabled (i.e. elktail will behave as if you did not specify the -f option).
You can also specify a context time span along with the -a (after) option using -C <datemath value>. This will create a date range by adding and subtracting the time value to and from the after date, eg: -a '2022-05-01T00:00:00' -C '2d' will create an interval of 2022-05-01 +/- 2 days.
Note: this must always be specified in seconds, days, weeks, months or years, as it is added to after as a mathematical expression, eg: <after>||+<context-time>.
See for details: https://www.elastic.co/guide/en/elasticsearch/reference/8.0/common-options.html#date-math and https://github.com/vectordotdev/go-datemath .
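The -a/-C combination amounts to the interval arithmetic below. This is a Python sketch of the idea only; elktail itself delegates the real evaluation to Elasticsearch date math (which also supports months and years, unlike timedelta):

```python
from datetime import datetime, timedelta

# Simplified unit table; months/years are omitted because timedelta
# cannot represent them.
UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days", "w": "weeks"}

def context_window(after: str, context: str):
    """Return the (start, end) interval implied by -a <after> -C <context>,
    i.e. after +/- the context span."""
    anchor = datetime.fromisoformat(after)
    span = timedelta(**{UNITS[context[-1]]: int(context[:-1])})
    return anchor - span, anchor + span

start, end = context_window("2022-05-01T00:00:00", "2d")
print(start, "->", end)
# 2022-04-29 00:00:00 -> 2022-05-03 00:00:00
```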
Other Options
NAME:
elktail - utility for tailing Filebeat logs stored in ElasticSearch
USAGE:
elktail [global options] [query-string]
Options marked with (*) are saved between invocations of the command. Each time you specify an option marked with (*) previously stored settings are erased.
VERSION:
1.19.0
GLOBAL OPTIONS:
GLOBAL
--apikey value (*) API key to use when accessing via TLS [$ELKTAIL_APIKEY]
--cacert value (*) ca certificate to use when accessing via TLS [$ELKTAIL_CACERT]
--cert value (*) certificate to use when accessing via TLS [$ELKTAIL_CERT]
--descending Output rows in descending order (default: false)
--fields Output mapping fields and exit. Pass a single argument as a filter regex eg: --fields '^(agent|kubernetes)' (default: false)
--key value (*) key to use when accessing via TLS [$ELKTAIL_KEY]
--print-version, -V Print version (default: false)
--socks value, --socks-proxy value (*) Use a socks proxy to connect. Format for the argument is host:port eg: localhost:6676 - note this will automatically pick up the HTTP_PROXY env var if not set [$ELKTAIL_SOCKS_PROXY_URL]
--ssh value, --ssh-tunnel value (*) Use ssh tunnel to connect. Format for the argument is [localport:][user@]sshhost.tld[:sshport] [$ELKTAIL_SSH_TUNNEL]
--ssh-hop value, --ssh-tunnel-hop value (*) second ssh hop - must be used in conjunction with --ssh. Format for the argument is sshhost.tld[:sshport] [$ELKTAIL_SSH_HOP]
--terms Output query terms values in results (default: false)
--v1 Enable verbose output (for debugging) (default: false)
--v2 Enable even more verbose output (for debugging) (default: false)
--v3 Same as v2 but also trace requests and responses (for debugging) (default: false)
-C value, --context-time value +/- context time span relative to --after (ignores --before, defaults rows to 10,000) - expressed as datemath expression, and selects all records using search criteria eg: -C "15m" is +/- 15 minutes
-F value, --format value (*) Message format for the entries - field names are referenced using % sign, for example '%@timestamp %message' (default: "[%@timestamp] %agent.name :: %message")
-I, --insecure Insecure skip verify server certificate (default: false)
-N, --count Count records and exit (default: false)
-U value, --url value (*) ElasticSearch URL (default: "http://localhost:9200") [$ELKTAIL_URL]
-a value, --after value List results after or equal to specified date/time (example: -a "2022-06-17T00:00" also takes datemath expressions eg: -a "now-1d")
-b value, --before value List results before or equal to specified date/time (example: -b "2022-06-17T23:59" also takes datemath expressions eg: -a "now-15m")
-c value, --config value Configuration file - can be either .json or .yaml (default: "/home/piers/.elktail/default.json") [$ELKTAIL_CONFIG]
-d, --delimited Add a tab delimiter to output on field boundaries of --format (default: false)
-f, --follow Follow/stream result, like tail -f (default: false)
-h, --help Show help (default: false)
-i value, --index-pattern value (*) Index pattern - elktail will attempt to tail only the latest of logstash's indexes matched by the pattern (default: "filebeat-\\d+\\.\\d+\\.\\d+.*")
-l, --lineno Output record line numbers (default: false)
-n value, --number value (*) Number of entries fetched initially (default: 250)
-p, --pretty-print Output raw pretty printed (JSON) records (default: false)
-r, --raw Output raw (JSON) records (default: false)
-s, --save Save query terms - next invocation of elktail (without parameters) will use saved query terms. Any additional terms specified will be applied with AND operator to saved terms (default: false)
-t value, --timestamp-field value (*) Timestamp field name used for tailing entries (default: @timestamp) (default: "@timestamp")
-u value, --user value (*) Username and password for authentication. curl-like format (separated by colon) [$ELKTAIL_USER]
-w value, --tunnel-wait value (*) Number of seconds pause to enable tunnel to establish (default: 1) [$ELKTAIL_SSH_WAIT]
-z, --add-id Add record ID to output (default: false)
For extended help, go to https://gitlab.com/piersharding/elktail
Basic Usage - elkuser
The goal of elkuser is to provide an administration interface that makes it easier to pass credentials and a baseline configuration to users of elktail.
Configuration
It is best to use a configuration file that has been generated by elktail. This can be either JSON or YAML, but it should set out the outline of the configuration that you wish to pass on to users - eg: the Elasticsearch URL, CaCert, etc.
Example:
SearchTarget:
Url: https://elasticsearch.local.net:9200
IndexPattern: filebeat-.*
Cert: config/ca/kibana-tls-signed.crt
CaCert: config/ca/ca-bundle.pem
Key: config/ca/kibana-tls.key
APIKey: an-api-key==
QueryDefinition:
Terms:
- 'input.type: log'
Format: '%@timestamp: [%systemd.unit]:: %message '
TimestampField: '@timestamp'
InitialEntries: 250
TunnelWait: 1
User: ""
Password: ""
SSHTunnelParams: ""
SSHTunnelHop: ""
With this, the basic incantation is:
elkuser --config ./config.yaml --user superuser:superuserpassword ...
It is necessary to call elkuser with an administrative user/password, as the security APIs will not accept tokens.
Commands
The main commands handle role, user and token management. Each has sub-commands.
The general usage strategy is to:
- create a role that provides read-only access
- create user accounts for each user
- generate a token for each user account
- and then finally generate a config file for each user as a starting point for using elktail.
All commands and sub-commands have command line help explaining usage and available switches. Use --help to investigate.
Role Management
List existing roles with:
elkuser --config ./config.yaml --user superuser:superuserpassword role list
Create or update a role. The default role name is elktail-viewer. This has sufficient privileges to list indices, read indices and create its own API key (access token).
elkuser --config ./config.yaml --user superuser:superuserpassword role create
# add --confirm to update existing role
Delete an existing role with:
elkuser --config ./config.yaml --user superuser:superuserpassword role delete <role name>
NAME:
elkuser role - Role management
USAGE:
elkuser role command [command options]
COMMANDS:
help, h Shows a list of commands or help for one command
role:
list, l list roles: elkuser role list
create, c create/update a role: elkuser role create
delete, d delete an existing role: elkuser role delete
OPTIONS:
-h, --help Show help (default: false)
User Management
List existing users with:
elkuser --config ./config.yaml --user superuser:superuserpassword user list
Create or update a user. The default role is elktail-viewer (see above). This has sufficient privileges to list indices, read indices and create its own API key (access token). It is crucial to remember the password used. If one is not provided, then one will be generated and printed out. This password is required for the next stage of generating a token on behalf of a user.
elkuser --config ./config.yaml --user superuser:superuserpassword user create --role elktail-viewer <user name> (<new user password>)
# add --confirm to update existing user
Delete an existing user with:
elkuser --config ./config.yaml --user superuser:superuserpassword user delete <user name>
NAME:
elkuser user - User management
USAGE:
elkuser user command [command options]
COMMANDS:
help, h Shows a list of commands or help for one command
user:
list, l list users: elkuser user list
create, c create a new user: elkuser user create <user name> (<new user password>)
password, p Update user password: elkuser --user user password
delete, d delete an existing user: elkuser user delete
OPTIONS:
-h, --help Show help (default: false)
API Key Management
List existing tokens with:
elkuser --config ./config.yaml --user superuser:superuserpassword apikey list
Create an apikey. The default role is elktail-viewer (see above). This has sufficient privileges to list indices, read indices and create its own API key. The apikey will be printed out. It can never be retrieved again, so note it carefully.
# create on behalf of a user
elkuser --config ./config.yaml --user superuser:superuserpassword apikey create <user name> <password>
# or create for the current user
elkuser --config ./config.yaml --user user:userpassword apikey create
Delete an existing apikey with:
elkuser --config ./config.yaml --user superuser:superuserpassword apikey delete <apikey id>
Use elkuser apikey list to identify the apikey id.
Generate a configuration for a user with a apikey:
elkuser --config ./config.yaml --user superuser:superuserpassword apikey config <apikey>
CommandLine: elktail --url https://elasticsearch.local.net:9200 --index-pattern "filebeat-.*" --cacert config/ca/ca-bundle.pem --cert config/ca/kibana-tls-signed.crt --key config/ca/kibana-tls.key --apikey a-token== --format "%@timestamp: [%systemd.unit]:: %message " --timestamp-field @timestamp --number 250 input.type: log
Config File:
ConfigFile: ./logs.json
InitialEntries: 250
Password: ""
QueryDefinition:
Format: '%@timestamp: [%systemd.unit]:: %message '
Terms:
- 'input.type: log'
TimestampField: '@timestamp'
SSHTunnelHop: ""
SSHTunnelParams: ""
SearchTarget:
APIKey: a-token==
CaCert: |
-----BEGIN CERTIFICATE-----
MIIGITCCBA.....
-----END CERTIFICATE-----
Cert: ""
IndexPattern: filebeat-.*
Insecure: false
Key: ""
Url: https://elasticsearch.local.net:9200
TunnelWait: 1
User: ""
NAME:
elkuser apikey - API Key management
USAGE:
elkuser apikey command [command options]
COMMANDS:
help, h Shows a list of commands or help for one command
apikey:
list, l list apikeys: elkuser apikey list
config generate apikey config: elkuser apikey config <apikey>
create, c create a new apikey for a given user: elkuser apikey create
delete, d delete an existing apikey: elkuser apikey delete
OPTIONS:
-h, --help Show help (default: false)