# Tickscript package
The `tickscript` package helps convert TICKscripts to InfluxDB 2.x tasks. Most TICKscript functions have the same or similar counterparts in Flux, so this package only provides a set of convenience functions that ease the conversion and the creation of custom checks that run as tasks and trigger alerts in InfluxDB 2.x.

To learn more about monitoring and alerting in InfluxDB 2.x and Flux, see the InfluxDB documentation on monitoring and alerting.
## Available functions

- `select`
- `selectWindow`
- `groupBy`
- `compute`
- `join`
- `alert`
- `deadman`
- `defineCheck`
## Conversion guidelines
- Variable conversions: `var realm = 'qa'` becomes `realm = "qa"` in Flux, and a lambda such as `var warnLevel = lambda: "device_count" > 2000` becomes a function `warnLevel = (r) => r["device_count"] > 2000` in Flux.
- Both `batch` and `stream` translate to `from(bucket: ...)` in Flux.
- Every `batch`, `stream`, and `deadman` must be a separate task.
- The `every(duration)` and `offset(duration)` properties map to the `every` and `offset` fields in the task's option record. For better control or aligned scheduling, use the `cron` option instead, as shown below.
- The `period(duration)` property maps to `range(start: -duration)` in the Flux pipeline.
- `deadman()`'s `interval` maps to the task's `every` scheduling value.
- `groupBy(columns)` maps to `group(columns)`. The columns must include the internal `_measurement` column; for convenience, the `groupBy` function provided in this package adds it. Flux results are grouped by all tags by default. To ungroup, call `group()` without arguments. For example,
```
query('''
    SELECT mean("counter") AS "total_mean" WHERE ...
''')
    .groupBy('host')
    .period(1m)
```

can be rewritten to Flux as

```flux
import ts "contrib/bonitoo-io/tickscript"
import "influxdata/influxdb/schema"

from(bucket: ...)
    |> range(start: -1m)
    |> filter(fn: (r) => ...)
    |> schema.fieldsAsCols()
    |> ts.groupBy(columns: ["host"])
    |> ts.select(column: "counter", fn: mean, as: "total_mean")
```
- `groupBy(time(t), columns)` maps to `group(columns)` plus `aggregateWindow(..., every: t, ...)`. The package provides the convenience functions `groupBy(columns)` and `selectWindow(..., every: t, ...)` to achieve the same. For example,
```
query('''
    SELECT sum("counter") AS "total_sum" WHERE ...
''')
    .groupBy(time(10s), 'host')
    .period(1m)
    .fill(0)
```

can be rewritten to Flux as

```flux
import ts "contrib/bonitoo-io/tickscript"
import "influxdata/influxdb/schema"

from(bucket: ...)
    |> range(start: -1m)
    |> filter(fn: (r) => ...)
    |> schema.fieldsAsCols()
    |> ts.groupBy(columns: ["host"])
    |> ts.selectWindow(column: "counter", fn: sum, as: "total_sum", every: 10s, defaultValue: 0)
```
- `eval(expression)` corresponds to `map(fn: (r) => ({ r with ... }))` in Flux, for example:
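A minimal sketch, assuming hypothetical `failures` and `total` fields already pivoted into columns:

```flux
// eval(lambda: "failures" / "total").as('failure_ratio') becomes:
    |> map(fn: (r) => ({ r with failure_ratio: r.failures / r.total }))
```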
- TICKscript `alert` provides property methods to send alerts to event handlers or a topic. In Flux, use the `topic` parameter of `alert()` to route the alert to a topic.
- `stateChangesOnly` is a filter available in InfluxDB notification rules.
- A TICKscript pipeline with multiple alerts translates to multiple Flux pipelines, i.e.
```
var data = batch
    |query(...)

data
    |alert(...)
        .topic('A')
    |alert(...)
        .topic('B')
```

becomes

```flux
data = from(bucket: ...)
    |> range(start: -duration)
    ...

data
    |> alert(..., topic: "A")

data
    |> alert(..., topic: "B")
```
## Functions
### tickscript.select

`tickscript.select()` is a convenience function for selecting a column, with optional aggregation.
It is intended to work like a `SELECT x AS y` or `SELECT f(x) AS y` query (without time grouping).

Parameters:

- `column` - Existing column. Default value is `_value`.
- `fn` - Optional aggregation function. Default is none.
- `as` - Desired column name.
Examples:
```flux
import ts "contrib/bonitoo-io/tickscript"

from(bucket: "test")
    ...
    |> ts.select(column: "message_rate", as: "MsgRate") // query('''SELECT "message_rate" AS "MsgRate"''')
```
```flux
import ts "contrib/bonitoo-io/tickscript"

from(bucket: "test")
    ...
    |> ts.select(column: "counter", fn: mean, as: "count") // query('''SELECT mean("counter") AS "count"''')
```
### tickscript.selectWindow

`tickscript.selectWindow()` is a convenience function for selecting a column with time grouping and aggregation.
It is intended to work like a `SELECT f(x) AS y` query with `.groupBy(time(t), ...)`.

Parameters:

- `column` - Existing column. Default value is `_value`.
- `fn` - Aggregation function.
- `as` - Desired column name.
- `every` - Duration of windows.
- `defaultValue` - Value used to fill windows with a null aggregate value.
Examples:
```flux
import ts "contrib/bonitoo-io/tickscript"

from(bucket: "test")
    ...
    |> ts.selectWindow(column: "counter", fn: mean, as: "rate", every: 1m, defaultValue: 0.0) // query('''SELECT mean("counter") AS "rate"''').groupBy(time(1m))
```
### tickscript.groupBy

`tickscript.groupBy()` is a convenience function for result grouping.

Parameters:

- `columns` - Set of columns to group by. The implementation adds the `_measurement` column, which is required by the underlying `monitor` package.

See the Examples section below.
### tickscript.compute

`tickscript.compute()` is a convenience function for computing an aggregation on the data.
It is intended to be used like the TICKscript `|f(x).as(y)`, where `f` is an aggregation function.

Parameters:

- `column` - Existing column. Default value is `_value`.
- `fn` - Aggregation function.
- `as` - Desired column name.
Examples:
```flux
import ts "contrib/bonitoo-io/tickscript"

from(bucket: "test")
    ...
    |> ts.compute(column: "message_rate", fn: median, as: "median_message_rate") // query|median('message_rate').as('median_message_rate')
```
### tickscript.join

`tickscript.join()` is a convenience function for joining two streams.
It ensures that the result has a `_measurement` column and that it is in the group key.

Parameters:

- `tables` - Record with two streams. See the Flux `join` documentation for details.
- `on` - Optional list of columns to join on. Default is `["_time"]`.
- `measurement` - Measurement name.
Examples:
```flux
import ts "contrib/bonitoo-io/tickscript"

requests = from(bucket: "test")
    ...
    |> ts.select(column: "counter", fn: sum, as: "total_sum")

failures = from(bucket: "test")
    ...
    |> ts.select(column: "counter", fn: sum, as: "failure_sum")

ts.join(tables: {requests: requests, failures: failures}, measurement: "xte")
    |> map(fn: (r) => ({r with error_percent: float(v: r.failure_sum) / float(v: r.total_sum)}))
```
### tickscript.alert

`tickscript.alert()` checks input data and creates alerts.
It requires pivoted data (call `schema.fieldsAsCols()` before `tickscript.alert()`).

Parameters:

- `check` - Check data. It is a record required by the underlying `monitor.check()`. Use `defineCheck()` or create it manually.
- `id` - Function that constructs the alert ID. Default is `(r) => "${r._check_id}"`.
- `message` - Function that constructs the alert message. Default is `(r) => "Threshold Check: ${r._check_name} is: ${r._level}"`.
- `details` - Function that constructs the detailed alert message. Default is `(r) => ""`.
- `crit` - Predicate function that determines `crit` status. Default is `(r) => false`.
- `warn` - Predicate function that determines `warn` status. Default is `(r) => false`.
- `info` - Predicate function that determines `info` status. Default is `(r) => false`.
- `ok` - Predicate function that determines `ok` status. Default is `(r) => true`.
- `topic` - Topic name.

See the Examples section below.
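A minimal sketch (the bucket, field name, and thresholds here are just placeholders):

```flux
import ts "contrib/bonitoo-io/tickscript"
import "influxdata/influxdb/schema"

check = ts.defineCheck(id: "cpu-check", name: "CPU Check")

from(bucket: "test")
    |> range(start: -5m)
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_user")
    |> schema.fieldsAsCols()
    |> ts.groupBy(columns: ["host"])
    |> ts.alert(
        check: check,
        message: (r) => "CPU usage on ${r.host} is ${r._level}",
        crit: (r) => r.usage_user > 90.0,
        warn: (r) => r.usage_user > 80.0,
        topic: "system",
    )
```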
### tickscript.deadman

`tickscript.deadman()` creates an alert on low throughput.
It triggers a critical alert if the throughput drops below the `threshold` value.
It requires pivoted data (call `schema.fieldsAsCols()` before `tickscript.deadman()`).

Parameters:

- `check` - Check data. It is a record required by the underlying `monitor` package. Use `defineCheck()` or create it manually.
- `measurement` - Measurement name.
- `threshold` - Threshold value (integer).
- `id` - Function that constructs the alert ID. Default is `(r) => "${r._check_id}"`.
- `message` - Function that constructs the alert message. Default is `(r) => "Deadman Check: ${r._check_name} is: ${r._level}"`.
- `topic` - Topic name.

See the Examples section below.
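A minimal sketch (the bucket and measurement here are just placeholders):

```flux
import ts "contrib/bonitoo-io/tickscript"
import "influxdata/influxdb/schema"

option task = {name: "Heartbeat Deadman", every: 10m}

check = ts.defineCheck(id: "heartbeat-check", name: "Heartbeat Check")

from(bucket: "test")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "heartbeat")
    |> schema.fieldsAsCols()
    |> ts.groupBy(columns: ["host"])
    |> ts.deadman(check: check, measurement: "heartbeat", threshold: 1, topic: "system")
```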
### tickscript.defineCheck

`tickscript.defineCheck()` creates a check record that is required by `alert()` and `deadman()`.

Parameters:

- `name` - Name of the check.
- `id` - Unique identifier of the check.
- `type` - Check type. Default value: `"custom"`.
It returns a record with the following structure:

```flux
{
    _check_id: id,
    _check_name: name,
    _type: "custom",
    tags: {}
}
```
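For example, a check derived from the task name defined in the task options (the task name here is just a placeholder):

```flux
import ts "contrib/bonitoo-io/tickscript"

option task = {name: "Example Check Task", every: 1m}

check = ts.defineCheck(id: "${task.name}-check", name: "${task.name} Check")
```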
## Examples
### Batch node
```
var duration = 5m
var every = 1m
var db = 'gw'
var tier = 'qa'
var metric_type = 'kafka_message_in_rate'
var h_threshold = 5000

batch
    |query('SELECT mean(' + metric_type + ') AS "KafkaMsgRate" FROM ' + db + ' WHERE realm = \'' + tier + '\' AND "host" =~ /^kafka.+.m02/')
        .period(duration)
        .every(every)
        .groupBy('host', 'realm')
    |alert()
        .id('Realm: {{index .Tags "realm"}} - Hostname: {{index .Tags "host"}} / Metric: ' + metric_type + ' threshold alert')
        .message('{{ .ID }}: {{ .Level }} - {{ index .Fields "KafkaMsgRate" | printf "%0.2f" }}')
        .crit(lambda: "KafkaMsgRate" > h_threshold)
        .stateChangesOnly()
        .topic('TESTING')
```

can be rewritten to Flux as
```flux
import ts "contrib/bonitoo-io/tickscript"
import "influxdata/influxdb/schema"

// required task option
option task = {
    name: "Kafka Message Rate",
    every: 1m,
}

// create custom check info
check = ts.defineCheck(id: "${task.name}-check", name: "${task.name} Check")

// converted TICKscript
duration = 5m
every = 1m
db = "gw"
tier = "qa"
metric_type = "kafka_message_in_rate"
h_threshold = 5000.0

from(bucket: db)
    |> range(start: -duration)
    |> filter(fn: (r) => r._measurement == db)
    |> filter(fn: (r) => r.realm == tier and r.host =~ /^kafka.+.m02/)
    |> filter(fn: (r) => r._field == metric_type)
    |> schema.fieldsAsCols()
    |> ts.groupBy(columns: ["host", "realm"])
    |> ts.select(column: metric_type, fn: mean, as: "KafkaMsgRate")
    |> ts.alert(
        check: check,
        id: (r) => "Realm: ${r.realm} - Hostname: ${r.host} / Metric: ${metric_type} threshold alert",
        message: (r) => "${r.id}: ${r._level} - ${string(v: r.KafkaMsgRate)}",
        crit: (r) => r.KafkaMsgRate > h_threshold,
        topic: "TESTING",
    )
```
Use a notification rule as the topic handler, with an additional filter for the `_topic` tag value `"TESTING"`.
### Stream node
```
stream
    |from()
        .measurement('cpu')
        .groupBy('host')
        .where(lambda: "realm" != 'build')
    |deadman(10, 10m)
        .id('Deadman for system metrics')
        .message('{{ .ID }} is {{ .Level }} on {{ index .Tags "host" }}')
        .stateChangesOnly()
        .topic('DEADMEN')
```

can be rewritten to Flux as
```flux
import ts "contrib/bonitoo-io/tickscript"
import "influxdata/influxdb/schema"

// required task option
option task = {
    name: "System Metrics Deadman",
    every: 10m,
}

// custom check info
check = ts.defineCheck(id: "${task.name}-check", name: "${task.name} Check")

// converted TICKscript
from(bucket: ...)
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "cpu")
    |> filter(fn: (r) => r.realm != "build")
    |> schema.fieldsAsCols()
    |> ts.groupBy(columns: ["host"])
    |> ts.deadman(
        check: check,
        measurement: "cpu",
        threshold: 10,
        id: (r) => "Deadman for system metrics",
        message: (r) => "${r.id} is ${r._level} on ${if exists r.host then r.host else "unknown"}",
        topic: "DEADMEN",
    )
```
Use a notification rule as the topic handler, with an additional filter for the `_topic` tag value `"DEADMEN"`.