SQL Output Plugin
The SQL output plugin saves Telegraf metric data to an SQL database.
The plugin uses a simple, hard-coded database schema. There is a table
for each metric type and the table name is the metric name. There is a
column per field and a column per tag. There is an optional column for
the metric timestamp.
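As a concrete sketch, a metric in InfluxDB line protocol such as

weather,location=us-midwest temperature=82 1465839830100400200

would be written to a table named "weather" with columns for the
timestamp, the "location" tag, and the "temperature" field. Under the
default convert settings shown in the Configuration section below, the
table the plugin creates would look roughly like this (exact quoting
and column order may vary by driver):

CREATE TABLE "weather"("timestamp" TIMESTAMP,"location" TEXT,"temperature" DOUBLE)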
A row is written for every input metric. This means multiple metrics
are never merged into a single row, even if they have the same metric
name, tags, and timestamp.
The plugin uses Golang's generic "database/sql" interface and third
party drivers. See the driver-specific section below for a list of
supported drivers and details. Additional drivers may be added in
future Telegraf releases.
Getting started
To use the plugin, set the driver setting to the driver name
appropriate for your database. Then set the data source name
(DSN). The format of the DSN varies by driver but often includes a
username, password, the database instance to use, and the hostname of
the database server. The user account must have privileges to insert
rows and create tables.
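For example, a minimal configuration using the pgx driver might look
like the following; the host, credentials, and database name are
placeholders, and the DSN format for each driver is covered in the
driver sections below:

[[outputs.sql]]
  driver = "pgx"
  data_source_name = "postgres://username:password@localhost:5432/telegraf"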
Generated SQL
The plugin generates simple ANSI/ISO SQL that is likely to work on any
DBMS. It doesn't use language features that are specific to a
particular DBMS. If you want to use a DBMS-specific feature, you may
be able to set it up manually outside of this plugin or through the
init_sql setting.
The insert statements generated by the plugin use placeholder
parameters. Most database drivers use question marks as placeholders,
but the Postgres driver uses numbered dollar-sign placeholders ($1,
$2, ...). The plugin chooses the placeholder style that matches the
selected driver.
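For illustration, inserts for the hypothetical "weather" metric above
would be shaped roughly like this with a question-mark driver:

INSERT INTO "weather"("timestamp","location","temperature") VALUES (?,?,?)

and like this with the pgx driver:

INSERT INTO "weather"("timestamp","location","temperature") VALUES ($1,$2,$3)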
Advanced options
When the plugin first connects it runs SQL from the init_sql setting,
allowing you to perform custom initialization for the connection.
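For example, assuming a Postgres database with a pre-existing schema
named "telegraf" (a hypothetical name), init_sql could direct the
plugin's tables into that schema:

init_sql = "SET search_path TO telegraf"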
Before inserting a row, the plugin checks whether the table exists. If
it doesn't exist, the plugin creates the table. The existence check
and the table creation statements can be changed through template
settings. The template settings allow you to have the plugin create
customized tables, or to skip table creation entirely by setting the
check template to any query that executes without error, such as
"select 1".
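For example, to rely on tables that are managed outside of Telegraf,
the existence check can be made unconditionally true so the plugin
never creates a table:

table_exists_template = "SELECT 1"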
The name of the timestamp column is "timestamp", but it can be changed
with the timestamp_column setting. The timestamp column can be
disabled entirely by setting timestamp_column to "".
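For example (the name "time" here is an arbitrary choice):

## rename the timestamp column
timestamp_column = "time"
## ...or disable it entirely
# timestamp_column = ""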
By changing the table creation template, it's possible with some
databases to save a row insertion timestamp. You can add an additional
column with a default value to the template, like "CREATE TABLE
{TABLE}(insertion_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
{COLUMNS})".
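As a configuration setting, that template would look like this:

table_template = "CREATE TABLE {TABLE}(insertion_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP, {COLUMNS})"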
The mapping of metric types to sql column types can be customized
through the convert settings.
Configuration
# Save metrics to an SQL Database
[[outputs.sql]]
## Database driver
## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
## sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
# driver = ""
## Data source name
## The format of the data source name is different for each database driver.
## See the plugin readme for details.
# data_source_name = ""
## Timestamp column name
# timestamp_column = "timestamp"
## Table creation template
## Available template variables:
## {TABLE} - table name as a quoted identifier
## {TABLELITERAL} - table name as a quoted string literal
## {COLUMNS} - column definitions (list of quoted identifiers and types)
# table_template = "CREATE TABLE {TABLE}({COLUMNS})"
## Table existence check template
## Available template variables:
## {TABLE} - table name as a quoted identifier
# table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
## Initialization SQL
# init_sql = ""
## Metric type to SQL type conversion
## The values on the left are the data types Telegraf has and the values on
## the right are the data types Telegraf will use when sending to a database.
##
## The database values used must be data types the destination database
## understands. It is up to the user to ensure that the selected data type is
## available in the database they are using. Refer to your database
## documentation for what data types are available and supported.
#[outputs.sql.convert]
# integer = "INT"
# real = "DOUBLE"
# text = "TEXT"
# timestamp = "TIMESTAMP"
# defaultvalue = "TEXT"
# unsigned = "UNSIGNED"
# bool = "BOOL"
## This setting controls how the SQL type for unsigned metric values is
## generated. By default ("unsigned_suffix") the plugin appends the unsigned
## value to the integer value, producing a type like "INT UNSIGNED". The
## other option is "literal", which uses the unsigned value by itself. This
## is useful for a database like ClickHouse, where the unsigned type should
## be a value like "UInt64".
# conversion_style = "unsigned_suffix"
go-sql-driver/mysql
MySQL's default quoting differs from standard ANSI/ISO SQL quoting.
You must use MySQL's ANSI_QUOTES mode with this plugin. You can enable
this mode with the setting init_sql = "SET sql_mode='ANSI_QUOTES';" or
through a command-line option when running MySQL. See MySQL's docs for
details on ANSI_QUOTES and on how to set the SQL mode.
You can use a DSN of the format
"username:password@tcp(host:port)/dbname". See the driver
docs for details.
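Putting the DSN and the ANSI_QUOTES requirement together, a MySQL
configuration might look like this (host and credentials are
placeholders):

[[outputs.sql]]
  driver = "mysql"
  data_source_name = "username:password@tcp(localhost:3306)/dbname"
  init_sql = "SET sql_mode='ANSI_QUOTES';"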
jackc/pgx
You can use a DSN of the format
"postgres://username:password@host:port/dbname". See the driver
docs for more details.
modernc.org/sqlite
This driver is not available on all operating systems and
architectures. It is only included in Linux builds for amd64, 386,
arm64, and arm, and in Darwin builds for amd64. It is not available
on Windows, FreeBSD, or other platform and architecture combinations.
The DSN is a filename or a URL with the scheme "file:". See the driver
docs for details.
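For example, on a supported platform (the file path is a placeholder):

[[outputs.sql]]
  driver = "sqlite"
  data_source_name = "file:/var/lib/telegraf/telegraf.db"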
clickhouse
Use the following metric type to SQL type conversion:
[outputs.sql.convert]
integer = "Int64"
text = "String"
timestamp = "DateTime"
defaultvalue = "String"
unsigned = "UInt64"
bool = "UInt8"
See ClickHouse data types for more info.
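Because these unsigned types are spelled out literally (e.g. "UInt64"),
you will likely also want conversion_style = "literal" in the convert
table, as described in the Configuration section above.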
denisenkom/go-mssqldb
Telegraf doesn't have unit tests for go-mssqldb, so it should be
treated as experimental.
snowflakedb/gosnowflake
Telegraf doesn't have unit tests for gosnowflake, so it should be
treated as experimental.