sql

package
v1.32.3
Published: Nov 18, 2024 License: MIT Imports: 15 Imported by: 0

README

SQL Output Plugin

This plugin writes metrics to a supported SQL database using a simple, hard-coded database schema. There is a table for each metric type with the table name corresponding to the metric name. There is a column per field and a column per tag with an optional column for the metric timestamp.

A row is written for every metric. This means multiple metrics are never merged into a single row, even if they have the same metric name, tags, and timestamp.

The plugin uses Go's generic "database/sql" interface and third-party drivers. See the driver-specific section for a list of supported drivers and details.

⭐ Telegraf v1.19.0 🏷️ datastore 💻 all

Getting started

To use the plugin, set the driver setting to the driver name appropriate for your database. Then set the data source name (DSN). The format of the DSN varies by driver but often includes a username, password, the database instance to use, and the hostname of the database server. The user account must have privileges to insert rows and create tables.
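
A minimal sketch, assuming a PostgreSQL server reachable through the pgx driver; the username, password, host, and database name below are placeholders for your own values:

  [[outputs.sql]]
    driver = "pgx"
    ## Placeholder DSN; substitute your own credentials, host and database.
    data_source_name = "postgres://telegraf:mypassword@localhost:5432/metrics"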

Generated SQL

The plugin generates simple ANSI/ISO SQL that is likely to work on any DBMS. It doesn't use language features that are specific to a particular DBMS. If you want to use a feature that is specific to a particular DBMS, you may be able to set it up manually outside of this plugin or through the init_sql setting.

The insert statements generated by the plugin use placeholder parameters. Most database drivers use question marks as placeholders, but PostgreSQL (via the pgx driver) uses numbered dollar-sign placeholders such as $1 and $2. The plugin chooses which placeholder style to use based on the selected driver.

Because of the nature of input plugins, the number of columns inserted per row for a given metric may vary. Since tables are created based on the tags and fields present in an incoming metric, it's possible the created table won't contain all the columns needed by later metrics. You might need to initialize the schema yourself to avoid this scenario.
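
One way to do this is to create the table up front, either directly in the database or through the init_sql setting described below, listing every column the metric may carry. The following sketch assumes a hypothetical metric named "cpu" with a "host" tag and "usage_idle" and "usage_user" fields on PostgreSQL; adjust the names and column types to your own metrics and database:

  [[outputs.sql]]
    driver = "pgx"
    data_source_name = "postgres://telegraf:mypassword@localhost:5432/metrics"
    ## Pre-create the table with all expected columns (hypothetical schema).
    init_sql = 'CREATE TABLE IF NOT EXISTS "cpu"("timestamp" TIMESTAMP, "host" TEXT, "usage_idle" DOUBLE PRECISION, "usage_user" DOUBLE PRECISION)'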

Advanced options

When the plugin first connects it runs SQL from the init_sql setting, allowing you to perform custom initialization for the connection.

Before inserting a row, the plugin checks whether the table exists. If it doesn't exist, the plugin creates the table. The existence check and the table creation statements can be changed through template settings. The template settings allow you to have the plugin create customized tables, or to skip table creation entirely by setting the check template to any query that executes without error, such as "select 1".
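
For example, if the schema is managed outside the plugin and you never want it to issue CREATE TABLE, the check template can be pointed at a query that always succeeds. This sketch assumes the tables already exist:

  [[outputs.sql]]
    driver = "pgx"
    data_source_name = "postgres://telegraf:mypassword@localhost:5432/metrics"
    ## Always succeeds, so the plugin skips table creation entirely.
    table_exists_template = "SELECT 1"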

The name of the timestamp column is "timestamp" but it can be changed with the timestamp_column setting. The timestamp column can be completely disabled by setting it to "".
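
For example, to store the metric time in a column named "time" instead ("time" is just an example name), or to drop the column entirely:

  [[outputs.sql]]
    ## Rename the timestamp column ...
    timestamp_column = "time"
    ## ... or uncomment the following instead to disable it completely.
    # timestamp_column = ""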

By changing the table creation template, it's possible with some databases to save a row insertion timestamp. You can add an additional column with a default value to the template, like "CREATE TABLE {TABLE}(insertion_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP, {COLUMNS})".
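
In the configuration this is done through the table_template setting; whether DEFAULT CURRENT_TIMESTAMP is accepted depends on the database in use:

  [[outputs.sql]]
    ## Record the insertion time of each row (database-dependent syntax).
    table_template = "CREATE TABLE {TABLE}(insertion_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP, {COLUMNS})"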

The mapping of metric types to SQL column types can be customized through the convert settings.

Global configuration options

In addition to the plugin-specific configuration settings, plugins support additional global and plugin configuration settings. These settings are used to modify metrics, tags, and fields, create aliases, and configure ordering, etc. See CONFIGURATION.md for more details.

Configuration

# Save metrics to an SQL Database
[[outputs.sql]]
  ## Database driver
  ## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
  ##  sqlite (SQLite3), snowflake (snowflake.com), clickhouse (ClickHouse)
  # driver = ""

  ## Data source name
  ## The format of the data source name is different for each database driver.
  ## See the plugin readme for details.
  # data_source_name = ""

  ## Timestamp column name
  # timestamp_column = "timestamp"

  ## Table creation template
  ## Available template variables:
  ##  {TABLE} - table name as a quoted identifier
  ##  {TABLELITERAL} - table name as a quoted string literal
  ##  {COLUMNS} - column definitions (list of quoted identifiers and types)
  # table_template = "CREATE TABLE {TABLE}({COLUMNS})"

  ## Table existence check template
  ## Available template variables:
  ##  {TABLE} - table name as a quoted identifier
  # table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"

  ## Initialization SQL
  # init_sql = ""

  ## Maximum amount of time a connection may be idle. "0s" means connections are
  ## never closed due to idle time.
  # connection_max_idle_time = "0s"

  ## Maximum amount of time a connection may be reused. "0s" means connections
  ## are never closed due to age.
  # connection_max_lifetime = "0s"

  ## Maximum number of connections in the idle connection pool. 0 means unlimited.
  # connection_max_idle = 2

  ## Maximum number of open connections to the database. 0 means unlimited.
  # connection_max_open = 0

  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
  ## plugin definition, otherwise additional config options are read as part of
  ## the table

  ## Metric type to SQL type conversion
  ## The values on the left are the data types Telegraf has and the values on
  ## the right are the data types Telegraf will use when sending to a database.
  ##
  ## The database values used must be data types the destination database
  ## understands. It is up to the user to ensure that the selected data type is
  ## available in the database they are using. Refer to your database
  ## documentation for what data types are available and supported.
  #[outputs.sql.convert]
  #  integer              = "INT"
  #  real                 = "DOUBLE"
  #  text                 = "TEXT"
  #  timestamp            = "TIMESTAMP"
  #  defaultvalue         = "TEXT"
  #  unsigned             = "UNSIGNED"
  #  bool                 = "BOOL"
  #  ## This setting controls how the unsigned type is generated. By default
  #  ## ("unsigned_suffix") the plugin takes the integer type and appends the
  #  ## unsigned value to it. The other option is "literal", which uses the
  #  ## value of the unsigned option as the column type directly. This is
  #  ## useful for a database like ClickHouse, where the unsigned type should
  #  ## be a value like "UInt64".
  #  # conversion_style = "unsigned_suffix"

Driver-specific information

go-sql-driver/mysql

MySQL default quoting differs from standard ANSI/ISO SQL quoting. You must use MySQL's ANSI_QUOTES mode with this plugin. You can enable this mode by using the setting init_sql = "SET sql_mode='ANSI_QUOTES';" or through a command-line option when running MySQL. See MySQL's docs for details on ANSI_QUOTES and how to set the SQL mode.

You can use a DSN of the format "username:password@tcp(host:port)/dbname". See the driver docs for details.

jackc/pgx

You can use a DSN of the format "postgres://username:password@host:port/dbname". See the driver docs for more details.

modernc.org/sqlite

This driver is not supported on the windows/386, mips, and mips64 platforms.

The DSN is a filename or a URL with the "file:" scheme. See the driver docs for details.
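
For example, writing to a local database file (the path is illustrative):

  [[outputs.sql]]
    driver = "sqlite"
    ## A plain file path also works; the "file:" scheme form is shown here.
    data_source_name = "file:/var/lib/telegraf/metrics.db"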

clickhouse
DSN

Currently, Telegraf's sql output plugin depends on clickhouse-go v1.5.4, which uses a different DSN format than the newer v2.* versions.
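
A v1-style DSN is typically a tcp:// URL with the connection options passed as query parameters. The sketch below uses placeholder values; consult the clickhouse-go v1 documentation for the full set of options:

  [[outputs.sql]]
    driver = "clickhouse"
    ## Assumed v1-style DSN; host, credentials and database are placeholders.
    data_source_name = "tcp://localhost:9000?username=telegraf&password=mypassword&database=metrics"

Combine this with the type conversion settings shown below.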

Metric type to SQL type conversion

The following configuration makes the mapping compatible with ClickHouse:

  [outputs.sql.convert]
    conversion_style     = "literal"
    integer              = "Int64"
    text                 = "String"
    timestamp            = "DateTime"
    defaultvalue         = "String"
    unsigned             = "UInt64"
    bool                 = "UInt8"
    real                 = "Float64"

See ClickHouse data types for more info.

microsoft/go-mssqldb

Telegraf doesn't have unit tests for go-mssqldb, so it should be treated as experimental.

snowflakedb/gosnowflake

Telegraf doesn't have unit tests for gosnowflake, so it should be treated as experimental.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type ConvertStruct

type ConvertStruct struct {
	Integer         string `toml:"integer"`
	Real            string `toml:"real"`
	Text            string `toml:"text"`
	Timestamp       string `toml:"timestamp"`
	Defaultvalue    string `toml:"defaultvalue"`
	Unsigned        string `toml:"unsigned"`
	Bool            string `toml:"bool"`
	ConversionStyle string `toml:"conversion_style"`
}

type SQL

type SQL struct {
	Driver                string          `toml:"driver"`
	DataSourceName        string          `toml:"data_source_name"`
	TimestampColumn       string          `toml:"timestamp_column"`
	TableTemplate         string          `toml:"table_template"`
	TableExistsTemplate   string          `toml:"table_exists_template"`
	InitSQL               string          `toml:"init_sql"`
	Convert               ConvertStruct   `toml:"convert"`
	ConnectionMaxIdleTime config.Duration `toml:"connection_max_idle_time"`
	ConnectionMaxLifetime config.Duration `toml:"connection_max_lifetime"`
	ConnectionMaxIdle     int             `toml:"connection_max_idle"`
	ConnectionMaxOpen     int             `toml:"connection_max_open"`
	Log                   telegraf.Logger `toml:"-"`
	// contains filtered or unexported fields
}

func (*SQL) Close

func (p *SQL) Close() error

func (*SQL) Connect

func (p *SQL) Connect() error

func (*SQL) SampleConfig

func (*SQL) SampleConfig() string

func (*SQL) Write

func (p *SQL) Write(metrics []telegraf.Metric) error
