# Prometheus Data Connector

The Hasura Prometheus Connector connects to a Prometheus database, giving you an instant GraphQL API on top of your Prometheus data. The connector is built with the Go Data Connector SDK and implements the Data Connector Spec.
## Features

### Metrics Collection

#### How it works

The connector can introspect the metrics available on the Prometheus server and automatically transform them into collection queries. Each collection has a common structure:
```graphql
{
  <label_1>
  <label_2>
  # ...
  timestamp
  value
  labels
  values {
    timestamp
    value
  }
}
```
> [!NOTE]
> Labels and metrics are introspected from the Prometheus server at the current point in time. You need to introspect again whenever new labels or metrics are added.
The configuration plugin introspects the labels of each metric and defines them as collection columns, which enables Hasura permissions and remote joins. The connector supports basic comparison filters on labels:
```graphql
{
  process_cpu_seconds_total(
    where: {
      timestamp: { _gt: "2024-09-24T10:00:00Z" }
      job: {
        _eq: "node"
        _neq: "prometheus"
        _in: ["node", "prometheus"]
        _nin: ["ndc-prometheus"]
        _regex: "prometheus.*"
        _nregex: "foo.*"
      }
    }
    args: { step: "5m", offset: "5m", timeout: "30s" }
  ) {
    job
    instance
    timestamp
    value
    values {
      timestamp
      value
    }
  }
}
```
The connector detects whether you want an instant query or a range query from the operators used on the `timestamp` column:

- `_eq`: instant query at the exact timestamp.
- `_gt` / `_lt`: range query.

Range query mode is the default if none of the timestamp operators is set.

The `timestamp` and `value` fields hold the result of an instant query. If the request is a range query, `timestamp` and `value` are taken from the last item of the `values` series.
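For example, setting `_eq` on the `timestamp` column should run an instant query at exactly that moment, so only `timestamp` and `value` are meaningful in the result (a sketch based on the operator rules above, reusing the metric from the earlier examples):

```graphql
{
  process_cpu_seconds_total(
    where: { timestamp: { _eq: "2024-09-24T10:00:00Z" } }
  ) {
    job
    instance
    timestamp
    value
  }
}
```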
#### Common arguments

- `step`: the query resolution step width, as a duration string or a float number of seconds. The step should be set explicitly for range queries; although the connector can estimate an approximate step width, the result may be empty if the interval is too wide.
- `offset`: changes the time offset for individual instant and range vectors in a query.
- `timeout`: the evaluation timeout of the request.
- `fn`: an array of composable PromQL functions.
#### Aggregation

The `fn` argument is an array of PromQL function parameters. You can set multiple functions, which are composed into the final query. For example, take this PromQL query:

```promql
sum by (job) (rate(process_cpu_seconds_total[5m]))
```

The equivalent GraphQL query is:
```graphql
{
  process_cpu_seconds_total(
    where: { timestamp: { _gt: "2024-09-24T10:00:00Z" } }
    args: { step: "5m", fn: [{ rate: "5m" }, { sum: [job] }] }
  ) {
    job
    timestamp
    value
    values {
      timestamp
      value
    }
  }
}
```
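Because `fn` is an array, functions other than `sum` can be chained the same way. As a sketch, assuming the connector exposes an `avg` aggregation parameter analogous to `sum` (check the introspected schema for the exact list), `avg by (instance) (rate(process_cpu_seconds_total[1m]))` might be expressed as:

```graphql
{
  process_cpu_seconds_total(
    where: { timestamp: { _gt: "2024-09-24T10:00:00Z" } }
    # `avg` is assumed here by analogy with `sum` above
    args: { step: "5m", fn: [{ rate: "1m" }, { avg: [instance] }] }
  ) {
    instance
    timestamp
    value
  }
}
```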
### Native Query

#### How it works

When simple queries don't meet your needs, you can define native queries in the configuration file, with prepared variables using the `${<name>}` template:
```yaml
metadata:
  native_operations:
    queries:
      service_up:
        query: up{job="${job}", instance="${instance}"}
        labels:
          instance: {}
          job: {}
        arguments:
          instance:
            type: String
          job:
            type: String
```
The native query is exposed as a read-only function with two required arguments, `job` and `instance`.
```graphql
{
  service_up(
    start: "2024-09-24T00:00:00Z"
    job: "node"
    instance: "localhost:9090"
  ) {
    timestamp
    value
    labels
    values {
      value
      timestamp
    }
  }
}
```
> [!NOTE]
> Labels aren't automatically added. You need to define them manually.
#### Common arguments

- `start` & `end`: time range arguments for a range query.
- `time`: evaluation timestamp. Use this argument to run an instant query.
- `step`: the query resolution step width, as a duration string or a float number of seconds. The step should be set explicitly for range queries; although the connector can estimate an approximate step width, the result may be empty if the interval is too wide.
- `timeout`: the evaluation timeout of the request.
## Prometheus APIs

### Raw PromQL query

Execute a raw PromQL query directly. This API should be used by admins only. The result contains labels and values only.
```graphql
{
  promql_query(
    query: "process_cpu_seconds_total{job=\"node\"}"
    start: "2024-09-24T10:00:00Z"
    step: "5m"
  ) {
    labels
    values {
      timestamp
      value
    }
  }
}
```
## Configuration

### Authentication

#### Basic Authentication

```yaml
connection_settings:
  authentication:
    basic:
      username:
        env: PROMETHEUS_USERNAME
      password:
        env: PROMETHEUS_PASSWORD
```
#### HTTP Authorization

```yaml
connection_settings:
  authentication:
    authorization:
      type:
        value: Bearer
      credentials:
        env: PROMETHEUS_AUTH_TOKEN
```
#### OAuth2

```yaml
connection_settings:
  authentication:
    oauth2:
      token_url:
        value: http://example.com/oauth2/token
      client_id:
        env: PROMETHEUS_OAUTH2_CLIENT_ID
      client_secret:
        env: PROMETHEUS_OAUTH2_CLIENT_SECRET
```
#### Google Cloud

The configuration accepts either a Google application credentials JSON string or a file path. If the object is empty, the client automatically loads the credentials file from the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.

```yaml
connection_settings:
  authentication:
    google:
      # credentials:
      #   env: GOOGLE_APPLICATION_CREDENTIALS_JSON
      # credentials_file:
      #   env: GOOGLE_APPLICATION_CREDENTIALS
```
## Development

### Get started

Start the Docker services:

```sh
docker compose up -d
```

Introspect the configuration file and restart the connector:

```sh
make generate-test-config
docker compose restart ndc-prometheus
```

Build the test supergraph and rebuild the engine:

```sh
make build-supergraph-test
docker compose up -d --build engine
```

Browse the engine console at http://localhost:3000.