Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Collector ¶
type Collector struct {
	Scrapers []*Scraper
}
Collector implements the prometheus.Collector interface and wraps internal scrapers. During collection, scrapers are invoked and pass their results back so that they can be exported.
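As a rough sketch (not taken from this package's own examples), a program would typically wrap its scrapers in a Collector, register it, and serve the results with promhttp; the helper name and port below are assumptions:

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// serveMetrics is a hypothetical helper showing how the Collector is used.
func serveMetrics(scrapers []*Scraper) {
	// Wrap the scrapers in a Collector and register it with the default registry.
	prometheus.MustRegister(&Collector{Scrapers: scrapers})

	// Expose everything the scrapers produce on /metrics.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9100", nil))
}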
func (*Collector) Collect ¶
func (c *Collector) Collect(ch chan<- prometheus.Metric)
Collect implements the Collect method of the prometheus.Collector interface: https://pkg.go.dev/github.com/prometheus/client_golang/prometheus#Collector
func (*Collector) Describe ¶
func (c *Collector) Describe(ch chan<- *prometheus.Desc)
Describe implements the Describe method of the prometheus.Collector interface: https://pkg.go.dev/github.com/prometheus/client_golang/prometheus#Collector
type Metric ¶ added in v0.0.2
type Metric struct {
	Name        string
	Description string
	Labels      []string
	// contains filtered or unexported fields
}
Metric is an internal wrapper around Prometheus metrics. A Scraper can produce multiple time series, and this struct wraps each one so that they can all be treated uniformly.
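A minimal sketch of declaring a Metric; the name, help text, and label are hypothetical, and the unexported prometheus.Desc it wraps is filled in later by Scraper.InitializeMetrics:

// instancesMetric describes a hypothetical gauge with a single "region" label.
instancesMetric := &Metric{
	Name:        "ec2_instances", // PrefixMetricName prepends "aws_" to this name
	Description: "Number of running EC2 instances",
	Labels:      []string{"region"},
	// The wrapped *prometheus.Desc is unexported and is created once the
	// owning Scraper's InitializeMetrics runs.
}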
func (*Metric) PrefixMetricName ¶ added in v0.0.2
PrefixMetricName adds `aws_` to the beginning of every metric name.
type ScrapeResult ¶
type ScrapeResult struct {
	Labels []string
	Value  float64
	Type   prometheus.ValueType
}
ScrapeResult is the struct that must be returned from a scraper. The order of Labels should match the order defined in the Prometheus metric.
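For illustration only, a scraper function matching the Fn signature on Scraper (shown below) might return results shaped like this; the metric key, label value, and number are assumptions:

import (
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/prometheus/client_golang/prometheus"
)

// scrapeInstances is a hypothetical scraper function. Real code would use
// sess to call the AWS API instead of returning a hard-coded value.
func scrapeInstances(sess *session.Session) (map[string][]*ScrapeResult, error) {
	return map[string][]*ScrapeResult{
		"ec2_instances": {
			{
				// Label values must appear in the same order as the
				// Labels slice on the corresponding Metric ("region").
				Labels: []string{"us-east-1"},
				Value:  42,
				Type:   prometheus.GaugeValue,
			},
		},
	}, nil
}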
type Scraper ¶
type Scraper struct {
	Metrics        map[string]*Metric
	ID             string
	IamPermissions []string
	Fn             func(*session.Session) (map[string][]*ScrapeResult, error)
}
Scraper contains all the logic for a single Prometheus metric and the logic for how to scrape that metric. There is a one-to-one relationship between the metrics exported and scrapers.
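Putting the pieces together, and reusing the hypothetical instancesMetric and scrapeInstances sketches above, a scraper definition might look roughly like this (the ID and IAM permission are assumptions):

ec2Scraper := &Scraper{
	ID:             "ec2",                             // hypothetical scraper ID
	IamPermissions: []string{"ec2:DescribeInstances"}, // permissions the Fn needs
	Metrics: map[string]*Metric{
		// Map keys presumably correspond to the keys returned by Fn.
		"ec2_instances": instancesMetric,
	},
	Fn: scrapeInstances,
}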
func (*Scraper) InitializeMetrics ¶ added in v0.0.2
func (scraper *Scraper) InitializeMetrics()
InitializeMetrics assigns the prometheus.Desc pointer to the Metric property. We do this because once a Desc has been created, all of its values are private and we can't easily render the metadata behind a metric. Therefore, a Scraper is created with all of its metadata defined, and the prometheus.Desc is created once the Scraper is registered into the ScrapeRegistry.
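If the scrapers are wired up by hand rather than through the ScrapeRegistry mentioned above, a plausible call order looks like this sketch: initialize each scraper's metrics before registering the Collector so the prometheus.Desc values exist by collection time.

for _, s := range scrapers {
	// Create the prometheus.Desc for each Metric from the metadata
	// declared on the Scraper.
	s.InitializeMetrics()
}
prometheus.MustRegister(&Collector{Scrapers: scrapers})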