# GitHub Receiver

| Status        |                      |
| ------------- | -------------------- |
| Stability     | development: metrics |
| Distributions | [liatrio]            |

[liatrio]:
The GitHub receiver receives data from GitHub. As a starting point, it
scrapes metrics from repositories and will be extended to include traces and
logs.
The current default set of metrics can be found in documentation.md.
These metrics can be used as leading indicators (capabilities) for the DORA
metrics, helping provide insight into modern-day engineering practices.
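Individual metrics can be toggled per scraper under a `metrics` key, as in the authenticated example below. A minimal sketch (the org name is illustrative; the metric name mirrors the one used later in this README):

```yaml
receivers:
  github:
    scrapers:
      scraper:
        github_org: myfancyorg # illustrative org name
        metrics:
          # enable a specific metric by name; see documentation.md for the full list
          vcs.repository.contributor.count:
            enabled: true
```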
## Getting Started
The collection interval is common to all scrapers and is set to 30 seconds by default.

Note: Generally speaking, if the vendor allows anonymous API calls, you
won't have to configure any authentication, but you may only see public
repositories and organizations, and you may run into significantly stricter
rate limits.
```yaml
github:
  collection_interval: <duration> # default = 30s; 300s recommended
  scrapers:
    scraper/1:
    scraper/2:
    ...
```
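For instance, assuming your endpoint permits anonymous API calls, a minimal unauthenticated configuration scraping a single public organization could look like the following sketch (the org name is illustrative):

```yaml
receivers:
  github:
    collection_interval: 300s # slower cadence to stay within anonymous rate limits
    scrapers:
      scraper:
        github_org: myfancyorg # illustrative org name
```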
A more complete example using the GitHub scrapers with authentication is as follows:
```yaml
extensions:
  bearertokenauth/github:
    token: ${env:GH_PAT}

receivers:
  github:
    initial_delay: 1s
    collection_interval: 60s
    scrapers:
      scraper:
        metrics:
          vcs.repository.contributor.count:
            enabled: true
        github_org: myfancyorg
        # Recommended optional query override; defaults to "{org,user}:<github_org>"
        search_query: "org:myfancyorg topic:o11yalltheway"
        endpoint: "https://selfmanagedenterpriseserver.com"
        auth:
          authenticator: bearertokenauth/github

service:
  extensions: [bearertokenauth/github]
  pipelines:
    metrics:
      receivers: [..., github]
      processors: []
      exporters: [...]
```
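For a quick local check, one way to see the emitted metrics is to pair the receiver with the debug exporter. A minimal end-to-end sketch, assuming the debug exporter is included in your collector build (the org name is illustrative):

```yaml
extensions:
  bearertokenauth/github:
    token: ${env:GH_PAT}

receivers:
  github:
    collection_interval: 60s
    scrapers:
      scraper:
        github_org: myfancyorg # illustrative org name
        auth:
          authenticator: bearertokenauth/github

exporters:
  debug:
    verbosity: detailed # print each metric data point to stdout

service:
  extensions: [bearertokenauth/github]
  pipelines:
    metrics:
      receivers: [github]
      exporters: [debug]
```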
A Grafana dashboard for metrics from this receiver exists on the Grafana
marketplace and can be found here.
## Scraping
Important:

- The GitHub scraper does not emit metrics for branches that have had no
  changes since being created from the default branch (trunk).
- Due to GitHub API limitations, the branch time metric may change when
  rebases occur, since rebasing recreates the commits with new timestamps.
For additional context on the GitHub scraper's limitations and inner workings,
please see the Scraping README.