# Kubernetes Observer

The `k8s_observer` is a Receiver Creator-compatible "watch observer" that detects and reports Kubernetes pod, port, container, service, ingress, and node endpoints via the Kubernetes API.

## Example Config

```yaml
extensions:
  k8s_observer:
    auth_type: serviceAccount
    node: ${env:K8S_NODE_NAME}
    observe_pods: true
    observe_nodes: true
    observe_services: true
    observe_ingresses: true

receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    receivers:
      redis:
        rule: type == "port" && pod.name matches "redis"
        config:
          password: '`pod.labels["SECRET"]`'
      kubeletstats:
        rule: type == "k8s.node"
        config:
          auth_type: serviceAccount
          collection_interval: 10s
          endpoint: "`endpoint`:`kubelet_endpoint_port`"
          extra_metadata_labels:
            - container.id
          metric_groups:
            - container
            - pod
            - node
```

The `node` field can be set to a node name to limit discovered endpoints to that node. For example, the node's name can be obtained using the downward API inside a Collector pod spec as follows:

```yaml
env:
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```

This spec-determined value is then available via the `${env:K8S_NODE_NAME}` usage in the observer configuration.

## Config
All fields are optional.

| Name | Type | Default | Docs |
| ---- | ---- | ------- | ---- |
| `auth_type` | string | `serviceAccount` | How to authenticate to the K8s API server. This can be one of `none` (for no auth), `serviceAccount` (to use the standard service account token provided to the agent pod), or `kubeConfig` (to use credentials from `~/.kube/config`). |
| `node` | string | | The node name to limit the discovery of pod, port, and node endpoints. Providing no value (the default) results in discovering endpoints for all available nodes. |
| `observe_pods` | bool | `true` | Whether to report observed pod and port endpoints. If `true` and `node` is specified, it will only discover pod and port endpoints whose `spec.nodeName` matches the provided node name. If `true` and `node` isn't specified, it will discover all available pod and port endpoints. Please note that Collector connectivity to pods on other nodes depends on your cluster configuration and isn't guaranteed. |
| `observe_nodes` | bool | `false` | Whether to report observed `k8s.node` endpoints. If `true` and `node` is specified, it will only discover node endpoints whose `metadata.name` matches the provided node name. If `true` and `node` isn't specified, it will discover all available node endpoints. Please note that Collector connectivity to nodes depends on your cluster configuration and isn't guaranteed. |
| `observe_services` | bool | `false` | Whether to report observed `k8s.service` endpoints. |
| `observe_ingresses` | bool | `false` | Whether to report observed `k8s.ingress` endpoints. |
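
As a quick illustration of the defaults above, a configuration that watches only services must disable pod discovery explicitly, since `observe_pods` defaults to `true` (a minimal sketch):

```yaml
extensions:
  k8s_observer:
    auth_type: serviceAccount
    observe_pods: false     # defaults to true, so it must be disabled explicitly
    observe_services: true  # defaults to false
```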

More complete configuration examples of how to use this observer along with the `receiver_creator` can be found in the Receiver Creator's documentation.
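
For instance, a rule can also match the `k8s.service` endpoints this observer reports. The sketch below is illustrative only: the service name `my-metrics-service` and port `9090` are hypothetical placeholders, and the full set of variables available in rules is defined by the Receiver Creator's documentation:

```yaml
receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    receivers:
      prometheus_simple:
        # "my-metrics-service" and port 9090 are hypothetical placeholders
        rule: type == "k8s.service" && name matches "my-metrics-service"
        config:
          endpoint: '`endpoint`:9090'
```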

## Setting up RBAC permissions

When using the `serviceAccount` `auth_type`, the service account of the pod running the agent needs permission to read the K8s resources it should observe (i.e. pods, nodes, services, and ingresses). The service account running the pod therefore needs a `ClusterRole` that grants it permission to read those resources from the Kubernetes API. Below is an example of how to set this up:

- Create a `ServiceAccount` that the collector should use:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: otelcontribcol
  name: otelcontribcol
EOF
```
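
Optionally confirm that the account exists before moving on:

```bash
kubectl get serviceaccount otelcontribcol
```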

- Create a `ClusterRole`/`ClusterRoleBinding` that grants permission to read pods, nodes, services, and ingresses. Note: if you do not plan to observe all of these resources (e.g. if you are only interested in services), it is recommended to remove the resources you do not intend to observe from the configuration below (a trimmed sketch follows the full `ClusterRole`):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - services
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - watch
      - list
EOF
```
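
As an example of the trimming mentioned above, observing only services would reduce the `rules` to:

```yaml
rules:
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
```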

```bash
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otelcontribcol
subjects:
  - kind: ServiceAccount
    name: otelcontribcol
    namespace: default
EOF
```
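
You can verify that the binding took effect by impersonating the service account (using the namespace and name from the manifests above):

```bash
# Both commands should print "yes" once the ClusterRoleBinding is in place
kubectl auth can-i list pods --as=system:serviceaccount:default:otelcontribcol
kubectl auth can-i watch ingresses.networking.k8s.io --as=system:serviceaccount:default:otelcontribcol
```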

- Create a `ConfigMap` containing the configuration for the collector:

```bash
# EOF is quoted so the shell does not expand ${env:K8S_NODE_NAME} or the
# backtick expressions that belong to the collector configuration
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
data:
  config.yaml: |
    extensions:
      k8s_observer:
        auth_type: serviceAccount
        node: ${env:K8S_NODE_NAME}
        observe_pods: true
        observe_nodes: true
        observe_services: true
        observe_ingresses: true

    receivers:
      receiver_creator:
        watch_observers: [k8s_observer]
        receivers:
          redis:
            rule: type == "port" && pod.name matches "redis"
            config:
              password: '`pod.labels["SECRET"]`'
          kubeletstats:
            rule: type == "k8s.node"
            config:
              auth_type: serviceAccount
              collection_interval: 10s
              endpoint: "`endpoint`:`kubelet_endpoint_port`"
              extra_metadata_labels:
                - container.id
              metric_groups:
                - container
                - pod
                - node

    exporters:
      otlp:
        endpoint: <OTLP_ENDPOINT>

    service:
      # The extension must be enabled here for receiver_creator to find it
      extensions: [k8s_observer]
      pipelines:
        metrics:
          receivers: [receiver_creator]
          exporters: [otlp]
EOF
```

- Create the collector deployment, referring to the service account created earlier:

```bash
# EOF is quoted so the backticks in the comments below are not executed
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otelcontribcol
  labels:
    app: otelcontribcol
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otelcontribcol
  template:
    metadata:
      labels:
        app: otelcontribcol
    spec:
      serviceAccountName: otelcontribcol
      containers:
        - name: otelcontribcol
          # This image is created by running `make docker-otelcontribcol`.
          # If you are not building the collector locally, specify a published image: `otel/opentelemetry-collector-contrib`
          image: otelcontribcol:latest
          args: ["--config", "/etc/config/config.yaml"]
          env:
            # Expose the node name via the downward API, as described above,
            # so that ${env:K8S_NODE_NAME} resolves in the ConfigMap
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: config
              mountPath: /etc/config
          imagePullPolicy: IfNotPresent
      volumes:
        - name: config
          configMap:
            name: otelcontribcol
EOF
```
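
Once applied, check that the collector pod is running, and inspect its logs if endpoints are not being discovered (exact log contents vary by collector version):

```bash
kubectl get pods -l app=otelcontribcol
kubectl logs deployment/otelcontribcol
```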