Documentation ¶
Overview ¶
Package cluster contains a Kubernetes cluster monitor.
This plugin collects high-level metrics about a K8s cluster and sends them to SignalFx. The basic technique is to pull data from the K8s API and keep up-to-date copies of datapoints for each metric that we collect, then ship them off at the end of each reporting interval. The K8s streaming watch API is used to efficiently maintain the state between read intervals (see `clusterstate.go`).
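To make the pattern concrete, here is a minimal, purely illustrative sketch (none of these names come from this package) that keeps the latest value per metric in a mutex-guarded map and flushes a snapshot of the whole cache once per reporting interval:

package main

import (
	"fmt"
	"sync"
	"time"
)

// metricCache holds the latest value per (resource, metric) pair,
// updated as K8s watch events arrive.
type metricCache struct {
	mu     sync.Mutex
	latest map[string]float64 // key: "<resourceUID>/<metricName>"
}

func (c *metricCache) set(key string, val float64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.latest[key] = val
}

// report ships a snapshot of every cached value once per reporting interval.
func (c *metricCache) report(interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			c.mu.Lock()
			for key, val := range c.latest {
				fmt.Printf("sending %s=%v\n", key, val) // stand-in for a SignalFx send
			}
			c.mu.Unlock()
		case <-stop:
			return
		}
	}
}

func main() {
	c := &metricCache{latest: map[string]float64{}}
	stop := make(chan struct{})
	go c.report(10*time.Second, stop)

	// In the real monitor, updates like this would be driven by K8s watch events.
	c.set("pod-uid-123/kubernetes.pod_phase", 2)

	time.Sleep(15 * time.Second)
	close(stop)
}

In the actual monitor the cache updates are driven by the watch events described below, and the datapoints are sent to SignalFx rather than printed.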
This plugin should only be run at one place in the cluster, or else metrics would be duplicated. This plugin supports two ways of ensuring that:

1) With the default configuration, the agents in the cluster elect a leader among themselves and only the elected leader reports cluster metrics.

2) You can set the config flag `alwaysClusterReporter` to `true` and this plugin will always report cluster metrics. This method uses fewer cluster resources (e.g. network sockets, watches on the API server) but requires special-case configuration for a single agent in the cluster, which may be more error prone.
This plugin requires read-only access to the K8s API.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Config ¶
type Config struct {
    config.MonitorConfig
    // If `true`, leader election is skipped and metrics are always reported.
    AlwaysClusterReporter bool `yaml:"alwaysClusterReporter"`
    // If set to true, the Kubernetes node name will be used as the dimension
    // to which to sync properties about each respective node. This is
    // necessary if your cluster's machines do not have unique machine-id
    // values, as can happen when machine images are improperly cloned.
    UseNodeName bool `yaml:"useNodeName"`
    // Config for the K8s API client
    KubernetesAPI *kubernetes.APIConfig `yaml:"kubernetesAPI" default:"{}"`
}
Config for the K8s monitor
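A hypothetical usage sketch follows; the import path is an assumption, and in practice the agent populates these fields from its YAML configuration via the struct tags shown above rather than in Go code:

package main

import (
	cluster "github.com/signalfx/signalfx-agent/internal/monitors/kubernetes/cluster" // assumed import path
)

func main() {
	conf := &cluster.Config{
		AlwaysClusterReporter: true, // skip leader election; this agent always reports
		UseNodeName:           true, // key node properties by node name rather than machine-id
	}
	_ = conf
}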
type Monitor ¶
Monitor for K8s Cluster Metrics. Also handles syncing certain properties about pods.
type State ¶
type State struct {
// contains filtered or unexported fields
}
State makes use of the K8s client's "reflector" helper to watch the API server for changes and keep the datapoint cache up to date.
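For illustration only, a reflector-backed watch built directly with client-go might look like the sketch below (this is not the package's actual code); the event handler bodies are where cached datapoints would be added, refreshed, or dropped:

package clusterexample

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchPods keeps a local view of pods in sync with the API server using the
// informer/reflector machinery from client-go.
func watchPods(client kubernetes.Interface, stop <-chan struct{}) {
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())

	_, controller := cache.NewInformer(lw, &v1.Pod{}, 0, cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { /* add datapoints for the new pod */ },
		UpdateFunc: func(oldObj, newObj interface{}) { /* refresh cached datapoints */ },
		DeleteFunc: func(obj interface{}) { /* drop datapoints for the deleted pod */ },
	})

	go controller.Run(stop)
}

Here NewInformer wires a reflector to the list/watch source so the local view stays current without polling, which is the same mechanism State relies on for each watched resource type.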
func (*State) Start ¶
func (cs *State) Start()
Start starts syncing any resource that isn't already being synced
func (*State) Stop ¶
func (cs *State) Stop()
Stop all running goroutines. There is a bug/limitation in the k8s go client's Controller where goroutines are leaked even when using the stop channel properly. See https://github.com/kubernetes/client-go/blob/release-6.0/tools/cache/controller.go#L144