Documentation ¶
Overview ¶
Package podlogs enables live capturing of all events and log messages for some or all pods in a namespace as they get generated. This helps with debugging both a running test (what is going on right now?) and the output of a CI run (events appear in chronological order, and output that is normally unavailable, like command stdout messages, becomes available).
Index ¶
- func CopyAllLogs(ctx context.Context, cs clientset.Interface, ns string, to LogOutput) error
- func CopyPodLogs(ctx context.Context, cs clientset.Interface, ns, podName string, to LogOutput) error
- func WatchPods(ctx context.Context, cs clientset.Interface, ns string, to io.Writer, ...) (finalErr error)
- type LogOutput
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func CopyAllLogs ¶
func CopyAllLogs(ctx context.Context, cs clientset.Interface, ns string, to LogOutput) error
CopyAllLogs is basically CopyPodLogs for all current or future pods in the given namespace ns.
func CopyPodLogs ¶ added in v1.22.0
func CopyPodLogs(ctx context.Context, cs clientset.Interface, ns, podName string, to LogOutput) error
CopyPodLogs follows the logs of all containers in pod with the given podName, including those that get created in the future, and writes each log line as configured in the output options. It does that until the context is done or until an error occurs.
If podName is empty, it will follow all pods in the given namespace ns.
Beware that there is currently no way to force log collection before removing pods, which means that there is a known race between "stop pod" and "collect log entries". The alternative would be a blocking function that collects logs from all currently running pods, but that would have the disadvantage that already deleted pods are not covered.
Another race occurs when a pod shuts down: logging stops, but if the pod is not removed from the apiserver quickly enough, logging resumes and dumps the old log again. Previously this was allowed based on the assumption that it is better to log twice than to miss log messages from pods that started and immediately terminated, or when logging stopped temporarily.
But that turned out to be rather confusing, so now a heuristic is used: if log output of a container was already captured, then capturing does not resume if the pod is marked for deletion.
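The dedup heuristic described above can be sketched as a single decision function. This is a simplified stand-in written for illustration, not the package's actual code; the function name and boolean inputs are our own:

```go
package main

import "fmt"

// shouldResume sketches the heuristic: once output from a container
// has already been captured, capturing does not resume for a pod that
// is marked for deletion, which avoids dumping the same log twice.
// (Hypothetical helper, not part of the podlogs package.)
func shouldResume(alreadyCaptured, markedForDeletion bool) bool {
	if alreadyCaptured && markedForDeletion {
		return false
	}
	return true
}

func main() {
	// A live pod whose logging stopped temporarily: resume.
	fmt.Println(shouldResume(true, false))
	// A terminating pod whose logs were already captured: skip.
	fmt.Println(shouldResume(true, true))
	// A terminating pod that never got captured at all: still capture,
	// so that pods which start and immediately terminate are covered.
	fmt.Println(shouldResume(false, true))
}
```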
func WatchPods ¶
func WatchPods(ctx context.Context, cs clientset.Interface, ns string, to io.Writer, toCloser io.Closer) (finalErr error)
WatchPods prints pod status events for a certain namespace, or for all namespaces when the namespace name is empty. The closer can be nil if the caller doesn't want the file to be closed when watching stops.
Types ¶
type LogOutput ¶
type LogOutput struct {
	// If not nil, errors will be logged here.
	StatusWriter io.Writer

	// If not nil, all output goes to this writer with "<pod>/<container>:" as prefix.
	LogWriter io.Writer

	// Base directory for one log file per container.
	// The full path of each log file will be <log path prefix><pod>-<container>.log.
	LogPathPrefix string
}
LogOutput determines where output from CopyAllLogs goes.
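When LogPathPrefix is set, each container's log lands in its own file following the documented naming convention. A small sketch of that convention (the helper name and example values are ours, not part of the package):

```go
package main

import "fmt"

// logFilePath sketches the naming convention documented for
// LogOutput.LogPathPrefix: the full path of each log file is
// "<log path prefix><pod>-<container>.log". (Hypothetical helper,
// written here only to illustrate the format.)
func logFilePath(prefix, pod, container string) string {
	return fmt.Sprintf("%s%s-%s.log", prefix, pod, container)
}

func main() {
	// With LogPathPrefix = "/tmp/e2e/run1-", the "app" container of
	// pod "web-0" is written to /tmp/e2e/run1-web-0-app.log.
	fmt.Println(logFilePath("/tmp/e2e/run1-", "web-0", "app"))
}
```

Alternatively, setting LogWriter instead sends every line to a single writer, prefixed with "<pod>/<container>:", which is convenient for interleaved live output during a test run.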