package watcher v0.6.4

Documentation

Overview

Package watcher allows keeping track of the currently alive containers of a container engine.

Currently, the following container engines are supported:

- Docker
- plain containerd with nerdctl-project awareness

Usage

Container watchers for specific container engines should preferably be created using the particular engine's NewWatcher convenience function, such as:

import "github.com/thediveo/whalewatcher/watcher/moby"
moby := NewWatcher("")

The engine watcher NewWatcher() constructors additionally accept options. The only option currently defined specifies a container engine's PID. The PID information can then be used downstream in tools like github.com/thediveo/lxkns to translate container PIDs between different PID namespaces. It's up to the API user to supply the correct PIDs where necessary and known. The watchers themselves do not need the PID information for their own operation.
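
As an illustration only, a hedged sketch of passing an engine PID to the moby constructor shown above; the option name WithPID is an assumption here and the exact name may differ per engine package:

// assumption: WithPID names the option specifying the engine's PID;
// consult the concrete engine package for the exact option name.
mobywatcher := moby.NewWatcher("", moby.WithPID(12345))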

Gory Details

The really difficult part here is to properly synchronize with a container engine's state at the beginning without getting out of sync: while we get ordered events (do we actually?!), there's an event horizon (and this ain't Kubernetes), so we need to run an initial listing of containers. The problem now is that when events happen while the listing is in progress, we don't know how the events and the container listing results relate to each other.

To make matters only slightly more complicated, a single simple list request usually isn't enough; we need multiple round trips to the container engine in order to get our list of containers with the required details (such as labels, PIDs, and pausing state).

If some event can happen at any time, then how are we supposed to deal with the situation? Of course, assuming a lazy container host with but few events, and those events not happening while the watcher starts, is one way to deal with the problem. This is what many tools seem to do – judging from their code, the lazy route doesn't seem so bad after all...

Oh, another complication is that containerd doesn't enforce unique IDs (UIDs) the way Docker does with its IDs: in case of a slow listing and a sudden container death with a rapid resurrection while the listing is still going on, then with containerd – depending on the client creating the containers – we may see the same ID reappear. With Docker, we never see the same ID again, but at most only the same (service) name. It's sad that the clever Docker architecture of unique IDs plus names did not carry over to containerd's architecture.

Our watcher thus works as follows: it immediately starts listening to events and then kicks off listing (and "inspecting") containers. While the listing is going on, the watcher deals with certain events differently than after the listing has been done and its results processed.

Initially, the watcher remembers all dying container IDs during an ongoing listing. This dead container list is then used when processing the results of the full container listing to avoid adding dead containers to the watcher's final container list.
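
A minimal, engine-agnostic sketch of this filtering step, using plain container IDs instead of the watcher's real data structures (illustrative only, not the package's actual code):

// filterDeadDuringListing drops containers from the initial listing results
// whose IDs were seen dying while that listing was still in flight.
func filterDeadDuringListing(listed []string, deadDuringListing map[string]struct{}) []string {
	alive := make([]string, 0, len(listed))
	for _, id := range listed {
		if _, died := deadDuringListing[id]; died {
			continue // died while the listing was in progress
		}
		alive = append(alive, id)
	}
	return alive
}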

Similarly, container pause state change events are also queued for the duration of an ongoing full container listing. That's because we usually won't know the details about the state-changing containers yet (unless we were lucky enough to just see a container creation event). So we queue the state change events, optimized to store only the latest pause state per container. After the full container listing is done, we "replay" the queued pause state change events: this ensures that we end up with the correct pausing state for those containers that changed their pause states while the listing was in progress.
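
Again as an illustrative, engine-agnostic sketch (hypothetical names, not the package's actual code), keeping only the latest pause state per container and replaying it after the listing:

// pauseQueue remembers only the latest pause state per container ID while
// the initial listing is still in flight.
type pauseQueue map[string]bool

// queue records a pause/unpause event, overwriting any earlier queued state
// for the same container.
func (q pauseQueue) queue(id string, paused bool) { q[id] = paused }

// replay applies the queued states via the supplied setter once the full
// container listing has been processed.
func (q pauseQueue) replay(setPaused func(id string, paused bool)) {
	for id, paused := range q {
		setPaused(id, paused)
	}
}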

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Watcher

type Watcher interface {
	// Portfolio returns the current portfolio for reading. During
	// resynchronization with a container engine this can be the buffered
	// portfolio until the watcher has caught up with the new state after an
	// engine reconnect. For this reason callers must not keep the returned
	// Portfolio reference for longer periods of time, but just for what they
	// immediately need to query a Portfolio for.
	Portfolio() *whalewatcher.Portfolio
	// Ready returns a channel that gets closed after the initial
	// synchronization has been achieved. Watcher clients do not need to wait
	// for the Ready channel to get closed to work with the portfolio; this just
	// helps those applications that need to wait for results as opposed to
	// taking whatever information currently is available, or not.
	Ready() <-chan struct{}
	// Watch synchronizes the Portfolio to the connected container engine's
	// state with respect to alive containers and then continuously watches for
	// changes. Watch only returns after the specified context has been
	// cancelled. It will automatically reconnect in case of loss of connection
	// to the connected container engine.
	Watch(ctx context.Context) error
	// ID returns the (more or less) unique engine identifier; the exact format
	// is engine-specific.
	ID(ctx context.Context) string
	// Identifier of the type of container engine, such as "docker.com",
	// "containerd.io", et cetera.
	Type() string
	// Engine version information.
	Version(ctx context.Context) string
	// Container engine API path.
	API() string
	// Container engine PID, when known.
	PID() int
	// Underlying engine client (engine-specific).
	Client() interface{}
	// Close cleans up and releases any engine client resources, if necessary.
	Close()
}

Watcher allows keeping track of the currently alive containers of a container engine, optionally with the composer projects they're associated with (if supported).
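
A hedged usage sketch, combining the moby NewWatcher convenience constructor mentioned above with the Watcher interface (error handling elided):

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

mobywatcher := moby.NewWatcher("")
go func() { _ = mobywatcher.Watch(ctx) }() // synchronize, then keep watching
<-mobywatcher.Ready()                      // optional: wait for initial synchronization

portfolio := mobywatcher.Portfolio()
// ...query the portfolio now; don't keep the reference around for long.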

func New added in v0.4.0

func New(engine engineclient.EngineClient, buggeroff backoff.BackOff) Watcher

New returns a new Watcher tracking alive containers as they come and go, using the specified container EngineClient. If the backoff is nil, it defaults to backoff.StopBackOff, that is, any failed operation will never be retried.
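
A hedged construction sketch: newEngineClient stands in as a hypothetical placeholder for whichever engineclient implementation is used, and the backoff is assumed to come from github.com/cenkalti/backoff/v4:

var client engineclient.EngineClient = newEngineClient()  // hypothetical engine client constructor
w := watcher.New(client, backoff.NewExponentialBackOff()) // retry failed engine operations
// passing a nil backoff instead means failed operations are never retried
// (backoff.StopBackOff).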

Directories

Path	Synopsis
containerd	Package containerd provides a container Watcher for containerd engines.
moby	Package moby provides a container Watcher for Docker/Moby engines.
