podmonitor

package
v10.214.5+incompatible
Published: Aug 20, 2019 License: Apache-2.0 Imports: 23 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

var (
	// ErrHandlePUStartEventFailed is the error sent back if a start event fails
	ErrHandlePUStartEventFailed = errs.New("Aporeto Enforcer start event failed")

	// ErrNetnsExtractionMissing is the error when we are missing a PID or netns path after successful metadata extraction
	ErrNetnsExtractionMissing = errs.New("Aporeto Enforcer missed to extract PID or netns path")

	// ErrHandlePUStopEventFailed is the error sent back if a stop event fails
	ErrHandlePUStopEventFailed = errs.New("Aporeto Enforcer stop event failed")

	// ErrHandlePUDestroyEventFailed is the error sent back if a create event fails
	ErrHandlePUDestroyEventFailed = errs.New("Aporeto Enforcer destroy event failed")
)

Functions

func ResyncWithAllPods

func ResyncWithAllPods(ctx context.Context, c client.Client, evCh chan<- event.GenericEvent) error

ResyncWithAllPods is called from the implemented resync; it lists all pods and fires them down the event source (the generic event channel)

Types

type Config

type Config struct {
	Kubeconfig     string
	Nodename       string
	EnableHostPods bool

	MetadataExtractor extractors.PodMetadataExtractor
	NetclsProgrammer  extractors.PodNetclsProgrammer
	ResetNetcls       extractors.ResetNetclsKubepods
	SandboxExtractor  extractors.PodSandboxExtractor
}

Config is the config for the Kubernetes monitor

func DefaultConfig

func DefaultConfig() *Config

DefaultConfig provides a default configuration

func SetupDefaultConfig

func SetupDefaultConfig(kubernetesConfig *Config) *Config

SetupDefaultConfig adds defaults to a partial configuration

type DeleteController

type DeleteController struct {
	// contains filtered or unexported fields
}

DeleteController is responsible for cleaning up after Kubernetes: on the last reconcile event the pod has already been deleted, so our native ID is missing. It is also more reliable because we start feeding this controller with events as soon as we first see a deletion timestamp on a pod. It effectively does the work of a finalizer without needing one, and only kicks in once a pod has *really* been deleted.

func NewDeleteController

func NewDeleteController(c client.Client, pc *config.ProcessorConfig, sandboxExtractor extractors.PodSandboxExtractor, eventsCh chan event.GenericEvent) *DeleteController

NewDeleteController creates a new DeleteController.

func (*DeleteController) GetDeleteCh

func (c *DeleteController) GetDeleteCh() chan<- DeleteEvent

GetDeleteCh returns the delete channel on which to queue delete events

func (*DeleteController) GetReconcileCh

func (c *DeleteController) GetReconcileCh() chan<- struct{}

GetReconcileCh returns the channel on which to notify the controller about an immediate reconcile event

func (*DeleteController) Start

func (c *DeleteController) Start(z <-chan struct{}) error

Start implements the Runnable interface

type DeleteEvent

type DeleteEvent struct {
	PodUID        string
	SandboxID     string
	NamespaceName client.ObjectKey
}

DeleteEvent is used to send delete events to our event loop, which watches them for real deletion in the Kubernetes API. Once an object is gone, we send destroy events down to trireme.

type DeleteObject

type DeleteObject struct {
	// contains filtered or unexported fields
}

DeleteObject is the obj used to store in the event map.

type PodMonitor

type PodMonitor struct {
	// contains filtered or unexported fields
}

PodMonitor implements a monitor that sends pod events upstream. It is implemented as a filter on the standard DockerMonitor: it receives all the PU events from the DockerMonitor and, if the container is the POD container from Kubernetes, it connects to the Kubernetes API and adds the tags coming from Kubernetes that cannot otherwise be found.

func New

func New() *PodMonitor

New returns a new Kubernetes pod monitor.

func (*PodMonitor) Resync

func (m *PodMonitor) Resync(ctx context.Context) error

Resync requests the monitor to do a resync.

func (*PodMonitor) Run

func (m *PodMonitor) Run(ctx context.Context) error

Run starts the monitor.

func (*PodMonitor) SetupConfig

func (m *PodMonitor) SetupConfig(registerer registerer.Registerer, cfg interface{}) error

SetupConfig provides a configuration to implementations. Every implementation can have its own config type.

func (*PodMonitor) SetupHandlers

func (m *PodMonitor) SetupHandlers(c *config.ProcessorConfig)

SetupHandlers sets up handlers for monitors to invoke for various events such as processing unit events and synchronization events. It will be called by the consumer of the monitor before Start().

type ReconcilePod

type ReconcilePod struct {
	// contains filtered or unexported fields
}

ReconcilePod reconciles a Pod object

func (*ReconcilePod) Reconcile

func (r *ReconcilePod) Reconcile(request reconcile.Request) (reconcile.Result, error)

Reconcile reads the state of the cluster for a Pod object

type WatchPodMapper

type WatchPodMapper struct {
	// contains filtered or unexported fields
}

WatchPodMapper determines if we want to reconcile on a pod event. There are two limitations:
- the pod must be scheduled on a matching nodeName
- if the pod requests host networking, only reconcile if we want to enable host pods

func (*WatchPodMapper) Map

Map implements the handler.Mapper interface to emit reconciles for corev1.Pods. It filters pods by looking for a matching nodeName, and filters out pods that request host networking when we don't want to enable host pods.
