Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrHandlePUStartEventFailed is the error sent back if a start event fails
	ErrHandlePUStartEventFailed = errs.New("Aporeto Enforcer start event failed")

	// ErrNetnsExtractionMissing is the error when we are missing a PID or netns path after successful metadata extraction
	ErrNetsExtractionMissing = errs.New("Aporeto Enforcer missed to extract PID or netns path")

	// ErrHandlePUStopEventFailed is the error sent back if a stop event fails
	ErrHandlePUStopEventFailed = errs.New("Aporeto Enforcer stop event failed")

	// ErrHandlePUDestroyEventFailed is the error sent back if a destroy event fails
	ErrHandlePUDestroyEventFailed = errs.New("Aporeto Enforcer destroy event failed")
)
Functions ¶
func ResyncWithAllPods ¶
ResyncWithAllPods is called from the implemented resync; it lists all pods and sends each one down the event source (the generic event channel).
Types ¶
type Config ¶
type Config struct {
	Kubeconfig        string
	Nodename          string
	EnableHostPods    bool
	MetadataExtractor extractors.PodMetadataExtractor
	NetclsProgrammer  extractors.PodNetclsProgrammer
}
Config is the config for the Kubernetes monitor
func SetupDefaultConfig ¶
SetupDefaultConfig adds defaults to a partial configuration
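Since the signature of SetupDefaultConfig is not shown here, the sketch below only illustrates the common Go pattern such a function follows: fill in zero-value fields of a partial configuration and leave explicitly set fields alone. The field names follow the documented Config struct, but the default values and the exact signature are assumptions.

```go
package main

import "fmt"

// Simplified stand-in for the monitor's Config (subset of the documented fields).
type Config struct {
	Kubeconfig     string
	Nodename       string
	EnableHostPods bool
}

// setupDefaultConfig sketches the likely behavior of SetupDefaultConfig:
// any field still at its zero value receives a default; fields the caller
// set are preserved. The defaults used here are hypothetical.
func setupDefaultConfig(c *Config) *Config {
	if c == nil {
		c = &Config{}
	}
	if c.Nodename == "" {
		c.Nodename = "localhost" // hypothetical default
	}
	return c
}

func main() {
	cfg := setupDefaultConfig(&Config{Kubeconfig: "/etc/kubeconfig"})
	fmt.Println(cfg.Nodename, cfg.Kubeconfig)
}
```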
type DeleteController ¶
type DeleteController struct {
// contains filtered or unexported fields
}
DeleteController is responsible for cleaning up after Kubernetes, because we are missing our native ID on the last reconcile event, by which point the pod has already been deleted. It is also more reliable because we start feeding this controller with events from the moment we first see a deletion timestamp on a pod. It effectively does the work of a finalizer without needing one, and only kicks in once a pod has *really* been deleted.
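The pattern described above can be sketched with plain channels. This is a simplified, dependency-free analog, not the package's implementation: delete events are accumulated as they arrive, and on every reconcile notification each remembered pod is checked against the API (here a plain `exists` callback); once a pod is really gone, a destroy is fired with the remembered native ID. The field and callback names are illustrative.

```go
package main

import "fmt"

// DeleteEvent carries what the loop needs to fire a destroy later
// (illustrative fields; the real DeleteEvent is unexported detail here).
type DeleteEvent struct {
	NativeID string // the native PU ID remembered before the pod vanished
	PodKey   string // namespace/name key used to look the pod up
}

// runDeleteLoop accumulates delete events and, on each reconcile
// notification, destroys every pod that is no longer visible in the API.
func runDeleteLoop(
	deleteCh <-chan DeleteEvent,
	reconcileCh <-chan struct{},
	stop <-chan struct{},
	exists func(podKey string) bool,
	destroy func(nativeID string),
) {
	pending := map[string]DeleteEvent{}
	for {
		select {
		case ev := <-deleteCh:
			pending[ev.PodKey] = ev // remember the native ID while we still have it
		case <-reconcileCh:
			for key, ev := range pending {
				if !exists(key) { // pod really gone from the API
					destroy(ev.NativeID)
					delete(pending, key)
				}
			}
		case <-stop:
			return
		}
	}
}

func main() {
	deleteCh := make(chan DeleteEvent)
	reconcileCh := make(chan struct{})
	stop := make(chan struct{})
	existsCh := make(chan bool)           // scripted API answers
	destroyedCh := make(chan string, 1)   // records fired destroys

	go runDeleteLoop(deleteCh, reconcileCh, stop,
		func(string) bool { return <-existsCh },
		func(id string) { destroyedCh <- id })

	deleteCh <- DeleteEvent{NativeID: "pu-1234", PodKey: "default/nginx"}

	go func() { existsCh <- false }()
	reconcileCh <- struct{}{} // pod gone from the API: destroy fires
	fmt.Println("destroyed:", <-destroyedCh)
	close(stop)
}
```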
func NewDeleteController ¶
func NewDeleteController(c client.Client, pc *config.ProcessorConfig) *DeleteController
NewDeleteController creates a new DeleteController.
func (*DeleteController) GetDeleteCh ¶
func (c *DeleteController) GetDeleteCh() chan<- DeleteEvent
GetDeleteCh returns the delete channel on which to queue delete events
func (*DeleteController) GetReconcileCh ¶
func (c *DeleteController) GetReconcileCh() chan<- struct{}
GetReconcileCh returns the channel on which to notify the controller about an immediate reconcile event
func (*DeleteController) Start ¶
func (c *DeleteController) Start(z <-chan struct{}) error
Start implements the Runnable interface.
type DeleteEvent ¶
DeleteEvent is used to send delete events to our event loop which will watch them for real deletion in the Kubernetes API. Once an object is gone, we will send down destroy events to trireme.
type PodMonitor ¶
type PodMonitor struct {
// contains filtered or unexported fields
}
PodMonitor implements a monitor that sends pod events upstream. It is implemented as a filter on the standard DockerMonitor: it receives all the PU events from the DockerMonitor and, if the container is the POD container from Kubernetes, it connects to the Kubernetes API and adds the tags coming from Kubernetes that cannot be found.
func (*PodMonitor) Resync ¶
func (m *PodMonitor) Resync(ctx context.Context) error
Resync requests the monitor to perform a resync.
func (*PodMonitor) Run ¶
func (m *PodMonitor) Run(ctx context.Context) error
Run starts the monitor.
func (*PodMonitor) SetupConfig ¶
func (m *PodMonitor) SetupConfig(registerer registerer.Registerer, cfg interface{}) error
SetupConfig provides a configuration to implementations. Every implementation can have its own config type.
func (*PodMonitor) SetupHandlers ¶
func (m *PodMonitor) SetupHandlers(c *config.ProcessorConfig)
SetupHandlers sets up the handlers that monitors invoke for various events, such as processing unit events and synchronization events. It will be called before Start() by the consumer of the monitor.
type ReconcilePod ¶
type ReconcilePod struct {
// contains filtered or unexported fields
}
ReconcilePod reconciles a Pod object
type WatchPodMapper ¶
type WatchPodMapper struct {
// contains filtered or unexported fields
}
WatchPodMapper determines if we want to reconcile on a pod event. There are two limitations:
- the pod must be scheduled on a matching nodeName
- if the pod requests host networking, only reconcile if we want to enable host pods
func (*WatchPodMapper) Map ¶
func (w *WatchPodMapper) Map(obj handler.MapObject) []reconcile.Request
Map implements the handler.Mapper interface to emit reconciles for corev1.Pods. It effectively filters the pods by looking for a matching nodeName, and filters out pods that request host networking when we don't want to enable those.
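The two checks Map performs can be sketched as a plain predicate. This is a dependency-free illustration of the documented filtering logic, not the package's code; the Pod struct below is a minimal stand-in for the corev1.Pod fields the mapper cares about.

```go
package main

import "fmt"

// Minimal stand-in for the pod fields the mapper inspects.
type Pod struct {
	Name        string
	NodeName    string // node the pod is scheduled on
	HostNetwork bool   // whether the pod requests host networking
}

// wantReconcile mirrors the two documented checks of WatchPodMapper.Map:
// the pod must be scheduled on our node, and host-networking pods only
// pass when host pods are enabled.
func wantReconcile(p Pod, nodeName string, enableHostPods bool) bool {
	if p.NodeName != nodeName {
		return false // scheduled on a different node
	}
	if p.HostNetwork && !enableHostPods {
		return false // host networking requested, but host pods disabled
	}
	return true
}

func main() {
	fmt.Println(wantReconcile(Pod{Name: "a", NodeName: "node-1"}, "node-1", false))
	fmt.Println(wantReconcile(Pod{Name: "b", NodeName: "node-2"}, "node-1", false))
	fmt.Println(wantReconcile(Pod{Name: "c", NodeName: "node-1", HostNetwork: true}, "node-1", false))
}
```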