Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrHandlePUStartEventFailed is the error sent back if a start event fails
	ErrHandlePUStartEventFailed = errs.New("Aporeto Enforcer start event failed")

	// ErrNetnsExtractionMissing is the error when we are missing a PID or netns path after successful metadata extraction
	ErrNetnsExtractionMissing = errs.New("Aporeto Enforcer missed to extract PID or netns path")

	// ErrHandlePUStopEventFailed is the error sent back if a stop event fails
	ErrHandlePUStopEventFailed = errs.New("Aporeto Enforcer stop event failed")

	// ErrHandlePUDestroyEventFailed is the error sent back if a destroy event fails
	ErrHandlePUDestroyEventFailed = errs.New("Aporeto Enforcer destroy event failed")
)
Functions ¶
func ResyncWithAllPods ¶
func ResyncWithAllPods(ctx context.Context, c client.Client, i *ResyncInfoChan, evCh chan<- event.GenericEvent, nodeName string) error
ResyncWithAllPods is called from the monitor's Resync implementation. It lists all pods and fires them down the event source (the generic event channel). It blocks until every pod present at the time of the call has gone through `Reconcile` at least once.
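The blocking behavior described above can be sketched with plain channels: fire every known item down an event channel, then wait until each one has been acknowledged at least once. The names here (`resyncAll`, `acks`) are illustrative, not the package's API.

```go
package main

import (
	"fmt"
	"sync"
)

// resyncAll mimics the semantics of ResyncWithAllPods: it fires every pod
// known at call time down the event channel, then blocks until a worker
// (standing in for Reconcile) has acknowledged each one at least once.
func resyncAll(pods []string, events chan<- string, acks <-chan string) {
	pending := make(map[string]bool, len(pods))
	for _, p := range pods {
		pending[p] = true
		events <- p
	}
	for len(pending) > 0 {
		delete(pending, <-acks)
	}
}

func main() {
	pods := []string{"ns/pod-a", "ns/pod-b"}
	events := make(chan string, len(pods))
	acks := make(chan string, len(pods))

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // worker standing in for the Reconcile loop
		defer wg.Done()
		for i := 0; i < len(pods); i++ {
			acks <- <-events
		}
	}()

	resyncAll(pods, events, acks) // returns only after both pods are acked
	wg.Wait()
	fmt.Println("resynced", len(pods), "pods")
}
```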
Types ¶
type Config ¶
type Config struct {
	Kubeconfig                string
	Nodename                  string
	EnableHostPods            bool
	Workers                   int
	MetadataExtractor         extractors.PodMetadataExtractor
	NetclsProgrammer          extractors.PodNetclsProgrammer
	PidsSetMaxProcsProgrammer extractors.PodPidsSetMaxProcsProgrammer
	ResetNetcls               extractors.ResetNetclsKubepods
	SandboxExtractor          extractors.PodSandboxExtractor
}
Config is the config for the Kubernetes monitor
func SetupDefaultConfig ¶
SetupDefaultConfig adds defaults to a partial configuration
type DeleteController ¶
type DeleteController struct {
// contains filtered or unexported fields
}
DeleteController is responsible for cleaning up after Kubernetes: by the time of the last reconcile event the pod has already been deleted, so our native ID is missing. This approach is also more reliable because the controller is fed with events starting from the moment we first see a deletion timestamp on a pod. It effectively does the work of a finalizer without needing one, and only kicks in once a pod has *really* been deleted.
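The finalizer-like loop described above can be sketched as follows. This is an illustrative simplification, not the real implementation: queued delete events are re-checked against the API on every reconcile tick, and a destroy is emitted only once the pod is really gone.

```go
package main

import "fmt"

// deleteLoop sketches the controller's state: pods on which we have seen a
// deletion timestamp, but whose destroy event has not yet been sent.
type deleteLoop struct {
	queued map[string]bool
}

// tick checks every queued pod against the API (stubbed here by the exists
// callback) and returns the pods for which a destroy event should now be
// sent upstream. Pods still present in the API stay queued for a later tick.
func (d *deleteLoop) tick(exists func(string) bool) []string {
	var destroy []string
	for pod := range d.queued {
		if !exists(pod) {
			destroy = append(destroy, pod)
			delete(d.queued, pod)
		}
	}
	return destroy
}

func main() {
	d := &deleteLoop{queued: map[string]bool{"ns/pod-a": true, "ns/pod-b": true}}

	// First tick: pod-a is already gone from the API, pod-b is still terminating.
	api := map[string]bool{"ns/pod-b": true}
	exists := func(p string) bool { return api[p] }
	fmt.Println(d.tick(exists)) // destroys pod-a only

	// Second tick: pod-b has now disappeared too.
	delete(api, "ns/pod-b")
	fmt.Println(d.tick(exists)) // destroys pod-b
}
```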
func NewDeleteController ¶
func NewDeleteController(c client.Client, nodeName string, pc *config.ProcessorConfig, sandboxExtractor extractors.PodSandboxExtractor, eventsCh chan event.GenericEvent) *DeleteController
NewDeleteController creates a new DeleteController.
func (*DeleteController) GetDeleteCh ¶
func (c *DeleteController) GetDeleteCh() chan<- DeleteEvent
GetDeleteCh returns the delete channel on which to queue delete events
func (*DeleteController) GetReconcileCh ¶
func (c *DeleteController) GetReconcileCh() chan<- struct{}
GetReconcileCh returns the channel on which to notify the controller about an immediate reconcile event
func (*DeleteController) Start ¶
func (c *DeleteController) Start(z <-chan struct{}) error
Start implements the Runnable interface
type DeleteEvent ¶
DeleteEvent is used to send delete events to our event loop which will watch them for real deletion in the Kubernetes API. Once an object is gone, we will send down destroy events to trireme.
type DeleteObject ¶
type DeleteObject struct {
// contains filtered or unexported fields
}
DeleteObject is the obj used to store in the event map.
type PodMonitor ¶
type PodMonitor struct {
// contains filtered or unexported fields
}
PodMonitor implements a monitor that sends pod events upstream. It is implemented as a filter on the standard DockerMonitor: it receives all the PU events from the DockerMonitor and, if the container is the POD container from Kubernetes, it connects to the Kubernetes API and adds the tags coming from Kubernetes that cannot be found
func (*PodMonitor) Resync ¶
func (m *PodMonitor) Resync(ctx context.Context) error
Resync requests the monitor to do a resync.
func (*PodMonitor) Run ¶
func (m *PodMonitor) Run(ctx context.Context) error
Run starts the monitor.
func (*PodMonitor) SetupConfig ¶
func (m *PodMonitor) SetupConfig(_ registerer.Registerer, cfg interface{}) error
SetupConfig provides a configuration to implementations. Every implementation can have its own config type.
func (*PodMonitor) SetupHandlers ¶
func (m *PodMonitor) SetupHandlers(c *config.ProcessorConfig)
SetupHandlers sets up handlers for monitors to invoke for various events such as processing unit events and synchronization events. This will be called before Start() by the consumer of the monitor
type ReconcilePod ¶
type ReconcilePod struct {
// contains filtered or unexported fields
}
ReconcilePod reconciles a Pod object
type ResyncInfoChan ¶
type ResyncInfoChan struct {
// contains filtered or unexported fields
}
ResyncInfoChan is used to report back from the controller on which pods it has processed. It allows the Resync of the monitor to block and wait until a list has been processed.
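The mechanism can be re-created with a small togglable channel. Names here (`infoChan`, `Enable`, `Send`) are illustrative; the real type exposes EnableNeedsInfo, DisableNeedsInfo, SendInfo, and GetInfoCh.

```go
package main

import (
	"fmt"
	"sync"
)

// infoChan sketches the ResyncInfoChan idea: the controller reports each
// processed pod, but only while the monitor has signalled that a resync is
// waiting for that information.
type infoChan struct {
	mu    sync.Mutex
	needs bool
	ch    chan string
}

func newInfoChan() *infoChan { return &infoChan{ch: make(chan string, 16)} }

func (i *infoChan) Enable()  { i.mu.Lock(); i.needs = true; i.mu.Unlock() }
func (i *infoChan) Disable() { i.mu.Lock(); i.needs = false; i.mu.Unlock() }

// Send forwards info only when someone asked for it, so the controller's
// hot path is not burdened during normal operation.
func (i *infoChan) Send(info string) {
	i.mu.Lock()
	defer i.mu.Unlock()
	if i.needs {
		i.ch <- info
	}
}

func main() {
	ic := newInfoChan()
	ic.Send("ignored") // dropped: no resync is waiting
	ic.Enable()
	ic.Send("ns/pod-a") // buffered for the waiting resync
	ic.Disable()
	fmt.Println(len(ic.ch), <-ic.ch)
}
```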
func NewResyncInfoChan ¶
func NewResyncInfoChan() *ResyncInfoChan
NewResyncInfoChan creates a new ResyncInfoChan
func (*ResyncInfoChan) DisableNeedsInfo ¶
func (r *ResyncInfoChan) DisableNeedsInfo()
DisableNeedsInfo disables the need for sending info
func (*ResyncInfoChan) EnableNeedsInfo ¶
func (r *ResyncInfoChan) EnableNeedsInfo()
EnableNeedsInfo enables the need for sending info
func (*ResyncInfoChan) GetInfoCh ¶
func (r *ResyncInfoChan) GetInfoCh() *chan string
GetInfoCh returns the channel
func (*ResyncInfoChan) NeedsInfo ¶
func (r *ResyncInfoChan) NeedsInfo() bool
NeedsInfo returns if there is a need for sending info
func (*ResyncInfoChan) SendInfo ¶
func (r *ResyncInfoChan) SendInfo(info string)
SendInfo will make the info available through an internal channel
type WatchPodMapper ¶
type WatchPodMapper struct {
// contains filtered or unexported fields
}
WatchPodMapper determines whether we want to reconcile on a pod event. There are two limitations:
- the pod must be scheduled on a matching nodeName
- if the pod requests host networking, only reconcile if we want to enable host pods
func (*WatchPodMapper) Map ¶
func (w *WatchPodMapper) Map(obj handler.MapObject) []reconcile.Request
Map implements the handler.Mapper interface to emit reconciles for corev1.Pods. It effectively filters the pods by looking for a matching nodeName and filters them out if host networking is requested, but we don't want to enable those.
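The filtering Map performs can be sketched with plain structs (the real code works on corev1.Pod objects and emits reconcile.Requests; `wantsReconcile` is a hypothetical helper for illustration):

```go
package main

import "fmt"

// pod carries only the fields the filter cares about.
type pod struct {
	Name        string
	NodeName    string
	HostNetwork bool
}

// wantsReconcile mirrors the two documented limitations: the pod must be
// scheduled on our node, and host-network pods are dropped unless host
// pods are explicitly enabled.
func wantsReconcile(p pod, nodeName string, enableHostPods bool) bool {
	if p.NodeName != nodeName {
		return false
	}
	if p.HostNetwork && !enableHostPods {
		return false
	}
	return true
}

func main() {
	pods := []pod{
		{Name: "app", NodeName: "node-1", HostNetwork: false},
		{Name: "elsewhere", NodeName: "node-2", HostNetwork: false},
		{Name: "host-net", NodeName: "node-1", HostNetwork: true},
	}
	for _, p := range pods {
		fmt.Println(p.Name, wantsReconcile(p, "node-1", false))
	}
	// → app true, elsewhere false, host-net false
}
```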