record

package
v1.5.14

Published: Apr 21, 2021 License: Apache-2.0 Imports: 21 Imported by: 0

Documentation

Overview

Package record has all client logic for recording and reporting events.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func EventAggregatorByReasonFunc

func EventAggregatorByReasonFunc(event *v1.Event) (string, string)

EventAggregatorByReasonFunc aggregates events by exact match on event.Source, event.InvolvedObject, event.Type and event.Reason

func EventAggregatorByReasonMessageFunc

func EventAggregatorByReasonMessageFunc(event *v1.Event) string

EventAggregatorByReasonMessageFunc returns an aggregate message by prefixing the incoming message

Types

type CorrelatorOptions

type CorrelatorOptions struct {
	// The lru cache size used for both EventSourceObjectSpamFilter and the EventAggregator
	// If not specified (zero value), the default specified in events_cache.go will be picked
	// This means that the LRUCacheSize has to be greater than 0.
	LRUCacheSize int
	// The burst size used by the token bucket rate filtering in EventSourceObjectSpamFilter
	// If not specified (zero value), the default specified in events_cache.go will be picked
	// This means that the BurstSize has to be greater than 0.
	BurstSize int
	// The fill rate of the token bucket in queries per second in EventSourceObjectSpamFilter
	// If not specified (zero value), the default specified in events_cache.go will be picked
	// This means that the QPS has to be greater than 0.
	QPS float32
	// The func used by the EventAggregator to group event keys for aggregation
	// If not specified (zero value), EventAggregatorByReasonFunc will be used
	KeyFunc EventAggregatorKeyFunc
	// The func used by the EventAggregator to produce the aggregated message
	// If not specified (zero value), EventAggregatorByReasonMessageFunc will be used
	MessageFunc EventAggregatorMessageFunc
	// The number of events in an interval before aggregation happens by the EventAggregator
	// If not specified (zero value), the default specified in events_cache.go will be picked
	// This means that the MaxEvents has to be greater than 0
	MaxEvents int
	// The amount of time in seconds that must transpire since the last occurrence of a similar event before it is considered new by the EventAggregator
	// If not specified (zero value), the default specified in events_cache.go will be picked
	// This means that the MaxIntervalInSeconds has to be greater than 0
	MaxIntervalInSeconds int
	// The clock used by the EventAggregator to allow for testing
	// If not specified (zero value), clock.RealClock{} will be used
	Clock clock.Clock
}

CorrelatorOptions allows you to change the defaults of the EventSourceObjectSpamFilter and the EventAggregator in EventCorrelator
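
As a hedged sketch, overriding a few of these defaults and handing them to NewBroadcasterWithCorrelatorOptions could look like the following. The import path for this package is a placeholder; everything else comes straight from the struct above.

package main

import (
	record "example.com/yourmodule/pkg/record" // hypothetical import path; substitute this module's real path
)

func main() {
	// Override only the fields you care about; zero values fall back to the
	// defaults in events_cache.go, and any explicitly set value must be > 0.
	opts := record.CorrelatorOptions{
		LRUCacheSize:         2048,      // LRU size shared by the spam filter and the aggregator
		BurstSize:            10,        // token-bucket burst per source+object
		QPS:                  1.0 / 300, // refill roughly one event every 5 minutes
		MaxEvents:            5,         // aggregate after 5 similar events...
		MaxIntervalInSeconds: 300,       // ...within a 5 minute window
	}

	broadcaster := record.NewBroadcasterWithCorrelatorOptions(opts)
	_ = broadcaster // wire up StartLogging / StartRecordingToSink / NewRecorder as usual
}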

type EventAggregator

type EventAggregator struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

EventAggregator identifies similar events and aggregates them into a single event

func NewEventAggregator

func NewEventAggregator(lruCacheSize int, keyFunc EventAggregatorKeyFunc, messageFunc EventAggregatorMessageFunc,
	maxEvents int, maxIntervalInSeconds int, clock clock.Clock) *EventAggregator

NewEventAggregator returns a new instance of an EventAggregator

func (*EventAggregator) EventAggregate

func (e *EventAggregator) EventAggregate(newEvent *v1.Event) (*v1.Event, string)

EventAggregate checks if a similar event has been seen according to the aggregation configuration (max events, max interval, etc) and returns:

  • The (potentially modified) event that should be created
  • The cache key for the event, for correlation purposes. This will be set to the full key for normal events, and to the result of EventAggregatorMessageFunc for aggregate events.
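
A minimal sketch of exercising the aggregator directly, inside some function with fmt, this record package, v1 (assumed to be k8s.io/api/core/v1, as the signatures suggest), and the clock package this module vendors already imported; clock.RealClock{} follows the CorrelatorOptions comment above.

agg := record.NewEventAggregator(
	4096,                                      // LRU cache size
	record.EventAggregatorByReasonFunc,        // group by source, object, type, and reason
	record.EventAggregatorByReasonMessageFunc, // aggregate message prefixes the original
	10,  // aggregate once 10 similar events are seen...
	600, // ...within a 10 minute interval
	clock.RealClock{},
)

ev := &v1.Event{
	Reason:  "FailedScheduling",
	Message: "0/3 nodes are available",
	Type:    v1.EventTypeWarning,
}
aggregated, cacheKey := agg.EventAggregate(ev)
fmt.Println(aggregated.Message, cacheKey)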

type EventAggregatorKeyFunc

type EventAggregatorKeyFunc func(event *v1.Event) (aggregateKey string, localKey string)

EventAggregatorKeyFunc is responsible for grouping events for aggregation. It returns a tuple of the following:

  • aggregateKey - key that identifies the aggregate group to bucket this event into
  • localKey - key that makes this event unique within the local group

type EventAggregatorMessageFunc

type EventAggregatorMessageFunc func(event *v1.Event) string

EventAggregatorMessageFunc is responsible for producing an aggregation message

type EventBroadcaster

type EventBroadcaster interface {
	// StartEventWatcher starts sending events received from this EventBroadcaster to the given
	// event handler function. The return value can be ignored or used to stop recording, if
	// desired.
	StartEventWatcher(eventHandler func(*v1.Event)) watch.Interface

	// StartRecordingToSink starts sending events received from this EventBroadcaster to the given
	// sink. The return value can be ignored or used to stop recording, if desired.
	StartRecordingToSink(sink EventSink) watch.Interface

	// StartLogging starts sending events received from this EventBroadcaster to the given logging
	// function. The return value can be ignored or used to stop recording, if desired.
	StartLogging(logf func(format string, args ...interface{})) watch.Interface

	// NewRecorder returns an EventRecorder that can be used to send events to this EventBroadcaster
	// with the event source set to the given event source.
	NewRecorder(scheme *runtime.Scheme, source v1.EventSource) EventRecorder
}

EventBroadcaster knows how to receive events and send them to any EventSink, watcher, or log.
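
A hedged end-to-end sketch of the typical wiring: create a broadcaster, mirror events to a log, build a recorder bound to an event source, and emit an event about an object. The import paths are assumptions: this package's own path is a placeholder, and v1, metav1, and runtime are taken to be the usual Kubernetes API and apimachinery packages the signatures point at.

package main

import (
	"log"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"

	record "example.com/yourmodule/pkg/record" // hypothetical import path; substitute this module's real path
)

func main() {
	broadcaster := record.NewBroadcaster()

	// Mirror every event to the standard logger; keep the watch so it can be stopped.
	loggingWatch := broadcaster.StartLogging(func(format string, args ...interface{}) {
		log.Printf(format, args...)
	})
	defer loggingWatch.Stop()

	// The scheme must know the types you record events against.
	sch := runtime.NewScheme()
	_ = v1.AddToScheme(sch)

	recorder := broadcaster.NewRecorder(sch, v1.EventSource{Component: "my-controller"})

	pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"}}
	recorder.Event(pod, v1.EventTypeNormal, "Started", "Container started successfully")
}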

func NewBroadcaster

func NewBroadcaster() EventBroadcaster

NewBroadcaster creates a new event broadcaster.

func NewBroadcasterForTests

func NewBroadcasterForTests(sleepDuration time.Duration) EventBroadcaster

func NewBroadcasterWithCorrelatorOptions

func NewBroadcasterWithCorrelatorOptions(options CorrelatorOptions) EventBroadcaster

type EventCorrelateResult

type EventCorrelateResult struct {
	// the event after correlation
	Event *v1.Event
	// if provided, perform a strategic patch when updating the record on the server
	Patch []byte
	// if true, do no further processing of the event
	Skip bool
}

EventCorrelateResult is the result of a Correlate

type EventCorrelator

type EventCorrelator struct {
	// contains filtered or unexported fields
}

EventCorrelator processes all incoming events and performs analysis to avoid overwhelming the system. It can filter all incoming events to see if the event should be filtered from further processing. It can aggregate similar events that occur frequently to protect the system from spamming events that are difficult for users to distinguish. It performs de-duplication to ensure events that are observed multiple times are compacted into a single event with increasing counts.

func NewEventCorrelator

func NewEventCorrelator(clock clock.Clock) *EventCorrelator

NewEventCorrelator returns an EventCorrelator configured with default values.

The EventCorrelator is responsible for event filtering, aggregating, and counting prior to interacting with the API server to record the event.

The default behavior is as follows:

  • Aggregation is performed if a similar event is recorded 10 times in a 10 minute rolling interval. A similar event is an event that varies only by the Event.Message field. Rather than recording the precise event, aggregation will create a new event whose message reports that it has combined events with the same reason.
  • Events are incrementally counted if the exact same event is encountered multiple times.
  • A source may burst 25 events about an object, but has a refill rate budget per object of 1 event every 5 minutes to control the long tail of spam.

func NewEventCorrelatorWithOptions

func NewEventCorrelatorWithOptions(options CorrelatorOptions) *EventCorrelator

func (*EventCorrelator) EventCorrelate

func (c *EventCorrelator) EventCorrelate(newEvent *v1.Event) (*EventCorrelateResult, error)

EventCorrelate filters, aggregates, counts, and de-duplicates all incoming events

func (*EventCorrelator) UpdateState

func (c *EventCorrelator) UpdateState(event *v1.Event)

UpdateState updates the correlator's internal state based on the latest observed state from the server
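
Putting EventCorrelate, the result fields, and UpdateState together, a sink-writing loop might look roughly like the sketch below. This is not this package's internal implementation, just one plausible flow built from the documented pieces; it assumes the record package and v1 (k8s.io/api/core/v1) are imported, and sink is any EventSink.

// recordWithCorrelation is a hypothetical helper: correlate, skip spam,
// patch or create on the sink, then feed the server's accepted event back
// into the correlator.
func recordWithCorrelation(c *record.EventCorrelator, sink record.EventSink, incoming *v1.Event) error {
	result, err := c.EventCorrelate(incoming)
	if err != nil {
		return err
	}
	if result.Skip {
		return nil // filtered out; nothing to send
	}

	var recorded *v1.Event
	if result.Patch != nil {
		// A patch was provided, so update the existing record on the server.
		recorded, err = sink.Patch(result.Event, result.Patch)
	} else {
		recorded, err = sink.Create(result.Event)
	}
	if err != nil {
		return err
	}

	c.UpdateState(recorded) // remember the latest observed state from the server
	return nil
}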

type EventFilterFunc

type EventFilterFunc func(event *v1.Event) bool

EventFilterFunc is a function that returns true if the event should be skipped
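
For illustration only, a filter matching this contract could drop everything except warnings; whether and where such a filter can be plugged in is up to the caller, and v1 is again assumed to be k8s.io/api/core/v1.

// dropNonWarnings is a hypothetical EventFilterFunc: returning true means "skip this event".
var dropNonWarnings record.EventFilterFunc = func(event *v1.Event) bool {
	return event.Type != v1.EventTypeWarning
}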

type EventRecorder

type EventRecorder interface {
	// Event constructs an event from the given information and puts it in the queue for sending.
	// 'object' is the object this event is about; Event will make a reference to it, or you may
	// pass a reference to the object directly.
	// 'eventtype' is the type of this event, and can be one of Normal or Warning. New types may
	// be added in the future.
	// 'reason' is the reason this event is generated. 'reason' should be short and unique; it
	// should be in UpperCamelCase format (starting with a capital letter). "reason" will be used
	// to automate handling of events, so imagine people writing switch statements to handle them.
	// You want to make that easy.
	// 'message' is intended to be human readable.
	//
	// The resulting event will be created in the same namespace as the reference object.
	Event(object runtime.Object, eventtype, reason, message string)

	// Eventf is just like Event, but with Sprintf for the message field.
	Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...interface{})

	// PastEventf is just like Eventf, but with an option to specify the event's 'timestamp' field.
	PastEventf(object runtime.Object, timestamp metav1.Time, eventtype, reason, messageFmt string, args ...interface{})

	// AnnotatedEventf is just like Eventf, but with annotations attached
	AnnotatedEventf(object runtime.Object, annotations map[string]string, eventtype, reason, messageFmt string, args ...interface{})
}

EventRecorder knows how to record events on behalf of an EventSource.
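
A short hedged sketch of the formatted variants, assuming a recorder and pod obtained as in the EventBroadcaster example above; mountErr and elapsed are placeholder variables standing in for real values.

// Formatted message, same queueing behaviour as Event.
recorder.Eventf(pod, v1.EventTypeWarning, "FailedMount",
	"Unable to mount volume %q: %v", "config", mountErr)

// Annotations ride along on the created Event object.
recorder.AnnotatedEventf(pod, map[string]string{"example.com/controller": "demo"},
	v1.EventTypeNormal, "Synced", "Reconciled in %s", elapsed)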

type EventSink

type EventSink interface {
	Create(event *v1.Event) (*v1.Event, error)
	Update(event *v1.Event) (*v1.Event, error)
	Patch(oldEvent *v1.Event, data []byte) (*v1.Event, error)
}

EventSink knows how to store events (client.Client implements it.) EventSink must respect the namespace that will be embedded in 'event'. It is assumed that EventSink will return the same sorts of errors as pkg/client's REST client.
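
A minimal in-memory sink sketch that satisfies the interface, handy for tests or for inspecting what StartRecordingToSink would send; it assumes the sync package and v1 (k8s.io/api/core/v1) are imported. A real sink would talk to the API server and apply the strategic merge patch in Patch.

// memorySink is a hypothetical EventSink that just collects events.
type memorySink struct {
	mu     sync.Mutex
	events []*v1.Event
}

func (s *memorySink) Create(event *v1.Event) (*v1.Event, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.events = append(s.events, event.DeepCopy())
	return event, nil
}

func (s *memorySink) Update(event *v1.Event) (*v1.Event, error) {
	return s.Create(event)
}

func (s *memorySink) Patch(oldEvent *v1.Event, data []byte) (*v1.Event, error) {
	// Ignores the patch data and records the pre-patch event; a real sink
	// would apply the strategic merge patch server-side.
	return s.Create(oldEvent)
}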

type EventSourceObjectSpamFilter

type EventSourceObjectSpamFilter struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

EventSourceObjectSpamFilter is responsible for throttling the amount of events a source and object can produce.

func NewEventSourceObjectSpamFilter

func NewEventSourceObjectSpamFilter(lruCacheSize, burst int, qps float32, clock clock.Clock) *EventSourceObjectSpamFilter

NewEventSourceObjectSpamFilter allows burst events from a source about an object with the specified qps refill.

func (*EventSourceObjectSpamFilter) Filter

func (f *EventSourceObjectSpamFilter) Filter(event *v1.Event) bool

Filter ensures that a given source and object pair does not exceed the allowed event rate.

type FakeRecorder

type FakeRecorder struct {
	Events chan string
}

FakeRecorder is used as a fake during tests. It is thread safe. It is usable when created manually rather than by NewFakeRecorder; however, all events may be thrown away in that case.

func NewFakeRecorder

func NewFakeRecorder(bufferSize int) *FakeRecorder

NewFakeRecorder creates a new fake event recorder with an event channel buffered to the given size.
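
A hedged test sketch: drive code under test with a FakeRecorder and assert on the formatted strings it pushes onto Events. The exact string format is an implementation detail, so match loosely; the testing, strings, time, v1, and record imports are assumed.

func TestRecordsFailedMount(t *testing.T) {
	rec := record.NewFakeRecorder(10) // channel buffered for 10 events

	// Code under test would normally receive rec as an EventRecorder.
	rec.Eventf(&v1.Pod{}, v1.EventTypeWarning, "FailedMount", "volume %q", "config")

	select {
	case got := <-rec.Events:
		if !strings.Contains(got, "FailedMount") {
			t.Fatalf("unexpected event: %q", got)
		}
	case <-time.After(time.Second):
		t.Fatal("no event recorded")
	}
}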

func (*FakeRecorder) AnnotatedEventf

func (f *FakeRecorder) AnnotatedEventf(object runtime.Object, annotations map[string]string, eventtype, reason, messageFmt string, args ...interface{})

func (*FakeRecorder) Event

func (f *FakeRecorder) Event(object runtime.Object, eventtype, reason, message string)

func (*FakeRecorder) Eventf

func (f *FakeRecorder) Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...interface{})

func (*FakeRecorder) PastEventf

func (f *FakeRecorder) PastEventf(object runtime.Object, timestamp metav1.Time, eventtype, reason, messageFmt string, args ...interface{})
