Documentation ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func RegisterType ¶
RegisterType registers a new queue type.
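As an illustration only, a queue implementation might register itself in an init function. The type name "myqueue", the constructor variable, and the import path shown here are assumptions; the exact Factory signature is not reproduced in this documentation.

package myqueue

import "github.com/elastic/beats/libbeat/publisher/queue"

// newMyQueue is a hypothetical constructor. It must satisfy this package's
// Factory type; its definition is omitted here.
var newMyQueue queue.Factory

func init() {
	// Make the implementation available to FindFactory under the name "myqueue".
	queue.RegisterType("myqueue", newMyQueue)
}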
Types ¶
type Batch ¶
Batch of events to be returned to Consumers. The `ACK` method will send the ACK signal to the queue.
type BufferConfig ¶
type BufferConfig struct {
	Events int // can be <= 0 if the queue cannot determine a limit
}
BufferConfig reports the queue's buffering settings, for the pipeline to use. If the pipeline itself stores events for reporting ACKs to clients, but may still drop events, it can use the buffer information to define an upper bound on the number of events active in the pipeline.
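A minimal sketch of how a pipeline might use this information; the fallback bound of 4096 is arbitrary and only illustrative.

func inflightLimit(q queue.Queue) int {
	bc := q.BufferConfig()
	if bc.Events <= 0 {
		// The queue cannot report a limit; fall back to an arbitrary bound.
		return 4096
	}
	// Never keep more events active in the pipeline than the queue can buffer.
	return bc.Events
}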
type Consumer ¶
Consumer interface to be used by the pipeline output workers. The `Get` method retrieves a batch of events up to size `sz`. If sz <= 0, the batch size is up to the queue.
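A sketch of an output worker loop built on this interface. The `Get(sz int) (Batch, error)` signature and the `Events()` accessor on Batch are assumptions; only the behaviour of `Get` and `ACK` described here is taken from the documentation.

func consumeLoop(c queue.Consumer) error {
	for {
		batch, err := c.Get(64) // up to 64 events; <= 0 would let the queue choose
		if err != nil {
			return err // e.g. the consumer or the queue has been closed
		}
		for _, event := range batch.Events() {
			_ = event // forward the event to the output here
		}
		// ACK only after the whole batch has been handled, so the queue can
		// advance its buffers and signal the producers.
		batch.ACK()
	}
}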
type Eventer ¶
type Eventer interface {
	OnACK(int) // number of consecutively published messages, acked by producers
}
Eventer listens to special events to be sent by queue implementations.
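A minimal Eventer sketch that keeps a running total of ACKed events; the counter type itself is hypothetical.

type ackCounter struct {
	total uint64
}

// OnACK is called by the queue with the number of newly ACKed events.
func (c *ackCounter) OnACK(n int) {
	atomic.AddUint64(&c.total, uint64(n)) // requires "sync/atomic"
}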
type Factory ¶
Factory for creating a queue used by a pipeline instance.
func FindFactory ¶
FindFactory retrieves a queue type's constructor. It returns nil if the queue type is unknown.
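A usage sketch: looking up a queue type by name and handling the unknown case. The `FindFactory(name string) Factory` call shape and the example name "mem" are assumptions.

func queueFactoryFor(name string) (queue.Factory, error) {
	// name typically comes from the pipeline configuration, e.g. "mem".
	factory := queue.FindFactory(name)
	if factory == nil {
		return nil, fmt.Errorf("queue type %q is unknown", name) // requires "fmt"
	}
	return factory, nil
}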
type Producer ¶
type Producer interface {
	Publish(event publisher.Event) bool
	TryPublish(event publisher.Event) bool
	Cancel() int
}
Producer interface to be used by the pipeline's clients to forward events to be published to the queue. When a producer calls `Cancel`, it's up to the queue to send or remove events not yet ACKed. Note: A queue is still allowed to send the ACK signal after Cancel; the pipeline client must filter out ACKs received after cancel.
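A producer-side sketch. The reading of TryPublish as a non-blocking variant of Publish, and of Cancel's return value as a count of outstanding events, are assumptions not spelled out above.

func publishAll(p queue.Producer, events []publisher.Event) int {
	for _, event := range events {
		// Try the (assumed) non-blocking variant first, then fall back to
		// Publish; both report whether the queue accepted the event.
		if !p.TryPublish(event) && !p.Publish(event) {
			break
		}
	}
	// On shutdown, Cancel presumably reports how many published but not yet
	// ACKed events remain; ACKs arriving after this point must be ignored.
	return p.Cancel()
}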
type ProducerConfig ¶
type ProducerConfig struct {
	// If ACK is set, the callback will be called with the number of events
	// produced by the producer instance and ACKed by the queue.
	ACK func(count int)

	// OnDrop is provided to the queue, to report events being silently dropped
	// by the queue. For example, an async producer close and publish event,
	// with the close happening early, might result in the event being dropped.
	// The callback gives a queue user a chance to keep track of the total
	// number of events buffered by the queue.
	OnDrop func(beat.Event)

	// DropOnCancel is a hint to the queue to drop events if the producer
	// disconnects via Cancel.
	DropOnCancel bool
}
ProducerConfig as used by the Pipeline to configure some custom callbacks between pipeline and queue.
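Putting the callbacks together; the log statements are placeholders and the surrounding pipeline wiring is hypothetical.

func newTrackedProducer(q queue.Queue) queue.Producer {
	return q.Producer(queue.ProducerConfig{
		// Called with the number of events from this producer ACKed by the queue.
		ACK: func(count int) {
			log.Printf("queue ACKed %d events", count) // requires "log"
		},
		// Called for events the queue drops silently, e.g. when the producer
		// is closed while an asynchronous publish is still in flight.
		OnDrop: func(e beat.Event) {
			log.Printf("dropped event with timestamp %v", e.Timestamp)
		},
		// Allow the queue to drop this producer's unACKed events on Cancel.
		DropOnCancel: true,
	})
}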
type Queue ¶
type Queue interface {
	io.Closer

	BufferConfig() BufferConfig

	Producer(cfg ProducerConfig) Producer
	Consumer() Consumer
}
Queue is responsible for accepting, forwarding and ACKing events. A queue will receive and buffer single events from its producers. Consumers will receive events in batches from the queue's buffers. Once a consumer has finished processing a batch, it must ACK the batch, for the queue to advance its buffers. Events in progress or ACKed are not readable from the queue. When the queue decides it is safe to progress (events have been ACKed by a consumer or flushed to some other intermediate storage), it will send an ACK signal with the number of ACKed events to the Producer (ACK happens in batches).
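A compact end-to-end sketch of that lifecycle, given an already constructed queue. makeEvent is a hypothetical helper producing a publisher.Event, and the Get signature is assumed as in the Consumer example above.

func roundTrip(q queue.Queue) error {
	defer q.Close()

	producer := q.Producer(queue.ProducerConfig{})
	consumer := q.Consumer()

	producer.Publish(makeEvent()) // makeEvent is a hypothetical helper

	batch, err := consumer.Get(1)
	if err != nil {
		return err
	}
	// ACKing lets the queue advance its buffers and, in turn, send the ACK
	// signal for these events back to the producer.
	batch.ACK()
	return nil
}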
Directories ¶
Path | Synopsis
---|---
memqueue | Package memqueue provides an in-memory queue.Queue implementation for use with the publisher pipeline.
queuetest | Package queuetest provides common functionality tests all queue implementations must pass.