kafka

package
v2.0.0+incompatible
Published: Apr 2, 2019 License: Apache-2.0 Imports: 12 Imported by: 0

README

Kafka

The client package provides single-purpose clients for publishing synchronous and asynchronous messages and for consuming selected topics.

The mux package uses these clients and allows their access to Kafka brokers to be shared among multiple entities. This package also implements the generic messaging API defined in the parent package.

Requirements

The minimal supported version of Kafka is determined by the sarama library: Kafka 0.10 and 0.9 are supported, although older releases are still likely to work.

If you don't have Kafka installed locally, you can use a Docker image for testing:

sudo docker run -p 2181:2181 -p 9092:9092 --name kafka --rm \
--env ADVERTISED_HOST=172.17.0.1 --env ADVERTISED_PORT=9092 spotify/kafka

Kafka plugin

The Kafka plugin provides access to Kafka brokers.

API

The plugin's API is documented at the end of doc.go.

Configuration

  • The location of the Kafka configuration file can be defined either by the command-line flag kafka-config or via the KAFKA_CONFIG environment variable (a minimal example follows).
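
For illustration, a minimal configuration file might contain just the broker addresses, in the same format as the mux configuration shown later (the address is a placeholder):

addrs:
  - "172.17.0.1:9092"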

Status Check

  • The Kafka plugin periodically checks the connection status of the Kafka server.

Documentation

Overview

Package kafka implements a client for the Kafka broker. The client supports sending and receiving of messages through the Kafka message bus. It provides both sync and async Producers for sending Kafka messages and a Consumer for retrieving Kafka messages.

A Producer sends messages to Kafka and can be either synchronous or asynchronous. A request to send a message using a synchronous producer blocks until the message is published or an error is returned. A request sent using an asynchronous producer returns immediately, and success or failure is communicated to the sender through separate status channels.

A Consumer receives messages from Kafka for one or more topics. When a consumer is initialized, it automatically balances/shares the total number of partitions for a message topic over all the active brokers for that topic. Message offsets can optionally be committed to Kafka so that when a consumer is restarted, or a new consumer is initiated, it knows where to begin reading messages from the Kafka message log.

The package also provides a Multiplexer that allows consumer and producer instances to be shared among multiple entities called Connections. Apart from reusing the access to Kafka brokers, the Multiplexer marks the offset of each consumed message as read. Offset marking allows messages to be consumed from the last committed offset after the Multiplexer restarts.

Note: Marking an offset does not necessarily commit the offset to the backend store immediately. This can result in a corner case where a message is delivered multiple times.
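
Consumers should therefore be prepared for at-least-once delivery. Below is a minimal de-duplication sketch for illustration; it assumes client.ConsumerMessage exposes Topic, Partition and Offset fields (mirroring the underlying sarama message type):

// seen tracks the highest offset already processed per topic/partition.
seen := make(map[string]int64)

process := func(msg *client.ConsumerMessage) {
	// Topic/Partition/Offset are assumed to mirror sarama's ConsumerMessage.
	key := fmt.Sprintf("%s/%d", msg.Topic, msg.Partition)
	if last, ok := seen[key]; ok && msg.Offset <= last {
		return // duplicate redelivery after a restart; skip it
	}
	seen[key] = msg.Offset
	// ... process msg.Key and msg.Value here ...
}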

Usage of synchronous producer:

// create minimal configuration
config := client.NewConfig()
config.SetBrokers("ip_addr:port", "ip_addr2:port")

producer, err := client.NewSyncProducer(config, nil)
if err != nil {
	log.Errorf("NewSyncProducer Error: %v", err)
	os.Exit(1)
}
// key and value are of type []byte
producer.SendMsgByte(topic, key, value, meta)

// key and value are of type Encoder
producer.SendMsgToPartition(topic, key, value, meta)

Usage of asynchronous producer:

succCh := make(chan *client.ProducerMessage)
errCh := make(chan *client.ProducerError)

// init config
config := client.NewConfig()
config.SetSendSuccess(true)
config.SetSuccessChan(succCh)
config.SetSendError(true)
config.SetErrorChan(errCh)
config.SetBrokers("ip_addr:port", "ip_addr2:port")

// init producer
producer, err := client.NewAsyncProducer(config, nil)
if err != nil {
	log.Errorf("NewAsyncProducer Error: %v", err)
	os.Exit(1)
}

go func() {
eventLoop:
	for {
		select {
		case <-producer.GetCloseChannel():
			break eventLoop
		case msg := <-succCh:
			fmt.Println("message sent successfully - ", msg)
		case err := <-errCh:
			fmt.Println("message errored - ", err)
		}
	}
}()

producer.SendMsgByte(topic, key, value, meta)

Usage of consumer:

	config := client.NewConfig()
	config.SetRecvNotification(true)
	config.SetRecvNotificationChan(make(chan *cluster.Notification))
	config.SetRecvError(true)
	config.SetRecvErrorChan(make(chan error))
	config.SetRecvMessageChan(make(chan *client.ConsumerMessage))
	config.SetBrokers("ip_addr:port", "ip_addr2:port2")
	config.SetTopics("topic1,topic2")
	config.SetGroup("Group1")

	consumer, err := client.NewConsumer(config, nil)
	if err != nil {
		log.Errorf("NewConsumer Error: %v", err)
		os.Exit(1)
	}

	go func() {
		for {
			select {
			case notification := <-config.RecvNotificationChan:
				handleNotification(consumer, notification)
			case err := <-config.RecvErrorChan:
				fmt.Printf("Message Recv Errored: %v\n", err)
			case msg := <-config.RecvMessageChan:
				messageCallback(consumer, msg, *commit) // commit is a *bool flag defined by the example
			case <-consumer.GetCloseChannel():
				return
			}
		}
	}()

In addition to the basic sync/async producer and consumer, the Multiplexer is provided. Its behaviour is depicted below:

+---------------+              +--------------------+
| Connection #1 |------+       |    Multiplexer     |
+---------------+      |       |                    |
                       |       |    sync producer   |
+---------------+      |       |   async producer   |             /------------\
| Connection #2 |------+-----> |    consumer        |<---------->/    Kafka     \
+---------------+      |       |                    |            \--------------/
                       |       |                    |
+---------------+      |       |                    |
| Connection #3 |------+       +--------------------+
+---------------+

To initialize the multiplexer, run:

mx, err := mux.InitMultiplexer(pathToConfig, "name")

The config file specifies the addresses of the Kafka brokers:

addrs:
  - "ip_addr1:port"
  - "ip_addr2:port"

To create a Connection that reuses the Multiplexer's access to Kafka, run:

cn := mx.NewBytesConnection("c1")

or

cn := mx.NewProtoConnection("c1")

Afterwards you can produce messages using the sync API:

partition, offset, err := cn.SendSyncString("test", "key", "value")

or you can use the async API:

	succCh := make(chan *client.ProducerMessage, 10)
	errCh := make(chan *client.ProducerError, 10)
	cn.SendAsyncString("test", "key", "async message", "meta", succCh, errCh)

	// check if the async send succeeded
	go func() {
		select {
		case success := <-succCh:
			fmt.Println("Successfully sent async msg", success.Metadata)
		case err := <-errCh:
			fmt.Println("Error while sending async msg", err.Err, err.Msg.Metadata)
		}
	}()

To subscribe for consumption of a topic:

	consumerChan := make(chan *client.ConsumerMessage)
	err = cn.ConsumeTopic("test", consumerChan)

	if err == nil {
		fmt.Println("Consuming test partition")
		// signalChan is assumed to deliver e.g. OS signals to stop the loop
		go func() {
		eventLoop:
			for {
				select {
				case msg := <-consumerChan:
					fmt.Println(string(msg.Key), string(msg.Value))
				case <-signalChan:
					break eventLoop
				}
			}
		}()
	}

Once all connections have subscribed for topic consumption, you have to run the following function to actually initialize the consumer inside the Multiplexer:

mx.Start()

To properly clean up the Multiplexer, call:

mx.Close()

The Kafka plugin

Once the Kafka plugin is initialized:

plugin := kafka.Plugin{}
// Init is called by the agent core

The plugin allows you to create connections:

conn := plugin.NewConnection("name")

or a connection that supports proto-modelled messages:

protoConn := plugin.NewProtoConnection("protoConnection")

The usage of connections is described above.

Index

Constants

This section is empty.

Variables

View Source
var DefaultPlugin = *NewPlugin()

DefaultPlugin is a default instance of Plugin.

Functions

This section is empty.

Types

type Deps

type Deps struct {
	infra.PluginDeps
	StatusCheck  statuscheck.PluginStatusWriter // inject
	ServiceLabel servicelabel.ReaderAPI
}

Deps groups dependencies injected into the plugin so that they are logically separated from other plugin fields.

type Option added in v1.5.0

type Option func(*Plugin)

Option is a function that can be used in NewPlugin to customize Plugin.

func UseDeps added in v1.5.0

func UseDeps(cb func(*Deps)) Option

UseDeps returns an Option that can inject custom dependencies.
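
For illustration, a minimal sketch of customizing dependencies at construction time; myServiceLabel is a placeholder for any servicelabel.ReaderAPI implementation:

plugin := kafka.NewPlugin(kafka.UseDeps(func(deps *kafka.Deps) {
	// myServiceLabel is a placeholder for a servicelabel.ReaderAPI implementation
	deps.ServiceLabel = myServiceLabel
}))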

type Plugin

type Plugin struct {
	Deps
	// contains filtered or unexported fields
}

Plugin provides an API for interaction with Kafka brokers.

func FromExistingMux

func FromExistingMux(mux *mux.Multiplexer) *Plugin

FromExistingMux is used mainly for testing purposes.
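
For illustration, a minimal sketch; existingMux is a placeholder for a *mux.Multiplexer prepared by the test setup:

// existingMux is a placeholder *mux.Multiplexer prepared by the test setup
plugin := kafka.FromExistingMux(existingMux)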

func NewPlugin added in v1.5.0

func NewPlugin(opts ...Option) *Plugin

NewPlugin creates a new Plugin with the provided Options.

func (*Plugin) AfterInit

func (p *Plugin) AfterInit() error

AfterInit is called in the second phase of the initialization. The Kafka multiplexer is started; all consumers have to be subscribed by this phase.

func (*Plugin) Close

func (p *Plugin) Close() error

Close is called at plugin cleanup phase.

func (*Plugin) Disabled added in v1.0.4

func (p *Plugin) Disabled() (disabled bool)

Disabled returns true if the plugin config was not found.

func (*Plugin) Init

func (p *Plugin) Init() (err error)

Init is called at plugin initialization.

func (*Plugin) NewAsyncPublisher

func (p *Plugin) NewAsyncPublisher(connectionName string, topic string, successClb func(messaging.ProtoMessage), errorClb func(messaging.ProtoMessageErr)) (messaging.ProtoPublisher, error)

NewAsyncPublisher creates a publisher that allows messages to be published using the asynchronous API. The publisher creates a new proto connection on the multiplexer with the default partitioner.
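
For illustration, a sketch of creating an async publisher with success/error callbacks; publishing through the returned messaging.ProtoPublisher is assumed to use a Put(key, protoMessage) method from the generic messaging API, and data stands for any proto.Message value:

publisher, err := plugin.NewAsyncPublisher("conn1", "topic1",
	func(msg messaging.ProtoMessage) { fmt.Println("publish succeeded") },
	func(msgErr messaging.ProtoMessageErr) { fmt.Println("publish failed") })
if err != nil {
	log.Errorf("NewAsyncPublisher Error: %v", err)
	os.Exit(1)
}
// Put is assumed from the generic messaging API; data is any proto.Message.
err = publisher.Put("key1", data)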

func (*Plugin) NewAsyncPublisherToPartition added in v1.0.3

func (p *Plugin) NewAsyncPublisherToPartition(connectionName string, topic string, partition int32, successClb func(messaging.ProtoMessage), errorClb func(messaging.ProtoMessageErr)) (messaging.ProtoPublisher, error)

NewAsyncPublisherToPartition creates a publisher that allows messages to be published to a custom partition using the asynchronous API. The publisher creates a new proto connection on the multiplexer with the manual partitioner.

func (*Plugin) NewBytesConnection added in v1.0.4

func (p *Plugin) NewBytesConnection(name string) *mux.BytesConnectionStr

NewBytesConnection returns a new instance of a connection for accessing Kafka brokers. The connection allows new Kafka producers/consumers to be created on the multiplexer with the hash partitioner.

func (*Plugin) NewBytesConnectionToPartition added in v1.0.4

func (p *Plugin) NewBytesConnectionToPartition(name string) *mux.BytesManualConnectionStr

NewBytesConnectionToPartition returns a new instance of a connection for accessing Kafka brokers. The connection allows new Kafka producers/consumers to be created on the multiplexer with the manual partitioner, which allows sending messages to a specific partition in the Kafka cluster and watching on a partition/offset.

func (*Plugin) NewPartitionWatcher added in v1.0.4

func (p *Plugin) NewPartitionWatcher(name string) messaging.ProtoPartitionWatcher

NewPartitionWatcher creates a watcher that allows starting/stopping consumption of messages published to given topics, offsets and partitions.

func (*Plugin) NewProtoConnection

func (p *Plugin) NewProtoConnection(name string) mux.Connection

NewProtoConnection returns a new instance of a connection for accessing Kafka brokers. The connection allows new Kafka producers/consumers to be created on the multiplexer with the hash partitioner. The connection uses proto-modelled messages.

func (*Plugin) NewProtoManualConnection added in v1.0.4

func (p *Plugin) NewProtoManualConnection(name string) mux.ManualConnection

NewProtoManualConnection returns a new instance of a connection for accessing Kafka brokers. The connection allows new Kafka producers/consumers to be created on the multiplexer with the manual partitioner, which allows sending messages to a specific partition in the Kafka cluster and watching on a partition/offset. The connection uses proto-modelled messages.

func (*Plugin) NewSyncPublisher

func (p *Plugin) NewSyncPublisher(connectionName string, topic string) (messaging.ProtoPublisher, error)

NewSyncPublisher creates a publisher that allows messages to be published using the synchronous API. The publisher creates a new proto connection on the multiplexer with the default partitioner.
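
For illustration, a sketch of synchronous publishing; the Put(key, protoMessage) method is assumed from the generic messaging API, and data stands for any proto.Message value:

publisher, err := plugin.NewSyncPublisher("conn1", "topic1")
if err != nil {
	log.Errorf("NewSyncPublisher Error: %v", err)
	os.Exit(1)
}
// Put is assumed from the generic messaging API; for a sync publisher it
// blocks until the message is published or an error is returned.
if err := publisher.Put("key1", data); err != nil {
	log.Errorf("Put Error: %v", err)
}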

func (*Plugin) NewSyncPublisherToPartition added in v1.0.3

func (p *Plugin) NewSyncPublisherToPartition(connectionName string, topic string, partition int32) (messaging.ProtoPublisher, error)

NewSyncPublisherToPartition creates a publisher that allows messages to be published to a custom partition using the synchronous API. The publisher creates a new proto connection on the multiplexer with the manual partitioner.

func (*Plugin) NewWatcher

func (p *Plugin) NewWatcher(name string) messaging.ProtoWatcher

NewWatcher creates a watcher that allows starting/stopping consumption of messages published to given topics.
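
For illustration, a sketch of consuming through a watcher; the Watch method and its callback signature are assumptions based on the generic messaging API defined in the parent package:

watcher := plugin.NewWatcher("watcher1")
// Watch and its callback signature are assumed from the generic messaging API.
if err := watcher.Watch(func(msg messaging.ProtoMessage) {
	fmt.Println("received a message on topic1 or topic2")
}, "topic1", "topic2"); err != nil {
	log.Errorf("Watch Error: %v", err)
}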

Directories

Path Synopsis
client  Package client implements the synchronous and asynchronous kafka Producers and the kafka Consumer.
mux     Package mux implements the session multiplexer that allows multiple plugins to share a single connection to a Kafka broker.
