Documentation ¶
Overview ¶
Package kafka provides cluster extensions for Sarama, enabling users to consume topics across multiple, balanced nodes.
It requires Kafka v0.9+ and follows the consumer rewrite design described in: https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design
Index ¶
- type BrokerOffsetStore
- type Client
- type Config
- type Consumer
- func (c *Consumer) Close() (err error)
- func (c *Consumer) CommitOffsets() error
- func (c *Consumer) Errors() <-chan error
- func (c *Consumer) HighWaterMarks() map[string]map[int32]int64
- func (c *Consumer) MarkOffset(msg *sarama.ConsumerMessage, metadata string)
- func (c *Consumer) MarkOffsets(s *OffsetStash)
- func (c *Consumer) MarkPartitionOffset(topic string, partition int32, offset int64, metadata string)
- func (c *Consumer) Messages() <-chan *sarama.ConsumerMessage
- func (c *Consumer) Notifications() <-chan *Notification
- func (c *Consumer) OffsetStore() OffsetStore
- func (c *Consumer) Partitions() <-chan PartitionConsumer
- func (c *Consumer) Subscriptions() map[string][]int32
- type ConsumerMode
- type DefaultOffsetStoreFactory
- type Error
- type Notification
- type NotificationType
- type OffsetStash
- type OffsetStore
- type OffsetStoreFactory
- type PartitionConsumer
- type Strategy
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BrokerOffsetStore ¶
type BrokerOffsetStore struct {
// contains filtered or unexported fields
}
func (*BrokerOffsetStore) Close ¶
func (manager *BrokerOffsetStore) Close() error
func (*BrokerOffsetStore) CommitOffset ¶
func (manager *BrokerOffsetStore) CommitOffset(req *offsets.OffsetCommitRequest) (*offsets.OffsetCommitResponse, error)
func (*BrokerOffsetStore) FetchOffset ¶
func (manager *BrokerOffsetStore) FetchOffset(req *offsets.OffsetFetchRequest) (*offsets.OffsetFetchResponse, error)
type Client ¶
Client is a group client
func (*Client) ClusterConfig ¶
func (c *Client) ClusterConfig() *Config
ClusterConfig returns the cluster configuration.
type Config ¶
type Config struct {
	sarama.Config

	// Group is the namespace for group management properties
	Group struct {
		// The strategy to use for the allocation of partitions to consumers (defaults to StrategyRange)
		PartitionStrategy Strategy

		// By default, messages and errors from the subscribed topics and partitions are all multiplexed and
		// made available through the consumer's Messages() and Errors() channels.
		//
		// Users who require low-level access can enable ConsumerModePartitions where individual partitions
		// are exposed on the Partitions() channel. Messages and errors must then be consumed on the partitions
		// themselves.
		Mode ConsumerMode

		Offsets struct {
			Retry struct {
				// The number of retries when committing offsets (defaults to 3).
				Max int
			}
			Synchronization struct {
				// The duration allowed for other clients to commit their offsets before resumption in this client, e.g. during a rebalance
				// NewConfig sets this to the Consumer.MaxProcessingTime duration of the Sarama configuration
				DwellTime time.Duration
			}
		}

		Session struct {
			// The allowed session timeout for registered consumers (defaults to 30s).
			// Must be within the allowed server range.
			Timeout time.Duration
		}

		Heartbeat struct {
			// Interval between each heartbeat (defaults to 3s). It should be no more
			// than 1/3rd of the Group.Session.Timeout setting
			Interval time.Duration
		}

		// Return specifies which group channels will be populated. If they are set to true,
		// you must read from the respective channels to prevent deadlock.
		Return struct {
			// If enabled, rebalance notification will be returned on the
			// Notifications channel (default disabled).
			Notifications bool
		}

		Topics struct {
			// An additional whitelist of topics to subscribe to.
			Whitelist *regexp.Regexp
			// An additional blacklist of topics to avoid. If set, this will take precedence over
			// the Whitelist setting.
			Blacklist *regexp.Regexp
		}

		Member struct {
			// Custom metadata to include when joining the group. The user data for all joined members
			// can be retrieved by sending a DescribeGroupRequest to the broker that is the
			// coordinator for the group.
			UserData []byte
		}
	}
}
Config extends sarama.Config with a Group-specific namespace.
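As a rough illustration, the sketch below tweaks a few Group settings before creating a consumer. The kafka import name, the NewConfig constructor (referenced by the field comments above but not listed in this index) and the whitelist pattern are assumptions; time and regexp are from the standard library.
config := kafka.NewConfig()                               // assumed constructor; see the field comments above
config.Group.PartitionStrategy = kafka.StrategyRoundRobin // default is StrategyRange
config.Group.Return.Notifications = true                  // remember to drain Notifications()
config.Group.Session.Timeout = 20 * time.Second           // must be within the broker's allowed range
config.Group.Topics.Whitelist = regexp.MustCompile(`^events\..*`) // hypothetical topic pattern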
type Consumer ¶
type Consumer struct {
// contains filtered or unexported fields
}
Consumer is a cluster group consumer
func NewConsumer ¶
func NewConsumer(addrs []string, groupID string, topics []string, config *Config, osf OffsetStoreFactory) (*Consumer, error)
NewConsumer initializes a new consumer
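A minimal end-to-end sketch of the default, multiplexed mode follows. The import path, broker address, topic and group names are placeholders, and NewConfig is assumed from the Config field comments; the remaining calls follow the signatures listed in this document.
package main

import (
	"log"
	"os"
	"os/signal"

	kafka "example.com/kafka" // placeholder import path for this package
)

func main() {
	config := kafka.NewConfig() // assumed constructor; see Config above

	brokers := []string{"127.0.0.1:9092"} // placeholder broker address
	topics := []string{"my-topic"}        // placeholder topic

	consumer, err := kafka.NewConsumer(brokers, "my-group", topics, config, &kafka.DefaultOffsetStoreFactory{})
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	// trap SIGINT to trigger a clean shutdown
	signals := make(chan os.Signal, 1)
	signal.Notify(signals, os.Interrupt)

	for {
		select {
		case msg, ok := <-consumer.Messages():
			if !ok {
				return
			}
			log.Printf("%s/%d/%d\t%s", msg.Topic, msg.Partition, msg.Offset, msg.Value)
			consumer.MarkOffset(msg, "") // mark as processed; committed asynchronously
		case <-signals:
			return
		}
	}
}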
func NewConsumerFromClient ¶
func NewConsumerFromClient(client *Client, groupID string, topics []string, osf OffsetStoreFactory) (*Consumer, error)
NewConsumerFromClient initializes a new consumer from an existing client.
Please note that clients cannot be shared between consumers (due to Kafka internals); they can only be re-used, which requires the user to call Close() on the first consumer before using this method again to initialize another one. Attempts to use a client with more than one consumer at a time will return errors.
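A sketch of the sequential re-use described above; how the *Client is obtained is not covered by this index, so `client` and `topics` are assumed to exist already.
c1, err := kafka.NewConsumerFromClient(client, "my-group", topics, &kafka.DefaultOffsetStoreFactory{})
if err != nil {
	log.Fatal(err)
}
// ... consume with c1 ...
if err := c1.Close(); err != nil { // close the first consumer before re-using the client
	log.Fatal(err)
}

c2, err := kafka.NewConsumerFromClient(client, "my-group", topics, &kafka.DefaultOffsetStoreFactory{})
if err != nil {
	log.Fatal(err)
}
defer c2.Close()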
func (*Consumer) CommitOffsets ¶
func (c *Consumer) CommitOffsets() error
CommitOffsets allows you to manually commit previously marked offsets. By default there is no need to call this function, as the consumer will commit offsets automatically using the Config.Consumer.Offsets.CommitInterval setting.
Please be aware that calling this function during an internal rebalance cycle may return broker errors (e.g. sarama.ErrUnknownMemberId or sarama.ErrIllegalGeneration).
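For example, an explicit flush before a controlled shutdown might look like the sketch below, where `consumer` and `msg` come from the NewConsumer example above.
consumer.MarkOffset(msg, "") // mark the last processed message
if err := consumer.CommitOffsets(); err != nil {
	log.Printf("offset commit failed: %s", err) // may include rebalance-related broker errors
}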
func (*Consumer) Errors ¶
func (c *Consumer) Errors() <-chan error
Errors returns a read channel of errors that occur during offset management, if enabled. By default, errors are logged and not returned over this channel. If you want to implement any custom error handling, set your config's Consumer.Return.Errors setting to true, and read from this channel.
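A sketch of draining the channel in a separate goroutine, assuming `consumer` was created with config.Consumer.Return.Errors set to true.
go func() {
	for err := range consumer.Errors() {
		log.Printf("consumer error: %s", err)
	}
}()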
func (*Consumer) HighWaterMarks ¶
func (c *Consumer) HighWaterMarks() map[string]map[int32]int64
HighWaterMarks returns the current high water marks for each topic and partition. Consistency between partitions is not guaranteed since high water marks are updated separately.
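A sketch of estimating lag for a just-received message; `consumer` is the *Consumer from the NewConsumer example, the sarama and kafka packages are assumed to be imported, and the helper name is hypothetical.
// logLag reports an approximate number of messages still to be consumed on the
// message's partition (the high water mark is the offset of the next message to be produced).
func logLag(consumer *kafka.Consumer, msg *sarama.ConsumerMessage) {
	hwm := consumer.HighWaterMarks()[msg.Topic][msg.Partition]
	log.Printf("%s/%d lag: %d", msg.Topic, msg.Partition, hwm-msg.Offset-1)
}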
func (*Consumer) MarkOffset ¶
func (c *Consumer) MarkOffset(msg *sarama.ConsumerMessage, metadata string)
MarkOffset marks the provided message as processed, alongside a metadata string that represents the state of the partition consumer at that point in time. The metadata string can be used by another consumer to restore that state, so it can resume consumption.
Note: calling MarkOffset does not necessarily commit the offset to the backend store immediately for efficiency reasons, and it may never be committed if your application crashes. This means that you may end up processing the same message twice, and your processing should ideally be idempotent.
func (*Consumer) MarkOffsets ¶
func (c *Consumer) MarkOffsets(s *OffsetStash)
MarkOffsets marks stashed offsets as processed. See MarkOffset for additional explanation.
func (*Consumer) MarkPartitionOffset ¶
func (c *Consumer) MarkPartitionOffset(topic string, partition int32, offset int64, metadata string)
MarkPartitionOffset marks an offset of the provided topic/partition as processed. See MarkOffset for additional explanation.
func (*Consumer) Messages ¶
func (c *Consumer) Messages() <-chan *sarama.ConsumerMessage
Messages returns the read channel for the messages that are returned by the broker.
This channel will only receive messages if the Config.Group.Mode option is set to ConsumerModeMultiplex (the default).
func (*Consumer) Notifications ¶
func (c *Consumer) Notifications() <-chan *Notification
Notifications returns a channel of Notifications that occur during consumer rebalancing. Notifications will only be emitted over this channel if your config's Group.Return.Notifications setting is set to true.
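A sketch of observing rebalances, assuming `consumer` was created with config.Group.Return.Notifications set to true.
go func() {
	for ntf := range consumer.Notifications() {
		log.Printf("rebalance %s: claimed=%v released=%v current=%v",
			ntf.Type, ntf.Claimed, ntf.Released, ntf.Current)
	}
}()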
func (*Consumer) OffsetStore ¶
func (c *Consumer) OffsetStore() OffsetStore
func (*Consumer) Partitions ¶
func (c *Consumer) Partitions() <-chan PartitionConsumer
Partitions returns the read channels for individual partitions of this broker.
This channel will only return partitions if the Config.Group.Mode option is set to ConsumerModePartitions.
The Partitions() channel must be listened to for the life of this consumer; when a rebalance happens old partitions will be closed (naturally come to completion) and new ones will be emitted. The returned channel will only close when the consumer is completely shut down.
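A sketch of the partition mode, assuming `consumer` was created with config.Group.Mode set to kafka.ConsumerModePartitions.
for pc := range consumer.Partitions() {
	go func(pc kafka.PartitionConsumer) {
		defer pc.Close()
		for msg := range pc.Messages() {
			log.Printf("%s/%d/%d\t%s", msg.Topic, msg.Partition, msg.Offset, msg.Value)
			consumer.MarkOffset(msg, "")
		}
	}(pc)
}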
func (*Consumer) Subscriptions ¶
func (c *Consumer) Subscriptions() map[string][]int32
Subscriptions returns the consumed topics and partitions.
type ConsumerMode ¶
type ConsumerMode uint8
const (
	ConsumerModeMultiplex ConsumerMode = iota
	ConsumerModePartitions
)
type DefaultOffsetStoreFactory ¶
type DefaultOffsetStoreFactory struct{}
func (*DefaultOffsetStoreFactory) GenOffsetStore ¶
func (f *DefaultOffsetStoreFactory) GenOffsetStore(c *Consumer) OffsetStore
type Error ¶
type Error struct {
	Ctx string
	// contains filtered or unexported fields
}
Error instances are wrappers for internal errors with a context and may be returned through the consumer's Errors() channel
type Notification ¶
type Notification struct {
	// Type exposes the notification type
	Type NotificationType

	// Claimed contains topic/partitions that were claimed by this rebalance cycle
	Claimed map[string][]int32

	// Released contains topic/partitions that were released as part of this rebalance cycle
	Released map[string][]int32

	// Current are topic/partitions that are currently claimed by the consumer
	Current map[string][]int32
}
Notification instances are state events emitted by the consumers on rebalance.
type NotificationType ¶
type NotificationType uint8
NotificationType defines the type of notification
const (
	UnknownNotification NotificationType = iota
	RebalanceStart
	RebalanceOK
	RebalanceError
)
func (NotificationType) String ¶
func (t NotificationType) String() string
String describes the notification type
type OffsetStash ¶
type OffsetStash struct {
// contains filtered or unexported fields
}
OffsetStash allows you to accumulate offsets and mark them as processed in bulk.
func (*OffsetStash) MarkOffset ¶
func (s *OffsetStash) MarkOffset(msg *sarama.ConsumerMessage, metadata string)
MarkOffset stashes the provided message offset
func (*OffsetStash) MarkPartitionOffset ¶
func (s *OffsetStash) MarkPartitionOffset(topic string, partition int32, offset int64, metadata string)
MarkPartitionOffset stashes the offset for the provided topic/partition combination
func (*OffsetStash) Offsets ¶
func (s *OffsetStash) Offsets() map[string]int64
Offsets returns the latest stashed offsets by topic-partition
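A sketch of batch marking with an OffsetStash. It assumes the zero value of OffsetStash is ready for use (prefer a constructor if the package provides one) and that `consumer` and the `process` handler come from the surrounding application.
stash := &kafka.OffsetStash{} // assumption: usable zero value
for i := 0; i < 100; i++ {
	msg, ok := <-consumer.Messages()
	if !ok {
		break
	}
	process(msg)              // hypothetical message handler
	stash.MarkOffset(msg, "") // stash instead of marking on the consumer directly
}
consumer.MarkOffsets(stash) // mark the whole batch as processed at once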
type OffsetStore ¶
type OffsetStore interface {
	CommitOffset(req *offsets.OffsetCommitRequest) (*offsets.OffsetCommitResponse, error)
	FetchOffset(req *offsets.OffsetFetchRequest) (*offsets.OffsetFetchResponse, error)
	Close() error
}
type OffsetStoreFactory ¶
type OffsetStoreFactory interface {
GenOffsetStore(c *Consumer) OffsetStore
}
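As an illustration, the sketch below decorates whatever store the DefaultOffsetStoreFactory generates with logging. The type names are hypothetical and the kafka and offsets import paths are assumptions; the method signatures follow the interfaces listed above.
type loggingStoreFactory struct {
	inner kafka.OffsetStoreFactory // e.g. &kafka.DefaultOffsetStoreFactory{}
}

func (f *loggingStoreFactory) GenOffsetStore(c *kafka.Consumer) kafka.OffsetStore {
	return &loggingStore{inner: f.inner.GenOffsetStore(c)}
}

type loggingStore struct {
	inner kafka.OffsetStore
}

func (s *loggingStore) CommitOffset(req *offsets.OffsetCommitRequest) (*offsets.OffsetCommitResponse, error) {
	log.Println("committing offsets")
	return s.inner.CommitOffset(req)
}

func (s *loggingStore) FetchOffset(req *offsets.OffsetFetchRequest) (*offsets.OffsetFetchResponse, error) {
	log.Println("fetching offsets")
	return s.inner.FetchOffset(req)
}

func (s *loggingStore) Close() error {
	return s.inner.Close()
}
Such a factory can then be passed to NewConsumer or NewConsumerFromClient in place of &kafka.DefaultOffsetStoreFactory{}.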
type PartitionConsumer ¶
type PartitionConsumer interface {
	// Close stops the PartitionConsumer from fetching messages. It will initiate a shutdown, drain
	// the Messages channel, harvest any errors & return them to the caller and trigger a rebalance.
	Close() error

	// Messages returns the read channel for the messages that are returned by
	// the broker.
	Messages() <-chan *sarama.ConsumerMessage

	// HighWaterMarkOffset returns the high water mark offset of the partition,
	// i.e. the offset that will be used for the next message that will be produced.
	// You can use this to determine how far behind the processing is.
	HighWaterMarkOffset() int64

	// Topic returns the consumed topic name
	Topic() string

	// Partition returns the consumed partition
	Partition() int32
}
PartitionConsumer allows code to consume individual partitions from the cluster.
See docs for Consumer.Partitions() for more on how to implement this.
type Strategy ¶
type Strategy string
Strategy for partition to consumer assignment.
const (
	// StrategyRange is the default and assigns partition ranges to consumers.
	// Example with six partitions and two consumers:
	//   C1: [0, 1, 2]
	//   C2: [3, 4, 5]
	StrategyRange Strategy = "range"

	// StrategyRoundRobin assigns partitions by alternating over consumers.
	// Example with six partitions and two consumers:
	//   C1: [0, 2, 4]
	//   C2: [1, 3, 5]
	StrategyRoundRobin Strategy = "roundrobin"
)