Documentation ¶
Overview ¶
Package sarama provides client libraries for the Kafka 0.8 protocol. The AsyncProducer object is the high-level API for producing messages asynchronously; the SyncProducer provides a blocking API for the same purpose. The Consumer object is the high-level API for consuming messages. The Client object provides metadata management functionality that is shared between the higher-level objects.
For lower-level needs, the Broker and Request/Response objects permit precise control over each connection and message sent on the wire.
The Request/Response objects and properties are mostly undocumented, as they line up exactly with the protocol fields documented by Kafka at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
Index ¶
- Constants
- Variables
- type AsyncProducer
- type Broker
- func (b *Broker) Addr() string
- func (b *Broker) Close() error
- func (b *Broker) CommitOffset(request *OffsetCommitRequest) (*OffsetCommitResponse, error)
- func (b *Broker) Connected() (bool, error)
- func (b *Broker) Fetch(request *FetchRequest) (*FetchResponse, error)
- func (b *Broker) FetchOffset(request *OffsetFetchRequest) (*OffsetFetchResponse, error)
- func (b *Broker) GetAvailableOffsets(request *OffsetRequest) (*OffsetResponse, error)
- func (b *Broker) GetConsumerMetadata(request *ConsumerMetadataRequest) (*ConsumerMetadataResponse, error)
- func (b *Broker) GetMetadata(request *MetadataRequest) (*MetadataResponse, error)
- func (b *Broker) ID() int32
- func (b *Broker) Open(conf *Config) error
- func (b *Broker) Produce(request *ProduceRequest) (*ProduceResponse, error)
- type ByteEncoder
- type Client
- type CompressionCodec
- type Config
- type ConfigurationError
- type Consumer
- type ConsumerError
- type ConsumerErrors
- type ConsumerMessage
- type ConsumerMetadataRequest
- type ConsumerMetadataResponse
- type DescribeGroupsRequest
- type DescribeGroupsResponse
- type Encoder
- type FetchRequest
- type FetchResponse
- type FetchResponseBlock
- type GroupDescription
- type GroupMemberDescription
- type HeartbeatRequest
- type HeartbeatResponse
- type JoinGroupRequest
- type JoinGroupResponse
- type KError
- type LeaveGroupRequest
- type LeaveGroupResponse
- type ListGroupsRequest
- type ListGroupsResponse
- type Message
- type MessageBlock
- type MessageSet
- type MetadataRequest
- type MetadataResponse
- type OffsetCommitRequest
- type OffsetCommitResponse
- type OffsetFetchRequest
- type OffsetFetchResponse
- type OffsetFetchResponseBlock
- type OffsetManager
- type OffsetRequest
- type OffsetResponse
- type OffsetResponseBlock
- type PacketDecodingError
- type PacketEncodingError
- type PartitionConsumer
- type PartitionMetadata
- type PartitionOffsetManager
- type Partitioner
- type PartitionerConstructor
- type ProduceRequest
- type ProduceResponse
- type ProduceResponseBlock
- type ProducerError
- type ProducerErrors
- type ProducerMessage
- type RequiredAcks
- type StdLogger
- type StringEncoder
- type SyncGroupRequest
- type SyncGroupResponse
- type SyncProducer
- type TopicMetadata
Constants ¶
const ( // OffsetNewest stands for the log head offset, i.e. the offset that will be // assigned to the next message that will be produced to the partition. You // can send this to a client's GetOffset method to get this offset, or when // calling ConsumePartition to start consuming new messages. OffsetNewest int64 = -1 // OffsetOldest stands for the oldest offset available on the broker for a // partition. You can send this to a client's GetOffset method to get this // offset, or when calling ConsumePartition to start consuming from the // oldest offset that is still available on the broker. OffsetOldest int64 = -2 )
const ReceiveTime int64 = -1
ReceiveTime is a special value for the timestamp field of Offset Commit Requests which tells the broker to set the timestamp to the time at which the request was received. The timestamp is only used if message version 1 is used, which requires kafka 0.8.2.
Variables ¶
var ErrAlreadyConnected = errors.New("kafka: broker connection already initiated")
ErrAlreadyConnected is the error returned when calling Open() on a Broker that is already connected or connecting.
var ErrClosedClient = errors.New("kafka: tried to use a client that was closed")
ErrClosedClient is the error returned when a method is called on a client that has been closed.
var ErrIncompleteResponse = errors.New("kafka: response did not contain all the expected topic/partition blocks")
ErrIncompleteResponse is the error returned when the server returns a syntactically valid response, but it does not contain the expected information.
var ErrInsufficientData = errors.New("kafka: insufficient data to decode packet, more bytes expected")
ErrInsufficientData is returned when decoding and the packet is truncated. This can be expected when requesting messages, since as an optimization the server is allowed to return a partial message at the end of the message set.
var ErrInvalidPartition = errors.New("kafka: partitioner returned an invalid partition index")
ErrInvalidPartition is the error returned when a partitioner returns an invalid partition index (meaning one outside of the range [0...numPartitions-1]).
var ErrMessageTooLarge = errors.New("kafka: message is larger than Consumer.Fetch.Max")
ErrMessageTooLarge is returned when the next message to consume is larger than the configured Consumer.Fetch.Max
var ErrNotConnected = errors.New("kafka: broker not connected")
ErrNotConnected is the error returned when trying to send or call Close() on a Broker that is not connected.
var ErrOutOfBrokers = errors.New("kafka: client has run out of available brokers to talk to (Is your cluster reachable?)")
ErrOutOfBrokers is the error returned when the client has run out of brokers to talk to because all of them errored or otherwise failed to respond.
var ErrShuttingDown = errors.New("kafka: message received by producer in process of shutting down")
ErrShuttingDown is returned when a producer receives a message during shutdown.
var MaxRequestSize int32 = 100 * 1024 * 1024
MaxRequestSize is the maximum size (in bytes) of any request that Sarama will attempt to send. Trying to send a request larger than this will result in a PacketEncodingError. The default of 100 MiB is aligned with Kafka's default `socket.request.max.bytes`, which is the largest request the broker will attempt to process.
var MaxResponseSize int32 = 100 * 1024 * 1024
MaxResponseSize is the maximum size (in bytes) of any response that Sarama will attempt to parse. If a broker returns a response message larger than this value, Sarama will return a PacketDecodingError to protect the client from running out of memory. Please note that brokers do not have any natural limit on the size of responses they send. In particular, they can send arbitrarily large fetch responses to consumers (see https://issues.apache.org/jira/browse/KAFKA-2063).
var PanicHandler func(interface{})
PanicHandler is called for recovering from panics spawned internally to the library (and thus not recoverable by the caller's goroutine). Defaults to nil, which means panics are not recovered.
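A minimal sketch of installing a handler. The import path (github.com/Shopify/sarama) is an assumption, as this page does not state it; the same assumption is made in the sketches below.

```go
package example

import (
	"log"

	"github.com/Shopify/sarama"
)

func init() {
	// With no handler installed, a panic in one of Sarama's internal goroutines
	// is not recovered and will take down the process; with a handler installed,
	// it is recovered and the panic value is passed here instead.
	sarama.PanicHandler = func(v interface{}) {
		log.Printf("recovered panic from sarama: %v", v)
	}
}
```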
Functions ¶
This section is empty.
Types ¶
type AsyncProducer ¶
type AsyncProducer interface { // AsyncClose triggers a shutdown of the producer, flushing any messages it may // have buffered. The shutdown has completed when both the Errors and Successes // channels have been closed. When calling AsyncClose, you *must* continue to // read from those channels in order to drain the results of any messages in // flight. AsyncClose() // Close shuts down the producer and flushes any messages it may have buffered. // You must call this function before a producer object passes out of scope, as // it may otherwise leak memory. You must call this before calling Close on the // underlying client. Close() error // Input is the input channel for the user to write messages to that they // wish to send. Input() chan<- *ProducerMessage // Successes is the success output channel back to the user when Return.Successes is // enabled. If Return.Successes is true, you MUST read from this channel or the // Producer will deadlock. It is suggested that you send and read messages // together in a single select statement. Successes() <-chan *ProducerMessage // Errors is the error output channel back to the user. You MUST read from this // channel or the Producer will deadlock when the channel is full. Alternatively, // you can set Producer.Return.Errors in your config to false, which prevents // errors from being returned. Errors() <-chan *ProducerError }
AsyncProducer publishes Kafka messages using a non-blocking API. It routes messages to the correct broker for the provided topic-partition, refreshing metadata as appropriate, and parses responses for errors. You must read from the Errors() channel or the producer will deadlock. You must call Close() or AsyncClose() on a producer to avoid leaks: it will not be garbage-collected automatically when it passes out of scope.
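A sketch of the single-select pattern recommended above; the broker address and topic are placeholders, and the import path is an assumption as noted earlier.

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // we read from Successes() below

	producer, err := sarama.NewAsyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatalln(err)
	}
	defer producer.Close() // flushes anything still buffered

	msg := &sarama.ProducerMessage{Topic: "my_topic", Value: sarama.StringEncoder("hello")}

	// Send and read results in one select loop so neither output channel can block.
	input := producer.Input()
	for {
		select {
		case input <- msg:
			input = nil // sent once; a nil channel disables this case
		case success := <-producer.Successes():
			log.Printf("delivered to partition %d at offset %d", success.Partition, success.Offset)
			return
		case perr := <-producer.Errors():
			log.Println("delivery failed:", perr.Err)
			return
		}
	}
}
```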
func NewAsyncProducer ¶
func NewAsyncProducer(addrs []string, conf *Config) (AsyncProducer, error)
NewAsyncProducer creates a new AsyncProducer using the given broker addresses and configuration.
func NewAsyncProducerFromClient ¶
func NewAsyncProducerFromClient(client Client) (AsyncProducer, error)
NewAsyncProducerFromClient creates a new AsyncProducer using the given client. It is still necessary to call Close() on the underlying client when shutting down this producer.
type Broker ¶
type Broker struct {
// contains filtered or unexported fields
}
Broker represents a single Kafka broker connection. All operations on this object are entirely concurrency-safe.
func NewBroker ¶
NewBroker creates and returns a Broker targeting the given host:port address. This does not attempt to actually connect; you must call Open() for that.
func (*Broker) Addr ¶
Addr returns the broker address as either retrieved from Kafka's metadata or passed to NewBroker.
func (*Broker) CommitOffset ¶
func (b *Broker) CommitOffset(request *OffsetCommitRequest) (*OffsetCommitResponse, error)
func (*Broker) Connected ¶
Connected returns true if the broker is connected and false otherwise. If the broker is not connected but it had tried to connect, the error from that connection attempt is also returned.
func (*Broker) Fetch ¶
func (b *Broker) Fetch(request *FetchRequest) (*FetchResponse, error)
func (*Broker) FetchOffset ¶
func (b *Broker) FetchOffset(request *OffsetFetchRequest) (*OffsetFetchResponse, error)
func (*Broker) GetAvailableOffsets ¶
func (b *Broker) GetAvailableOffsets(request *OffsetRequest) (*OffsetResponse, error)
func (*Broker) GetConsumerMetadata ¶
func (b *Broker) GetConsumerMetadata(request *ConsumerMetadataRequest) (*ConsumerMetadataResponse, error)
func (*Broker) GetMetadata ¶
func (b *Broker) GetMetadata(request *MetadataRequest) (*MetadataResponse, error)
func (*Broker) ID ¶
ID returns the broker ID retrieved from Kafka's metadata, or -1 if that is not known.
func (*Broker) Open ¶
Open tries to connect to the Broker if it is not already connected or connecting, but does not block waiting for the connection to complete. This means that any subsequent operations on the broker will block waiting for the connection to succeed or fail. To get the effect of a fully synchronous Open call, follow it by a call to Connected(). The only errors Open will return directly are ConfigurationError or ErrAlreadyConnected. If conf is nil, the result of NewConfig() is used.
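A sketch of the synchronous-open pattern just described. The NewBroker signature is assumed to take a single host:port string, as suggested by its description above.

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	broker := sarama.NewBroker("localhost:9092")
	if err := broker.Open(nil); err != nil { // nil conf: the result of NewConfig() is used
		log.Fatalln(err)
	}
	defer broker.Close()

	// Open returns before the connection completes; Connected() reports the outcome.
	connected, err := broker.Connected()
	if err != nil || !connected {
		log.Fatalln("broker connection failed:", err)
	}

	response, err := broker.GetMetadata(&sarama.MetadataRequest{Topics: []string{"my_topic"}})
	if err != nil {
		log.Fatalln(err)
	}
	log.Println("brokers in cluster:", len(response.Brokers))
}
```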
func (*Broker) Produce ¶
func (b *Broker) Produce(request *ProduceRequest) (*ProduceResponse, error)
type ByteEncoder ¶
type ByteEncoder []byte
ByteEncoder implements the Encoder interface for Go byte slices so that they can be used as the Key or Value in a ProducerMessage.
func (ByteEncoder) Encode ¶
func (b ByteEncoder) Encode() ([]byte, error)
func (ByteEncoder) Length ¶
func (b ByteEncoder) Length() int
type Client ¶
type Client interface { // Config returns the Config struct of the client. This struct should not be // altered after it has been created. Config() *Config // Topics returns the set of available topics as retrieved from cluster metadata. Topics() ([]string, error) // Partitions returns the sorted list of all partition IDs for the given topic. Partitions(topic string) ([]int32, error) // WritablePartitions returns the sorted list of all writable partition IDs for // the given topic, where "writable" means "having a valid leader accepting // writes". WritablePartitions(topic string) ([]int32, error) // Leader returns the broker object that is the leader of the current // topic/partition, as determined by querying the cluster metadata. Leader(topic string, partitionID int32) (*Broker, error) // Replicas returns the set of all replica IDs for the given partition. Replicas(topic string, partitionID int32) ([]int32, error) // RefreshMetadata takes a list of topics and queries the cluster to refresh the // available metadata for those topics. If no topics are provided, it will refresh // metadata for all topics. RefreshMetadata(topics ...string) error // GetOffset queries the cluster to get the most recent available offset at the // given time on the topic/partition combination. Time should be OffsetOldest for // the earliest available offset, OffsetNewest for the offset of the message that // will be produced next, or a time. GetOffset(topic string, partitionID int32, time int64) (int64, error) // Coordinator returns the coordinating broker for a consumer group. It will // return a locally cached value if it's available. You can call // RefreshCoordinator to update the cached value. This function only works on // Kafka 0.8.2 and higher. Coordinator(consumerGroup string) (*Broker, error) // RefreshCoordinator retrieves the coordinator for a consumer group and stores it // in local cache. This function only works on Kafka 0.8.2 and higher. RefreshCoordinator(consumerGroup string) error // Close shuts down all broker connections managed by this client. It is required // to call this function before a client object passes out of scope, as it will // otherwise leak memory. You must close any Producers or Consumers using a client // before you close the client. Close() error // Closed returns true if the client has already had Close called on it Closed() bool }
Client is a generic Kafka client. It manages connections to one or more Kafka brokers. You MUST call Close() on a client to avoid leaks; it will not be garbage-collected automatically when it passes out of scope. A single client can be safely shared by multiple concurrent Producers and Consumers.
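A sketch of creating and sharing a client. NewClient is not listed on this page but is assumed to be the package's client constructor, taking broker addresses and a *Config like the producer constructors do.

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	client, err := sarama.NewClient([]string{"localhost:9092"}, sarama.NewConfig()) // assumed constructor
	if err != nil {
		log.Fatalln(err)
	}
	defer client.Close() // close only after any producers/consumers built on it are closed

	topics, err := client.Topics()
	if err != nil {
		log.Fatalln(err)
	}
	for _, topic := range topics {
		partitions, err := client.Partitions(topic)
		if err != nil {
			log.Fatalln(err)
		}
		log.Printf("%s has %d partitions", topic, len(partitions))
	}

	// OffsetNewest asks for the offset that will be assigned to the next produced message.
	newest, err := client.GetOffset("my_topic", 0, sarama.OffsetNewest)
	if err != nil {
		log.Fatalln(err)
	}
	log.Println("next offset for my_topic/0:", newest)
}
```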
type CompressionCodec ¶
type CompressionCodec int8
CompressionCodec represents the various compression codecs recognized by Kafka in messages.
const ( CompressionNone CompressionCodec = 0 CompressionGZIP CompressionCodec = 1 CompressionSnappy CompressionCodec = 2 )
type Config ¶
type Config struct { // Net is the namespace for network-level properties used by the Broker, and // shared by the Client/Producer/Consumer. Net struct { // How many outstanding requests a connection is allowed to have before // sending on it blocks (default 5). MaxOpenRequests int // All three of the below configurations are similar to the // `socket.timeout.ms` setting in JVM kafka. All of them default // to 30 seconds. DialTimeout time.Duration // How long to wait for the initial connection. ReadTimeout time.Duration // How long to wait for a response. WriteTimeout time.Duration // How long to wait for a transmit. // NOTE: these config values have no compatibility guarantees; they may // change when Kafka releases its official TLS support in version 0.9. TLS struct { // Whether or not to use TLS when connecting to the broker // (defaults to false). Enable bool // The TLS configuration to use for secure connections if // enabled (defaults to nil). Config *tls.Config } // KeepAlive specifies the keep-alive period for an active network connection. // If zero, keep-alives are disabled. (default is 0: disabled). KeepAlive time.Duration } // Metadata is the namespace for metadata management properties used by the // Client, and shared by the Producer/Consumer. Metadata struct { Retry struct { // The total number of times to retry a metadata request when the // cluster is in the middle of a leader election (default 3). Max int // How long to wait for leader election to occur before retrying // (default 250ms). Similar to the JVM's `retry.backoff.ms`. Backoff time.Duration } // How frequently to refresh the cluster metadata in the background. // Defaults to 10 minutes. Set to 0 to disable. Similar to // `topic.metadata.refresh.interval.ms` in the JVM version. RefreshFrequency time.Duration } // Producer is the namespace for configuration related to producing messages, // used by the Producer. Producer struct { // The maximum permitted size of a message (defaults to 1000000). Should be // set equal to or smaller than the broker's `message.max.bytes`. MaxMessageBytes int // The level of acknowledgement reliability needed from the broker (defaults // to WaitForLocal). Equivalent to the `request.required.acks` setting of the // JVM producer. RequiredAcks RequiredAcks // The maximum duration the broker will wait for the receipt of the number of // RequiredAcks (defaults to 10 seconds). This is only relevant when // RequiredAcks is set to WaitForAll or a number > 1. Only supports // millisecond resolution; nanoseconds will be truncated. Equivalent to // the JVM producer's `request.timeout.ms` setting. Timeout time.Duration // The type of compression to use on messages (defaults to no compression). // Similar to `compression.codec` setting of the JVM producer. Compression CompressionCodec // Generates partitioners for choosing the partition to send messages to // (defaults to hashing the message key). Similar to the `partitioner.class` // setting for the JVM producer. Partitioner PartitionerConstructor // Return specifies what channels will be populated. If they are set to true, // you must read from the respective channels to prevent deadlock. Return struct { // If enabled, successfully delivered messages will be returned on the // Successes channel (default disabled). Successes bool // If enabled, messages that failed to deliver will be returned on the // Errors channel, including the error (default enabled). Errors bool } // The following config options control how often messages are batched up and // sent to the broker. By default, messages are sent as fast as possible, and // all messages received while the current batch is in-flight are placed // into the subsequent batch. Flush struct { // The best-effort number of bytes needed to trigger a flush. Use the // global sarama.MaxRequestSize to set a hard upper limit. Bytes int // The best-effort number of messages needed to trigger a flush. Use // `MaxMessages` to set a hard upper limit. Messages int // The best-effort frequency of flushes. Equivalent to // `queue.buffering.max.ms` setting of JVM producer. Frequency time.Duration // The maximum number of messages the producer will send in a single // broker request. Defaults to 0 for unlimited. Similar to // `queue.buffering.max.messages` in the JVM producer. MaxMessages int } Retry struct { // The total number of times to retry sending a message (default 3). // Similar to the `message.send.max.retries` setting of the JVM producer. Max int // How long to wait for the cluster to settle between retries // (default 100ms). Similar to the `retry.backoff.ms` setting of the // JVM producer. Backoff time.Duration } } // Consumer is the namespace for configuration related to consuming messages, // used by the Consumer. Consumer struct { Retry struct { // How long to wait after failing to read from a partition before // trying again (default 2s). Backoff time.Duration } // Fetch is the namespace for controlling how many bytes are retrieved by any // given request. Fetch struct { // The minimum number of message bytes to fetch in a request - the broker // will wait until at least this many are available. The default is 1, // as 0 causes the consumer to spin when no messages are available. // Equivalent to the JVM's `fetch.min.bytes`. Min int32 // The default number of message bytes to fetch from the broker in each // request (default 32768). This should be larger than the majority of // your messages, or else the consumer will spend a lot of time // negotiating sizes and not actually consuming. Similar to the JVM's // `fetch.message.max.bytes`. Default int32 // The maximum number of message bytes to fetch from the broker in a // single request. Messages larger than this will return // ErrMessageTooLarge and will not be consumable, so you must be sure // this is at least as large as your largest message. Defaults to 0 // (no limit). Similar to the JVM's `fetch.message.max.bytes`. The // global `sarama.MaxResponseSize` still applies. Max int32 } // The maximum amount of time the broker will wait for Consumer.Fetch.Min // bytes to become available before it returns fewer than that anyway. The // default is 250ms, since 0 causes the consumer to spin when no events are // available. 100-500ms is a reasonable range for most cases. Kafka only // supports precision up to milliseconds; nanoseconds will be truncated. // Equivalent to the JVM's `fetch.wait.max.ms`. MaxWaitTime time.Duration // The maximum amount of time the consumer expects a message to take to process // for the user. If writing to the Messages channel takes longer than this, // that partition will stop fetching more messages until it can proceed again. // Note that, since the Messages channel is buffered, the actual grace time is // (MaxProcessingTime * ChannelBufferSize). Defaults to 100ms. MaxProcessingTime time.Duration // Return specifies what channels will be populated. If they are set to true, // you must read from them to prevent deadlock. Return struct { // If enabled, any errors that occurred while consuming are returned on // the Errors channel (default disabled). Errors bool } // Offsets specifies configuration for how and when to commit consumed // offsets. This currently requires the manual use of an OffsetManager // but will eventually be automated. Offsets struct { // How frequently to commit updated offsets. Defaults to 1s. CommitInterval time.Duration // The initial offset to use if no offset was previously committed. // Should be OffsetNewest or OffsetOldest. Defaults to OffsetNewest. Initial int64 } } // A user-provided string sent with every request to the brokers for logging, // debugging, and auditing purposes. Defaults to "sarama", but you should // probably set it to something specific to your application. ClientID string // The number of events to buffer in internal and external channels. This // permits the producer and consumer to continue processing some messages // in the background while user code is working, greatly improving throughput. // Defaults to 256. ChannelBufferSize int }
Config is used to pass multiple configuration options to Sarama's constructors.
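A sketch of typical Config setup: NewConfig (referenced under Broker.Open above) returns a Config populated with the defaults listed in the struct comments, which can then be overridden field by field.

```go
package example

import "github.com/Shopify/sarama"

func buildConfig() *sarama.Config {
	config := sarama.NewConfig()
	config.ClientID = "my-application"                     // sent with every request; useful in broker logs
	config.Producer.RequiredAcks = sarama.WaitForAll       // wait for the full set of in-sync replicas
	config.Producer.Compression = sarama.CompressionSnappy // compress produced message sets
	config.Producer.Return.Successes = true                // we intend to read from Successes()
	config.Consumer.Return.Errors = true                   // we intend to read from Errors()
	config.Consumer.Offsets.Initial = sarama.OffsetOldest  // start from the oldest offset if none was committed
	return config
}
```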
type ConfigurationError ¶
type ConfigurationError string
ConfigurationError is the type of error returned from a constructor (e.g. NewClient, or NewConsumer) when the specified configuration is invalid.
func (ConfigurationError) Error ¶
func (err ConfigurationError) Error() string
type Consumer ¶
type Consumer interface { // Topics returns the set of available topics as retrieved from the cluster // metadata. This method is the same as Client.Topics(), and is provided for // convenience. Topics() ([]string, error) // Partitions returns the sorted list of all partition IDs for the given topic. // This method is the same as Client.Partitions(), and is provided for convenience. Partitions(topic string) ([]int32, error) // ConsumePartition creates a PartitionConsumer on the given topic/partition with // the given offset. It will return an error if this Consumer is already consuming // on the given topic/partition. Offset can be a literal offset, or OffsetNewest // or OffsetOldest ConsumePartition(topic string, partition int32, offset int64) (PartitionConsumer, error) // Close shuts down the consumer. It must be called after all child // PartitionConsumers have already been closed. Close() error }
Consumer manages PartitionConsumers which process Kafka messages from brokers. You MUST call Close() on a consumer to avoid leaks; it will not be garbage-collected automatically when it passes out of scope.
Sarama's Consumer type does not currently support automatic consumer group rebalancing and offset tracking, however the https://github.com/wvanbergen/kafka library builds on Sarama to add this support. We plan to properly integrate this functionality at a later date.
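A sketch of the basic consume loop. The NewConsumer signature is not spelled out above and is assumed to mirror NewAsyncProducer (broker addresses plus a *Config); the ConsumerMessage field names used here (Partition, Offset, Value) are likewise assumptions, since the struct's fields are not listed on this page.

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatalln(err)
	}
	defer consumer.Close() // runs after pc.Close() below, as the docs require

	pc, err := consumer.ConsumePartition("my_topic", 0, sarama.OffsetNewest)
	if err != nil {
		log.Fatalln(err)
	}
	defer pc.Close()

	// The simplest pattern: range over the Messages channel.
	for msg := range pc.Messages() {
		log.Printf("partition %d offset %d: %s", msg.Partition, msg.Offset, string(msg.Value))
	}
}
```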
func NewConsumer ¶
NewConsumer creates a new consumer using the given broker addresses and configuration.
func NewConsumerFromClient ¶
NewConsumerFromClient creates a new consumer using the given client. It is still necessary to call Close() on the underlying client when shutting down this consumer.
type ConsumerError ¶
ConsumerError is what is provided to the user when an error occurs. It wraps an error and includes the topic and partition.
func (ConsumerError) Error ¶
func (ce ConsumerError) Error() string
type ConsumerErrors ¶
type ConsumerErrors []*ConsumerError
ConsumerErrors is a type that wraps a batch of errors and implements the Error interface. It can be returned from the PartitionConsumer's Close methods to avoid the need to manually drain errors when stopping.
func (ConsumerErrors) Error ¶
func (ce ConsumerErrors) Error() string
type ConsumerMessage ¶
ConsumerMessage encapsulates a Kafka message returned by the consumer.
type ConsumerMetadataRequest ¶
type ConsumerMetadataRequest struct {
ConsumerGroup string
}
type DescribeGroupsRequest ¶
type DescribeGroupsRequest struct {
Groups []string
}
func (*DescribeGroupsRequest) AddGroup ¶
func (r *DescribeGroupsRequest) AddGroup(group string)
type DescribeGroupsResponse ¶
type DescribeGroupsResponse struct {
Groups []*GroupDescription
}
type Encoder ¶
Encoder is a simple interface for any type that can be encoded as an array of bytes in order to be sent as the key or value of a Kafka message. Length() is provided as an optimization, and must return the same as len() on the result of Encode().
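A sketch of a custom Encoder (a hypothetical JSON encoder, not part of the package). The value is marshalled once and cached so that Length() and Encode() agree, as the interface requires.

```go
package example

import "encoding/json"

// jsonEncoder lazily marshals an arbitrary value to JSON and caches the result.
type jsonEncoder struct {
	value   interface{}
	encoded []byte
	err     error
}

func (j *jsonEncoder) ensure() {
	if j.encoded == nil && j.err == nil {
		j.encoded, j.err = json.Marshal(j.value)
	}
}

// Encode returns the JSON bytes (or the marshalling error).
func (j *jsonEncoder) Encode() ([]byte, error) {
	j.ensure()
	return j.encoded, j.err
}

// Length reports exactly len(Encode()), as the Encoder contract demands.
func (j *jsonEncoder) Length() int {
	j.ensure()
	return len(j.encoded)
}
```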
type FetchRequest ¶
type FetchResponse ¶
type FetchResponse struct {
Blocks map[string]map[int32]*FetchResponseBlock
}
func (*FetchResponse) AddError ¶
func (fr *FetchResponse) AddError(topic string, partition int32, err KError)
func (*FetchResponse) AddMessage ¶
func (fr *FetchResponse) AddMessage(topic string, partition int32, key, value Encoder, offset int64)
func (*FetchResponse) GetBlock ¶
func (fr *FetchResponse) GetBlock(topic string, partition int32) *FetchResponseBlock
type FetchResponseBlock ¶
type FetchResponseBlock struct { Err KError HighWaterMarkOffset int64 MsgSet MessageSet }
type GroupDescription ¶
type GroupMemberDescription ¶
type HeartbeatRequest ¶
type HeartbeatResponse ¶
type HeartbeatResponse struct {
Err KError
}
type JoinGroupRequest ¶
type JoinGroupRequest struct { GroupId string SessionTimeout int32 MemberId string ProtocolType string GroupProtocols map[string][]byte }
func (*JoinGroupRequest) AddGroupProtocol ¶
func (r *JoinGroupRequest) AddGroupProtocol(name string, metadata []byte)
type JoinGroupResponse ¶
type KError ¶
type KError int16
KError is the type of error that can be returned directly by the Kafka broker. See https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-ErrorCodes
const ( ErrNoError KError = 0 ErrUnknown KError = -1 ErrOffsetOutOfRange KError = 1 ErrInvalidMessage KError = 2 ErrUnknownTopicOrPartition KError = 3 ErrInvalidMessageSize KError = 4 ErrLeaderNotAvailable KError = 5 ErrNotLeaderForPartition KError = 6 ErrRequestTimedOut KError = 7 ErrBrokerNotAvailable KError = 8 ErrReplicaNotAvailable KError = 9 ErrMessageSizeTooLarge KError = 10 ErrStaleControllerEpochCode KError = 11 ErrOffsetMetadataTooLarge KError = 12 ErrOffsetsLoadInProgress KError = 14 ErrConsumerCoordinatorNotAvailable KError = 15 ErrNotCoordinatorForConsumer KError = 16 ErrInvalidTopic KError = 17 ErrMessageSetSizeTooLarge KError = 18 ErrNotEnoughReplicas KError = 19 ErrNotEnoughReplicasAfterAppend KError = 20 ErrInvalidRequiredAcks KError = 21 ErrIllegalGeneration KError = 22 ErrInconsistentGroupProtocol KError = 23 ErrInvalidGroupId KError = 24 ErrUnknownMemberId KError = 25 ErrInvalidSessionTimeout KError = 26 ErrRebalanceInProgress KError = 27 ErrInvalidCommitOffsetSize KError = 28 ErrTopicAuthorizationFailed KError = 29 ErrGroupAuthorizationFailed KError = 30 ErrClusterAuthorizationFailed KError = 31 )
Numeric error codes returned by the Kafka server.
type LeaveGroupRequest ¶
type LeaveGroupResponse ¶
type LeaveGroupResponse struct {
Err KError
}
type ListGroupsRequest ¶
type ListGroupsRequest struct { }
type ListGroupsResponse ¶
type Message ¶
type Message struct { Codec CompressionCodec // codec used to compress the message contents Key []byte // the message key, may be nil Value []byte // the message contents Set *MessageSet // the message set a message might wrap // contains filtered or unexported fields }
type MessageBlock ¶
func (*MessageBlock) Messages ¶
func (msb *MessageBlock) Messages() []*MessageBlock
Messages is a convenience helper which returns all the messages that are wrapped in this block.
type MessageSet ¶
type MessageSet struct { PartialTrailingMessage bool // whether the set on the wire contained an incomplete trailing MessageBlock Messages []*MessageBlock }
type MetadataRequest ¶
type MetadataRequest struct {
Topics []string
}
type MetadataResponse ¶
type MetadataResponse struct { Brokers []*Broker Topics []*TopicMetadata }
func (*MetadataResponse) AddBroker ¶
func (m *MetadataResponse) AddBroker(addr string, id int32)
func (*MetadataResponse) AddTopic ¶
func (m *MetadataResponse) AddTopic(topic string, err KError) *TopicMetadata
func (*MetadataResponse) AddTopicPartition ¶
func (m *MetadataResponse) AddTopicPartition(topic string, partition, brokerID int32, replicas, isr []int32, err KError)
type OffsetCommitRequest ¶
type OffsetCommitRequest struct { ConsumerGroup string ConsumerGroupGeneration int32 // v1 or later ConsumerID string // v1 or later RetentionTime int64 // v2 or later // Version can be: // - 0 (kafka 0.8.1 and later) // - 1 (kafka 0.8.2 and later) // - 2 (kafka 0.8.3 and later) Version int16 // contains filtered or unexported fields }
type OffsetCommitResponse ¶
type OffsetFetchRequest ¶
type OffsetFetchRequest struct { ConsumerGroup string Version int16 // contains filtered or unexported fields }
func (*OffsetFetchRequest) AddPartition ¶
func (r *OffsetFetchRequest) AddPartition(topic string, partitionID int32)
type OffsetFetchResponse ¶
type OffsetFetchResponse struct {
Blocks map[string]map[int32]*OffsetFetchResponseBlock
}
func (*OffsetFetchResponse) AddBlock ¶
func (r *OffsetFetchResponse) AddBlock(topic string, partition int32, block *OffsetFetchResponseBlock)
func (*OffsetFetchResponse) GetBlock ¶
func (r *OffsetFetchResponse) GetBlock(topic string, partition int32) *OffsetFetchResponseBlock
type OffsetManager ¶
type OffsetManager interface { // ManagePartition creates a PartitionOffsetManager on the given topic/partition. // It will return an error if this OffsetManager is already managing the given // topic/partition. ManagePartition(topic string, partition int32) (PartitionOffsetManager, error) // Close stops the OffsetManager from managing offsets. It is required to call // this function before an OffsetManager object passes out of scope, as it // will otherwise leak memory. You must call this after all the // PartitionOffsetManagers are closed. Close() error }
OffsetManager uses Kafka to store and fetch consumed partition offsets.
func NewOffsetManagerFromClient ¶
func NewOffsetManagerFromClient(group string, client Client) (OffsetManager, error)
NewOffsetManagerFromClient creates a new OffsetManager from the given client. It is still necessary to call Close() on the underlying client when finished with the partition manager.
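A sketch of offset management against an existing client (the NewClient constructor is assumed, as in the Client example above).

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	client, err := sarama.NewClient([]string{"localhost:9092"}, sarama.NewConfig()) // assumed constructor
	if err != nil {
		log.Fatalln(err)
	}
	defer client.Close()

	om, err := sarama.NewOffsetManagerFromClient("my-group", client)
	if err != nil {
		log.Fatalln(err)
	}
	defer om.Close()

	pom, err := om.ManagePartition("my_topic", 0)
	if err != nil {
		log.Fatalln(err)
	}
	defer pom.Close()

	// Resume from the last committed offset, or Consumer.Offsets.Initial if none exists.
	offset, metadata := pom.NextOffset()
	log.Printf("resuming at offset %d (metadata %q)", offset, metadata)

	// ...consume and process the message at `offset`, then record the progress.
	// MarkOffset does not commit immediately; commits happen on Offsets.CommitInterval.
	pom.MarkOffset(offset, "processed")
}
```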
type OffsetRequest ¶
type OffsetRequest struct {
// contains filtered or unexported fields
}
type OffsetResponse ¶
type OffsetResponse struct {
Blocks map[string]map[int32]*OffsetResponseBlock
}
func (*OffsetResponse) AddTopicPartition ¶
func (r *OffsetResponse) AddTopicPartition(topic string, partition int32, offset int64)
func (*OffsetResponse) GetBlock ¶
func (r *OffsetResponse) GetBlock(topic string, partition int32) *OffsetResponseBlock
type OffsetResponseBlock ¶
type PacketDecodingError ¶
type PacketDecodingError struct {
Info string
}
PacketDecodingError is returned when there was an error (other than truncated data) decoding the Kafka broker's response. This can be a bad CRC or length field, or any other invalid value.
func (PacketDecodingError) Error ¶
func (err PacketDecodingError) Error() string
type PacketEncodingError ¶
type PacketEncodingError struct {
Info string
}
PacketEncodingError is returned from a failure while encoding a Kafka packet. This can happen, for example, if you try to encode a string over 2^15 characters in length, since Kafka's encoding rules do not permit that.
func (PacketEncodingError) Error ¶
func (err PacketEncodingError) Error() string
type PartitionConsumer ¶
type PartitionConsumer interface { // AsyncClose initiates a shutdown of the PartitionConsumer. This method will // return immediately, after which you should wait until the 'messages' and // 'errors' channels are drained. It is required to call this function, or // Close, before a consumer object passes out of scope, as it will otherwise // leak memory. You must call this before calling Close on the underlying client. AsyncClose() // Close stops the PartitionConsumer from fetching messages. It is required to // call this function (or AsyncClose) before a consumer object passes out of // scope, as it will otherwise leak memory. You must call this before calling // Close on the underlying client. Close() error // Messages returns the read channel for the messages that are returned by // the broker. Messages() <-chan *ConsumerMessage // Errors returns a read channel of errors that occurred during consuming, if // enabled. By default, errors are logged and not returned over this channel. // If you want to implement any custom error handling, set your config's // Consumer.Return.Errors setting to true, and read from this channel. Errors() <-chan *ConsumerError // HighWaterMarkOffset returns the high water mark offset of the partition, // i.e. the offset that will be used for the next message that will be produced. // You can use this to determine how far behind the processing is. HighWaterMarkOffset() int64 }
PartitionConsumer processes Kafka messages from a given topic and partition. You MUST call Close() or AsyncClose() on a PartitionConsumer to avoid leaks; it will not be garbage-collected automatically when it passes out of scope.
The simplest way of using a PartitionConsumer is to loop over its Messages channel using a for/range loop. The PartitionConsumer will only stop itself in one case: when the offset being consumed is reported as out of range by the brokers. In this case you should decide what you want to do (try a different offset, notify a human, etc) and handle it appropriately. For all other error cases, it will just keep retrying. By default, it logs these errors to sarama.Logger; if you want to be notified directly of all errors, set your config's Consumer.Return.Errors to true and read from the Errors channel, using a select statement or a separate goroutine. Check out the Consumer examples to see implementations of these different approaches.
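A sketch of the error-aware variant described above: with Consumer.Return.Errors enabled, both channels are read in a single select loop.

```go
package example

import (
	"log"

	"github.com/Shopify/sarama"
)

// consumeLoop drains a PartitionConsumer whose config has Consumer.Return.Errors set.
func consumeLoop(pc sarama.PartitionConsumer) {
	for {
		select {
		case msg, ok := <-pc.Messages():
			if !ok {
				return // channel closed after Close/AsyncClose
			}
			log.Printf("offset %d: %s", msg.Offset, string(msg.Value))
		case err, ok := <-pc.Errors():
			if !ok {
				return
			}
			log.Println("consume error:", err)
		}
	}
}
```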
type PartitionMetadata ¶
type PartitionOffsetManager ¶
type PartitionOffsetManager interface { // NextOffset returns the next offset that should be consumed for the managed // partition, accompanied by metadata which can be used to reconstruct the state // of the partition consumer when it resumes. NextOffset() will return // `config.Consumer.Offsets.Initial` and an empty metadata string if no offset // was committed for this partition yet. NextOffset() (int64, string) // MarkOffset marks the provided offset as processed, alongside a metadata string // that represents the state of the partition consumer at that point in time. The // metadata string can be used by another consumer to restore that state, so it // can resume consumption. // // Note: calling MarkOffset does not necessarily commit the offset to the backend // store immediately for efficiency reasons, and it may never be committed if // your application crashes. This means that you may end up processing the same // message twice, and your processing should ideally be idempotent. MarkOffset(offset int64, metadata string) // Errors returns a read channel of errors that occur during offset management, if // enabled. By default, errors are logged and not returned over this channel. If // you want to implement any custom error handling, set your config's // Consumer.Return.Errors setting to true, and read from this channel. Errors() <-chan *ConsumerError // AsyncClose initiates a shutdown of the PartitionOffsetManager. This method will // return immediately, after which you should wait until the 'errors' channel has // been drained and closed. It is required to call this function, or Close before // a consumer object passes out of scope, as it will otherwise leak memory. You // must call this before calling Close on the underlying client. AsyncClose() // Close stops the PartitionOffsetManager from managing offsets. It is required to // call this function (or AsyncClose) before a PartitionOffsetManager object // passes out of scope, as it will otherwise leak memory. You must call this // before calling Close on the underlying client. Close() error }
PartitionOffsetManager uses Kafka to store and fetch consumed partition offsets. You MUST call Close() on a partition offset manager to avoid leaks; it will not be garbage-collected automatically when it passes out of scope.
type Partitioner ¶
type Partitioner interface { // Partition takes a message and partition count and chooses a partition Partition(message *ProducerMessage, numPartitions int32) (int32, error) // RequiresConsistency indicates to the user of the partitioner whether the // mapping of key->partition is consistent or not. Specifically, if a // partitioner requires consistency then it must be allowed to choose from all // partitions (even ones known to be unavailable), and its choice must be // respected by the caller. The obvious example is the HashPartitioner. RequiresConsistency() bool }
Partitioner is anything that, given a Kafka message and a number of partitions indexed [0...numPartitions-1], decides to which partition to send the message. RandomPartitioner, RoundRobinPartitioner and HashPartitioner are provided as simple default implementations.
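A sketch of a custom Partitioner (a hypothetical key-length partitioner), together with a constructor matching PartitionerConstructor so it can be assigned to Config.Producer.Partitioner.

```go
package example

import "github.com/Shopify/sarama"

// keyLengthPartitioner is a hypothetical Partitioner: keyless messages go to
// partition 0, keyed messages to a partition derived from the key's length.
type keyLengthPartitioner struct{}

// NewKeyLengthPartitioner matches the PartitionerConstructor signature.
func NewKeyLengthPartitioner(topic string) sarama.Partitioner {
	return &keyLengthPartitioner{}
}

func (p *keyLengthPartitioner) Partition(message *sarama.ProducerMessage, numPartitions int32) (int32, error) {
	if message.Key == nil {
		return 0, nil
	}
	return int32(message.Key.Length()) % numPartitions, nil
}

// RequiresConsistency is true because the key -> partition mapping must stay
// stable, so the partitioner needs to see all partitions, even unavailable ones.
func (p *keyLengthPartitioner) RequiresConsistency() bool {
	return true
}
```

It would then be wired in with config.Producer.Partitioner = NewKeyLengthPartitioner.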
func NewHashPartitioner ¶
func NewHashPartitioner(topic string) Partitioner
NewHashPartitioner returns a Partitioner which behaves as follows. If the message's key is nil, or fails to encode, then a random partition is chosen. Otherwise the FNV-1a hash of the encoded bytes of the message key is used, modulus the number of partitions. This ensures that messages with the same key always end up on the same partition.
func NewManualPartitioner ¶
func NewManualPartitioner(topic string) Partitioner
NewManualPartitioner returns a Partitioner which uses the partition manually set in the provided ProducerMessage's Partition field as the partition to produce to.
func NewRandomPartitioner ¶
func NewRandomPartitioner(topic string) Partitioner
NewRandomPartitioner returns a Partitioner which chooses a random partition each time.
func NewRoundRobinPartitioner ¶
func NewRoundRobinPartitioner(topic string) Partitioner
NewRoundRobinPartitioner returns a Partitioner which walks through the available partitions one at a time.
type PartitionerConstructor ¶
type PartitionerConstructor func(topic string) Partitioner
PartitionerConstructor is the type for a function capable of constructing new Partitioners.
type ProduceRequest ¶
type ProduceRequest struct { RequiredAcks RequiredAcks Timeout int32 // contains filtered or unexported fields }
func (*ProduceRequest) AddMessage ¶
func (p *ProduceRequest) AddMessage(topic string, partition int32, msg *Message)
func (*ProduceRequest) AddSet ¶
func (p *ProduceRequest) AddSet(topic string, partition int32, set *MessageSet)
type ProduceResponse ¶
type ProduceResponse struct {
Blocks map[string]map[int32]*ProduceResponseBlock
}
func (*ProduceResponse) AddTopicPartition ¶
func (pr *ProduceResponse) AddTopicPartition(topic string, partition int32, err KError)
func (*ProduceResponse) GetBlock ¶
func (pr *ProduceResponse) GetBlock(topic string, partition int32) *ProduceResponseBlock
type ProduceResponseBlock ¶
type ProducerError ¶
type ProducerError struct { Msg *ProducerMessage Err error }
ProducerError is the type of error generated when the producer fails to deliver a message. It contains the original ProducerMessage as well as the actual error value.
func (ProducerError) Error ¶
func (pe ProducerError) Error() string
type ProducerErrors ¶
type ProducerErrors []*ProducerError
ProducerErrors is a type that wraps a batch of "ProducerError"s and implements the Error interface. It can be returned from the Producer's Close method to avoid the need to manually drain the Errors channel when closing a producer.
func (ProducerErrors) Error ¶
func (pe ProducerErrors) Error() string
type ProducerMessage ¶
type ProducerMessage struct { Topic string // The Kafka topic for this message. // The partitioning key for this message. Pre-existing Encoders include // StringEncoder and ByteEncoder. Key Encoder // The actual message to store in Kafka. Pre-existing Encoders include // StringEncoder and ByteEncoder. Value Encoder // This field is used to hold arbitrary data you wish to include so it // will be available when receiving on the Successes and Errors channels. // Sarama completely ignores this field; it is only to be used for // pass-through data. Metadata interface{} // Offset is the offset of the message stored on the broker. This is only // guaranteed to be defined if the message was successfully delivered and // RequiredAcks is not NoResponse. Offset int64 // Partition is the partition that the message was sent to. This is only // guaranteed to be defined if the message was successfully delivered. Partition int32 // contains filtered or unexported fields }
ProducerMessage is the collection of elements passed to the Producer in order to send a message.
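A sketch of typical message construction; the topic name and the idea of carrying an order ID through Metadata are illustrative only.

```go
package example

import "github.com/Shopify/sarama"

func newOrderMessage(orderID, payload string) *sarama.ProducerMessage {
	return &sarama.ProducerMessage{
		Topic:    "orders",                      // placeholder topic
		Key:      sarama.StringEncoder(orderID), // same key -> same partition under the hash partitioner
		Value:    sarama.StringEncoder(payload),
		Metadata: orderID, // ignored by Sarama; comes back on the Successes/Errors channels
	}
}
```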
type RequiredAcks ¶
type RequiredAcks int16
RequiredAcks is used in Produce Requests to tell the broker how many replica acknowledgements it must see before responding. Any of the constants defined here are valid. On broker versions prior to 0.8.2.0 any other positive int16 is also valid (the broker will wait for that many acknowledgements) but in 0.8.2.0 and later this will raise an exception (it has been replaced by setting the `min.isr` value in the broker's configuration).
const ( // NoResponse doesn't send any response, the TCP ACK is all you get. NoResponse RequiredAcks = 0 // WaitForLocal waits for only the local commit to succeed before responding. WaitForLocal RequiredAcks = 1 // WaitForAll waits for all replicas to commit before responding. WaitForAll RequiredAcks = -1 )
type StdLogger ¶
type StdLogger interface { Print(v ...interface{}) Printf(format string, v ...interface{}) Println(v ...interface{}) }
StdLogger is used to log error messages.
type StringEncoder ¶
type StringEncoder string
StringEncoder implements the Encoder interface for Go strings so that they can be used as the Key or Value in a ProducerMessage.
func (StringEncoder) Encode ¶
func (s StringEncoder) Encode() ([]byte, error)
func (StringEncoder) Length ¶
func (s StringEncoder) Length() int
type SyncGroupRequest ¶
type SyncGroupRequest struct { GroupId string GenerationId int32 MemberId string GroupAssignments map[string][]byte }
func (*SyncGroupRequest) AddGroupAssignment ¶
func (r *SyncGroupRequest) AddGroupAssignment(memberId string, memberAssignment []byte)
type SyncGroupResponse ¶
type SyncProducer ¶
type SyncProducer interface { // SendMessage produces a given message, and returns only when it either has // succeeded or failed to produce. It will return the partition and the offset // of the produced message, or an error if the message failed to produce. SendMessage(msg *ProducerMessage) (partition int32, offset int64, err error) // Close shuts down the producer and flushes any messages it may have buffered. // You must call this function before a producer object passes out of scope, as // it may otherwise leak memory. You must call this before calling Close on the // underlying client. Close() error }
SyncProducer publishes Kafka messages. It routes messages to the correct broker, refreshing metadata as appropriate, and parses responses for errors. You must call Close() on a producer to avoid leaks; it may not be garbage-collected automatically when it passes out of scope.
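A sketch of blocking production with a SyncProducer; Producer.Return.Successes is set explicitly since the SyncProducer relies on delivery reports to return the partition and offset.

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // delivery reports back the partition/offset

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatalln(err)
	}
	defer producer.Close()

	msg := &sarama.ProducerMessage{Topic: "my_topic", Value: sarama.StringEncoder("hello")}
	partition, offset, err := producer.SendMessage(msg)
	if err != nil {
		log.Fatalln("failed to send:", err)
	}
	log.Printf("stored on partition %d at offset %d", partition, offset)
}
```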
func NewSyncProducer ¶
func NewSyncProducer(addrs []string, config *Config) (SyncProducer, error)
NewSyncProducer creates a new SyncProducer using the given broker addresses and configuration.
func NewSyncProducerFromClient ¶
func NewSyncProducerFromClient(client Client) (SyncProducer, error)
NewSyncProducerFromClient creates a new SyncProducer using the given client. It is still necessary to call Close() on the underlying client when shutting down this producer.
type TopicMetadata ¶
type TopicMetadata struct { Err KError Name string Partitions []*PartitionMetadata }
Source Files ¶
- async_producer.go
- broker.go
- client.go
- config.go
- consumer.go
- consumer_metadata_request.go
- consumer_metadata_response.go
- crc32_field.go
- describe_groups_request.go
- describe_groups_response.go
- encoder_decoder.go
- errors.go
- fetch_request.go
- fetch_response.go
- heartbeat_request.go
- heartbeat_response.go
- join_group_request.go
- join_group_response.go
- leave_group_request.go
- leave_group_response.go
- length_field.go
- list_groups_request.go
- list_groups_response.go
- message.go
- message_set.go
- metadata_request.go
- metadata_response.go
- offset_commit_request.go
- offset_commit_response.go
- offset_fetch_request.go
- offset_fetch_response.go
- offset_manager.go
- offset_request.go
- offset_response.go
- packet_decoder.go
- packet_encoder.go
- partitioner.go
- prep_encoder.go
- produce_request.go
- produce_response.go
- produce_set.go
- real_decoder.go
- real_encoder.go
- request.go
- response_header.go
- sarama.go
- snappy.go
- sync_group_request.go
- sync_group_response.go
- sync_producer.go
- utils.go