Documentation

Index

Constants
const (
	// ProduceAPIKey is the API key for produce requests
	ProduceAPIKey = 0
	// FetchAPIKey is the API key for fetch requests
	FetchAPIKey = 1
	// RelativeAccuracy defines the acceptable error in quantile values calculated by DDSketch.
	// For example, if the actual value at p50 is 100, with a relative accuracy of 0.01 the value calculated
	// will be between 99 and 101.
	RelativeAccuracy = 0.01
)
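To make the RelativeAccuracy guarantee concrete, here is a small sketch (plain Go, not using the DDSketch library itself) that computes the error bounds a sketch with relative accuracy α would give for a quantile estimate:

```go
package main

import "fmt"

// quantileBounds returns the range [v*(1-α), v*(1+α)] within which a
// DDSketch with relative accuracy α guarantees its quantile estimate lies.
func quantileBounds(actual, relativeAccuracy float64) (lo, hi float64) {
	return actual * (1 - relativeAccuracy), actual * (1 + relativeAccuracy)
}

func main() {
	// As in the constant's doc comment: an actual p50 of 100 with α = 0.01.
	lo, hi := quantileBounds(100, 0.01)
	fmt.Printf("p50 estimate lies in [%.0f, %.0f]\n", lo, hi) // [99, 101]
}
```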
const (
	TopicNameBuckets                     = 0xa
	TopicNameMaxSize                     = 0x50
	MaxSupportedProduceRequestApiVersion = 0xa
	MaxSupportedFetchRequestApiVersion   = 0xc
)
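These constants suggest topic-name lengths are tracked in 10 buckets (TopicNameBuckets = 0xa) over a maximum name size of 80 bytes (TopicNameMaxSize = 0x50), i.e. 8 bytes per bucket. The helper below is hypothetical (the real bucketing lives in the eBPF/telemetry code, which is not shown here), but it illustrates how such a length-to-bucket mapping could work:

```go
package main

import "fmt"

const (
	TopicNameBuckets = 0xa  // 10 buckets
	TopicNameMaxSize = 0x50 // 80 bytes max topic name
)

// bucketForTopicNameLength is a hypothetical helper mapping a topic-name
// length to one of TopicNameBuckets equal-width buckets (8 bytes each).
// Lengths beyond TopicNameMaxSize are clamped into the last bucket.
func bucketForTopicNameLength(n int) int {
	if n > TopicNameMaxSize {
		n = TopicNameMaxSize
	}
	bucketSize := TopicNameMaxSize / TopicNameBuckets // 8
	idx := (n - 1) / bucketSize
	if idx < 0 {
		idx = 0
	}
	if idx >= TopicNameBuckets {
		idx = TopicNameBuckets - 1
	}
	return idx
}

func main() {
	fmt.Println(bucketForTopicNameLength(5))  // lengths 1–8 fall in bucket 0
	fmt.Println(bucketForTopicNameLength(80)) // the maximum length falls in bucket 9
}
```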
Variables

This section is empty.

Functions

This section is empty.

Types

type EbpfTx
type EbpfTx struct {
	Tup         ConnTuple
	Transaction KafkaTransaction
}
type KafkaResponseContext
type KafkaResponseContext struct {
	Transaction                 KafkaTransaction
	State                       uint8
	Remainder                   uint8
	Varint_position             uint8
	Partition_error_code        int8
	Partition_state             uint8
	Remainder_buf               [4]int8
	Record_batches_num_bytes    int32
	Record_batch_length         int32
	Expected_tcp_seq            uint32
	Carry_over_offset           int32
	Partitions_count            uint32
	Varint_value                uint32
	Record_batches_arrays_idx   uint32
	Record_batches_arrays_count uint32
	Pad_cgo_0                   [4]byte
}
type KafkaTransaction

type KafkaTransactionKey

type Key
type Key struct {
	RequestAPIKey  uint16
	RequestVersion uint16
	TopicName      *intern.StringValue
	types.ConnectionKey
}
Key is an identifier for a group of Kafka transactions
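Because TopicName is an interned string pointer, two Keys built from the same topic name share the same pointer, so Key stays cheaply comparable and can be used directly as a map key. The sketch below uses a simplified stand-in for the interner and a reduced key struct (ConnectionKey omitted) to illustrate the idea; the names `stringValue`, `intern`, and `key` are illustrative, not the package's actual API:

```go
package main

import "fmt"

// stringValue stands in for intern.StringValue: interning guarantees that
// equal strings map to the same pointer, so pointer comparison is as good
// as string comparison.
type stringValue struct{ s string }

var internTable = map[string]*stringValue{}

func intern(s string) *stringValue {
	if v, ok := internTable[s]; ok {
		return v
	}
	v := &stringValue{s}
	internTable[s] = v
	return v
}

// key is a simplified analogue of kafka.Key (ConnectionKey omitted).
type key struct {
	RequestAPIKey  uint16
	RequestVersion uint16
	TopicName      *stringValue
}

func main() {
	a := key{RequestAPIKey: 0, RequestVersion: 9, TopicName: intern("orders")}
	b := key{RequestAPIKey: 0, RequestVersion: 9, TopicName: intern("orders")}
	stats := map[key]int{}
	stats[a]++
	stats[b]++ // same interned pointer -> same map entry
	fmt.Println(len(stats), stats[a]) // 1 2
}
```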
type RawKernelTelemetry

type RequestStat
type RequestStat struct {
	// this field order is intentional to help the GC pointer tracking
	Latencies *ddsketch.DDSketch
	// Note: every time we add a latency value to the DDSketch, it's possible for the sketch to discard that value
	// (i.e. if it is outside the range that is tracked by the sketch). For that reason, in order to keep an
	// accurate count of the number of Kafka transactions processed, we have our own count field (rather than
	// relying on DDSketch.GetCount())
	Count int
	// This field holds the value (in nanoseconds) of the first request in this
	// bucket. We do this as an optimization to avoid creating sketches with a
	// single value, which is quite common when a short-lived TCP connection is
	// used for a single request.
	FirstLatencySample float64
	StaticTags         uint64
}
RequestStat stores stats for Kafka requests to a particular key
type RequestStats
type RequestStats struct {
	// Go uses optimized map access implementations if the key is int32/int64, so we use int32 instead of int8.
	// Here you can find the original CPU impact when using int8:
	// https://dd.datad0g.com/dashboard/s3s-3hu-mh6/usm-performance-evaluation-20?fromUser=true&refresh_mode=paused&tpl_var_base_agent-env%5B0%5D=kafka-error-base&tpl_var_client-service%5B0%5D=kafka-client-%2A&tpl_var_compare_agent-env%5B0%5D=kafka-error-new&tpl_var_kube_cluster_name%5B0%5D=usm-datad0g&tpl_var_server-service%5B0%5D=kafka-broker&view=spans&from_ts=1719153394917&to_ts=1719156854000&live=false
	ErrorCodeToStat map[int32]*RequestStat
}
RequestStats stores Kafka request statistics per Kafka error code. We include the error code here, rather than in the Key, to avoid creating a new Key for each error code.
func NewRequestStats
func NewRequestStats() *RequestStats
NewRequestStats creates a new RequestStats object.
func (*RequestStats) AddRequest
func (r *RequestStats) AddRequest(errorCode int32, count int, staticTags uint64, latency float64)
AddRequest takes information about a Kafka transaction and adds it to the request stats
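The doc comments on RequestStat hint at two bookkeeping details AddRequest has to respect: Count is tracked separately from the sketch (the sketch may discard out-of-range values), and sketch creation is deferred until a second sample arrives (FirstLatencySample). The sketch below is a simplified, hypothetical re-implementation of that logic, using a plain slice where the real code uses *ddsketch.DDSketch:

```go
package main

import "fmt"

// requestStat is a simplified stand-in for RequestStat. A slice replaces the
// DDSketch, but the same two optimizations are kept: an explicit Count, and
// FirstLatencySample to avoid allocating a sketch for single-request buckets.
type requestStat struct {
	Latencies          []float64 // stand-in for *ddsketch.DDSketch
	Count              int
	FirstLatencySample float64
	StaticTags         uint64
}

type requestStats struct {
	ErrorCodeToStat map[int32]*requestStat
}

func (r *requestStats) addRequest(errorCode int32, count int, staticTags uint64, latency float64) {
	stat, ok := r.ErrorCodeToStat[errorCode]
	if !ok {
		stat = &requestStat{}
		r.ErrorCodeToStat[errorCode] = stat
	}
	stat.StaticTags |= staticTags
	stat.Count += count // tracked independently of the sketch contents
	if stat.FirstLatencySample == 0 {
		// First request in this bucket: remember the latency, defer sketch creation.
		stat.FirstLatencySample = latency
		return
	}
	if stat.Latencies == nil {
		// Second request: now it's worth materializing the "sketch",
		// seeded with the first sample.
		stat.Latencies = []float64{stat.FirstLatencySample}
	}
	stat.Latencies = append(stat.Latencies, latency)
}

func main() {
	r := &requestStats{ErrorCodeToStat: map[int32]*requestStat{}}
	r.addRequest(0, 1, 0, 12.5)
	r.addRequest(0, 1, 0, 40.0)
	stat := r.ErrorCodeToStat[0]
	fmt.Println(stat.Count, len(stat.Latencies)) // 2 2
}
```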
func (*RequestStats) CombineWith
func (r *RequestStats) CombineWith(newStats *RequestStats)
CombineWith merges the data of two RequestStats objects: newStats is left unmodified, while the method receiver is mutated.
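The documented contract — the argument is read-only, the receiver absorbs the data — can be sketched with a counts-only simplification (the real method also merges latency sketches and tags, which are omitted here for brevity):

```go
package main

import "fmt"

// requestStats is a counts-only simplification of RequestStats.
type requestStats struct {
	ErrorCodeToStat map[int32]int // error code -> request count
}

// combineWith mirrors the documented contract of RequestStats.CombineWith:
// newStats is kept as-is, while the receiver is mutated.
func (r *requestStats) combineWith(newStats *requestStats) {
	for code, count := range newStats.ErrorCodeToStat {
		r.ErrorCodeToStat[code] += count
	}
}

func main() {
	a := &requestStats{ErrorCodeToStat: map[int32]int{0: 3}}
	b := &requestStats{ErrorCodeToStat: map[int32]int{0: 1, 7: 2}}
	a.combineWith(b)
	fmt.Println(a.ErrorCodeToStat[0], a.ErrorCodeToStat[7]) // 4 2
	fmt.Println(b.ErrorCodeToStat[0], b.ErrorCodeToStat[7]) // b is unchanged: 1 2
}
```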