Documentation ¶
Overview ¶
Package ocgrpc contains OpenCensus stats and trace integrations for gRPC.
Use ServerHandler for servers and ClientHandler for clients.
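For example, a handler is typically installed when dialing a client connection or constructing a server. A minimal sketch (the target address is a placeholder, and insecure transport is used only for brevity):

package main

import (
    "log"

    "go.opencensus.io/plugin/ocgrpc"
    "google.golang.org/grpc"
)

func main() {
    // Client side: record stats and traces for all RPCs made on this connection.
    conn, err := grpc.Dial("localhost:50051", // hypothetical target
        grpc.WithInsecure(),
        grpc.WithStatsHandler(&ocgrpc.ClientHandler{}))
    if err != nil {
        log.Fatalf("dial: %v", err)
    }
    defer conn.Close()

    // Server side: record stats and traces for all incoming RPCs.
    srv := grpc.NewServer(grpc.StatsHandler(&ocgrpc.ServerHandler{}))
    _ = srv // register services and call srv.Serve(lis) as usual
}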
Index ¶
- Variables
- type ClientHandler
- func (c *ClientHandler) HandleConn(ctx context.Context, cs stats.ConnStats)
- func (c *ClientHandler) HandleRPC(ctx context.Context, rs stats.RPCStats)
- func (c *ClientHandler) TagConn(ctx context.Context, cti *stats.ConnTagInfo) context.Context
- func (c *ClientHandler) TagRPC(ctx context.Context, rti *stats.RPCTagInfo) context.Context
- type ServerHandler
- func (s *ServerHandler) HandleConn(ctx context.Context, cs stats.ConnStats)
- func (s *ServerHandler) HandleRPC(ctx context.Context, rs stats.RPCStats)
- func (s *ServerHandler) TagConn(ctx context.Context, cti *stats.ConnTagInfo) context.Context
- func (s *ServerHandler) TagRPC(ctx context.Context, rti *stats.RPCTagInfo) context.Context
Constants ¶
This section is empty.
Variables ¶
var (
    ClientSentMessagesPerRPC     = stats.Int64("grpc.io/client/sent_messages_per_rpc", "Number of messages sent in the RPC (always 1 for non-streaming RPCs).", stats.UnitDimensionless)
    ClientSentBytesPerRPC        = stats.Int64("grpc.io/client/sent_bytes_per_rpc", "Total bytes sent across all request messages per RPC.", stats.UnitBytes)
    ClientReceivedMessagesPerRPC = stats.Int64("grpc.io/client/received_messages_per_rpc", "Number of response messages received per RPC (always 1 for non-streaming RPCs).", stats.UnitDimensionless)
    ClientReceivedBytesPerRPC    = stats.Int64("grpc.io/client/received_bytes_per_rpc", "Total bytes received across all response messages per RPC.", stats.UnitBytes)
    ClientRoundtripLatency       = stats.Float64("grpc.io/client/roundtrip_latency", "Time between first byte of request sent to last byte of response received, or terminal error.", stats.UnitMilliseconds)
    ClientServerLatency          = stats.Float64("grpc.io/client/server_latency", `Propagated from the server and should have the same value as "grpc.io/server/latency".`, stats.UnitMilliseconds)
)
The following variables are measures recorded by ClientHandler:
var (
    ClientSentBytesPerRPCView = &view.View{
        Measure:     ClientSentBytesPerRPC,
        Name:        "grpc.io/client/sent_bytes_per_rpc",
        Description: "Distribution of bytes sent per RPC, by method.",
        TagKeys:     []tag.Key{KeyClientMethod},
        Aggregation: DefaultBytesDistribution,
    }

    ClientReceivedBytesPerRPCView = &view.View{
        Measure:     ClientReceivedBytesPerRPC,
        Name:        "grpc.io/client/received_bytes_per_rpc",
        Description: "Distribution of bytes received per RPC, by method.",
        TagKeys:     []tag.Key{KeyClientMethod},
        Aggregation: DefaultBytesDistribution,
    }

    ClientRoundtripLatencyView = &view.View{
        Measure:     ClientRoundtripLatency,
        Name:        "grpc.io/client/roundtrip_latency",
        Description: "Distribution of round-trip latency, by method.",
        TagKeys:     []tag.Key{KeyClientMethod},
        Aggregation: DefaultMillisecondsDistribution,
    }

    // Purposely reuses the count from `ClientRoundtripLatency`, tagging
    // with method and status to result in ClientCompletedRpcs.
    ClientCompletedRPCsView = &view.View{
        Measure:     ClientRoundtripLatency,
        Name:        "grpc.io/client/completed_rpcs",
        Description: "Count of RPCs by method and status.",
        TagKeys:     []tag.Key{KeyClientMethod, KeyClientStatus},
        Aggregation: view.Count(),
    }

    ClientSentMessagesPerRPCView = &view.View{
        Measure:     ClientSentMessagesPerRPC,
        Name:        "grpc.io/client/sent_messages_per_rpc",
        Description: "Distribution of sent messages count per RPC, by method.",
        TagKeys:     []tag.Key{KeyClientMethod},
        Aggregation: DefaultMessageCountDistribution,
    }

    ClientReceivedMessagesPerRPCView = &view.View{
        Measure:     ClientReceivedMessagesPerRPC,
        Name:        "grpc.io/client/received_messages_per_rpc",
        Description: "Distribution of received messages count per RPC, by method.",
        TagKeys:     []tag.Key{KeyClientMethod},
        Aggregation: DefaultMessageCountDistribution,
    }

    ClientServerLatencyView = &view.View{
        Measure:     ClientServerLatency,
        Name:        "grpc.io/client/server_latency",
        Description: "Distribution of server latency as viewed by client, by method.",
        TagKeys:     []tag.Key{KeyClientMethod},
        Aggregation: DefaultMillisecondsDistribution,
    }
)
Predefined views may be registered to collect data for the above measures. As always, you may also define your own custom views over measures collected by this package. These are declared as a convenience only; none are registered by default.
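For example, a subset of the predefined client views can be enabled with view.Register; no data is collected for a view until it is registered. A minimal sketch:

package main

import (
    "log"

    "go.opencensus.io/plugin/ocgrpc"
    "go.opencensus.io/stats/view"
)

func main() {
    // Enable only the latency and completed-RPC client views.
    if err := view.Register(
        ocgrpc.ClientRoundtripLatencyView,
        ocgrpc.ClientCompletedRPCsView,
    ); err != nil {
        log.Fatalf("failed to register client views: %v", err)
    }
}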
var (
    ServerReceivedMessagesPerRPC = stats.Int64("grpc.io/server/received_messages_per_rpc", "Number of messages received in each RPC. Has value 1 for non-streaming RPCs.", stats.UnitDimensionless)
    ServerReceivedBytesPerRPC    = stats.Int64("grpc.io/server/received_bytes_per_rpc", "Total bytes received across all messages per RPC.", stats.UnitBytes)
    ServerSentMessagesPerRPC     = stats.Int64("grpc.io/server/sent_messages_per_rpc", "Number of messages sent in each RPC. Has value 1 for non-streaming RPCs.", stats.UnitDimensionless)
    ServerSentBytesPerRPC        = stats.Int64("grpc.io/server/sent_bytes_per_rpc", "Total bytes sent across all response messages per RPC.", stats.UnitBytes)
    ServerLatency                = stats.Float64("grpc.io/server/server_latency", "Time between first byte of request received to last byte of response sent, or terminal error.", stats.UnitMilliseconds)
)
The following variables are measures recorded by ServerHandler:
var (
    ServerReceivedBytesPerRPCView = &view.View{
        Name:        "grpc.io/server/received_bytes_per_rpc",
        Description: "Distribution of received bytes per RPC, by method.",
        Measure:     ServerReceivedBytesPerRPC,
        TagKeys:     []tag.Key{KeyServerMethod},
        Aggregation: DefaultBytesDistribution,
    }

    ServerSentBytesPerRPCView = &view.View{
        Name:        "grpc.io/server/sent_bytes_per_rpc",
        Description: "Distribution of total sent bytes per RPC, by method.",
        Measure:     ServerSentBytesPerRPC,
        TagKeys:     []tag.Key{KeyServerMethod},
        Aggregation: DefaultBytesDistribution,
    }

    ServerLatencyView = &view.View{
        Name:        "grpc.io/server/server_latency",
        Description: "Distribution of server latency in milliseconds, by method.",
        TagKeys:     []tag.Key{KeyServerMethod},
        Measure:     ServerLatency,
        Aggregation: DefaultMillisecondsDistribution,
    }

    // Purposely reuses the count from `ServerLatency`, tagging
    // with method and status to result in ServerCompletedRpcs.
    ServerCompletedRPCsView = &view.View{
        Name:        "grpc.io/server/completed_rpcs",
        Description: "Count of RPCs by method and status.",
        TagKeys:     []tag.Key{KeyServerMethod, KeyServerStatus},
        Measure:     ServerLatency,
        Aggregation: view.Count(),
    }

    ServerReceivedMessagesPerRPCView = &view.View{
        Name:        "grpc.io/server/received_messages_per_rpc",
        Description: "Distribution of messages received count per RPC, by method.",
        TagKeys:     []tag.Key{KeyServerMethod},
        Measure:     ServerReceivedMessagesPerRPC,
        Aggregation: DefaultMessageCountDistribution,
    }

    ServerSentMessagesPerRPCView = &view.View{
        Name:        "grpc.io/server/sent_messages_per_rpc",
        Description: "Distribution of messages sent count per RPC, by method.",
        TagKeys:     []tag.Key{KeyServerMethod},
        Measure:     ServerSentMessagesPerRPC,
        Aggregation: DefaultMessageCountDistribution,
    }
)
Predefined views may be registered to collect data for the above measures. As always, you may also define your own custom views over measures collected by this package. These are declared as a convenience only; none are registered by default.
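As an illustration of a custom view over a measure from this package, the sketch below aggregates ServerLatency with hand-picked bucket bounds and both server tag keys; the view name and boundaries are arbitrary choices, not part of this package:

package main

import (
    "log"

    "go.opencensus.io/plugin/ocgrpc"
    "go.opencensus.io/stats/view"
    "go.opencensus.io/tag"
)

func main() {
    // A custom view over ocgrpc.ServerLatency; name and buckets are illustrative.
    latencyByStatus := &view.View{
        Name:        "example.com/server/latency_by_status",
        Description: "Server latency distribution, by method and status.",
        Measure:     ocgrpc.ServerLatency,
        TagKeys:     []tag.Key{ocgrpc.KeyServerMethod, ocgrpc.KeyServerStatus},
        Aggregation: view.Distribution(5, 25, 100, 500, 2000),
    }
    if err := view.Register(latencyByStatus); err != nil {
        log.Fatalf("failed to register custom view: %v", err)
    }
}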
var (
    DefaultBytesDistribution        = view.Distribution(1024, 2048, 4096, 16384, 65536, 262144, 1048576, 4194304, 16777216, 67108864, 268435456, 1073741824, 4294967296)
    DefaultMillisecondsDistribution = view.Distribution(0.01, 0.05, 0.1, 0.3, 0.6, 0.8, 1, 2, 3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30, 40, 50, 65, 80, 100, 130, 160, 200, 250, 300, 400, 500, 650, 800, 1000, 2000, 5000, 10000, 20000, 50000, 100000)
    DefaultMessageCountDistribution = view.Distribution(1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536)
)
The following variables define the default hard-coded auxiliary data used by both the default gRPC client and gRPC server metrics.
var (
    KeyServerMethod = tag.MustNewKey("grpc_server_method")
    KeyServerStatus = tag.MustNewKey("grpc_server_status")
)
Server tags are applied to the context used to process each RPC, as well as the measures at the end of each RPC.
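Because the method tag is attached to the RPC context by TagRPC, application code handling the RPC can read it back, for instance to record its own measures under the same tag. A sketch (the package and function names are hypothetical):

package example

import (
    "context"

    "go.opencensus.io/plugin/ocgrpc"
    "go.opencensus.io/tag"
)

// grpcMethodFromContext returns the method name tagged on the RPC context,
// or "" if the tag is not present.
func grpcMethodFromContext(ctx context.Context) string {
    if m := tag.FromContext(ctx); m != nil {
        if v, ok := m.Value(ocgrpc.KeyServerMethod); ok {
            return v
        }
    }
    return ""
}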
var (
    KeyClientMethod = tag.MustNewKey("grpc_client_method")
    KeyClientStatus = tag.MustNewKey("grpc_client_status")
)
Client tags are applied to measures at the end of each RPC.
var DefaultClientViews = []*view.View{
    ClientSentBytesPerRPCView,
    ClientReceivedBytesPerRPCView,
    ClientRoundtripLatencyView,
    ClientCompletedRPCsView,
}
DefaultClientViews are the default client views provided by this package.
var DefaultServerViews = []*view.View{
    ServerReceivedBytesPerRPCView,
    ServerSentBytesPerRPCView,
    ServerLatencyView,
    ServerCompletedRPCsView,
}
DefaultServerViews are the default server views provided by this package.
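Neither slice is registered automatically. A typical setup registers the defaults once at startup, along the lines of:

package main

import (
    "log"

    "go.opencensus.io/plugin/ocgrpc"
    "go.opencensus.io/stats/view"
)

func main() {
    // Register the default client and server views before serving traffic.
    if err := view.Register(ocgrpc.DefaultClientViews...); err != nil {
        log.Fatalf("failed to register default client views: %v", err)
    }
    if err := view.Register(ocgrpc.DefaultServerViews...); err != nil {
        log.Fatalf("failed to register default server views: %v", err)
    }
}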
Functions ¶
This section is empty.
Types ¶
type ClientHandler ¶
type ClientHandler struct {
    // StartOptions allows configuring the StartOptions used to create new spans.
    //
    // StartOptions.SpanKind will always be set to trace.SpanKindClient
    // for spans started by this handler.
    StartOptions trace.StartOptions
}
ClientHandler implements a gRPC stats.Handler for recording OpenCensus stats and traces. Use with gRPC clients only.
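A client handler with a non-default sampler might be constructed as in the sketch below; the 1% sampling rate and target address are arbitrary examples:

package main

import (
    "log"

    "go.opencensus.io/plugin/ocgrpc"
    "go.opencensus.io/trace"
    "google.golang.org/grpc"
)

func main() {
    // Sample roughly 1% of the client spans started by this handler.
    handler := &ocgrpc.ClientHandler{
        StartOptions: trace.StartOptions{
            Sampler: trace.ProbabilitySampler(0.01),
        },
    }
    conn, err := grpc.Dial("localhost:50051", // hypothetical target
        grpc.WithInsecure(),
        grpc.WithStatsHandler(handler))
    if err != nil {
        log.Fatalf("dial: %v", err)
    }
    defer conn.Close()
}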
func (*ClientHandler) HandleConn ¶
func (c *ClientHandler) HandleConn(ctx context.Context, cs stats.ConnStats)
HandleConn exists to satisfy gRPC stats.Handler.
func (*ClientHandler) HandleRPC ¶
func (c *ClientHandler) HandleRPC(ctx context.Context, rs stats.RPCStats)
HandleRPC implements per-RPC tracing and stats instrumentation.
func (*ClientHandler) TagConn ¶
func (c *ClientHandler) TagConn(ctx context.Context, cti *stats.ConnTagInfo) context.Context
TagConn exists to satisfy gRPC stats.Handler.
func (*ClientHandler) TagRPC ¶
func (c *ClientHandler) TagRPC(ctx context.Context, rti *stats.RPCTagInfo) context.Context
TagRPC implements per-RPC context management.
type ServerHandler ¶
type ServerHandler struct {
    // IsPublicEndpoint may be set to true to always start a new trace around
    // each RPC. Any SpanContext in the RPC metadata will be added as a linked
    // span instead of making it the parent of the span created around the
    // server RPC.
    //
    // Be aware that if you leave this false (the default) on a public-facing
    // server, callers will be able to send tracing metadata in gRPC headers
    // and trigger traces in your backend.
    IsPublicEndpoint bool

    // StartOptions to use for spans started around RPCs handled by this server.
    //
    // These will apply even if there is tracing metadata already
    // present on the inbound RPC but the SpanContext is not sampled. This
    // ensures that each service has some opportunity to be traced. If you would
    // like to not add any additional traces for this gRPC service, set:
    //
    //     StartOptions.Sampler = trace.ProbabilitySampler(0.0)
    //
    // StartOptions.SpanKind will always be set to trace.SpanKindServer
    // for spans started by this handler.
    StartOptions trace.StartOptions
}
ServerHandler implements a gRPC stats.Handler for recording OpenCensus stats and traces. Use with gRPC servers only.
When installed (see Example), tracing metadata is read from inbound RPCs by default. If no tracing metadata is present, or if the tracing metadata is present but the SpanContext isn't sampled, then a new trace may be started (as determined by Sampler).
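For a public-facing server, a configuration along these lines links inbound SpanContexts instead of parenting them and keeps locally initiated sampling low; both settings shown are illustrative:

package main

import (
    "go.opencensus.io/plugin/ocgrpc"
    "go.opencensus.io/trace"
    "google.golang.org/grpc"
)

func main() {
    // Public endpoint: inbound SpanContexts become links rather than parents,
    // and new server spans are sampled at roughly 1 in 10,000.
    handler := &ocgrpc.ServerHandler{
        IsPublicEndpoint: true,
        StartOptions: trace.StartOptions{
            Sampler: trace.ProbabilitySampler(1e-4),
        },
    }
    srv := grpc.NewServer(grpc.StatsHandler(handler))
    _ = srv // register services and call srv.Serve(lis) as usual
}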
func (*ServerHandler) HandleConn ¶
func (s *ServerHandler) HandleConn(ctx context.Context, cs stats.ConnStats)
HandleConn exists to satisfy gRPC stats.Handler.
func (*ServerHandler) HandleRPC ¶
func (s *ServerHandler) HandleRPC(ctx context.Context, rs stats.RPCStats)
HandleRPC implements per-RPC tracing and stats instrumentation.
func (*ServerHandler) TagConn ¶
func (s *ServerHandler) TagConn(ctx context.Context, cti *stats.ConnTagInfo) context.Context
TagConn exists to satisfy gRPC stats.Handler.
func (*ServerHandler) TagRPC ¶
func (s *ServerHandler) TagRPC(ctx context.Context, rti *stats.RPCTagInfo) context.Context
TagRPC implements per-RPC context management.