tracing

package
v4.9.1

Published: Oct 14, 2022 License: Apache-2.0 Imports: 22 Imported by: 6

README

Distributed Tracing Using OpenTelemetry

What is OpenTelemetry?

From opentelemetry.io:

OpenTelemetry is a collection of tools, APIs, and SDKs. Use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help you analyze your software’s performance and behavior.

Use OpenTelemetry to generate traces that visualize behavior in your application. A trace is composed of nested spans, rendered as a waterfall graph. Each span records start/end timings and, optionally, developer-specified metadata and logging output.

Jaeger Tracing is a common tool used to receive OpenTelemetry trace data. Use its web UI to query for traces and view the waterfall graph.

OpenTelemetry tracing is distributed: a service can propagate its trace IDs to disparate remote services. The remote service may generate child spans that are visible on the same waterfall graph. This requires that all services send traces to the same Jaeger server.

Why OpenTelemetry?

It is the latest standard for distributed tracing clients.

OpenTelemetry supersedes its now deprecated predecessor, OpenTracing.

It no longer requires implementation-specific client modules, such as the Jaeger client. The OpenTelemetry SDK includes a client for Jaeger.

Why Jaeger Tracing Server?

Easy to set up. Powerful and easy-to-use web UI. Open source. Scalable using Elasticsearch.

Getting Started

opentelemetry.io

OpenTelemetry dev reference: https://pkg.go.dev/go.opentelemetry.io/otel

See unit tests for usage examples.

Configuration

In ideal conditions where you wish to send traces to localhost on 6831/udp, no configuration is necessary.

Configuration reference via environment variables: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md.

Exporter

Traces export to Jaeger by default. Other exporters can be selected by setting the environment variable OTEL_TRACES_EXPORTER to one or more exporter names, such as otlp or jaeger (the two documented below).

Usually only one exporter is needed. If not, more than one may be selected by delimiting the names with commas.
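For example, to select the OTLP exporter instead of the default (the value shown is illustrative):

OTEL_TRACES_EXPORTER=otlp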

OTLP Exporter

By default, the OTLP exporter sends to an OpenTelemetry Collector on localhost gRPC port 4317. The host and port can be changed by setting the environment variable OTEL_EXPORTER_OTLP_ENDPOINT to a URL such as https://collector:4317.

See more: OTLP configuration

Jaeger Exporter via UDP

By default, the Jaeger exporter sends to a Jaeger Agent on localhost port 6831/udp. The host and port can be changed by setting environment variables OTEL_EXPORTER_JAEGER_AGENT_HOST and OTEL_EXPORTER_JAEGER_AGENT_PORT.

It's important to ensure UDP traces are sent over the loopback interface (localhost). UDP datagrams are limited in size by the interface MTU, and the payload cannot be split across multiple datagrams. The loopback interface MTU is large, typically 65000 or higher, while a network interface MTU is typically much lower, around 1500. OpenTelemetry's Jaeger client is sometimes unable to fit its payload in a 1500-byte datagram and will drop those packets, producing traces that are mangled or missing detail.

Jaeger Exporter via HTTP

If it's not possible to install a Jaeger Agent on localhost, the client can instead export directly to the Jaeger Collector of the Jaeger Server on HTTP port 14268.

Enable HTTP exporter with configuration:

OTEL_EXPORTER_JAEGER_PROTOCOL=http/thrift.binary
OTEL_EXPORTER_JAEGER_ENDPOINT=http://<jaeger-server>:14268/api/traces
Probabilistic Sampling

By default, all traces are sampled. If the tracing volume is burdening the application, network, or Jaeger Server, then sampling can be used to selectively drop some of the traces.

In production, it may be ideal to set sampling based on a percentage probability. The probability can be set in Jaeger Server configuration or locally.

To enable locally, set environment variables:

OTEL_TRACES_SAMPLER=traceidratio
OTEL_TRACES_SAMPLER_ARG=<value-between-0-and-1>

A value of 1 samples every trace; a value of 0 samples nothing.
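For example, to sample roughly 10% of traces (the ratio shown is illustrative):

OTEL_TRACES_SAMPLER=traceidratio
OTEL_TRACES_SAMPLER_ARG=0.1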

Initialization

The OpenTelemetry client must be initialized to read configuration and prepare a Tracer object. When the application is exiting, call CloseTracing().

The library name passed in the second argument appears in spans as metadata otel.library.name. This is used to identify the library or module that generated that span. This is usually the fully qualified module name of your repo.

import "github.com/mailgun/holster/v4/tracing"

err := tracing.InitTracing(ctx, "github.com/myrepo/myservice")

// ...

err = tracing.CloseTracing(ctx)
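A fuller lifecycle sketch, assuming a main() entry point (the module name and error handling are illustrative):

import (
	"context"

	"github.com/mailgun/holster/v4/tracing"
	"github.com/sirupsen/logrus"
)

func main() {
	ctx := context.Background()

	// Initialize the global tracer once at startup.
	if err := tracing.InitTracing(ctx, "github.com/myrepo/myservice"); err != nil {
		logrus.WithError(err).Fatal("Error initializing tracing")
	}

	// Flush any queued spans on shutdown.
	defer func() {
		if err := tracing.CloseTracing(context.Background()); err != nil {
			logrus.WithError(err).Error("Error closing tracing")
		}
	}()

	// ... application logic ...
}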
Log Level

A log level may be applied to traces to filter spans by a minimum log severity. Spans that do not meet the minimum severity are simply dropped and not exported.

Log level is passed with option tracing.WithLevel() as a numeric log level (0-6): Panic, Fatal, Error, Warning, Info, Debug, Trace.

As a convenience, use constants, such as tracing.DebugLevel:

import "github.com/mailgun/holster/v4/tracing"

level := tracing.DebugLevel
err := tracing.InitTracing(ctx, "my library name", tracing.WithLevel(level))

If WithLevel() is omitted, the level will be the global level set in Logrus.

See Scope Log Level for details on creating spans with an assigned log level.

Log Level Filtering

Just like with common log frameworks, scopes filter out spans whose severity is lower than the threshold provided via WithLevel().

If scopes are nested and one in the middle is dropped, the hierarchy will be preserved.

e.g. If WithLevel() is passed a log level of "Info", we expect "Debug" scopes to be dropped:

# Input:
Info Level 1 -> Debug Level 2 -> Info Level 3

# Exports spans in form:
Info Level 1 -> Info Level 3

Log level filtering is critical for high-volume applications, where debug tracing would generate significantly more data than is sustainable or helpful for normal operations. Developers still have the option to selectively enable debug tracing for troubleshooting. A sketch of this nesting follows.
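As a sketch of the example above, assuming the tracer was initialized with tracing.WithLevel(tracing.InfoLevel) and using the ScopeInfo()/ScopeDebug() helpers described under Scope Log Level below (function names are illustrative):

import (
	"context"

	"github.com/mailgun/holster/v4/tracing"
)

func Level1(ctx context.Context) error {
	// Info scope: exported.
	return tracing.ScopeInfo(ctx, func(ctx context.Context) error {
		return Level2(ctx)
	})
}

func Level2(ctx context.Context) error {
	// Debug scope: dropped when the threshold is Info.
	return tracing.ScopeDebug(ctx, func(ctx context.Context) error {
		return Level3(ctx)
	})
}

func Level3(ctx context.Context) error {
	// Info scope: exported with the Level1 span as its effective parent.
	return tracing.ScopeInfo(ctx, func(ctx context.Context) error {
		return nil
	})
}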

Tracer Lifecycle

The common use case is to call InitTracing() to build a single default tracer that the application uses throughout its lifetime, then call CloseTracing() on shutdown.

The default tracer is stored globally in the tracer package for use by tracing functions.

The tracer object identifies itself by a library name, which can be seen in Jaeger traces as attribute otel.library.name. This value is typically the module name of the application.

If it's necessary to create traces with a different library name, additional tracer objects may be created with NewTracer(), which returns a context with the tracer object embedded in it. This context object must be passed to tracing functions to use that particular tracer; otherwise, the default tracer is selected.

Setting Resources

OpenTelemetry is configured by environment variables and supplemental resource settings. Some of these resources also map to environment variables.

Service Name

The service name appears in the Jaeger "Service" dropdown. If unset, default is unknown_service:<executable-filename>.

Service name may be set in configuration by environment variable OTEL_SERVICE_NAME.
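For example (the value shown is illustrative):

OTEL_SERVICE_NAME=myservice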

As an alternative to the environment variable, it may be provided as a resource. The resource setting takes precedence over the environment variable.

import "github.com/mailgun/holster/v4/tracing"

res, err := tracing.NewResource("My service", "v1.0.0")
err = tracing.InitTracing(ctx, "github.com/myrepo/myservice", tracing.WithResource(res))
Manual Tracing

Basic instrumentation. Traces function duration as a span and captures logrus logs.

import (
	"context"

	"github.com/mailgun/holster/v4/tracing"
)

func MyFunc(ctx context.Context) error {
	tracer := tracing.Tracer()
	ctx, span := tracer.Start(ctx, "Span name")
	defer span.End()

	// ...

	return nil
}
Common OpenTelemetry Tasks
Span Attributes

The active Span object is embedded in the Context object. This can be extracted to do things like add attribute metadata to the span:

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

func MyFunc(ctx context.Context) error {
	span := trace.SpanFromContext(ctx)
	span.SetAttributes(
		attribute.String("foobar", "value"),
		attribute.Int("x", 12345),
	)

	return nil
}
Add Span Event

A span event is a log message added to the active span. It can optionally include attribute metadata.

span.AddEvent("My message")
span.AddEvent("My metadata", trace.WithAttributes(
	attribute.String("foobar", "value"),
	attribute.Int("x", 12345"),
))
Log an Error

An Error object can be logged to the active span. This appears as a log event on the span.

err := errors.New("My error message")
span.RecordError(err)

// Can also add attribute metadata.
span.RecordError(err, trace.WithAttributes(
	attribute.String("foobar", "value"),
))
Scope Tracing

The scope functions automate span start/end and error reporting to the active trace.

  • StartScope(): Start a scope by creating a span named after the fully qualified calling function.
  • StartNamedScope(): Start a scope by creating a span with a user-provided name.
  • EndScope(): End the scope and record the returned error value.
  • Scope(): Wraps a code block as a scope using StartScope()/EndScope() functionality.
  • NamedScope(): Same as Scope() with a user-provided span name.

If the scope's action function returns an error, the error message is automatically logged to the trace and the trace is marked as error.

Using StartScope()/EndScope()
import (
	"context"

	"github.com/mailgun/holster/tracing"
	"github.com/sirupsen/logrus"
)

func MyFunc(ctx context.Context) (reterr error) {
	ctx = tracing.StartScope(ctx)
	defer func() {
		tracing.EndScope(ctx, reterr)
	}()

	logrus.WithContext(ctx).Info("This message also logged to trace")

	// ...

	return nil
}
Using Scope()
import (
	"context"

	"github.com/mailgun/holster/v4/tracing"
	"github.com/sirupsen/logrus"
)

func MyFunc(ctx context.Context) error {
	return tracing.Scope(ctx, func(ctx context.Context) error {
		logrus.WithContext(ctx).Info("This message also logged to trace")

		// ...

		return nil
	})
}
Scope Log Level

Log level can be applied to individual spans using variants of Scope()/StartScope() to set debug, info, warn, or error levels:

// Start/end variant:
ctx2 := tracing.StartScopeDebug(ctx)
defer tracing.EndScope(ctx2, nil)

// Scope variant:
err := tracing.ScopeDebug(ctx, func(ctx context.Context) error {
    // ...

    return nil
})
Scope Log Level Filtering

As described in Log Level Filtering above, scopes filter out spans whose severity is lower than the threshold provided via WithLevel().

Instrumentation

Logrus

Logrus is configured by InitTracing() to mirror log messages to the active trace, if one exists.

For this to work, you must use the WithContext() method to propagate the active trace stored in the context.

logrus.WithContext(ctx).Info("This message also logged to trace")

If the log is at error level or higher, the span is also marked as an error, and attributes otel.status_code and otel.status_description are set with the error details.
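For example (a minimal sketch; the error and message are illustrative):

logrus.WithContext(ctx).WithError(err).Error("Query failed")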

Other Instrumentation Options

See: https://opentelemetry.io/registry/?language=go&component=instrumentation

gRPC Client

The client's trace IDs are propagated to the server. A span is created for the client call and another for the server side.

import (
	"google.golang.org/grpc"
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
)

conn, err := grpc.Dial(server,
	grpc.WithUnaryInterceptor(otelgrpc.UnaryClientInterceptor()),
	grpc.WithStreamInterceptor(otelgrpc.StreamClientInterceptor()),
)
gRPC Server
import (
	"google.golang.org/grpc"
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
)

grpcSrv := grpc.NewServer(
	grpc.UnaryInterceptor(otelgrpc.UnaryServerInterceptor()),
	grpc.StreamInterceptor(otelgrpc.StreamServerInterceptor()),
)
Config Options

Exporter configuration options, set via environment variables, when using tracing.InitTracing().

OTLP
  • OTEL_EXPORTER_OTLP_PROTOCOL
    • May be one of: grpc, http/protobuf.
  • OTEL_EXPORTER_OTLP_ENDPOINT
    • Set to URL like http://collector:<port> or https://collector:<port>.
    • Port for grpc protocol is 4317, http/protobuf is 4318.
    • If the protocol is grpc, URL scheme http indicates an insecure connection and https indicates a secure (TLS) connection, even though the connection is over gRPC, not HTTP(S).
  • OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_CLIENT_KEY
    • If protocol is grpc or using HTTPS endpoint, set TLS certificate files.
  • OTEL_EXPORTER_OTLP_HEADERS
    • Optional headers passed to collector in format: key=value,key2=value2,....

See also OTLP configuration reference.
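A sample OTLP configuration combining these options (the collector hostname and header values are assumptions):

OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_ENDPOINT=https://collector:4317
OTEL_EXPORTER_OTLP_HEADERS=key=value,key2=value2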

Jaeger
  • OTEL_EXPORTER_JAEGER_PROTOCOL
  • OTEL_EXPORTER_JAEGER_ENDPOINT
  • OTEL_EXPORTER_JAEGER_AGENT_HOST
  • OTEL_EXPORTER_JAEGER_AGENT_PORT
OTEL_EXPORTER_JAEGER_PROTOCOL

Possible values:

  • udp/thrift.compact (default): Export traces via UDP datagrams. Best used when Jaeger Agent is accessible via loopback interface. May also provide OTEL_EXPORTER_JAEGER_AGENT_HOST/OTEL_EXPORTER_JAEGER_AGENT_PORT, which default to localhost/6831.
  • udp/thrift.binary: Alternative protocol to the more commonly used udp/thrift.compact. May also provide OTEL_EXPORTER_JAEGER_AGENT_HOST/OTEL_EXPORTER_JAEGER_AGENT_PORT, which default to localhost/6832.
  • http/thrift.binary: Export traces via HTTP. Best used when a Jaeger Agent cannot be deployed or is inaccessible via the loopback interface. This setting sends traces directly to Jaeger's collector port. May also provide OTEL_EXPORTER_JAEGER_ENDPOINT, which defaults to http://localhost:14268/api/traces.
Honeycomb

Honeycomb consumes OTLP traces and requires an API key header:

OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io
OTEL_EXPORTER_OTLP_HEADERS=x-honeycomb-team=<api-key>

Prometheus Metrics

Prometheus metric objects are defined in the tracing.Metrics slice. Enable them by registering these metrics with Prometheus.

  • holster_tracing_counter: Count of traces generated by the holster tracing package. Label error contains true for traces in error status.
  • holster_tracing_spans: Count of trace spans generated by the holster tracing package. Label error contains true for spans in error status.
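A minimal registration sketch, assuming the default Prometheus registry:

import (
	"github.com/mailgun/holster/v4/tracing"
	"github.com/prometheus/client_golang/prometheus"
)

func init() {
	// Register the holster tracing metrics so Prometheus can collect them.
	prometheus.MustRegister(tracing.Metrics...)
}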

Documentation

Index

Constants

const (
	ErrorClassKey = "error.class"
	ErrorTypeKey  = "error.type"
)
const LogLevelKey = "log.level"

LogLevelKey is the span attribute key for storing numeric log level.

const LogLevelNumKey = "log.levelNum"

Variables

var (

	// Metrics contains Prometheus metrics available for use.
	Metrics = []prometheus.Collector{
		traceCounter,
		spanCounter,
	}
)

Functions

func CloseTracing

func CloseTracing(ctx context.Context) error

CloseTracing closes the global OpenTelemetry tracer provider. This allows queued up traces to be flushed.

func EndScope

func EndScope(ctx context.Context, err error)

End scope created by `StartScope()`/`StartNamedScope()`. Logs error return value and ends span.

func InitTracing

func InitTracing(ctx context.Context, libraryName string, opts ...TracingOption) error

InitTracing initializes a global OpenTelemetry tracer provider singleton. Call to initialize before using functions in this package. Instruments logrus to mirror to active trace. Must use `WithContext()` method. Call after initializing logrus. libraryName is typically the application's module name. Prometheus metrics are accessible by registering the metrics at `tracing.Metrics`.

func NamedScope

func NamedScope(ctx context.Context, spanName string, action ScopeAction, opts ...trace.SpanStartOption) error

NamedScope calls action function within a tracing span. Equivalent to wrapping a code block with `StartNamedScope()`/`EndScope()`.

func NamedScopeDebug added in v4.7.0

func NamedScopeDebug(ctx context.Context, spanName string, action ScopeAction, opts ...trace.SpanStartOption) error

NamedScopeDebug calls action function within a tracing span. Scope tagged with log level debug. Equivalent to wrapping a code block with `StartNamedScope()`/`EndScope()`.

func NamedScopeError added in v4.7.0

func NamedScopeError(ctx context.Context, spanName string, action ScopeAction, opts ...trace.SpanStartOption) error

NamedScopeError calls action function within a tracing span. Scope tagged with log level error. Equivalent to wrapping a code block with `StartNamedScope()`/`EndScope()`.

func NamedScopeInfo added in v4.7.0

func NamedScopeInfo(ctx context.Context, spanName string, action ScopeAction, opts ...trace.SpanStartOption) error

NamedScopeInfo calls action function within a tracing span. Scope tagged with log level info. Equivalent to wrapping a code block with `StartNamedScope()`/`EndScope()`.

func NamedScopeWarn added in v4.7.0

func NamedScopeWarn(ctx context.Context, spanName string, action ScopeAction, opts ...trace.SpanStartOption) error

NamedScopeWarn calls action function within a tracing span. Scope tagged with log level warning. Equivalent to wrapping a code block with `StartNamedScope()`/`EndScope()`.

func NewResource added in v4.3.1

func NewResource(serviceName, version string, resources ...*resource.Resource) (*resource.Resource, error)

NewResource creates a resource with sensible defaults, replacing a common but verbose usage pattern.

func Scope

func Scope(ctx context.Context, action ScopeAction, opts ...trace.SpanStartOption) error

Scope calls action function within a tracing span named after the calling function. Equivalent to wrapping a code block with `StartScope()`/`EndScope()`.

func ScopeDebug added in v4.7.0

func ScopeDebug(ctx context.Context, action ScopeAction, opts ...trace.SpanStartOption) error

ScopeDebug calls action function within a tracing span named after the calling function. Scope tagged with log level debug. Equivalent to wrapping a code block with `StartScope()`/`EndScope()`.

func ScopeError added in v4.7.0

func ScopeError(ctx context.Context, action ScopeAction, opts ...trace.SpanStartOption) error

ScopeError calls action function within a tracing span named after the calling function. Scope tagged with log level error. Equivalent to wrapping a code block with `StartScope()`/`EndScope()`.

func ScopeInfo added in v4.7.0

func ScopeInfo(ctx context.Context, action ScopeAction, opts ...trace.SpanStartOption) error

ScopeInfo calls action function within a tracing span named after the calling function. Scope tagged with log level info. Equivalent to wrapping a code block with `StartScope()`/`EndScope()`.

func ScopeWarn added in v4.7.0

func ScopeWarn(ctx context.Context, action ScopeAction, opts ...trace.SpanStartOption) error

ScopeWarn calls action function within a tracing span named after the calling function. Scope tagged with log level warning. Equivalent to wrapping a code block with `StartScope()`/`EndScope()`.

func StartNamedScope

func StartNamedScope(ctx context.Context, spanName string, opts ...trace.SpanStartOption) context.Context

Start a scope with user-provided span name.

func StartNamedScopeDebug added in v4.7.0

func StartNamedScopeDebug(ctx context.Context, spanName string, opts ...trace.SpanStartOption) context.Context

Start a scope with user-provided span name with debug log level.

func StartNamedScopeError added in v4.7.0

func StartNamedScopeError(ctx context.Context, spanName string, opts ...trace.SpanStartOption) context.Context

Start a scope with user-provided span name with error log level.

func StartNamedScopeInfo added in v4.7.0

func StartNamedScopeInfo(ctx context.Context, spanName string, opts ...trace.SpanStartOption) context.Context

Start a scope with user-provided span name with info log level.

func StartNamedScopeWarn added in v4.7.0

func StartNamedScopeWarn(ctx context.Context, spanName string, opts ...trace.SpanStartOption) context.Context

Start a scope with user-provided span name with warning log level.

func StartScope

func StartScope(ctx context.Context, opts ...trace.SpanStartOption) context.Context

Start a scope with span named after fully qualified caller function.

func StartScopeDebug added in v4.7.0

func StartScopeDebug(ctx context.Context, opts ...trace.SpanStartOption) context.Context

Start a scope with span named after fully qualified caller function with debug log level.

func StartScopeError added in v4.7.0

func StartScopeError(ctx context.Context, opts ...trace.SpanStartOption) context.Context

Start a scope with span named after fully qualified caller function with error log level.

func StartScopeInfo added in v4.7.0

func StartScopeInfo(ctx context.Context, opts ...trace.SpanStartOption) context.Context

Start a scope with span named after fully qualified caller function with info log level.

func StartScopeWarn added in v4.7.0

func StartScopeWarn(ctx context.Context, opts ...trace.SpanStartOption) context.Context

Start a scope with span named after fully qualified caller function with warning log level.

func Tracer added in v4.7.0

func Tracer(opts ...trace.TracerOption) trace.Tracer

Tracer returns a tracer object.

Types

type DummySpan added in v4.7.0

type DummySpan struct {
	// contains filtered or unexported fields
}

DummySpan is used to create a stub span as a placeholder for spans not intended for export. This is used to selectively disable tracing individual spans based on criteria, such as log level. Any child spans created from a DummySpan will be linked to the dummy's next non-dummy ancestor.

func (*DummySpan) AddEvent added in v4.7.0

func (s *DummySpan) AddEvent(name string, options ...trace.EventOption)

func (*DummySpan) End added in v4.7.0

func (s *DummySpan) End(options ...trace.SpanEndOption)

func (*DummySpan) IsRecording added in v4.7.0

func (s *DummySpan) IsRecording() bool

func (*DummySpan) RecordError added in v4.7.0

func (s *DummySpan) RecordError(err error, options ...trace.EventOption)

func (*DummySpan) SetAttributes added in v4.7.0

func (s *DummySpan) SetAttributes(kv ...attribute.KeyValue)

func (*DummySpan) SetName added in v4.7.0

func (s *DummySpan) SetName(name string)

func (*DummySpan) SetStatus added in v4.7.0

func (s *DummySpan) SetStatus(code codes.Code, description string)

func (*DummySpan) SpanContext added in v4.7.0

func (s *DummySpan) SpanContext() trace.SpanContext

func (*DummySpan) TracerProvider added in v4.7.0

func (s *DummySpan) TracerProvider() trace.TracerProvider

type Level added in v4.7.1

type Level uint64
const (
	PanicLevel Level = iota
	FatalLevel
	ErrorLevel
	WarnLevel
	InfoLevel
	DebugLevel
	TraceLevel
)

type LevelTracer added in v4.7.0

type LevelTracer struct {
	trace.Tracer
	// contains filtered or unexported fields
}

LevelTracer is created by `LevelTracerProvider`.

func (*LevelTracer) Start added in v4.7.0

func (t *LevelTracer) Start(ctx context.Context, spanName string, opts ...trace.SpanStartOption) (context.Context, trace.Span)

type LevelTracerProvider added in v4.7.0

type LevelTracerProvider struct {
	*sdktrace.TracerProvider
	// contains filtered or unexported fields
}

LevelTracerProvider wraps a TracerProvider to apply log level processing. It tags spans with `log.level` and `log.levelNum=n`, where `n` is the numeric log level 0-6 (Panic, Fatal, Error, Warn, Info, Debug, Trace). If a span's log level is lower severity than the threshold, a `DummySpan` is created instead. `DummySpan` behaves like an alias of its next non-dummy ancestor but is filtered out and omitted from export. Nested spans containing a mix of real spans and `DummySpan`s are linked as if the `DummySpan` never existed.

func NewLevelTracerProvider added in v4.7.0

func NewLevelTracerProvider(level Level, opts ...sdktrace.TracerProviderOption) *LevelTracerProvider

func (*LevelTracerProvider) Tracer added in v4.7.0

func (tp *LevelTracerProvider) Tracer(libraryName string, opts ...trace.TracerOption) trace.Tracer

type LevelTracingOption added in v4.7.0

type LevelTracingOption struct {
	// contains filtered or unexported fields
}

func WithLevel added in v4.7.0

func WithLevel(level Level) *LevelTracingOption

WithLevel passes a log level to InitTracing.

type MetricSpanProcessor added in v4.7.0

type MetricSpanProcessor struct {
	// contains filtered or unexported fields
}

MetricSpanProcessor implements SpanProcessor as a middleware to track via Prometheus metrics before export.

func NewMetricSpanProcessor added in v4.7.0

func NewMetricSpanProcessor(next sdktrace.SpanProcessor) *MetricSpanProcessor

func (*MetricSpanProcessor) ForceFlush added in v4.7.0

func (p *MetricSpanProcessor) ForceFlush(ctx context.Context) error

func (*MetricSpanProcessor) OnEnd added in v4.7.0

func (p *MetricSpanProcessor) OnEnd(s sdktrace.ReadOnlySpan)

func (*MetricSpanProcessor) OnStart added in v4.7.0

func (p *MetricSpanProcessor) OnStart(parent context.Context, s sdktrace.ReadWriteSpan)

func (*MetricSpanProcessor) Shutdown added in v4.7.0

func (p *MetricSpanProcessor) Shutdown(ctx context.Context) error

type ResourceOption added in v4.7.0

type ResourceOption struct {
	// contains filtered or unexported fields
}

func WithResource added in v4.7.0

func WithResource(res *resource.Resource) *ResourceOption

WithResource is a convenience function for the common use case of passing a Resource object as a TracerProviderOption.

type ScopeAction

type ScopeAction func(ctx context.Context) error

type TracerProviderTracingOption added in v4.7.0

type TracerProviderTracingOption struct {
	// contains filtered or unexported fields
}

func WithTracerProviderOption added in v4.7.0

func WithTracerProviderOption(opts ...sdktrace.TracerProviderOption) *TracerProviderTracingOption

WithTracerProviderOption passes TracerProviderOption arguments to InitTracing.

type TracingOption added in v4.7.0

type TracingOption interface {
	// contains filtered or unexported methods
}
