log

package module
v0.10.0
Published: Jan 17, 2025 · License: Apache-2.0 · Imports: 20 · Imported by: 192

README

Log SDK


Documentation

Overview

Package log provides the OpenTelemetry Logs SDK.

See https://opentelemetry.io/docs/concepts/signals/logs/ for information about the concept of OpenTelemetry Logs and https://opentelemetry.io/docs/concepts/components/ for more information about OpenTelemetry SDKs.

The entry point for the log package is NewLoggerProvider. LoggerProvider is the object that all Bridge API calls use to create Loggers and, ultimately, emit log records. It is also the object used to control the life-cycle (start, flush, and shutdown) of the Logs SDK.

A LoggerProvider needs to be configured to process the log records; this is done by configuring it with a Processor implementation using WithProcessor. The log package provides BatchProcessor and SimpleProcessor, each of which is configured with an Exporter implementation that exports the log records to a given destination. See go.opentelemetry.io/otel/exporters for exporters that can be used with these Processors.

The data generated by a LoggerProvider needs to include information about its origin. A LoggerProvider needs to be configured with a Resource, using WithResource, to include this information. This Resource should describe the unique runtime environment that the instrumented code runs on. That way, when multiple instances of the code are collected at a single endpoint, their origin is decipherable.
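
For example, a minimal sketch of configuring a LoggerProvider with a Resource (the service name "my-service" and the semconv version are illustrative assumptions):

package main

import (
	"go.opentelemetry.io/otel/sdk/log"
	"go.opentelemetry.io/otel/sdk/resource"
	semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)

func main() {
	// Describe the entity producing telemetry. The attribute values are
	// placeholders; use values that identify your deployment.
	res := resource.NewWithAttributes(
		semconv.SchemaURL,
		semconv.ServiceName("my-service"),
	)

	// Associate the Resource with the provider so that every emitted log
	// record carries this origin information.
	_ = log.NewLoggerProvider(
		log.WithResource(res),
	)
}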

See go.opentelemetry.io/otel/log for more information about the OpenTelemetry Logs Bridge API.

See go.opentelemetry.io/otel/sdk/log/internal/x for information about the experimental features.

Example

Initialize the OpenTelemetry Logs SDK and set up logging using a log bridge.

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/log/global"
	"go.opentelemetry.io/otel/sdk/log"
)

func main() {
	// Create an exporter that will emit log records.
	// E.g. use go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp
	// to send logs using OTLP over HTTP:
	// exporter, err := otlploghttp.New(ctx)
	var exporter log.Exporter

	// Create a log record processor pipeline.
	processor := log.NewBatchProcessor(exporter)

	// Create a logger provider.
	// You can pass this instance directly when creating a log bridge.
	provider := log.NewLoggerProvider(
		log.WithProcessor(processor),
	)

	// Handle shutdown properly so that nothing leaks.
	defer func() {
		err := provider.Shutdown(context.Background())
		if err != nil {
			fmt.Println(err)
		}
	}()

	// Register as the global logger provider so that it can be used via
	// global.Logger and accessed using global.GetLoggerProvider.
	// Most log bridges use the global logger provider as default.
	// If the global logger provider is not set then a no-op implementation
	// is used, which fails to generate data.
	global.SetLoggerProvider(provider)

	// Use a bridge so that you can emit logs using your Go logging library of preference.
	// E.g. use go.opentelemetry.io/contrib/bridges/otelslog so that you can use log/slog:
	// slog.SetDefault(otelslog.NewLogger("my/pkg/name", otelslog.WithLoggerProvider(provider)))
}

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type BatchProcessor

type BatchProcessor struct {
	// contains filtered or unexported fields
}

BatchProcessor is a processor that exports batches of log records.

Use NewBatchProcessor to create a BatchProcessor. An empty BatchProcessor is shut down by default; no records will be batched or exported.

func NewBatchProcessor

func NewBatchProcessor(exporter Exporter, opts ...BatchProcessorOption) *BatchProcessor

NewBatchProcessor decorates the provided exporter so that the log records are batched before exporting.

All of the exporter's methods are called synchronously.

func (*BatchProcessor) ForceFlush

func (b *BatchProcessor) ForceFlush(ctx context.Context) error

ForceFlush flushes queued log records and flushes the decorated exporter.

func (*BatchProcessor) OnEmit

func (b *BatchProcessor) OnEmit(_ context.Context, r *Record) error

OnEmit batches the provided log record.

func (*BatchProcessor) Shutdown

func (b *BatchProcessor) Shutdown(ctx context.Context) error

Shutdown flushes queued log records and shuts down the decorated exporter.

type BatchProcessorOption

type BatchProcessorOption interface {
	// contains filtered or unexported methods
}

BatchProcessorOption applies a configuration to a BatchProcessor.

func WithExportBufferSize added in v0.7.0

func WithExportBufferSize(size int) BatchProcessorOption

WithExportBufferSize sets the batch buffer size. Batches will be temporarily kept in a memory buffer until they are exported.

By default, a value of 1 will be used. The default value is also used when the provided value is less than one.

func WithExportInterval

func WithExportInterval(d time.Duration) BatchProcessorOption

WithExportInterval sets the maximum duration between batched exports.

If the OTEL_BLRP_SCHEDULE_DELAY environment variable is set, and this option is not passed, that variable value will be used.

By default, if an environment variable is not set, and this option is not passed, 1s will be used. The default value is also used when the provided value is less than one.

func WithExportMaxBatchSize

func WithExportMaxBatchSize(size int) BatchProcessorOption

WithExportMaxBatchSize sets the maximum batch size of every export. A batch will be split into multiple exports to not exceed this size.

If the OTEL_BLRP_MAX_EXPORT_BATCH_SIZE environment variable is set, and this option is not passed, that variable value will be used.

By default, if an environment variable is not set, and this option is not passed, 512 will be used. The default value is also used when the provided value is less than one.

func WithExportTimeout

func WithExportTimeout(d time.Duration) BatchProcessorOption

WithExportTimeout sets the duration after which a batched export is canceled.

If the OTEL_BLRP_EXPORT_TIMEOUT environment variable is set, and this option is not passed, that variable value will be used.

By default, if an environment variable is not set, and this option is not passed, 30s will be used. The default value is also used when the provided value is less than one.

func WithMaxQueueSize

func WithMaxQueueSize(size int) BatchProcessorOption

WithMaxQueueSize sets the maximum queue size used by the BatchProcessor. Once the queue is full, new log records are dropped.

If the OTEL_BLRP_MAX_QUEUE_SIZE environment variable is set, and this option is not passed, that variable value will be used.

By default, if an environment variable is not set, and this option is not passed, 2048 will be used. The default value is also used when the provided value is less than one.
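
A minimal sketch combining these batching options (the values shown are illustrative, not recommendations):

package main

import (
	"time"

	"go.opentelemetry.io/otel/sdk/log"
)

func main() {
	// An Exporter would normally be created here (e.g. an OTLP exporter);
	// a nil placeholder keeps the sketch self-contained.
	var exporter log.Exporter

	// Options that are not passed fall back to the documented defaults or to
	// the corresponding OTEL_BLRP_* environment variables.
	processor := log.NewBatchProcessor(exporter,
		log.WithMaxQueueSize(4096),
		log.WithExportMaxBatchSize(512),
		log.WithExportInterval(5*time.Second),
		log.WithExportTimeout(10*time.Second),
		log.WithExportBufferSize(2),
	)

	_ = log.NewLoggerProvider(log.WithProcessor(processor))
}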

type Exporter

type Exporter interface {
	// Export transmits log records to a receiver.
	//
	// The deadline or cancellation of the passed context must be honored. An
	// appropriate error should be returned in these situations.
	//
	// All retry logic must be contained in this function. The SDK does not
	// implement any retry logic. All errors returned by this function are
	// considered unrecoverable and will be reported to a configured error
	// Handler.
	//
	// Implementations must not retain the records slice.
	//
	// Before modifying a Record, the implementation must use Record.Clone
	// to create a copy that shares no state with the original.
	//
	// Export should never be called concurrently with other Export calls.
	// However, it may be called concurrently with other methods.
	Export(ctx context.Context, records []Record) error

	// Shutdown is called when the SDK shuts down. Any cleanup or release of
	// resources held by the exporter should be done in this call.
	//
	// The deadline or cancellation of the passed context must be honored. An
	// appropriate error should be returned in these situations.
	//
	// After Shutdown is called, calls to Export, Shutdown, or ForceFlush
	// should perform no operation and return nil error.
	//
	// Shutdown may be called concurrently with itself or with other methods.
	Shutdown(ctx context.Context) error

	// ForceFlush exports any log records that have not yet been exported.
	//
	// The deadline or cancellation of the passed context must be honored. An
	// appropriate error should be returned in these situations.
	//
	// ForceFlush may be called concurrently with itself or with other methods.
	ForceFlush(ctx context.Context) error
}

Exporter handles the delivery of log records to external receivers.
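
As a rough illustration of this contract, a minimal Exporter sketch (printExporter is a hypothetical type, not part of the SDK; it only prints record bodies to stdout):

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/sdk/log"
)

type printExporter struct{}

func (printExporter) Export(ctx context.Context, records []log.Record) error {
	// Honor cancellation of the passed context, as the interface requires.
	if err := ctx.Err(); err != nil {
		return err
	}
	for _, r := range records {
		// Do not retain the records slice after Export returns.
		fmt.Println(r.Severity(), r.Body().String())
	}
	return nil
}

func (printExporter) Shutdown(context.Context) error   { return nil }
func (printExporter) ForceFlush(context.Context) error { return nil }

func main() {
	_ = log.NewLoggerProvider(
		log.WithProcessor(log.NewSimpleProcessor(printExporter{})),
	)
}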

type LoggerProvider

type LoggerProvider struct {
	embedded.LoggerProvider
	// contains filtered or unexported fields
}

LoggerProvider handles the creation and coordination of Loggers. All Loggers created by a LoggerProvider will be associated with the same Resource.

func NewLoggerProvider

func NewLoggerProvider(opts ...LoggerProviderOption) *LoggerProvider

NewLoggerProvider returns a new and configured LoggerProvider.

By default, the returned LoggerProvider is configured with the default Resource and no Processors. Processors cannot be added after a LoggerProvider is created. This means a LoggerProvider created with no Processors will perform no operations.

func (*LoggerProvider) ForceFlush

func (p *LoggerProvider) ForceFlush(ctx context.Context) error

ForceFlush flushes all processors.

This method can be called concurrently.

func (*LoggerProvider) Logger

func (p *LoggerProvider) Logger(name string, opts ...log.LoggerOption) log.Logger

Logger returns a new log.Logger with the provided name and configuration.

If p is shut down, a noop.Logger instance is returned.

This method can be called concurrently.
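
A minimal sketch of emitting a record through the returned Bridge API logger (application code normally goes through a log bridge instead of calling Emit directly):

package main

import (
	"context"

	logapi "go.opentelemetry.io/otel/log"
	sdklog "go.opentelemetry.io/otel/sdk/log"
)

func main() {
	// A provider without processors performs no operations; it is enough to
	// illustrate the Logger call.
	provider := sdklog.NewLoggerProvider()
	defer func() { _ = provider.Shutdown(context.Background()) }()

	// Loggers are identified by the instrumentation scope name.
	logger := provider.Logger("my/pkg/name")

	// Build and emit a record using the Bridge API types.
	var rec logapi.Record
	rec.SetSeverity(logapi.SeverityInfo)
	rec.SetBody(logapi.StringValue("hello from the Logs SDK"))
	logger.Emit(context.Background(), rec)
}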

func (*LoggerProvider) Shutdown

func (p *LoggerProvider) Shutdown(ctx context.Context) error

Shutdown shuts down the provider and all processors.

This method can be called concurrently.

type LoggerProviderOption

type LoggerProviderOption interface {
	// contains filtered or unexported methods
}

LoggerProviderOption applies a configuration option value to a LoggerProvider.

func WithAttributeCountLimit

func WithAttributeCountLimit(limit int) LoggerProviderOption

WithAttributeCountLimit sets the maximum allowed log record attribute count. Any attribute added to a log record once this limit is reached will be dropped.

Setting this to zero means no attributes will be recorded.

Setting this to a negative value means no limit is applied.

If the OTEL_LOGRECORD_ATTRIBUTE_COUNT_LIMIT environment variable is set, and this option is not passed, that variable value will be used.

By default, if an environment variable is not set, and this option is not passed, 128 will be used.

func WithAttributeValueLengthLimit

func WithAttributeValueLengthLimit(limit int) LoggerProviderOption

WithAttributeValueLengthLimit sets the maximum allowed attribute value length.

This limit only applies to string and string slice attribute values. Any string longer than this value will be truncated to this length.

Setting this to a negative value means no limit is applied.

If the OTEL_LOGRECORD_ATTRIBUTE_VALUE_LENGTH_LIMIT environment variable is set, and this option is not passed, that variable value will be used.

By default, if an environment variable is not set, and this option is not passed, no limit (-1) will be used.
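
A short sketch combining both limit options (the values are illustrative):

package main

import "go.opentelemetry.io/otel/sdk/log"

func main() {
	// Cap each record at 64 attributes and truncate string values at 256
	// characters; attributes dropped because of the count limit are reported
	// by Record.DroppedAttributes.
	_ = log.NewLoggerProvider(
		log.WithAttributeCountLimit(64),
		log.WithAttributeValueLengthLimit(256),
	)
}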

func WithProcessor

func WithProcessor(processor Processor) LoggerProviderOption

WithProcessor associates Processor with a LoggerProvider.

By default, if this option is not used, the LoggerProvider will perform no operations; no data will be exported without a processor.

The SDK invokes the processors sequentially in the same order as they were registered.

For production, use NewBatchProcessor to batch log records before they are exported. For testing and debugging, use NewSimpleProcessor to synchronously export log records.

func WithResource

func WithResource(res *resource.Resource) LoggerProviderOption

WithResource associates a Resource with a LoggerProvider. This Resource represents the entity producing telemetry and is associated with all Loggers the LoggerProvider will create.

By default, if this Option is not used, the default Resource from the go.opentelemetry.io/otel/sdk/resource package will be used.

type Processor

type Processor interface {
	// OnEmit is called when a Record is emitted.
	//
	// OnEmit will be called independent of Enabled. Implementations need to
	// validate the arguments themselves before processing.
	//
	// Implementations should not interrupt the record processing
	// if the context is canceled.
	//
	// All retry logic must be contained in this function. The SDK does not
	// implement any retry logic. All errors returned by this function are
	// considered unrecoverable and will be reported to a configured error
	// Handler.
	//
	// The SDK invokes the processors sequentially in the same order as
	// they were registered using [WithProcessor].
	// Implementations may synchronously modify the record so that the changes
	// are visible in the next registered processor.
	// Notice that [Record] is not concurrent safe. Therefore, asynchronous
	// processing may cause race conditions. Use [Record.Clone]
	// to create a copy that shares no state with the original.
	OnEmit(ctx context.Context, record *Record) error

	// Shutdown is called when the SDK shuts down. Any cleanup or release of
	// resources held by the processor should be done in this call.
	//
	// The deadline or cancellation of the passed context must be honored. An
	// appropriate error should be returned in these situations.
	//
	// After Shutdown is called, calls to OnEmit, Shutdown, or ForceFlush
	// should perform no operation and return nil error.
	Shutdown(ctx context.Context) error

	// ForceFlush exports any log records that have not yet been exported.
	//
	// The deadline or cancellation of the passed context must be honored. An
	// appropriate error should be returned in these situations.
	ForceFlush(ctx context.Context) error
}

Processor handles the processing of log records.

Any of the Processor's methods may be called concurrently with itself or with other methods. It is the responsibility of the Processor to manage this concurrency.

See go.opentelemetry.io/otel/sdk/log/internal/x for information about how a Processor can be extended to support experimental features.

Example (Filtering)

Use a processor that filters out records based on the provided context.

package main

import (
	"context"
	"sync"

	logapi "go.opentelemetry.io/otel/log"
	"go.opentelemetry.io/otel/sdk/log"
)

func main() {
	// Existing processor that emits telemetry.
	var processor log.Processor = log.NewBatchProcessor(nil)

	// Wrap the processor so that it ignores processing log records
	// when a context deriving from WithIgnoreLogs is passed
	// to the logging methods.
	processor = &ContextFilterProcessor{Processor: processor}

	// The created processor can then be registered with
	// the OpenTelemetry Logs SDK using the WithProcessor option.
	_ = log.NewLoggerProvider(
		log.WithProcessor(processor),
	)
}

type key struct{}

var ignoreLogsKey key

// WithIgnoreLogs returns a context derived from ctx that signals
// [ContextFilterProcessor] to ignore log records emitted with it.
func WithIgnoreLogs(ctx context.Context) context.Context {
	return context.WithValue(ctx, ignoreLogsKey, true)
}

// ContextFilterProcessor filters out logs when a context deriving from
// [WithIgnoreLogs] is passed to its methods.
type ContextFilterProcessor struct {
	log.Processor

	lazyFilter sync.Once

	filter filter
}

type filter interface {
	Enabled(ctx context.Context, param logapi.EnabledParameters) bool
}

func (p *ContextFilterProcessor) OnEmit(ctx context.Context, record *log.Record) error {
	if ignoreLogs(ctx) {
		return nil
	}
	return p.Processor.OnEmit(ctx, record)
}

func (p *ContextFilterProcessor) Enabled(ctx context.Context, param logapi.EnabledParameters) bool {
	p.lazyFilter.Do(func() {
		if f, ok := p.Processor.(filter); ok {
			p.filter = f
		}
	})
	return !ignoreLogs(ctx) && (p.filter == nil || p.filter.Enabled(ctx, param))
}

func ignoreLogs(ctx context.Context) bool {
	_, ok := ctx.Value(ignoreLogsKey).(bool)
	return ok
}

Example (Redact)

Use a processor that redacts sensitive data from some attributes.

package main

import (
	"context"
	"strings"

	logapi "go.opentelemetry.io/otel/log"
	"go.opentelemetry.io/otel/sdk/log"
)

func main() {
	// Existing processor that emits telemetry.
	var processor log.Processor = log.NewBatchProcessor(nil)

	// Add a processor so that it redacts values from token attributes.
	redactProcessor := &RedactTokensProcessor{}

	// The created processor can then be registered with
	// the OpenTelemetry Logs SDK using the WithProcessor option.
	_ = log.NewLoggerProvider(
		// Order is important here. Redact before handing to the processor.
		log.WithProcessor(redactProcessor),
		log.WithProcessor(processor),
	)
}

// RedactTokensProcessor is a [log.Processor] decorator that redacts values
// from attributes containing "token" in the key.
type RedactTokensProcessor struct{}

// OnEmit redacts values from attributes containing "token" in the key
// by replacing them with a REDACTED value.
func (p *RedactTokensProcessor) OnEmit(ctx context.Context, record *log.Record) error {
	record.WalkAttributes(func(kv logapi.KeyValue) bool {
		if strings.Contains(strings.ToLower(kv.Key), "token") {
			record.AddAttributes(logapi.String(kv.Key, "REDACTED"))
		}
		return true
	})
	return nil
}

// Shutdown returns nil.
func (p *RedactTokensProcessor) Shutdown(ctx context.Context) error {
	return nil
}

// ForceFlush returns nil.
func (p *RedactTokensProcessor) ForceFlush(ctx context.Context) error {
	return nil
}

type Record

type Record struct {
	// contains filtered or unexported fields
}

Record is a log record emitted by the Logger.

Do not create instances of Record on your own in production code. You can use go.opentelemetry.io/otel/sdk/log/logtest.RecordFactory for testing purposes.
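
For tests, a short sketch using the factory (assuming this release's RecordFactory exposes Severity and Body fields and a NewRecord method):

package main

import (
	"fmt"

	logapi "go.opentelemetry.io/otel/log"
	"go.opentelemetry.io/otel/sdk/log/logtest"
)

func main() {
	// Construct an SDK Record for exercising an Exporter or Processor in tests.
	f := logtest.RecordFactory{
		Severity: logapi.SeverityInfo,
		Body:     logapi.StringValue("test record"),
	}
	rec := f.NewRecord()

	fmt.Println(rec.Severity(), rec.Body().String())
}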

func (*Record) AddAttributes

func (r *Record) AddAttributes(attrs ...log.KeyValue)

AddAttributes adds attributes to the log record. Attributes in attrs will overwrite any attribute already added to r with the same key.

func (*Record) AttributesLen

func (r *Record) AttributesLen() int

AttributesLen returns the number of attributes in the log record.

func (*Record) Body

func (r *Record) Body() log.Value

Body returns the body of the log record.

func (*Record) Clone

func (r *Record) Clone() Record

Clone returns a copy of the record with no shared state. The original record and the clone can both be modified without interfering with each other.
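
A minimal sketch of when Clone matters (asyncProcessor is a hypothetical decorator): a processor that hands a record to another goroutine should clone it first, because Record is not concurrency safe.

package main

import (
	"context"

	"go.opentelemetry.io/otel/sdk/log"
)

type asyncProcessor struct {
	log.Processor
}

func (p *asyncProcessor) OnEmit(ctx context.Context, r *log.Record) error {
	clone := r.Clone() // shares no state with the record owned by the caller
	go func() {
		// Process the independent copy; the original record may be reused by
		// the SDK after OnEmit returns.
		_ = p.Processor.OnEmit(context.WithoutCancel(ctx), &clone)
	}()
	return nil
}

func main() {
	_ = log.NewLoggerProvider(
		log.WithProcessor(&asyncProcessor{Processor: log.NewBatchProcessor(nil)}),
	)
}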

func (*Record) DroppedAttributes

func (r *Record) DroppedAttributes() int

DroppedAttributes returns the number of attributes dropped due to limits being reached.

func (*Record) InstrumentationScope

func (r *Record) InstrumentationScope() instrumentation.Scope

InstrumentationScope returns the scope that the Logger was created with.

func (*Record) ObservedTimestamp

func (r *Record) ObservedTimestamp() time.Time

ObservedTimestamp returns the time when the log record was observed.

func (*Record) Resource

func (r *Record) Resource() resource.Resource

Resource returns the entity that collected the log.

func (*Record) SetAttributes

func (r *Record) SetAttributes(attrs ...log.KeyValue)

SetAttributes sets (and overrides) the attributes of the log record.

func (*Record) SetBody

func (r *Record) SetBody(v log.Value)

SetBody sets the body of the log record.

func (*Record) SetObservedTimestamp

func (r *Record) SetObservedTimestamp(t time.Time)

SetObservedTimestamp sets the time when the log record was observed.

func (*Record) SetSeverity

func (r *Record) SetSeverity(level log.Severity)

SetSeverity sets the severity level of the log record.

func (*Record) SetSeverityText

func (r *Record) SetSeverityText(text string)

SetSeverityText sets severity (also known as log level) text. This is the original string representation of the severity as it is known at the source.

func (*Record) SetSpanID

func (r *Record) SetSpanID(id trace.SpanID)

SetSpanID sets the span ID.

func (*Record) SetTimestamp

func (r *Record) SetTimestamp(t time.Time)

SetTimestamp sets the time when the log record occurred.

func (*Record) SetTraceFlags

func (r *Record) SetTraceFlags(flags trace.TraceFlags)

SetTraceFlags sets the trace flags.

func (*Record) SetTraceID

func (r *Record) SetTraceID(id trace.TraceID)

SetTraceID sets the trace ID.

func (*Record) Severity

func (r *Record) Severity() log.Severity

Severity returns the severity of the log record.

func (*Record) SeverityText

func (r *Record) SeverityText() string

SeverityText returns severity (also known as log level) text. This is the original string representation of the severity as it is known at the source.

func (*Record) SpanID

func (r *Record) SpanID() trace.SpanID

SpanID returns the span ID, or a zero span ID if it is not set.

func (*Record) Timestamp

func (r *Record) Timestamp() time.Time

Timestamp returns the time when the log record occurred.

func (*Record) TraceFlags

func (r *Record) TraceFlags() trace.TraceFlags

TraceFlags returns the trace flags.

func (*Record) TraceID

func (r *Record) TraceID() trace.TraceID

TraceID returns the trace ID, or a zero trace ID if it is not set.

func (*Record) WalkAttributes

func (r *Record) WalkAttributes(f func(log.KeyValue) bool)

WalkAttributes walks all attributes the log record holds by calling f on each log.KeyValue in the Record. Iteration stops if f returns false.

type SimpleProcessor

type SimpleProcessor struct {
	// contains filtered or unexported fields
}

SimpleProcessor is a processor that synchronously exports log records.

Use NewSimpleProcessor to create a SimpleProcessor.

func NewSimpleProcessor

func NewSimpleProcessor(exporter Exporter, _ ...SimpleProcessorOption) *SimpleProcessor

NewSimpleProcessor returns a SimpleProcessor that wraps the provided exporter and exports each log record synchronously.

This Processor is not recommended for production use: its synchronous nature makes it suitable for testing, debugging, or demonstrating other features, but it can lead to slow performance and high computational overhead. For production environments, use NewBatchProcessor instead. However, some Exporter implementations may perform better with this Processor.
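
A minimal sketch of the testing and debugging setup this is aimed at, using the stdout exporter (assuming go.opentelemetry.io/otel/exporters/stdout/stdoutlog is available):

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/exporters/stdout/stdoutlog"
	"go.opentelemetry.io/otel/sdk/log"
)

func main() {
	// Write log records to stdout; useful for local debugging only.
	exporter, err := stdoutlog.New()
	if err != nil {
		fmt.Println(err)
		return
	}

	// Each emitted record is exported synchronously, with no batching.
	provider := log.NewLoggerProvider(
		log.WithProcessor(log.NewSimpleProcessor(exporter)),
	)
	defer func() { _ = provider.Shutdown(context.Background()) }()
}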

func (*SimpleProcessor) ForceFlush

func (s *SimpleProcessor) ForceFlush(ctx context.Context) error

ForceFlush flushes the exporter.

func (*SimpleProcessor) OnEmit

func (s *SimpleProcessor) OnEmit(ctx context.Context, r *Record) error

OnEmit exports the provided log record synchronously using the configured exporter.

func (*SimpleProcessor) Shutdown

func (s *SimpleProcessor) Shutdown(ctx context.Context) error

Shutdown shuts down the exporter.

type SimpleProcessorOption

type SimpleProcessorOption interface {
	// contains filtered or unexported methods
}

SimpleProcessorOption applies a configuration to a SimpleProcessor.

Directories

Path Synopsis
internal/x: Package x contains support for Logs SDK experimental features.
logtest: Package logtest is a testing helper package.
