README
General Information
Processors are used at various stages of a pipeline. Generally, a processor pre-processes data before it is exported (e.g. modify attributes or sample) or helps ensure that data makes it through a pipeline successfully (e.g. batch/retry).
Some important aspects of pipelines and processors to be aware of:
- Recommended Processors
- Data Ownership
- Ordering Processors
Supported processors (sorted alphabetically):
- Attributes Processor
- Batch Processor
- Filter Processor
- Memory Limiter Processor
- Queued Retry Processor
- Resource Processor
- Sampling Processors
- Span Processor
The contributors repository has more processors that can be added to custom builds of the Collector.
Recommended Processors
No processors are enabled by default; however, multiple processors are recommended depending on the data source. Processors must be enabled for every data source, and not all processors support all data sources. In addition, it is important to note that the order of processors matters. The order in each section below is the best practice. Refer to the individual processor documentation for more information; a configuration sketch follows the lists below.
Traces
- memory_limiter
- any sampling processors
- batch
- any other processors
- queued_retry
Metrics
- memory_limiter
- batch
- any other processors
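For illustration, a minimal Collector configuration sketch enabling the recommended trace processors in this order. The otlp receiver/exporter and the memory_limiter settings shown here are assumptions for the example, not requirements:

processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 4000
  batch:
  queued_retry:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch, queued_retry]
      exporters: [otlp]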
Data Ownership
The ownership of the TraceData and MetricsData in a pipeline is passed as the data travels through the pipeline. The data is created by the receiver and then the ownership is passed to the first processor when the ConsumeTraceData/ConsumeMetricsData function is called.
Note: the receiver may be attached to multiple pipelines, in which case the same data will be passed to all attached pipelines via a data fan-out connector.
From the data ownership perspective, pipelines can work in two modes:
- Exclusive data ownership
- Shared data ownership
The mode is defined during startup based on the data modification intent reported by the processors. The intent is reported by each processor via the MutatesConsumedData field of the struct returned by the GetCapabilities function. If any processor in the pipeline declares an intent to modify the data then that pipeline will work in exclusive ownership mode. In addition, any other pipeline that receives data from a receiver that is attached to a pipeline with exclusive ownership mode will also operate in exclusive ownership mode.
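The following is a minimal, self-contained Go sketch of this mechanism. The Capabilities struct and GetCapabilities method names come from the text above; the struct layout and the sample processor are local stand-ins for illustration, not the Collector's actual definitions.

package main

import "fmt"

// Capabilities is a stand-in for the struct returned by GetCapabilities.
type Capabilities struct {
	// MutatesConsumedData reports whether the processor modifies the data
	// passed to ConsumeTraceData/ConsumeMetricsData.
	MutatesConsumedData bool
}

// attributesLikeProcessor is a hypothetical processor that edits span attributes.
type attributesLikeProcessor struct{}

// GetCapabilities declares the intent to modify data, which places this
// processor's pipeline (and pipelines sharing its receiver) in exclusive
// ownership mode.
func (p *attributesLikeProcessor) GetCapabilities() Capabilities {
	return Capabilities{MutatesConsumedData: true}
}

func main() {
	p := &attributesLikeProcessor{}
	fmt.Println("needs exclusive ownership:", p.GetCapabilities().MutatesConsumedData)
}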
Exclusive Ownership
In exclusive ownership mode the data is owned exclusively by a particular processor at a given moment in time, and the processor is free to modify the data it owns.
Exclusive ownership mode is only applicable for pipelines that receive data from the same receiver. If a pipeline is marked to be in exclusive ownership mode then any data received from a shared receiver will be cloned at the fan-out connector before passing further to each pipeline. This ensures that each pipeline has its own exclusive copy of data and the data can be safely modified in the pipeline.
The exclusive ownership of data allows processors to freely modify the data while they own it (e.g. see attributesprocessor). The duration of ownership of the data by a processor is from the beginning of the ConsumeTraceData/ConsumeMetricsData call until the processor calls the next processor's ConsumeTraceData/ConsumeMetricsData function, which passes the ownership to the next processor. After that the processor must no longer read or write the data since it may be concurrently modified by the new owner.
Exclusive ownership mode makes it easy to implement processors that need to modify the data: they simply declare that intent.
Shared Ownership
In shared ownership mode no particular processor owns the data, and no processor is allowed to modify the shared data.
In this mode no cloning is performed at the fan-out connector of receivers that are attached to multiple pipelines. In this case all such pipelines will see the same single shared copy of the data. Processors in pipelines operating in shared ownership mode are prohibited from modifying the original data that they receive via the ConsumeTraceData/ConsumeMetricsData call. Processors may only read the data but must not modify it.
If the processor needs to modify the data while performing the processing but does not want to incur the cost of data cloning that Exclusive mode brings, then the processor can declare that it does not modify the data and use any different technique that ensures the original data is not modified. For example, the processor can implement a copy-on-write approach for individual sub-parts of the TraceData/MetricsData argument. Any approach that does not mutate the original TraceData/MetricsData argument (including referenced data, such as Node, Resource, Spans, etc.) is allowed.
If the processor uses such a technique it should declare that it does not intend to modify the original data by setting MutatesConsumedData=false in its capabilities, to avoid marking the pipeline for exclusive ownership and to avoid the cost of data cloning described in the Exclusive Ownership section.
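Here is a minimal, self-contained Go sketch of the copy-on-write idea. The Span and TraceData types below are simplified stand-ins for the real Collector types, defined locally for illustration only:

package main

import "fmt"

// Span and TraceData are simplified stand-ins for the real Collector types.
type Span struct {
	Name       string
	Attributes map[string]string
}

type TraceData struct {
	Spans []*Span
}

// redactSpans returns a TraceData in which spans carrying the given attribute
// key are replaced by modified copies. The original TraceData, its Spans slice
// and the untouched Span structs are never mutated, so the processor could
// keep MutatesConsumedData=false.
func redactSpans(td TraceData, key string) TraceData {
	out := make([]*Span, len(td.Spans))
	for i, s := range td.Spans {
		if _, ok := s.Attributes[key]; !ok {
			out[i] = s // share the unmodified span
			continue
		}
		// Copy-on-write: clone the span (and its attribute map) before editing.
		clone := &Span{Name: s.Name, Attributes: make(map[string]string, len(s.Attributes))}
		for k, v := range s.Attributes {
			clone.Attributes[k] = v
		}
		delete(clone.Attributes, key)
		out[i] = clone
	}
	return TraceData{Spans: out}
}

func main() {
	orig := TraceData{Spans: []*Span{{Name: "a", Attributes: map[string]string{"secret": "x"}}}}
	redacted := redactSpans(orig, "secret")
	fmt.Println(orig.Spans[0].Attributes, redacted.Spans[0].Attributes)
}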
Ordering Processors
The order in which processors are specified in a pipeline is important, as this is the order in which each processor is applied to traces and metrics.
Include/Exclude Metrics
The filter processor exposes the option to provide a set of metric names to match against to determine if the metric should be included or excluded from the processor. To configure this option, under include and/or exclude both match_type and metric_names are required.
Note: If both include and exclude are specified, the include properties are checked before the exclude properties.
filter:
  # include and/or exclude can be specified. However, the include properties
  # are always checked before the exclude properties.
  {include, exclude}:
    # match_type controls how item matching is done.
    # Possible values are "regexp" or "strict".
    # This is a required field.
    match_type: {strict, regexp}

    # regexp is an optional configuration section for match_type regexp.
    regexp:
      # < see "Match Configuration" below >

    # metric_names specifies an array of items to match the metric name against.
    # This is a required field.
    metric_names: [<item1>, ..., <itemN>]
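For example, a concrete configuration (the metric names here are hypothetical) that includes all metrics whose names start with http. but drops one debug metric:

filter:
  include:
    match_type: regexp
    metric_names: ['http\..*']
  exclude:
    match_type: strict
    metric_names: [http.internal.debug]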
Match Configuration
Some match_type values have additional configuration options that can be specified. The match_type value is the name of the configuration section. These sections are optional.
# regexp is an optional configuration section for match_type regexp.
regexp:
  # cacheenabled determines whether match results are LRU cached to make subsequent matches faster.
  # Cache size is unlimited unless cachemaxnumentries is also specified.
  cacheenabled: <bool>
  # cachemaxnumentries is the max number of entries of the LRU cache; ignored if cacheenabled is false.
  cachemaxnumentries: <int>
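For example, to enable the cache and cap it (the values here are arbitrary):

regexp:
  cacheenabled: true
  cachemaxnumentries: 1000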
Include/Exclude Spans
The attributes processor and the span processor expose the option to provide a set of properties of a span to match against to determine if the span should be included or excluded from the processor. To configure this option, under include and/or exclude at least match_type and one of services, span_names or attributes is required.
Note: If both include and exclude are specified, the include properties are checked before the exclude properties.
{span, attributes}:
  # include and/or exclude can be specified. However, the include properties
  # are always checked before the exclude properties.
  {include, exclude}:
    # At least one of services, span_names or attributes must be specified.
    # It is supported to have more than one specified, but all of the specified
    # conditions must evaluate to true for a match to occur.

    # match_type controls how items in "services" and "span_names" arrays are
    # interpreted. Possible values are "regexp" or "strict".
    # This is a required field.
    match_type: {strict, regexp}

    # regexp is an optional configuration section for match_type regexp.
    regexp:
      # < see "Match Configuration" below >

    # services specifies an array of items to match the service name against.
    # A match occurs if the span service name matches at least one of the items.
    # This is an optional field.
    services: [<item1>, ..., <itemN>]

    # The span name must match at least one of the items.
    # This is an optional field.
    span_names: [<item1>, ..., <itemN>]

    # Attributes specifies the list of attributes to match against.
    # All of these attributes must match exactly for a match to occur.
    # Only match_type=strict is allowed if "attributes" are specified.
    # This is an optional field.
    attributes:
      # Key specifies the attribute to match against.
      - key: <key>
        # Value specifies the exact value to match against.
        # If not specified, a match occurs if the key is present in the attributes.
        value: {value}
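For example, a concrete attributes processor configuration (the service, attribute and action values here are hypothetical) that deletes an attribute only on spans from one service in production:

attributes:
  include:
    match_type: strict
    services: [auth-service]
    attributes:
      - key: environment
        value: production
  actions:
    - key: credit_card_number
      action: delete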
Match Configuration
Some match_type values have additional configuration options that can be specified. The match_type value is the name of the configuration section. These sections are optional.
# regexp is an optional configuration section for match_type regexp.
regexp:
  # cacheenabled determines whether match results are LRU cached to make subsequent matches faster.
  # Cache size is unlimited unless cachemaxnumentries is also specified.
  cacheenabled: <bool>
  # cachemaxnumentries is the max number of entries of the LRU cache; ignored if cacheenabled is false.
  cachemaxnumentries: <int>
Documentation
Index
- Variables
- func CreateMetricsCloningFanOutConnector(mcs []consumer.MetricsConsumerBase) consumer.MetricsConsumerBase
- func CreateMetricsFanOutConnector(mcs []consumer.MetricsConsumerBase) consumer.MetricsConsumerBase
- func CreateTraceCloningFanOutConnector(tcs []consumer.TraceConsumerBase) consumer.TraceConsumerBase
- func CreateTraceFanOutConnector(tcs []consumer.TraceConsumerBase) consumer.TraceConsumerBase
- func MetricTagKeys(level telemetry.Level) []tag.Key
- func MetricViews(level telemetry.Level) []*view.View
- func NewLogCloningFanOutConnector(lcs []consumer.LogConsumer) consumer.LogConsumer
- func NewLogFanOutConnector(lcs []consumer.LogConsumer) consumer.LogConsumer
- func NewMetricsFanOutConnector(mcs []consumer.MetricsConsumer) consumer.MetricsConsumer
- func NewMetricsFanOutConnectorOld(mcs []consumer.MetricsConsumerOld) consumer.MetricsConsumerOld
- func NewTraceFanOutConnector(tcs []consumer.TraceConsumer) consumer.TraceConsumer
- func NewTraceFanOutConnectorOld(tcs []consumer.TraceConsumerOld) consumer.TraceConsumerOld
- func RecordsSpanCountMetrics(ctx context.Context, scm *SpanCountStats, measure *stats.Int64Measure)
- func ServiceNameForNode(node *commonpb.Node) string
- func ServiceNameForResource(resource pdata.Resource) string
- type LogCloningFanOutConnector
- type LogFanOutConnector
- type SpanCountStats
Constants
This section is empty.
Variables
var (
    TagServiceNameKey, _   = tag.NewKey("service")
    TagProcessorNameKey, _ = tag.NewKey(obsreport.ProcessorKey)

    StatReceivedSpanCount = stats.Int64(
        "spans_received",
        "counts the number of spans received",
        stats.UnitDimensionless)

    StatDroppedSpanCount = stats.Int64(
        "spans_dropped",
        "counts the number of spans dropped",
        stats.UnitDimensionless)

    StatTraceBatchesDroppedCount = stats.Int64(
        "trace_batches_dropped",
        "counts the number of trace batches dropped",
        stats.UnitDimensionless)
)
Keys and stats for telemetry.
Functions
func CreateMetricsCloningFanOutConnector
func CreateMetricsCloningFanOutConnector(mcs []consumer.MetricsConsumerBase) consumer.MetricsConsumerBase
CreateMetricsCloningFanOutConnector is a placeholder function for now. It is supposed to create an old type connector or a new type connector based on the type of the provided metrics consumer.
func CreateMetricsFanOutConnector
func CreateMetricsFanOutConnector(mcs []consumer.MetricsConsumerBase) consumer.MetricsConsumerBase
CreateMetricsFanOutConnector creates a connector based on the provided type of metrics consumer. If any of the wrapped metrics consumers are of the new type, use metricsFanOutConnector, otherwise use the old type connector.
func CreateTraceCloningFanOutConnector
func CreateTraceCloningFanOutConnector(tcs []consumer.TraceConsumerBase) consumer.TraceConsumerBase
CreateTraceCloningFanOutConnector is a placeholder function for now. It is supposed to create an old type connector or a new type connector based on the type of the provided trace consumer.
func CreateTraceFanOutConnector
func CreateTraceFanOutConnector(tcs []consumer.TraceConsumerBase) consumer.TraceConsumerBase
CreateTraceFanOutConnector wraps multiple trace consumers in a single one. If any of the wrapped trace consumers are of the new type, use traceFanOutConnector, otherwise use the old type connector.
func MetricTagKeys
func MetricTagKeys(level telemetry.Level) []tag.Key
MetricTagKeys returns the metric tag keys according to the given telemetry level.
func MetricViews
func MetricViews(level telemetry.Level) []*view.View
MetricViews returns the metric views according to the given telemetry level.
func NewLogCloningFanOutConnector
func NewLogCloningFanOutConnector(lcs []consumer.LogConsumer) consumer.LogConsumer
NewLogCloningFanOutConnector wraps multiple log consumers in a single one.
func NewLogFanOutConnector
func NewLogFanOutConnector(lcs []consumer.LogConsumer) consumer.LogConsumer
NewLogFanOutConnector wraps multiple new type log consumers in a single one.
func NewMetricsFanOutConnector
func NewMetricsFanOutConnector(mcs []consumer.MetricsConsumer) consumer.MetricsConsumer
NewMetricsFanOutConnector wraps multiple new type metrics consumers in a single one.
func NewMetricsFanOutConnectorOld
func NewMetricsFanOutConnectorOld(mcs []consumer.MetricsConsumerOld) consumer.MetricsConsumerOld
NewMetricsFanOutConnectorOld wraps multiple metrics consumers in a single one.
func NewTraceFanOutConnector
func NewTraceFanOutConnector(tcs []consumer.TraceConsumer) consumer.TraceConsumer
NewTraceFanOutConnector wraps multiple new type trace consumers in a single one.
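A usage sketch of the fan-out connector. The import paths, and the assumption that consumer.TraceConsumer is satisfied by a ConsumeTraces(ctx, pdata.Traces) error method, fit this Collector era but are not confirmed by the text above; the countingConsumer type is hypothetical:

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/collector/consumer"
	"go.opentelemetry.io/collector/consumer/pdata"
	"go.opentelemetry.io/collector/processor"
)

// countingConsumer is a hypothetical consumer that counts ConsumeTraces calls.
type countingConsumer struct{ calls int }

func (c *countingConsumer) ConsumeTraces(ctx context.Context, td pdata.Traces) error {
	c.calls++
	return nil
}

func main() {
	a, b := &countingConsumer{}, &countingConsumer{}
	// Fan the same trace data out to both consumers (shared ownership: no cloning).
	fan := processor.NewTraceFanOutConnector([]consumer.TraceConsumer{a, b})
	_ = fan.ConsumeTraces(context.Background(), pdata.NewTraces())
	fmt.Println(a.calls, b.calls) // each wrapped consumer received the data once
}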
func NewTraceFanOutConnectorOld
func NewTraceFanOutConnectorOld(tcs []consumer.TraceConsumerOld) consumer.TraceConsumerOld
NewTraceFanOutConnectorOld wraps multiple trace consumers in a single one.
func RecordsSpanCountMetrics
func RecordsSpanCountMetrics(ctx context.Context, scm *SpanCountStats, measure *stats.Int64Measure)
RecordsSpanCountMetrics reports span count metrics for the specified measure.
func ServiceNameForNode
func ServiceNameForNode(node *commonpb.Node) string
ServiceNameForNode gets the service name for a specified node.
func ServiceNameForResource
func ServiceNameForResource(resource pdata.Resource) string
ServiceNameForResource gets the service name for a specified Resource. TODO: Find a better package for this function.
Types
type LogCloningFanOutConnector
type LogCloningFanOutConnector []consumer.LogConsumer
func (LogCloningFanOutConnector) ConsumeLogs
ConsumeLogs exports the log data to all consumers wrapped by the current one.
type LogFanOutConnector
type LogFanOutConnector []consumer.LogConsumer
func (LogFanOutConnector) ConsumeLogs
ConsumeLogs exports the log data to all consumers wrapped by the current one.
type SpanCountStats
type SpanCountStats struct {
    // contains filtered or unexported fields
}
SpanCountStats represents span count stats grouped by service if DETAILED telemetry level is set, otherwise only overall span count is stored in serviceSpansCounts.
func NewSpanCountStats
func NewSpanCountStats(td pdata.Traces) *SpanCountStats
func (*SpanCountStats) GetAllSpansCount
func (scm *SpanCountStats) GetAllSpansCount() int
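A small usage sketch based on the signatures above; the import paths are an assumption for this Collector version:

package main

import (
	"fmt"

	"go.opentelemetry.io/collector/consumer/pdata"
	"go.opentelemetry.io/collector/processor"
)

func main() {
	// Build span count stats for an (empty) trace batch and read the total.
	scm := processor.NewSpanCountStats(pdata.NewTraces())
	fmt.Println("total spans:", scm.GetAllSpansCount()) // 0 for empty traces
}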
Directories
Path | Synopsis
---|---
attributesprocessor | Package attributesprocessor contains the logic to modify attributes of a span.
filterprocessor | Package filterprocessor implements a processor for filtering (dropping) metrics and/or spans by various properties.
memorylimiter | Package memorylimiter provides a processor for the OpenTelemetry Service pipeline that drops data on the pipeline according to the current state of memory usage.
memorylimiterprocessor | Module
processorhelperprofiles | Module
xprocessorhelper | Module
processorprofiles | Module
resourceprocessor | Package resourceprocessor implements a processor for specifying resource labels to be added to OpenCensus trace data and metrics data.
samplingprocessor |
tailsamplingprocessor/idbatcher | Package idbatcher defines a pipeline of fixed size in which the elements are batches of ids.
tailsamplingprocessor/sampling | Package sampling contains the interfaces and data types used to implement the various sampling policies.
spanprocessor | Package spanprocessor contains logic to modify top level settings of a span, such as its name.
xprocessor | Module