groupby

package
v1.10.0
Published: Sep 28, 2023 License: BSD-3-Clause Imports: 14 Imported by: 0

Documentation

Constants

This section is empty.

Variables

var DefaultLimit = 1000000

Functions

This section is empty.

Types

type Aggregator

type Aggregator struct {
	// contains filtered or unexported fields
}

Aggregator performs the core aggregation computation for a list of reducer generators. It handles both regular and time-binned ("every") group-by operations. Records are generated in a deterministic but undefined total order.

func NewAggregator

func NewAggregator(ctx context.Context, zctx *zed.Context, keyRefs, keyExprs, aggRefs []expr.Evaluator, aggs []*expr.Aggregator, builder *zed.RecordBuilder, limit int, inputDir order.Direction, partialsIn, partialsOut bool) (*Aggregator, error)

func (*Aggregator) Consume

func (a *Aggregator) Consume(batch zbuf.Batch, this *zed.Value) error

Consume adds a value to an aggregation.
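The snippet below is a minimal sketch of driving an Aggregator by hand using only the calls documented above. The key and aggregate expressions, the record builder, and the import paths (guessed from this module's layout) are assumptions; in practice these pieces are normally wired up by the query compiler rather than written out like this. Iterating the batch via Values() also assumes the usual zbuf.Batch interface.

```go
package groupbyexample

import (
	"context"

	// NOTE: import paths below are assumptions about this module's layout.
	"github.com/brimdata/zed"
	"github.com/brimdata/zed/expr"
	"github.com/brimdata/zed/order"
	"github.com/brimdata/zed/runtime/op/groupby"
	"github.com/brimdata/zed/zbuf"
)

// aggregateBatch feeds one batch of values into a freshly built Aggregator.
// The keyRefs, keyExprs, aggRefs, aggs, and builder arguments are assumed to
// have been compiled elsewhere; only the groupby calls documented above are used.
func aggregateBatch(ctx context.Context, zctx *zed.Context,
	keyRefs, keyExprs, aggRefs []expr.Evaluator, aggs []*expr.Aggregator,
	builder *zed.RecordBuilder, batch zbuf.Batch) error {
	var dir order.Direction // zero value: input sort direction unknown
	agg, err := groupby.NewAggregator(ctx, zctx,
		keyRefs, keyExprs, aggRefs, aggs, builder,
		groupby.DefaultLimit, // presumably caps the in-memory group-by table size
		dir,
		false, false) // neither consuming nor producing partial aggregations
	if err != nil {
		return err
	}
	// Consume adds each value in the batch to the aggregation.
	vals := batch.Values()
	for i := range vals {
		if err := agg.Consume(batch, &vals[i]); err != nil {
			return err
		}
	}
	return nil
}
```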

type Op added in v1.7.0

type Op struct {
	// contains filtered or unexported fields
}

Op computes aggregations using an Aggregator.

func New

func New(octx *op.Context, parent zbuf.Puller, keys []expr.Assignment, aggNames field.List, aggs []*expr.Aggregator, limit int, inputSortDir order.Direction, partialsIn, partialsOut bool) (*Op, error)

func (*Op) Pull added in v1.7.0

func (o *Op) Pull(done bool) (zbuf.Batch, error)
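Continuing in the same example file as above, here is a minimal sketch of draining a constructed Op. It assumes the usual zbuf.Puller convention that Pull returns a nil batch (with a nil error) at end of stream and that passing done=true asks the operator to stop early; building the Op itself via New requires an op.Context, a parent puller, and compiled key/aggregate expressions, which would normally come from the compiler and are not shown.

```go
// drain pulls result batches from an already-built groupby Op until the
// stream ends. The nil-batch end-of-stream convention and the meaning of
// the done flag are assumptions about the zbuf.Puller contract.
func drain(o *groupby.Op) error {
	for {
		batch, err := o.Pull(false) // false: keep going; true would signal early termination
		if err != nil {
			return err
		}
		if batch == nil {
			return nil // end of stream
		}
		// ... consume batch.Values() here ...
		batch.Unref() // release the batch once processed
	}
}
```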

type Row

type Row struct {
	// contains filtered or unexported fields
}
