Documentation ¶
Index ¶
- Constants
- Variables
- type Cardinality
- type Collator
- type ContReaderFunc
- type ContWriterFunc
- type ContinuousInput
- type ContinuousOutput
- type Count
- type DiscReaderFunc
- type DiscWriterFunc
- type DiscreteInput
- type DiscreteOutput
- type FM
- type FMV
- type FV
- type FVList
- type FieldSpec
- type Filter
- type KeySpec
- type Max
- type Mean
- type Measure
- type MeasureSpec
- type Min
- type Query
- type Record
- type Result
- type Row
- type RowMeasure
- type RowWriterFunc
- type Schema
- type Sum
- type Variance
Constants ¶
const (
RowPseudoField = "*" // a field name that represents a row-level measure
)
Variables ¶
var (
ErrMeasureNotFound = errors.New("measure not found for field")
)
Functions ¶
This section is empty.
Types ¶
type Cardinality ¶
type Cardinality struct {
Precision int // precision between 4 and 18, inclusive
}
Cardinality is an approximate unique count of discrete observations over all time.
func (Cardinality) ContReader ¶
func (c Cardinality) ContReader() ContReaderFunc
func (Cardinality) DiscWriter ¶
func (c Cardinality) DiscWriter() DiscWriterFunc
func (Cardinality) Name ¶
func (Cardinality) Name() string
func (Cardinality) Size ¶
func (c Cardinality) Size() int
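The 4 to 18 precision bound and the fixed Size reported per record are consistent with a HyperLogLog-style sketch, where precision p gives 2^p registers and accuracy improves as p grows. The package does not state which algorithm it uses, so the standalone sketch below is only an illustration of how precision typically trades memory for error in such approximate counters.

package main

import (
	"fmt"
	"math"
)

func main() {
	// Assumption: a HyperLogLog-style counter with m = 2^p registers and a
	// relative standard error of roughly 1.04/sqrt(m). Not confirmed by the
	// package documentation; shown only to motivate the 4..18 range.
	for _, p := range []int{4, 11, 18} {
		m := 1 << p
		fmt.Printf("precision %2d: %6d registers, ~%.2f%% standard error\n",
			p, m, 104/math.Sqrt(float64(m)))
	}
}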
type Collator ¶
type Collator struct {
// contains filtered or unexported fields
}
func (Collator) Read ¶
Note that the byte slices in fml and buf must not be retained or reused by the collator.
func (Collator) Update ¶
Update reads the row and updates buf according to the schema's measures. buf contains the existing state of the collation record and is updated in place. buf must be long enough for the schema's data. If init is true then the buffer contains uninitialised data and the measures should initialise their state. If an error is returned then the buffer may be in an inconsistent state and should not be used further.
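The buffer contract described above (fixed-size state held in a byte slice, updated in place, an init flag for fresh records, and a buffer that must be discarded on error) can be illustrated with a standalone sketch. The real Collator.Update signature is not shown in this documentation; the helper below is hypothetical and only demonstrates the pattern.

package main

import (
	"encoding/binary"
	"fmt"
)

// updateCount is a stand-in for a single measure's update step: it treats buf
// as a little-endian uint64 counter, initialising it when init is true and
// incrementing it in place on every call.
func updateCount(buf []byte, init bool) error {
	if len(buf) < 8 {
		return fmt.Errorf("buffer too small: %d bytes", len(buf))
	}
	if init {
		binary.LittleEndian.PutUint64(buf, 0)
	}
	n := binary.LittleEndian.Uint64(buf)
	binary.LittleEndian.PutUint64(buf, n+1)
	return nil
}

func main() {
	buf := make([]byte, 8) // must be long enough for the measure's state
	for i := 0; i < 5; i++ {
		if err := updateCount(buf, i == 0); err != nil {
			// On error the buffer may be inconsistent and should be discarded.
			panic(err)
		}
	}
	fmt.Println(binary.LittleEndian.Uint64(buf)) // 5
}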
type ContReaderFunc ¶
type ContinuousInput ¶
type ContinuousInput interface {
ContWriter() ContWriterFunc
}
type ContinuousOutput ¶
type ContinuousOutput interface {
ContReader() ContReaderFunc
}
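ContWriterFunc and ContReaderFunc are not defined in this excerpt. The standalone sketch below assumes hypothetical signatures (a writer that folds a float64 observation into a byte buffer, and a reader that extracts the aggregate) purely to illustrate how one measure can satisfy both the input and output sides; DiscreteInput and DiscreteOutput follow the same shape for discrete observations.

package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

// Hypothetical function shapes; the real ContWriterFunc and ContReaderFunc
// signatures are not shown in this documentation.
type writerFunc func(buf []byte, v float64, init bool)
type readerFunc func(buf []byte) float64

// sum is a toy continuous measure: its writer folds observations into an
// 8-byte float64 state and its reader returns the accumulated value.
type sum struct{}

func (sum) writer() writerFunc {
	return func(buf []byte, v float64, init bool) {
		var s float64
		if !init {
			s = math.Float64frombits(binary.LittleEndian.Uint64(buf))
		}
		binary.LittleEndian.PutUint64(buf, math.Float64bits(s+v))
	}
}

func (sum) reader() readerFunc {
	return func(buf []byte) float64 {
		return math.Float64frombits(binary.LittleEndian.Uint64(buf))
	}
}

func main() {
	buf := make([]byte, 8)
	w, r := sum{}.writer(), sum{}.reader()
	for i, v := range []float64{1, 2, 3.5} {
		w(buf, v, i == 0)
	}
	fmt.Println(r(buf)) // 6.5
}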
type Count ¶
type Count struct{}
Count is a precise count of observations over all time.
func (Count) ContReader ¶
func (Count) ContReader() ContReaderFunc
func (Count) RowWriter ¶
func (Count) RowWriter() RowWriterFunc
type DiscReaderFunc ¶
type DiscreteInput ¶
type DiscreteInput interface {
DiscWriter() DiscWriterFunc
}
type DiscreteOutput ¶
type DiscreteOutput interface {
DiscReader() DiscReaderFunc
}
type FVList ¶
type FVList []FV
func (FVList) KeyValue ¶
KeyValue builds a key value by locating Query criteria fields corresponding to the key names supplied and hashing their names and values. It returns false as the second argument if the key could not be constructed, e.g. if one of the keys does not match a field in the query criteria. keys must be sorted in ascending order.
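A standalone sketch of the behaviour described above: locate each requested key among the field/value pairs, hash names and values in sorted key order, and report false when a key has no matching field. The package's actual hash function and types are not shown here, so the code below is illustrative only.

package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

type fv struct{ Field, Value string }

// keyValue hashes the requested keys' names and values in key order and
// reports false if any key does not match a field.
func keyValue(fields []fv, keys []string) (uint64, bool) {
	if !sort.StringsAreSorted(keys) {
		return 0, false // keys must be sorted in ascending order
	}
	h := fnv.New64a()
	for _, k := range keys {
		found := false
		for _, f := range fields {
			if f.Field == k {
				h.Write([]byte(f.Field))
				h.Write([]byte(f.Value))
				found = true
				break
			}
		}
		if !found {
			return 0, false // a key did not match any field
		}
	}
	return h.Sum64(), true
}

func main() {
	fields := []fv{{"host", "a"}, {"region", "eu"}}
	k, ok := keyValue(fields, []string{"host", "region"})
	fmt.Println(k, ok) // a deterministic key value, true
}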
type FieldSpec ¶
type FieldSpec struct {
Pattern string
}
A FieldSpec specifies a field contained in an incoming observation.
type Max ¶
type Max struct{}
Max is a precise maximum of continuous observations over all time.
func (Max) ContReader ¶
func (Max) ContReader() ContReaderFunc
func (Max) ContWriter ¶
func (Max) ContWriter() ContWriterFunc
type Mean ¶
type Mean struct{}
Mean is a precise mean of continuous observations over all time.
func (Mean) ContReader ¶
func (Mean) ContReader() ContReaderFunc
func (Mean) ContWriter ¶
func (Mean) ContWriter() ContWriterFunc
type MeasureSpec ¶
A MeasureSpec specifies a measure field in a schema.
type Min ¶
type Min struct{}
Min is a precise minimum of continuous observations over all time.
func (Min) ContReader ¶
func (Min) ContReader() ContReaderFunc
func (Min) ContWriter ¶
func (Min) ContWriter() ContWriterFunc
type Query ¶
type Query struct {
FieldMeasures []FM // fields and the measures to take from them
Criteria FVList // fields and values in the row. Field names must be unique.
}
Query specifies fields and the measures to take from them, together with criteria fields and their values.
func (*Query) KeyValue ¶
KeyValue builds a key value by locating Query criteria fields corresponding to the key names supplied and hashing their names and values. It returns false as the second argument if the key could not be constructed, e.g. if one of the keys does not match a field in the query criteria. keys must be sorted in ascending order.
type Record ¶
type Record struct { }
A Record is one instance of a schema for a particular combination of keys.
type Result ¶
type Result struct {
FieldMeasureValues []FMV // fields and the measure values
}
Result holds the measure values produced for a query.
type Row ¶
type Row struct {
ReceiveTime time.Time // received time
DataTime time.Time // data time
Data FVList // fields and values in the row. Field names must be unique.
}
Row is a row of data to be collated containing fields and their values.
func (*Row) KeyValue ¶
KeyValue builds a key value by locating Row fields corresponding to the key names supplied and hashing their names and values. It returns false as the second argument if the key could not be constructed, e.g. if one of the keys does not match a field in the row. keys must be sorted in ascending order.
type RowMeasure ¶
type RowMeasure interface {
Measure
RowWriter() RowWriterFunc
}
type RowWriterFunc ¶
type Schema ¶
type Schema struct {
Name string
Filter Filter
Keys []KeySpec
Measures []MeasureSpec // Field-level measures
RecordMeasures []RowMeasure // Record-level measures
}
A Schema defines the structure of a collation.
type Sum ¶
type Sum struct{}
Sum is a precise sum of continuous observations over all time.
func (Sum) ContReader ¶
func (Sum) ContReader() ContReaderFunc
func (Sum) ContWriter ¶
func (Sum) ContWriter() ContWriterFunc
type Variance ¶
type Variance struct{}
Variance is a precise variance of continuous observations over all time.
func (Variance) ContReader ¶
func (Variance) ContReader() ContReaderFunc
func (Variance) ContWriter ¶
func (Variance) ContWriter() ContWriterFunc
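A precise variance (and mean) over all observations can be maintained in constant-size state, for example with Welford's online algorithm. Whether this package uses that method is not stated; the standalone sketch below only illustrates that such a measure fits the fixed-size, in-place update model used by the collator.

package main

import "fmt"

// welford keeps a running count, mean and sum of squared deviations, one
// common way to maintain a precise variance (and mean) with constant state.
type welford struct {
	n    float64
	mean float64
	m2   float64
}

func (w *welford) add(x float64) {
	w.n++
	d := x - w.mean
	w.mean += d / w.n
	w.m2 += d * (x - w.mean)
}

func (w *welford) variance() float64 {
	if w.n < 2 {
		return 0
	}
	return w.m2 / w.n // population variance
}

func main() {
	var w welford
	for _, x := range []float64{2, 4, 4, 4, 5, 5, 7, 9} {
		w.add(x)
	}
	fmt.Println(w.mean, w.variance()) // 5 4
}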