Documentation ¶
Overview ¶
Package models implements basic objects used throughout the TICK stack.
Index ¶
- Constants
- Variables
- func AppendMakeKey(dst []byte, name []byte, tags Tags) []byte
- func CheckTime(t time.Time) error
- func CompareTags(a, b Tags) int
- func EnableUintSupport()
- func EscapeMeasurement(in []byte) []byte
- func EscapeStringField(in string) string
- func GetPrecisionMultiplier(precision string) int64
- func MakeKey(name []byte, tags Tags) []byte
- func ParseName(buf []byte) []byte
- func SafeCalcTime(timestamp int64, precision string) (time.Time, error)
- func ValidKeyToken(s string) bool
- func ValidKeyTokens(name string, tags Tags) bool
- type ConsistencyLevel
- type FieldIterator
- type FieldType
- type Fields
- type InlineFNV64a
- type Point
- func MustNewPoint(name string, tags Tags, fields Fields, time time.Time) Point
- func NewPoint(name string, tags Tags, fields Fields, t time.Time) (Point, error)
- func NewPointFromBytes(b []byte) (Point, error)
- func ParsePoints(buf []byte) ([]Point, error)
- func ParsePointsString(buf string) ([]Point, error)
- func ParsePointsWithPrecision(buf []byte, defaultTime time.Time, precision string) ([]Point, error)
- type Points
- type Row
- type Rows
- type Statistic
- type StatisticTags
- type Tag
- type TagKeysSet
- func (set *TagKeysSet) Clear()
- func (set *TagKeysSet) IsSupersetBytes(other [][]byte) bool
- func (set *TagKeysSet) IsSupersetKeys(other Tags) bool
- func (set *TagKeysSet) Keys() []string
- func (set *TagKeysSet) KeysBytes() [][]byte
- func (set *TagKeysSet) String() string
- func (set *TagKeysSet) UnionBytes(other [][]byte)
- func (set *TagKeysSet) UnionKeys(other Tags)
- type Tags
- func CopyTags(a Tags) Tags
- func DeepCopyTags(a Tags) Tags
- func NewTags(m map[string]string) Tags
- func NewTagsKeyValues(a Tags, kv ...[]byte) (Tags, error)
- func ParseKey(buf []byte) (string, Tags)
- func ParseKeyBytes(buf []byte) ([]byte, Tags)
- func ParseKeyBytesWithTags(buf []byte, tags Tags) ([]byte, Tags)
- func ParseTags(buf []byte) Tags
- func (a Tags) AppendHashKey(dst []byte) []byte
- func (a Tags) Clone() Tags
- func (a *Tags) Delete(key []byte)
- func (a Tags) Equal(other Tags) bool
- func (a Tags) Get(key []byte) []byte
- func (a Tags) GetString(key string) string
- func (a Tags) HashKey() []byte
- func (a Tags) Keys() []string
- func (a Tags) Len() int
- func (a Tags) Less(i, j int) bool
- func (a Tags) Map() map[string]string
- func (a Tags) Merge(other map[string]string) Tags
- func (a *Tags) Set(key, value []byte)
- func (a *Tags) SetString(key, value string)
- func (a Tags) Size() int
- func (a Tags) String() string
- func (a Tags) Swap(i, j int)
- func (a Tags) Values() []string
Constants ¶
const (
    FieldKeyTagKey    = "\xff"
    MeasurementTagKey = "\x00"
)
Values used to store the field key and measurement name as special internal tags.
const (
    // MinNanoTime is the minimum time that can be represented.
    //
    // 1677-09-21 00:12:43.145224194 +0000 UTC
    //
    // The two lowest minimum integers are used as sentinel values. The
    // minimum value needs to be used as a value lower than any other value for
    // comparisons and another separate value is needed to act as a sentinel
    // default value that is unusable by the user, but usable internally.
    // Because these two values need to be used for a special purpose, we do
    // not allow users to write points at these two times.
    MinNanoTime = int64(math.MinInt64) + 2

    // MaxNanoTime is the maximum time that can be represented.
    //
    // 2262-04-11 23:47:16.854775806 +0000 UTC
    //
    // The highest time represented by a nanosecond needs to be used for an
    // exclusive range in the shard group, so the maximum time needs to be one
    // less than the possible maximum number of nanoseconds representable by an
    // int64 so that we don't lose a point at that one time.
    MaxNanoTime = int64(math.MaxInt64) - 1
)
const (
    // MaxKeyLength is the largest allowed size of the combined measurement and tag keys.
    MaxKeyLength = 65535
)
Variables ¶
var (
    FieldKeyTagKeyBytes    = []byte(FieldKeyTagKey)
    MeasurementTagKeyBytes = []byte(MeasurementTagKey)
)
Predefined byte representations of special tag keys.
var (
    // ErrPointMustHaveAField is returned when operating on a point that does not have any fields.
    ErrPointMustHaveAField = errors.New("point without fields is unsupported")

    // ErrInvalidNumber is returned when a number is expected but not provided.
    ErrInvalidNumber = errors.New("invalid number")

    // ErrInvalidPoint is returned when a point cannot be parsed correctly.
    ErrInvalidPoint = errors.New("point is invalid")

    // ErrInvalidKevValuePairs is returned when the number of key, value pairs
    // is odd, indicating a missing value.
    ErrInvalidKevValuePairs = errors.New("key/value pairs is an odd length")
)
var (
    // ErrInvalidConsistencyLevel is returned when parsing the string version
    // of a consistency level.
    ErrInvalidConsistencyLevel = errors.New("invalid consistency level")
)
var (
    // ErrTimeOutOfRange is returned when a time is outside the representable range of int64 nanoseconds since the epoch.
    ErrTimeOutOfRange = fmt.Errorf("time outside range %d - %d", MinNanoTime, MaxNanoTime)
)
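As a minimal sketch of how these bounds surface in practice, the example below validates timestamps with CheckTime (listed in the index above). It assumes the v1 import path github.com/influxdata/influxdb/models and that an out-of-range time is reported via ErrTimeOutOfRange; the expected outputs in the comments are likewise assumptions.

package main

import (
    "fmt"
    "time"

    "github.com/influxdata/influxdb/models" // assumed v1 import path
)

func main() {
    // Timestamps inside [MinNanoTime, MaxNanoTime] are accepted.
    if err := models.CheckTime(time.Now()); err != nil {
        fmt.Println("rejected:", err)
        return
    }
    fmt.Println("timestamp is within the writable range")

    // One nanosecond below MinNanoTime falls outside the range and should be
    // rejected (expected: the "time outside range ..." error).
    tooEarly := time.Unix(0, models.MinNanoTime).Add(-time.Nanosecond)
    fmt.Println(models.CheckTime(tooEarly))
}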
Functions ¶
func AppendMakeKey ¶ added in v1.5.1
AppendMakeKey appends the key derived from name and tags to dst and returns the extended buffer.
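A short sketch of building a series key and parsing it back, using the same assumed import path as above; the measurement and tag values are illustrative.

package main

import (
    "fmt"

    "github.com/influxdata/influxdb/models"
)

func main() {
    tags := models.NewTags(map[string]string{"host": "server01", "region": "us-west"})

    // MakeKey allocates a new key; AppendMakeKey reuses an existing buffer.
    key := models.MakeKey([]byte("cpu"), tags)
    buf := make([]byte, 0, 64)
    buf = models.AppendMakeKey(buf, []byte("cpu"), tags)
    fmt.Println(string(key) == string(buf)) // expected: true

    // ParseKey splits the key back into the measurement name and tags.
    name, parsed := models.ParseKey(key)
    fmt.Println(name, parsed.GetString("host"))
}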
func CompareTags ¶ added in v1.3.0
CompareTags returns -1 if a < b, 1 if a > b, and 0 if a == b.
func EnableUintSupport ¶ added in v1.4.0
func EnableUintSupport()
EnableUintSupport manually enables uint support for the point parser. This function will be removed in the future and only exists for unit tests during the transition.
func EscapeMeasurement ¶ added in v1.4.0
func EscapeStringField ¶ added in v1.0.0
EscapeStringField returns a copy of in with any double quotes or backslashes escaped.
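A tiny illustrative sketch (same assumed import path; the input value is made up):

package main

import (
    "fmt"

    "github.com/influxdata/influxdb/models"
)

func main() {
    // Double quotes and backslashes are escaped so the value can be embedded
    // in a quoted line-protocol string field.
    escaped := models.EscapeStringField(`say "hi" to C:\temp`)
    fmt.Println(escaped)
}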
func GetPrecisionMultiplier ¶ added in v0.10.0
GetPrecisionMultiplier returns the multiplier, in nanoseconds, for the specified precision.
func SafeCalcTime ¶ added in v0.10.0
SafeCalcTime safely calculates the time given a timestamp and precision. It returns an error if the resulting time is outside the supported range.
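The two precision helpers are typically used together; a hedged sketch follows, assuming the usual line-protocol precision strings (such as "ms" and "s") and treating the expected outputs in the comments as assumptions.

package main

import (
    "fmt"

    "github.com/influxdata/influxdb/models"
)

func main() {
    // Convert a millisecond-precision timestamp into a time.Time, with range
    // checking against MinNanoTime/MaxNanoTime.
    t, err := models.SafeCalcTime(1609459200000, "ms")
    if err != nil {
        fmt.Println("out of range:", err)
        return
    }
    fmt.Println(t) // expected: 2021-01-01 00:00:00 +0000 UTC

    // GetPrecisionMultiplier reports how many nanoseconds one unit of the
    // given precision represents.
    fmt.Println(models.GetPrecisionMultiplier("ms")) // expected: 1000000
}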
func ValidKeyToken ¶ added in v1.5.4
ValidKeyToken returns true if the token used for measurement, tag key, or tag value is a valid unicode string and only contains printable, non-replacement characters.
func ValidKeyTokens ¶ added in v1.5.4
ValidKeyTokens returns true if the measurement name and all tags are valid.
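A brief sketch of pre-validating a measurement name and tag set before constructing a point; names are illustrative and the expected results are assumptions.

package main

import (
    "fmt"

    "github.com/influxdata/influxdb/models"
)

func main() {
    tags := models.NewTags(map[string]string{"host": "server01"})

    // Printable, valid-UTF-8 tokens are accepted.
    fmt.Println(models.ValidKeyTokens("cpu", tags)) // expected: true

    // A measurement name containing an invalid UTF-8 byte should be rejected.
    fmt.Println(models.ValidKeyTokens("cpu\xff", tags)) // expected: false
}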
Types ¶
type ConsistencyLevel ¶ added in v0.12.0
type ConsistencyLevel int
ConsistencyLevel represents the replication criteria that must be met before a write is returned as successful.
The consistency level is accepted by open-source InfluxDB but is only applicable to clustered deployments.
const (
    // ConsistencyLevelAny allows for hinted handoff, potentially no write happened yet.
    ConsistencyLevelAny ConsistencyLevel = iota

    // ConsistencyLevelOne requires at least one data node acknowledged a write.
    ConsistencyLevelOne

    // ConsistencyLevelQuorum requires a quorum of data nodes to acknowledge a write.
    ConsistencyLevelQuorum

    // ConsistencyLevelAll requires all data nodes to acknowledge a write.
    ConsistencyLevelAll
)
func ParseConsistencyLevel ¶ added in v0.12.0
func ParseConsistencyLevel(level string) (ConsistencyLevel, error)
ParseConsistencyLevel converts a consistency level string to the corresponding ConsistencyLevel const.
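A short sketch of mapping a user-supplied string to a ConsistencyLevel; the accepted spellings ("any", "one", "quorum", "all") are assumed from the constant names.

package main

import (
    "fmt"

    "github.com/influxdata/influxdb/models"
)

func main() {
    level, err := models.ParseConsistencyLevel("quorum")
    if err != nil {
        fmt.Println("unknown level:", err)
        return
    }
    fmt.Println(level == models.ConsistencyLevelQuorum) // expected: true

    // An unrecognized string should yield ErrInvalidConsistencyLevel.
    if _, err := models.ParseConsistencyLevel("most"); err != nil {
        fmt.Println(err)
    }
}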
type FieldIterator ¶ added in v1.1.0
type FieldIterator interface {
    // Next indicates whether there are any fields remaining.
    Next() bool

    // FieldKey returns the key of the current field.
    FieldKey() []byte

    // Type returns the FieldType of the current field.
    Type() FieldType

    // StringValue returns the string value of the current field.
    StringValue() string

    // IntegerValue returns the integer value of the current field.
    IntegerValue() (int64, error)

    // UnsignedValue returns the unsigned value of the current field.
    UnsignedValue() (uint64, error)

    // BooleanValue returns the boolean value of the current field.
    BooleanValue() (bool, error)

    // FloatValue returns the float value of the current field.
    FloatValue() (float64, error)

    // Reset resets the iterator to its initial state.
    Reset()
}
FieldIterator provides a low-allocation interface to iterate through a point's fields.
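A hedged sketch of walking a point's fields without materializing the Fields map; the line-protocol input and field names are made up.

package main

import (
    "fmt"

    "github.com/influxdata/influxdb/models"
)

func main() {
    pts, err := models.ParsePointsString(`cpu,host=server01 usage=0.64,count=12i 1609459200000000000`)
    if err != nil {
        panic(err)
    }

    it := pts[0].FieldIterator()
    for it.Next() {
        switch it.Type() {
        case models.Float:
            v, _ := it.FloatValue()
            fmt.Printf("%s (float) = %v\n", it.FieldKey(), v)
        case models.Integer:
            v, _ := it.IntegerValue()
            fmt.Printf("%s (integer) = %v\n", it.FieldKey(), v)
        default:
            fmt.Printf("%s has type %v\n", it.FieldKey(), it.Type())
        }
    }
}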
type FieldType ¶ added in v1.1.0
type FieldType int
FieldType represents the type of a field.
const (
    // Integer indicates the field's type is integer.
    Integer FieldType = iota

    // Float indicates the field's type is float.
    Float

    // Boolean indicates the field's type is boolean.
    Boolean

    // String indicates the field's type is string.
    String

    // Empty is used to indicate that there is no field.
    Empty

    // Unsigned indicates the field's type is an unsigned integer.
    Unsigned
)
type Fields ¶
type Fields map[string]interface{}
Fields represents a mapping between a Point's field names and their values.
func (Fields) MarshalBinary ¶
MarshalBinary encodes all the fields to their proper types and returns the binary representation. NOTE: uint64 is specifically not supported due to potential overflow when it is later decoded back to an int64. NOTE 2: uint is accepted, and may be 64 bits.
type InlineFNV64a ¶ added in v1.1.0
type InlineFNV64a uint64
InlineFNV64a is an alloc-free port of the standard library's fnv64a. See https://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function.
func NewInlineFNV64a ¶ added in v1.1.0
func NewInlineFNV64a() InlineFNV64a
NewInlineFNV64a returns a new instance of InlineFNV64a.
func (*InlineFNV64a) Sum64 ¶ added in v1.1.0
func (s *InlineFNV64a) Sum64() uint64
Sum64 returns the current hash as a uint64.
type Point ¶
type Point interface {
    // Name returns the measurement name for the point.
    Name() []byte

    // SetName updates the measurement name for the point.
    SetName(string)

    // Tags returns the tag set for the point.
    Tags() Tags

    // ForEachTag iterates over each tag, invoking fn. If fn returns false, iteration stops.
    ForEachTag(fn func(k, v []byte) bool)

    // AddTag adds or replaces a tag value for a point.
    AddTag(key, value string)

    // SetTags replaces the tags for the point.
    SetTags(tags Tags)

    // HasTag returns true if the tag exists for the point.
    HasTag(tag []byte) bool

    // Fields returns the fields for the point.
    Fields() (Fields, error)

    // Time returns the timestamp for the point.
    Time() time.Time

    // SetTime updates the timestamp for the point.
    SetTime(t time.Time)

    // UnixNano returns the timestamp of the point as nanoseconds since Unix epoch.
    UnixNano() int64

    // HashID returns a non-cryptographic checksum of the point's key.
    HashID() uint64

    // Key returns the key (measurement joined with tags) of the point.
    Key() []byte

    // String returns a string representation of the point. If there is a
    // timestamp associated with the point then it will be specified with the default
    // precision of nanoseconds.
    String() string

    // MarshalBinary returns a binary representation of the point.
    MarshalBinary() ([]byte, error)

    // PrecisionString returns a string representation of the point. If there
    // is a timestamp associated with the point then it will be specified in the
    // given unit.
    PrecisionString(precision string) string

    // RoundedString returns a string representation of the point. If there
    // is a timestamp associated with the point, then it will be rounded to the
    // given duration.
    RoundedString(d time.Duration) string

    // Split will attempt to return multiple points with the same timestamp whose
    // string representations are no longer than size. Points with a single field or
    // a point without a timestamp may exceed the requested size.
    Split(size int) []Point

    // Round will round the timestamp of the point to the given duration.
    Round(d time.Duration)

    // StringSize returns the length of the string that would be returned by String().
    StringSize() int

    // AppendString appends the result of String() to the provided buffer and returns
    // the result, potentially reducing string allocations.
    AppendString(buf []byte) []byte

    // FieldIterator returns a FieldIterator that can be used to traverse the
    // fields of a point without constructing the in-memory map.
    FieldIterator() FieldIterator
}
Point defines the values that will be written to the database.
func MustNewPoint ¶
MustNewPoint returns a new point with the given measurement name, tags, fields and timestamp. If an unsupported field value (NaN) is passed, this function panics.
func NewPoint ¶
NewPoint returns a new point with the given measurement name, tags, fields and timestamp. If an unsupported field value (NaN, or +/-Inf) or out of range time is passed, this function returns an error.
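A minimal sketch of constructing and serializing a point; the measurement, tags, and field values are made up, and the import path is assumed as in the earlier sketches.

package main

import (
    "fmt"
    "time"

    "github.com/influxdata/influxdb/models"
)

func main() {
    pt, err := models.NewPoint(
        "cpu",
        models.NewTags(map[string]string{"host": "server01", "region": "us-west"}),
        models.Fields{"usage_idle": 92.5, "uptime": int64(3600)},
        time.Unix(1609459200, 0),
    )
    if err != nil {
        panic(err) // NaN/Inf fields or an out-of-range time would land here
    }

    // String renders the point in line protocol with nanosecond precision.
    fmt.Println(pt.String())
    fmt.Println(string(pt.Key())) // measurement joined with sorted tags
}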
func NewPointFromBytes ¶ added in v0.9.6
NewPointFromBytes returns a new Point from a marshalled Point.
func ParsePoints ¶
ParsePoints returns a slice of Points from a text representation of points, with each point separated by a newline. If any points fail to parse, a non-nil error will be returned in addition to the points that parsed successfully.
func ParsePointsString ¶
ParsePointsString is identical to ParsePoints but accepts a string.
func ParsePointsWithPrecision ¶
ParsePointsWithPrecision is similar to ParsePoints, but allows the caller to provide a precision for time.
NOTE: to minimize heap allocations, the returned Points will refer to subslices of buf. This can have the unintended effect of preventing buf from being garbage collected.
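A brief sketch showing the precision and default-time behaviour, and cloning tags when buf must be released (per the note above); the input is illustrative.

package main

import (
    "fmt"
    "time"

    "github.com/influxdata/influxdb/models"
)

func main() {
    // The second point omits its timestamp, so the default time is applied.
    buf := []byte("cpu,host=server01 usage=0.64 1609459200\nmem,host=server01 used=42i")

    pts, err := models.ParsePointsWithPrecision(buf, time.Now().UTC(), "s")
    if err != nil {
        panic(err)
    }

    for _, p := range pts {
        // Clone the tags if they must outlive buf, since they reference subslices of it.
        tags := p.Tags().Clone()
        fmt.Println(string(p.Name()), tags.GetString("host"), p.Time().UTC())
    }
}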
type Row ¶
type Row struct {
    Name    string            `json:"name,omitempty"`
    Tags    map[string]string `json:"tags,omitempty"`
    Columns []string          `json:"columns,omitempty"`
    Values  [][]interface{}   `json:"values,omitempty"`
    Partial bool              `json:"partial,omitempty"`
}
Row represents a single row returned from the execution of a statement.
func (*Row) SameSeries ¶
SameSeries returns true if r contains values for the same series as o.
type Statistic ¶ added in v1.0.0
type Statistic struct {
    Name   string                 `json:"name"`
    Tags   map[string]string      `json:"tags"`
    Values map[string]interface{} `json:"values"`
}
Statistic is the representation of a statistic used by the monitoring service.
func NewStatistic ¶ added in v1.1.0
NewStatistic returns an initialized Statistic.
type StatisticTags ¶ added in v1.1.0
StatisticTags is a map that can be merged with others without causing mutations to either map.
func (StatisticTags) Merge ¶ added in v1.1.0
func (t StatisticTags) Merge(tags map[string]string) map[string]string
Merge creates a new map containing the merged contents of tags and t. If both tags and the receiver map contain the same key, the value in tags is used in the resulting map.
Merge always returns a usable map.
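A small sketch of the merge semantics described above, assuming StatisticTags is declared as a map of string to string as its description suggests; keys and values are illustrative.

package main

import (
    "fmt"

    "github.com/influxdata/influxdb/models"
)

func main() {
    defaults := models.StatisticTags{"database": "db0", "engine": "tsm1"}

    // Values in the argument win on conflict; neither input map is mutated.
    merged := defaults.Merge(map[string]string{"database": "db1", "path": "/var/lib"})
    fmt.Println(merged["database"], merged["engine"], merged["path"]) // expected: db1 tsm1 /var/lib
    fmt.Println(defaults["database"])                                 // expected: db0 (unchanged)
}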
type Tag ¶ added in v1.1.0
Tag represents a single key/value tag pair.
func (Tag) Clone ¶ added in v1.1.2
Clone returns a shallow copy of Tag.
Tags associated with a Point created by ParsePointsWithPrecision will hold references to the byte slice that was parsed. Use Clone to create a Tag with new byte slices that do not refer to the argument to ParsePointsWithPrecision.
type TagKeysSet ¶ added in v1.7.8
type TagKeysSet struct {
// contains filtered or unexported fields
}
TagKeysSet provides set operations for combining Tags.
func (*TagKeysSet) Clear ¶ added in v1.7.8
func (set *TagKeysSet) Clear()
Clear removes all the elements of TagKeysSet and ensures all internal buffers are reset.
func (*TagKeysSet) IsSupersetBytes ¶ added in v1.7.8
func (set *TagKeysSet) IsSupersetBytes(other [][]byte) bool
IsSupersetBytes returns true if the TagKeysSet is a superset of all the keys in other. Other must be lexicographically sorted or the results are undefined.
func (*TagKeysSet) IsSupersetKeys ¶ added in v1.7.8
func (set *TagKeysSet) IsSupersetKeys(other Tags) bool
IsSupersetKeys returns true if the TagKeysSet is a superset of all the keys contained in other.
func (*TagKeysSet) Keys ¶ added in v1.7.8
func (set *TagKeysSet) Keys() []string
Keys returns a copy of the merged keys in lexicographical order.
func (*TagKeysSet) KeysBytes ¶ added in v1.7.8
func (set *TagKeysSet) KeysBytes() [][]byte
KeysBytes returns the merged keys in lexicographical order. The slice is valid until the next call to UnionKeys, UnionBytes or Clear.
func (*TagKeysSet) String ¶ added in v1.7.8
func (set *TagKeysSet) String() string
func (*TagKeysSet) UnionBytes ¶ added in v1.7.8
func (set *TagKeysSet) UnionBytes(other [][]byte)
UnionBytes updates the set so that it is the union of itself and all the keys contained in other. Other must be lexicographically sorted or the results are undefined.
func (*TagKeysSet) UnionKeys ¶ added in v1.7.8
func (set *TagKeysSet) UnionKeys(other Tags)
UnionKeys updates the set so that it is the union of itself and all the keys contained in other.
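A hedged sketch of accumulating tag keys from several series and testing membership; it assumes the zero value of TagKeysSet is ready to use, since no constructor is listed, and the series and key names are made up.

package main

import (
    "fmt"

    "github.com/influxdata/influxdb/models"
)

func main() {
    var set models.TagKeysSet

    // Union the keys of two tag sets into the accumulator.
    set.UnionKeys(models.NewTags(map[string]string{"host": "a", "region": "b"}))
    set.UnionKeys(models.NewTags(map[string]string{"host": "a", "dc": "c"}))
    fmt.Println(set.Keys()) // expected: [dc host region]

    // The set is a superset of {host}, but not of {rack}.
    fmt.Println(set.IsSupersetKeys(models.NewTags(map[string]string{"host": "a"})))
    fmt.Println(set.IsSupersetKeys(models.NewTags(map[string]string{"rack": "r1"})))

    // Clear resets the set for reuse.
    set.Clear()
}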
type Tags ¶
type Tags []Tag
Tags represents a sorted list of tags.
func DeepCopyTags ¶ added in v1.3.0
DeepCopyTags returns a deep copy of tags.
func NewTagsKeyValues ¶ added in v1.9.0
NewTagsKeyValues returns a new Tags from a list of key, value pairs, ensuring the returned result is correctly sorted. Duplicate keys are removed; however, which duplicate remains is undefined. NewTagsKeyValues will return ErrInvalidKevValuePairs if len(kv) is not even. If the input is guaranteed to be even, the error can be safely ignored. If a has enough capacity, it will be reused.
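A minimal sketch of building sorted Tags from raw key/value byte pairs; the keys and values are illustrative.

package main

import (
    "fmt"

    "github.com/influxdata/influxdb/models"
)

func main() {
    // Pairs may be supplied in any order; the result is sorted by key.
    tags, err := models.NewTagsKeyValues(nil,
        []byte("region"), []byte("us-west"),
        []byte("host"), []byte("server01"),
    )
    if err != nil {
        panic(err) // only possible when the number of arguments is odd
    }
    fmt.Println(tags.String())

    // An odd number of arguments should yield ErrInvalidKevValuePairs.
    _, err = models.NewTagsKeyValues(nil, []byte("host"))
    fmt.Println(err)
}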
func ParseKey ¶ added in v0.9.6
ParseKey returns the measurement name and tags from a point.
NOTE: to minimize heap allocations, the returned Tags will refer to subslices of buf. This can have the unintended effect of preventing buf from being garbage collected.
func ParseKeyBytes ¶ added in v1.5.0
func ParseKeyBytesWithTags ¶ added in v1.6.1
func (Tags) AppendHashKey ¶ added in v1.5.1
AppendHashKey appends the result of hashing all of a tag's keys and values to dst and returns the extended buffer.
func (Tags) Clone ¶ added in v1.1.2
Clone returns a copy of the slice where the elements are the result of calling `Clone` on the original elements.
Tags associated with a Point created by ParsePointsWithPrecision will hold references to the byte slice that was parsed. Use Clone to create Tags with new byte slices that do not refer to the argument to ParsePointsWithPrecision.
func (Tags) Merge ¶ added in v1.0.0
Merge merges the tags, combining the two. If both define a tag with the same key, the value from other overwrites the old value. The merged result is returned as new Tags.
func (Tags) Size ¶ added in v1.3.0
Size returns the number of bytes needed to store all tags. Note that this is the number of bytes needed to store all keys and values; it does not account for data structures or delimiters, for example.
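To round out the Tags API, a hedged sketch covering lookup, mutation, and merging; all names and values are illustrative, and the conflict behaviour follows the Merge description above.

package main

import (
    "fmt"

    "github.com/influxdata/influxdb/models"
)

func main() {
    tags := models.NewTags(map[string]string{"host": "server01"})

    // SetString keeps the slice sorted; Get/GetString look up by key.
    tags.SetString("region", "us-west")
    fmt.Println(tags.GetString("region"), string(tags.Get([]byte("host"))))

    // Merge combines the tags with a plain map; values from the map win on conflict.
    merged := tags.Merge(map[string]string{"host": "server02", "dc": "dc1"})
    fmt.Println(merged.Map())

    // Size reports the bytes needed for all keys and values only.
    fmt.Println(tags.Size())
}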