package chunk

v3.2.0 Latest
Published: Sep 18, 2024 License: AGPL-3.0 Imports: 24 Imported by: 0

Documentation

Index

Constants

View Source
const ChunkLen = 1024

ChunkLen is the length of a chunk in bytes.

Variables

View Source
var (
	ErrInvalidChecksum = errs.New("invalid chunk checksum")
	ErrWrongMetadata   = errs.New("wrong chunk metadata")
	ErrMetadataLength  = errs.New("chunk metadata wrong length")
	ErrDataLength      = errs.New("chunk data wrong length")
	ErrSliceOutOfRange = errs.New("chunk can't be sliced out of its data range")
	ErrChunkDecode     = errs.New("error decoding freshly created chunk")
)
View Source
var (
	ErrSliceNoDataInRange = errors.New("chunk has no data for given range to slice")
	ErrSliceChunkOverflow = errors.New("slicing should not overflow a chunk")
)

Functions

func MustRegisterEncoding

func MustRegisterEncoding(enc Encoding, name string, f func() Data)

MustRegisterEncoding adds a new chunk encoding. There is no locking, so this must be called from init().

Types

type Chunk

type Chunk struct {
	logproto.ChunkRef

	Metric labels.Labels `json:"metric"`

	// We never use Delta encoding (the zero value), so if this entry is
	// missing, we default to DoubleDelta.
	Encoding Encoding `json:"encoding"`
	Data     Data     `json:"-"`
	// contains filtered or unexported fields
}

Chunk contains encoded timeseries data

func NewChunk

func NewChunk(userID string, fp model.Fingerprint, metric labels.Labels, c Data, from, through model.Time) Chunk

NewChunk creates a new chunk

func ParseExternalKey

func ParseExternalKey(userID, externalKey string) (Chunk, error)

ParseExternalKey is used to construct a partially-populated chunk from the key in DynamoDB. This chunk can then be used to calculate the key needed to fetch the Chunk data from Memcache/S3, and then fully populate the chunk with decode().

Pre-checksums, the keys written to DynamoDB looked like `<fingerprint>:<start time>:<end time>` (aka the ID), and the key for memcache and S3 was `<user id>/<fingerprint>:<start time>:<end time>`. Fingerprints and times were written in base-10.

Post-checksums, external keys become the same across DynamoDB, Memcache and S3. Numbers become hex-encoded. Keys look like: `<user id>/<fingerprint>:<start time>:<end time>:<checksum>`.

In v12+, the fingerprint is a path prefix to support better read and write request parallelization: `<user>/<fprint>/<start>:<end>:<checksum>`

func (*Chunk) Decode

func (c *Chunk) Decode(decodeContext *DecodeContext, input []byte) error

Decode the chunk from the given buffer, and confirm the chunk is the one we expected.

func (*Chunk) Encode

func (c *Chunk) Encode() error

Encode writes the chunk into a buffer, and calculates the checksum.

func (*Chunk) EncodeTo

func (c *Chunk) EncodeTo(buf *bytes.Buffer, log log.Logger) error

EncodeTo is like Encode but you can provide your own buffer to use.

func (*Chunk) Encoded

func (c *Chunk) Encoded() ([]byte, error)

Encoded returns the buffer created by Encode().

type Data

type Data interface {
	// Add adds a SamplePair to the chunks, performs any necessary
	// re-encoding, and creates any necessary overflow chunk.
	// The returned Chunk is the overflow chunk if it was created.
	// The returned Chunk is nil if the sample got appended to the same chunk.
	Add(sample model.SamplePair) (Data, error)
	Marshal(io.Writer) error
	UnmarshalFromBuf([]byte) error
	Encoding() Encoding
	// Rebound returns a smaller chunk that includes all samples between start and end (inclusive).
	// We do not want to change existing Slice implementations because
	// it is built specifically for query optimization and is a noop for some of the encodings.
	Rebound(start, end model.Time, filter filter.Func) (Data, error)
	// Size returns the approximate length of the chunk in bytes.
	Size() int
	// UncompressedSize returns the length of uncompressed bytes.
	UncompressedSize() int
	// Entries returns the number of entries in a chunk
	Entries() int
	Utilization() float64
}

Data is the interface for all chunks. Chunks are generally not goroutine-safe.

func NewForEncoding

func NewForEncoding(encoding Encoding) (Data, error)

NewForEncoding returns a new Data chunk for the given encoding.

type DecodeContext

type DecodeContext struct {
	// contains filtered or unexported fields
}

DecodeContext holds data that can be re-used between decodes of different chunks

func NewDecodeContext

func NewDecodeContext() *DecodeContext

NewDecodeContext creates a new, blank DecodeContext.

type Encoding

type Encoding byte

Encoding defines which encoding we are using: delta, doubledelta, or varbit.

const (
	Dummy Encoding = iota
)

func (*Encoding) Set

func (e *Encoding) Set(s string) error

Set implements flag.Value.

func (Encoding) String

func (e Encoding) String() string

String implements flag.Value.

type Filterer

type Filterer interface {
	ShouldFilter(metric labels.Labels) bool
	RequiredLabelNames() []string
}

Filterer filters chunks based on the metric.

type Predicate

type Predicate struct {
	Matchers []*labels.Matcher
	// contains filtered or unexported fields
}

TODO(owen-d): rename. This is not a predicate and is confusing.

func NewPredicate

func NewPredicate(m []*labels.Matcher, p *plan.QueryPlan) Predicate

func (Predicate) Plan

func (p Predicate) Plan() plan.QueryPlan

type RequestChunkFilterer

type RequestChunkFilterer interface {
	ForRequest(ctx context.Context) Filterer
}

RequestChunkFilterer creates ChunkFilterer for a given request context.
