Documentation ¶
Constants ¶
const (
    ErrInvalidChecksum = errs.Error("invalid chunk checksum")
    ErrWrongMetadata   = errs.Error("wrong chunk metadata")
    ErrMetadataLength  = errs.Error("chunk metadata wrong length")
    ErrDataLength      = errs.Error("chunk data wrong length")
    ErrSliceOutOfRange = errs.Error("chunk can't be sliced out of its data range")
)
const (
    // ChunkLen is the length of a chunk in bytes.
    ChunkLen = 1024

    ErrSliceNoDataInRange = errs.Error("chunk has no data for given range to slice")
    ErrSliceChunkOverflow = errs.Error("slicing should not overflow a chunk")
)
Variables ¶
var (
    // DefaultEncoding exported for use in unit tests elsewhere
    DefaultEncoding = Bigchunk
)
Functions ¶
func MustRegisterEncoding ¶
MustRegisterEncoding adds a new chunk encoding. There is no locking, so this must be called in init().
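For illustration, a minimal sketch of registering a custom encoding from init(). The signature of MustRegisterEncoding is not reproduced above, so the three-argument form (encoding value, name, constructor returning Data), the import path, and the MyEncoding/newMyData names are all assumptions.

package myencoding

import (
    "github.com/grafana/loki/pkg/storage/chunk" // import path assumed
)

// MyEncoding is a hypothetical, otherwise unused encoding byte.
const MyEncoding = chunk.Encoding(128)

func init() {
    // MustRegisterEncoding does no locking, so registration must happen in init().
    chunk.MustRegisterEncoding(MyEncoding, "my-encoding", func() chunk.Data {
        return newMyData() // hypothetical constructor returning a chunk.Data implementation
    })
}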
Types ¶
type Chunk ¶
type Chunk struct {
    logproto.ChunkRef

    Metric labels.Labels `json:"metric"`

    // We never use Delta encoding (the zero value), so if this entry is
    // missing, we default to DoubleDelta.
    Encoding Encoding `json:"encoding"`
    Data     Data     `json:"-"`
    // contains filtered or unexported fields
}
Chunk contains encoded timeseries data.
func NewChunk ¶
func NewChunk(userID string, fp model.Fingerprint, metric labels.Labels, c Data, from, through model.Time) Chunk
NewChunk creates a new chunk.
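A short usage sketch based on the signature above; the import paths, label values, and fingerprint derivation are illustrative assumptions.

package main

import (
    "github.com/prometheus/common/model"
    "github.com/prometheus/prometheus/model/labels" // older versions: .../pkg/labels

    "github.com/grafana/loki/pkg/storage/chunk" // import path assumed
)

// buildChunk wraps already-encoded Data in a Chunk for the given tenant.
func buildChunk(userID string, data chunk.Data, from, through model.Time) chunk.Chunk {
    metric := labels.FromStrings("job", "example")
    fp := model.Fingerprint(metric.Hash()) // fingerprint derived from the label set

    return chunk.NewChunk(userID, fp, metric, data, from, through)
}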
func ParseExternalKey ¶
ParseExternalKey is used to construct a partially-populated chunk from the key in DynamoDB. This chunk can then be used to calculate the key needed to fetch the Chunk data from Memcache/S3, and then fully populate the chunk with decode().
Pre-checksums, the keys written to DynamoDB looked like `<fingerprint>:<start time>:<end time>` (aka the ID), and the key for memcache and S3 was `<user id>/<fingerprint>:<start time>:<end time>`. Fingerprints and times were written in base-10.
Post-checksums, external keys become the same across DynamoDB, Memcache and S3. Numbers become hex encoded. Keys look like: `<user id>/<fingerprint>:<start time>:<end time>:<checksum>`.
In v12+, the fingerprint becomes a prefix to support better read and write request parallelization: `<user>/<fprint>/<start>:<end>:<checksum>`.
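A sketch of turning a v12+ external key back into a partially-populated chunk; the two-argument signature of ParseExternalKey, the example key value, and the lookupChunk helper are assumptions for illustration.

// lookupChunk parses an external key into a partially-populated Chunk.
func lookupChunk(userID, key string) (chunk.Chunk, error) {
    // e.g. key = "tenant-1/a1b2c3d4e5f6a7b8/17e0000000:17e0003a98:9f1c2d3e" (v12+ layout)
    c, err := chunk.ParseExternalKey(userID, key)
    if err != nil {
        return chunk.Chunk{}, err
    }
    // c now carries enough information to compute the object-store key and fetch
    // the chunk bytes; the body is filled in later with (*Chunk).Decode.
    return c, nil
}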
func (*Chunk) Decode ¶
func (c *Chunk) Decode(decodeContext *DecodeContext, input []byte) error
Decode the chunk from the given buffer, and confirm the chunk is the one we expected.
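A minimal sketch of decoding a batch of fetched chunks while sharing one DecodeContext (see DecodeContext below); it assumes this package is imported as chunk.

// decodeAll decodes each chunk from its fetched buffer, reusing a single
// DecodeContext across all of them.
func decodeAll(chunks []chunk.Chunk, bufs [][]byte) error {
    decodeCtx := chunk.NewDecodeContext()
    for i := range chunks {
        // Decode fills in chunks[i] from bufs[i] and verifies it is the chunk we expected.
        if err := chunks[i].Decode(decodeCtx, bufs[i]); err != nil {
            return err
        }
    }
    return nil
}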
type Config ¶
type Config struct{}
Config configures the behaviour of chunk encoding.
func (Config) RegisterFlags ¶
RegisterFlags registers configuration settings.
type Data ¶
type Data interface {
    // Add adds a SamplePair to the chunks, performs any necessary
    // re-encoding, and creates any necessary overflow chunk.
    // The returned Chunk is the overflow chunk if it was created.
    // The returned Chunk is nil if the sample got appended to the same chunk.
    Add(sample model.SamplePair) (Data, error)
    Marshal(io.Writer) error
    UnmarshalFromBuf([]byte) error
    Encoding() Encoding
    // Rebound returns a smaller chunk that includes all samples between start and end
    // (inclusive). We do not want to change existing Slice implementations because
    // it is built specifically for query optimization and is a noop for some of the encodings.
    Rebound(start, end model.Time) (Data, error)
    // Size returns the approximate length of the chunk in bytes.
    Size() int
    Utilization() float64
}
Data is the interface for all chunks. Chunks are generally not goroutine-safe.
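To show the Add contract in practice, here is a sketch that appends samples and collects overflow chunks as they are created; it assumes the package is imported as chunk and is not tied to a particular encoding.

// appendSamples adds samples to d, starting a new chunk whenever Add reports
// an overflow, and returns all chunks in order.
func appendSamples(d chunk.Data, samples []model.SamplePair) ([]chunk.Data, error) {
    chunks := []chunk.Data{d}
    for _, s := range samples {
        overflow, err := chunks[len(chunks)-1].Add(s)
        if err != nil {
            return nil, err
        }
        if overflow != nil {
            // The sample did not fit in the current chunk and went into the
            // newly created overflow chunk instead.
            chunks = append(chunks, overflow)
        }
    }
    return chunks, nil
}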
func New ¶
func New() Data
New creates a new chunk according to the encoding set by the DefaultEncoding flag.
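For example, building a chunk with the default encoding and serializing it with Marshal; a sketch assuming the package is imported as chunk, using the standard library bytes.Buffer, with overflow handling omitted for brevity.

// marshalNew encodes samples into a single chunk using DefaultEncoding and
// returns its serialized bytes.
func marshalNew(samples []model.SamplePair) ([]byte, error) {
    d := chunk.New() // encoding chosen by the DefaultEncoding flag (Bigchunk by default)
    for _, s := range samples {
        if _, err := d.Add(s); err != nil { // overflow chunk ignored in this sketch
            return nil, err
        }
    }
    var buf bytes.Buffer
    if err := d.Marshal(&buf); err != nil {
        return nil, err
    }
    return buf.Bytes(), nil
}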
func NewForEncoding ¶
NewForEncoding creates a new chunk for the requested encoding.
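The signature is not shown above, so the (Encoding) (Data, error) form below is an assumption; a sketch of asking for a specific encoding instead of relying on DefaultEncoding.

// newBigchunk requests the Bigchunk encoding explicitly.
// The (Encoding) (Data, error) signature is an assumption.
func newBigchunk() (chunk.Data, error) {
    return chunk.NewForEncoding(chunk.Bigchunk)
}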
type DecodeContext ¶
type DecodeContext struct {
// contains filtered or unexported fields
}
DecodeContext holds data that can be reused between decodes of different chunks.
func NewDecodeContext ¶
func NewDecodeContext() *DecodeContext
NewDecodeContext creates a new, blank DecodeContext.
type Encoding ¶
type Encoding byte
Encoding defines which encoding we are using: delta, doubledelta, or varbit.
type RequestChunkFilterer ¶
RequestChunkFilterer creates a ChunkFilterer for a given request context.