Documentation ¶
Index ¶
Constants ¶
const ChunkLen = 1024
ChunkLen is the length of a chunk in bytes.
Variables ¶
var (
	ErrInvalidChecksum = errs.New("invalid chunk checksum")
	ErrWrongMetadata   = errs.New("wrong chunk metadata")
	ErrMetadataLength  = errs.New("chunk metadata wrong length")
	ErrDataLength      = errs.New("chunk data wrong length")
	ErrSliceOutOfRange = errs.New("chunk can't be sliced out of its data range")
)
var (
	ErrSliceNoDataInRange = errors.New("chunk has no data for given range to slice")
	ErrSliceChunkOverflow = errors.New("slicing should not overflow a chunk")
)
Functions ¶
func MustRegisterEncoding ¶
MustRegisterEncoding adds a new chunk encoding. There is no locking, so this must be called in init().
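For illustration, a minimal sketch of registering a custom encoding from init(). The parameter list assumed here (the Encoding value, a human-readable name, and a factory returning Data) and the names CustomEncoding and newCustomChunk are assumptions, since the signature is not shown above.

const CustomEncoding Encoding = 128 // hypothetical, otherwise-unused encoding byte

func init() {
	// The registry has no locking, so registration happens in init().
	// Assumed parameters: encoding value, name, factory returning Data.
	MustRegisterEncoding(CustomEncoding, "custom", func() Data {
		return newCustomChunk() // hypothetical constructor returning a Data implementation
	})
}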
Types ¶
type Chunk ¶
type Chunk struct {
	logproto.ChunkRef

	Metric labels.Labels `json:"metric"`

	// We never use Delta encoding (the zero value), so if this entry is
	// missing, we default to DoubleDelta.
	Encoding Encoding `json:"encoding"`
	Data     Data     `json:"-"`
	// contains filtered or unexported fields
}
Chunk contains encoded timeseries data
func NewChunk ¶
func NewChunk(userID string, fp model.Fingerprint, metric labels.Labels, c Data, from, through model.Time) Chunk
NewChunk creates a new chunk
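A minimal usage sketch, following the signature above; the label set, user ID, and time range are placeholders, and `data` is any Data implementation.

import (
	"time"

	"github.com/prometheus/common/model"
	"github.com/prometheus/prometheus/model/labels"
)

func buildChunk(data Data) Chunk {
	// Placeholder metric and time range; the fingerprint is derived from the labels.
	metric := labels.FromStrings("cluster", "dev", "job", "app")
	fp := model.Fingerprint(metric.Hash())
	from := model.TimeFromUnix(1690000000)
	through := from.Add(time.Hour)
	return NewChunk("user-1", fp, metric, data, from, through)
}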
func ParseExternalKey ¶
ParseExternalKey is used to construct a partially-populated chunk from the key in DynamoDB. This chunk can then be used to calculate the key needed to fetch the Chunk data from Memcache/S3, and then fully populate the chunk with decode().
Pre-checksums, the keys written to DynamoDB looked like `<fingerprint>:<start time>:<end time>` (aka the ID), and the key for Memcache and S3 was `<user id>/<fingerprint>:<start time>:<end time>`. Fingerprints and times were written in base-10.
Post-checksums, external keys become the same across DynamoDB, Memcache and S3. Numbers become hex encoded. Keys look like: `<user id>/<fingerprint>:<start time>:<end time>:<checksum>`.
In v12+, the fingerprint is a prefix, to support better read and write request parallelization: `<user>/<fprint>/<start>:<end>:<checksum>`.
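To make the v12+ layout concrete, a sketch that renders an external key from its parts. This mirrors the layout described above (times are model.Time milliseconds, numbers hex encoded); it is not the package's own formatter, and the helper name is hypothetical.

import (
	"fmt"

	"github.com/prometheus/common/model"
)

// externalKeyV12 renders <user>/<fprint>/<start>:<end>:<checksum> with hex-encoded numbers.
func externalKeyV12(userID string, fp model.Fingerprint, from, through model.Time, checksum uint32) string {
	return fmt.Sprintf("%s/%x/%x:%x:%x", userID, uint64(fp), int64(from), int64(through), checksum)
}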
func (*Chunk) Decode ¶
func (c *Chunk) Decode(decodeContext *DecodeContext, input []byte) error
Decode the chunk from the given buffer, and confirm the chunk is the one we expected.
type Data ¶
type Data interface {
	// Add adds a SamplePair to the chunks, performs any necessary
	// re-encoding, and creates any necessary overflow chunk.
	// The returned Chunk is the overflow chunk if it was created.
	// The returned Chunk is nil if the sample got appended to the same chunk.
	Add(sample model.SamplePair) (Data, error)
	Marshal(io.Writer) error
	UnmarshalFromBuf([]byte) error
	Encoding() Encoding
	// Rebound returns a smaller chunk that includes all samples between start and end (inclusive).
	// We do not want to change existing Slice implementations because
	// it is built specifically for query optimization and is a noop for some of the encodings.
	Rebound(start, end model.Time, filter filter.Func) (Data, error)
	// Size returns the approximate length of the chunk in bytes.
	Size() int
	// UncompressedSize returns the length of uncompressed bytes.
	UncompressedSize() int
	// Entries returns the number of entries in a chunk
	Entries() int
	Utilization() float64
}
Data is the interface for all chunks. Chunks are generally not goroutine-safe.
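As an illustration of the interface, a sketch that appends a sample and serializes the chunk once it fills up; the 0.95 utilization threshold is a placeholder, and the helper name is hypothetical.

import (
	"bytes"

	"github.com/prometheus/common/model"
)

func appendAndMaybeFlush(d Data, ts model.Time, v model.SampleValue) error {
	// Add returns the overflow chunk when `d` was full and a new chunk had
	// to be created; it returns nil when the sample went into `d` itself.
	overflow, err := d.Add(model.SamplePair{Timestamp: ts, Value: v})
	if err != nil {
		return err
	}
	if overflow != nil || d.Utilization() > 0.95 { // placeholder flush condition
		var buf bytes.Buffer
		if err := d.Marshal(&buf); err != nil {
			return err
		}
		// buf.Bytes() now holds the serialized chunk, ready to be written out.
	}
	return nil
}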
func NewForEncoding ¶
NewForEncoding allows configuring which chunk type you want; it returns a new, empty chunk for the given encoding.
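A small sketch of creating an empty chunk for a given encoding. The signature assumed here, taking an Encoding and returning (Data, error), is not shown above, and the wrapper name is hypothetical.

func newEmptyChunk(enc Encoding) (Data, error) {
	// Assumed signature: NewForEncoding(Encoding) (Data, error).
	// The returned Data is empty and ready for Add().
	return NewForEncoding(enc)
}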
type DecodeContext ¶
type DecodeContext struct {
// contains filtered or unexported fields
}
DecodeContext holds data that can be re-used between decodes of different chunks
func NewDecodeContext ¶
func NewDecodeContext() *DecodeContext
NewDecodeContext creates a new, blank DecodeContext
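A sketch of re-using one DecodeContext while decoding several fetched chunks; the `chunks` slice (e.g. partially populated via ParseExternalKey) and the raw `bufs` fetched from the store are placeholders.

func decodeAll(chunks []Chunk, bufs [][]byte) error {
	decodeCtx := NewDecodeContext() // shared so decode state is re-used across chunks
	for i := range chunks {
		if err := chunks[i].Decode(decodeCtx, bufs[i]); err != nil {
			return err
		}
	}
	return nil
}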
type Encoding ¶
type Encoding byte
Encoding defines which encoding we are using: delta, doubledelta, or varbit.
type RequestChunkFilterer ¶
RequestChunkFilterer creates a ChunkFilterer for a given request context.