package chunks

v3.2.0
Published: Sep 18, 2024 License: AGPL-3.0 Imports: 17 Imported by: 0

Documentation

Overview

Package chunks provides functionality for efficient storage and retrieval of log data and metrics.

The chunks package implements a compact and performant way to store and access log entries and metric samples. It uses various compression and encoding techniques to minimize storage requirements while maintaining fast access times.

Key features:

  • Efficient chunk writing with multiple encoding options
  • Fast chunk reading with iterators for forward and backward traversal
  • Support for time-based filtering of log entries and metric samples
  • Integration with Loki's log query language (LogQL) for advanced filtering and processing
  • Separate iterators for log entries and metric samples

Main types and functions:

  • WriteChunk: Writes log entries to a compressed chunk format
  • NewChunkReader: Creates a reader for parsing and accessing chunk data
  • NewEntryIterator: Provides an iterator for efficient traversal of log entries in a chunk
  • NewSampleIterator: Provides an iterator for efficient traversal of metric samples in a chunk

Entry Iterator: The EntryIterator allows efficient traversal of log entries within a chunk. It supports both forward and backward iteration, time-based filtering, and integration with LogQL pipelines for advanced log processing.

Sample Iterator: The SampleIterator enables efficient traversal of metric samples within a chunk. It supports time-based filtering and integration with LogQL extractors for advanced metric processing. This iterator is particularly useful for handling numeric data extracted from logs or pre-aggregated metrics.

Both iterators implement methods for accessing the current entry or sample, checking for errors, and retrieving associated labels and stream hashes.

This package is designed to work seamlessly with other components of the Loki log aggregation system, providing a crucial layer for data storage and retrieval of both logs and metrics.
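
The sketch below shows a minimal round trip using only the low-level API documented on this page: WriteChunk serializes entries into an in-memory buffer, and ChunkReader reads them back. The import paths are assumptions based on the Loki v3 module layout; adjust them to wherever this package lives in your tree.

	package main

	import (
		"bytes"
		"fmt"
		"time"

		// Assumed import paths based on the Loki v3 module layout.
		"github.com/grafana/loki/v3/pkg/logproto"
		"github.com/grafana/loki/v3/pkg/storage/wal/chunks"
	)

	func main() {
		entries := []*logproto.Entry{
			{Timestamp: time.Unix(0, 1), Line: "first line"},
			{Timestamp: time.Unix(0, 2), Line: "second line"},
		}

		// Serialize the entries into an in-memory chunk using Snappy encoding.
		var buf bytes.Buffer
		n, err := chunks.WriteChunk(&buf, entries, chunks.EncodingSnappy)
		if err != nil {
			panic(err)
		}
		fmt.Printf("wrote %d bytes\n", n)

		// Read the chunk back; NewChunkReader verifies the CRC before returning.
		r, err := chunks.NewChunkReader(buf.Bytes())
		if err != nil {
			panic(err)
		}
		defer r.Close()

		for r.Next() {
			ts, line := r.At()
			fmt.Println(time.Unix(0, ts), string(line))
		}
		if err := r.Err(); err != nil {
			panic(err)
		}
	}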

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewEntryIterator

func NewEntryIterator(
	chunkData []byte,
	pipeline log.StreamPipeline,
	direction logproto.Direction,
	from, through int64,
) (iter.EntryIterator, error)

NewEntryIterator creates an iterator for efficiently traversing log entries in a chunk. It takes compressed chunk data, a processing pipeline, iteration direction, and a time range. The returned iterator filters entries based on the time range and applies the given pipeline. It handles both forward and backward iteration.

Parameters:

  • chunkData: Compressed chunk data containing log entries
  • pipeline: StreamPipeline for processing and filtering entries
  • direction: Direction of iteration (FORWARD or BACKWARD)
  • from: Start timestamp (inclusive) for filtering entries
  • through: End timestamp (exclusive) for filtering entries

Returns an EntryIterator and an error if creation fails.
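
A minimal sketch of forward iteration over a time range. It assumes the caller already has a log.StreamPipeline (for example, one obtained from a compiled LogQL expression via its Pipeline's ForStream method); the iterator method names Next, At, Err, and Close follow the v3 iter interfaces and, like the import paths, are assumptions of this example.

	package example

	import (
		"fmt"

		// Assumed import paths based on the Loki v3 module layout.
		"github.com/grafana/loki/v3/pkg/logproto"
		"github.com/grafana/loki/v3/pkg/logql/log"
		"github.com/grafana/loki/v3/pkg/storage/wal/chunks"
	)

	// printEntries scans entries in [from, through) in forward order and prints
	// them after the pipeline has been applied.
	func printEntries(chunkData []byte, pipeline log.StreamPipeline, from, through int64) error {
		it, err := chunks.NewEntryIterator(chunkData, pipeline, logproto.FORWARD, from, through)
		if err != nil {
			return err
		}
		defer it.Close()

		for it.Next() {
			e := it.At() // current logproto.Entry (accessor name assumed from iter.EntryIterator)
			fmt.Println(e.Timestamp, e.Line)
		}
		return it.Err()
	}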

func NewSampleIterator

func NewSampleIterator(
	chunkData []byte,
	pipeline log.StreamSampleExtractor,
	from, through int64,
) (iter.SampleIterator, error)

NewSampleIterator creates an iterator for efficiently traversing metric samples in a chunk. It takes compressed chunk data, a StreamSampleExtractor, and a time range. The returned iterator filters samples based on the time range and applies the given extractor.

Parameters:

  • chunkData: Compressed chunk data containing samples
  • pipeline: StreamSampleExtractor for processing and filtering samples
  • from: Start timestamp (inclusive) for filtering samples
  • through: End timestamp (exclusive) for filtering samples

Returns a SampleIterator and an error if creation fails.
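
A matching sketch for the sample path. The caller supplies a log.StreamSampleExtractor (for example, one derived from a compiled metric query); the iterator yields samples whose values are summed here. Accessor and field names follow the v3 iter and logproto packages and are assumptions of this example.

	// sumSamples adds up the values of all samples in [from, through) after the
	// extractor has been applied. Imports are as in the previous sketch.
	func sumSamples(chunkData []byte, extractor log.StreamSampleExtractor, from, through int64) (float64, error) {
		it, err := chunks.NewSampleIterator(chunkData, extractor, from, through)
		if err != nil {
			return 0, err
		}
		defer it.Close()

		var total float64
		for it.Next() {
			s := it.At() // current logproto.Sample (accessor name assumed)
			total += s.Value
		}
		return total, it.Err()
	}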

func WriteChunk

func WriteChunk(writer io.Writer, entries []*logproto.Entry, encoding EncodingType) (int64, error)

WriteChunk writes the log entries to the given writer using the specified encoding type and returns the number of bytes written.

Types

type ChunkReader

type ChunkReader struct {
	// contains filtered or unexported fields
}

ChunkReader reads chunks from a byte slice.

func NewChunkReader

func NewChunkReader(b []byte) (*ChunkReader, error)

NewChunkReader creates a new ChunkReader and performs CRC verification.

func (*ChunkReader) At

func (r *ChunkReader) At() (int64, []byte)

At implements iter.EntryIterator. Currently the chunk reader returns the timestamp and the line, but it could return all timestamps and/or all lines.

func (*ChunkReader) Close

func (r *ChunkReader) Close() error

Close implements iter.EntryIterator.

func (*ChunkReader) Err

func (r *ChunkReader) Err() error

Err implements iter.EntryIterator.

func (*ChunkReader) Next

func (r *ChunkReader) Next() bool

Next implements iter.EntryIterator. Reads the next entry from the chunk.

type ChunkRef

type ChunkRef uint64

func NewChunkRef

func NewChunkRef(offset, size uint64) ChunkRef

func (ChunkRef) Unpack

func (b ChunkRef) Unpack() (int, int)
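
ChunkRef packs a chunk's byte offset and size into a single uint64. A minimal usage sketch, assuming Unpack returns the two values in the same order they were passed to NewChunkRef:

	ref := chunks.NewChunkRef(4096, 1024) // offset and size of a chunk, in bytes
	offset, size := ref.Unpack()          // assumed to return (offset, size)
	fmt.Println(offset, size)             // 4096 1024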

type EncodingType

type EncodingType byte

EncodingType defines the type for encoding enums.

const (
	EncodingSnappy EncodingType = iota + 1
)

Supported encoding types

type Meta

type Meta struct {
	// Start offset of the chunk
	Ref ChunkRef
	// Minimum and maximum timestamps of the chunk, with nanosecond precision.
	MinTime, MaxTime int64
}
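
A hedged sketch of how a caller might fill in a Meta after writing a chunk: off is the byte offset at which the chunk starts in the destination, n is the size returned by WriteChunk, and the entries are assumed to be sorted by timestamp. The usage pattern is an assumption; only the field meanings come from the comments above.

	// off and n are the chunk's start offset and size; entries are sorted by timestamp.
	meta := chunks.Meta{
		Ref:     chunks.NewChunkRef(uint64(off), uint64(n)),
		MinTime: entries[0].Timestamp.UnixNano(),
		MaxTime: entries[len(entries)-1].Timestamp.UnixNano(),
	}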

type StatsWriter

type StatsWriter struct {
	io.Writer
	// contains filtered or unexported fields
}

func (*StatsWriter) Write

func (w *StatsWriter) Write(p []byte) (int, error)
