Documentation ¶
Overview ¶
Package chunks provides functionality for efficient storage and retrieval of log data and metrics.
The chunks package implements a compact and performant way to store and access log entries and metric samples. It uses various compression and encoding techniques to minimize storage requirements while maintaining fast access times.
Key features:
- Efficient chunk writing with multiple encoding options
- Fast chunk reading with iterators for forward and backward traversal
- Support for time-based filtering of log entries and metric samples
- Integration with Loki's log query language (LogQL) for advanced filtering and processing
- Separate iterators for log entries and metric samples
Main types and functions:
- WriteChunk: Writes log entries to a compressed chunk format
- NewChunkReader: Creates a reader for parsing and accessing chunk data
- NewEntryIterator: Provides an iterator for efficient traversal of log entries in a chunk
- NewSampleIterator: Provides an iterator for efficient traversal of metric samples in a chunk
Entry Iterator: The EntryIterator allows efficient traversal of log entries within a chunk. It supports both forward and backward iteration, time-based filtering, and integration with LogQL pipelines for advanced log processing.
Sample Iterator: The SampleIterator enables efficient traversal of metric samples within a chunk. It supports time-based filtering and integration with LogQL extractors for advanced metric processing. This iterator is particularly useful for handling numeric data extracted from logs or pre-aggregated metrics.
Both iterators implement methods for accessing the current entry or sample, checking for errors, and retrieving associated labels and stream hashes.
This package is designed to work seamlessly with other components of the Loki log aggregation system, providing a crucial layer for data storage and retrieval of both logs and metrics.
Index ¶
- func NewEntryIterator(chunkData []byte, pipeline log.StreamPipeline, direction logproto.Direction, ...) (iter.EntryIterator, error)
- func NewSampleIterator(chunkData []byte, pipeline log.StreamSampleExtractor, from, through int64) (iter.SampleIterator, error)
- func WriteChunk(writer io.Writer, entries []*logproto.Entry, encoding EncodingType) (int64, error)
- type ChunkReader
- type ChunkRef
- type EncodingType
- type Meta
- type StatsWriter
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func NewEntryIterator ¶
func NewEntryIterator(
	chunkData []byte,
	pipeline log.StreamPipeline,
	direction logproto.Direction,
	from, through int64,
) (iter.EntryIterator, error)
NewEntryIterator creates an iterator for efficiently traversing log entries in a chunk. It takes compressed chunk data, a processing pipeline, iteration direction, and a time range. The returned iterator filters entries based on the time range and applies the given pipeline. It handles both forward and backward iteration.
Parameters:
- chunkData: Compressed chunk data containing log entries
- pipeline: StreamPipeline for processing and filtering entries
- direction: Direction of iteration (FORWARD or BACKWARD)
- from: Start timestamp (inclusive) for filtering entries
- through: End timestamp (exclusive) for filtering entries
Returns an EntryIterator and an error if creation fails.
func NewSampleIterator ¶
func NewSampleIterator(
	chunkData []byte,
	pipeline log.StreamSampleExtractor,
	from, through int64,
) (iter.SampleIterator, error)
NewSampleIterator creates an iterator for efficiently traversing samples in a chunk. It takes compressed chunk data, a sample extractor, and a time range. The returned iterator filters samples based on the time range and applies the given extractor.
Parameters:
- chunkData: Compressed chunk data containing samples
- pipeline: StreamSampleExtractor for processing and filtering samples
- from: Start timestamp (inclusive) for filtering samples
- through: End timestamp (exclusive) for filtering samples
Returns a SampleIterator and an error if creation fails.
func WriteChunk ¶
func WriteChunk(writer io.Writer, entries []*logproto.Entry, encoding EncodingType) (int64, error)
WriteChunk writes the log entries to writer using the specified encoding type.
Types ¶
type ChunkReader ¶
type ChunkReader struct {
// contains filtered or unexported fields
}
ChunkReader reads chunks from a byte slice.
func NewChunkReader ¶
func NewChunkReader(b []byte) (*ChunkReader, error)
NewChunkReader creates a new ChunkReader and performs CRC verification.
func (*ChunkReader) At ¶
func (r *ChunkReader) At() (int64, []byte)
At implements iter.EntryIterator. Currently the chunk reader returns the timestamp and the line, but it could return all timestamps and/or all lines.
func (*ChunkReader) Close ¶
func (r *ChunkReader) Close() error
Close implements iter.EntryIterator.
func (*ChunkReader) Next ¶
func (r *ChunkReader) Next() bool
Next implements iter.EntryIterator. Reads the next entry from the chunk.
type EncodingType ¶
type EncodingType byte
EncodingType defines the type for encoding enums.
const (
EncodingSnappy EncodingType = iota + 1
)
Supported encoding types