chunks

package
v0.0.0-...-03ae564
Published: Oct 31, 2024 License: Apache-2.0 Imports: 20 Imported by: 0

Documentation

Index

Constants

const (
	// MagicChunks is 4 bytes at the head of a series file.
	MagicChunks = 0x85BD40DD
	// MagicChunksSize is the size in bytes of MagicChunks.
	MagicChunksSize = 4

	ChunksFormatVersionSize = 1

	// SegmentHeaderSize defines the total size of the header part.
	SegmentHeaderSize = MagicChunksSize + ChunksFormatVersionSize + segmentHeaderPaddingSize
)

Segment header fields constants.
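As a sketch of how this header can be validated when inspecting a segment file on disk (the helper name, the path, and the expected format version value of 1 are assumptions for illustration, not part of this package):

package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"

	"github.com/prometheus/prometheus/tsdb/chunks"
)

// checkSegmentHeader is a hypothetical helper: it reads the first
// SegmentHeaderSize bytes of a chunk segment file and validates the
// big-endian magic number and the format version byte.
func checkSegmentHeader(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	header := make([]byte, chunks.SegmentHeaderSize)
	if _, err := io.ReadFull(f, header); err != nil {
		return err
	}
	if m := binary.BigEndian.Uint32(header[:chunks.MagicChunksSize]); m != chunks.MagicChunks {
		return fmt.Errorf("invalid magic number 0x%x", m)
	}
	if v := header[chunks.MagicChunksSize]; v != 1 { // assumes chunks format version 1
		return fmt.Errorf("unsupported chunks format version %d", v)
	}
	return nil
}

func main() {
	if err := checkSegmentHeader("chunks/000001"); err != nil { // hypothetical segment path
		fmt.Println(err)
	}
}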

const (
	// MaxChunkLengthFieldSize defines the maximum size of the data length part.
	MaxChunkLengthFieldSize = binary.MaxVarintLen32
	// ChunkEncodingSize defines the size of the chunk encoding part.
	ChunkEncodingSize = 1
)

Chunk fields constants.

const (
	// MintMaxtSize is the size of the mint/maxt for head chunk files and chunks.
	MintMaxtSize = 8
	// SeriesRefSize is the size of series reference on disk.
	SeriesRefSize = 8
	// HeadChunkFileHeaderSize is the total size of the header for a head chunk file.
	HeadChunkFileHeaderSize = SegmentHeaderSize
	// MaxHeadChunkFileSize is the max size of a head chunk file.
	MaxHeadChunkFileSize = 128 * 1024 * 1024 // 128 MiB.
	// CRCSize is the size of crc32 sum on disk.
	CRCSize = 4
	// MaxHeadChunkMetaSize is the max size of an m-mapped chunk minus the chunk data.
	// Max because the uvarint size can be smaller.
	MaxHeadChunkMetaSize = SeriesRefSize + 2*MintMaxtSize + ChunkEncodingSize + MaxChunkLengthFieldSize + CRCSize
	// MinWriteBufferSize is the minimum write buffer size allowed.
	MinWriteBufferSize = 64 * 1024 // 64 KiB.
	// MaxWriteBufferSize is the maximum write buffer size allowed.
	MaxWriteBufferSize = 8 * 1024 * 1024 // 8 MiB.
	// DefaultWriteBufferSize is the default write buffer size.
	DefaultWriteBufferSize = 4 * 1024 * 1024 // 4 MiB.
	// DefaultWriteQueueSize is the default size of the in-memory queue used before flushing chunks to the disk.
	// A value of 0 completely disables this feature.
	DefaultWriteQueueSize = 0
)
const (
	// DefaultChunkSegmentSize is the default chunks segment size.
	DefaultChunkSegmentSize = 512 * 1024 * 1024
)
const (
	// MagicHeadChunks is 4 bytes at the beginning of a head chunk file.
	MagicHeadChunks = 0x0130BC91
)

Head chunk file header fields constants.

const (
	OutOfOrderMask = uint8(0b10000000)
)

Chunk encodings for out-of-order chunks. These encodings must be only used by the Head block for its internal bookkeeping.
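The masking itself is plain bit arithmetic on the encoding byte. A minimal standalone illustration of the idea (the ApplyOutOfOrderMask and RemoveMasks methods of ChunkDiskMapper below expose this on the mapper itself; this snippet is only a sketch):

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/tsdb/chunkenc"
	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	base := chunkenc.EncXOR

	// Setting the high bit marks the encoding as the out-of-order variant used
	// by the Head block's bookkeeping; clearing it recovers the base encoding.
	ooo := chunkenc.Encoding(uint8(base) | chunks.OutOfOrderMask)
	recovered := chunkenc.Encoding(uint8(ooo) &^ chunks.OutOfOrderMask)

	fmt.Printf("base=%v ooo=%#x recovered=%v\n", base, uint8(ooo), recovered)
}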

Variables

var ErrChunkDiskMapperClosed = errors.New("ChunkDiskMapper closed")

ErrChunkDiskMapperClosed is returned by any method to indicate that the ChunkDiskMapper was closed.

var HeadChunkFilePreallocationSize int64 = MinWriteBufferSize * 2

HeadChunkFilePreallocationSize is the size to which the m-mapped file should be preallocated when a new file is cut. Windows needs pre-allocation, while other OSes do not, but we observed that a pre-allocation of 0 causes unit tests to flake. This small allocation for non-Windows OSes removes the flake.

Functions

This section is empty.

Types

type BlockChunkRef

type BlockChunkRef uint64

BlockChunkRef refers to a chunk within a persisted block. The upper 4 bytes are for the segment index and the lower 4 bytes are for the segment offset where the data starts for this chunk.

func NewBlockChunkRef

func NewBlockChunkRef(fileIndex, fileOffset uint64) BlockChunkRef

NewBlockChunkRef packs the file index and byte offset into a BlockChunkRef.

func (BlockChunkRef) Unpack

func (b BlockChunkRef) Unpack() (int, int)
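Unpack returns the segment index and the start offset packed into the reference. A short round-trip sketch (the values are illustrative):

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	// Pack segment file index 3 and byte offset 1234 into one reference,
	// then recover both values.
	ref := chunks.NewBlockChunkRef(3, 1234)
	seg, off := ref.Unpack()
	fmt.Println(seg, off) // 3 1234
}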

type ByteSlice

type ByteSlice interface {
	Len() int
	Range(start, end int) []byte
}

ByteSlice abstracts a byte slice.

type ChunkDiskMapper

type ChunkDiskMapper struct {
	// contains filtered or unexported fields
}

ChunkDiskMapper is for writing the Head block chunks to disk and accessing chunks via m-mapped files.

func NewChunkDiskMapper

func NewChunkDiskMapper(reg prometheus.Registerer, dir string, pool chunkenc.Pool, writeBufferSize, writeQueueSize int) (*ChunkDiskMapper, error)

NewChunkDiskMapper returns a new ChunkDiskMapper against the given directory using the default head chunk file duration. NOTE: The 'IterateAllChunks' method needs to be called at least once after creating a ChunkDiskMapper to set the maxt of all the files.
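A minimal usage sketch, assuming a writable directory (the path is hypothetical); it also shows the required IterateAllChunks call with a callback that merely counts chunks:

package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/prometheus/tsdb/chunkenc"
	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	cdm, err := chunks.NewChunkDiskMapper(
		prometheus.NewRegistry(), // metrics registry
		"/tmp/head_chunks",       // hypothetical directory for head chunk files
		chunkenc.NewPool(),       // chunk pool used when reading chunks back
		chunks.DefaultWriteBufferSize,
		chunks.DefaultWriteQueueSize,
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cdm.Close()

	// Required at least once after creation; here the callback only counts chunks.
	var n int
	err = cdm.IterateAllChunks(func(seriesRef chunks.HeadSeriesRef, chunkRef chunks.ChunkDiskMapperRef,
		mint, maxt int64, numSamples uint16, encoding chunkenc.Encoding, isOOO bool) error {
		n++
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("found %d m-mapped chunks", n)
}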

func (*ChunkDiskMapper) ApplyOutOfOrderMask

func (cdm *ChunkDiskMapper) ApplyOutOfOrderMask(sourceEncoding chunkenc.Encoding) chunkenc.Encoding

func (*ChunkDiskMapper) Chunk

func (cdm *ChunkDiskMapper) Chunk(ref ChunkDiskMapperRef) (chunkenc.Chunk, error)

Chunk returns a chunk from a given reference.

func (*ChunkDiskMapper) Close

func (cdm *ChunkDiskMapper) Close() error

Close closes all the open files in ChunkDiskMapper. It is no longer safe to access chunks from this struct after calling Close.

func (*ChunkDiskMapper) CutNewFile

func (cdm *ChunkDiskMapper) CutNewFile()

CutNewFile ensures that a new file will be created the next time a chunk is written.

func (*ChunkDiskMapper) DeleteCorrupted

func (cdm *ChunkDiskMapper) DeleteCorrupted(originalErr error) error

DeleteCorrupted deletes the head chunk file that had the corruption and all the head chunk files after it.

func (*ChunkDiskMapper) IsOutOfOrderChunk

func (cdm *ChunkDiskMapper) IsOutOfOrderChunk(e chunkenc.Encoding) bool

func (*ChunkDiskMapper) IsQueueEmpty

func (cdm *ChunkDiskMapper) IsQueueEmpty() bool

func (*ChunkDiskMapper) IterateAllChunks

func (cdm *ChunkDiskMapper) IterateAllChunks(f func(seriesRef HeadSeriesRef, chunkRef ChunkDiskMapperRef, mint, maxt int64, numSamples uint16, encoding chunkenc.Encoding, isOOO bool) error) (err error)

IterateAllChunks iterates over all mmappedChunkFiles (in order of head chunk file name/number) and all the chunks within them, and runs the provided function with information about each chunk. It returns on the first error encountered. NOTE: This method needs to be called at least once after creating a ChunkDiskMapper to set the maxt of all the files.

func (*ChunkDiskMapper) RemoveMasks

func (cdm *ChunkDiskMapper) RemoveMasks(sourceEncoding chunkenc.Encoding) chunkenc.Encoding

func (*ChunkDiskMapper) Size

func (cdm *ChunkDiskMapper) Size() (int64, error)

Size returns the size of the chunk files.

func (*ChunkDiskMapper) Truncate

func (cdm *ChunkDiskMapper) Truncate(fileNo uint32) error

Truncate deletes the head chunk files whose file number is less than the given fileNo.

func (*ChunkDiskMapper) WriteChunk

func (cdm *ChunkDiskMapper) WriteChunk(seriesRef HeadSeriesRef, mint, maxt int64, chk chunkenc.Chunk, isOOO bool, callback func(err error)) (chkRef ChunkDiskMapperRef)

WriteChunk writes the chunk to disk. The returned chunk ref is the reference to where the chunk encoding starts for this chunk.
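A sketch of writing a single chunk through a freshly created mapper; the directory and sample data are illustrative, and the setup mirrors the NewChunkDiskMapper sketch above:

package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/prometheus/tsdb/chunkenc"
	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	cdm, err := chunks.NewChunkDiskMapper(prometheus.NewRegistry(), "/tmp/head_chunks",
		chunkenc.NewPool(), chunks.DefaultWriteBufferSize, chunks.DefaultWriteQueueSize)
	if err != nil {
		log.Fatal(err)
	}
	defer cdm.Close()

	// Required at least once after creation; a no-op callback is enough here.
	if err := cdm.IterateAllChunks(func(chunks.HeadSeriesRef, chunks.ChunkDiskMapperRef,
		int64, int64, uint16, chunkenc.Encoding, bool) error {
		return nil
	}); err != nil {
		log.Fatal(err)
	}

	// Build a small XOR chunk covering [1000, 2000] (timestamps in milliseconds).
	chk := chunkenc.NewXORChunk()
	app, err := chk.Appender()
	if err != nil {
		log.Fatal(err)
	}
	app.Append(1000, 1.0)
	app.Append(2000, 2.0)

	// The callback receives any write error; with DefaultWriteQueueSize (0) the
	// in-memory write queue is disabled.
	ref := cdm.WriteChunk(chunks.HeadSeriesRef(1), 1000, 2000, chk, false, func(err error) {
		if err != nil {
			log.Println("chunk write failed:", err)
		}
	})
	seq, off := ref.Unpack()
	log.Printf("chunk written to head chunk file %d at offset %d", seq, off)
}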

type ChunkDiskMapperRef

type ChunkDiskMapperRef uint64

ChunkDiskMapperRef represents the location of a head chunk on disk. The upper 4 bytes hold the index of the head chunk file and the lower 4 bytes hold the byte offset in the head chunk file where the chunk starts.

func (ChunkDiskMapperRef) GreaterThan

func (ref ChunkDiskMapperRef) GreaterThan(r ChunkDiskMapperRef) bool

func (ChunkDiskMapperRef) GreaterThanOrEqualTo

func (ref ChunkDiskMapperRef) GreaterThanOrEqualTo(r ChunkDiskMapperRef) bool

func (ChunkDiskMapperRef) Unpack

func (ref ChunkDiskMapperRef) Unpack() (seq, offset int)

type ChunkRef

type ChunkRef uint64

ChunkRef is a generic reference for reading chunk data. In Prometheus it is either a HeadChunkRef or a BlockChunkRef, though other implementations may have their own reference types.

type CorruptionErr

type CorruptionErr struct {
	Dir       string
	FileIndex int
	Err       error
}

CorruptionErr is an error that's returned when corruption is encountered.

func (*CorruptionErr) Error

func (e *CorruptionErr) Error() string

func (*CorruptionErr) Unwrap

func (e *CorruptionErr) Unwrap() error

type HeadChunkID

type HeadChunkID uint64

HeadChunkID refers to a specific chunk in a series (memSeries) in the Head. Each memSeries has its own monotonically increasing number to refer to its chunks. If the HeadChunkID value is...

  • memSeries.firstChunkID+len(memSeries.mmappedChunks), it's the head chunk.
  • less than the above, but >= memSeries.firstChunkID, then it's memSeries.mmappedChunks[i] where i = HeadChunkID - memSeries.firstChunkID.

If memSeries.headChunks is non-nil, it points to a *memChunk that holds the current "open" (accepting appends) instance. *memChunk is a linked list, and the memChunk.next pointer might link to an older *memChunk instance. If there are multiple *memChunk instances linked to each other from memSeries.headChunks, they will be m-mapped as soon as possible, leaving only the "open" *memChunk instance.

Example: assume a memSeries.firstChunkID=7 and memSeries.mmappedChunks=[p5,p6,p7,p8,p9].

| HeadChunkID value | refers to ...                                                                          |
|-------------------|----------------------------------------------------------------------------------------|
|               0-6 | chunks that have been compacted to blocks, these won't return data for queries in Head |
|              7-11 | memSeries.mmappedChunks[i] where i is 0 to 4.                                          |
|                12 |                                                         *memChunk{next: nil}
|                13 |                                         *memChunk{next: ^}
|                14 | memSeries.headChunks -> *memChunk{next: ^}

type HeadChunkRef

type HeadChunkRef uint64

HeadChunkRef packs a HeadSeriesRef and a ChunkID into a global 8-byte ID. The HeadSeriesRef and ChunkID may not exceed 5 and 3 bytes, respectively.

func NewHeadChunkRef

func NewHeadChunkRef(hsr HeadSeriesRef, chunkID HeadChunkID) HeadChunkRef

func (HeadChunkRef) Unpack

func (p HeadChunkRef) Unpack() (HeadSeriesRef, HeadChunkID)
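A short round-trip sketch (the values are illustrative and must fit within the 5-byte and 3-byte limits noted above):

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	// Pack a series reference and a per-series chunk ID into one 8-byte ref,
	// then recover both.
	ref := chunks.NewHeadChunkRef(chunks.HeadSeriesRef(42), chunks.HeadChunkID(7))
	hsr, cid := ref.Unpack()
	fmt.Println(hsr, cid) // 42 7
}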

type HeadSeriesRef

type HeadSeriesRef uint64

HeadSeriesRef refers to an in-memory series.

type Iterator

type Iterator interface {
	// At returns the current meta.
	// It depends on the implementation whether the chunk is populated or not.
	At() Meta
	// Next advances the iterator by one.
	Next() bool
	// Err returns an optional error if Next is false.
	Err() error
}

Iterator iterates over the chunks of a single time series.

type Meta

type Meta struct {
	// Ref and Chunk hold either a reference that can be used to retrieve
	// chunk data or the data itself.
	// If Chunk is nil, call ChunkReader.ChunkOrIterable(Meta.Ref) to get the
	// chunk and assign it to the Chunk field. If an iterable is returned from
	// that method, then it may not be possible to set Chunk as the iterable
	// might form several chunks.
	Ref   ChunkRef
	Chunk chunkenc.Chunk

	// Time range the data covers.
	// When MaxTime == math.MaxInt64 the chunk is still open and being appended to.
	MinTime, MaxTime int64

	// OOOLastRef, OOOLastMinTime and OOOLastMaxTime are kept as markers for
	// overlapping chunks.
	// These fields point to the last created out-of-order Chunk (the head) that existed
	// when Series() was called and was overlapping.
	// Series() and Chunk() method responses should be consistent for the same
	// query even if new data is added in between the calls.
	OOOLastRef                     ChunkRef
	OOOLastMinTime, OOOLastMaxTime int64
}

Meta holds information about one or more chunks. For examples of when chunks.Meta could refer to multiple chunks, see ChunkReader.ChunkOrIterable().

func ChunkFromSamples

func ChunkFromSamples(s []Sample) (Meta, error)

ChunkFromSamples requires all samples to have the same type.
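A short sketch that builds a Meta from generated samples; GenerateSamples (documented further down) produces samples of a single value type, which satisfies this requirement:

package main

import (
	"fmt"
	"log"

	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	// Generate a handful of samples counting up from 0 and pack them into one chunk.
	samples := chunks.GenerateSamples(0, 10)

	meta, err := chunks.ChunkFromSamples(samples)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(meta.MinTime, meta.MaxTime, meta.Chunk.NumSamples())
}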

func ChunkFromSamplesGeneric

func ChunkFromSamplesGeneric(s Samples) (Meta, error)

ChunkFromSamplesGeneric requires all samples to have the same type.

func (*Meta) OverlapsClosedInterval

func (cm *Meta) OverlapsClosedInterval(mint, maxt int64) bool

OverlapsClosedInterval returns true if the chunk overlaps [mint, maxt].
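For example (the time values are illustrative):

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	m := chunks.Meta{MinTime: 1000, MaxTime: 2000}

	fmt.Println(m.OverlapsClosedInterval(1500, 3000)) // true: the intervals share [1500, 2000]
	fmt.Println(m.OverlapsClosedInterval(2001, 3000)) // false: the query interval starts after MaxTime
}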

type Reader

type Reader struct {
	// contains filtered or unexported fields
}

Reader implements a ChunkReader for a serialized byte stream of series data.

func NewDirReader

func NewDirReader(dir string, pool chunkenc.Pool) (*Reader, error)

NewDirReader returns a new Reader against sequentially numbered files in the given directory.

func (*Reader) ChunkOrIterable

func (s *Reader) ChunkOrIterable(meta Meta) (chunkenc.Chunk, chunkenc.Iterable, error)

ChunkOrIterable returns a chunk, or an iterable, from a given reference.
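A minimal read-side sketch, assuming an existing block chunks directory; the path and the reference are illustrative only (a real Ref would normally come from the block's index):

package main

import (
	"log"

	"github.com/prometheus/prometheus/tsdb/chunkenc"
	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	r, err := chunks.NewDirReader("path/to/block/chunks", chunkenc.NewPool()) // hypothetical directory
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	// Illustrative reference only; a real Ref normally comes from the block's index.
	meta := chunks.Meta{Ref: chunks.ChunkRef(chunks.NewBlockChunkRef(1, chunks.SegmentHeaderSize))}

	chk, iterable, err := r.ChunkOrIterable(meta)
	if err != nil {
		log.Fatal(err)
	}
	switch {
	case chk != nil:
		log.Printf("got a chunk with %d samples", chk.NumSamples())
	case iterable != nil:
		log.Println("got an iterable covering several chunks")
	}
}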

func (*Reader) Close

func (s *Reader) Close() error

func (*Reader) Size

func (s *Reader) Size() int64

Size returns the size of the chunks.

type Sample

type Sample interface {
	T() int64
	F() float64
	H() *histogram.Histogram
	FH() *histogram.FloatHistogram
	Type() chunkenc.ValueType
}

func ChunkMetasToSamples

func ChunkMetasToSamples(chunks []Meta) (result []Sample)

ChunkMetasToSamples converts a slice of chunk metadata to a slice of samples. Used in tests to compare the contents of chunks.

func GenerateSamples

func GenerateSamples(start, numSamples int) []Sample

GenerateSamples generates numSamples samples, starting at start and counting up.

type SampleSlice

type SampleSlice []Sample

func (SampleSlice) Get

func (s SampleSlice) Get(i int) Sample

func (SampleSlice) Len

func (s SampleSlice) Len() int

type Samples

type Samples interface {
	Get(i int) Sample
	Len() int
}

type Writer

type Writer struct {
	// contains filtered or unexported fields
}

Writer implements the ChunkWriter interface for the standard serialization format.

func NewWriter

func NewWriter(dir string) (*Writer, error)

NewWriter returns a new writer against the given directory using the default segment size.

func NewWriterWithSegSize

func NewWriterWithSegSize(dir string, segmentSize int64) (*Writer, error)

NewWriterWithSegSize returns a new writer against the given directory and allows setting a custom size for the segments.

func (*Writer) Close

func (w *Writer) Close() error

func (*Writer) WriteChunks

func (w *Writer) WriteChunks(chks ...Meta) error

WriteChunks writes as many chunks as possible to the current segment, cuts a new segment when the current one is full, and writes the remaining chunks to the new segment.
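A minimal write-side sketch, assuming a writable directory (the path and sample data are illustrative); it pairs with the Reader sketch above:

package main

import (
	"log"

	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	w, err := chunks.NewWriter("path/to/block/chunks") // hypothetical directory
	if err != nil {
		log.Fatal(err)
	}

	// Build one Meta from generated samples and persist it; WriteChunks cuts a
	// new segment file on its own whenever the current one fills up.
	meta, err := chunks.ChunkFromSamples(chunks.GenerateSamples(0, 120))
	if err != nil {
		log.Fatal(err)
	}
	if err := w.WriteChunks(meta); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
}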
