Documentation ¶
Index ¶
- Constants
- Variables
- type BlockChunkRef
- type ByteSlice
- type ChunkDiskMapper
- func (cdm *ChunkDiskMapper) Chunk(ref ChunkDiskMapperRef) (chunkenc.Chunk, error)
- func (cdm *ChunkDiskMapper) Close() error
- func (cdm *ChunkDiskMapper) CutNewFile()
- func (cdm *ChunkDiskMapper) DeleteCorrupted(originalErr error) error
- func (cdm *ChunkDiskMapper) IsQueueEmpty() bool
- func (cdm *ChunkDiskMapper) IterateAllChunks(...) (err error)
- func (cdm *ChunkDiskMapper) Size() (int64, error)
- func (cdm *ChunkDiskMapper) Truncate(mint int64) error
- func (cdm *ChunkDiskMapper) WriteChunk(seriesRef HeadSeriesRef, mint, maxt int64, chk chunkenc.Chunk, ...) (chkRef ChunkDiskMapperRef)
- type ChunkDiskMapperRef
- type ChunkRef
- type CorruptionErr
- type HeadChunkID
- type HeadChunkRef
- type HeadSeriesRef
- type Iterator
- type Meta
- type Reader
- type Writer
Constants ¶
const (
	// MagicChunks is 4 bytes at the head of a series file.
	MagicChunks = 0x85BD40DD
	// MagicChunksSize is the size in bytes of MagicChunks.
	MagicChunksSize = 4

	ChunksFormatVersionSize = 1

	// SegmentHeaderSize defines the total size of the header part.
	SegmentHeaderSize = MagicChunksSize + ChunksFormatVersionSize + segmentHeaderPaddingSize
)
Segment header fields constants.
const (
	// MaxChunkLengthFieldSize defines the maximum size of the data length part.
	MaxChunkLengthFieldSize = binary.MaxVarintLen32
	// ChunkEncodingSize defines the size of the chunk encoding part.
	ChunkEncodingSize = 1
)
Chunk fields constants.
const (
	// MintMaxtSize is the size of the mint/maxt for head chunk file and chunks.
	MintMaxtSize = 8
	// SeriesRefSize is the size of series reference on disk.
	SeriesRefSize = 8
	// HeadChunkFileHeaderSize is the total size of the header for a head chunk file.
	HeadChunkFileHeaderSize = SegmentHeaderSize
	// MaxHeadChunkFileSize is the max size of a head chunk file.
	MaxHeadChunkFileSize = 128 * 1024 * 1024 // 128 MiB.
	// CRCSize is the size of crc32 sum on disk.
	CRCSize = 4
	// MaxHeadChunkMetaSize is the max size of an mmapped chunks minus the chunks data.
	// Max because the uvarint size can be smaller.
	MaxHeadChunkMetaSize = SeriesRefSize + 2*MintMaxtSize + ChunkEncodingSize + MaxChunkLengthFieldSize + CRCSize
	// MinWriteBufferSize is the minimum write buffer size allowed.
	MinWriteBufferSize = 64 * 1024 // 64KB.
	// MaxWriteBufferSize is the maximum write buffer size allowed.
	MaxWriteBufferSize = 8 * 1024 * 1024 // 8 MiB.
	// DefaultWriteBufferSize is the default write buffer size.
	DefaultWriteBufferSize = 4 * 1024 * 1024 // 4 MiB.
	// DefaultWriteQueueSize is the default size of the in-memory queue used before flushing chunks to the disk.
	// A value of 0 completely disables this feature.
	DefaultWriteQueueSize = 0
)
const (
// DefaultChunkSegmentSize is the default chunks segment size.
DefaultChunkSegmentSize = 512 * 1024 * 1024
)
const (
// MagicHeadChunks is 4 bytes at the beginning of a head chunk file.
MagicHeadChunks = 0x0130BC91
)
Head chunk file header fields constants.
Variables ¶
var ErrChunkDiskMapperClosed = errors.New("ChunkDiskMapper closed")
ErrChunkDiskMapperClosed is returned by any method when the ChunkDiskMapper has already been closed.
var HeadChunkFilePreallocationSize int64 = MinWriteBufferSize * 2
HeadChunkFilePreallocationSize is the size to which the m-map file should be preallocated when a new file is cut. Windows requires pre-allocation while other operating systems do not, but a pre-allocation of 0 was observed to cause unit tests to flake; this small allocation on non-Windows systems removes the flake.
Functions ¶
This section is empty.
Types ¶
type BlockChunkRef ¶
type BlockChunkRef uint64
BlockChunkRef refers to a chunk within a persisted block. The upper 4 bytes are for the segment index and the lower 4 bytes are for the segment offset where the data starts for this chunk.
func NewBlockChunkRef ¶
func NewBlockChunkRef(fileIndex, fileOffset uint64) BlockChunkRef
NewBlockChunkRef packs the file index and byte offset into a BlockChunkRef.
func (BlockChunkRef) Unpack ¶
func (b BlockChunkRef) Unpack() (int, int)
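A minimal sketch of packing and unpacking a block chunk reference with the two functions above; the import path github.com/prometheus/prometheus/tsdb/chunks and the concrete values are assumptions for illustration.

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	// Pack a segment file index and a byte offset into one 8-byte reference
	// (upper 4 bytes: file index, lower 4 bytes: offset), then unpack it again.
	ref := chunks.NewBlockChunkRef(3, 1024)
	seq, offset := ref.Unpack()
	fmt.Println(seq, offset) // expected: 3 1024
}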
type ChunkDiskMapper ¶
type ChunkDiskMapper struct {
// contains filtered or unexported fields
}
ChunkDiskMapper is for writing the Head block chunks to disk and accessing those chunks via mmapped files.
func NewChunkDiskMapper ¶
func NewChunkDiskMapper(reg prometheus.Registerer, dir string, pool chunkenc.Pool, writeBufferSize, writeQueueSize int) (*ChunkDiskMapper, error)
NewChunkDiskMapper returns a new ChunkDiskMapper against the given directory using the default head chunk file duration. NOTE: The IterateAllChunks method needs to be called at least once after creating a ChunkDiskMapper to set the maxt of all the files.
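A hedged setup sketch following the note above: create a mapper and immediately run IterateAllChunks once. The directory path is hypothetical; the registerer and pool arguments use constructors from client_golang and chunkenc that match the documented signature.

package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/prometheus/tsdb/chunkenc"
	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	cdm, err := chunks.NewChunkDiskMapper(
		prometheus.NewRegistry(),
		"data/head_chunks", // hypothetical directory
		chunkenc.NewPool(),
		chunks.DefaultWriteBufferSize,
		chunks.DefaultWriteQueueSize,
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cdm.Close()

	// Required: call IterateAllChunks at least once so the mapper learns the
	// maxt of every existing head chunk file.
	err = cdm.IterateAllChunks(func(seriesRef chunks.HeadSeriesRef, chunkRef chunks.ChunkDiskMapperRef, mint, maxt int64, numSamples uint16) error {
		log.Printf("series=%d ref=%d mint=%d maxt=%d samples=%d", seriesRef, chunkRef, mint, maxt, numSamples)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}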
func (*ChunkDiskMapper) Chunk ¶
func (cdm *ChunkDiskMapper) Chunk(ref ChunkDiskMapperRef) (chunkenc.Chunk, error)
Chunk returns a chunk from a given reference.
func (*ChunkDiskMapper) Close ¶
func (cdm *ChunkDiskMapper) Close() error
Close closes all the open files in the ChunkDiskMapper. It is no longer safe to access chunks from this struct after calling Close.
func (*ChunkDiskMapper) CutNewFile ¶
func (cdm *ChunkDiskMapper) CutNewFile()
CutNewFile ensures that a new file is created the next time a chunk is written.
func (*ChunkDiskMapper) DeleteCorrupted ¶
func (cdm *ChunkDiskMapper) DeleteCorrupted(originalErr error) error
DeleteCorrupted deletes all the head chunk files after the one which had the corruption (including the corrupt file).
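A hedged sketch of one plausible recovery path: if chunk iteration reports a corruption, drop the corrupt file and everything after it. The helper name is hypothetical; cdm is an initialized *ChunkDiskMapper (see the setup sketch under NewChunkDiskMapper).

import (
	"errors"

	"github.com/prometheus/prometheus/tsdb/chunks"
)

// handleIterateError is a hypothetical helper: if IterateAllChunks failed with
// a CorruptionErr, delete the corrupt head chunk file and every later file so
// the caller can rebuild in-memory state from what remains.
func handleIterateError(cdm *chunks.ChunkDiskMapper, err error) error {
	var cerr *chunks.CorruptionErr
	if errors.As(err, &cerr) {
		return cdm.DeleteCorrupted(err)
	}
	return err
}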
func (*ChunkDiskMapper) IsQueueEmpty ¶
func (cdm *ChunkDiskMapper) IsQueueEmpty() bool
func (*ChunkDiskMapper) IterateAllChunks ¶
func (cdm *ChunkDiskMapper) IterateAllChunks(f func(seriesRef HeadSeriesRef, chunkRef ChunkDiskMapperRef, mint, maxt int64, numSamples uint16) error) (err error)
IterateAllChunks iterates over all mmappedChunkFiles (in order of head chunk file name/number) and all the chunks within them, and runs the provided function with information about each chunk. It returns on the first error encountered. NOTE: This method needs to be called at least once after creating the ChunkDiskMapper to set the maxt of all the files.
func (*ChunkDiskMapper) Size ¶
func (cdm *ChunkDiskMapper) Size() (int64, error)
Size returns the size of the chunk files.
func (*ChunkDiskMapper) Truncate ¶
func (cdm *ChunkDiskMapper) Truncate(mint int64) error
Truncate deletes the head chunk files which are strictly below the mint. mint should be in milliseconds.
func (*ChunkDiskMapper) WriteChunk ¶
func (cdm *ChunkDiskMapper) WriteChunk(seriesRef HeadSeriesRef, mint, maxt int64, chk chunkenc.Chunk, callback func(err error)) (chkRef ChunkDiskMapperRef)
WriteChunk writes the chunk to disk. The returned chunk reference points to where the chunk encoding starts on disk.
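A hedged sketch of writing a single XOR chunk through the mapper and reading it back via the returned reference; the helper name and series reference 42 are illustrative, and cdm is assumed to be an initialized *ChunkDiskMapper.

import (
	"log"

	"github.com/prometheus/prometheus/tsdb/chunkenc"
	"github.com/prometheus/prometheus/tsdb/chunks"
)

// writeAndReadBack is a hypothetical helper: write one XOR chunk, then fetch
// it again through the reference returned by WriteChunk.
func writeAndReadBack(cdm *chunks.ChunkDiskMapper) error {
	chk := chunkenc.NewXORChunk()
	app, err := chk.Appender()
	if err != nil {
		return err
	}
	app.Append(1000, 1.5) // (timestamp in ms, value)
	app.Append(2000, 2.5)

	// The callback reports asynchronous write errors when the write queue is enabled.
	ref := cdm.WriteChunk(chunks.HeadSeriesRef(42), 1000, 2000, chk, func(err error) {
		if err != nil {
			log.Println("async chunk write failed:", err)
		}
	})

	// Read the chunk back through the reference that was just returned.
	got, err := cdm.Chunk(ref)
	if err != nil {
		return err
	}
	log.Println("samples in chunk:", got.NumSamples())
	return nil
}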
type ChunkDiskMapperRef ¶
type ChunkDiskMapperRef uint64
ChunkDiskMapperRef represents the location of a head chunk on disk. The upper 4 bytes hold the index of the head chunk file and the lower 4 bytes hold the byte offset in the head chunk file where the chunk starts.
func (ChunkDiskMapperRef) Unpack ¶
func (ref ChunkDiskMapperRef) Unpack() (seq, offset int)
type ChunkRef ¶
type ChunkRef uint64
ChunkRef is a generic reference for reading chunk data. In Prometheus it is either a HeadChunkRef or a BlockChunkRef, though other implementations may have their own reference types.
type CorruptionErr ¶
CorruptionErr is an error that's returned when corruption is encountered.
func (*CorruptionErr) Error ¶
func (e *CorruptionErr) Error() string
type HeadChunkID ¶
type HeadChunkID uint64
HeadChunkID refers to a specific chunk in a series (memSeries) in the Head. Each memSeries has its own monotonically increasing number to refer to its chunks. If the HeadChunkID value is...
- memSeries.firstChunkID+len(memSeries.mmappedChunks), it's the head chunk.
- less than the above, but >= memSeries.firstChunkID, then it's memSeries.mmappedChunks[i] where i = HeadChunkID - memSeries.firstChunkID.
Example: assume a memSeries.firstChunkID=7 and memSeries.mmappedChunks=[p5,p6,p7,p8,p9].

| HeadChunkID value | refers to ...                                                                           |
|-------------------|-----------------------------------------------------------------------------------------|
| 0-6               | chunks that have been compacted to blocks, these won't return data for queries in Head   |
| 7-11              | memSeries.mmappedChunks[i] where i is 0 to 4.                                             |
| 12                | memSeries.headChunk                                                                       |
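The table above can be reproduced with a few lines of arithmetic. Since memSeries is unexported, this standalone sketch reuses the same hypothetical values (firstChunkID=7, five mmapped chunks).

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	firstChunkID := chunks.HeadChunkID(7)
	mmapped := chunks.HeadChunkID(5) // len(memSeries.mmappedChunks)

	resolve := func(id chunks.HeadChunkID) string {
		switch {
		case id == firstChunkID+mmapped:
			return "memSeries.headChunk"
		case id >= firstChunkID:
			return fmt.Sprintf("memSeries.mmappedChunks[%d]", id-firstChunkID)
		default:
			return "compacted to a block; no Head data"
		}
	}

	fmt.Println(resolve(6))  // compacted to a block; no Head data
	fmt.Println(resolve(9))  // memSeries.mmappedChunks[2]
	fmt.Println(resolve(12)) // memSeries.headChunk
}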
type HeadChunkRef ¶
type HeadChunkRef uint64
HeadChunkRef packs a HeadSeriesRef and a ChunkID into a global 8 Byte ID. The HeadSeriesRef and ChunkID may not exceed 5 and 3 bytes respectively.
func NewHeadChunkRef ¶
func NewHeadChunkRef(hsr HeadSeriesRef, chunkID HeadChunkID) HeadChunkRef
func (HeadChunkRef) Unpack ¶
func (p HeadChunkRef) Unpack() (HeadSeriesRef, HeadChunkID)
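A minimal pack/unpack sketch mirroring the BlockChunkRef one above; the series reference and chunk ID values are illustrative and stay within the 5-byte and 3-byte limits noted above.

package main

import (
	"fmt"

	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	ref := chunks.NewHeadChunkRef(chunks.HeadSeriesRef(1000), chunks.HeadChunkID(12))
	hsr, cid := ref.Unpack()
	fmt.Println(hsr, cid) // expected: 1000 12
}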
type Iterator ¶
type Iterator interface {
	// At returns the current meta.
	// It depends on implementation if the chunk is populated or not.
	At() Meta
	// Next advances the iterator by one.
	Next() bool
	// Err returns optional error if Next is false.
	Err() error
}
Iterator iterates over the chunks of a single time series.
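A hedged usage sketch for the interface above; where the Iterator comes from (an index lookup, a chunk series, etc.) is outside this package and not shown, and the helper name is hypothetical.

import "github.com/prometheus/prometheus/tsdb/chunks"

// collectMetas drains an Iterator into a slice, surfacing any iteration error.
func collectMetas(it chunks.Iterator) ([]chunks.Meta, error) {
	var metas []chunks.Meta
	for it.Next() {
		metas = append(metas, it.At())
	}
	return metas, it.Err()
}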
type Meta ¶
type Meta struct {
	// Ref and Chunk hold either a reference that can be used to retrieve
	// chunk data or the data itself.
	// If Chunk is nil, call ChunkReader.Chunk(Meta.Ref) to get the chunk and assign it to the Chunk field.
	Ref   ChunkRef
	Chunk chunkenc.Chunk

	// Time range the data covers.
	// When MaxTime == math.MaxInt64 the chunk is still open and being appended to.
	MinTime, MaxTime int64
}
Meta holds information about a chunk of data.
func (*Meta) OverlapsClosedInterval ¶
OverlapsClosedInterval returns true if the chunk overlaps [mint, maxt].
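A hedged sketch of filtering chunk metas by time range. The parameter list of OverlapsClosedInterval is not spelled out above; (mint, maxt int64) is assumed here, and the helper name is hypothetical.

import "github.com/prometheus/prometheus/tsdb/chunks"

// overlapping keeps only the metas whose data intersects [mint, maxt].
func overlapping(metas []chunks.Meta, mint, maxt int64) []chunks.Meta {
	var out []chunks.Meta
	for _, m := range metas {
		if m.OverlapsClosedInterval(mint, maxt) { // assumed signature
			out = append(out, m)
		}
	}
	return out
}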
type Reader ¶
type Reader struct {
// contains filtered or unexported fields
}
Reader implements a ChunkReader for a serialized byte stream of series data.
func NewDirReader ¶
NewDirReader returns a new Reader against sequentially numbered files in the given directory.
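A hedged sketch of opening a Reader over a block's chunks directory. NewDirReader's parameter list is not shown above; a directory path plus a chunkenc.Pool is assumed, the directory itself is hypothetical, and Close is assumed via the ChunkReader interface the type implements.

package main

import (
	"log"

	"github.com/prometheus/prometheus/tsdb/chunkenc"
	"github.com/prometheus/prometheus/tsdb/chunks"
)

func main() {
	// Assumed signature: NewDirReader(dir string, pool chunkenc.Pool) (*Reader, error).
	r, err := chunks.NewDirReader("path/to/block/chunks", chunkenc.NewPool())
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close() // assumed via the ChunkReader interface
}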
type Writer ¶
type Writer struct {
// contains filtered or unexported fields
}
Writer implements the ChunkWriter interface for the standard serialization format.
func NewWriter ¶
NewWriter returns a new writer against the given directory using the default segment size.
func NewWriterWithSegSize ¶
NewWriterWithSegSize returns a new writer against the given directory and allows setting a custom size for the segments.
func (*Writer) WriteChunks ¶
WriteChunks writes as many chunks as possible to the current segment, cuts a new segment when the current segment is full and writes the rest of the chunks in the new segment.
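A hedged end-to-end sketch for the Writer. None of the signatures are spelled out above, so NewWriter(dir string) (*Writer, error), WriteChunks(chks ...Meta) error, and Close() error (via the ChunkWriter interface) are assumptions, and the helper name, directory, and metas are hypothetical.

import "github.com/prometheus/prometheus/tsdb/chunks"

// writeBlockChunks is a hypothetical helper persisting a set of chunk metas
// into sequentially numbered segment files under dir.
func writeBlockChunks(dir string, metas []chunks.Meta) error {
	w, err := chunks.NewWriter(dir) // assumed signature: NewWriter(dir string) (*Writer, error)
	if err != nil {
		return err
	}
	// Segments are cut automatically once the current one is full, as described above.
	if err := w.WriteChunks(metas...); err != nil { // assumed signature: WriteChunks(chks ...Meta) error
		return err
	}
	return w.Close() // assumed via the ChunkWriter interface Writer implements
}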