chunkenc

package v1.4.1

Published: Apr 6, 2020 License: Apache-2.0 Imports: 24 Imported by: 53

README

Chunk format

  --------------------------------------------------
  | MagicNumber(4b) | version(1b) |
  --------------------------------------------------
  |         block-1 bytes         |  checksum (4b) |
  --------------------------------------------------
  |         block-2 bytes         |  checksum (4b) |
  --------------------------------------------------
  |         block-n bytes         |  checksum (4b) |
  --------------------------------------------------
  |         #blocks (uvarint)                      |
  --------------------------------------------------
  | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) |
  -------------------------------------------------------------------
  | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) |
  -------------------------------------------------------------------
  | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) |
  -------------------------------------------------------------------
  | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) |
  -------------------------------------------------------------------
  |                      checksum(from #blocks)                     |
  -------------------------------------------------------------------
  | metasOffset - offset to the point with #blocks |
  --------------------------------------------------
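As a minimal sketch of how the fixed header above could be read, the snippet below pulls out the 4-byte magic number and 1-byte version. The helper name and the big-endian byte order are assumptions for illustration, not part of this package's API; it assumes the encoding/binary and errors packages are imported.

// readHeader is a hypothetical helper; byte order is assumed big-endian.
func readHeader(b []byte) (magic uint32, version byte, err error) {
	if len(b) < 5 {
		return 0, 0, errors.New("chunk too short")
	}
	magic = binary.BigEndian.Uint32(b[0:4]) // MagicNumber(4b)
	version = b[4]                          // version(1b)
	return magic, version, nil
}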

Documentation

Index

Constants

const GzipLogChunk = encoding.Encoding(128)

GzipLogChunk is a cortex encoding type for our chunks. Deprecated: the chunk encoding/compression format is inside the chunk data.

const LogChunk = encoding.Encoding(129)

LogChunk is a cortex encoding type for our chunks.

Variables

var (
	ErrChunkFull       = errors.New("chunk full")
	ErrOutOfOrder      = errors.New("entry out of order")
	ErrInvalidSize     = errors.New("invalid size")
	ErrInvalidFlag     = errors.New("invalid flag")
	ErrInvalidChecksum = errors.New("invalid checksum")
)

Errors returned by the chunk interface.

var (
	// Gzip is the gnu zip compression pool
	Gzip     = GzipPool{/* contains filtered or unexported fields */}
	Lz4_64k  = LZ4Pool{/* contains filtered or unexported fields */} // Lz4_64k is the LZ4 compression pool, with 64k buffer size
	Lz4_256k = LZ4Pool{/* contains filtered or unexported fields */} // Lz4_256k uses 256k buffer
	Lz4_1M   = LZ4Pool{/* contains filtered or unexported fields */} // Lz4_1M uses 1M buffer
	Lz4_4M   = LZ4Pool{/* contains filtered or unexported fields */} // Lz4_4M uses 4M buffer

	// Snappy is the snappy compression pool
	Snappy SnappyPool
	// Noop is the no compression pool
	Noop NoopPool

	// BufReaderPool is a bufio.Reader pool
	BufReaderPool = &BufioReaderPool{
		pool: sync.Pool{
			New: func() interface{} { return bufio.NewReader(nil) },
		},
	}
	// BytesBufferPool is a pool of byte buffers used for decompressed lines.
	// Buckets [0.5KB,1KB,2KB,4KB,8KB]
	BytesBufferPool = pool.New(1<<9, 1<<13, 2, func(size int) interface{} { return make([]byte, 0, size) })
)
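As a minimal sketch of how the writer pools above might be used, the snippet below compresses a byte slice with the Gzip pool. The helper name is illustrative only, and the bytes package is assumed imported.

func gzipCompress(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w := Gzip.GetWriter(&buf) // pooled gzip writer, reset to write to &buf
	defer Gzip.PutWriter(w)   // return the writer to the pool when done
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil { // flush the compressed stream
		return nil, err
	}
	return buf.Bytes(), nil
}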

Functions

func NewFacade

func NewFacade(c Chunk) encoding.Chunk

NewFacade makes a new Facade.

func SupportedEncoding added in v1.3.0

func SupportedEncoding() string

SupportedEncoding returns the list of supported Encodings.

func UncompressedSize added in v0.4.0

func UncompressedSize(c encoding.Chunk) (int, bool)

UncompressedSize is a helper function to hide the type assertion kludge when wanting the uncompressed size of the Cortex interface encoding.Chunk.
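A minimal sketch combining NewFacade and UncompressedSize; the function name is illustrative only, and the fmt package is assumed imported.

func reportSize(c Chunk) {
	ec := NewFacade(c) // wrap the Loki chunk as a Cortex encoding.Chunk
	if size, ok := UncompressedSize(ec); ok {
		fmt.Println("uncompressed size:", size)
	}
}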

Types

type BufioReaderPool added in v0.2.0

type BufioReaderPool struct {
	// contains filtered or unexported fields
}

BufioReaderPool is a pool of bufio.Readers backed by sync.Pool.

func (*BufioReaderPool) Get added in v0.2.0

func (bufPool *BufioReaderPool) Get(r io.Reader) *bufio.Reader

Get returns a bufio.Reader which reads from r. The buffer size is that of the pool.

func (*BufioReaderPool) Put added in v0.2.0

func (bufPool *BufioReaderPool) Put(b *bufio.Reader)

Put puts the bufio.Reader back into the pool.
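A minimal sketch of borrowing and returning a pooled reader; the helper name is illustrative, and the io package is assumed imported.

func readFirstLine(src io.Reader) (string, error) {
	br := BufReaderPool.Get(src) // pooled *bufio.Reader, reset to read from src
	defer BufReaderPool.Put(br)  // hand it back when done
	return br.ReadString('\n')
}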

type Chunk

type Chunk interface {
	Bounds() (time.Time, time.Time)
	SpaceFor(*logproto.Entry) bool
	Append(*logproto.Entry) error
	Iterator(ctx context.Context, from, through time.Time, direction logproto.Direction, filter logql.LineFilter) (iter.EntryIterator, error)
	Size() int
	Bytes() ([]byte, error)
	Blocks() int
	Utilization() float64
	UncompressedSize() int
	CompressedSize() int
	Close() error
}

Chunk is the interface for the compressed logs chunk format.
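As a minimal sketch of driving this interface, the snippet below appends entries until the chunk reports it is full; it assumes the logproto package is imported and that entries arrive in timestamp order.

func fillChunk(c Chunk, entries []*logproto.Entry) error {
	for _, e := range entries {
		if !c.SpaceFor(e) { // the chunk cannot hold this entry
			return ErrChunkFull
		}
		if err := c.Append(e); err != nil {
			return err // e.g. ErrOutOfOrder for a late entry
		}
	}
	return nil
}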

func NewDumbChunk

func NewDumbChunk() Chunk

NewDumbChunk returns a new chunk that isn't very good.

type Encoding

type Encoding byte

Encoding is the identifier for a chunk encoding.

const (
	EncNone Encoding = iota
	EncGZIP
	EncDumb
	EncLZ4_64k
	EncSnappy

	// Added for testing.
	EncLZ4_256k
	EncLZ4_1M
	EncLZ4_4M
)

The different available encodings. Make sure to preserve the order, as these numeric values are written to the chunks!

func ParseEncoding added in v1.3.0

func ParseEncoding(enc string) (Encoding, error)

ParseEncoding parses a chunk encoding (compression algorithm) by its name.

func (Encoding) String

func (e Encoding) String() string
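A minimal sketch of round-tripping an encoding name with ParseEncoding and String; the accepted string "gzip" is an assumption based on the encoding names above, and the fmt package is assumed imported.

func mustParseGzip() Encoding {
	enc, err := ParseEncoding("gzip") // assumed spelling of the EncGZIP name
	if err != nil {
		panic(err)
	}
	fmt.Println(enc.String()) // prints the canonical name back
	return enc
}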

type Facade

type Facade struct {
	encoding.Chunk
	// contains filtered or unexported fields
}

Facade provides compatibility with the Cortex chunk type, so we can use its chunk store.

func (Facade) Encoding

func (Facade) Encoding() encoding.Encoding

Encoding implements encoding.Chunk.

func (Facade) LokiChunk

func (f Facade) LokiChunk() Chunk

LokiChunk returns the chunkenc.Chunk.

func (Facade) Marshal

func (f Facade) Marshal(w io.Writer) error

Marshal implements encoding.Chunk.

func (*Facade) UnmarshalFromBuf

func (f *Facade) UnmarshalFromBuf(buf []byte) error

UnmarshalFromBuf implements encoding.Chunk.

func (Facade) Utilization added in v0.4.0

func (f Facade) Utilization() float64

Utilization implements encoding.Chunk.
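A minimal sketch of serializing a Loki chunk through the Facade; it assumes the bytes and errors packages are imported, and that NewFacade returns a *Facade, which is an implementation assumption rather than something stated above.

func marshalThroughFacade(c Chunk) ([]byte, error) {
	f, ok := NewFacade(c).(*Facade) // assumption: the concrete type is *Facade
	if !ok {
		return nil, errors.New("unexpected facade type")
	}
	var buf bytes.Buffer
	if err := f.Marshal(&buf); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}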

type GzipPool added in v0.2.0

type GzipPool struct {
	// contains filtered or unexported fields
}

GzipPool is a gzip compression pool.

func (*GzipPool) GetReader added in v0.2.0

func (pool *GzipPool) GetReader(src io.Reader) io.Reader

GetReader gets or creates a new CompressionReader and resets it to read from src.

func (*GzipPool) GetWriter added in v0.2.0

func (pool *GzipPool) GetWriter(dst io.Writer) io.WriteCloser

GetWriter gets or creates a new CompressionWriter and resets it to write to dst.

func (*GzipPool) PutReader added in v0.2.0

func (pool *GzipPool) PutReader(reader io.Reader)

PutReader places a CompressionReader back in the pool.

func (*GzipPool) PutWriter added in v0.2.0

func (pool *GzipPool) PutWriter(writer io.WriteCloser)

PutWriter places a CompressionWriter back in the pool.

type LZ4Pool added in v1.3.0

type LZ4Pool struct {
	// contains filtered or unexported fields
}

func (*LZ4Pool) GetReader added in v1.3.0

func (pool *LZ4Pool) GetReader(src io.Reader) io.Reader

GetReader gets or creates a new CompressionReader and resets it to read from src.

func (*LZ4Pool) GetWriter added in v1.3.0

func (pool *LZ4Pool) GetWriter(dst io.Writer) io.WriteCloser

GetWriter gets or creates a new CompressionWriter and resets it to write to dst.

func (*LZ4Pool) PutReader added in v1.3.0

func (pool *LZ4Pool) PutReader(reader io.Reader)

PutReader places a CompressionReader back in the pool.

func (*LZ4Pool) PutWriter added in v1.3.0

func (pool *LZ4Pool) PutWriter(writer io.WriteCloser)

PutWriter places a CompressionWriter back in the pool.

type LazyChunk

type LazyChunk struct {
	Chunk   chunk.Chunk
	Fetcher *chunk.Fetcher
}

LazyChunk loads the chunk when it is accessed.

func (*LazyChunk) Iterator

func (c *LazyChunk) Iterator(ctx context.Context, from, through time.Time, direction logproto.Direction, filter logql.LineFilter) (iter.EntryIterator, error)

Iterator returns an entry iterator.

type MemChunk

type MemChunk struct {
	// contains filtered or unexported fields
}

MemChunk implements compressed log chunks.

func NewByteChunk

func NewByteChunk(b []byte) (*MemChunk, error)

NewByteChunk returns a MemChunk backed by the passed bytes.

func NewMemChunk

func NewMemChunk(enc Encoding) *MemChunk

NewMemChunk returns a new in-mem chunk for query.

func NewMemChunkSize

func NewMemChunkSize(enc Encoding, blockSize, targetSize int) *MemChunk

NewMemChunkSize returns a new in-mem chunk with the given block and target sizes. Mainly used to honour configured size limits.
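A minimal sketch of building a MemChunk, serializing it, and reopening it from its bytes; it assumes the time and logproto packages are imported and that logproto.Entry has Timestamp and Line fields.

func roundTrip() (*MemChunk, error) {
	c := NewMemChunk(EncGZIP)
	err := c.Append(&logproto.Entry{Timestamp: time.Now(), Line: "hello world"})
	if err != nil {
		return nil, err
	}
	b, err := c.Bytes() // serialize to the chunk format described above
	if err != nil {
		return nil, err
	}
	return NewByteChunk(b) // reopen a chunk backed by those bytes
}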

func (*MemChunk) Append

func (c *MemChunk) Append(entry *logproto.Entry) error

Append implements Chunk.

func (*MemChunk) Blocks added in v1.3.0

func (c *MemChunk) Blocks() int

Blocks implements Chunk.

func (*MemChunk) Bounds

func (c *MemChunk) Bounds() (fromT, toT time.Time)

Bounds implements Chunk.

func (*MemChunk) Bytes

func (c *MemChunk) Bytes() ([]byte, error)

Bytes implements Chunk.

func (*MemChunk) Close

func (c *MemChunk) Close() error

Close implements Chunk. TODO: Fix this to check edge cases.

func (*MemChunk) CompressedSize added in v1.3.0

func (c *MemChunk) CompressedSize() int

CompressedSize implements Chunk.

func (*MemChunk) Encoding

func (c *MemChunk) Encoding() Encoding

Encoding implements Chunk.

func (*MemChunk) Iterator

func (c *MemChunk) Iterator(ctx context.Context, mintT, maxtT time.Time, direction logproto.Direction, filter logql.LineFilter) (iter.EntryIterator, error)

Iterator implements Chunk.
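A minimal sketch of reading entries back out of a chunk; it assumes the context, time, fmt, and logproto packages are imported, that a nil logql.LineFilter means no filtering, and that iter.EntryIterator exposes Next, Entry, Error, and Close as used below.

func printAll(ctx context.Context, c *MemChunk) error {
	from, through := c.Bounds()
	// The extra nanosecond assumes the upper bound is treated as exclusive.
	it, err := c.Iterator(ctx, from, through.Add(time.Nanosecond), logproto.FORWARD, nil)
	if err != nil {
		return err
	}
	defer it.Close()
	for it.Next() {
		e := it.Entry()
		fmt.Println(e.Timestamp, e.Line)
	}
	return it.Error()
}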

func (*MemChunk) Size

func (c *MemChunk) Size() int

Size implements Chunk.

func (*MemChunk) SpaceFor

func (c *MemChunk) SpaceFor(e *logproto.Entry) bool

SpaceFor implements Chunk.

func (*MemChunk) UncompressedSize added in v0.4.0

func (c *MemChunk) UncompressedSize() int

UncompressedSize implements Chunk.

func (*MemChunk) Utilization added in v0.4.0

func (c *MemChunk) Utilization() float64

Utilization implements Chunk.

type NoopPool added in v1.3.0

type NoopPool struct{}

func (*NoopPool) GetReader added in v1.3.0

func (pool *NoopPool) GetReader(src io.Reader) io.Reader

GetReader gets or creates a new CompressionReader and resets it to read from src.

func (*NoopPool) GetWriter added in v1.3.0

func (pool *NoopPool) GetWriter(dst io.Writer) io.WriteCloser

GetWriter gets or creates a new CompressionWriter and resets it to write to dst.

func (*NoopPool) PutReader added in v1.3.0

func (pool *NoopPool) PutReader(reader io.Reader)

PutReader places a CompressionReader back in the pool.

func (*NoopPool) PutWriter added in v1.3.0

func (pool *NoopPool) PutWriter(writer io.WriteCloser)

PutWriter places a CompressionWriter back in the pool.

type ReaderPool added in v1.3.0

type ReaderPool interface {
	GetReader(io.Reader) io.Reader
	PutReader(io.Reader)
}

ReaderPool is similar to WriterPool but for reading chunks.
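A minimal sketch of decompressing a block with any ReaderPool; the helper name is illustrative, and the bytes and io/ioutil packages are assumed imported.

func decompress(p ReaderPool, compressed []byte) ([]byte, error) {
	r := p.GetReader(bytes.NewReader(compressed)) // pooled reader over the block
	defer p.PutReader(r)
	return ioutil.ReadAll(r)
}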

type SnappyPool added in v1.3.0

type SnappyPool struct {
	// contains filtered or unexported fields
}

func (*SnappyPool) GetReader added in v1.3.0

func (pool *SnappyPool) GetReader(src io.Reader) io.Reader

GetReader gets or creates a new CompressionReader and resets it to read from src.

func (*SnappyPool) GetWriter added in v1.3.0

func (pool *SnappyPool) GetWriter(dst io.Writer) io.WriteCloser

GetWriter gets or creates a new CompressionWriter and resets it to write to dst.

func (*SnappyPool) PutReader added in v1.3.0

func (pool *SnappyPool) PutReader(reader io.Reader)

PutReader places a CompressionReader back in the pool.

func (*SnappyPool) PutWriter added in v1.3.0

func (pool *SnappyPool) PutWriter(writer io.WriteCloser)

PutWriter places a CompressionWriter back in the pool.

type WriterPool added in v1.3.0

type WriterPool interface {
	GetWriter(io.Writer) io.WriteCloser
	PutWriter(io.WriteCloser)
}

WriterPool is a pool of io.Writers. This is used by every chunk to avoid unnecessary allocations.
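A hypothetical helper, not part of this package's exported API, that maps a subset of encodings to the exported pool variables; it relies only on the WriterPool interface and the variables documented above.

func writerPoolFor(enc Encoding) WriterPool {
	switch enc {
	case EncGZIP:
		return &Gzip
	case EncSnappy:
		return &Snappy
	case EncLZ4_64k:
		return &Lz4_64k
	case EncLZ4_256k:
		return &Lz4_256k
	case EncLZ4_1M:
		return &Lz4_1M
	case EncLZ4_4M:
		return &Lz4_4M
	default:
		return &Noop // EncNone and anything unknown fall back to no compression
	}
}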
