block

package
v1.0.0 Latest

This package is not in the latest version of its module.
Published: Aug 29, 2023 License: AGPL-3.0 Imports: 26 Imported by: 0

Documentation

Index

Constants

View Source
const (
	IndexFilename        = "index.tsdb"
	ParquetSuffix        = ".parquet"
	DeletionMarkFilename = "deletion-mark.json"

	HostnameLabel = "__hostname__"
)
View Source
const (
	// MetaVersion1 is the initial version of the Pyroscope section of the TSDB meta supported by Pyroscope.
	MetaVersion1 = MetaVersion(1)

	// MetaVersion2 indicates the block format version.
	// https://github.com/grafana/phlare/pull/767.
	//  1. In this version we introduced symdb:
	//     - stacktraces.parquet table has been deprecated.
	//     - StacktracePartition column added to profiles.parquet table.
	//     - symdb is stored in ./symbols sub-directory.
	//  2. TotalValue column added to profiles.parquet table.
	//  3. pprof labels discarded and never stored in the block.
	MetaVersion2 = MetaVersion(2)

	// MetaVersion3 indicates the block format version.
	// https://github.com/grafana/pyroscope/pull/2196.
	//  1. Introduction of symdb v2:
	//     - locations, functions, mappings, strings parquet tables
	//       moved to ./symbols sub-directory (symdb) and partitioned
	//       by StacktracePartition. References to the partitions
	//       are stored in the index.symdb file.
	//  2. In this version, parquet tables are never loaded into
	//     memory entirely. Instead, each partition (row range) is read
	//     from the block on demand at query time.
	MetaVersion3 = MetaVersion(3)
)
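A reader of meta.json may need to gate behavior on the block format version, since only v3 blocks store partitioned symbols under ./symbols. A minimal sketch, where the MetaVersion values mirror the constants above and the helper name is hypothetical:

```go
package main

import "fmt"

// MetaVersion mirrors the package's integer version enum.
type MetaVersion int

const (
	MetaVersion1 = MetaVersion(1)
	MetaVersion2 = MetaVersion(2)
	MetaVersion3 = MetaVersion(3)
)

// hasSymdbPartitions is a hypothetical helper: only v3 blocks
// partition symbols by StacktracePartition under ./symbols
// with references in index.symdb.
func hasSymdbPartitions(v MetaVersion) bool {
	return v >= MetaVersion3
}

func main() {
	fmt.Println(hasSymdbPartitions(MetaVersion2)) // false
	fmt.Println(hasSymdbPartitions(MetaVersion3)) // true
}
```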
View Source
const (
	MetaFilename = "meta.json"
)

Variables

This section is empty.

Functions

func Delete

func Delete(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID) error

Delete removes the directory that is meant to be the block directory. NOTE: always prefer this method for deleting blocks.

  • We have to delete the block's files in a certain order (meta.json first and deletion-mark.json last) to ensure we don't end up with malformed partial blocks. Thanos handles partial blocks well only if they don't have meta.json; if meta.json is present, Thanos assumes the block is valid.
  • This also avoids deleting an empty directory (the whole bucket) by mistake.
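The ordering invariant above can be captured as a small sort: meta.json is removed first (so concurrent readers see the block as partial immediately) and deletion-mark.json last. This is a sketch of the ordering logic only, not the package's actual implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// deletionOrder sorts a block's object keys so that meta.json is
// deleted first and deletion-mark.json last. Once meta.json is gone,
// the block is treated as partial and ignored by readers, so the
// remaining deletes cannot leave a "valid-looking" malformed block.
func deletionOrder(files []string) []string {
	rank := func(f string) int {
		switch f {
		case "meta.json":
			return 0
		case "deletion-mark.json":
			return 2
		default:
			return 1
		}
	}
	out := append([]string(nil), files...)
	sort.SliceStable(out, func(i, j int) bool { return rank(out[i]) < rank(out[j]) })
	return out
}

func main() {
	fmt.Println(deletionOrder([]string{
		"profiles.parquet", "deletion-mark.json", "meta.json", "index.tsdb",
	}))
}
```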

func HashBlockID

func HashBlockID(id ulid.ULID) uint32

HashBlockID returns a 32-bit hash of the block ID useful for ring-based sharding.
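A 32-bit hash of the block ID can be mapped onto a ring of shards with a simple modulo. The FNV-1a hash below is an assumption for illustration; the package's actual hash function may differ:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor sketches ring-based sharding on a 32-bit hash of a block
// ID string, the kind of use HashBlockID enables. FNV-1a is an
// assumed hash here, not necessarily what the package uses.
func shardFor(blockID string, shards uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(blockID))
	return h.Sum32() % shards
}

func main() {
	id := "01H8ZYX0000000000000000000" // example ULID string
	fmt.Println(shardFor(id, 16))
}
```

The same block ID always lands on the same shard, which is what makes the hash useful for consistent ownership decisions across a ring.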

func InRange

func InRange(min, max, start, end model.Time) bool
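A range check like this is commonly an interval-overlap test. A sketch using int64 millisecond timestamps in place of model.Time; the inclusive boundary semantics are an assumption, not confirmed by the signature alone:

```go
package main

import "fmt"

// inRange reports whether the block range [min, max] overlaps the
// query range [start, end]. Inclusive edges are assumed; the
// package may treat boundaries differently.
func inRange(min, max, start, end int64) bool {
	return min <= end && start <= max
}

func main() {
	fmt.Println(inRange(100, 200, 150, 300)) // overlapping ranges
	fmt.Println(inRange(100, 200, 250, 300)) // disjoint ranges
}
```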

func IsBlockDir

func IsBlockDir(path string) (id ulid.ULID, ok bool)

func IterBlockMetas

func IterBlockMetas(ctx context.Context, bkt phlareobj.Bucket, from, to time.Time, fn func(*Meta)) error

IterBlockMetas iterates over all block metas in the given time range, calling the given function for each block meta. It returns the first error returned by the function, or nil if all calls succeed. The function is called concurrently. It currently does not work with a filesystem bucket.

func ListBlocks

func ListBlocks(path string, ulidMinTime time.Time) (map[ulid.ULID]*Meta, error)

func Upload

func Upload(ctx context.Context, logger log.Logger, bkt objstore.Bucket, bdir string) error

Upload uploads a TSDB block to object storage. It verifies basic features of the Thanos block format.

Types

type BlockStats

type BlockStats struct {
	NumSamples  uint64 `json:"numSamples,omitempty"`
	NumSeries   uint64 `json:"numSeries,omitempty"`
	NumProfiles uint64 `json:"numProfiles,omitempty"`
}

type File

type File struct {
	RelPath string `json:"relPath"`
	// SizeBytes is optional (e.g. meta.json does not report its size).
	SizeBytes uint64 `json:"sizeBytes,omitempty"`

	// Parquet contains optional Parquet file info.
	Parquet *ParquetFile `json:"parquet,omitempty"`
	// TSDB contains optional TSDB file info.
	TSDB *TSDBFile `json:"tsdb,omitempty"`
}

type Meta

type Meta struct {
	// Unique identifier for the block and its contents. Changes on compaction.
	ULID ulid.ULID `json:"ulid"`

	// MinTime and MaxTime specify the time range all samples
	// in the block are in.
	MinTime model.Time `json:"minTime"`
	MaxTime model.Time `json:"maxTime"`

	// Stats about the contents of the block.
	Stats BlockStats `json:"stats,omitempty"`

	// Files is a list of all files in the block directory of this block
	// known to PyroscopeDB, sorted by relative path.
	Files []File `json:"files,omitempty"`

	// Information on compactions the block was created from.
	Compaction tsdb.BlockMetaCompaction `json:"compaction"`

	// Version of the index format.
	Version MetaVersion `json:"version"`

	// Labels are the external labels identifying the producer as well as tenant.
	Labels map[string]string `json:"labels,omitempty"`

	// Source is the original upload source of the block.
	Source SourceType `json:"source,omitempty"`
}

func DownloadMeta

func DownloadMeta(ctx context.Context, logger log.Logger, bkt objstore.Bucket, id ulid.ULID) (Meta, error)

DownloadMeta downloads only meta file from bucket by block ID. TODO(bwplotka): Differentiate between network error & partial upload.

func MetaFromDir

func MetaFromDir(dir string) (*Meta, int64, error)

func NewMeta

func NewMeta() *Meta

func Read

func Read(rc io.ReadCloser) (_ *Meta, err error)

Read the block meta from the given reader.

func ReadFromDir

func ReadFromDir(dir string) (*Meta, error)

ReadFromDir reads the given meta from <dir>/meta.json.

func SortBlocks

func SortBlocks(metas map[ulid.ULID]*Meta) []*Meta
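Turning the map into an ordered slice presumably means sorting by block time; ordering by MinTime with a ULID tie-break is an assumption here, not confirmed by the signature. A self-contained sketch with simplified types:

```go
package main

import (
	"fmt"
	"sort"
)

// meta is a minimal stand-in for *Meta with just the fields the
// sketch needs.
type meta struct {
	ULID    string
	MinTime int64
}

// sortByMinTime sketches what SortBlocks plausibly does: flatten the
// map and order blocks by start time, tie-breaking on ULID (both
// choices are assumptions).
func sortByMinTime(metas map[string]meta) []meta {
	out := make([]meta, 0, len(metas))
	for _, m := range metas {
		out = append(out, m)
	}
	sort.Slice(out, func(i, j int) bool {
		if out[i].MinTime != out[j].MinTime {
			return out[i].MinTime < out[j].MinTime
		}
		return out[i].ULID < out[j].ULID
	})
	return out
}

func main() {
	ms := map[string]meta{
		"b": {ULID: "b", MinTime: 200},
		"a": {ULID: "a", MinTime: 100},
	}
	fmt.Println(sortByMinTime(ms))
}
```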

func (*Meta) Clone

func (m *Meta) Clone() *Meta

func (*Meta) FileByRelPath

func (m *Meta) FileByRelPath(name string) *File

func (*Meta) InRange

func (m *Meta) InRange(start, end model.Time) bool

func (*Meta) String

func (m *Meta) String() string

func (*Meta) TSDBBlockMeta

func (meta *Meta) TSDBBlockMeta() tsdb.BlockMeta

func (*Meta) WriteTo

func (meta *Meta) WriteTo(w io.Writer) (int64, error)

func (*Meta) WriteToFile

func (meta *Meta) WriteToFile(logger log.Logger, dir string) (int64, error)

type MetaVersion

type MetaVersion int

type ParquetFile

type ParquetFile struct {
	NumRowGroups uint64 `json:"numRowGroups,omitempty"`
	NumRows      uint64 `json:"numRows,omitempty"`
}

type SourceType

type SourceType string
const (
	UnknownSource   SourceType = ""
	IngesterSource  SourceType = "ingester"
	CompactorSource SourceType = "compactor"
)

type TSDBFile

type TSDBFile struct {
	NumSeries uint64 `json:"numSeries,omitempty"`
}
