Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
var (
	ErrMetaDoesNotExist = fmt.Errorf("meta does not exist")
	ErrEmptyTenantID    = fmt.Errorf("empty tenant id")
	ErrEmptyBlockID     = fmt.Errorf("empty block id")
)
var SupportedEncoding = []Encoding{
	EncNone,
	EncGZIP,
	EncLZ4_64k,
	EncLZ4_256k,
	EncLZ4_1M,
	EncLZ4_4M,
	EncSnappy,
	EncZstd,
}
SupportedEncoding is a slice of all supported encodings
Functions ¶
func SupportedEncodingString ¶ added in v0.6.0
func SupportedEncodingString() string
SupportedEncodingString returns a string listing the supported encodings.
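A minimal usage sketch, assuming this package is imported as backend from github.com/grafana/tempo/tempodb/backend (the import path is an assumption) and that "fmt" is imported:

	// Report the encodings this package supports, e.g. when rejecting a config value.
	fmt.Printf("unsupported encoding; supported values: %s\n", backend.SupportedEncodingString())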
Types ¶
type AllReader ¶ added in v0.7.0
AllReader is an interface that supports both io.Reader and io.ReaderAt methods
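The declaration itself is not reproduced on this page; a sketch consistent with the description above (an assumption, not copied from the source) would be:

	type AllReader interface {
		io.Reader
		io.ReaderAt
	}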
type AppendTracker ¶
type AppendTracker interface{}
AppendTracker is an empty interface used by the backend to track a long-running append operation (see the Writer sketch at the end of this page).
type BlockMeta ¶ added in v0.5.0
type BlockMeta struct {
	Version         string    `json:"format"`          // Version indicates the block format version. This includes specifics of how the indexes and data are stored
	BlockID         uuid.UUID `json:"blockID"`         // Unique block id
	MinID           []byte    `json:"minID"`           // Minimum object id stored in this block
	MaxID           []byte    `json:"maxID"`           // Maximum object id stored in this block
	TenantID        string    `json:"tenantID"`        // ID of the tenant to which this block belongs
	StartTime       time.Time `json:"startTime"`       // Currently mostly meaningless but roughly matches the time the first obj was written to this block
	EndTime         time.Time `json:"endTime"`         // Currently mostly meaningless but roughly matches the time the last obj was written to this block
	TotalObjects    int       `json:"totalObjects"`    // Total objects in this block
	Size            uint64    `json:"size"`            // Total size in bytes of the data object
	CompactionLevel uint8     `json:"compactionLevel"` // Roughly the number of times this block has been compacted
	Encoding        Encoding  `json:"encoding"`        // Encoding/compression format
	IndexPageSize   uint32    `json:"indexPageSize"`   // Size of each index page in bytes
	TotalRecords    uint32    `json:"totalRecords"`    // Total records stored in the index file
	DataEncoding    string    `json:"dataEncoding"`    // DataEncoding is a string provided externally, but tracked by tempodb, that indicates how the bytes are encoded
	BloomShardCount uint16    `json:"bloomShards"`     // Number of bloom filter shards
}
func NewBlockMeta ¶ added in v0.5.0
func (*BlockMeta) ObjectAdded ¶ added in v0.5.0
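A hedged sketch of populating a BlockMeta by hand using the fields documented above. In practice NewBlockMeta and ObjectAdded would be used, but their signatures are not shown on this page, so the values below are illustrative assumptions only:

	meta := &backend.BlockMeta{
		Version:   "v2",             // assumed format version string
		BlockID:   uuid.New(),       // assumes github.com/google/uuid
		TenantID:  "single-tenant",
		StartTime: time.Now(),
		Encoding:  backend.EncZstd,
	}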
type CompactedBlockMeta ¶ added in v0.5.0
type Compactor ¶
type Compactor interface {
	MarkBlockCompacted(blockID uuid.UUID, tenantID string) error
	ClearBlock(blockID uuid.UUID, tenantID string) error
	CompactedBlockMeta(blockID uuid.UUID, tenantID string) (*CompactedBlockMeta, error)
}
Compactor is a collection of methods to interact with compacted elements of a tempodb block
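A minimal sketch, assuming some Compactor implementation c and the imports noted earlier, that marks a block compacted and then removes it:

	func retireBlock(c backend.Compactor, blockID uuid.UUID, tenantID string) error {
		// Flag the block as compacted so readers stop treating it as live.
		if err := c.MarkBlockCompacted(blockID, tenantID); err != nil {
			return err
		}
		// Later, once it is safe to do so, delete the compacted block entirely.
		return c.ClearBlock(blockID, tenantID)
	}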
type ContextReader ¶ added in v0.7.0
type ContextReader interface {
	ReadAt(ctx context.Context, p []byte, off int64) (int, error)
	ReadAll(ctx context.Context) ([]byte, error)

	// Return an io.Reader representing the underlying object. May not be supported by all implementations.
	Reader() (io.Reader, error)
}
ContextReader is an io.ReaderAt-style interface that passes a context. It is used to simplify access to backend objects and to abstract away the name/meta and other details so that the data can be accessed directly.
func NewContextReader ¶ added in v0.7.0
func NewContextReader(meta *BlockMeta, name string, r Reader) ContextReader
NewContextReader creates a ContextReader for the given BlockMeta.
func NewContextReaderWithAllReader ¶ added in v0.7.0
func NewContextReaderWithAllReader(r AllReader) ContextReader
NewContextReaderWithAllReader wraps a plain AllReader and drops the context.
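A minimal sketch, assuming the imports noted earlier plus "bytes" and "context". bytes.Reader provides both io.Reader and io.ReaderAt, so it can back a ContextReader:

	data := bytes.NewReader([]byte("example object"))
	cr := backend.NewContextReaderWithAllReader(data)

	buf := make([]byte, 7)
	// The wrapped reader ignores the context; it is only there to satisfy ContextReader.
	if _, err := cr.ReadAt(context.Background(), buf, 0); err != nil {
		// handle error
	}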
type Encoding ¶ added in v0.6.0
type Encoding byte
Encoding is the identifier for a chunk encoding.
const (
	EncNone Encoding = iota
	EncGZIP
	EncLZ4_64k
	EncLZ4_256k
	EncLZ4_1M
	EncLZ4_4M
	EncSnappy
	EncZstd
)
The different available encodings. Make sure to preserve the order, as these numeric values are written to the chunks!
func ParseEncoding ¶ added in v0.6.0
ParseEncoding parses a chunk encoding (compression algorithm) by its name.
func (Encoding) MarshalJSON ¶ added in v0.6.0
MarshalJSON implements the marshaler interface of the json pkg.
func (Encoding) MarshalYAML ¶ added in v0.6.0
MarshalYAML implements the Marshaler interface of the yaml pkg.
func (*Encoding) UnmarshalJSON ¶ added in v0.6.0
UnmarshalJSON implements the Unmarshaler interface of the json pkg.
func (*Encoding) UnmarshalYAML ¶ added in v0.6.0
UnmarshalYAML implements the Unmarshaler interface of the yaml pkg.
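A hedged sketch of round-tripping an encoding name, assuming "encoding/json" is imported; the exact ParseEncoding signature and the name string "zstd" are assumptions based on the constants above:

	enc, err := backend.ParseEncoding("zstd")
	if err != nil {
		// unknown encoding name
	}

	// MarshalJSON presumably renders the Encoding as its textual name rather than
	// its numeric value (an assumption; only the method's existence is documented).
	b, _ := json.Marshal(enc)
	fmt.Println(string(b))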
type Reader ¶
type Reader interface {
	// Read is for reading entire objects from the backend. It is expected that there will be an attempt to retrieve this from cache
	Read(ctx context.Context, name string, blockID uuid.UUID, tenantID string) ([]byte, error)

	// ReadReader is for streaming entire objects from the backend. It is expected this will _not_ be cached.
	ReadReader(ctx context.Context, name string, blockID uuid.UUID, tenantID string) (io.ReadCloser, int64, error)

	// ReadRange is for reading parts of large objects from the backend. It is expected this will _not_ be cached.
	ReadRange(ctx context.Context, name string, blockID uuid.UUID, tenantID string, offset uint64, buffer []byte) error

	Tenants(ctx context.Context) ([]string, error)
	Blocks(ctx context.Context, tenantID string) ([]uuid.UUID, error)
	BlockMeta(ctx context.Context, blockID uuid.UUID, tenantID string) (*BlockMeta, error)
	Shutdown()
}
Reader is a collection of methods to read data from tempodb backends
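A minimal sketch, assuming some Reader implementation r, that lists a tenant's blocks and loads each block's meta (error handling trimmed to the essentials):

	func listBlockMetas(ctx context.Context, r backend.Reader, tenantID string) ([]*backend.BlockMeta, error) {
		blockIDs, err := r.Blocks(ctx, tenantID)
		if err != nil {
			return nil, err
		}

		metas := make([]*backend.BlockMeta, 0, len(blockIDs))
		for _, id := range blockIDs {
			meta, err := r.BlockMeta(ctx, id, tenantID)
			if err != nil {
				return nil, err
			}
			metas = append(metas, meta)
		}
		return metas, nil
	}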
type Writer ¶
type Writer interface {
	// Write is for in memory data. It is expected that this data will be cached.
	Write(ctx context.Context, name string, blockID uuid.UUID, tenantID string, buffer []byte) error

	// WriteReader is for larger data payloads streamed through an io.Reader. It is expected this will _not_ be cached.
	WriteReader(ctx context.Context, name string, blockID uuid.UUID, tenantID string, data io.Reader, size int64) error

	WriteBlockMeta(ctx context.Context, meta *BlockMeta) error
	Append(ctx context.Context, name string, blockID uuid.UUID, tenantID string, tracker AppendTracker, buffer []byte) (AppendTracker, error)
	CloseAppend(ctx context.Context, tracker AppendTracker) error
}
Writer is a collection of methods to write data to tempodb backends
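A hedged sketch of a long-running append using some Writer implementation w. That a nil AppendTracker starts a new append is an assumption; this page only documents the interface shape:

	func appendAll(ctx context.Context, w backend.Writer, name string, blockID uuid.UUID, tenantID string, buffers [][]byte) error {
		var tracker backend.AppendTracker // nil tracker assumed to begin a new append

		for _, b := range buffers {
			var err error
			// Each Append returns the tracker to pass to the next call.
			tracker, err = w.Append(ctx, name, blockID, tenantID, tracker, b)
			if err != nil {
				return err
			}
		}

		// Finish the append so the backend can finalize the object.
		return w.CloseAppend(ctx, tracker)
	}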