Documentation ¶
Index ¶
- func ActivePeriodConfig(configs []chunk.PeriodConfig) int
- func IsBlockOverlapping(b chunkenc.Block, with *LazyChunk, direction logproto.Direction) bool
- func NewTableClient(name string, cfg Config) (chunk.TableClient, error)
- func RegisterCustomIndexClients(cfg *Config, cm storage.ClientMetrics, registerer prometheus.Registerer)
- func UsingBoltdbShipper(configs []chunk.PeriodConfig) bool
- type AsyncStore
- type ChunkFilterer
- type ChunkMetrics
- type ChunkStoreConfig
- type Config
- type IngesterQuerier
- type LazyChunk
- func (c *LazyChunk) IsOverlapping(with *LazyChunk, direction logproto.Direction) bool
- func (c *LazyChunk) Iterator(ctx context.Context, from, through time.Time, direction logproto.Direction, ...) (iter.EntryIterator, error)
- func (c *LazyChunk) SampleIterator(ctx context.Context, from, through time.Time, ...) (iter.SampleIterator, error)
- type RequestChunkFilterer
- type SchemaConfig
- type Store
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ActivePeriodConfig ¶
func ActivePeriodConfig(configs []chunk.PeriodConfig) int
ActivePeriodConfig returns the index of the active PeriodConfig, i.e. the one that applies to logs pushed starting now. Note: a different PeriodConfig might apply to future logs, which can change the index type.
func IsBlockOverlapping ¶
func NewTableClient ¶
func NewTableClient(name string, cfg Config) (chunk.TableClient, error)
NewTableClient creates a TableClient for managing tables for the index/chunk store. TODO: add support in Cortex for registering a custom table client, like the index client.
func RegisterCustomIndexClients ¶
func RegisterCustomIndexClients(cfg *Config, cm storage.ClientMetrics, registerer prometheus.Registerer)
func UsingBoltdbShipper ¶
func UsingBoltdbShipper(configs []chunk.PeriodConfig) bool
UsingBoltdbShipper returns true if either the current index type or the next one is boltdb-shipper.
Types ¶
type AsyncStore ¶
AsyncStore queries both ingesters and the chunk store and combines the results after deduping them. It should be used with an async store such as boltdb-shipper. AsyncStore is meant to be used only in queriers or services other than ingesters; it must never be used in ingesters, since that would cause queries to fan out to other ingesters over and over again.
func NewAsyncStore ¶
func NewAsyncStore(store chunk.Store, scfg chunk.SchemaConfig, querier IngesterQuerier, queryIngestersWithin time.Duration) *AsyncStore
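The dedupe step AsyncStore performs can be illustrated in isolation. mergeChunkIDs below is a hypothetical helper, not part of the package; it shows the idea of combining chunk IDs fetched from the store and from ingesters while dropping duplicates (e.g. chunks an ingester has already flushed):

```go
package main

import (
	"fmt"
	"sort"
)

// mergeChunkIDs combines chunk IDs from the store and the ingesters and
// drops duplicates, keeping one entry per chunk.
func mergeChunkIDs(storeIDs, ingesterIDs []string) []string {
	seen := make(map[string]struct{}, len(storeIDs)+len(ingesterIDs))
	var out []string
	for _, id := range append(storeIDs, ingesterIDs...) {
		if _, ok := seen[id]; ok {
			continue
		}
		seen[id] = struct{}{}
		out = append(out, id)
	}
	sort.Strings(out)
	return out
}

func main() {
	store := []string{"chunk-a", "chunk-b"}
	ingesters := []string{"chunk-b", "chunk-c"} // chunk-b was already flushed
	fmt.Println(mergeChunkIDs(store, ingesters)) // [chunk-a chunk-b chunk-c]
}
```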
type ChunkFilterer ¶
ChunkFilterer filters chunks based on the metric.
type ChunkMetrics ¶
type ChunkMetrics struct {
// contains filtered or unexported fields
}
func NewChunkMetrics ¶
func NewChunkMetrics(r prometheus.Registerer, maxBatchSize int) *ChunkMetrics
type ChunkStoreConfig ¶
type ChunkStoreConfig struct {
	chunk.StoreConfig `yaml:",inline"`

	// Limits query start time to be greater than now() - MaxLookBackPeriod, if set.
	// Will be deprecated in the next major release.
	MaxLookBackPeriod model.Duration `yaml:"max_look_back_period"`
}
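The YAML tag above corresponds to a config fragment along these lines (the block name and value are illustrative, assuming the usual chunk_store_config section of a Loki config file):

```yaml
chunk_store_config:
  # Reject queries whose start time is older than now() - max_look_back_period.
  max_look_back_period: 720h
```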
func (*ChunkStoreConfig) RegisterFlags ¶
func (cfg *ChunkStoreConfig) RegisterFlags(f *flag.FlagSet)
RegisterFlags adds the flags required to configure this flag set.
type Config ¶
type Config struct {
	storage.Config      `yaml:",inline"`
	MaxChunkBatchSize   int            `yaml:"max_chunk_batch_size"`
	BoltDBShipperConfig shipper.Config `yaml:"boltdb_shipper"`
}
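In YAML this maps onto a fragment like the following (field names under boltdb_shipper and all values are illustrative, assuming the usual storage_config section of a Loki config file):

```yaml
storage_config:
  max_chunk_batch_size: 50
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
    shared_store: gcs
```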
Config is the Loki storage configuration.
func (*Config) RegisterFlags ¶
RegisterFlags adds the flags required to configure this flag set.
type IngesterQuerier ¶
type LazyChunk ¶
type LazyChunk struct {
	Chunk   chunk.Chunk
	IsValid bool
	Fetcher *chunk.Fetcher
	// contains filtered or unexported fields
}
LazyChunk loads the chunk when it is accessed.
func (*LazyChunk) IsOverlapping ¶
func (*LazyChunk) Iterator ¶
func (c *LazyChunk) Iterator(
	ctx context.Context,
	from, through time.Time,
	direction logproto.Direction,
	pipeline log.StreamPipeline,
	nextChunk *LazyChunk,
) (iter.EntryIterator, error)
Iterator returns an entry iterator. If nextChunk is passed, the returned iterator caches the entries of blocks that overlap with it, so that when they are reused for ordering across batches the data doesn't have to be decompressed again.
func (*LazyChunk) SampleIterator ¶
func (c *LazyChunk) SampleIterator(
	ctx context.Context,
	from, through time.Time,
	extractor log.StreamSampleExtractor,
	nextChunk *LazyChunk,
) (iter.SampleIterator, error)
SampleIterator returns a sample iterator. If nextChunk is passed, the returned iterator caches the entries of blocks that overlap with it, so that when they are reused for ordering across batches the data doesn't have to be decompressed again.
type RequestChunkFilterer ¶
type RequestChunkFilterer interface {
ForRequest(ctx context.Context) ChunkFilterer
}
RequestChunkFilterer creates ChunkFilterer for a given request context.
type SchemaConfig ¶
type SchemaConfig struct {
chunk.SchemaConfig `yaml:",inline"`
}
SchemaConfig contains the config for our chunk index schemas
func (*SchemaConfig) Validate ¶
func (cfg *SchemaConfig) Validate() error
Validate validates the schema config and returns an error if the validation doesn't pass.
type Store ¶
type Store interface {
	chunk.Store
	SelectSamples(ctx context.Context, req logql.SelectSampleParams) (iter.SampleIterator, error)
	SelectLogs(ctx context.Context, req logql.SelectLogParams) (iter.EntryIterator, error)
	GetSeries(ctx context.Context, req logql.SelectLogParams) ([]logproto.SeriesIdentifier, error)
	GetSchemaConfigs() []chunk.PeriodConfig
	SetChunkFilterer(chunkFilter RequestChunkFilterer)
}
Store is the Loki chunk store to retrieve and save chunks.
func NewStore ¶
func NewStore(cfg Config, schemaCfg SchemaConfig, chunkStore chunk.Store, registerer prometheus.Registerer) (Store, error)
NewStore creates a new Loki Store using the supplied configuration.