Documentation ¶
Overview ¶
Package bqlog implements best-effort, low-overhead structured logging to BigQuery.
It is suitable only for data that can have occasional gaps or duplications, like debug logs. Do not use it for business-critical data.
Index ¶
Constants ¶
This section is empty.
Variables ¶
var ModuleName = module.RegisterName("go.chromium.org/luci/server/bqlog")
ModuleName can be used to refer to this module when declaring dependencies.
Functions ¶
func Log ¶
Log asynchronously logs the given message to a BQ table associated with the message type via a prior RegisterSink call.
This is a best-effort operation (and thus returns no error).
Messages are dropped when:
- Writes to BigQuery are failing with a fatal error:
  - The table doesn't exist.
  - The table has an incompatible schema.
  - The server account has no permission to write to the table.
  - Etc.
- The server crashes before it manages to flush buffered logs.
- The internal flush buffer is full (per MaxLiveSizeBytes).
In case of transient write errors, messages may be duplicated.
Panics if `m` was not registered via RegisterSink or if the bundler is not running yet.
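For illustration, a minimal sketch of best-effort logging through the package-level API. It assumes Log accepts a context and the proto message `m` referenced above, and uses a hypothetical mypb.DebugEvent message that was registered via RegisterSink.

package mylogs

import (
	"context"

	"go.chromium.org/luci/server/bqlog"

	mypb "example.com/proto/mypb" // hypothetical proto package with DebugEvent
)

// reportEvent best-effort logs a debug event. It returns no error and never
// blocks on BigQuery; rows may occasionally be dropped per the rules above.
func reportEvent(ctx context.Context, what string) {
	bqlog.Log(ctx, &mypb.DebugEvent{What: what})
}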
func NewModule ¶
func NewModule(opts *ModuleOptions) module.Module
NewModule returns a server module that configures the Default bundler based on the server environment and the given options.
func NewModuleFromFlags ¶
NewModuleFromFlags is a variant of NewModule that initializes options through command line flags.
Calling this function registers flags in flag.CommandLine. They are usually parsed in server.Main(...).
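As a sketch of the usual wiring (assuming NewModuleFromFlags takes no arguments, like other luci/server modules), the module is passed to server.Main, which parses the registered flags and runs the Default bundler:

package main

import (
	"go.chromium.org/luci/server"
	"go.chromium.org/luci/server/bqlog"
	"go.chromium.org/luci/server/module"
)

func main() {
	modules := []module.Module{
		bqlog.NewModuleFromFlags(), // its flags are parsed inside server.Main
	}
	server.Main(nil, modules, func(srv *server.Server) error {
		// By this point the Default bundler is configured and running.
		return nil
	})
}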
func RegisterSink ¶
func RegisterSink(sink Sink)
RegisterSink tells the bundler where and how to log messages of some concrete proto type.
There can currently be only one sink per proto message type. Must be called before the bundler is running. Can be called during init() time.
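For example, a sink can be registered from init() of the package that owns the log messages; mypb.DebugEvent here is a hypothetical proto message and "debug_events" is a hypothetical table name:

package mylogs

import (
	"go.chromium.org/luci/server/bqlog"

	mypb "example.com/proto/mypb" // hypothetical proto package
)

func init() {
	// Route all mypb.DebugEvent messages to the "debug_events" table in the
	// bundler's dataset.
	bqlog.RegisterSink(bqlog.Sink{
		Prototype: &mypb.DebugEvent{},
		Table:     "debug_events",
	})
}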
Types ¶
type BigQueryWriter ¶
type BigQueryWriter interface {
	AppendRows(ctx context.Context, opts ...gax.CallOption) (storagepb.BigQueryWrite_AppendRowsClient, error)
	Close() error
}
BigQueryWriter is a subset of BigQueryWriteClient API used by Bundler.
Use storage.NewBigQueryWriteClient to construct a production instance.
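A sketch of constructing a production writer: *storage.BigQueryWriteClient from cloud.google.com/go/bigquery/storage/apiv1 has AppendRows and Close methods and so satisfies BigQueryWriter. How the writer is handed to a Bundler is outside the scope of this excerpt.

package mylogs

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1"

	"go.chromium.org/luci/server/bqlog"
)

// newProductionWriter constructs the real BigQuery Storage Write API client.
func newProductionWriter(ctx context.Context) (bqlog.BigQueryWriter, error) {
	return storage.NewBigQueryWriteClient(ctx)
}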
type Bundler ¶
type Bundler struct {
	CloudProject string // the cloud project with the dataset, required
	Dataset      string // the BQ dataset with log tables, required
	// contains filtered or unexported fields
}
Bundler buffers logs in memory before sending them to BigQuery.
var Default Bundler
Default is a bundler installed into the server when using NewModule or NewModuleFromFlags.
The module takes care of configuring this bundler based on the server environment and module's options.
You still need to register your types in it using RegisterSink.
func (*Bundler) Log ¶
Log asynchronously logs the given message to a BQ table associated with the message type via a prior RegisterSink call.
This is a best-effort operation (and thus returns no error).
Messages are dropped when:
- Writes to BigQuery are failing with a fatal error:
  - The table doesn't exist.
  - The table has an incompatible schema.
  - The server account has no permission to write to the table.
  - Etc.
- The server crashes before it manages to flush buffered logs.
- The internal flush buffer is full (per MaxLiveSizeBytes).
In case of transient write errors, messages may be duplicated.
Panics if `m` was not registered via RegisterSink or if the bundler is not running yet.
func (*Bundler) RegisterSink ¶
RegisterSink tells the bundler where and how to log messages of some concrete proto type.
There can currently be only one sink per proto message type. Must be called before the bundler is running. Can be called during init() time.
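A sketch of a standalone Bundler (separate from Default); the project, dataset, table names and the mypb package are hypothetical, and the Log/RegisterSink signatures are assumed to mirror the package-level functions. The lifecycle that actually starts and drains the bundler is handled by the server module and is not shown in this excerpt.

package mylogs

import (
	"context"

	"go.chromium.org/luci/server/bqlog"

	mypb "example.com/proto/mypb" // hypothetical proto package
)

var auditLog = bqlog.Bundler{
	CloudProject: "my-cloud-project", // the cloud project with the dataset
	Dataset:      "audit",            // the BQ dataset with log tables
}

func init() {
	auditLog.RegisterSink(bqlog.Sink{
		Prototype: &mypb.AuditEvent{},
		Table:     "events",
	})
}

// record best-effort logs an audit event. It panics if the bundler is not
// running yet.
func record(ctx context.Context, ev *mypb.AuditEvent) {
	auditLog.Log(ctx, ev)
}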
type FakeBigQueryWriter ¶
type FakeBigQueryWriter struct {
	Send func(*storagepb.AppendRowsRequest) error
	Recv func() (*storagepb.AppendRowsResponse, error)
}
FakeBigQueryWriter is a fake for BigQueryWriter.
It calls the given callbacks if they are non-nil. Useful in tests.
func (*FakeBigQueryWriter) AppendRows ¶
func (f *FakeBigQueryWriter) AppendRows(ctx context.Context, opts ...gax.CallOption) (storagepb.BigQueryWrite_AppendRowsClient, error)
func (*FakeBigQueryWriter) Close ¶
func (*FakeBigQueryWriter) Close() error
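A test sketch using the fake: it records every append request and answers each one with an empty response as if the write succeeded. How exactly the bundler consumes Send and Recv is not covered by this excerpt, and the storagepb import path may differ between client library versions.

package mylogs

import (
	"cloud.google.com/go/bigquery/storage/apiv1/storagepb"

	"go.chromium.org/luci/server/bqlog"
)

// newRecordingWriter returns a fake that captures every AppendRows request
// and pretends each append succeeded.
func newRecordingWriter(recorded *[]*storagepb.AppendRowsRequest) *bqlog.FakeBigQueryWriter {
	return &bqlog.FakeBigQueryWriter{
		Send: func(r *storagepb.AppendRowsRequest) error {
			*recorded = append(*recorded, r)
			return nil
		},
		Recv: func() (*storagepb.AppendRowsResponse, error) {
			return &storagepb.AppendRowsResponse{}, nil
		},
	}
}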
type ModuleOptions ¶
type ModuleOptions struct {
	// Bundler is a bundler to use.
	//
	// Default is the global Default instance.
	Bundler *Bundler

	// CloudProject is a project with the dataset to send logs to.
	//
	// Default is the project the server is running in.
	CloudProject string

	// Dataset is a BQ dataset to send logs to.
	//
	// Can be omitted in the development mode. In that case BQ writes will be
	// disabled and the module will be running in the dry run mode. Required in
	// the production.
	Dataset string
}
ModuleOptions contains the configuration of the bqlog server module.
It is used to initialize the Default bundler.
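A minimal sketch of constructing the module with explicit options (the values are hypothetical); leaving Bundler unset makes the module configure the global Default instance:

package mylogs

import (
	"go.chromium.org/luci/server/bqlog"
	"go.chromium.org/luci/server/module"
)

func bqlogModule() module.Module {
	return bqlog.NewModule(&bqlog.ModuleOptions{
		CloudProject: "my-cloud-project", // defaults to the server's own project
		Dataset:      "debug_logs",       // required in production
	})
}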
func (*ModuleOptions) Register ¶
func (o *ModuleOptions) Register(f *flag.FlagSet)
Register registers the command line flags.
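A sketch of registering the flags into a custom FlagSet; the flag names themselves are defined by the module and are not listed in this excerpt.

package mylogs

import (
	"flag"

	"go.chromium.org/luci/server/bqlog"
)

func parseBqlogFlags(args []string) (*bqlog.ModuleOptions, error) {
	opts := &bqlog.ModuleOptions{}
	fs := flag.NewFlagSet("myserver", flag.ContinueOnError)
	opts.Register(fs) // adds the module's command line flags to fs
	if err := fs.Parse(args); err != nil {
		return nil, err
	}
	return opts, nil
}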
type Sink ¶
type Sink struct {
	Prototype        proto.Message // used only for its type descriptor, required
	Table            string        // the BQ table name within the bundler's dataset, required
	BatchAgeMax      time.Duration // for how long to buffer messages (or 0 for some default)
	MaxLiveSizeBytes int           // approximate limit on memory buffer size (or 0 for some default)
}
Sink describes where and how to log proto messages of the given type.
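A sketch of a sink with the optional tuning fields set explicitly; the numbers are illustrative only (not the package defaults), and mypb is a hypothetical proto package. Zero values fall back to the package defaults.

package mylogs

import (
	"time"

	"go.chromium.org/luci/server/bqlog"

	mypb "example.com/proto/mypb" // hypothetical proto package
)

var debugSink = bqlog.Sink{
	Prototype:        &mypb.DebugEvent{},
	Table:            "debug_events",
	BatchAgeMax:      10 * time.Second, // flush a batch at least every 10 seconds
	MaxLiveSizeBytes: 50 * 1024 * 1024, // start dropping rows past ~50 MiB of buffered data
}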