bigtable

package
v0.0.0-...-a8f2654
Published: Dec 3, 2024 License: Apache-2.0 Imports: 26 Imported by: 12

Documentation

Overview

Package bigtable provides an implementation of the Storage interface backed by Google Cloud Platform's BigTable.

Intermediate Log Table

The Intermediate Log Table stores LogEntry protobufs that have been ingested but not yet archived. It is a tall table whose rows are keyed on the log's (Path, Stream Index), in that order.

Each entry in the table uses the following schema:

  • Column Family "log"
  • Column "data": the LogEntry raw protobuf data. Soft size limit of ~1MB.

The log path is the composite of the log's (Prefix, Name) properties. Logs belonging to the same stream will share the same path, so they will be clustered together and suitable for efficient iteration. Immediately following the path will be the log's stream index.

[            20 bytes          ]     ~    [       1-5 bytes      ]
 B64(SHA256(Path(Prefix, Name)))  + '~' + HEX(cmpbin(StreamIndex))

As there is no (technical) size constraint on either the Prefix or Name values, they are both hashed using SHA256 to produce a unique key representing that specific log stream.

This allows a key to be generated representing "immediately after the stream's rows" by appending two '~' characters to the base hash. Since the second '~' character is always greater than any HEX(cmpbin(*)) value, this effectively upper-bounds the stream's row range.

"cmpbin" (go.chromium.org/luci/common/cmpbin) will be used to format the stream index. It is a variable-width number encoding scheme that offers the guarantee that byte-sorted encoded numbers will maintain the same order as the numbers themselves.

Index

Constants

This section is empty.

Variables

var (
	// StorageScopes is the set of OAuth scopes needed to use the storage
	// functionality.
	StorageScopes = []string{
		bigtable.Scope,
	}

	// StorageReadOnlyScopes is the set of OAuth scopes needed to use the
	// read-only storage functionality.
	StorageReadOnlyScopes = []string{
		bigtable.ReadonlyScope,
	}
)
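These scope lists can be fed to whatever OAuth2 mechanism the caller uses. A minimal sketch, assuming application default credentials via golang.org/x/oauth2/google and assuming this package is imported from go.chromium.org/luci/logdog/common/storage/bigtable:

package example

import (
	"context"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"

	btstorage "go.chromium.org/luci/logdog/common/storage/bigtable"
)

// readOnlyTokenSource is an illustrative helper: it obtains application
// default credentials carrying the read-only storage scopes.
func readOnlyTokenSource(ctx context.Context) (oauth2.TokenSource, error) {
	return google.DefaultTokenSource(ctx, btstorage.StorageReadOnlyScopes...)
}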

Functions

This section is empty.

Types

type Flags

type Flags struct {
	// Project is the name of the Cloud Project containing the BigTable instance.
	Project string
	// Instance is the name of the BigTable instance within the project.
	Instance string
	// LogTable is the name of the BigTable instance's log table.
	LogTable string

	// AppProfile is the BigTable application profile name to use (or "" for
	// default).
	//
	// This is INTENTIONALLY not wired to a CLI flag; the value here is tied to
	// the _code_, not the runtime environment of the code.
	//
	// The application profile must be configured in the GCP BigTable settings
	// before use.
	//
	// However, in the future it may become necessary to disambiguate between
	// e.g. prod and dev. If that becomes the case, I would recommend having
	// StorageFromFlags append "-prod" or "-dev" to the AppProfile name given
	// here, rather than making it fully configurable as a CLI flag (to reduce
	// coupling during rollouts).
	AppProfile string
}

Flags contains the BigTable storage config.

func (*Flags) Register

func (f *Flags) Register(fs *flag.FlagSet)

Register registers flags in the flag set.

func (*Flags) Validate

func (f *Flags) Validate() error

Validate returns an error if some parsed flags have invalid values.
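A minimal sketch of wiring Flags into a program's flag parsing, using only the Register and Validate methods above. The import path is an assumption, and the flag names themselves are whatever Register defines:

package main

import (
	"flag"
	"log"
	"os"

	"go.chromium.org/luci/logdog/common/storage/bigtable"
)

func main() {
	var btFlags bigtable.Flags

	fs := flag.NewFlagSet("example", flag.ExitOnError)
	btFlags.Register(fs) // registers this package's flags on fs
	if err := fs.Parse(os.Args[1:]); err != nil {
		log.Fatal(err)
	}
	if err := btFlags.Validate(); err != nil {
		log.Fatalf("invalid BigTable storage flags: %s", err)
	}
	// &btFlags can now be passed to StorageFromFlags (documented below).
}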

type Storage

type Storage struct {
	// Client, if not nil, is the BigTable client to use for BigTable accesses.
	Client *bigtable.Client

	// LogTable is the name of the BigTable table to use for logs.
	LogTable string

	// Cache, if not nil, will be used to cache data.
	Cache storage.Cache
	// contains filtered or unexported fields
}

Storage is a BigTable storage configuration client.

func StorageFromFlags

func StorageFromFlags(ctx context.Context, f *Flags) (*Storage, error)

StorageFromFlags instantiates the *bigtable.Storage given parsed flags.
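A sketch of building the storage client from populated flags and releasing it when done. The field values are placeholders, the import path is an assumption, and any authentication setup required by the environment is omitted:

package main

import (
	"context"
	"log"

	"go.chromium.org/luci/logdog/common/storage/bigtable"
)

func main() {
	// Placeholder values; real programs would populate these via Register.
	f := &bigtable.Flags{
		Project:  "my-gcp-project",
		Instance: "my-bt-instance",
		LogTable: "logs",
	}
	if err := f.Validate(); err != nil {
		log.Fatal(err)
	}

	st, err := bigtable.StorageFromFlags(context.Background(), f)
	if err != nil {
		log.Fatal(err)
	}
	defer st.Close() // releases the underlying BigTable client

	// st now satisfies storage.Storage (Get, Put, Tail, Expunge, Close).
}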

func (*Storage) Close

func (s *Storage) Close()

Close implements storage.Storage.

func (*Storage) Expunge

Expunge implements storage.Storage.

func (*Storage) Get

Get implements storage.Storage.

func (*Storage) Put

Put implements storage.Storage.

func (*Storage) Tail

func (s *Storage) Tail(c context.Context, project string, path types.StreamPath) (*storage.Entry, error)

Tail implements storage.Storage.
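A short sketch of calling Tail with the signature shown above; the import paths are assumptions and the helper is purely illustrative:

package example

import (
	"context"

	"go.chromium.org/luci/logdog/common/storage/bigtable"
	"go.chromium.org/luci/logdog/common/types"
)

// tailLatest is an illustrative helper, not part of this package: it fetches
// the most recent entry of the stream identified by (project, path).
func tailLatest(ctx context.Context, st *bigtable.Storage, project string, path types.StreamPath) error {
	entry, err := st.Tail(ctx, project, path)
	if err != nil {
		return err
	}
	_ = entry // entry holds the stream's last LogEntry data
	return nil
}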

type Testing

type Testing interface {
	storage.Storage

	DataMap() map[string][]byte
	SetMaxRowSize(int)
	SetErr(error)
}

Testing is an extension of storage.Storage with additional testing capabilities.

func NewMemoryInstance

func NewMemoryInstance(cache storage.Cache) Testing

NewMemoryInstance returns an in-memory BigTable Storage implementation. This can be supplied in the Raw field in Options to simulate a BigTable connection.

An optional cache can be supplied to test caching logic.

Close should be called on the resulting value after the user is finished in order to free resources.
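A sketch of exercising the in-memory instance in a test. It touches only the methods listed in the Testing interface above; the import path is an assumption:

package example

import (
	"errors"
	"testing"

	"go.chromium.org/luci/logdog/common/storage/bigtable"
)

// TestStorageSketch demonstrates the Testing hooks; the bodies are
// placeholders rather than a real test of this package.
func TestStorageSketch(t *testing.T) {
	st := bigtable.NewMemoryInstance(nil) // nil cache: caching logic not under test
	defer st.Close()

	// Constrain the soft row size limit to exercise large-entry handling.
	st.SetMaxRowSize(1024)

	// ... drive the code under test through the storage.Storage interface ...

	// Inspect the raw rows that were written.
	for key, value := range st.DataMap() {
		t.Logf("row %q: %d bytes", key, len(value))
	}

	// Force subsequent operations to fail in order to cover error paths.
	st.SetErr(errors.New("simulated BigTable failure"))
}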

Directories

Path Synopsis
Package main implements a simple CLI tool to load and interact with storage data in Google BigTable.
