Documentation ¶
Overview ¶
Package bigtable provides an implementation of the Storage interface backed by Google Cloud Platform's BigTable.
Intermediate Log Table ¶
The Intermediate Log Table stores LogEntry protobufs that have been ingested but not yet archived. It is a tall table whose rows are keyed on the log's (Path, StreamIndex), in that order.
Each entry in the table will contain the following schema:
- Column Family "log"
- Column "data": the LogEntry raw protobuf data. Soft size limit of ~1MB.
The log path is the composite of the log's (Prefix, Name) properties. Log entries belonging to the same stream share the same path, so they are clustered together and suitable for efficient iteration. Immediately following the path is the log's stream index.
	B64(SHA256(Path(Prefix, Name))) + '~' + HEX(cmpbin(StreamIndex))
	[           20 bytes          ]         [      1-5 bytes      ]
As there is no (technical) size constraint to either the Prefix or Name values, they will both be hashed using SHA256 to produce a unique key representing that specific log stream.
This allows a key representing "immediately after the stream's last row" to be generated by appending two '~' characters to the base hash. Since the second '~' character is always greater than any HEX(cmpbin(*)) value, this effectively upper-bounds the stream's rows.
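The hashing and upper-bound trick described above can be sketched with the standard library alone. This is a minimal illustration, not the package's implementation: logPathHash, rowKey, and upperBound are hypothetical helper names, the "/+/" path join is illustrative, and the real stream index uses cmpbin rather than the fixed two-byte encoding used here.

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// logPathHash hashes the composite log path so arbitrarily long
// (Prefix, Name) values produce a fixed-size key component.
func logPathHash(prefix, name string) string {
	sum := sha256.Sum256([]byte(prefix + "/+/" + name)) // path join is illustrative
	return base64.StdEncoding.EncodeToString(sum[:])
}

// rowKey builds a key in the documented shape: hashed path, '~' separator,
// then the hex-encoded stream index. A plain big-endian two-byte encoding
// stands in for cmpbin(StreamIndex) purely for illustration.
func rowKey(prefix, name string, index uint64) string {
	idx := []byte{byte(index >> 8), byte(index)}
	return logPathHash(prefix, name) + "~" + hex.EncodeToString(idx)
}

// upperBound returns a key greater than every row key for the stream:
// '~' (0x7E) sorts after every base64 and hex character, so the second
// '~' dominates any encoded stream index.
func upperBound(prefix, name string) string {
	return logPathHash(prefix, name) + "~~"
}

func main() {
	k := rowKey("proj/foo", "stdout", 42)
	fmt.Println(k < upperBound("proj/foo", "stdout")) // true: key sorts below the bound
}
```

Scanning the half-open range [rowKey(..., 0), upperBound(...)) then visits every entry of one stream, in stream-index order, without touching neighboring streams.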
"cmpbin" (go.chromium.org/luci/common/cmpbin) will be used to format the stream index. It is a variable-width number encoding scheme that offers the guarantee that byte-sorted encoded numbers will maintain the same order as the numbers themselves.
Index ¶
- Variables
- type Flags
- type Storage
- func (s *Storage) Close()
- func (s *Storage) Expunge(c context.Context, r storage.ExpungeRequest) error
- func (s *Storage) Get(c context.Context, r storage.GetRequest, cb storage.GetCallback) error
- func (s *Storage) Put(c context.Context, r storage.PutRequest) error
- func (s *Storage) Tail(c context.Context, project string, path types.StreamPath) (*storage.Entry, error)
- type Testing
Constants ¶
This section is empty.
Variables ¶
var (
	// StorageScopes is the set of OAuth scopes needed to use the storage
	// functionality.
	StorageScopes = []string{
		bigtable.Scope,
	}

	// StorageReadOnlyScopes is the set of OAuth scopes needed to use the
	// read-only storage functionality.
	StorageReadOnlyScopes = []string{
		bigtable.ReadonlyScope,
	}
)
Functions ¶
This section is empty.
Types ¶
type Flags ¶
type Flags struct {
	// Project is the name of the Cloud Project containing the BigTable instance.
	Project string

	// Instance is the name of the BigTable instance within the project.
	Instance string

	// LogTable is the name of the BigTable instance's log table.
	LogTable string

	// AppProfile is the BigTable application profile name to use (or "" for
	// the default).
	//
	// This is INTENTIONALLY not wired to a CLI flag; the value here is tied to
	// the _code_, not the runtime environment of the code.
	//
	// The application profile must be configured in the GCP BigTable settings
	// before use.
	//
	// However, in the future it may become necessary to disambiguate between
	// e.g. prod and dev. If this is the case, then I would recommend that
	// StorageFromFlags add "-prod" and "-dev" to the given AppProfile name
	// here, rather than making it fully configurable as a CLI flag (to reduce
	// coupling during rollouts).
	AppProfile string
}
Flags contains the BigTable storage config.
type Storage ¶
type Storage struct {
	// Client, if not nil, is the BigTable client to use for BigTable accesses.
	Client *bigtable.Client

	// LogTable is the name of the BigTable table to use for logs.
	LogTable string

	// Cache, if not nil, will be used to cache data.
	Cache storage.Cache
	// contains filtered or unexported fields
}
Storage is a BigTable storage configuration client.
func StorageFromFlags ¶
StorageFromFlags instantiates a *bigtable.Storage from the parsed flags.
func (*Storage) Get ¶
func (s *Storage) Get(c context.Context, r storage.GetRequest, cb storage.GetCallback) error
Get implements storage.Storage.
type Testing ¶
type Testing interface {
	storage.Storage

	DataMap() map[string][]byte
	SetMaxRowSize(int)
	SetErr(error)
}
Testing is an extension of storage.Storage with additional testing capabilities.
func NewMemoryInstance ¶
NewMemoryInstance returns an in-memory BigTable Storage implementation. This can be supplied in the Raw field in Options to simulate a BigTable connection.
An optional cache can be supplied to test caching logic.
Close should be called on the resulting value after the user is finished in order to free resources.
Source Files ¶
Directories ¶
Path | Synopsis
---|---
| Package main implements a simple CLI tool to load and interact with storage data in Google BigTable. |