Documentation ¶
Overview ¶
Package spanner provides high-level abstractions for working with Cloud Spanner that are not available from the core Cloud Spanner libraries.
Index ¶
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BatchWriter ¶
type BatchWriter struct {
// contains filtered or unexported fields
}
BatchWriter accumulates rows of data (via AddRow) and assembles them into batches that it asynchronously writes to Spanner. Rows are written to Spanner using insert semantics: if a row already exists in the database, the write of that row fails with the error 'AlreadyExists'. If Spanner returns an error for a batch, BatchWriter splits the batch into smaller chunks and retries them, as it attempts to isolate which row(s) in the batch are bad. BatchWriter respects Spanner's limits on byte size and mutation count, and has configurable limits on the number of in-progress writes, the amount of data buffered, and retry behavior. BatchWriter is not thread-safe: only one call to AddRow or Flush should be active at any time. See ExampleBatchWriter (batchwriter_test.go) for sample usage code.
Example ¶
write := func(m []*sp.Mutation) error {
    var err error
    // Code to write to Spanner e.g.
    // client := sp.NewClient(...)
    // _, err = client.Apply(context.Background(), m)
    return err
}
config := BatchWriterConfig{
    WriteLimit: 40,            // Limit on number of in-progress writes; 40 is a good default.
    BytesLimit: 100 * 1 << 20, // Limit on bytes buffered; 100MB is a good default.
    RetryLimit: 1000,          // Limit on retries (if a large set of mutations fails, we split it into smaller pieces and retry).
    Write:      write,
    Verbose:    false, // Whether to print messages about each write.
}
writer := NewBatchWriter(config)
// Code to generate rows of data to write to Spanner.
cols := []string{"id", "quote"}
vals := []interface{}{42, "The answer to life the universe and everything."}
writer.AddRow("mytable", cols, vals)
vals = []interface{}{6, "I am not a number."}
writer.AddRow("mytable", cols, vals)
// End of code to generate rows.
writer.Flush() // Flush out remaining writes and wait for them to complete.
Output:
func NewBatchWriter ¶
func NewBatchWriter(config BatchWriterConfig) *BatchWriter
NewBatchWriter returns a new BatchWriter with parameters defined by config.
func (*BatchWriter) AddRow ¶
func (bw *BatchWriter) AddRow(table string, cols []string, vals []interface{})
AddRow appends a new row of data to bw's buffer of rows. Depending on the state of BatchWriter, AddRow may return immediately, it may initiate writes, or it may block (waiting for some of the writes already in progress to complete) and then initiate writes.
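For example, a typical loading loop simply calls AddRow for each row and relies on BatchWriter for batching and backpressure (a minimal sketch; the rows slice and table name are hypothetical):

cols := []string{"id", "quote"}
for _, r := range rows { // rows is a hypothetical [][]interface{} of pre-built rows
    writer.AddRow("quotes", cols, r) // may block if buffer or write limits are reached
}
writer.Flush() // wait for all buffered rows to be written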
func (*BatchWriter) DroppedRowsByTable ¶
func (bw *BatchWriter) DroppedRowsByTable() map[string]int64
DroppedRowsByTable returns a map of tables to counts of dropped rows. Dropped rows are rows that were not written to Spanner.
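For instance, after Flush returns, the per-table counts can be logged (a minimal sketch, assuming writer is a *BatchWriter that has finished writing):

for table, n := range writer.DroppedRowsByTable() {
    fmt.Printf("table %s: %d rows dropped\n", table, n)
}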
func (*BatchWriter) Errors ¶
func (bw *BatchWriter) Errors() map[string]int64
Errors returns a map summarizing errors encountered. Keys are error strings, and values are the count of that error.
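For example, to print a summary of the errors encountered (a minimal sketch):

for msg, count := range writer.Errors() {
    fmt.Printf("%5d x %s\n", count, msg)
}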
func (*BatchWriter) Flush ¶
func (bw *BatchWriter) Flush()
Flush initiates writes to Spanner of all buffered rows of data, and waits for them to complete.
func (*BatchWriter) SampleBadRows ¶
func (bw *BatchWriter) SampleBadRows(n int) []string
SampleBadRows returns a string-formatted list of sample rows that generated errors. Returns at most n rows. Note that we split up batches to isolate errors. Each row returned by SampleBadRows generated an error when sent to Spanner as a single-row batch.
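For example, to print up to ten offending rows for debugging (a minimal sketch):

for _, row := range writer.SampleBadRows(10) {
    fmt.Println(row)
}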
type BatchWriterConfig ¶
type BatchWriterConfig struct {
    WriteLimit int64                      // Limit on number of in-progress writes.
    BytesLimit int64                      // Limit on bytes buffered.
    RetryLimit int64                      // Limit on retries.
    Write      func([]*sp.Mutation) error // Function to call to write to Spanner (typically a closure that calls client.Apply).
    Verbose    bool                       // If true, print out messages about each write batch.
}
BatchWriterConfig specifies parameters for configuring BatchWriter.
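The Write field is what decouples BatchWriter from any particular client; a typical value is a closure over a Cloud Spanner client, as in the sketch below (assuming an existing *sp.Client named client):

config := BatchWriterConfig{
    WriteLimit: 40,
    BytesLimit: 100 * 1 << 20,
    RetryLimit: 1000,
    Write: func(m []*sp.Mutation) error {
        // client is an assumed, pre-constructed *sp.Client.
        _, err := client.Apply(context.Background(), m)
        return err
    },
}
writer := NewBatchWriter(config)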