Documentation ¶
Overview ¶
Package storage is an auto-generated package for the BigQuery Storage API.
NOTE: This package is in beta. It is not stable, and may be subject to change.
General documentation ¶
For information that is relevant for all client libraries please reference https://pkg.go.dev/cloud.google.com/go#pkg-overview. Some information on this page includes:
- Authentication and Authorization
- Timeouts and Cancellation
- Testing against Client Libraries
- Debugging Client Libraries
- Inspecting errors
Example usage ¶
To get started with this package, create a client.
// go get cloud.google.com/go/bigquery/storage/apiv1beta2@latest
ctx := context.Background()
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in:
//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
c, err := storage.NewBigQueryReadClient(ctx)
if err != nil {
	// TODO: Handle error.
}
defer c.Close()
The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines. The returned client must be Closed when it is done being used.
Using the Client ¶
The following is an example of making an API call with the client created above.
req := &storagepb.CreateReadSessionRequest{
	// TODO: Fill request struct fields.
	// See https://pkg.go.dev/cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb#CreateReadSessionRequest.
}
resp, err := c.CreateReadSession(ctx, req)
if err != nil {
	// TODO: Handle error.
}
// TODO: Use resp.
_ = resp
Use of Context ¶
The ctx passed to NewBigQueryReadClient is used for authentication requests and for creating the underlying connection, but is not used for subsequent calls. Individual methods on the client use the ctx given to them.
To close the open connection, use the Close() method.
Index ¶
- func DefaultAuthScopes() []string
- type BigQueryReadCallOptions
- type BigQueryReadClient
- func (c *BigQueryReadClient) Close() error
- func (c *BigQueryReadClient) Connection() *grpc.ClientConn (deprecated)
- func (c *BigQueryReadClient) CreateReadSession(ctx context.Context, req *storagepb.CreateReadSessionRequest, ...) (*storagepb.ReadSession, error)
- func (c *BigQueryReadClient) ReadRows(ctx context.Context, req *storagepb.ReadRowsRequest, opts ...gax.CallOption) (storagepb.BigQueryRead_ReadRowsClient, error)
- func (c *BigQueryReadClient) SplitReadStream(ctx context.Context, req *storagepb.SplitReadStreamRequest, ...) (*storagepb.SplitReadStreamResponse, error)
- type BigQueryWriteCallOptions
- type BigQueryWriteClient (deprecated)
- func (c *BigQueryWriteClient) AppendRows(ctx context.Context, opts ...gax.CallOption) (storagepb.BigQueryWrite_AppendRowsClient, error) (deprecated)
- func (c *BigQueryWriteClient) BatchCommitWriteStreams(ctx context.Context, req *storagepb.BatchCommitWriteStreamsRequest, ...) (*storagepb.BatchCommitWriteStreamsResponse, error) (deprecated)
- func (c *BigQueryWriteClient) Close() error
- func (c *BigQueryWriteClient) Connection() *grpc.ClientConn (deprecated)
- func (c *BigQueryWriteClient) CreateWriteStream(ctx context.Context, req *storagepb.CreateWriteStreamRequest, ...) (*storagepb.WriteStream, error) (deprecated)
- func (c *BigQueryWriteClient) FinalizeWriteStream(ctx context.Context, req *storagepb.FinalizeWriteStreamRequest, ...) (*storagepb.FinalizeWriteStreamResponse, error) (deprecated)
- func (c *BigQueryWriteClient) FlushRows(ctx context.Context, req *storagepb.FlushRowsRequest, opts ...gax.CallOption) (*storagepb.FlushRowsResponse, error) (deprecated)
- func (c *BigQueryWriteClient) GetWriteStream(ctx context.Context, req *storagepb.GetWriteStreamRequest, ...) (*storagepb.WriteStream, error) (deprecated)
Examples ¶
- BigQueryReadClient.CreateReadSession
- BigQueryReadClient.SplitReadStream
- BigQueryWriteClient.AppendRows
- BigQueryWriteClient.BatchCommitWriteStreams
- BigQueryWriteClient.CreateWriteStream
- BigQueryWriteClient.FinalizeWriteStream
- BigQueryWriteClient.FlushRows
- BigQueryWriteClient.GetWriteStream
- NewBigQueryReadClient
- NewBigQueryReadRESTClient
- NewBigQueryWriteClient
- NewBigQueryWriteRESTClient
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func DefaultAuthScopes ¶
func DefaultAuthScopes() []string
DefaultAuthScopes reports the default set of authentication scopes to use with this package.
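As a sketch of how these scopes can be used (assuming the standard google.golang.org/api/option package), they may be passed explicitly when creating a client; by default the client already requests them, so this is only needed when overriding the credentials configuration:

```go
ctx := context.Background()
// Request exactly the package's default scopes (whatever DefaultAuthScopes returns).
c, err := storage.NewBigQueryReadClient(ctx,
	option.WithScopes(storage.DefaultAuthScopes()...))
if err != nil {
	// TODO: Handle error.
}
defer c.Close()
```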
Types ¶
type BigQueryReadCallOptions ¶
type BigQueryReadCallOptions struct {
	CreateReadSession []gax.CallOption
	ReadRows          []gax.CallOption
	SplitReadStream   []gax.CallOption
}
BigQueryReadCallOptions contains the retry settings for each method of BigQueryReadClient.
type BigQueryReadClient ¶
type BigQueryReadClient struct {
	// The call options for this service.
	CallOptions *BigQueryReadCallOptions
	// contains filtered or unexported fields
}
BigQueryReadClient is a client for interacting with BigQuery Storage API. Methods, except Close, may be called concurrently. However, fields must not be modified concurrently with method calls.
BigQuery Read API.
The Read API can be used to read data from BigQuery.
New code should use the v1 Read API going forward, provided it does not also use the Write API.
func NewBigQueryReadClient ¶
func NewBigQueryReadClient(ctx context.Context, opts ...option.ClientOption) (*BigQueryReadClient, error)
NewBigQueryReadClient creates a new big query read client based on gRPC. The returned client must be Closed when it is done being used to clean up its underlying connections.
BigQuery Read API.
The Read API can be used to read data from BigQuery.
New code should use the v1 Read API going forward, provided it does not also use the Write API.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryReadClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	// TODO: Use client.
	_ = c
}
func NewBigQueryReadRESTClient ¶ added in v1.35.0
func NewBigQueryReadRESTClient(ctx context.Context, opts ...option.ClientOption) (*BigQueryReadClient, error)
NewBigQueryReadRESTClient creates a new big query read client using the REST transport.
BigQuery Read API.
The Read API can be used to read data from BigQuery.
New code should use the v1 Read API going forward, provided it does not also use the Write API.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryReadRESTClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	// TODO: Use client.
	_ = c
}
func (*BigQueryReadClient) Close ¶
func (c *BigQueryReadClient) Close() error
Close closes the connection to the API service. The user should invoke this when the client is no longer required.
func (*BigQueryReadClient) Connection ¶ deprecated
func (c *BigQueryReadClient) Connection() *grpc.ClientConn
Connection returns a connection to the API service.
Deprecated: Connections are now pooled so this method does not always return the same resource.
func (*BigQueryReadClient) CreateReadSession ¶
func (c *BigQueryReadClient) CreateReadSession(ctx context.Context, req *storagepb.CreateReadSessionRequest, opts ...gax.CallOption) (*storagepb.ReadSession, error)
CreateReadSession creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.
A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.
Data is assigned to each stream such that roughly the same number of rows can be read from each stream. Because the server-side unit for assigning data is collections of rows, the API does not guarantee that each stream will return the same number of rows. Additionally, the limits are enforced based on the number of pre-filtered rows, so some filters can lead to lopsided assignments.
Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
	storagepb "cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryReadClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	req := &storagepb.CreateReadSessionRequest{
		// TODO: Fill request struct fields.
		// See https://pkg.go.dev/cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb#CreateReadSessionRequest.
	}
	resp, err := c.CreateReadSession(ctx, req)
	if err != nil {
		// TODO: Handle error.
	}
	// TODO: Use resp.
	_ = resp
}
func (*BigQueryReadClient) ReadRows ¶
func (c *BigQueryReadClient) ReadRows(ctx context.Context, req *storagepb.ReadRowsRequest, opts ...gax.CallOption) (storagepb.BigQueryRead_ReadRowsClient, error)
ReadRows reads rows from the stream in the format prescribed by the ReadSession. Each response contains one or more table rows, up to a maximum of 100 MiB per response; read requests which attempt to read individual rows larger than 100 MiB will fail.
Each request also returns a set of stream statistics reflecting the current state of the stream.
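ReadRows is the one method on this page without a generated example. The following sketch follows the same template style as the other snippets; it assumes the standard gax server-streaming pattern (a Recv loop terminated by io.EOF) used elsewhere in this package.

```go
package main

import (
	"context"
	"io"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
	storagepb "cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb"
)

func main() {
	ctx := context.Background()
	// This snippet should be regarded as a code template only.
	// It will require modifications to work.
	c, err := storage.NewBigQueryReadClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	req := &storagepb.ReadRowsRequest{
		// TODO: Fill request struct fields.
		// See https://pkg.go.dev/cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb#ReadRowsRequest.
	}
	stream, err := c.ReadRows(ctx, req)
	if err != nil {
		// TODO: Handle error.
	}
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break // server has sent all responses for this stream
		}
		if err != nil {
			// TODO: Handle error.
		}
		// TODO: Use resp.
		_ = resp
	}
}
```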
func (*BigQueryReadClient) SplitReadStream ¶
func (c *BigQueryReadClient) SplitReadStream(ctx context.Context, req *storagepb.SplitReadStreamRequest, opts ...gax.CallOption) (*storagepb.SplitReadStreamResponse, error)
SplitReadStream splits a given ReadStream into two ReadStream objects. These ReadStream objects are referred to as the primary and the residual streams of the split. The original ReadStream can still be read from in the same manner as before. Both of the returned ReadStream objects can also be read from, and the rows returned by both child streams will be the same as the rows read from the original stream.
Moreover, the two child streams will be allocated back-to-back in the original ReadStream. Concretely, it is guaranteed that for streams original, primary, and residual, that original[0-j] = primary[0-j] and original[j-n] = residual[0-m] once the streams have been read to completion.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
	storagepb "cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryReadClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	req := &storagepb.SplitReadStreamRequest{
		// TODO: Fill request struct fields.
		// See https://pkg.go.dev/cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb#SplitReadStreamRequest.
	}
	resp, err := c.SplitReadStream(ctx, req)
	if err != nil {
		// TODO: Handle error.
	}
	// TODO: Use resp.
	_ = resp
}
type BigQueryWriteCallOptions ¶ added in v1.13.0
type BigQueryWriteCallOptions struct {
	CreateWriteStream       []gax.CallOption
	AppendRows              []gax.CallOption
	GetWriteStream          []gax.CallOption
	FinalizeWriteStream     []gax.CallOption
	BatchCommitWriteStreams []gax.CallOption
	FlushRows               []gax.CallOption
}
BigQueryWriteCallOptions contains the retry settings for each method of BigQueryWriteClient.
type BigQueryWriteClient ¶ deprecated added in v1.13.0
type BigQueryWriteClient struct {
	// The call options for this service.
	CallOptions *BigQueryWriteCallOptions
	// contains filtered or unexported fields
}
BigQueryWriteClient is a client for interacting with BigQuery Storage API. Methods, except Close, may be called concurrently. However, fields must not be modified concurrently with method calls.
BigQuery Write API.
The Write API can be used to write data to BigQuery.
The google.cloud.bigquery.storage.v1 API (at /bigquery/docs/reference/storage/rpc/google.cloud.bigquery.storage.v1) should be used instead of the v1beta2 API for BigQueryWrite operations.
Deprecated: BigQueryWrite may be removed in a future version.
func NewBigQueryWriteClient ¶ deprecated added in v1.13.0
func NewBigQueryWriteClient(ctx context.Context, opts ...option.ClientOption) (*BigQueryWriteClient, error)
NewBigQueryWriteClient creates a new big query write client based on gRPC. The returned client must be Closed when it is done being used to clean up its underlying connections.
BigQuery Write API.
The Write API can be used to write data to BigQuery.
The google.cloud.bigquery.storage.v1 API (at /bigquery/docs/reference/storage/rpc/google.cloud.bigquery.storage.v1) should be used instead of the v1beta2 API for BigQueryWrite operations.
Deprecated: BigQueryWrite may be removed in a future version.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryWriteClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	// TODO: Use client.
	_ = c
}
func NewBigQueryWriteRESTClient ¶ deprecated added in v1.35.0
func NewBigQueryWriteRESTClient(ctx context.Context, opts ...option.ClientOption) (*BigQueryWriteClient, error)
NewBigQueryWriteRESTClient creates a new big query write client using the REST transport.
BigQuery Write API.
The Write API can be used to write data to BigQuery.
The google.cloud.bigquery.storage.v1 API (at /bigquery/docs/reference/storage/rpc/google.cloud.bigquery.storage.v1) should be used instead of the v1beta2 API for BigQueryWrite operations.
Deprecated: BigQueryWrite may be removed in a future version.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryWriteRESTClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	// TODO: Use client.
	_ = c
}
func (*BigQueryWriteClient) AppendRows ¶ deprecated added in v1.13.0
func (c *BigQueryWriteClient) AppendRows(ctx context.Context, opts ...gax.CallOption) (storagepb.BigQueryWrite_AppendRowsClient, error)
AppendRows appends data to the given stream.
If offset is specified, the offset is checked against the end of the stream. The server returns OUT_OF_RANGE in AppendRowsResponse if an attempt is made to append to an offset beyond the current end of the stream, or ALREADY_EXISTS if the user provides an offset that has already been written to. The user can retry with an adjusted offset within the same RPC stream. If offset is not specified, the append happens at the end of the stream.
The response contains the offset at which the append happened. Responses are received in the same order in which requests are sent. There will be one response for each successful request. If the offset is not set in the response, the append did not happen due to an error. If one request fails, all subsequent requests will also fail until a successful request is made again.
If the stream is of PENDING type, data will only be available for read operations after the stream is committed.
This method is not supported for the REST transport.
Deprecated: AppendRows may be removed in a future version.
Example ¶
package main

import (
	"context"
	"io"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
	storagepb "cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryWriteClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	stream, err := c.AppendRows(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	go func() {
		reqs := []*storagepb.AppendRowsRequest{
			// TODO: Create requests.
		}
		for _, req := range reqs {
			if err := stream.Send(req); err != nil {
				// TODO: Handle error.
			}
		}
		stream.CloseSend()
	}()
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			// TODO: Handle error.
		}
		// TODO: Use resp.
		_ = resp
	}
}
func (*BigQueryWriteClient) BatchCommitWriteStreams ¶ deprecated added in v1.13.0
func (c *BigQueryWriteClient) BatchCommitWriteStreams(ctx context.Context, req *storagepb.BatchCommitWriteStreamsRequest, opts ...gax.CallOption) (*storagepb.BatchCommitWriteStreamsResponse, error)
BatchCommitWriteStreams atomically commits a group of PENDING streams that belong to the same parent table. Streams must be finalized before commit and cannot be committed multiple times. Once a stream is committed, data in the stream becomes available for read operations.
Deprecated: BatchCommitWriteStreams may be removed in a future version.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
	storagepb "cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryWriteClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	req := &storagepb.BatchCommitWriteStreamsRequest{
		// TODO: Fill request struct fields.
		// See https://pkg.go.dev/cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb#BatchCommitWriteStreamsRequest.
	}
	resp, err := c.BatchCommitWriteStreams(ctx, req)
	if err != nil {
		// TODO: Handle error.
	}
	// TODO: Use resp.
	_ = resp
}
func (*BigQueryWriteClient) Close ¶ added in v1.13.0
func (c *BigQueryWriteClient) Close() error
Close closes the connection to the API service. The user should invoke this when the client is no longer required.
func (*BigQueryWriteClient) Connection ¶ deprecated added in v1.13.0
func (c *BigQueryWriteClient) Connection() *grpc.ClientConn
Connection returns a connection to the API service.
Deprecated: Connections are now pooled so this method does not always return the same resource.
func (*BigQueryWriteClient) CreateWriteStream ¶ deprecated added in v1.13.0
func (c *BigQueryWriteClient) CreateWriteStream(ctx context.Context, req *storagepb.CreateWriteStreamRequest, opts ...gax.CallOption) (*storagepb.WriteStream, error)
CreateWriteStream creates a write stream to the given table. Additionally, every table has a special COMMITTED stream named ‘_default’ to which data can be written. This stream doesn’t need to be created using CreateWriteStream. It is a stream that can be used simultaneously by any number of clients. Data written to this stream is considered committed as soon as an acknowledgement is received.
Deprecated: CreateWriteStream may be removed in a future version.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
	storagepb "cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryWriteClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	req := &storagepb.CreateWriteStreamRequest{
		// TODO: Fill request struct fields.
		// See https://pkg.go.dev/cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb#CreateWriteStreamRequest.
	}
	resp, err := c.CreateWriteStream(ctx, req)
	if err != nil {
		// TODO: Handle error.
	}
	// TODO: Use resp.
	_ = resp
}
func (*BigQueryWriteClient) FinalizeWriteStream ¶ deprecated added in v1.13.0
func (c *BigQueryWriteClient) FinalizeWriteStream(ctx context.Context, req *storagepb.FinalizeWriteStreamRequest, opts ...gax.CallOption) (*storagepb.FinalizeWriteStreamResponse, error)
FinalizeWriteStream finalizes a write stream so that no new data can be appended to the stream. Finalize is not supported on the ‘_default’ stream.
Deprecated: FinalizeWriteStream may be removed in a future version.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
	storagepb "cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryWriteClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	req := &storagepb.FinalizeWriteStreamRequest{
		// TODO: Fill request struct fields.
		// See https://pkg.go.dev/cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb#FinalizeWriteStreamRequest.
	}
	resp, err := c.FinalizeWriteStream(ctx, req)
	if err != nil {
		// TODO: Handle error.
	}
	// TODO: Use resp.
	_ = resp
}
func (*BigQueryWriteClient) FlushRows ¶ deprecated added in v1.13.0
func (c *BigQueryWriteClient) FlushRows(ctx context.Context, req *storagepb.FlushRowsRequest, opts ...gax.CallOption) (*storagepb.FlushRowsResponse, error)
FlushRows flushes rows to a BUFFERED stream. If users are appending rows to a BUFFERED stream, a flush operation is required in order for the rows to become available for reading. A flush operation flushes up to any previously flushed offset in a BUFFERED stream, to the offset specified in the request. Flush is not supported on the _default stream, since it is not BUFFERED.
Deprecated: FlushRows may be removed in a future version.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
	storagepb "cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryWriteClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	req := &storagepb.FlushRowsRequest{
		// TODO: Fill request struct fields.
		// See https://pkg.go.dev/cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb#FlushRowsRequest.
	}
	resp, err := c.FlushRows(ctx, req)
	if err != nil {
		// TODO: Handle error.
	}
	// TODO: Use resp.
	_ = resp
}
func (*BigQueryWriteClient) GetWriteStream ¶ deprecated added in v1.13.0
func (c *BigQueryWriteClient) GetWriteStream(ctx context.Context, req *storagepb.GetWriteStreamRequest, opts ...gax.CallOption) (*storagepb.WriteStream, error)
GetWriteStream gets a write stream.
Deprecated: GetWriteStream may be removed in a future version.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1beta2"
	storagepb "cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb"
)

func main() {
	ctx := context.Background()
	// This snippet has been automatically generated and should be regarded as a code template only.
	// It will require modifications to work:
	// - It may require correct/in-range values for request initialization.
	// - It may require specifying regional endpoints when creating the service client as shown in:
	//   https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
	c, err := storage.NewBigQueryWriteClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	req := &storagepb.GetWriteStreamRequest{
		// TODO: Fill request struct fields.
		// See https://pkg.go.dev/cloud.google.com/go/bigquery/storage/apiv1beta2/storagepb#GetWriteStreamRequest.
	}
	resp, err := c.GetWriteStream(ctx, req)
	if err != nil {
		// TODO: Handle error.
	}
	// TODO: Use resp.
	_ = resp
}