Documentation ¶
Overview ¶
Package blob provides an easy and portable way to interact with blobs within a storage location, hereafter called a "bucket". See https://gocloud.dev/howto/blob/ for how-to guides.
It supports operations like reading and writing blobs (using standard interfaces from the io package), deleting blobs, and listing blobs in a bucket.
Subpackages contain distinct implementations of blob for various providers, including cloud and on-prem solutions. For example, "fileblob" supports blobs backed by a filesystem. Your application should import one of these provider-specific subpackages and use its exported function(s) to create a *Bucket; do not use the NewBucket function in this package. For example:
bucket, err := fileblob.OpenBucket("path/to/dir", nil)
if err != nil {
	return fmt.Errorf("could not open bucket: %v", err)
}
buf, err := bucket.ReadAll(context.Background(), "myfile.txt")
...
Then, write your application code using the *Bucket type. You can easily reconfigure your initialization code to choose a different provider. You can develop your application locally using fileblob, or deploy it to multiple cloud providers. You may find http://github.com/google/wire useful for managing your initialization code.
Alternatively, you can construct a *Bucket via a URL and OpenBucket. See https://gocloud.dev/concepts/urls/ for more information.
Errors ¶
The errors returned from this package can be inspected in several ways:
The Code function from gocloud.dev/gcerrors will return an error code, also defined in that package, when invoked on an error.
The Bucket.ErrorAs method can retrieve the driver error underlying the returned error.
OpenCensus Integration ¶
OpenCensus supports tracing and metric collection for multiple languages and backend providers. See https://opencensus.io.
This API collects OpenCensus traces and metrics for the following methods:
- Attributes
- Copy
- Delete
- NewRangeReader, from creation until the call to Close. (NewReader and ReadAll are included because they call NewRangeReader.)
- NewWriter, from creation until the call to Close.
All trace and metric names begin with the package import path. The traces add the method name. For example, "gocloud.dev/blob/Attributes". The metrics are "completed_calls", a count of completed method calls by provider, method and status (error code); and "latency", a distribution of method latency by provider and method. For example, "gocloud.dev/blob/latency".
It also collects the following metrics:
- gocloud.dev/blob/bytes_read: the total number of bytes read, by provider.
- gocloud.dev/blob/bytes_written: the total number of bytes written, by provider.
To enable trace collection in your application, see "Configure Exporter" at https://opencensus.io/quickstart/go/tracing. To enable metric collection in your application, see "Exporting stats" at https://opencensus.io/quickstart/go/metrics.
Example ¶
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"
	"os"

	"gocloud.dev/blob/fileblob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// Connect to a bucket when your program starts up.
	// This example uses the file-based implementation in fileblob, and creates
	// a temporary directory to use as the root directory.
	dir, cleanup := newTempDir()
	defer cleanup()

	bucket, err := fileblob.OpenBucket(dir, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// We now have a *blob.Bucket! We can write our application using the
	// *blob.Bucket type, and have the freedom to change the initialization code
	// above to choose a different provider later.

	// In this example, we'll write a blob and then read it.
	ctx := context.Background()
	if err := bucket.WriteAll(ctx, "foo.txt", []byte("Go Cloud Development Kit"), nil); err != nil {
		log.Fatal(err)
	}
	b, err := bucket.ReadAll(ctx, "foo.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b))
}

func newTempDir() (string, func()) {
	dir, err := ioutil.TempDir("", "go-cloud-blob-example")
	if err != nil {
		panic(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}
Output: Go Cloud Development Kit
Example (OpenFromURL) ¶
package main

import (
	"context"
	"fmt"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/memblob"
)

func main() {
	ctx := context.Background()

	// Connect to a bucket using a URL.
	// This example uses "memblob", the in-memory implementation.
	// We need to add a blank import line to register the memblob provider's
	// URLOpener, which implements blob.BucketURLOpener:
	// import _ "gocloud.dev/blob/memblob"
	// memblob registers for the "mem" scheme.
	// All blob.OpenBucket URLs also work with "blob+" or "blob+bucket+" prefixes,
	// e.g., "blob+mem://" or "blob+bucket+mem://".
	b, err := blob.OpenBucket(ctx, "mem://")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	// Now we can use b to read or write to blobs in the bucket.
	if err := b.WriteAll(ctx, "my-key", []byte("hello world"), nil); err != nil {
		log.Fatal(err)
	}
	data, err := b.ReadAll(ctx, "my-key")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}
Output: hello world
Example (OpenFromURLWithPrefix) ¶
package main

import (
	"context"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/memblob"
)

func main() {
	// This example is used in https://gocloud.dev/howto/blob/open-bucket/#prefix

	// Variables set up elsewhere:
	ctx := context.Background()

	// Connect to a bucket using a URL, using the "prefix" query parameter to
	// target a subfolder in the bucket.
	// The prefix should end with "/", so that the resulting bucket operates
	// in a subfolder.
	b, err := blob.OpenBucket(ctx, "mem://?prefix=a/subfolder/")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	// Bucket operations on <key> will be translated to "a/subfolder/<key>".
}
Output:
Index ¶
- Constants
- Variables
- type Attributes
- type Bucket
- func (b *Bucket) As(i interface{}) bool
- func (b *Bucket) Attributes(ctx context.Context, key string) (_ *Attributes, err error)
- func (b *Bucket) Close() error
- func (b *Bucket) Copy(ctx context.Context, dstKey, srcKey string, opts *CopyOptions) (err error)
- func (b *Bucket) Delete(ctx context.Context, key string) (err error)
- func (b *Bucket) ErrorAs(err error, i interface{}) bool
- func (b *Bucket) Exists(ctx context.Context, key string) (bool, error)
- func (b *Bucket) List(opts *ListOptions) *ListIterator
- func (b *Bucket) NewRangeReader(ctx context.Context, key string, offset, length int64, opts *ReaderOptions) (_ *Reader, err error)
- func (b *Bucket) NewReader(ctx context.Context, key string, opts *ReaderOptions) (*Reader, error)
- func (b *Bucket) NewWriter(ctx context.Context, key string, opts *WriterOptions) (_ *Writer, err error)
- func (b *Bucket) ReadAll(ctx context.Context, key string) (_ []byte, err error)
- func (b *Bucket) SignedURL(ctx context.Context, key string, opts *SignedURLOptions) (string, error)
- func (b *Bucket) WriteAll(ctx context.Context, key string, p []byte, opts *WriterOptions) (err error)
- type BucketURLOpener
- type CopyOptions
- type ListIterator
- type ListObject
- type ListOptions
- type Reader
- type ReaderOptions
- type SignedURLOptions
- type URLMux
- func (mux *URLMux) BucketSchemes() []string
- func (mux *URLMux) OpenBucket(ctx context.Context, urlstr string) (*Bucket, error)
- func (mux *URLMux) OpenBucketURL(ctx context.Context, u *url.URL) (*Bucket, error)
- func (mux *URLMux) RegisterBucket(scheme string, opener BucketURLOpener)
- func (mux *URLMux) ValidBucketScheme(scheme string) bool
- type Writer
- type WriterOptions
Examples ¶
- Package
- Package (OpenFromURL)
- Package (OpenFromURLWithPrefix)
- Attributes.As
- Bucket.As
- Bucket.Delete
- Bucket.ErrorAs
- Bucket.List
- Bucket.List (WithDelimiter)
- Bucket.NewRangeReader
- Bucket.NewReader
- Bucket.NewWriter
- Bucket.NewWriter (Cancel)
- ListObject.As
- ListOptions
- PrefixedBucket
- Reader.As
- WriterOptions
Constants ¶
const DefaultSignedURLExpiry = 1 * time.Hour
DefaultSignedURLExpiry is the default duration for SignedURLOptions.Expiry.
Variables ¶
var NewBucket = newBucket
NewBucket is intended for use by provider implementations.
var (
	// OpenCensusViews are predefined views for OpenCensus metrics.
	// The views include counts and latency distributions for API method calls,
	// and total bytes read and written.
	// See the example at https://godoc.org/go.opencensus.io/stats/view for usage.
	OpenCensusViews = append(
		oc.Views(pkgName, latencyMeasure),
		&view.View{
			Name:        pkgName + "/bytes_read",
			Measure:     bytesReadMeasure,
			Description: "Sum of bytes read from the provider service.",
			TagKeys:     []tag.Key{oc.ProviderKey},
			Aggregation: view.Sum(),
		},
		&view.View{
			Name:        pkgName + "/bytes_written",
			Measure:     bytesWrittenMeasure,
			Description: "Sum of bytes written to the provider service.",
			TagKeys:     []tag.Key{oc.ProviderKey},
			Aggregation: view.Sum(),
		})
)
Functions ¶
This section is empty.
Types ¶
type Attributes ¶
type Attributes struct {
	// CacheControl specifies caching attributes that providers may use
	// when serving the blob.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
	CacheControl string
	// ContentDisposition specifies whether the blob content is expected to be
	// displayed inline or as an attachment.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition
	ContentDisposition string
	// ContentEncoding specifies the encoding used for the blob's content, if any.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
	ContentEncoding string
	// ContentLanguage specifies the language used in the blob's content, if any.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Language
	ContentLanguage string
	// ContentType is the MIME type of the blob. It will not be empty.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type
	ContentType string
	// Metadata holds key/value pairs associated with the blob.
	// Keys are guaranteed to be in lowercase, even if the backend provider
	// has case-sensitive keys (although note that Metadata written via
	// this package will always be lowercased). If there are duplicate
	// case-insensitive keys (e.g., "foo" and "FOO"), only one value
	// will be kept, and it is undefined which one.
	Metadata map[string]string
	// ModTime is the time the blob was last modified.
	ModTime time.Time
	// Size is the size of the blob's content in bytes.
	Size int64
	// MD5 is an MD5 hash of the blob contents or nil if not available.
	MD5 []byte
	// contains filtered or unexported fields
}
Attributes contains attributes about a blob.
func (*Attributes) As ¶
func (a *Attributes) As(i interface{}) bool
As converts i to provider-specific types. See https://gocloud.dev/concepts/as/ for background information, the "As" examples in this package for examples, and the provider-specific package documentation for the specific types supported for that provider.
Example ¶
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.ObjectAttrs type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/gocloud.dev/blob/gcsblob#hdr-As
	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	attrs, err := b.Attributes(ctx, "gopher.png")
	if err != nil {
		log.Fatal(err)
	}

	var oa storage.ObjectAttrs
	if attrs.As(&oa) {
		fmt.Println(oa.Owner)
	}
}
Output:
type Bucket ¶
type Bucket struct {
// contains filtered or unexported fields
}
Bucket provides an easy and portable way to interact with blobs within a "bucket", including read, write, and list operations. To create a Bucket, use constructors found in provider-specific subpackages.
func OpenBucket ¶ added in v0.10.0
func OpenBucket(ctx context.Context, urlstr string) (*Bucket, error)
OpenBucket opens the bucket identified by the URL given.
See the URLOpener documentation in provider-specific subpackages for details on supported URL formats, and https://gocloud.dev/concepts/urls/ for more information.
In addition to provider-specific query parameters, OpenBucket supports the following query parameters:
- prefix: wraps the resulting Bucket using PrefixedBucket with the given prefix.
func PrefixedBucket ¶ added in v0.14.0
func PrefixedBucket(bucket *Bucket, prefix string) *Bucket
PrefixedBucket returns a *Bucket based on bucket with all keys modified to have prefix, which will usually end with a "/" to target a subdirectory in the bucket.
bucket will be closed and no longer usable after this function returns.
Example ¶
package main

import (
	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is used in https://gocloud.dev/howto/blob/open-bucket/#prefix

	// Variables set up elsewhere:
	var bucket *blob.Bucket

	// Wrap the bucket using blob.PrefixedBucket.
	// The prefix should end with "/", so that the resulting bucket operates
	// in a subfolder.
	bucket = blob.PrefixedBucket(bucket, "a/subfolder/")

	// The original bucket is no longer usable; it has been closed.
	// The wrapped bucket should be closed when done.
	defer bucket.Close()

	// Bucket operations on <key> will be translated to "a/subfolder/<key>".
}
Output:
func (*Bucket) As ¶
func (b *Bucket) As(i interface{}) bool
As converts i to provider-specific types. See https://gocloud.dev/concepts/as/ for background information, the "As" examples in this package for examples, and the provider-specific package documentation for the specific types supported for that provider.
Example ¶
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.Client type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/gocloud.dev/blob/gcsblob#hdr-As

	// This URL will open the bucket "my-bucket" using default credentials.
	ctx := context.Background()
	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	// Access storage.Client fields via gcsClient here.
	var gcsClient *storage.Client
	if b.As(&gcsClient) {
		email, err := gcsClient.ServiceAccount(ctx, "project-name")
		if err != nil {
			log.Fatal(err)
		}
		_ = email
	} else {
		log.Println("Unable to access storage.Client through Bucket.As")
	}
}
Output:
func (*Bucket) Attributes ¶
func (b *Bucket) Attributes(ctx context.Context, key string) (_ *Attributes, err error)
Attributes returns attributes for the blob stored at key.
If the blob does not exist, Attributes returns an error for which gcerrors.Code will return gcerrors.NotFound.
func (*Bucket) Copy ¶ added in v0.13.0
func (b *Bucket) Copy(ctx context.Context, dstKey, srcKey string, opts *CopyOptions) (err error)
Copy the blob stored at srcKey to dstKey. A nil CopyOptions is treated the same as the zero value.
If the source blob does not exist, Copy returns an error for which gcerrors.Code will return gcerrors.NotFound.
If the destination blob already exists, it is overwritten.
func (*Bucket) Delete ¶
func (b *Bucket) Delete(ctx context.Context, key string) (err error)
Delete deletes the blob stored at key.
If the blob does not exist, Delete returns an error for which gcerrors.Code will return gcerrors.NotFound.
Example ¶
package main

import (
	"context"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is used in https://gocloud.dev/howto/blob/data/#deleting

	// Variables set up elsewhere:
	ctx := context.Background()
	var bucket *blob.Bucket

	if err := bucket.Delete(ctx, "foo.txt"); err != nil {
		log.Fatal(err)
	}
}
Output:
func (*Bucket) ErrorAs ¶ added in v0.10.0
func (b *Bucket) ErrorAs(err error, i interface{}) bool
ErrorAs converts err to provider-specific types. ErrorAs panics if i is nil or not a pointer. ErrorAs returns false if err == nil. See https://gocloud.dev/concepts/as/ for background information.
Example ¶
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is specific to the s3blob implementation; it demonstrates
	// access to the underlying awserr.Error type.
	// The types exposed for ErrorAs by s3blob are documented in
	// https://godoc.org/gocloud.dev/blob/s3blob#hdr-As
	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "s3://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	_, err = b.ReadAll(ctx, "nosuchfile")
	if err != nil {
		var awsErr awserr.Error
		if b.ErrorAs(err, &awsErr) {
			fmt.Println(awsErr.Code())
		}
	}
}
Output:
func (*Bucket) Exists ¶ added in v0.11.0
func (b *Bucket) Exists(ctx context.Context, key string) (bool, error)
Exists returns true if a blob exists at key, false if it does not exist, or an error. It is a shortcut for calling Attributes and checking if it returns an error with code gcerrors.NotFound.
func (*Bucket) List ¶
func (b *Bucket) List(opts *ListOptions) *ListIterator
List returns a ListIterator that can be used to iterate over blobs in a bucket, in lexicographical order of UTF-8 encoded keys. The underlying implementation fetches results in pages.
A nil ListOptions is treated the same as the zero value.
List is not guaranteed to include all recently-written blobs; some providers are only eventually consistent.
Example ¶
package main

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"

	"gocloud.dev/blob/fileblob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// Connect to a bucket when your program starts up.
	// This example uses the file-based implementation.
	dir, cleanup := newTempDir()
	defer cleanup()

	// Create the file-based bucket.
	bucket, err := fileblob.OpenBucket(dir, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// Create some blob objects for listing: "foo[0..4].txt".
	ctx := context.Background()
	for i := 0; i < 5; i++ {
		if err := bucket.WriteAll(ctx, fmt.Sprintf("foo%d.txt", i), []byte("Go Cloud Development Kit"), nil); err != nil {
			log.Fatal(err)
		}
	}

	// Iterate over them.
	// This will list the blobs created above because fileblob is strongly
	// consistent, but is not guaranteed to work on all providers.
	iter := bucket.List(nil)
	for {
		obj, err := iter.Next(ctx)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(obj.Key)
	}
}

func newTempDir() (string, func()) {
	dir, err := ioutil.TempDir("", "go-cloud-blob-example")
	if err != nil {
		panic(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}
Output:
foo0.txt
foo1.txt
foo2.txt
foo3.txt
foo4.txt
Example (WithDelimiter) ¶
package main

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"

	"gocloud.dev/blob"
	"gocloud.dev/blob/fileblob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// Connect to a bucket when your program starts up.
	// This example uses the file-based implementation.
	dir, cleanup := newTempDir()
	defer cleanup()

	// Create the file-based bucket.
	bucket, err := fileblob.OpenBucket(dir, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// Create some blob objects in a hierarchy.
	ctx := context.Background()
	for _, key := range []string{
		"dir1/subdir/a.txt",
		"dir1/subdir/b.txt",
		"dir2/c.txt",
		"d.txt",
	} {
		if err := bucket.WriteAll(ctx, key, []byte("Go Cloud Development Kit"), nil); err != nil {
			log.Fatal(err)
		}
	}

	// list lists files in b starting with prefix. It uses the delimiter "/",
	// and recurses into "directories", adding 2 spaces to indent each time.
	// It will list the blobs created above because fileblob is strongly
	// consistent, but is not guaranteed to work on all providers.
	var list func(context.Context, *blob.Bucket, string, string)
	list = func(ctx context.Context, b *blob.Bucket, prefix, indent string) {
		iter := b.List(&blob.ListOptions{
			Delimiter: "/",
			Prefix:    prefix,
		})
		for {
			obj, err := iter.Next(ctx)
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s%s\n", indent, obj.Key)
			if obj.IsDir {
				list(ctx, b, obj.Key, indent+"  ")
			}
		}
	}
	list(ctx, bucket, "", "")
}

func newTempDir() (string, func()) {
	dir, err := ioutil.TempDir("", "go-cloud-blob-example")
	if err != nil {
		panic(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}
Output:
d.txt
dir1/
  dir1/subdir/
    dir1/subdir/a.txt
    dir1/subdir/b.txt
dir2/
  dir2/c.txt
func (*Bucket) NewRangeReader ¶
func (b *Bucket) NewRangeReader(ctx context.Context, key string, offset, length int64, opts *ReaderOptions) (_ *Reader, err error)
NewRangeReader returns a Reader to read content from the blob stored at key. It reads at most length bytes starting at offset (>= 0). If length is negative, it will read till the end of the blob.
If the blob does not exist, NewRangeReader returns an error for which gcerrors.Code will return gcerrors.NotFound. Exists is a lighter-weight way to check for existence.
A nil ReaderOptions is treated the same as the zero value.
The caller must call Close on the returned Reader when done reading.
Example ¶
package main

import (
	"context"
	"io"
	"log"
	"os"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is used in https://gocloud.dev/howto/blob/data/#reading

	// Variables set up elsewhere:
	ctx := context.Background()
	var bucket *blob.Bucket

	// Open the key "foo.txt" for reading at offset 1024 and read up to 4096 bytes.
	r, err := bucket.NewRangeReader(ctx, "foo.txt", 1024, 4096, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	// Copy from the read range to stdout.
	if _, err := io.Copy(os.Stdout, r); err != nil {
		log.Fatal(err)
	}
}
Output:
func (*Bucket) NewReader ¶
func (b *Bucket) NewReader(ctx context.Context, key string, opts *ReaderOptions) (*Reader, error)
NewReader is a shortcut for NewRangeReader with offset=0 and length=-1.
Example ¶
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is used in https://gocloud.dev/howto/blob/data/#reading

	// Variables set up elsewhere:
	ctx := context.Background()
	var bucket *blob.Bucket

	// Open the key "foo.txt" for reading with the default options.
	r, err := bucket.NewReader(ctx, "foo.txt", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	// Readers also have a limited view of the blob's metadata.
	fmt.Println("Content-Type:", r.ContentType())
	fmt.Println()

	// Copy from the reader to stdout.
	if _, err := io.Copy(os.Stdout, r); err != nil {
		log.Fatal(err)
	}
}
Output:
func (*Bucket) NewWriter ¶
func (b *Bucket) NewWriter(ctx context.Context, key string, opts *WriterOptions) (_ *Writer, err error)
NewWriter returns a Writer that writes to the blob stored at key. A nil WriterOptions is treated the same as the zero value.
If a blob with this key already exists, it will be replaced. The blob being written is not guaranteed to be readable until Close has been called; until then, any previous blob will still be readable. Even after Close is called, newly written blobs are not guaranteed to be returned from List; some providers are only eventually consistent.
The returned Writer will store ctx for later use in Write and/or Close. To abort a write, cancel ctx; otherwise, it must remain open until Close is called.
The caller must call Close on the returned Writer, even if the write is aborted.
Example ¶
package main

import (
	"context"
	"fmt"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is used in https://gocloud.dev/howto/blob/data/#writing

	// Variables set up elsewhere:
	ctx := context.Background()
	var bucket *blob.Bucket

	// Open the key "foo.txt" for writing with the default options.
	w, err := bucket.NewWriter(ctx, "foo.txt", nil)
	if err != nil {
		log.Fatal(err)
	}
	_, writeErr := fmt.Fprintln(w, "Hello, World!")
	// Always check the return value of Close when writing.
	closeErr := w.Close()
	if writeErr != nil {
		log.Fatal(writeErr)
	}
	if closeErr != nil {
		log.Fatal(closeErr)
	}
}
Output:
Example (Cancel) ¶
package main

import (
	"context"
	"log"

	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is used in https://gocloud.dev/howto/blob/data/#writing

	// Variables set up elsewhere:
	ctx := context.Background()
	var bucket *blob.Bucket

	// Create a cancelable context from the existing context.
	writeCtx, cancelWrite := context.WithCancel(ctx)
	defer cancelWrite()

	// Open the key "foo.txt" for writing with the default options.
	w, err := bucket.NewWriter(writeCtx, "foo.txt", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Assume some writes happened and we encountered an error.
	// Now we want to abort the write.

	if err != nil {
		// First cancel the context.
		cancelWrite()
		// You must still close the writer to avoid leaking resources.
		w.Close()
	}
}
Output:
func (*Bucket) ReadAll ¶
func (b *Bucket) ReadAll(ctx context.Context, key string) (_ []byte, err error)
ReadAll is a shortcut for creating a Reader via NewReader with nil ReaderOptions, and reading the entire blob.
func (*Bucket) SignedURL ¶
func (b *Bucket) SignedURL(ctx context.Context, key string, opts *SignedURLOptions) (string, error)
SignedURL returns a URL that can be used to GET the blob for the duration specified in opts.Expiry.
A nil SignedURLOptions is treated the same as the zero value.
It is valid to call SignedURL for a key that does not exist.
If the provider implementation does not support this functionality, SignedURL will return an error for which gcerrors.Code will return gcerrors.Unimplemented.
func (*Bucket) WriteAll ¶
func (b *Bucket) WriteAll(ctx context.Context, key string, p []byte, opts *WriterOptions) (err error)
WriteAll is a shortcut for creating a Writer via NewWriter and writing p.
If opts.ContentMD5 is not set, WriteAll will compute the MD5 of p and use it as the ContentMD5 option for the Writer it creates.
type BucketURLOpener ¶ added in v0.10.0
BucketURLOpener represents types that can open buckets based on a URL. The opener must not modify the URL argument. OpenBucketURL must be safe to call from multiple goroutines.
This interface is generally implemented by types in driver packages.
type CopyOptions ¶ added in v0.13.0
type CopyOptions struct {
	// BeforeCopy is a callback that will be called before the copy is
	// initiated.
	//
	// asFunc converts its argument to provider-specific types.
	// See https://gocloud.dev/concepts/as/ for background information.
	BeforeCopy func(asFunc func(interface{}) bool) error
}
CopyOptions sets options for Copy.
type ListIterator ¶
type ListIterator struct {
// contains filtered or unexported fields
}
ListIterator iterates over List results.
func (*ListIterator) Next ¶
func (i *ListIterator) Next(ctx context.Context) (*ListObject, error)
Next returns a *ListObject for the next blob. It returns (nil, io.EOF) if there are no more.
type ListObject ¶
type ListObject struct {
	// Key is the key for this blob.
	Key string
	// ModTime is the time the blob was last modified.
	ModTime time.Time
	// Size is the size of the blob's content in bytes.
	Size int64
	// MD5 is an MD5 hash of the blob contents or nil if not available.
	MD5 []byte
	// IsDir indicates that this result represents a "directory" in the
	// hierarchical namespace, ending in ListOptions.Delimiter. Key can be
	// passed as ListOptions.Prefix to list items in the "directory".
	// Fields other than Key and IsDir will not be set if IsDir is true.
	IsDir bool
	// contains filtered or unexported fields
}
ListObject represents a single blob returned from List.
func (*ListObject) As ¶
func (o *ListObject) As(i interface{}) bool
As converts i to provider-specific types. See https://gocloud.dev/concepts/as/ for background information, the "As" examples in this package for examples, and the provider-specific package documentation for the specific types supported for that provider.
Example ¶
package main

import (
	"context"
	"io"
	"log"

	"cloud.google.com/go/storage"
	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.ObjectAttrs type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/gocloud.dev/blob/gcsblob#hdr-As
	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	iter := b.List(nil)
	for {
		obj, err := iter.Next(ctx)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Access storage.ObjectAttrs via oa here.
		var oa storage.ObjectAttrs
		if obj.As(&oa) {
			_ = oa.Owner
		}
	}
}
Output:
type ListOptions ¶
type ListOptions struct {
	// Prefix indicates that only blobs with a key starting with this prefix
	// should be returned.
	Prefix string
	// Delimiter sets the delimiter used to define a hierarchical namespace,
	// like a filesystem with "directories". It is highly recommended that you
	// use "" or "/" as the Delimiter. Other values should work through this API,
	// but provider UIs generally assume "/".
	//
	// An empty delimiter means that the bucket is treated as a single flat
	// namespace.
	//
	// A non-empty delimiter means that any result with the delimiter in its key
	// after Prefix is stripped will be returned with ListObject.IsDir = true,
	// ListObject.Key truncated after the delimiter, and zero values for other
	// ListObject fields. These results represent "directories". Multiple results
	// in a "directory" are returned as a single result.
	Delimiter string
	// BeforeList is a callback that will be called before each call to the
	// underlying provider's list functionality.
	// asFunc converts its argument to provider-specific types.
	// See https://gocloud.dev/concepts/as/ for background information.
	BeforeList func(asFunc func(interface{}) bool) error
}
ListOptions sets options for listing blobs via Bucket.List.
Example ¶
package main

import (
	"context"
	"io"
	"log"

	"cloud.google.com/go/storage"
	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.Query type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/gocloud.dev/blob/gcsblob#hdr-As
	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	beforeList := func(as func(interface{}) bool) error {
		// Access storage.Query via q here.
		var q *storage.Query
		if as(&q) {
			_ = q.Delimiter
		}
		return nil
	}

	iter := b.List(&blob.ListOptions{Prefix: "", Delimiter: "/", BeforeList: beforeList})
	for {
		obj, err := iter.Next(ctx)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		_ = obj
	}
}
Output:
type Reader ¶
type Reader struct {
// contains filtered or unexported fields
}
Reader reads bytes from a blob. It implements io.ReadCloser, and must be closed after reads are finished.
func (*Reader) As ¶
func (r *Reader) As(i interface{}) bool
As converts i to provider-specific types. See https://gocloud.dev/concepts/as/ for background information, the "As" examples in this package for examples, and the provider-specific package documentation for the specific types supported for that provider.
Example ¶
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.Reader type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/gocloud.dev/blob/gcsblob#hdr-As
	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	r, err := b.NewReader(ctx, "gopher.png", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	// Access storage.Reader via sr here.
	var sr *storage.Reader
	if r.As(&sr) {
		_ = sr.Attrs
	}
}
Output:
func (*Reader) Close ¶
func (r *Reader) Close() error
Close implements io.Closer (https://golang.org/pkg/io/#Closer).
func (*Reader) ContentType ¶
func (r *Reader) ContentType() string
ContentType returns the MIME type of the blob.
func (*Reader) Read ¶
func (r *Reader) Read(p []byte) (int, error)
Read implements io.Reader (https://golang.org/pkg/io/#Reader).
type ReaderOptions ¶
type ReaderOptions struct {
	// BeforeRead is a callback that will be called exactly once, before
	// any data is read (unless NewReader returns an error before then, in which
	// case it may not be called at all).
	//
	// asFunc converts its argument to provider-specific types.
	// See https://gocloud.dev/concepts/as/ for background information.
	BeforeRead func(asFunc func(interface{}) bool) error
}
ReaderOptions sets options for NewReader and NewRangeReader.
type SignedURLOptions ¶
type SignedURLOptions struct {
	// Expiry sets how long the returned URL is valid for.
	// Defaults to DefaultSignedURLExpiry.
	Expiry time.Duration
}
SignedURLOptions sets options for SignedURL.
type URLMux ¶ added in v0.10.0
type URLMux struct {
// contains filtered or unexported fields
}
URLMux is a URL opener multiplexer. It matches the scheme of the URLs against a set of registered schemes and calls the opener that matches the URL's scheme. See https://gocloud.dev/concepts/urls/ for more information.
The zero value is a multiplexer with no registered schemes.
func DefaultURLMux ¶ added in v0.10.0
func DefaultURLMux() *URLMux
DefaultURLMux returns the URLMux used by OpenBucket.
Driver packages can use this to register their BucketURLOpener on the mux.
func (*URLMux) BucketSchemes ¶ added in v0.13.0
func (mux *URLMux) BucketSchemes() []string
BucketSchemes returns a sorted slice of the registered Bucket schemes.
func (*URLMux) OpenBucket ¶ added in v0.10.0
func (mux *URLMux) OpenBucket(ctx context.Context, urlstr string) (*Bucket, error)
OpenBucket calls OpenBucketURL with the URL parsed from urlstr. OpenBucket is safe to call from multiple goroutines.
func (*URLMux) OpenBucketURL ¶ added in v0.10.0
func (mux *URLMux) OpenBucketURL(ctx context.Context, u *url.URL) (*Bucket, error)
OpenBucketURL dispatches the URL to the opener that is registered with the URL's scheme. OpenBucketURL is safe to call from multiple goroutines.
func (*URLMux) RegisterBucket ¶ added in v0.10.0
func (mux *URLMux) RegisterBucket(scheme string, opener BucketURLOpener)
RegisterBucket registers the opener with the given scheme. If an opener already exists for the scheme, RegisterBucket panics.
func (*URLMux) ValidBucketScheme ¶ added in v0.13.0
func (mux *URLMux) ValidBucketScheme(scheme string) bool
ValidBucketScheme returns true iff scheme has been registered for Buckets.
type Writer ¶
type Writer struct {
// contains filtered or unexported fields
}
Writer writes bytes to a blob.
It implements io.WriteCloser (https://golang.org/pkg/io/#WriteCloser), and must be closed after all writes are done.
func (*Writer) Close ¶
func (w *Writer) Close() error
Close closes the blob writer. The write operation is not guaranteed to have succeeded until Close returns with no error. Close may return an error if the context provided to create the Writer is canceled or reaches its deadline.
func (*Writer) Write ¶
func (w *Writer) Write(p []byte) (int, error)
Write implements the io.Writer interface (https://golang.org/pkg/io/#Writer).
Writes may happen asynchronously, so the returned error can be nil even if the actual write eventually fails. The write is only guaranteed to have succeeded if Close returns no error.
type WriterOptions ¶
type WriterOptions struct {
	// BufferSize changes the default size in bytes of the chunks that
	// Writer will upload in a single request; larger blobs will be split into
	// multiple requests.
	//
	// This option may be ignored by some provider implementations.
	//
	// If 0, the provider implementation will choose a reasonable default.
	//
	// If the Writer is used to do many small writes concurrently, using a
	// smaller BufferSize may reduce memory usage.
	BufferSize int

	// CacheControl specifies caching attributes that providers may use
	// when serving the blob.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
	CacheControl string

	// ContentDisposition specifies whether the blob content is expected to be
	// displayed inline or as an attachment.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition
	ContentDisposition string

	// ContentEncoding specifies the encoding used for the blob's content, if any.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
	ContentEncoding string

	// ContentLanguage specifies the language used in the blob's content, if any.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Language
	ContentLanguage string

	// ContentType specifies the MIME type of the blob being written. If not set,
	// it will be inferred from the content using the algorithm described at
	// http://mimesniff.spec.whatwg.org/.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type
	ContentType string

	// ContentMD5 is used as a message integrity check.
	// If len(ContentMD5) > 0, the MD5 hash of the bytes written must match
	// ContentMD5, or Close will return an error without completing the write.
	// https://tools.ietf.org/html/rfc1864
	ContentMD5 []byte

	// Metadata holds key/value strings to be associated with the blob, or nil.
	// Keys may not be empty, and are lowercased before being written.
	// Duplicate case-insensitive keys (e.g., "foo" and "FOO") will result in
	// an error.
	Metadata map[string]string

	// BeforeWrite is a callback that will be called exactly once, before
	// any data is written (unless NewWriter returns an error, in which case
	// it will not be called at all). Note that this is not necessarily during
	// or after the first Write call, as providers may buffer bytes before
	// sending an upload request.
	//
	// asFunc converts its argument to provider-specific types.
	// See https://gocloud.dev/concepts/as/ for background information.
	BeforeWrite func(asFunc func(interface{}) bool) error
}
WriterOptions sets options for NewWriter.
Example ¶
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
	"gocloud.dev/blob"
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.Writer type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/gocloud.dev/blob/gcsblob#hdr-As
	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	beforeWrite := func(as func(interface{}) bool) error {
		var sw *storage.Writer
		if as(&sw) {
			fmt.Println(sw.ChunkSize)
		}
		return nil
	}

	options := blob.WriterOptions{BeforeWrite: beforeWrite}
	if err := b.WriteAll(ctx, "newfile.txt", []byte("hello\n"), &options); err != nil {
		log.Fatal(err)
	}
}
Output:
Directories ¶
Path | Synopsis
---|---
azureblob | Package azureblob provides a blob implementation that uses Azure Storage's BlockBlob.
driver | Package driver defines a set of interfaces that the blob package uses to interact with the underlying blob services.
drivertest | Package drivertest provides a conformance test for implementations of driver.
fileblob | Package fileblob provides a blob implementation that uses the filesystem.
gcsblob | Package gcsblob provides a blob implementation that uses GCS.
memblob | Package memblob provides an in-memory blob implementation.
s3blob | Package s3blob provides a blob implementation that uses S3.