Documentation

Overview
Package blob provides an easy way to interact with Blob objects within a bucket. It utilizes standard io packages to handle reads and writes.
Index
- func GetBlobName(name string) string
- func IsNotExist(err error) bool
- type Bucket
- func (b *Bucket) Attributes(ctx context.Context, key string, isUID bool) (*driver.ObjectAttrs, error)
- func (b *Bucket) CreateArea(ctx context.Context, area string, groups []string) error
- func (b *Bucket) Delete(ctx context.Context, key string) error
- func (b *Bucket) Move(ctx context.Context, keySrc string, keyDst string) error
- func (b *Bucket) NewRangeReader(ctx context.Context, key string, offset, length int64, exactKeyName bool) (*Reader, error)
- func (b *Bucket) NewReader(ctx context.Context, key string, exactKeyName bool) (*Reader, error)
- func (b *Bucket) NewWriter(ctx context.Context, key string, opt *WriterOptions) (*Writer, error)
- type Reader
- type Writer
- type WriterOptions
Examples
Constants
This section is empty.
Variables
This section is empty.
Functions
func GetBlobName

func GetBlobName(name string) string
func IsNotExist

func IsNotExist(err error) bool

IsNotExist returns whether an error is a driver.Error with the NotFound kind.
Types
type Bucket
type Bucket struct {
// contains filtered or unexported fields
}
Bucket manages the underlying blob service and provides read, write, and delete operations on objects within it.
func (*Bucket) Attributes
func (b *Bucket) Attributes(ctx context.Context, key string, isUID bool) (*driver.ObjectAttrs, error)
Attributes returns attributes for the blob. If the specified object does not exist, Attributes must return an error for which ErrorCode returns gcerrors.NotFound. The portable type will not modify the returned Attributes.
func (*Bucket) CreateArea

func (b *Bucket) CreateArea(ctx context.Context, area string, groups []string) error

CreateArea sets up a new area with the given name. Only the local filesystem implementation needs to support this; object-storage-based providers use an object's whole path as its key, so there is no need to pre-create or set up an area.
func (*Bucket) Delete

func (b *Bucket) Delete(ctx context.Context, key string) error

Delete deletes the object associated with key. It returns an error if that object does not exist, which can be checked by calling IsNotExist.
func (*Bucket) Move

func (b *Bucket) Move(ctx context.Context, keySrc string, keyDst string) error

Move moves the object associated with keySrc to the new location keyDst. It returns an error if the source object does not exist, which can be checked by calling IsNotExist.
func (*Bucket) NewRangeReader
func (b *Bucket) NewRangeReader(ctx context.Context, key string, offset, length int64, exactKeyName bool) (*Reader, error)
NewRangeReader returns a Reader that reads part of an object, reading at most length bytes starting at the given offset. If length is 0, it will read only the metadata. If length is negative, it will read till the end of the object. It returns an error if that object does not exist, which can be checked by calling IsNotExist.
The caller must call Close on the returned Reader when done reading.
Example
package main

import (
	"context"
	"io"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"

	"github.com/Lioric/go-cloud/blob/fileblob"
)

func main() {
	// Connect to a bucket when your program starts up.
	// This example uses the file-based implementation.
	dir, cleanup := newTempDir()
	defer cleanup()

	// Write a file to read using the bucket.
	err := ioutil.WriteFile(filepath.Join(dir, "foo.txt"), []byte("Hello, World!\n"), 0666)
	if err != nil {
		log.Fatal(err)
	}

	// Create the file-based bucket.
	bucket, err := fileblob.NewBucket(dir)
	if err != nil {
		log.Fatal(err)
	}

	// Open a reader using the blob's key at a specific offset and length.
	// The final argument asks the implementation to use the key exactly as provided.
	ctx := context.Background()
	r, err := bucket.NewRangeReader(ctx, "foo.txt", 1, 4, true)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	// The blob reader implements io.Reader, so we can use any function that
	// accepts an io.Reader.
	if _, err := io.Copy(os.Stdout, r); err != nil {
		log.Fatal(err)
	}
}

func newTempDir() (string, func()) {
	dir, err := ioutil.TempDir("", "go-cloud-blob-example")
	if err != nil {
		panic(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}

Output:

ello
func (*Bucket) NewReader

func (b *Bucket) NewReader(ctx context.Context, key string, exactKeyName bool) (*Reader, error)

NewReader returns a Reader to read from an object, or an error when the object is not found by the given key, which can be checked by calling IsNotExist. If exactKeyName is true, the underlying implementation must use the key name exactly as provided.

The caller must call Close on the returned Reader when done reading.
Example

package main

import (
	"context"
	"io"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"

	"github.com/Lioric/go-cloud/blob/fileblob"
)

func main() {
	// Connect to a bucket when your program starts up.
	// This example uses the file-based implementation.
	dir, cleanup := newTempDir()
	defer cleanup()

	// Write a file to read using the bucket.
	err := ioutil.WriteFile(filepath.Join(dir, "foo.txt"), []byte("Hello, World!\n"), 0666)
	if err != nil {
		log.Fatal(err)
	}

	// Create the file-based bucket.
	bucket, err := fileblob.NewBucket(dir)
	if err != nil {
		log.Fatal(err)
	}

	// Open a reader using the blob's key. The final argument asks the
	// implementation to use the key exactly as provided.
	ctx := context.Background()
	r, err := bucket.NewReader(ctx, "foo.txt", true)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	// The blob reader implements io.Reader, so we can use any function that
	// accepts an io.Reader.
	if _, err := io.Copy(os.Stdout, r); err != nil {
		log.Fatal(err)
	}
}

func newTempDir() (string, func()) {
	dir, err := ioutil.TempDir("", "go-cloud-blob-example")
	if err != nil {
		panic(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}

Output:

Hello, World!
func (*Bucket) NewWriter

func (b *Bucket) NewWriter(ctx context.Context, key string, opt *WriterOptions) (*Writer, error)

NewWriter returns a Writer that writes to an object associated with key.

If an object with this key already exists, it will be replaced; otherwise a new object is created. The object is not guaranteed to be available until Close has been called.
The returned Writer may store ctx for later use in Write and/or Close. The ctx must remain open until the returned Writer is closed.
The caller must call Close on the returned Writer when done writing.
Example
package main

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"

	"github.com/Lioric/go-cloud/blob"
	"github.com/Lioric/go-cloud/blob/fileblob"
)

func main() {
	// Connect to a bucket when your program starts up.
	// This example uses the file-based implementation.
	dir, cleanup := newTempDir()
	defer cleanup()

	bucket, err := fileblob.NewBucket(dir)
	if err != nil {
		log.Fatal(err)
	}

	// Open a writer using the key "foo.txt" and the default options.
	ctx := context.Background()
	// fileblob doesn't support custom content-type yet, see
	// https://github.com/Lioric/go-cloud/issues/111.
	w, err := bucket.NewWriter(ctx, "foo.txt", &blob.WriterOptions{
		ContentType: "application/octet-stream",
	})
	if err != nil {
		log.Fatal(err)
	}

	// The blob writer implements io.Writer, so we can use any function that
	// accepts an io.Writer. A writer must always be closed.
	_, printErr := fmt.Fprintln(w, "Hello, World!")
	closeErr := w.Close()
	if printErr != nil {
		log.Fatal(printErr)
	}
	if closeErr != nil {
		log.Fatal(closeErr)
	}

	// Copy the written blob to stdout. The final argument asks the
	// implementation to use the key exactly as provided.
	r, err := bucket.NewReader(ctx, "foo.txt", true)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	if _, err := io.Copy(os.Stdout, r); err != nil {
		log.Fatal(err)
	}
}

func newTempDir() (string, func()) {
	dir, err := ioutil.TempDir("", "go-cloud-blob-example")
	if err != nil {
		panic(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}

Output:

Hello, World!
type Reader
type Reader struct {
// contains filtered or unexported fields
}
Reader implements io.ReadCloser to read a blob. It must be closed after reads are finished.
func (*Reader) Attrs
func (r *Reader) Attrs() *driver.ObjectAttrs
Attrs returns metadata attributes of the blob object.
func (*Reader) ContentType

func (r *Reader) ContentType() string

ContentType returns the MIME type of the blob object.
func (*Reader) ModTime

func (r *Reader) ModTime() time.Time

ModTime returns the modification time of the blob object. This is optional and will be the time.Time zero value if unknown.
type Writer
type Writer struct {
// contains filtered or unexported fields
}
Writer implements io.WriteCloser to write to a blob. It must be closed after all writes are done.
type WriterOptions
type WriterOptions struct {
	// BufferSize changes the default size in bytes of the maximum part Writer can
	// write in a single request. Larger objects will be split into multiple requests.
	//
	// The support specification of this operation varies depending on the underlying
	// blob service. If zero value is given, it is set to a reasonable default value.
	// If negative value is given, it will be either disabled (if supported by the
	// service), which means Writer will write as a whole, or reset to default value.
	// It could be a no-op when not supported at all.
	//
	// If the Writer is used to write small objects concurrently, set the buffer size
	// to a smaller size to avoid high memory usage.
	BufferSize int

	// ContentType specifies the MIME type of the object being written. If not set,
	// then it will be inferred from the content using the algorithm described at
	// http://mimesniff.spec.whatwg.org/
	ContentType string

	// Tiddler metadata
	Meta     map[string]string
	Revision int64

	// Extra options for platform specific implementations
	Id    int
	Name  string
	Extra map[string]string

	// Size of the text segment
	ContentSize int
}
WriterOptions controls Writer behaviors.
Directories

Path | Synopsis |
---|---|
driver | Package driver defines a set of interfaces that the blob package uses to interact with the underlying blob services. |
drivertest | Package drivertest provides a conformance test for implementations of driver. |
fileblob | Package fileblob provides a bucket implementation that operates on the local filesystem. |
gcsblob | Package gcsblob provides an implementation of the blob API on GCS. |
s3blob | Package s3blob provides an implementation of the blob API on S3. |