Documentation ¶
Overview ¶
Package b2 provides a high-level interface to Backblaze's B2 cloud storage service.
It is designed to abstract away the Backblaze API details by providing familiar Go interfaces: an io.Writer for object storage and an io.Reader for object download. Handling of transient errors, including network and authentication timeouts, is transparent.
Methods that perform network requests accept a context.Context argument. Callers should use the context's cancellation abilities to end requests early, or to provide timeout or deadline guarantees.
This package is in development and may make API changes.
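The typical flow is to create a Client, resolve a Bucket, and then treat object writers and readers as ordinary io.Writer and io.Reader values. A minimal sketch is below; it assumes a NewClient constructor taking an account ID and application key (not listed in this index), assumes the blazer import path, and omits production-grade error handling.

package main

import (
    "context"
    "io"
    "log"
    "os"

    "github.com/kurin/blazer/b2" // import path assumed
)

func main() {
    ctx := context.Background()

    // NewClient is assumed here; consult the package for the exact constructor.
    client, err := b2.NewClient(ctx, os.Getenv("B2_ACCOUNT_ID"), os.Getenv("B2_APP_KEY"))
    if err != nil {
        log.Fatal(err)
    }

    bucket, err := client.Bucket(ctx, "my-bucket")
    if err != nil {
        log.Fatal(err)
    }

    // Upload: the Writer behaves like any io.WriteCloser.
    w := bucket.Object("hello.txt").NewWriter(ctx)
    if _, err := io.WriteString(w, "hello, b2"); err != nil {
        log.Fatal(err)
    }
    if err := w.Close(); err != nil { // always check Close
        log.Fatal(err)
    }

    // Download: the Reader behaves like an io.Reader.
    r := bucket.Object("hello.txt").NewReader(ctx)
    if _, err := io.Copy(os.Stdout, r); err != nil {
        log.Fatal(err)
    }
}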
Index ¶
- Constants
- func IsNotExist(err error) bool
- func IsUpdateConflict(err error) bool
- type Attrs
- type Bucket
- func (b *Bucket) Attrs(ctx context.Context) (*BucketAttrs, error)
- func (b *Bucket) AuthToken(ctx context.Context, prefix string, valid time.Duration) (string, error)
- func (b *Bucket) BaseURL() string
- func (b *Bucket) CreateKey(ctx context.Context, name string, opts ...KeyOption) (*Key, error)
- func (b *Bucket) Delete(ctx context.Context) error
- func (b *Bucket) List(ctx context.Context, opts ...ListOption) *ObjectIterator
- func (b *Bucket) Name() string
- func (b *Bucket) Object(name string) *Object
- func (b *Bucket) Reveal(ctx context.Context, name string) error
- func (b *Bucket) Update(ctx context.Context, attrs *BucketAttrs) error
- type BucketAttrs
- type BucketType
- type Client
- func (c *Client) Bucket(ctx context.Context, name string) (*Bucket, error)
- func (c *Client) CreateKey(ctx context.Context, name string, opts ...KeyOption) (*Key, error)
- func (c *Client) ListBuckets(ctx context.Context) ([]*Bucket, error)
- func (c *Client) ListKeys(ctx context.Context, count int, cursor string) ([]*Key, string, error)
- func (c *Client) NewBucket(ctx context.Context, name string, attrs *BucketAttrs) (*Bucket, error)
- func (c *Client) ServeHTTP(rw http.ResponseWriter, req *http.Request)
- func (c *Client) Status() *StatusInfo
- type ClientOption
- func APIBase(url string) ClientOption
- func DefaultWriterOptions(opts ...WriterOption) ClientOption
- func ExpireSomeAuthTokens() ClientOption
- func FailSomeUploads() ClientOption
- func ForceCapExceeded() ClientOption
- func Transport(rt http.RoundTripper) ClientOption
- func UserAgent(agent string) ClientOption
- type Key
- type KeyOption
- type LifecycleRule
- type ListOption
- type MethodList
- type Object
- func (o *Object) Attrs(ctx context.Context) (*Attrs, error)
- func (o *Object) AuthURL(ctx context.Context, valid time.Duration, b2cd string) (*url.URL, error)
- func (o *Object) Delete(ctx context.Context) error
- func (o *Object) Hide(ctx context.Context) error
- func (o *Object) Name() string
- func (o *Object) NewRangeReader(ctx context.Context, offset, length int64) *Reader
- func (o *Object) NewReader(ctx context.Context) *Reader
- func (o *Object) NewWriter(ctx context.Context, opts ...WriterOption) *Writer
- func (o *Object) URL() string
- type ObjectIterator
- type ObjectState
- type Reader
- type ReaderStatus
- type StatusInfo
- type Writer
- type WriterOption
- type WriterStatus
Constants ¶
const (
    UnknownType BucketType = ""
    Private                = "allPrivate"
    Public                 = "allPublic"
    Snapshot               = "snapshot"
)
Variables ¶
This section is empty.
Functions ¶
func IsNotExist ¶
IsNotExist reports whether a given error indicates that an object or bucket does not exist.
func IsUpdateConflict ¶
IsUpdateConflict reports whether a given error is the result of a bucket update conflict.
Types ¶
type Attrs ¶
type Attrs struct {
    Name            string            // Not used on upload.
    Size            int64             // Not used on upload.
    ContentType     string            // Used on upload, default is "application/octet-stream".
    Status          ObjectState       // Not used on upload.
    UploadTimestamp time.Time         // Not used on upload.
    SHA1            string            // Can be "none" for large files. If set on upload, will be used for large files.
    LastModified    time.Time         // If present, and there are fewer than 10 keys in the Info field, this is saved on upload.
    Info            map[string]string // Save arbitrary metadata on upload, but limited to 10 keys.
}
Attrs holds an object's metadata.
type Bucket ¶
type Bucket struct {
// contains filtered or unexported fields
}
Bucket is a reference to a B2 bucket.
func (*Bucket) Attrs ¶
func (b *Bucket) Attrs(ctx context.Context) (*BucketAttrs, error)
Attrs retrieves and returns the current bucket's attributes.
func (*Bucket) AuthToken ¶
AuthToken returns an authorization token that can be used to access objects in a private bucket. Only objects that begin with prefix can be accessed. The token expires after the given duration.
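For example, a token scoped to a prefix might be issued like this (bucket and ctx are assumed from the overview sketch):

// Allow downloads of objects under "reports/" for one hour.
token, err := bucket.AuthToken(ctx, "reports/", time.Hour)
if err != nil {
    // handle error
}
// Hand token to the downloader along with the object URLs.
_ = token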
func (*Bucket) CreateKey ¶ added in v0.5.0
CreateKey creates a scoped application key that is valid only for this bucket.
func (*Bucket) List ¶ added in v0.4.0
func (b *Bucket) List(ctx context.Context, opts ...ListOption) *ObjectIterator
List returns an iterator for selecting objects in a bucket. The default behavior, with no options, is to list all currently un-hidden objects.
func (*Bucket) Object ¶
Object returns a reference to the named object in the bucket. Hidden objects cannot be referenced in this manner; they can only be found via the bucket's List iterator.
func (*Bucket) Reveal ¶
Reveal unhides (if hidden) the named object. If there are multiple objects of a given name, it will reveal the most recent.
func (*Bucket) Update ¶
func (b *Bucket) Update(ctx context.Context, attrs *BucketAttrs) error
Update modifies the given bucket with new attributes. It is possible that this method could fail with an update conflict, in which case you should retrieve the latest bucket attributes with Attrs and try again.
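A sketch of the fetch-modify-retry pattern suggested above, using IsUpdateConflict (the Info key is illustrative):

for {
    attrs, err := bucket.Attrs(ctx)
    if err != nil {
        return err
    }
    if attrs.Info == nil {
        attrs.Info = map[string]string{}
    }
    attrs.Info["owner"] = "data-team" // illustrative modification
    err = bucket.Update(ctx, attrs)
    if err == nil {
        return nil
    }
    if !b2.IsUpdateConflict(err) {
        return err
    }
    // Another writer changed the bucket; refetch and retry.
}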
type BucketAttrs ¶
type BucketAttrs struct {
    // Type lists or sets the new bucket type. If Type is UnknownType during a
    // bucket.Update, the type is not changed.
    Type BucketType

    // Info records user data, limited to ten keys. If nil during a
    // bucket.Update, the existing bucket info is not modified. A bucket's
    // metadata can be removed by updating with an empty map.
    Info map[string]string

    // Reports or sets bucket lifecycle rules. If nil during a bucket.Update,
    // the rules are not modified. A bucket's rules can be removed by updating
    // with an empty slice.
    LifecycleRules []LifecycleRule
}
BucketAttrs holds a bucket's metadata attributes.
type BucketType ¶
type BucketType string
type Client ¶
type Client struct {
// contains filtered or unexported fields
}
Client is a Backblaze B2 client.
func (*Client) CreateKey ¶ added in v0.5.0
CreateKey creates a global application key that is valid for all buckets in this project. The key's secret will only be accessible on the object returned from this call.
func (*Client) ListBuckets ¶
ListBuckets returns all the available buckets.
func (*Client) ListKeys ¶ added in v0.5.0
ListKeys lists all the keys associated with this project. It takes the maximum number of keys it should return in a call, as well as a cursor (which should be empty for the initial call). It will return up to count keys, as well as the cursor for the next invocation.
ListKeys returns io.EOF when there are no more keys, although it may do so concurrently with the final set of keys.
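A sketch of paging through every key with the cursor protocol described above; note that io.EOF may arrive together with the final page:

var cursor string
for {
    keys, next, err := client.ListKeys(ctx, 100, cursor)
    if err != nil && err != io.EOF {
        return err
    }
    for _, key := range keys {
        fmt.Println(key.ID())
    }
    if err == io.EOF {
        break
    }
    cursor = next
}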
func (*Client) NewBucket ¶
NewBucket returns a bucket. The bucket is created with the given attributes if it does not already exist. If attrs is nil, it is created as a private bucket with no info metadata and no lifecycle rules.
func (*Client) ServeHTTP ¶ added in v0.3.1
func (c *Client) ServeHTTP(rw http.ResponseWriter, req *http.Request)
ServeHTTP serves diagnostic information about the current state of the client: essentially everything available from Client.Status().
ServeHTTP satisfies the http.Handler interface. This means that a Client can be passed directly to a path via http.Handle (or on a custom ServeMux or a custom http.Server).
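A minimal sketch of exposing the diagnostics (the address and path are arbitrary):

http.Handle("/debug/b2", client)
go func() {
    log.Print(http.ListenAndServe("localhost:8822", nil))
}()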
func (*Client) Status ¶
func (c *Client) Status() *StatusInfo
Status returns information about the current state of the client.
type ClientOption ¶
type ClientOption func(*clientOptions)
A ClientOption allows callers to adjust various per-client settings.
func APIBase ¶ added in v0.5.0
func APIBase(url string) ClientOption
APIBase returns a ClientOption specifying the URL root of API requests.
func DefaultWriterOptions ¶ added in v0.5.0
func DefaultWriterOptions(opts ...WriterOption) ClientOption
DefaultWriterOptions returns a ClientOption that will apply the given WriterOptions to every Writer. These options can be overridden by passing new options to NewWriter.
func ExpireSomeAuthTokens ¶
func ExpireSomeAuthTokens() ClientOption
ExpireSomeAuthTokens requests intermittent authentication failures from the B2 service.
func FailSomeUploads ¶
func FailSomeUploads() ClientOption
FailSomeUploads requests intermittent upload failures from the B2 service. This is mostly useful for testing.
func ForceCapExceeded ¶
func ForceCapExceeded() ClientOption
ForceCapExceeded requests a cap limit from the B2 service. This causes all uploads to be treated as if they would exceed the configured B2 capacity.
func Transport ¶
func Transport(rt http.RoundTripper) ClientOption
Transport sets the underlying HTTP transport mechanism. If unset, http.DefaultTransport is used.
func UserAgent ¶ added in v0.2.0
func UserAgent(agent string) ClientOption
UserAgent sets the User-Agent HTTP header. The default header is "blazer/<version>"; the value set here will be prepended to that. This can be set multiple times.
A user agent is generally of the form "<product>/<version> (<comments>)".
type Key ¶ added in v0.5.0
type Key struct {
// contains filtered or unexported fields
}
Key is a B2 application key. A Key grants limited access on a global or per-bucket basis.
func (*Key) Capabilities ¶ added in v0.5.0
Capabilities returns the list of capabilities granted by this application key.
func (*Key) ID ¶ added in v0.5.1
ID returns the application key ID. This, plus the secret, is necessary to authenticate to B2.
type KeyOption ¶ added in v0.5.0
type KeyOption func(*keyOptions)
KeyOption specifies desired properties for application keys.
func Capabilities ¶ added in v0.5.1
Capabilities requests a key with the given capabilities.
type LifecycleRule ¶
type LifecycleRule struct {
    // Prefix specifies all the files in the bucket to which this rule applies.
    Prefix string

    // DaysNewUntilHidden specifies the number of days after which a file will
    // automatically be hidden. 0 means "do not automatically hide new files".
    DaysNewUntilHidden int

    // DaysHiddenUntilDeleted specifies the number of days after which a hidden
    // file is deleted. 0 means "do not automatically delete hidden files".
    DaysHiddenUntilDeleted int
}
A LifecycleRule describes an object's life cycle: how many days after upload an object should be hidden, and how many days after being hidden it should be deleted. Multiple rules may not apply to the same file or set of files. Be careful when using this feature; it can (and is designed to) delete your data.
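A sketch of installing a single rule via Bucket.Update; the prefix and day counts are arbitrary, and the zero Type and nil Info leave those attributes unchanged:

attrs := &b2.BucketAttrs{
    LifecycleRules: []b2.LifecycleRule{{
        Prefix:                 "logs/",
        DaysNewUntilHidden:     30, // hide log objects 30 days after upload
        DaysHiddenUntilDeleted: 7,  // delete them 7 days after being hidden
    }},
}
if err := bucket.Update(ctx, attrs); err != nil {
    // handle error (possibly an update conflict; see Bucket.Update)
}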
type ListOption ¶ added in v0.4.0
type ListOption func(*objectIteratorOptions)
A ListOption alters the default behavior of List.
func ListDelimiter ¶ added in v0.4.0
func ListDelimiter(delimiter string) ListOption
ListDelimiter denotes the path separator. If set, object listings will be truncated at this character.
For example, if the bucket contains objects foo/bar, foo/baz, and foo, then a delimiter of "/" will cause the listing to return "foo" and "foo/". Otherwise, the listing would have returned all object names.
Note that objects returned that end in the delimiter may not be actual objects, e.g. you cannot read from (or write to, or delete) an object "foo/", both because no actual object exists and because B2 disallows object names that end with "/". If you want to ensure that all objects returned are actual objects, leave this unset.
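A sketch of listing a single "directory" level by combining ListPrefix and ListDelimiter:

iter := bucket.List(ctx, b2.ListPrefix("foo/"), b2.ListDelimiter("/"))
for iter.Next() {
    // Names ending in "/" are synthetic folder entries, not real objects.
    fmt.Println(iter.Object().Name())
}
if err := iter.Err(); err != nil {
    // handle error
}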
func ListHidden ¶ added in v0.4.0
func ListHidden() ListOption
ListHidden will include hidden objects in the output.
func ListLocker ¶ added in v0.4.4
func ListLocker(l sync.Locker) ListOption
ListLocker passes the iterator a lock which will be held during network round-trips.
func ListPageSize ¶ added in v0.4.3
func ListPageSize(count int) ListOption
ListPageSize configures the iterator to request the given number of objects per network round-trip. The default (and maximum) is 1000 objects, except for unfinished large files, for which it is 100.
func ListPrefix ¶ added in v0.4.0
func ListPrefix(pfx string) ListOption
ListPrefix will restrict the output to objects whose names begin with prefix.
func ListUnfinished ¶ added in v0.4.0
func ListUnfinished() ListOption
ListUnfinished will list unfinished large file operations instead of existing objects.
type MethodList ¶ added in v0.3.1
type MethodList []method
MethodList is an accumulation of RPC calls that have been made over a given period of time.
func (MethodList) CountByMethod ¶ added in v0.3.1
func (ml MethodList) CountByMethod() map[string]int
CountByMethod returns the total RPC calls made per method.
type Object ¶
type Object struct {
// contains filtered or unexported fields
}
Object represents a B2 object.
func (*Object) AuthURL ¶ added in v0.5.0
AuthURL returns a URL for the given object with embedded token and, possibly, b2ContentDisposition arguments. Leave b2cd blank for no content disposition.
func (*Object) NewRangeReader ¶
NewRangeReader returns a reader for the given object, reading up to length bytes. If length is negative, the rest of the object is read.
type ObjectIterator ¶ added in v0.4.0
type ObjectIterator struct {
// contains filtered or unexported fields
}
ObjectIterator abstracts away the tricky bits of iterating over a bucket's contents.
It is intended to be called in a loop:
for iter.Next() {
    obj := iter.Object()
    // act on obj
}
if err := iter.Err(); err != nil {
    // handle err
}
func (*ObjectIterator) Err ¶ added in v0.4.0
func (o *ObjectIterator) Err() error
Err returns the current error or nil. If Next() returns false and Err() is nil, then all objects have been seen.
func (*ObjectIterator) Next ¶ added in v0.4.0
func (o *ObjectIterator) Next() bool
Next advances the iterator to the next object. It should be called before any calls to Object(). If Next returns true, then the next call to Object() will be valid. Once Next returns false, it is important to check the return value of Err().
func (*ObjectIterator) Object ¶ added in v0.4.0
func (o *ObjectIterator) Object() *Object
Object returns the current object.
type ObjectState ¶
type ObjectState int
ObjectState represents the various states an object can be in.
const (
    Unknown ObjectState = iota

    // Started represents a large upload that has been started but not finished
    // or canceled.
    Started

    // Uploaded represents an object that has finished uploading and is complete.
    Uploaded

    // Hider represents an object that exists only to hide another object. It
    // cannot in itself be downloaded and, in particular, is not a hidden object.
    Hider

    // Folder is a special state given to non-objects that are returned during a
    // List call with a ListDelimiter option.
    Folder
)
type Reader ¶
type Reader struct {
    // ConcurrentDownloads is the number of simultaneous downloads to pull from
    // B2. Values greater than one will cause B2 to make multiple HTTP requests
    // for a given file, increasing available bandwidth at the cost of buffering
    // the downloads in memory.
    ConcurrentDownloads int

    // ChunkSize is the size to fetch per ConcurrentDownload. The default is
    // 10MB.
    ChunkSize int
    // contains filtered or unexported fields
}
Reader reads files from B2.
func (*Reader) Verify ¶ added in v0.5.0
Verify checks the SHA1 hash on download and compares it to the SHA1 hash submitted on upload. If the two differ, this returns an error. If the correct hash could not be calculated (if, for example, the entire object was not read, or if the object was uploaded as a "large file" and thus the SHA1 hash was not sent), this returns (false, nil).
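A sketch of a verified download, assuming the (bool, error) return shape implied above; obj and dst are assumed from context:

r := obj.NewReader(ctx)
if _, err := io.Copy(dst, r); err != nil {
    // handle read error
}
ok, err := r.Verify() // assumed (bool, error) signature
if err != nil {
    // the hashes differ: treat the download as corrupt
}
if !ok && err == nil {
    // the hash could not be checked (partial read or large file)
}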
type ReaderStatus ¶
type ReaderStatus struct {
    // Progress is a slice of completion ratios. The index of a ratio is its
    // chunk id less one.
    Progress []float64
}
ReaderStatus reports the status for each reader.
type StatusInfo ¶
type StatusInfo struct {
    // Writers contains the status of all current uploads with progress.
    Writers map[string]*WriterStatus

    // Readers contains the status of all current downloads with progress.
    Readers map[string]*ReaderStatus

    // RPCs contains information about recently made RPC calls over the last
    // minute, five minutes, hour, and for all time.
    RPCs map[time.Duration]MethodList
}
StatusInfo reports information about a client.
type Writer ¶
type Writer struct {
    // ConcurrentUploads is the number of different threads sending data
    // concurrently to Backblaze for large files. This can increase performance
    // greatly, as each thread will hit a different endpoint. However, there is
    // a ChunkSize buffer for each thread. Values less than 1 are equivalent
    // to 1.
    ConcurrentUploads int

    // Resume an upload. If true, and the upload is a large file, and a file of
    // the same name was started but not finished, then assume that we are
    // resuming that file, and don't upload duplicate chunks.
    Resume bool

    // ChunkSize is the size, in bytes, of each individual part, when writing
    // large files, and also when determining whether to upload a file normally
    // or when to split it into parts. The default is 100M (1e8). The minimum
    // is 5M (5e6); values less than this are not an error, but will fail. The
    // maximum is 5GB (5e9).
    ChunkSize int

    // UseFileBuffer controls whether to use an in-memory buffer (the default)
    // or scratch space on the file system. If this is true, b2 will save
    // chunks in FileBufferDir.
    UseFileBuffer bool

    // FileBufferDir specifies the directory where scratch files are kept. If
    // blank, os.TempDir() is used.
    FileBufferDir string
    // contains filtered or unexported fields
}
Writer writes data into Backblaze. It automatically switches to the large file API if the file exceeds ChunkSize bytes. Due to that and other Backblaze API details, there is a large buffer.
Changes to public Writer attributes must be made before the first call to Write.
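A sketch of tuning a Writer for a large upload; the fields are set before the first Write as required above, and src is assumed to be any io.Reader:

w := bucket.Object("backup.tar").NewWriter(ctx)
w.ConcurrentUploads = 4 // four concurrent large-file part uploads
w.UseFileBuffer = true  // buffer parts on disk rather than in memory
if _, err := io.Copy(w, src); err != nil {
    // handle write error
}
if err := w.Close(); err != nil {
    // handle error; Close must always be checked
}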
func (*Writer) Close ¶
Close satisfies the io.Closer interface. It is critical to check the return value of Close for all writers.
func (*Writer) ReadFrom ¶ added in v0.2.0
ReadFrom reads all of r into w, returning the first error or no error if r returns io.EOF. If r is also an io.Seeker, ReadFrom will stream r directly over the wire instead of buffering it locally. This reduces memory usage.
Do not issue multiple calls to ReadFrom, or mix ReadFrom and Write. If you have multiple readers you want to concatenate into the same B2 object, use an io.MultiReader.
Note that io.Copy will automatically choose to use ReadFrom.
ReadFrom currently doesn't handle w.Resume; if w.Resume is true, ReadFrom will act as if r is not an io.Seeker.
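A sketch of the streaming path described above: *os.File satisfies io.Seeker, so io.Copy (which picks up ReadFrom) streams the file directly instead of buffering it locally:

f, err := os.Open("backup.tar")
if err != nil {
    // handle error
}
defer f.Close()

w := bucket.Object("backup.tar").NewWriter(ctx)
if _, err := io.Copy(w, f); err != nil { // uses Writer.ReadFrom under the hood
    // handle error
}
if err := w.Close(); err != nil {
    // handle error
}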
type WriterOption ¶ added in v0.5.0
type WriterOption func(*Writer)
A WriterOption sets Writer-specific behavior.
func WithAttrsOption ¶ added in v0.5.0
func WithAttrsOption(attrs *Attrs) WriterOption
WithAttrsOption attaches the given Attrs to the writer.
func WithCancelOnError ¶ added in v0.5.2
func WithCancelOnError(ctxf func() context.Context, errf func(error)) WriterOption
WithCancelOnError requests the writer, if it has started a large file upload, to call b2_cancel_large_file on any permanent error. It calls ctxf to obtain a context with which to cancel the file; this is to allow callers to set specific timeouts. If errf is non-nil, then it is called with the (possibly nil) output of b2_cancel_large_file.
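A sketch of wiring this up; context.Background satisfies the ctxf signature, and the error callback simply logs:

w := obj.NewWriter(ctx, b2.WithCancelOnError(
    context.Background, // context used for the b2_cancel_large_file call
    func(err error) {
        if err != nil {
            log.Printf("canceling large file failed: %v", err)
        }
    },
))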
type WriterStatus ¶
type WriterStatus struct {
    // Progress is a slice of completion ratios. The index of a ratio is its
    // chunk id less one.
    Progress []float64
}
WriterStatus reports the status for each writer.