Documentation ¶
Overview ¶
Package uplink is the main entrypoint to interacting with Storj Labs' decentralized storage network.
Sign up for an account on a Satellite today! https://storj.io/
Access Grants ¶
The fundamental unit of access in the Storj Labs storage network is the Access Grant. An access grant is a serialized structure that is internally comprised of an API Key, a set of encryption key information, and information about which Storj Labs or Tardigrade network Satellite is responsible for the metadata. An access grant is always associated with exactly one Project on one Satellite.
If you don't already have an access grant, you will need to make an account on a Satellite, generate an API Key, and encapsulate that API Key with encryption information into an access grant.
If you don't already have an account on a Satellite, first make one at https://storj.io/ and note the Satellite you choose (such as us1.storj.io, eu1.storj.io, etc). Then, make an API Key in the web interface.
The first step to any project is to generate a restricted access grant with the minimal permissions that are needed. Access grants contain all encryption information, so they should be restricted as much as possible.
To make an access grant, you can use the Uplink CLI tool's 'share' subcommand (after setting up the Uplink CLI tool), or you can create one programmatically as follows:
access, err := uplink.RequestAccessWithPassphrase(ctx, satelliteAddress, apiKey, rootPassphrase)
if err != nil {
    return err
}

// create an access grant for reading bucket "logs"
permission := uplink.ReadOnlyPermission()
shared := uplink.SharePrefix{Bucket: "logs"}
restrictedAccess, err := access.Share(permission, shared)
if err != nil {
    return err
}

// serialize the restricted access grant
serializedAccess, err := restrictedAccess.Serialize()
if err != nil {
    return err
}
In the above example, 'serializedAccess' is a human-readable string that represents read-only access to just the "logs" bucket, and is only able to decrypt that one bucket thanks to hierarchical deterministic key derivation.
Note: RequestAccessWithPassphrase is CPU-intensive, and your application's normal lifecycle should avoid it and use ParseAccess where possible instead.
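For example, an application that has already saved a serialized access grant (produced by the setup step above) can rehydrate it cheaply on every run. A minimal sketch, assuming serializedAccess was loaded from configuration:

// parse a previously serialized access grant instead of re-deriving keys
access, err := uplink.ParseAccess(serializedAccess)
if err != nil {
    return err
}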
To revoke an access grant see the Project.RevokeAccess method.
Multitenancy in a Single Application Bucket ¶
A common architecture for building applications is to have a single bucket for the entire application to store the objects of all users. In such an architecture, it is of utmost importance to guarantee that users can access only their own objects and not the objects of other users.
This can be achieved by implementing an app-specific authentication service that generates an access grant for each user by restricting the main access grant of the application. This user-specific access grant is restricted to access the objects only within a specific key prefix defined for the user.
When initialized, the authentication server creates the main application access grant with an empty passphrase as follows.
appAccess, err := uplink.RequestAccessWithPassphrase(ctx, satellite, appAPIKey, "")
The authentication service does not hold any encryption information about users, so the passphrase used to request the main application access grant does not matter. The encryption keys related to user objects will be overridden in a later step on the client side. It is important that, once set to a specific value, this passphrase never changes in the future. Therefore, the best practice is to use an empty passphrase.
Whenever a user is authenticated, the authentication service generates the user-specific access grant as follows:
// create a user access grant for accessing their files, limited for the next 8 hours
now := time.Now()
permission := uplink.FullPermission()
// 2 minutes leeway to avoid time sync issues with the satellite
permission.NotBefore = now.Add(-2 * time.Minute)
permission.NotAfter = now.Add(8 * time.Hour)

userPrefix := uplink.SharePrefix{
    Bucket: appBucket,
    Prefix: userID + "/",
}

userAccess, err := appAccess.Share(permission, userPrefix)
if err != nil {
    return err
}

// serialize the user access grant
serializedAccess, err := userAccess.Serialize()
if err != nil {
    return err
}
The userID is something that uniquely identifies each user in the application and must never change.
Along with the user access grant, the authentication service should return a user-specific salt. The salt must always be the same for a given user. The salt size is 16 or 32 bytes.
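The package does not dictate how the salt is produced. One hedged sketch is to generate a random 32-byte salt with crypto/rand when the user account is created and persist it with the user record (how it is stored is up to the application):

// crypto/rand: generate the per-user salt exactly once and persist it; it must never change
userSalt := make([]byte, 32)
if _, err := rand.Read(userSalt); err != nil {
    return err
}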
Once the application receives the user-specific access grant and the user-specific salt from the authentication service, it has to override the encryption key in the access grant, so users can encrypt and decrypt their files with encryption keys derived from their passphrase.
userAccess, err = uplink.ParseAccess(serializedUserAccess)
if err != nil {
    return nil, err
}

saltedUserKey, err := uplink.DeriveEncryptionKey(userPassphrase, userSalt)
if err != nil {
    return nil, err
}

err = userAccess.OverrideEncryptionKey(appBucket, userID+"/", saltedUserKey)
if err != nil {
    return nil, err
}
The user-specific access grant is now ready to use by the application.
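From here the application opens a project with the user-specific access grant as usual (see the Projects section below). A brief sketch, reusing the names from the snippets above:

project, err := uplink.OpenProject(ctx, userAccess)
if err != nil {
    return err
}
defer project.Close()

// this access grant can only see and decrypt keys under userID+"/"
objects := project.ListObjects(ctx, appBucket, &uplink.ListObjectsOptions{
    Prefix: userID + "/",
})
for objects.Next() {
    fmt.Println(objects.Item().Key)
}
if err := objects.Err(); err != nil {
    return err
}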
Projects ¶
Once you have a valid access grant, you can open a Project with the access that access grant allows for.
project, err := uplink.OpenProject(ctx, access)
if err != nil {
    return err
}
defer project.Close()
Projects allow you to manage buckets and objects within buckets.
Buckets ¶
A bucket represents a collection of objects. You can upload, download, list, and delete objects of any size or shape. Objects within buckets are represented by keys, where keys can optionally be listed using the "/" delimiter.
Note: Objects and object keys within buckets are end-to-end encrypted, but bucket names themselves are not encrypted, so the billing interface on the Satellite can show you bucket line items.
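Before listing, a bucket can be created up front. A minimal sketch using EnsureBucket, which succeeds whether or not the bucket already exists:

bucket, err := project.EnsureBucket(ctx, "logs")
if err != nil {
    return err
}
fmt.Println("using bucket:", bucket.Name)

Listing the buckets in a project walks an iterator: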
buckets := project.ListBuckets(ctx, nil)
for buckets.Next() {
    fmt.Println(buckets.Item().Name)
}
if err := buckets.Err(); err != nil {
    return err
}
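Upload Object ¶
Uploading mirrors downloading: UploadObject returns a writer-style Upload that is written to and then committed. The sketch below is hedged; the key is illustrative and r is a hypothetical io.Reader holding the content. It also attaches custom metadata before committing.

upload, err := project.UploadObject(ctx, "logs", "2020-04-18/webserver.log", nil)
if err != nil {
    return err
}

// stream the data from r into the upload
if _, err := io.Copy(upload, r); err != nil {
    _ = upload.Abort()
    return err
}

// optionally attach custom metadata before committing
err = upload.SetCustomMetadata(ctx, uplink.CustomMetadata{"app:origin": "webserver"})
if err != nil {
    _ = upload.Abort()
    return err
}

return upload.Commit()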
Download Object ¶
Objects support a couple of kilobytes of arbitrary key/value metadata, and arbitrary-size primary data streams with the ability to read at arbitrary offsets.
object, err := project.DownloadObject(ctx, "logs", "2020-04-18/webserver.log", nil)
if err != nil {
    return err
}
defer object.Close()

_, err = io.Copy(w, object)
return err
If you want to access only a small subrange of the data you uploaded, you can use DownloadOptions to specify the download range.
object, err := project.DownloadObject(ctx, "logs", "2020-04-18/webserver.log", &uplink.DownloadOptions{Offset: 10, Length: 100})
if err != nil {
    return err
}
defer object.Close()

_, err = io.Copy(w, object)
return err
List Objects ¶
Listing objects returns an iterator that allows walking through all the items:
objects := project.ListObjects(ctx, "logs", nil)
for objects.Next() {
    item := objects.Item()
    fmt.Println(item.IsPrefix, item.Key)
}
if err := objects.Err(); err != nil {
    return err
}
Index ¶
- Variables
- type Access
- type Bucket
- type BucketIterator
- type CommitUploadOptions
- type Config
- type CopyObjectOptions
- type CustomMetadata
- type Download
- type DownloadOptions
- type EncryptionKey
- type ListBucketsOptions
- type ListObjectsOptions
- type ListUploadPartsOptions
- type ListUploadsOptions
- type MoveObjectOptions
- type Object
- type ObjectIterator
- type Part
- type PartIterator
- type PartUpload
- type Permission
- type Project
- func (project *Project) AbortUpload(ctx context.Context, bucket, key, uploadID string) (err error)
- func (project *Project) BeginUpload(ctx context.Context, bucket, key string, options *UploadOptions) (info UploadInfo, err error)
- func (project *Project) Close() (err error)
- func (project *Project) CommitUpload(ctx context.Context, bucket, key, uploadID string, opts *CommitUploadOptions) (object *Object, err error)
- func (project *Project) CopyObject(ctx context.Context, oldBucket, oldKey, newBucket, newKey string, ...) (_ *Object, err error)
- func (project *Project) CreateBucket(ctx context.Context, bucket string) (created *Bucket, err error)
- func (project *Project) DeleteBucket(ctx context.Context, bucket string) (deleted *Bucket, err error)
- func (project *Project) DeleteBucketWithObjects(ctx context.Context, bucket string) (deleted *Bucket, err error)
- func (project *Project) DeleteObject(ctx context.Context, bucket, key string) (deleted *Object, err error)
- func (project *Project) DownloadObject(ctx context.Context, bucket, key string, options *DownloadOptions) (_ *Download, err error)
- func (project *Project) EnsureBucket(ctx context.Context, bucket string) (ensured *Bucket, err error)
- func (project *Project) ListBuckets(ctx context.Context, options *ListBucketsOptions) *BucketIterator
- func (project *Project) ListObjects(ctx context.Context, bucket string, options *ListObjectsOptions) *ObjectIterator
- func (project *Project) ListUploadParts(ctx context.Context, bucket, key, uploadID string, ...) *PartIterator
- func (project *Project) ListUploads(ctx context.Context, bucket string, options *ListUploadsOptions) *UploadIterator
- func (project *Project) MoveObject(ctx context.Context, oldbucket, oldkey, newbucket, newkey string, ...) (err error)
- func (project *Project) RevokeAccess(ctx context.Context, access *Access) (err error)
- func (project *Project) StatBucket(ctx context.Context, bucket string) (info *Bucket, err error)
- func (project *Project) StatObject(ctx context.Context, bucket, key string) (info *Object, err error)
- func (project *Project) UpdateObjectMetadata(ctx context.Context, bucket, key string, newMetadata CustomMetadata, ...) (err error)
- func (project *Project) UploadObject(ctx context.Context, bucket, key string, options *UploadOptions) (_ *Upload, err error)
- func (project *Project) UploadPart(ctx context.Context, bucket, key, uploadID string, partNumber uint32) (_ *PartUpload, err error)
- type SharePrefix
- type SystemMetadata
- type Upload
- type UploadInfo
- type UploadIterator
- type UploadObjectMetadataOptions
- type UploadOptions
Constants ¶
This section is empty.
Variables ¶
var ErrBandwidthLimitExceeded = errors.New("bandwidth limit exceeded")
ErrBandwidthLimitExceeded is returned when the project will exceed its bandwidth limit.
var ErrBucketAlreadyExists = errors.New("bucket already exists")
ErrBucketAlreadyExists is returned when the bucket already exists during creation.
var ErrBucketNameInvalid = errors.New("bucket name invalid")
ErrBucketNameInvalid is returned when the bucket name is invalid.
var ErrBucketNotEmpty = errors.New("bucket not empty")
ErrBucketNotEmpty is returned when the bucket is not empty during deletion.
var ErrBucketNotFound = errors.New("bucket not found")
ErrBucketNotFound is returned when the bucket is not found.
var ErrObjectKeyInvalid = errors.New("object key invalid")
ErrObjectKeyInvalid is returned when the object key is invalid.
var ErrObjectNotFound = errors.New("object not found")
ErrObjectNotFound is returned when the object is not found.
var ErrPermissionDenied = errors.New("permission denied")
ErrPermissionDenied is returned when the request is denied due to invalid permissions.
var ErrSegmentsLimitExceeded = errors.New("segments limit exceeded")
ErrSegmentsLimitExceeded is returned when the project will exceed its segments limit.
var ErrStorageLimitExceeded = errors.New("storage limit exceeded")
ErrStorageLimitExceeded is returned when the project will exceed its storage limit.
var ErrTooManyRequests = errors.New("too many requests")
ErrTooManyRequests is returned when the user has sent too many requests in a given amount of time.
var ErrUploadDone = errors.New("upload done")
ErrUploadDone is returned when either Abort or Commit has already been called.
var ErrUploadIDInvalid = errors.New("upload ID invalid")
ErrUploadIDInvalid is returned when the upload ID is invalid.
Functions ¶
This section is empty.
Types ¶
type Access ¶
type Access struct {
// contains filtered or unexported fields
}
An Access Grant contains everything to access a project and specific buckets. It includes a potentially-restricted API Key, a potentially-restricted set of encryption information, and information about the Satellite responsible for the project's metadata.
func ParseAccess ¶
func ParseAccess(access string) (*Access, error)
ParseAccess parses a serialized access grant string.
This should be the main way to instantiate an access grant for opening a project. See the note on RequestAccessWithPassphrase.
func RequestAccessWithPassphrase ¶
func RequestAccessWithPassphrase(ctx context.Context, satelliteAddress, apiKey, passphrase string) (*Access, error)
RequestAccessWithPassphrase generates a new access grant using a passphrase. It must talk to the Satellite provided to get a project-based salt for deterministic key derivation.
Note: this is a CPU-heavy function that uses a password-based key derivation function (Argon2). This should be a setup-only step. Most common interactions with the library should be using a serialized access grant through ParseAccess directly.
func (*Access) OverrideEncryptionKey ¶ added in v1.2.0
func (access *Access) OverrideEncryptionKey(bucket, prefix string, encryptionKey *EncryptionKey) error
OverrideEncryptionKey overrides the root encryption key for the prefix in bucket with encryptionKey. The prefix argument must end with a slash, otherwise the method returns an error.
This function is useful for overriding the encryption key in user-specific access grants when implementing multitenancy in a single app bucket. See the relevant section in the package documentation.
func (*Access) SatelliteAddress ¶ added in v1.4.0
SatelliteAddress returns the satellite node URL for this access grant.
func (*Access) Serialize ¶
Serialize serializes an access grant such that it can be used later with ParseAccess or other tools.
func (*Access) Share ¶
func (access *Access) Share(permission Permission, prefixes ...SharePrefix) (*Access, error)
Share creates a new access grant with specific permissions.
Access grants can only have their existing permissions restricted, and the resulting access grant will only allow for the intersection of all previous Share calls in the access grant construction chain.
Prefixes, if provided, restrict the access grant (and internal encryption information) to only contain enough information to allow access to just those prefixes.
To revoke an access grant see the Project.RevokeAccess method.
type BucketIterator ¶
type BucketIterator struct {
// contains filtered or unexported fields
}
BucketIterator is an iterator over a collection of buckets.
func (*BucketIterator) Err ¶
func (buckets *BucketIterator) Err() error
Err returns error, if one happened during iteration.
func (*BucketIterator) Item ¶
func (buckets *BucketIterator) Item() *Bucket
Item returns the current bucket in the iterator.
func (*BucketIterator) Next ¶
func (buckets *BucketIterator) Next() bool
Next prepares next Bucket for reading. It returns false if the end of the iteration is reached and there are no more buckets, or if there is an error.
type CommitUploadOptions ¶ added in v1.6.0
type CommitUploadOptions struct {
CustomMetadata CustomMetadata
}
CommitUploadOptions options for committing multipart upload.
type Config ¶
type Config struct {
    // UserAgent defines a registered partner's Value Attribution Code, and is used by the satellite to associate
    // a bucket with the partner at the time of bucket creation.
    // See https://docs.storj.io/dcs/how-tos/configure-tools-for-the-partner-program for info on the Partner Program.
    // UserAgent should follow https://tools.ietf.org/html/rfc7231#section-5.5.3.
    UserAgent string

    // DialTimeout defines how long client should wait for establishing
    // a connection to peers.
    // No explicit value or 0 means default 20s will be used. Value lower than 0 means there is no timeout.
    // DialTimeout is ignored if DialContext is provided.
    //
    // Deprecated: with the advent of Noise and TCP_FASTOPEN use, traditional dialing
    // doesn't necessarily happen anymore. This is already ignored for certain
    // connections and will be removed in a future release.
    DialTimeout time.Duration

    // DialContext is an extremely low level concern. It should almost certainly
    // remain unset so that this library can make informed choices about how to
    // talk to each node.
    // DialContext is how sockets are opened to nodes of all kinds and is called to
    // establish a connection. If DialContext is nil, it'll try to use the implementation
    // best suited for each node.
    //
    // Deprecated: this will be removed in a future release. All analyzed uses of
    // setting this value in open source projects are attempting to solve some more
    // nuanced problem (like QoS) which can only be handled for some types of
    // connections. This value is a hammer where we need a scalpel.
    DialContext func(ctx context.Context, network, address string) (net.Conn, error)

    // ChainPEM and KeyPEM are optional fields that specify the tls identity used by
    // the uplink while talking to other peers on the network. Don't set just one.
    // It is expected that generally these will be left unset and a new tls identity
    // will be generated.
    ChainPEM, KeyPEM []byte

    // contains filtered or unexported fields
}
Config defines configuration for using uplink library.
func (Config) OpenProject ¶
OpenProject opens a project with the specific access grant.
func (Config) RequestAccessWithPassphrase ¶
func (config Config) RequestAccessWithPassphrase(ctx context.Context, satelliteAddress, apiKey, passphrase string) (*Access, error)
RequestAccessWithPassphrase generates a new access grant using a passphrase. It must talk to the Satellite provided to get a project-based salt for deterministic key derivation.
Note: this is a CPU-heavy function that uses a password-based key derivation function (Argon2). This should be a setup-only step. Most common interactions with the library should be using a serialized access grant through ParseAccess directly.
type CopyObjectOptions ¶ added in v1.9.0
type CopyObjectOptions struct { }
CopyObjectOptions options for CopyObject method.
type CustomMetadata ¶
type CustomMetadata map[string]string
CustomMetadata contains custom user metadata about the object.
The keys and values in custom metadata are expected to be valid UTF-8.
When choosing a custom key for your application, start it with a prefix "app:key"; for example, an application named "Image Board" might use a key "image-board:title".
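As a sketch of that convention (the key and value here are only illustrative):

custom := uplink.CustomMetadata{
    "image-board:title": "Sunset over the bay",
}
if err := custom.Verify(); err != nil {
    return err
}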
func (CustomMetadata) Clone ¶
func (meta CustomMetadata) Clone() CustomMetadata
Clone makes a deep clone.
func (CustomMetadata) Verify ¶
func (meta CustomMetadata) Verify() error
Verify verifies whether CustomMetadata contains only valid UTF-8.
type Download ¶
type Download struct {
// contains filtered or unexported fields
}
Download is a download from Storj Network.
type DownloadOptions ¶
type DownloadOptions struct {
    // When Offset is negative it will read the suffix of the blob.
    // Combining negative offset and positive length is not supported.
    Offset int64
    // When Length is negative it will read until the end of the blob.
    Length int64
}
DownloadOptions contains additional options for downloading.
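For example, a hedged sketch that downloads only the last 512 bytes of an object by combining a negative Offset with a negative Length (reading the suffix through to the end of the blob):

download, err := project.DownloadObject(ctx, "logs", "2020-04-18/webserver.log", &uplink.DownloadOptions{
    Offset: -512,
    Length: -1,
})
if err != nil {
    return err
}
defer download.Close()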
type EncryptionKey ¶ added in v1.2.0
type EncryptionKey struct {
// contains filtered or unexported fields
}
EncryptionKey represents a key for encrypting and decrypting data.
func DeriveEncryptionKey ¶ added in v1.2.0
func DeriveEncryptionKey(passphrase string, salt []byte) (*EncryptionKey, error)
DeriveEncryptionKey derives a salted encryption key for passphrase using the salt.
This function is useful for deriving a salted encryption key for users when implementing multitenancy in a single app bucket. See the relevant section in the package documentation.
type ListBucketsOptions ¶
type ListBucketsOptions struct {
    // Cursor sets the starting position of the iterator. The first item listed will be the one after the cursor.
    Cursor string
}
ListBucketsOptions defines bucket listing options.
type ListObjectsOptions ¶
type ListObjectsOptions struct {
    // Prefix allows to filter objects by a key prefix.
    // If not empty, it must end with slash.
    Prefix string

    // Cursor sets the starting position of the iterator.
    // The first item listed will be the one after the cursor.
    // Cursor is relative to Prefix.
    Cursor string

    // Recursive iterates the objects without collapsing prefixes.
    Recursive bool

    // System includes SystemMetadata in the results.
    System bool
    // Custom includes CustomMetadata in the results.
    Custom bool
}
ListObjectsOptions defines object listing options.
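For example, a hedged sketch that lists everything under a date prefix recursively and requests system metadata as well (SystemMetadata fields such as ContentLength are assumed from the library's documentation):

objects := project.ListObjects(ctx, "logs", &uplink.ListObjectsOptions{
    Prefix:    "2020-04-18/",
    Recursive: true,
    System:    true,
})
for objects.Next() {
    item := objects.Item()
    fmt.Println(item.Key, item.System.ContentLength)
}
if err := objects.Err(); err != nil {
    return err
}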
type ListUploadPartsOptions ¶ added in v1.6.0
type ListUploadPartsOptions struct {
    // Cursor sets the starting position of the iterator.
    // The first item listed will be the one after the cursor.
    Cursor uint32
}
ListUploadPartsOptions options for listing upload parts.
type ListUploadsOptions ¶ added in v1.6.0
type ListUploadsOptions struct {
    // Prefix allows to filter uncommitted uploads by a key prefix. If not empty, it must end with slash.
    Prefix string

    // Cursor sets the starting position of the iterator.
    // The first item listed will be the one after the cursor.
    // Cursor is relative to Prefix.
    Cursor string

    // Recursive iterates the objects without collapsing prefixes.
    Recursive bool

    // System includes SystemMetadata in the results.
    System bool
    // Custom includes CustomMetadata in the results.
    Custom bool
}
ListUploadsOptions options for listing uncommitted uploads.
type MoveObjectOptions ¶ added in v1.7.0
type MoveObjectOptions struct { }
MoveObjectOptions options for MoveObject method.
type Object ¶
type Object struct {
    Key string

    // IsPrefix indicates whether the Key is a prefix for other objects.
    IsPrefix bool

    System SystemMetadata
    Custom CustomMetadata

    // contains filtered or unexported fields
}
Object contains information about an object.
type ObjectIterator ¶
type ObjectIterator struct {
// contains filtered or unexported fields
}
ObjectIterator is an iterator over a collection of objects or prefixes.
func (*ObjectIterator) Err ¶
func (objects *ObjectIterator) Err() error
Err returns error, if one happened during iteration.
func (*ObjectIterator) Item ¶
func (objects *ObjectIterator) Item() *Object
Item returns the current object in the iterator.
func (*ObjectIterator) Next ¶
func (objects *ObjectIterator) Next() bool
Next prepares next Object for reading. It returns false if the end of the iteration is reached and there are no more objects, or if there is an error.
type Part ¶ added in v1.6.0
type Part struct {
    PartNumber uint32
    // Size plain size of a part.
    Size     int64
    Modified time.Time
    ETag     []byte
}
Part part metadata.
type PartIterator ¶ added in v1.6.0
type PartIterator struct {
// contains filtered or unexported fields
}
PartIterator is an iterator over a collection of parts of an upload.
func (*PartIterator) Err ¶ added in v1.6.0
func (parts *PartIterator) Err() error
Err returns error, if one happened during iteration.
func (*PartIterator) Item ¶ added in v1.6.0
func (parts *PartIterator) Item() *Part
Item returns the current entry in the iterator.
func (*PartIterator) Next ¶ added in v1.6.0
func (parts *PartIterator) Next() bool
Next prepares next entry for reading.
type PartUpload ¶ added in v1.6.0
type PartUpload struct {
// contains filtered or unexported fields
}
PartUpload is a part upload to a started multipart upload.
func (*PartUpload) Abort ¶ added in v1.6.0
func (upload *PartUpload) Abort() error
Abort aborts the part upload.
Returns ErrUploadDone when either Abort or Commit has already been called.
func (*PartUpload) Commit ¶ added in v1.6.0
func (upload *PartUpload) Commit() error
Commit commits a part.
Returns ErrUploadDone when either Abort or Commit has already been called.
func (*PartUpload) Info ¶ added in v1.6.0
func (upload *PartUpload) Info() *Part
Info returns the last information about the uploaded part.
func (*PartUpload) SetETag ¶ added in v1.6.0
func (upload *PartUpload) SetETag(eTag []byte) error
SetETag sets ETag for a part.
type Permission ¶
type Permission struct {
    // AllowDownload gives permission to download the object's content. It
    // allows getting object metadata, but it does not allow listing buckets.
    AllowDownload bool
    // AllowUpload gives permission to create buckets and upload new objects.
    // It does not allow overwriting existing objects unless AllowDelete is
    // granted too.
    AllowUpload bool
    // AllowList gives permission to list buckets. It allows getting object
    // metadata, but it does not allow downloading the object's content.
    AllowList bool
    // AllowDelete gives permission to delete buckets and objects. Unless
    // either AllowDownload or AllowList is granted too, no object metadata and
    // no error info will be returned for deleted objects.
    AllowDelete bool
    // AllowLock gives permission for retention periods and legal holds to be
    // placed on and retrieved from objects. It also gives permission for
    // Object Lock configurations to be placed on and retrieved from buckets.
    AllowLock bool

    // NotBefore restricts when the resulting access grant is valid for.
    // If set, the resulting access grant will not work if the Satellite
    // believes the time is before NotBefore.
    // If set, this value should always be before NotAfter.
    NotBefore time.Time
    // NotAfter restricts when the resulting access grant is valid for.
    // If set, the resulting access grant will not work if the Satellite
    // believes the time is after NotAfter.
    // If set, this value should always be after NotBefore.
    NotAfter time.Time

    // MaxObjectTTL restricts the maximum time-to-live of objects.
    // If set, new objects are uploaded with an expiration time that reflects
    // the MaxObjectTTL period.
    // If objects are uploaded with an explicit expiration time, the upload
    // will be successful only if it is shorter than the MaxObjectTTL period.
    MaxObjectTTL *time.Duration
}
Permission defines what actions can be used to share.
func FullPermission ¶
func FullPermission() Permission
FullPermission returns a Permission that allows all actions that the parent access grant already allows.
func ReadOnlyPermission ¶
func ReadOnlyPermission() Permission
ReadOnlyPermission returns a Permission that allows reading and listing (if the parent access grant already allows those things).
func WriteOnlyPermission ¶
func WriteOnlyPermission() Permission
WriteOnlyPermission returns a Permission that allows writing and deleting (if the parent access grant already allows those things).
type Project ¶
type Project struct {
// contains filtered or unexported fields
}
Project provides access to managing buckets and objects.
func OpenProject ¶
func OpenProject(ctx context.Context, access *Access) (*Project, error)
OpenProject opens a project with the specific access grant.
func (*Project) AbortUpload ¶ added in v1.6.0
func (project *Project) AbortUpload(ctx context.Context, bucket, key, uploadID string) (err error)
AbortUpload aborts a multipart upload started with BeginUpload.
uploadID is an upload identifier returned by BeginUpload.
func (*Project) BeginUpload ¶ added in v1.6.0
func (project *Project) BeginUpload(ctx context.Context, bucket, key string, options *UploadOptions) (info UploadInfo, err error)
BeginUpload begins a new multipart upload to bucket and key.
Use UploadPart to upload individual parts.
Use CommitUpload to finish the upload.
Use AbortUpload to cancel the upload at any time.
UploadObject is a convenient way to upload single part objects.
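Putting those calls together, a hedged sketch of a minimal multipart upload with a single part (firstChunk is a hypothetical io.Reader; the part numbering here is illustrative):

info, err := project.BeginUpload(ctx, "logs", "2020-04-18/archive.zip", nil)
if err != nil {
    return err
}

part, err := project.UploadPart(ctx, "logs", "2020-04-18/archive.zip", info.UploadID, 1)
if err != nil {
    return err
}
if _, err := io.Copy(part, firstChunk); err != nil {
    _ = part.Abort()
    return err
}
if err := part.Commit(); err != nil {
    return err
}

// complete the upload; the object becomes visible only after CommitUpload
_, err = project.CommitUpload(ctx, "logs", "2020-04-18/archive.zip", info.UploadID, nil)
return err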
func (*Project) CommitUpload ¶ added in v1.6.0
func (project *Project) CommitUpload(ctx context.Context, bucket, key, uploadID string, opts *CommitUploadOptions) (object *Object, err error)
CommitUpload commits a multipart upload to bucket and key started with BeginUpload.
uploadID is an upload identifier returned by BeginUpload.
func (*Project) CopyObject ¶ added in v1.9.0
func (project *Project) CopyObject(ctx context.Context, oldBucket, oldKey, newBucket, newKey string, options *CopyObjectOptions) (_ *Object, err error)
CopyObject atomically copies object to a different bucket or/and key.
func (*Project) CreateBucket ¶
func (project *Project) CreateBucket(ctx context.Context, bucket string) (created *Bucket, err error)
CreateBucket creates a new bucket.
When the bucket already exists, it returns a valid Bucket and ErrBucketAlreadyExists.
func (*Project) DeleteBucket ¶
func (project *Project) DeleteBucket(ctx context.Context, bucket string) (deleted *Bucket, err error)
DeleteBucket deletes a bucket.
When bucket is not empty it returns ErrBucketNotEmpty.
func (*Project) DeleteBucketWithObjects ¶ added in v1.3.0
func (project *Project) DeleteBucketWithObjects(ctx context.Context, bucket string) (deleted *Bucket, err error)
DeleteBucketWithObjects deletes a bucket and all objects within that bucket.
func (*Project) DeleteObject ¶
func (project *Project) DeleteObject(ctx context.Context, bucket, key string) (deleted *Object, err error)
DeleteObject deletes the object at the specific key. Returned deleted is not nil when the access grant has read permissions and the object was deleted.
func (*Project) DownloadObject ¶
func (project *Project) DownloadObject(ctx context.Context, bucket, key string, options *DownloadOptions) (_ *Download, err error)
DownloadObject starts a download from the specific key.
func (*Project) EnsureBucket ¶
func (project *Project) EnsureBucket(ctx context.Context, bucket string) (ensured *Bucket, err error)
EnsureBucket ensures that a bucket exists or creates a new one.
When bucket already exists it returns a valid Bucket and no error.
func (*Project) ListBuckets ¶
func (project *Project) ListBuckets(ctx context.Context, options *ListBucketsOptions) *BucketIterator
ListBuckets returns an iterator over the buckets.
func (*Project) ListObjects ¶
func (project *Project) ListObjects(ctx context.Context, bucket string, options *ListObjectsOptions) *ObjectIterator
ListObjects returns an iterator over the objects.
func (*Project) ListUploadParts ¶ added in v1.6.0
func (project *Project) ListUploadParts(ctx context.Context, bucket, key, uploadID string, options *ListUploadPartsOptions) *PartIterator
ListUploadParts returns an iterator over the parts of a multipart upload started with BeginUpload.
func (*Project) ListUploads ¶ added in v1.6.0
func (project *Project) ListUploads(ctx context.Context, bucket string, options *ListUploadsOptions) *UploadIterator
ListUploads returns an iterator over the uncommitted uploads in bucket. Both multipart and regular uploads are returned. An object may not be visible through ListUploads until it has a committed part.
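For example, a short sketch that walks the uncommitted uploads in a bucket:

uploads := project.ListUploads(ctx, "logs", nil)
for uploads.Next() {
    item := uploads.Item()
    fmt.Println(item.UploadID, item.Key)
}
if err := uploads.Err(); err != nil {
    return err
}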
func (*Project) MoveObject ¶ added in v1.7.0
func (project *Project) MoveObject(ctx context.Context, oldbucket, oldkey, newbucket, newkey string, options *MoveObjectOptions) (err error)
MoveObject moves object to a different bucket or/and key.
func (*Project) RevokeAccess ¶ added in v1.2.0
func (project *Project) RevokeAccess(ctx context.Context, access *Access) (err error)
RevokeAccess revokes the API key embedded in the provided access grant.
When an access grant is revoked, it will also revoke any further-restricted access grants created (via the Access.Share method) from the revoked access grant.
An access grant is authorized to revoke any further-restricted access grant created from it. An access grant cannot revoke itself. An unauthorized request will return an error.
There may be a delay between a successful revocation request and actual revocation, depending on the satellite's access caching policies.
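A short sketch, assuming restrictedAccess was previously created with Access.Share from an access grant this project is authorized to revoke:

if err := project.RevokeAccess(ctx, restrictedAccess); err != nil {
    return err
}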
func (*Project) StatBucket ¶
func (project *Project) StatBucket(ctx context.Context, bucket string) (info *Bucket, err error)
StatBucket returns information about a bucket.
func (*Project) StatObject ¶
func (project *Project) StatObject(ctx context.Context, bucket, key string) (info *Object, err error)
StatObject returns information about an object at the specific key.
func (*Project) UpdateObjectMetadata ¶ added in v1.6.0
func (project *Project) UpdateObjectMetadata(ctx context.Context, bucket, key string, newMetadata CustomMetadata, options *UploadObjectMetadataOptions) (err error)
UpdateObjectMetadata replaces the custom metadata for the object at the specific key with newMetadata. Any existing custom metadata will be deleted.
func (*Project) UploadObject ¶
func (project *Project) UploadObject(ctx context.Context, bucket, key string, options *UploadOptions) (_ *Upload, err error)
UploadObject starts an upload to the specific key.
It is not guaranteed that the uncommitted object is visible through ListUploads while uploading.
func (*Project) UploadPart ¶ added in v1.6.0
func (project *Project) UploadPart(ctx context.Context, bucket, key, uploadID string, partNumber uint32) (_ *PartUpload, err error)
UploadPart uploads a part with partNumber to a multipart upload started with BeginUpload.
uploadID is an upload identifier returned by BeginUpload.
type SharePrefix ¶
type SharePrefix struct {
    Bucket string

    // Note: that within a bucket, the hierarchical key derivation scheme is
    // delineated by forward slashes (/), so encryption information will be
    // included in the resulting access grant to decrypt any key that shares
    // the same prefix up until the last slash.
    Prefix string
}
SharePrefix defines a prefix that will be shared.
type SystemMetadata ¶
SystemMetadata contains information about the object that cannot be changed directly.
type Upload ¶
type Upload struct {
// contains filtered or unexported fields
}
Upload is an upload to Storj Network.
func (*Upload) Abort ¶
Abort aborts the upload.
Returns ErrUploadDone when either Abort or Commit has already been called.
func (*Upload) Commit ¶
Commit commits data to the store.
Returns ErrUploadDone when either Abort or Commit has already been called.
func (*Upload) SetCustomMetadata ¶
func (upload *Upload) SetCustomMetadata(ctx context.Context, custom CustomMetadata) error
SetCustomMetadata updates custom metadata to be included with the object. If it is nil, it won't be modified.
type UploadInfo ¶ added in v1.6.0
type UploadInfo struct {
    UploadID string
    Key      string

    IsPrefix bool

    System SystemMetadata
    Custom CustomMetadata
}
UploadInfo contains information about an upload.
type UploadIterator ¶ added in v1.6.0
type UploadIterator struct {
// contains filtered or unexported fields
}
UploadIterator is an iterator over a collection of uncommitted uploads.
func (*UploadIterator) Err ¶ added in v1.6.0
func (uploads *UploadIterator) Err() error
Err returns error, if one happened during iteration.
func (*UploadIterator) Item ¶ added in v1.6.0
func (uploads *UploadIterator) Item() *UploadInfo
Item returns the current entry in the iterator.
func (*UploadIterator) Next ¶ added in v1.6.0
func (uploads *UploadIterator) Next() bool
Next prepares next entry for reading. It returns false if the end of the iteration is reached and there are no more uploads, or if there is an error.
type UploadObjectMetadataOptions ¶ added in v1.6.0
type UploadObjectMetadataOptions struct { }
UploadObjectMetadataOptions contains additional options for updating object's metadata. Reserved for future use.
type UploadOptions ¶
UploadOptions contains additional options for uploading.
Source Files ¶
Directories ¶
Path | Synopsis
---|---
backcomp | Package backcomp contains utilities for handling backwards incompatible changes.
examples |
internal |
private |