Documentation ¶
Overview ¶
Package uplink is the main entrypoint to interacting with Storj Labs' decentralized storage network.
Projects ¶
An *Uplink reference lets you open a *Project, which should already have been created via the web interface of one of the Storj Labs or Tardigrade network Satellites. You can create or access your us-central-1 account at https://us-central-1.tardigrade.io/
Opening a *Project requires a specific Satellite address (e.g. "us-central-1.tardigrade.io:7777") and an API key. The API key grants access to a specific set of operations and resources within a project. Projects allow you to manage and open Buckets.
Example:
ul, err := uplink.NewUplink(ctx, nil)
if err != nil {
	return err
}
defer ul.Close()

p, err := ul.OpenProject(ctx, "us-central-1.tardigrade.io:7777", apiKey)
if err != nil {
	return err
}
defer p.Close()
API Keys ¶
An API key is a "macaroon" (see https://ai.google/research/pubs/pub41892). As such, API keys can be restricted such that users of the restricted API key only have access to a subset of what the parent API key allowed. It is possible to restrict a macaroon to specific operations, buckets, paths, path prefixes, or time windows.
If you need a valid API key, please visit your chosen Satellite's web interface.
Example:
adminKey, err := uplink.ParseAPIKey("13YqeJ3Xk4KHocypZMdQZZqfC1goMvxbYSCWWEjSmew6rVvJp3GCK")
if err != nil {
	return "", err
}

readOnlyKey, err := adminKey.Restrict(macaroon.Caveat{
	DisallowWrites:  true,
	DisallowLists:   true,
	DisallowDeletes: true,
})
if err != nil {
	return "", err
}

// return a new restricted key that is read only
return readOnlyKey.Serialize()
Restricting an API key to a path prefix is most easily accomplished using an EncryptionAccess, so see EncryptionAccess for more.
Buckets ¶
A bucket represents a collection of objects. You can upload, download, list, and delete objects of any size or shape. Objects within buckets are represented by keys, where keys can optionally be listed using the "/" delimiter. Objects are always end-to-end encrypted.
b, err := p.OpenBucket(ctx, "staging", access)
if err != nil {
	return err
}
defer b.Close()
EncryptionAccess ¶
Where an APIKey controls what resources and operations a Satellite will allow a user to access and perform, an EncryptionAccess controls what buckets, path prefixes, and objects a user has the ability to decrypt. An EncryptionAccess is a serializable collection of hierarchically-determined encryption keys, where by default the key starts at the root.
As an example, the following code creates an encryption access context (and API key) that is restricted to objects with the prefix "/logs/" inside the staging bucket.
access := uplink.NewEncryptionAccessWithDefaultKey(defaultKey)
logServerKey, logServerAccess, err := access.Restrict(
	readOnlyKey, uplink.EncryptionRestriction{
		Bucket:     "staging",
		PathPrefix: "/logs/",
	})
if err != nil {
	return "", err
}
// logServerKey and logServerAccess would both be handed to the log server.
return logServerAccess.Serialize()
The keys needed to decrypt data in other buckets or under other path prefixes are not contained in this new serialized encryption access context; it carries only what is necessary.
Objects ¶
Objects support a few kilobytes of arbitrary key/value metadata and an arbitrary-size primary data stream that supports seeking. If you want only a small subrange of the data you uploaded, you can download just that range quickly and efficiently. This allows you to stream video straight out of the network with little overhead.
obj, err := b.OpenObject(ctx, "/logs/webserver.log")
if err != nil {
	return err
}
defer obj.Close()

reader, err := obj.DownloadRange(ctx, 0, -1)
if err != nil {
	return err
}
defer reader.Close()
Example (CreateBucket) ¶
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	"github.com/zeebo/errs"

	"storj.io/storj/lib/uplink"
)

func CreateBucketExample(ctx context.Context, satelliteAddress, apiKey string,
	cfg *uplink.Config, out io.Writer) (err error) {
	errCatch := func(fn func() error) { err = errs.Combine(err, fn()) }

	// First, create an Uplink handle.
	ul, err := uplink.NewUplink(ctx, cfg)
	if err != nil {
		return err
	}
	defer errCatch(ul.Close)

	// Then, parse the API key. API keys are "macaroons" that allow you to create
	// new, restricted API keys.
	key, err := uplink.ParseAPIKey(apiKey)
	if err != nil {
		return err
	}

	// Next, open the project in question. Projects are identified by a specific
	// Satellite and API key.
	p, err := ul.OpenProject(ctx, satelliteAddress, key)
	if err != nil {
		return err
	}
	defer errCatch(p.Close)

	// Last, create the bucket!
	_, err = p.CreateBucket(ctx, "testbucket", nil)
	if err != nil {
		return err
	}

	fmt.Fprintln(out, "success!")
	return nil
}

func main() {
	// The satellite address is the address of the satellite your API key is
	// valid on.
	satelliteAddress := "us-central-1.tardigrade.io:7777"

	// The API key can be created in the web interface.
	apiKey := "qPSUM3k0bZyOIyil2xrVWiSuc9HuB2yBP3qDrA2Gc"

	err := CreateBucketExample(context.Background(), satelliteAddress, apiKey,
		&uplink.Config{}, os.Stdout)
	if err != nil {
		panic(err)
	}
}
Output:
Example (CreateEncryptionKey) ¶
package main

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"strings"

	"github.com/zeebo/errs"

	"storj.io/storj/lib/uplink"
)

func CreateEncryptionKeyExampleByAdmin1(ctx context.Context, satelliteAddress, apiKey string,
	cfg *uplink.Config, out io.Writer) (serializedAccess string, err error) {
	errCatch := func(fn func() error) { err = errs.Combine(err, fn()) }

	// First, create an Uplink handle.
	ul, err := uplink.NewUplink(ctx, cfg)
	if err != nil {
		return "", err
	}
	defer errCatch(ul.Close)

	// Parse the API key. API keys are "macaroons" that allow you to create new,
	// restricted API keys.
	key, err := uplink.ParseAPIKey(apiKey)
	if err != nil {
		return "", err
	}

	// Open the project in question. Projects are identified by a specific
	// Satellite and API key.
	p, err := ul.OpenProject(ctx, satelliteAddress, key)
	if err != nil {
		return "", err
	}
	defer errCatch(p.Close)

	// Make a key.
	encKey, err := p.SaltedKeyFromPassphrase(ctx, "my secret passphrase")
	if err != nil {
		return "", err
	}

	// Make an encryption context.
	access := uplink.NewEncryptionAccessWithDefaultKey(*encKey)

	// Serialize it.
	serializedAccess, err = access.Serialize()
	if err != nil {
		return "", err
	}

	// Create a bucket.
	_, err = p.CreateBucket(ctx, "prod", nil)
	if err != nil {
		return "", err
	}

	// Open the bucket.
	bucket, err := p.OpenBucket(ctx, "prod", access)
	if err != nil {
		return "", err
	}
	defer errCatch(bucket.Close)

	// Upload a file.
	err = bucket.UploadObject(ctx, "webserver/logs/log.txt",
		strings.NewReader("hello world"), nil)
	if err != nil {
		return "", err
	}

	fmt.Fprintln(out, "success!")
	return serializedAccess, nil
}

func CreateEncryptionKeyExampleByAdmin2(ctx context.Context, satelliteAddress, apiKey string,
	serializedAccess string, cfg *uplink.Config, out io.Writer) (err error) {
	errCatch := func(fn func() error) { err = errs.Combine(err, fn()) }

	// First, create an Uplink handle.
	ul, err := uplink.NewUplink(ctx, cfg)
	if err != nil {
		return err
	}
	defer errCatch(ul.Close)

	// Parse the API key. API keys are "macaroons" that allow you to create new,
	// restricted API keys.
	key, err := uplink.ParseAPIKey(apiKey)
	if err != nil {
		return err
	}

	// Open the project in question. Projects are identified by a specific
	// Satellite and API key.
	p, err := ul.OpenProject(ctx, satelliteAddress, key)
	if err != nil {
		return err
	}
	defer errCatch(p.Close)

	// Parse the encryption context.
	access, err := uplink.ParseEncryptionAccess(serializedAccess)
	if err != nil {
		return err
	}

	// Open the bucket.
	bucket, err := p.OpenBucket(ctx, "prod", access)
	if err != nil {
		return err
	}
	defer errCatch(bucket.Close)

	// Open the file.
	obj, err := bucket.OpenObject(ctx, "webserver/logs/log.txt")
	if err != nil {
		return err
	}
	defer errCatch(obj.Close)

	// Get a reader for the entire file.
	r, err := obj.DownloadRange(ctx, 0, -1)
	if err != nil {
		return err
	}
	defer errCatch(r.Close)

	// Read the file.
	data, err := ioutil.ReadAll(r)
	if err != nil {
		return err
	}

	// Print it!
	fmt.Fprintln(out, string(data))
	return nil
}

func main() {
	// The satellite address is the address of the satellite your API key is
	// valid on.
	satelliteAddress := "us-central-1.tardigrade.io:7777"

	// The API keys can be created in the web interface.
	admin1APIKey := "qPSUM3k0bZyOIyil2xrVWiSuc9HuB2yBP3qDrA2Gc"
	admin2APIKey := "udP0lzCC2rgwRZfdY70PcwWrXzrq9cl5usbiFaeyo"

	ctx := context.Background()

	// Admin1 is going to create an encryption context and share it.
	access, err := CreateEncryptionKeyExampleByAdmin1(ctx, satelliteAddress, admin1APIKey,
		&uplink.Config{}, os.Stdout)
	if err != nil {
		panic(err)
	}

	// Admin2 is going to use the provided encryption context to load the
	// uploaded file.
	err = CreateEncryptionKeyExampleByAdmin2(ctx, satelliteAddress, admin2APIKey, access,
		&uplink.Config{}, os.Stdout)
	if err != nil {
		panic(err)
	}
}
Output:
Example (DeleteBucket) ¶
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	"github.com/zeebo/errs"

	"storj.io/storj/lib/uplink"
)

func DeleteBucketExample(ctx context.Context, satelliteAddress, apiKey string,
	cfg *uplink.Config, out io.Writer) (err error) {
	errCatch := func(fn func() error) { err = errs.Combine(err, fn()) }

	// First, create an Uplink handle.
	ul, err := uplink.NewUplink(ctx, cfg)
	if err != nil {
		return err
	}
	defer errCatch(ul.Close)

	// Then, parse the API key. API keys are "macaroons" that allow you to create
	// new, restricted API keys.
	key, err := uplink.ParseAPIKey(apiKey)
	if err != nil {
		return err
	}

	// Next, open the project in question. Projects are identified by a specific
	// Satellite and API key.
	p, err := ul.OpenProject(ctx, satelliteAddress, key)
	if err != nil {
		return err
	}
	defer errCatch(p.Close)

	// Last, delete the bucket!
	err = p.DeleteBucket(ctx, "testbucket")
	if err != nil {
		return err
	}

	fmt.Fprintln(out, "success!")
	return nil
}

func main() {
	// The satellite address is the address of the satellite your API key is
	// valid on.
	satelliteAddress := "us-central-1.tardigrade.io:7777"

	// The API key can be created in the web interface.
	apiKey := "qPSUM3k0bZyOIyil2xrVWiSuc9HuB2yBP3qDrA2Gc"

	err := DeleteBucketExample(context.Background(), satelliteAddress, apiKey,
		&uplink.Config{}, os.Stdout)
	if err != nil {
		panic(err)
	}
}
Output:
Example (ListBuckets) ¶
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	"github.com/zeebo/errs"

	"storj.io/storj/lib/uplink"
	"storj.io/storj/pkg/storj"
)

func ListBucketsExample(ctx context.Context, satelliteAddress, apiKey string,
	cfg *uplink.Config, out io.Writer) (err error) {
	errCatch := func(fn func() error) { err = errs.Combine(err, fn()) }

	// First, create an Uplink handle.
	ul, err := uplink.NewUplink(ctx, cfg)
	if err != nil {
		return err
	}
	defer errCatch(ul.Close)

	// Then, parse the API key. API keys are "macaroons" that allow you to create
	// new, restricted API keys.
	key, err := uplink.ParseAPIKey(apiKey)
	if err != nil {
		return err
	}

	// Next, open the project in question. Projects are identified by a specific
	// Satellite and API key.
	p, err := ul.OpenProject(ctx, satelliteAddress, key)
	if err != nil {
		return err
	}
	defer errCatch(p.Close)

	// Last, list the buckets! Bucket listing is paginated, so you'll need to
	// use pagination.
	list := uplink.BucketListOptions{
		Direction: storj.Forward,
	}
	for {
		result, err := p.ListBuckets(ctx, &list)
		if err != nil {
			return err
		}
		for _, bucket := range result.Items {
			fmt.Fprintf(out, "Bucket: %v\n", bucket.Name)
		}
		if !result.More {
			break
		}
		list = list.NextPage(result)
	}
	return nil
}

func main() {
	// The satellite address is the address of the satellite your API key is
	// valid on.
	satelliteAddress := "us-central-1.tardigrade.io:7777"

	// The API key can be created in the web interface.
	apiKey := "qPSUM3k0bZyOIyil2xrVWiSuc9HuB2yBP3qDrA2Gc"

	err := ListBucketsExample(context.Background(), satelliteAddress, apiKey,
		&uplink.Config{}, os.Stdout)
	if err != nil {
		panic(err)
	}
}
Output:
Example (RestrictAccess) ¶
package main

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"os"

	"github.com/zeebo/errs"

	"storj.io/storj/lib/uplink"
	"storj.io/storj/pkg/macaroon"
)

func RestrictAccessExampleByAdmin(ctx context.Context, satelliteAddress, apiKey, adminAccess string,
	cfg *uplink.Config, out io.Writer) (serializedScope string, err error) {
	// Parse the API key. API keys are "macaroons" that allow you to create new,
	// restricted API keys.
	key, err := uplink.ParseAPIKey(apiKey)
	if err != nil {
		return "", err
	}

	// Restrict the API key to be read only and to be for just the prod and
	// staging buckets for the path webserver/logs/.
	userAPIKey, err := key.Restrict(macaroon.Caveat{
		DisallowWrites:  true,
		DisallowDeletes: true,
	})
	if err != nil {
		return "", err
	}

	// Load the existing encryption access context.
	access, err := uplink.ParseEncryptionAccess(adminAccess)
	if err != nil {
		return "", err
	}

	// Restrict the encryption access context to just the prod and staging
	// buckets for webserver/logs/.
	userAPIKey, userAccess, err := access.Restrict(userAPIKey,
		uplink.EncryptionRestriction{
			Bucket:     "prod",
			PathPrefix: "webserver/logs",
		},
		uplink.EncryptionRestriction{
			Bucket:     "staging",
			PathPrefix: "webserver/logs",
		},
	)
	if err != nil {
		return "", err
	}

	userScope := &uplink.Scope{
		SatelliteAddr:    satelliteAddress,
		APIKey:           userAPIKey,
		EncryptionAccess: userAccess,
	}

	// Serialize the scope.
	serializedScope, err = userScope.Serialize()
	if err != nil {
		return "", err
	}

	fmt.Fprintln(out, "success!")
	return serializedScope, nil
}

func RestrictAccessExampleByUser(ctx context.Context, serializedScope string,
	cfg *uplink.Config, out io.Writer) (err error) {
	errCatch := func(fn func() error) { err = errs.Combine(err, fn()) }

	// First, create an Uplink handle.
	ul, err := uplink.NewUplink(ctx, cfg)
	if err != nil {
		return err
	}
	defer errCatch(ul.Close)

	// Parse the scope.
	scope, err := uplink.ParseScope(serializedScope)
	if err != nil {
		return err
	}

	// Open the project in question. Projects are identified by a specific
	// Satellite and API key.
	p, err := ul.OpenProject(ctx, scope.SatelliteAddr, scope.APIKey)
	if err != nil {
		return err
	}
	defer errCatch(p.Close)

	// Open the bucket.
	bucket, err := p.OpenBucket(ctx, "prod", scope.EncryptionAccess)
	if err != nil {
		return err
	}
	defer errCatch(bucket.Close)

	// Open the file.
	obj, err := bucket.OpenObject(ctx, "webserver/logs/log.txt")
	if err != nil {
		return err
	}
	defer errCatch(obj.Close)

	// Get a reader for the entire file.
	r, err := obj.DownloadRange(ctx, 0, -1)
	if err != nil {
		return err
	}
	defer errCatch(r.Close)

	// Read the file.
	data, err := ioutil.ReadAll(r)
	if err != nil {
		return err
	}

	// Print it!
	fmt.Fprintln(out, string(data))
	return nil
}

func main() {
	// The satellite address is the address of the satellite your API key is
	// valid on.
	satelliteAddress := "us-central-1.tardigrade.io:7777"

	// The API key can be created in the web interface.
	adminAPIKey := "qPSUM3k0bZyOIyil2xrVWiSuc9HuB2yBP3qDrA2Gc"

	// The encryption access context was created using
	// NewEncryptionAccessWithDefaultKey and
	// (*Project).SaltedKeyFromPassphrase() earlier.
	adminAccess := "HYGoqCEz43mCE40Hc5lQD3DtUYynx9Vo1GjOx75hQ"

	ctx := context.Background()

	// The admin is going to create a scope and share it.
	userScope, err := RestrictAccessExampleByAdmin(ctx, satelliteAddress, adminAPIKey,
		adminAccess, &uplink.Config{}, os.Stdout)
	if err != nil {
		panic(err)
	}

	// The user is going to use the provided scope to load the uploaded file.
	err = RestrictAccessExampleByUser(ctx, userScope, &uplink.Config{}, os.Stdout)
	if err != nil {
		panic(err)
	}
}
Output:
Index ¶
- Variables
- type APIKey
- type Bucket
- func (b *Bucket) Close() error
- func (b *Bucket) DeleteObject(ctx context.Context, path storj.Path) (err error)
- func (b *Bucket) Download(ctx context.Context, path storj.Path) (_ io.ReadCloser, err error)
- func (b *Bucket) DownloadRange(ctx context.Context, path storj.Path, start, limit int64) (_ io.ReadCloser, err error)
- func (b *Bucket) ListObjects(ctx context.Context, cfg *ListOptions) (list storj.ObjectList, err error)
- func (b *Bucket) NewReader(ctx context.Context, path storj.Path) (_ io.ReadCloser, err error) (deprecated)
- func (b *Bucket) NewWriter(ctx context.Context, path storj.Path, opts *UploadOptions) (_ io.WriteCloser, err error)
- func (b *Bucket) OpenObject(ctx context.Context, path storj.Path) (o *Object, err error)
- func (b *Bucket) UploadObject(ctx context.Context, path storj.Path, data io.Reader, opts *UploadOptions) (err error)
- type BucketConfig
- type BucketListOptions
- type Config
- type EncryptionAccess
- func (s *EncryptionAccess) Import(other *EncryptionAccess) error
- func (s *EncryptionAccess) Restrict(apiKey APIKey, restrictions ...EncryptionRestriction) (APIKey, *EncryptionAccess, error)
- func (s *EncryptionAccess) Serialize() (string, error)
- func (s *EncryptionAccess) SetDefaultKey(defaultKey storj.Key)
- func (s *EncryptionAccess) Store() *encryption.Store
- type EncryptionRestriction
- type ListOptions
- type Object
- type ObjectMeta
- type Project
- func (p *Project) Close() error
- func (p *Project) CreateBucket(ctx context.Context, name string, cfg *BucketConfig) (bucket storj.Bucket, err error)
- func (p *Project) DeleteBucket(ctx context.Context, bucket string) (err error)
- func (p *Project) GetBucketInfo(ctx context.Context, bucket string) (b storj.Bucket, bi *BucketConfig, err error)
- func (p *Project) ListBuckets(ctx context.Context, opts *BucketListOptions) (bl storj.BucketList, err error)
- func (p *Project) OpenBucket(ctx context.Context, bucketName string, access *EncryptionAccess) (b *Bucket, err error)
- func (p *Project) SaltedKeyFromPassphrase(ctx context.Context, passphrase string) (_ *storj.Key, err error)
- type Scope
- type Uplink
- type UploadOptions
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var (
	// Error is the toplevel class of errors for the uplink library.
	Error = errs.Class("libuplink")
)
Functions ¶
This section is empty.
Types ¶
type APIKey ¶
type APIKey struct {
// contains filtered or unexported fields
}
APIKey represents an access credential to certain resources
type Bucket ¶
type Bucket struct {
	BucketConfig
	Name    string
	Created time.Time
	// contains filtered or unexported fields
}
Bucket represents operations you can perform on a bucket
func (*Bucket) DeleteObject ¶
DeleteObject removes an object, if authorized.
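As a minimal sketch, deleting a single object by key (the open bucket handle b and ctx are assumed; the key is hypothetical):

if err := b.DeleteObject(ctx, "webserver/logs/old.log"); err != nil {
	return err
}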
func (*Bucket) Download ¶ added in v0.18.0
Download creates a new reader that downloads the object data.
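A minimal sketch of downloading a whole object with Download (the open bucket handle b and ctx are assumed; the key is hypothetical and ioutil is assumed to be imported):

rc, err := b.Download(ctx, "webserver/logs/log.txt")
if err != nil {
	return err
}
defer rc.Close()

// Read the whole object into memory; for large objects, stream with io.Copy instead.
data, err := ioutil.ReadAll(rc)
if err != nil {
	return err
}
fmt.Printf("downloaded %d bytes\n", len(data))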
func (*Bucket) DownloadRange ¶ added in v0.18.0
func (b *Bucket) DownloadRange(ctx context.Context, path storj.Path, start, limit int64) (_ io.ReadCloser, err error)
DownloadRange creates a new reader that downloads the object data starting from start and up to start + limit.
func (*Bucket) ListObjects ¶
func (b *Bucket) ListObjects(ctx context.Context, cfg *ListOptions) (list storj.ObjectList, err error)
ListObjects lists objects a user is authorized to see.
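A hedged sketch of a recursive listing under a prefix. ListOptions is an alias for storj.ListOptions; the Direction, Prefix, and Recursive fields and the storj.After direction used below are assumptions based on pkg/storj, not guaranteed by this page:

list, err := b.ListObjects(ctx, &uplink.ListOptions{
	Direction: storj.After, // assumed list direction constant from pkg/storj
	Prefix:    "webserver/logs/",
	Recursive: true,
})
if err != nil {
	return err
}
for _, item := range list.Items {
	fmt.Printf("object: %s\n", item.Path)
}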
func (*Bucket) NewWriter ¶ added in v0.12.0
func (b *Bucket) NewWriter(ctx context.Context, path storj.Path, opts *UploadOptions) (_ io.WriteCloser, err error)
NewWriter creates a writer which uploads the object.
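A minimal sketch of streaming an upload through NewWriter (b, ctx, io, and strings are assumed; the key and contents are illustrative, and closing the writer is assumed to finalize the upload):

w, err := b.NewWriter(ctx, "webserver/logs/report.txt", nil)
if err != nil {
	return err
}
// Stream data into the object; Close completes the upload.
if _, err := io.Copy(w, strings.NewReader("report contents")); err != nil {
	_ = w.Close()
	return err
}
return w.Close()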
func (*Bucket) OpenObject ¶
OpenObject returns an Object handle, if authorized.
type BucketConfig ¶
type BucketConfig struct {
	// PathCipher indicates which cipher suite is to be used for path
	// encryption within the new Bucket. If not set, AES-GCM encryption
	// will be used.
	PathCipher storj.CipherSuite

	// EncryptionParameters specifies the default encryption parameters to
	// be used for data encryption of new Objects in this bucket.
	EncryptionParameters storj.EncryptionParameters

	// Volatile groups config values that are likely to change semantics
	// or go away entirely between releases. Be careful when using them!
	Volatile struct {
		// RedundancyScheme defines the default Reed-Solomon and/or
		// Forward Error Correction encoding parameters to be used by
		// objects in this Bucket.
		RedundancyScheme storj.RedundancyScheme

		// SegmentsSize is the default segment size to use for new
		// objects in this Bucket.
		SegmentsSize memory.Size
	}
}
BucketConfig holds information about a bucket's configuration. This is filled in by the caller for use with CreateBucket(), or filled in by the library as Bucket.Config when a bucket is returned from OpenBucket().
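As an illustration, a hedged sketch of creating a bucket with a non-default configuration; storj.EncAESGCM and memory.MiB are assumed constants from the storj and memory packages, and the values are purely illustrative:

bucketCfg := &uplink.BucketConfig{
	PathCipher: storj.EncAESGCM, // assumed AES-GCM cipher suite constant
}
bucketCfg.Volatile.SegmentsSize = 64 * memory.MiB // assumed memory.MiB constant

_, err = p.CreateBucket(ctx, "testbucket", bucketCfg)
if err != nil {
	return err
}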
type BucketListOptions ¶
type BucketListOptions = storj.BucketListOptions
BucketListOptions controls options to the ListBuckets() call.
type Config ¶
type Config struct {
	// Volatile groups config values that are likely to change semantics
	// or go away entirely between releases. Be careful when using them!
	Volatile struct {
		// Log is the logger to use for uplink components
		Log *zap.Logger

		// TLS defines options that affect TLS negotiation for outbound
		// connections initiated by this uplink.
		TLS struct {
			// SkipPeerCAWhitelist determines whether to require all
			// remote hosts to have identity certificates signed by
			// Certificate Authorities in the default whitelist. If
			// set to true, the whitelist will be ignored.
			SkipPeerCAWhitelist bool

			// PeerCAWhitelistPath gives the path to a CA cert
			// whitelist file. It is ignored if SkipPeerCAWhitelist
			// is set. If empty, the internal default peer whitelist
			// is used.
			PeerCAWhitelistPath string
		}

		// PeerIDVersion is the identity versions remote peers to this node
		// will be supported by this node.
		PeerIDVersion string

		// MaxInlineSize determines whether the uplink will attempt to
		// store a new object in the satellite's metainfo. Objects at
		// or below this size will be marked for inline storage, and
		// objects above this size will not. (The satellite may reject
		// the inline storage and require remote storage, still.)
		MaxInlineSize memory.Size

		// MaxMemory is the default maximum amount of memory to be
		// allocated for read buffers while performing decodes of
		// objects. (This option is overrideable per Bucket if the user
		// so desires.) If set to zero, the library default (4 MiB) will
		// be used. If set to a negative value, the system will use the
		// smallest amount of memory it can.
		MaxMemory memory.Size

		// PartnerID is the identity given to the partner for value
		// attribution.
		//
		// Deprecated: prefer UserAgent
		PartnerID string

		// UserAgent for the product using the library.
		UserAgent string

		// DialTimeout is the maximum time to wait connecting to another node.
		// If not set, the library default (20 seconds) will be used.
		DialTimeout time.Duration

		// PBKDFConcurrency is the passphrase-based key derivation function
		// concurrency to use.
		// WARNING: changing this value fundamentally changes how keys are
		// derived. Keys generated with one value will not be the same keys
		// as generated with other values! Leaving this at the default is
		// highly recommended.
		//
		// Unfortunately, up to version v0.26.2, we automatically set this to the
		// number of CPU cores your processor had. If you are having trouble
		// decrypting data uploaded with v0.26.2 or older, you may need to set
		// this value to the number of cores your computer had at the time
		// you entered a passphrase.
		//
		// Otherwise, this value should be left at the default value of 0
		// (which means to use the internal default).
		PBKDFConcurrency int
	}
}
Config represents configuration options for an Uplink
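A small sketch of filling in the Volatile config before creating an Uplink handle. The field names come from the struct above; zap.NewNop is from go.uber.org/zap, and the timeout and user agent values are illustrative assumptions:

cfg := &uplink.Config{}
cfg.Volatile.Log = zap.NewNop()             // silence library logging
cfg.Volatile.DialTimeout = 30 * time.Second // allow slower connections than the 20 second default
cfg.Volatile.UserAgent = "my-app/1.0"       // hypothetical product identifier

ul, err := uplink.NewUplink(ctx, cfg)
if err != nil {
	return err
}
defer ul.Close()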
type EncryptionAccess ¶
type EncryptionAccess struct {
// contains filtered or unexported fields
}
EncryptionAccess represents an encryption access context. It holds information about how various buckets and objects should be encrypted and decrypted.
func NewEncryptionAccess ¶ added in v0.14.5
func NewEncryptionAccess() *EncryptionAccess
NewEncryptionAccess creates an encryption access context
func NewEncryptionAccessWithDefaultKey ¶ added in v0.14.5
func NewEncryptionAccessWithDefaultKey(defaultKey storj.Key) *EncryptionAccess
NewEncryptionAccessWithDefaultKey creates an encryption access context with a default key set. Use (*Project).SaltedKeyFromPassphrase to generate a default key
func ParseEncryptionAccess ¶ added in v0.14.5
func ParseEncryptionAccess(serialized string) (*EncryptionAccess, error)
ParseEncryptionAccess parses a base58 serialized encryption access into a working one.
func (*EncryptionAccess) Import ¶ added in v0.14.5
func (s *EncryptionAccess) Import(other *EncryptionAccess) error
Import merges the other encryption access context into this one. In cases of conflicting path decryption settings (including if both accesses have a default key), the new settings are kept.
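For instance, a hedged sketch of merging two accesses into a fresh one (accessA and accessB are hypothetical, already-parsed encryption access contexts):

merged := uplink.NewEncryptionAccess()
if err := merged.Import(accessA); err != nil {
	return err
}
// Where the two conflict, the settings imported from accessB are kept.
if err := merged.Import(accessB); err != nil {
	return err
}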
func (*EncryptionAccess) Restrict ¶ added in v0.14.5
func (s *EncryptionAccess) Restrict(apiKey APIKey, restrictions ...EncryptionRestriction) (APIKey, *EncryptionAccess, error)
Restrict creates a new EncryptionAccess with no default key, where the key material in the new access is just enough to allow someone to access all of the given restrictions but no more.
func (*EncryptionAccess) Serialize ¶ added in v0.14.5
func (s *EncryptionAccess) Serialize() (string, error)
Serialize turns an EncryptionAccess into base58
func (*EncryptionAccess) SetDefaultKey ¶ added in v0.14.5
func (s *EncryptionAccess) SetDefaultKey(defaultKey storj.Key)
SetDefaultKey sets the default key for the encryption access context. Use (*Project).SaltedKeyFromPassphrase to generate a default key
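A minimal sketch tying the two together: derive a salted key from a passphrase on an open *Project p and install it as the default key of a fresh access context:

saltedKey, err := p.SaltedKeyFromPassphrase(ctx, "my secret passphrase")
if err != nil {
	return err
}

access := uplink.NewEncryptionAccess()
access.SetDefaultKey(*saltedKey)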
func (*EncryptionAccess) Store ¶ added in v0.14.5
func (s *EncryptionAccess) Store() *encryption.Store
Store returns the underlying encryption store for the access context.
type EncryptionRestriction ¶ added in v0.14.4
EncryptionRestriction represents a scenario where some set of objects may need to be encrypted/decrypted
type ListOptions ¶
type ListOptions = storj.ListOptions
ListOptions controls options for the ListObjects() call.
type Object ¶
type Object struct {
	// Meta holds the metainfo associated with the Object.
	Meta ObjectMeta
	// contains filtered or unexported fields
}
An Object is a sequence of bytes with associated metadata, stored in the Storj network (or being prepared for such storage). It belongs to a specific bucket, and has a path and a size. It is comparable to a "file" in a conventional filesystem.
func (*Object) DownloadRange ¶
func (o *Object) DownloadRange(ctx context.Context, offset, length int64) (_ io.ReadCloser, err error)
DownloadRange returns an Object's data. A length of -1 will mean (Object.Size - offset).
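For example, a sketch of reading just a slice of an object alongside a read to the end (obj is an open *Object; the offsets are illustrative):

// Read the first 4 KiB of the object.
head, err := obj.DownloadRange(ctx, 0, 4096)
if err != nil {
	return err
}
defer head.Close()

// Read everything from byte 1024 to the end (length -1 means Object.Size - offset).
tail, err := obj.DownloadRange(ctx, 1024, -1)
if err != nil {
	return err
}
defer tail.Close()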
type ObjectMeta ¶
type ObjectMeta struct {
	// Bucket gives the name of the bucket in which an Object is placed.
	Bucket string

	// Path is the path of the Object within the Bucket. Path components are
	// forward-slash-separated, like Unix file paths ("one/two/three").
	Path storj.Path

	// IsPrefix is true if this ObjectMeta does not refer to a specific
	// Object, but to some arbitrary point in the path hierarchy. This would
	// be called a "folder" or "directory" in a typical filesystem.
	IsPrefix bool

	// ContentType, if set, gives a MIME content-type for the Object, as
	// set when the object was created.
	ContentType string

	// Metadata contains the additional information about an Object that was
	// set when the object was created. See UploadOptions.Metadata for more
	// information.
	Metadata map[string]string

	// Created is the time at which the Object was created.
	Created time.Time

	// Modified is the time at which the Object was last modified.
	Modified time.Time

	// Expires is the time at which the Object expires (after which it will
	// be automatically deleted from storage nodes).
	Expires time.Time

	// Size gives the size of the Object in bytes.
	Size int64

	// Checksum gives a checksum of the contents of the Object.
	Checksum []byte

	// Volatile groups config values that are likely to change semantics
	// or go away entirely between releases. Be careful when using them!
	Volatile struct {
		// EncryptionParameters gives the encryption parameters being
		// used for the Object's data encryption.
		EncryptionParameters storj.EncryptionParameters

		// RedundancyScheme determines the Reed-Solomon and/or Forward
		// Error Correction encoding parameters to be used for this
		// Object.
		RedundancyScheme storj.RedundancyScheme

		// SegmentsSize gives the segment size being used for the
		// Object's data storage.
		SegmentsSize int64
	}
}
ObjectMeta contains metadata about a specific Object.
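A small sketch of inspecting an object's metadata after opening it (bucket and ctx are assumed; the key is hypothetical):

obj, err := bucket.OpenObject(ctx, "webserver/logs/log.txt")
if err != nil {
	return err
}
defer obj.Close()

fmt.Printf("path: %s, size: %d bytes\n", obj.Meta.Path, obj.Meta.Size)
fmt.Printf("content type: %q, modified: %s\n", obj.Meta.ContentType, obj.Meta.Modified)
for k, v := range obj.Meta.Metadata {
	fmt.Printf("metadata %s = %s\n", k, v)
}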
type Project ¶
type Project struct {
// contains filtered or unexported fields
}
Project represents a specific project access session.
func (*Project) Close ¶
Close closes the Project. Opened buckets or objects must not be used after calling Close.
func (*Project) CreateBucket ¶
func (p *Project) CreateBucket(ctx context.Context, name string, cfg *BucketConfig) (bucket storj.Bucket, err error)
CreateBucket creates a new bucket if authorized.
func (*Project) DeleteBucket ¶
DeleteBucket deletes a bucket if authorized. If the bucket contains any Objects at the time of deletion, they may be lost permanently.
func (*Project) GetBucketInfo ¶
func (p *Project) GetBucketInfo(ctx context.Context, bucket string) (b storj.Bucket, bi *BucketConfig, err error)
GetBucketInfo returns info about the requested bucket if authorized.
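A hedged sketch of reading back a bucket's configuration (p and ctx are assumed; the Name and Created fields on the returned storj.Bucket are assumptions based on pkg/storj):

info, bucketCfg, err := p.GetBucketInfo(ctx, "testbucket")
if err != nil {
	return err
}
// Name and Created are assumed fields of storj.Bucket; PathCipher comes from BucketConfig above.
fmt.Printf("bucket %q created at %s, path cipher %v\n", info.Name, info.Created, bucketCfg.PathCipher)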
func (*Project) ListBuckets ¶
func (p *Project) ListBuckets(ctx context.Context, opts *BucketListOptions) (bl storj.BucketList, err error)
ListBuckets will list authorized buckets.
func (*Project) OpenBucket ¶
func (p *Project) OpenBucket(ctx context.Context, bucketName string, access *EncryptionAccess) (b *Bucket, err error)
OpenBucket returns a Bucket handle with the given EncryptionAccess information.
type Scope ¶ added in v0.15.0
type Scope struct {
	SatelliteAddr string

	APIKey APIKey

	EncryptionAccess *EncryptionAccess
}
Scope is a serializable type that represents all of the credentials you need to open a project and some set of buckets.
func ParseScope ¶ added in v0.15.0
ParseScope unmarshals a base58 encoded scope protobuf and decodes the fields into the Scope convenience type. It will return an error if the protobuf is malformed or field validation fails.
type Uplink ¶
type Uplink struct {
// contains filtered or unexported fields
}
Uplink represents the main entrypoint to Storj V3. An Uplink connects to a specific Satellite and caches connections and resources, allowing one to create sessions delineated by specific access controls.
func NewUplink ¶
NewUplink creates a new Uplink. This is the first step in creating an uplink session; it uses the provided config, or the default config if the given config is nil.
type UploadOptions ¶
type UploadOptions struct {
	// ContentType, if set, gives a MIME content-type for the Object.
	ContentType string

	// Metadata contains additional information about an Object. It can
	// hold arbitrary textual fields and can be retrieved together with the
	// Object. Field names can be at most 1024 bytes long. Field values are
	// not individually limited in size, but the total of all metadata
	// (fields and values) can not exceed 4 kiB.
	Metadata map[string]string

	// Expires is the time at which the new Object can expire (be deleted
	// automatically from storage nodes).
	Expires time.Time

	// Volatile groups config values that are likely to change semantics
	// or go away entirely between releases. Be careful when using them!
	Volatile struct {
		// EncryptionParameters determines the cipher suite to use for
		// the Object's data encryption. If not set, the Bucket's
		// defaults will be used.
		EncryptionParameters storj.EncryptionParameters

		// RedundancyScheme determines the Reed-Solomon and/or Forward
		// Error Correction encoding parameters to be used for this
		// Object.
		RedundancyScheme storj.RedundancyScheme
	}
}
UploadOptions controls options about uploading a new Object, if authorized.
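A hedged sketch of uploading with explicit options (bucket, ctx, strings, and time are assumed; the key, metadata, and expiry values are illustrative):

opts := &uplink.UploadOptions{
	ContentType: "text/plain",
	Metadata: map[string]string{
		"source": "webserver-01", // hypothetical metadata field
	},
	Expires: time.Now().Add(30 * 24 * time.Hour), // auto-delete after roughly 30 days
}

err = bucket.UploadObject(ctx, "webserver/logs/log.txt",
	strings.NewReader("hello world"), opts)
if err != nil {
	return err
}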