package swift
v0.0.0-...-1dc401f
Warning: This package is not in the latest version of its module.

Published: May 27, 2024 License: MIT Imports: 29 Imported by: 0

Documentation

Overview

Package swift provides an interface to the Swift object storage system

Constants

This section is empty.

Variables

var SharedOptions = []fs.Option{{
	Name: "chunk_size",
	Help: strings.ReplaceAll(`Above this size files will be chunked.

Above this size files will be chunked into a |`+segmentsContainerSuffix+`| container
or a |`+segmentsDirectory+`| directory. (See the |use_segments_container| option
for more info). The default for this is 5 GiB, which is also its maximum
value, so only files above this size will be chunked.

Rclone uploads chunked files as dynamic large objects (DLO).
`, "|", "`"),
	Default:  defaultChunkSize,
	Advanced: true,
}, {
	Name: "no_chunk",
	Help: strings.ReplaceAll(`Don't chunk files during streaming upload.

When doing streaming uploads (e.g. using |rcat| or |mount| with
|--vfs-cache-mode off|) setting this flag will cause the swift backend
to not upload chunked files.

This will limit the maximum streamed upload size to 5 GiB. This is
useful because non chunked files are easier to deal with and have an
MD5SUM.

Rclone will still chunk files bigger than |chunk_size| when doing
normal copy operations.`, "|", "`"),
	Default:  false,
	Advanced: true,
}, {
	Name: "no_large_objects",
	Help: strings.ReplaceAll(`Disable support for static and dynamic large objects

Swift cannot transparently store files bigger than 5 GiB. There are
two schemes for chunking large files, static large objects (SLO) or
dynamic large objects (DLO), and the API does not allow rclone to
determine whether a file is a static or dynamic large object without
doing a HEAD on the object. Since these need to be treated
differently, this means rclone has to issue HEAD requests for objects
for example when reading checksums.

When |no_large_objects| is set, rclone will assume that there are no
static or dynamic large objects stored. This means it can stop doing
the extra HEAD calls which in turn increases performance greatly
especially when doing a swift to swift transfer with |--checksum| set.

Setting this option implies |no_chunk| and also that no files will be
uploaded in chunks, so files bigger than 5 GiB will just fail on
upload.

If you set this option and there **are** static or dynamic large objects,
then this will give incorrect hashes for them. Downloads will succeed,
but other operations such as Remove and Copy will fail.
`, "|", "`"),
	Default:  false,
	Advanced: true,
}, {
	Name: "use_segments_container",
	Help: strings.ReplaceAll(`Choose destination for large object segments

Swift cannot transparently store files bigger than 5 GiB and rclone
will chunk files larger than |chunk_size| (default 5 GiB) in order to
upload them.

If this value is |true| the chunks will be stored in an additional
container named the same as the destination container but with
|`+segmentsContainerSuffix+`| appended. This means that there won't be any duplicated
data in the original container but having another container may not be
acceptable.

If this value is |false| the chunks will be stored in a
|`+segmentsDirectory+`| directory in the root of the container. This
directory will be omitted when listing the container. Some
providers (e.g. Blomp) require this mode as creating additional
containers isn't allowed. If it is desired to see the |`+segmentsDirectory+`|
directory in the root then this flag must be set to |true|.

If this value is |unset| (the default), then rclone will choose the value
to use. It will be |false| unless rclone detects any |auth_url|s that
it knows need it to be |true|. In this case you'll see a message in
the DEBUG log.
`, "|", "`"),
	Default:  fs.Tristate{},
	Advanced: true,
}, {
	Name:     config.ConfigEncoding,
	Help:     config.ConfigEncodingHelp,
	Advanced: true,
	Default: (encoder.EncodeInvalidUtf8 |
		encoder.EncodeSlash),
}}

SharedOptions are shared between swift and backends which depend on swift
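
For example, a backend layered on top of swift can append SharedOptions to its own option list when it registers itself. The sketch below is illustrative only; the backend name "myswift2" and the "extra_setting" option are hypothetical.

package myswift2

import (
	"context"

	"github.com/rclone/rclone/backend/swift"
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
)

func init() {
	fs.Register(&fs.RegInfo{
		Name:        "myswift2",
		Description: "Hypothetical backend built on top of swift",
		NewFs:       NewFs,
		// Reuse the shared swift options alongside backend-specific ones.
		Options: append([]fs.Option{{
			Name: "extra_setting",
			Help: "A backend-specific option (placeholder).",
		}}, swift.SharedOptions...),
	})
}

// NewFs would normally delegate to swift.NewFsWithConnection; stubbed here.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
	return nil, fs.ErrorNotImplemented
}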

Functions

func NewFs

func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error)

NewFs constructs an Fs from the path, container:path
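
A minimal sketch of calling NewFs directly with a configmap.Simple. The credentials, auth URL, and container path below are placeholders; in normal use rclone builds the Mapper from its config file.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/rclone/rclone/backend/swift"
	"github.com/rclone/rclone/fs/config/configmap"
)

func main() {
	ctx := context.Background()

	// Placeholder credentials; real values come from rclone's config.
	m := configmap.Simple{
		"user": "myuser",
		"key":  "mysecret",
		"auth": "https://auth.example.com/v3",
	}

	f, err := swift.NewFs(ctx, "myswift", "mycontainer/path", m)
	if err != nil {
		log.Fatal(err)
	}

	// List the root of the remote.
	entries, err := f.List(ctx, "")
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range entries {
		fmt.Println(entry.Remote())
	}
}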

func NewFsWithConnection

func NewFsWithConnection(ctx context.Context, opt *Options, name, root string, c *swift.Connection, noCheckContainer bool) (fs.Fs, error)

NewFsWithConnection constructs an Fs from the path, container:path and authenticated connection.

If noCheckContainer is set then the Fs won't check that the container exists before creating it.
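
A sketch of constructing an Fs from an already-authenticated connection, assuming the underlying ncw/swift/v2 client. The credentials and option values are placeholders, and most Options fields are left at their zero values for brevity.

package main

import (
	"context"
	"log"

	swiftlib "github.com/ncw/swift/v2"
	"github.com/rclone/rclone/backend/swift"
)

func main() {
	ctx := context.Background()

	// Authenticate a connection up front (placeholder credentials).
	c := &swiftlib.Connection{
		UserName: "myuser",
		ApiKey:   "mysecret",
		AuthUrl:  "https://auth.example.com/v3",
	}
	if err := c.Authenticate(ctx); err != nil {
		log.Fatal(err)
	}

	// Only a couple of fields shown; rclone normally fills Options from config.
	opt := &swift.Options{
		ChunkSize: 5 * 1024 * 1024 * 1024, // 5 GiB, the documented default
	}

	// noCheckContainer=false: verify the container exists before creating it.
	f, err := swift.NewFsWithConnection(ctx, opt, "myswift", "mycontainer", c, false)
	if err != nil {
		log.Fatal(err)
	}
	_ = f
}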

Types

type Fs

type Fs struct {
	// contains filtered or unexported fields
}

Fs represents a remote swift server

func (*Fs) About

func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error)

About gets quota information

func (*Fs) Copy

func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error)

Copy src to this remote using server-side copy operations.

This is stored with the remote path given.

It returns the destination Object and a possible error.

Will only be called if src.Fs().Name() == f.Name()

If it isn't possible then return fs.ErrorCantCopy
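
As a sketch, a server-side copy within the same remote can go through the generic Features lookup, which returns nil if the backend cannot copy:

package example

import (
	"context"

	"github.com/rclone/rclone/fs"
)

// copyWithinRemote server-side copies srcRemote to dstRemote on the same Fs.
func copyWithinRemote(ctx context.Context, f fs.Fs, srcRemote, dstRemote string) (fs.Object, error) {
	src, err := f.NewObject(ctx, srcRemote)
	if err != nil {
		return nil, err
	}
	doCopy := f.Features().Copy
	if doCopy == nil {
		return nil, fs.ErrorCantCopy
	}
	return doCopy(ctx, src, dstRemote)
}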

func (*Fs) Features

func (f *Fs) Features() *fs.Features

Features returns the optional features of this Fs

func (*Fs) Hashes

func (f *Fs) Hashes() hash.Set

Hashes returns the supported hash sets.

func (*Fs) List

func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error)

List the objects and directories in dir into entries. The entries can be returned in any order but should be for a complete directory.

dir should be "" to list the root, and should not have trailing slashes.

This should return ErrDirNotFound if the directory isn't found.

func (*Fs) ListR

func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error)

ListR lists the objects and directories of the Fs starting from dir recursively, passing them to the callback.

dir should be "" to start from the root, and should not have trailing slashes.

This should return ErrDirNotFound if the directory isn't found.

It should call callback for each tranche of entries read. These need not be returned in any particular order. If callback returns an error then the listing will stop immediately.

Don't implement this unless you have a more efficient way of listing recursively than doing a directory traversal.
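
A sketch of driving ListR through the fs.ListRer interface; the callback below just counts entries, and returning an error from it would stop the listing early.

package example

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs"
)

// countRecursive counts every entry under the root of f using ListR.
func countRecursive(ctx context.Context, f fs.Fs) (int, error) {
	lister, ok := f.(fs.ListRer)
	if !ok {
		return 0, fmt.Errorf("%v does not support ListR", f)
	}
	n := 0
	err := lister.ListR(ctx, "", func(entries fs.DirEntries) error {
		n += len(entries)
		return nil
	})
	return n, err
}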

func (*Fs) Mkdir

func (f *Fs) Mkdir(ctx context.Context, dir string) error

Mkdir creates the container if it doesn't exist

func (*Fs) Name

func (f *Fs) Name() string

Name of the remote (as passed into NewFs)

func (*Fs) NewObject

func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error)

NewObject finds the Object at remote. If it can't be found it returns the error fs.ErrorObjectNotFound.

func (*Fs) Precision

func (f *Fs) Precision() time.Duration

Precision of the remote

func (*Fs) Purge

func (f *Fs) Purge(ctx context.Context, dir string) error

Purge deletes all the files in the directory

Implemented here so we can make sure we delete directory markers

func (*Fs) Put

func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error)

Put the object into the container

Copy the reader in to the new object which is returned.

The new object may have been created if an error is returned
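
A sketch of uploading a small in-memory payload with Put, describing it via object.NewStaticObjectInfo; the remote name and contents are whatever the caller supplies.

package example

import (
	"bytes"
	"context"
	"time"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/object"
)

// putBytes uploads buf to remote on f. The size is known up front so Put
// can be used; PutStream is for content of unknown size.
func putBytes(ctx context.Context, f fs.Fs, remote string, buf []byte) (fs.Object, error) {
	src := object.NewStaticObjectInfo(remote, time.Now(), int64(len(buf)), true, nil, f)
	return f.Put(ctx, bytes.NewReader(buf), src)
}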

func (*Fs) PutStream

func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error)

PutStream uploads to the remote path with the modTime given, for content of indeterminate size

func (*Fs) Rmdir

func (f *Fs) Rmdir(ctx context.Context, dir string) error

Rmdir deletes the container if the fs is at the root

Returns an error if it isn't empty

func (*Fs) Root

func (f *Fs) Root() string

Root of the remote (as passed into NewFs)

func (*Fs) String

func (f *Fs) String() string

String converts this Fs to a string

type Object

type Object struct {
	// contains filtered or unexported fields
}

Object describes a swift object

Will definitely have info but maybe not meta

func (*Object) Fs

func (o *Object) Fs() fs.Info

Fs returns the parent Fs

func (*Object) Hash

func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error)

Hash returns the MD5 sum of an object as a lowercase hex string
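
For example, fetching the MD5 might look like this sketch; an empty string means the hash is not available for that object.

package example

import (
	"context"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/hash"
)

// md5Of returns the lowercase hex MD5 of the object at remote, or "" if
// the backend cannot supply one.
func md5Of(ctx context.Context, f fs.Fs, remote string) (string, error) {
	o, err := f.NewObject(ctx, remote)
	if err != nil {
		return "", err
	}
	return o.Hash(ctx, hash.MD5)
}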

func (*Object) MimeType

func (o *Object) MimeType(ctx context.Context) string

MimeType of an Object if known, "" otherwise

func (*Object) ModTime

func (o *Object) ModTime(ctx context.Context) time.Time

ModTime returns the modification time of the object

It attempts to read the object's mtime and, if that isn't present, the LastModified returned in the HTTP headers.

func (*Object) Open

func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error)

Open an object for read
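
A sketch of reading just the first n bytes of an object by passing an fs.RangeOption to Open:

package example

import (
	"context"
	"io"

	"github.com/rclone/rclone/fs"
)

// readPrefix reads the first n bytes of the object at remote via a range request.
func readPrefix(ctx context.Context, f fs.Fs, remote string, n int64) ([]byte, error) {
	o, err := f.NewObject(ctx, remote)
	if err != nil {
		return nil, err
	}
	in, err := o.Open(ctx, &fs.RangeOption{Start: 0, End: n - 1})
	if err != nil {
		return nil, err
	}
	defer func() { _ = in.Close() }()
	return io.ReadAll(in)
}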

func (*Object) Remote

func (o *Object) Remote() string

Remote returns the remote path

func (*Object) Remove

func (o *Object) Remove(ctx context.Context) (err error)

Remove an object

func (*Object) SetModTime

func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error

SetModTime sets the modification time of the local fs object

func (*Object) Size

func (o *Object) Size() int64

Size returns the size of an object in bytes

func (*Object) Storable

func (o *Object) Storable() bool

Storable returns if this object is storable

It compares the Content-Type to directoryMarkerContentType; a match makes it a directory marker, which is not storable.

func (*Object) String

func (o *Object) String() string

Return a string version

func (*Object) Update

func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error

Update the object with the contents of the io.Reader, modTime and size

The new object may have been created if an error is returned

type Options

type Options struct {
	EnvAuth                     bool                 `config:"env_auth"`
	User                        string               `config:"user"`
	Key                         string               `config:"key"`
	Auth                        string               `config:"auth"`
	UserID                      string               `config:"user_id"`
	Domain                      string               `config:"domain"`
	Tenant                      string               `config:"tenant"`
	TenantID                    string               `config:"tenant_id"`
	TenantDomain                string               `config:"tenant_domain"`
	Region                      string               `config:"region"`
	StorageURL                  string               `config:"storage_url"`
	AuthToken                   string               `config:"auth_token"`
	AuthVersion                 int                  `config:"auth_version"`
	ApplicationCredentialID     string               `config:"application_credential_id"`
	ApplicationCredentialName   string               `config:"application_credential_name"`
	ApplicationCredentialSecret string               `config:"application_credential_secret"`
	LeavePartsOnError           bool                 `config:"leave_parts_on_error"`
	StoragePolicy               string               `config:"storage_policy"`
	EndpointType                string               `config:"endpoint_type"`
	ChunkSize                   fs.SizeSuffix        `config:"chunk_size"`
	NoChunk                     bool                 `config:"no_chunk"`
	NoLargeObjects              bool                 `config:"no_large_objects"`
	UseSegmentsContainer        fs.Tristate          `config:"use_segments_container"`
	Enc                         encoder.MultiEncoder `config:"encoding"`
}

Options defines the configuration for this backend
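
The config-tagged fields are normally decoded from the flat key/value config by rclone's configstruct helper, roughly as in this sketch:

package example

import (
	"github.com/rclone/rclone/backend/swift"
	"github.com/rclone/rclone/fs/config/configmap"
	"github.com/rclone/rclone/fs/config/configstruct"
)

// parseOptions fills Options from a flat key/value map using the
// config:"..." struct tags, the same mechanism NewFs relies on.
func parseOptions(m configmap.Mapper) (*swift.Options, error) {
	opt := new(swift.Options)
	if err := configstruct.Set(m, opt); err != nil {
		return nil, err
	}
	return opt, nil
}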
