s3store

Documentation

Overview

Package s3store provides a storage backend using AWS S3 or compatible servers.

Configuration

For this backend to function properly, the user accessing the bucket must have at least the following AWS IAM policy permissions for the bucket and all of its subresources (a sample policy is sketched after this list):

s3:AbortMultipartUpload
s3:DeleteObject
s3:GetObject
s3:ListMultipartUploadParts
s3:PutObject
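
For reference, these permissions correspond to an IAM policy along the following lines. This is a minimal sketch only: the bucket name is a placeholder (taken from the Bucket field documentation below), and your deployment may require additional statements.

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": [
				"s3:AbortMultipartUpload",
				"s3:DeleteObject",
				"s3:GetObject",
				"s3:ListMultipartUploadParts",
				"s3:PutObject"
			],
			"Resource": [
				"arn:aws:s3:::tusdstore.example.com",
				"arn:aws:s3:::tusdstore.example.com/*"
			]
		}
	]
}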

While this package uses the official AWS SDK for Go, S3Store is able to work with any S3-compatible service, such as MinIO. To change the HTTP endpoint that requests are sent to, adjust the `BaseEndpoint` option of the AWS SDK for Go v2 (https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3#Options).
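
As a sketch, an S3 client pointed at a local MinIO server might be constructed as follows. The endpoint URL and the use of path-style addressing are assumptions for a typical MinIO setup, not requirements of this package.

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// newMinioClient builds an *s3.Client that sends requests to a MinIO
// server instead of AWS S3 by overriding BaseEndpoint.
func newMinioClient(ctx context.Context) (*s3.Client, error) {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return nil, err
	}
	return s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("http://localhost:9000") // assumed local MinIO address
		o.UsePathStyle = true                                // MinIO typically serves buckets at path-style URLs
	}), nil
}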

Implementation

Once a new tus upload is initiated, multiple objects in S3 are created:

First of all, a new info object is stored which contains a JSON-encoded blob of general information about the upload, including its size and metadata. These objects have the suffix ".info" in their key.

In addition, a new multipart upload (http://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html) is created. Whenever a new chunk is uploaded to tusd using a PATCH request, a new part is pushed to the multipart upload on S3.

If metadata is associated with the upload during creation, it will be added to the multipart upload and, once the upload is finished, passed on to the final object. However, the metadata attached to the final object can only contain ASCII characters, and every non-ASCII character will be replaced by a question mark (for example, "Menü" becomes "Men?"). This does not apply to the metadata returned by the GetInfo function, since it relies on the info object for reading the metadata. Therefore, HEAD responses will always contain the unchanged metadata, Base64-encoded, even if it contains non-ASCII characters.

Once the upload is finished, the multipart upload is completed, resulting in the entire file being stored in the bucket. The info object, containing the metadata, is not deleted. It is recommended to copy the finished upload to another bucket to avoid it being deleted by the Termination extension.

If an upload is terminated, the multipart upload is aborted, which removes all of the uploaded parts from the bucket. In addition, the info object is also deleted. If the upload has already been finished, the finished object containing the entire upload is removed as well.

Considerations

To support tus' principle of resumable uploads, S3's multipart uploads are used internally.

When receiving a PATCH request, its body will be temporarily stored on disk. This is necessary to ensure the minimum size of a single part and to allow the AWS SDK to calculate a checksum. Once the part has been uploaded to S3, the temporary file will be removed immediately. Therefore, please ensure that the server running this storage backend has enough disk space available to hold these caches.
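
The TemporaryDirectory field on S3Store (see below) controls where these files are created. Judging by the TEMP_DIR_USE_MEMORY constant exported by this package, the buffers can apparently also be kept in memory instead of on disk; the following sketch assumes that reading of the constant's name.

store := s3store.New("my-bucket", s3Client) // s3Client satisfies S3API

// Keep temporary part files on a dedicated scratch volume ...
store.TemporaryDirectory = "/mnt/scratch/tusd"

// ... or, presumably, buffer parts in memory rather than on disk,
// trading disk usage for memory usage.
store.TemporaryDirectory = s3store.TEMP_DIR_USE_MEMORY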

In addition, it must be mentioned that AWS S3 only offers eventual consistency (https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel). Therefore, additional measures are required to prevent concurrent access to the same upload resources, which may otherwise result in data corruption. See handler.LockerDataStore for more information.
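
With tusd's handler package, this is typically done by registering a locker alongside the store in a handler.StoreComposer. A minimal sketch, assuming a memory-based locker such as tusd's memorylocker package (which is not part of this package):

composer := handler.NewStoreComposer()

store := s3store.New("my-bucket", s3Client)
store.UseIn(composer)

// Serialize access to each upload. A memory-based locker only protects
// a single process; multi-instance deployments need a shared locker.
locker := memorylocker.New()
locker.UseIn(composer)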

Index

Constants

const TEMP_DIR_USE_MEMORY = "_memory"

Variables

This section is empty.

Functions

This section is empty.

Types

type S3API

type S3API interface {
	PutObject(ctx context.Context, input *s3.PutObjectInput, opt ...func(*s3.Options)) (*s3.PutObjectOutput, error)
	ListParts(ctx context.Context, input *s3.ListPartsInput, opt ...func(*s3.Options)) (*s3.ListPartsOutput, error)
	UploadPart(ctx context.Context, input *s3.UploadPartInput, opt ...func(*s3.Options)) (*s3.UploadPartOutput, error)
	GetObject(ctx context.Context, input *s3.GetObjectInput, opt ...func(*s3.Options)) (*s3.GetObjectOutput, error)
	HeadObject(ctx context.Context, input *s3.HeadObjectInput, opt ...func(*s3.Options)) (*s3.HeadObjectOutput, error)
	CreateMultipartUpload(ctx context.Context, input *s3.CreateMultipartUploadInput, opt ...func(*s3.Options)) (*s3.CreateMultipartUploadOutput, error)
	AbortMultipartUpload(ctx context.Context, input *s3.AbortMultipartUploadInput, opt ...func(*s3.Options)) (*s3.AbortMultipartUploadOutput, error)
	DeleteObject(ctx context.Context, input *s3.DeleteObjectInput, opt ...func(*s3.Options)) (*s3.DeleteObjectOutput, error)
	DeleteObjects(ctx context.Context, input *s3.DeleteObjectsInput, opt ...func(*s3.Options)) (*s3.DeleteObjectsOutput, error)
	CompleteMultipartUpload(ctx context.Context, input *s3.CompleteMultipartUploadInput, opt ...func(*s3.Options)) (*s3.CompleteMultipartUploadOutput, error)
	UploadPartCopy(ctx context.Context, input *s3.UploadPartCopyInput, opt ...func(*s3.Options)) (*s3.UploadPartCopyOutput, error)
}
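
The method set mirrors *s3.Client from the AWS SDK for Go v2, so the real client satisfies S3API directly, and the interface can be substituted with a mock in tests. A compile-time assertion makes this explicit:

// Verify at compile time that the AWS SDK's client implements S3API.
var _ s3store.S3API = (*s3.Client)(nil)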

type S3Store

type S3Store struct {
	// Bucket used to store the data in, e.g. "tusdstore.example.com"
	Bucket string
	// ObjectPrefix is prepended to the name of each S3 object that is created
	// to store uploaded files. It can be used to create a pseudo-directory
	// structure in the bucket, e.g. "path/to/my/uploads".
	ObjectPrefix string
	// MetadataObjectPrefix is prepended to the name of each .info and .part S3
	// object that is created. If it is not set, then ObjectPrefix is used.
	MetadataObjectPrefix string
	// Service specifies an interface used to communicate with the S3 backend.
	// Usually, this is an instance of github.com/aws/aws-sdk-go-v2/service/s3.Client
	// (https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3#Client).
	Service S3API
	// MaxPartSize specifies the maximum size of a single part uploaded to S3
	// in bytes. This value must be bigger than MinPartSize! In order to
	// choose the correct number, two things have to be kept in mind:
	//
	// If this value is too big and uploading the part to S3 is interrupted
	// unexpectedly, the entire part is discarded and the end user is required
	// to resume the upload and re-upload the entire big part. In addition, the
	// entire part must be written to disk before submitting to S3.
	//
	// If this value is too low, a lot of requests to S3 may be made, depending
	// on how fast data is coming in. This may result in noticeable overhead.
	MaxPartSize int64
	// MinPartSize specifies the minimum size of a single part uploaded to S3
	// in bytes. This number needs to match the underlying S3 backend or else
	// uploaded parts will be rejected. AWS S3, for example, uses 5MB for this value.
	MinPartSize int64
	// PreferredPartSize specifies the preferred size of a single part uploaded to
	// S3. S3Store will attempt to slice the incoming data into parts with this
	// size whenever possible. In some cases, smaller parts are necessary, so
	// not every part may reach this value. The PreferredPartSize must be inside the
	// range of MinPartSize to MaxPartSize.
	PreferredPartSize int64
	// MaxMultipartParts is the maximum number of parts an S3 multipart upload is
	// allowed to have according to AWS S3 API specifications.
	// See: http://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html
	MaxMultipartParts int64
	// MaxObjectSize is the maximum size an S3 Object can have according to S3
	// API specifications. See link above.
	MaxObjectSize int64
	// MaxBufferedParts is the number of additional parts that can be received from
	// the client and stored on disk while a part is being uploaded to S3. This
	// can help improve throughput by not blocking the client while tusd is
	// communicating with the S3 API, which can have unpredictable latency.
	MaxBufferedParts int64
	// TemporaryDirectory is the path where S3Store will create temporary files
	// on disk during the upload. An empty string ("", the default value) will
	// cause S3Store to use the operating system's default temporary directory.
	TemporaryDirectory string
	// DisableContentHashes instructs the S3Store to not calculate the MD5 and SHA256
	// hashes when uploading data to S3. These hashes are used for file integrity checks
	// and for authentication. However, these hashes also consume a significant amount of
	// CPU, so it might be desirable to disable them.
	// Note that this property is experimental and might be removed in the future!
	DisableContentHashes bool
	// contains filtered or unexported fields
}

See the handler.DataStore interface for documentation about the different methods.
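
As an illustrative sketch of tuning the store after construction with New (all values below are assumptions for the example, not recommendations):

store := s3store.New("my-bucket", s3Client)

// PreferredPartSize must lie between MinPartSize and MaxPartSize.
store.MinPartSize = 5 * 1024 * 1024        // AWS S3's minimum part size
store.PreferredPartSize = 8 * 1024 * 1024  // slice incoming data into ~8 MB parts
store.MaxPartSize = 5 * 1024 * 1024 * 1024 // upper bound for a single part

// Keep uploaded files and their .info objects under separate prefixes.
store.ObjectPrefix = "uploads"
store.MetadataObjectPrefix = "uploads-meta"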

func New

func New(bucket string, service S3API) S3Store

New constructs a new storage using the supplied bucket and service object.
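
A minimal end-to-end sketch wiring the store into a tusd handler. The bucket name, base path, and port are placeholders, and the example assumes tusd's handler package (or this module's equivalent); the import path of this s3store package depends on its module and is omitted here.

package main

import (
	"context"
	"log"
	"net/http"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/tus/tusd/v2/pkg/handler" // or this module's handler package
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	store := s3store.New("tusdstore.example.com", s3.NewFromConfig(cfg))

	composer := handler.NewStoreComposer()
	store.UseIn(composer)

	h, err := handler.NewHandler(handler.Config{
		BasePath:      "/files/",
		StoreComposer: composer,
	})
	if err != nil {
		log.Fatal(err)
	}

	http.Handle("/files/", http.StripPrefix("/files/", h))
	log.Fatal(http.ListenAndServe(":8080", nil))
}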

func (S3Store) AsConcatableUpload

func (store S3Store) AsConcatableUpload(upload handler.Upload) handler.ConcatableUpload

func (S3Store) AsLengthDeclarableUpload

func (store S3Store) AsLengthDeclarableUpload(upload handler.Upload) handler.LengthDeclarableUpload

func (S3Store) AsTerminatableUpload

func (store S3Store) AsTerminatableUpload(upload handler.Upload) handler.TerminatableUpload

func (S3Store) GetUpload

func (store S3Store) GetUpload(ctx context.Context, id string) (handler.Upload, error)

func (S3Store) NewUpload

func (store S3Store) NewUpload(ctx context.Context, info handler.FileInfo) (handler.Upload, error)

func (S3Store) RegisterMetrics

func (store S3Store) RegisterMetrics(registry prometheus.Registerer)

func (*S3Store) SetConcurrentPartUploads

func (store *S3Store) SetConcurrentPartUploads(limit int)

SetConcurrentPartUploads changes the limit on how many concurrent part uploads to S3 are allowed.
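
For example (the value is illustrative; higher limits can improve throughput at the cost of more memory and temporary disk usage):

store := s3store.New("my-bucket", s3Client)
store.SetConcurrentPartUploads(8) // allow up to 8 parts to be uploaded to S3 in parallel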

func (S3Store) UseIn

func (store S3Store) UseIn(composer *handler.StoreComposer)

UseIn sets this store as the core data store in the passed composer and adds all possible extensions to it.
