Documentation ¶
Index ¶
- type Backend
- func (b *Backend) Delete(repoKey, path string) error
- func (b *Backend) Finalise(repoKey string, cacheID int, parts []s.CachePart) (string, error)
- func (b *Backend) GenerateArchiveURL(scheme, host, repoKey, path string) (string, error)
- func (b *Backend) GetFilePath(key string) (string, error)
- func (b *Backend) Setup() error
- func (b *Backend) Type() string
- func (b *Backend) Write(repoKey string, cacheID int, r io.Reader, start, end int, size int64) (string, int64, error)
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Backend ¶
type Backend struct {
	BucketURL string
	Session   *session.Session
	Client    *s3.S3
	// contains filtered or unexported fields
}
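A minimal sketch of wiring up a Backend, assuming aws-sdk-go v1. The import path of this package is not shown on this page, so Backend appears unqualified; the region and bucket URL are illustrative values, and it is an assumption that Setup populates the unexported fields after the exported ones are set by hand.

package example

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func newBackend() *Backend {
	// NewSession reads credentials and shared config from the environment.
	sess, err := session.NewSession(&aws.Config{Region: aws.String("us-east-1")})
	if err != nil {
		log.Fatal(err)
	}
	b := &Backend{
		BucketURL: "s3://example-cache-bucket", // illustrative
		Session:   sess,
		Client:    s3.New(sess),
	}
	// Setup is assumed to initialise the filtered/unexported fields.
	if err := b.Setup(); err != nil {
		log.Fatal(err)
	}
	return b
}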
func (*Backend) GenerateArchiveURL ¶
func (b *Backend) GenerateArchiveURL(scheme, host, repoKey, path string) (string, error)
func (*Backend) Write ¶
func (b *Backend) Write(repoKey string, cacheID int, r io.Reader, start, end int, size int64) (string, int64, error)
Write uploads a part of a file to S3.

S3 provides UploadPartCopy, which lets a multipart upload assemble its parts from existing objects. Each chunk can therefore be uploaded as a UUID-named object whose name is stored in a database together with its start and end offsets, so at finalisation time we know the order in which to concatenate the files.

The reason a regular multipart upload is not used directly is that chunks are uploaded in parallel, so a later chunk can arrive before an earlier one. Multipart upload parts require a part number from 1 to 10000 and are assembled in sorted order; since the data does not arrive in order, part numbers cannot be assigned reliably at upload time.
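As a hedged illustration of the finalisation scheme described above: the sketch below orders the recorded chunks by start offset, then assembles the final object with CreateMultipartUpload, UploadPartCopy and CompleteMultipartUpload from aws-sdk-go v1. The cachePart type, the finalise function, and names such as bucket and finalKey are hypothetical stand-ins for this package's actual types (compare Finalise and s.CachePart in the index above).

package example

import (
	"fmt"
	"sort"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// cachePart is a hypothetical stand-in for the part records kept in the db.
type cachePart struct {
	Key        string // UUID-named chunk object in the bucket
	Start, End int    // byte range recorded when the chunk was written
}

func finalise(client *s3.S3, bucket, finalKey string, parts []cachePart) error {
	// Order parts by their recorded start offset. This is the step that plain
	// multipart part numbers cannot provide, since chunks arrive out of order.
	sort.Slice(parts, func(i, j int) bool { return parts[i].Start < parts[j].Start })

	mpu, err := client.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(finalKey),
	})
	if err != nil {
		return err
	}

	var completed []*s3.CompletedPart
	for i, p := range parts {
		// UploadPartCopy builds the multipart upload from existing objects,
		// so chunks are never re-uploaded. Note S3 requires every part except
		// the last to be at least 5 MiB.
		out, err := client.UploadPartCopy(&s3.UploadPartCopyInput{
			Bucket:     aws.String(bucket),
			Key:        aws.String(finalKey),
			CopySource: aws.String(fmt.Sprintf("%s/%s", bucket, p.Key)),
			PartNumber: aws.Int64(int64(i + 1)), // 1-10000, now safely sequential
			UploadId:   mpu.UploadId,
		})
		if err != nil {
			return err
		}
		completed = append(completed, &s3.CompletedPart{
			ETag:       out.CopyPartResult.ETag,
			PartNumber: aws.Int64(int64(i + 1)),
		})
	}

	_, err = client.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
		Bucket:          aws.String(bucket),
		Key:             aws.String(finalKey),
		UploadId:        mpu.UploadId,
		MultipartUpload: &s3.CompletedMultipartUpload{Parts: completed},
	})
	return err
}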