streams

package
v1.13.1
Published: Aug 14, 2024 License: MIT Imports: 28 Imported by: 1

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func DisableDeleteOnCancel added in v1.4.3

func DisableDeleteOnCancel(ctx context.Context) context.Context

DisableDeleteOnCancel is now a no-op.

Types

type EOFReader

type EOFReader struct {
	// contains filtered or unexported fields
}

EOFReader holds reader and status of EOF.

func NewEOFReader

func NewEOFReader(r io.Reader) *EOFReader

NewEOFReader returns an EOFReader that keeps track of whether the internal reader has reached EOF.

func (*EOFReader) HasError added in v1.4.6

func (r *EOFReader) HasError() bool

HasError returns true if an error was returned during reading.

func (*EOFReader) IsEOF added in v1.4.6

func (r *EOFReader) IsEOF() bool

IsEOF returns true if EOF was returned during reading.

func (*EOFReader) Read

func (r *EOFReader) Read(p []byte) (n int, err error)
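A minimal usage sketch for EOFReader, distinguishing a clean EOF from a read error. The package import path is an assumption; only the calls documented here are used.

package main

import (
	"fmt"
	"io"
	"strings"

	"storj.io/uplink/private/storage/streams" // assumed import path
)

func main() {
	r := streams.NewEOFReader(strings.NewReader("hello"))

	// Drain the reader; EOFReader records whether EOF or an error was seen.
	if _, err := io.Copy(io.Discard, r); err != nil {
		fmt.Println("read failed:", err)
	}

	fmt.Println("reached EOF:", r.IsEOF())     // true after the stream is fully read
	fmt.Println("saw an error:", r.HasError()) // false for a clean EOF
}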

type Meta

type Meta struct {
	Modified   time.Time
	Expiration time.Time
	Size       int64
	Data       []byte
	Version    []byte
	Retention  *metaclient.Retention
}

Meta info about a stream.

type Metadata

type Metadata interface {
	Metadata() ([]byte, error)
}

Metadata interface returns the latest metadata for an object.
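As a minimal sketch, any type with a Metadata() ([]byte, error) method satisfies this interface; staticMetadata below is a hypothetical helper, not part of the package.

// staticMetadata is a hypothetical Metadata implementation that returns a
// fixed, already-serialized metadata payload.
type staticMetadata struct {
	data []byte
}

// Metadata returns the stored payload, satisfying the Metadata interface.
func (m staticMetadata) Metadata() ([]byte, error) {
	return m.data, nil
}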

type MetainfoUpload added in v1.11.0

type MetainfoUpload interface {
	metaclient.Batcher
	RetryBeginSegmentPieces(ctx context.Context, params metaclient.RetryBeginSegmentPiecesParams) (metaclient.RetryBeginSegmentPiecesResponse, error)
	io.Closer
}

MetainfoUpload are the metainfo methods needed to upload a stream.

type Part added in v1.6.0

type Part struct {
	PartNumber uint32
	Size       int64
	Modified   time.Time
	ETag       []byte
}

Part info about a part.

type PeekThresholdReader added in v1.0.6

type PeekThresholdReader struct {
	// contains filtered or unexported fields
}

PeekThresholdReader allows checking whether the size of a given reader exceeds the maximum inline segment size.

func NewPeekThresholdReader added in v1.0.6

func NewPeekThresholdReader(r io.Reader) (pt *PeekThresholdReader)

NewPeekThresholdReader creates a new instance of PeekThresholdReader.

func (*PeekThresholdReader) IsLargerThan added in v1.0.6

func (pt *PeekThresholdReader) IsLargerThan(thresholdSize int) (bool, error)

IsLargerThan reports whether the reader's size is larger than the given threshold.

func (*PeekThresholdReader) Read added in v1.0.6

func (pt *PeekThresholdReader) Read(p []byte) (n int, err error)

Read initially reads bytes from the internal buffer, then continues reading from the wrapped data reader. The number of bytes read `n` is returned.
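A minimal sketch of choosing between an inline and a remote upload path with PeekThresholdReader. The threshold value and the uploadInline/uploadRemote helpers are hypothetical; the streams and io imports are assumed.

func chooseUploadPath(source io.Reader) error {
	const maxInlineSize = 4096 // assumed inline segment threshold

	pt := streams.NewPeekThresholdReader(source)
	larger, err := pt.IsLargerThan(maxInlineSize)
	if err != nil {
		return err
	}

	// The peeked bytes are replayed by Read, so the whole stream is still available.
	if larger {
		return uploadRemote(pt) // hypothetical helper for remote segments
	}
	return uploadInline(pt) // hypothetical helper for inline segments
}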

type SizedReader added in v1.2.0

type SizedReader struct {
	// contains filtered or unexported fields
}

SizedReader allows checking the total number of bytes read so far.

func SizeReader

func SizeReader(r io.Reader) *SizedReader

SizeReader creates a new instance of SizedReader.

func (*SizedReader) Read added in v1.2.0

func (r *SizedReader) Read(p []byte) (n int, err error)

Read implements io.Reader.Read.

func (*SizedReader) Size added in v1.2.0

func (r *SizedReader) Size() int64

Size returns the total number of bytes read so far.
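A short sketch of SizedReader counting the bytes that pass through it; reportSize is a hypothetical helper, and the streams, io, fmt, and strings imports are assumed.

func reportSize(data string) {
	sr := streams.SizeReader(strings.NewReader(data))

	// Consume the stream; SizedReader counts every byte returned by Read.
	if _, err := io.Copy(io.Discard, sr); err != nil {
		fmt.Println("read failed:", err)
		return
	}

	fmt.Println("bytes read:", sr.Size())
}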

type Store

type Store struct {
	*Uploader
	// contains filtered or unexported fields
}

Store is a store for streams. It implements typedStore as part of an ongoing migration to use typed paths. See the shim for the store that the rest of the world interacts with.

func NewStreamStore

func NewStreamStore(metainfo *metaclient.Client, ec ecclient.Client, segmentSize int64, encStore *encryption.Store, encryptionParameters storj.EncryptionParameters, inlineThreshold, longTailMargin int) (*Store, error)

NewStreamStore constructs a stream store.

func (*Store) Close added in v1.4.0

func (s *Store) Close() error

Close closes the underlying resources passed to the metainfo DB.

func (*Store) Get

func (s *Store) Get(ctx context.Context, bucket, unencryptedKey string, info metaclient.DownloadInfo, nextSegmentErrorDetection bool) (rr ranger.Ranger, err error)

Get returns a ranger that knows what the overall size is (from l/<key>) and then returns the appropriate data from segments s0/<key>, s1/<key>, ..., l/<key>.

func (*Store) Put

func (s *Store) Put(ctx context.Context, bucket, unencryptedKey string, data io.Reader, metadata Metadata, opts *metaclient.UploadOptions) (_ Meta, err error)

Put breaks up incoming data into pieces of s.segmentSize length, then stores the first piece at s0/<key>, the second piece at s1/<key>, and the *last* piece at l/<key>. It stores the given metadata, along with the number of segments, in a new protobuf in the metadata of l/<key>.

If there is an error, it cleans up any uploaded segment before returning.
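A minimal sketch of calling Put, assuming s, metadata, and opts were constructed elsewhere; the metaclient import path is an assumption.

func putObject(ctx context.Context, s *streams.Store, bucket, key string,
	data io.Reader, metadata streams.Metadata, opts *metaclient.UploadOptions) (streams.Meta, error) {

	// Put splits data into segments of s.segmentSize and uploads them,
	// cleaning up any uploaded segments if an error occurs.
	return s.Put(ctx, bucket, key, data, metadata, opts)
}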

func (*Store) PutPart added in v1.6.0

func (s *Store) PutPart(ctx context.Context, bucket, unencryptedKey string, streamID storj.StreamID, partNumber uint32, eTagCh <-chan []byte, data io.Reader) (_ Part, err error)

PutPart uploads a single part.

func (*Store) Ranger added in v1.4.0

func (s *Store) Ranger(ctx context.Context, response metaclient.DownloadSegmentWithRSResponse, errorDetection bool) (rr ranger.Ranger, err error)

Ranger creates a ranger for downloading erasure codes from piece store nodes.

type Upload added in v1.11.0

type Upload struct {
	// contains filtered or unexported fields
}

Upload represents an object or part upload and is returned by PutWriter or PutWriterPart. Data to be uploaded is written using the Write call. Either Commit or Abort must be called to either complete the upload or otherwise free up resources related to the upload.

func (*Upload) Abort added in v1.11.0

func (u *Upload) Abort() error

Abort aborts the upload. If called more than once, or after Commit, it will return an error.

func (*Upload) Commit added in v1.11.0

func (u *Upload) Commit() error

Commit commits the upload and must be called for the upload to complete successfully. It should only be called after all of the data has been written.

func (*Upload) Meta added in v1.11.0

func (u *Upload) Meta() *Meta

Meta returns the upload metadata. It should only be called after a successful Commit and will return nil otherwise.

func (*Upload) Write added in v1.11.0

func (u *Upload) Write(p []byte) (int, error)

Write uploads the object or part data.
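A minimal sketch of the Write/Commit/Abort lifecycle, assuming up was returned by UploadObject or UploadPart (see Uploader below) and that the streams and io imports are in place.

func writeAndCommit(up *streams.Upload, data io.Reader) (*streams.Meta, error) {
	// Upload's Write method satisfies io.Writer, so io.Copy can stream the data in.
	if _, err := io.Copy(up, data); err != nil {
		_ = up.Abort() // release resources held by the failed upload
		return nil, err
	}
	if err := up.Commit(); err != nil {
		return nil, err
	}
	return up.Meta(), nil // non-nil only after a successful Commit
}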

type Uploader added in v1.11.0

type Uploader struct {
	// contains filtered or unexported fields
}

Uploader uploads object or part streams.

func NewUploader added in v1.11.0

func NewUploader(metainfo MetainfoUpload, piecePutter pieceupload.PiecePutter, segmentSize int64, encStore *encryption.Store, encryptionParameters storj.EncryptionParameters, inlineThreshold, longTailMargin int) (*Uploader, error)

NewUploader constructs a new stream putter.

func (*Uploader) Close added in v1.11.0

func (u *Uploader) Close() error

Close closes the underlying resources for the uploader.

func (*Uploader) UploadObject added in v1.11.0

func (u *Uploader) UploadObject(ctx context.Context, bucket, unencryptedKey string, metadata Metadata, sched segmentupload.Scheduler, opts *metaclient.UploadOptions) (_ *Upload, err error)

UploadObject starts an upload of an object to the given location. The object contents can be written to the returned upload, which can then be committed.

func (*Uploader) UploadPart added in v1.11.0

func (u *Uploader) UploadPart(ctx context.Context, bucket, unencryptedKey string, streamID storj.StreamID, partNumber int32, eTag <-chan []byte, sched segmentupload.Scheduler) (_ *Upload, err error)

UploadPart starts an upload of a part to the given location for the given multipart upload stream. The eTag channel is optional and is used to provide the eTag to be encrypted and included in the final segment of the part. The eTag should be sent on the channel only after the contents of the part have been fully written to the returned upload, but before calling Commit.
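A minimal sketch of the eTag protocol described above: write the part contents, send the eTag on the channel, then Commit. The segmentupload and storj import paths, and how the scheduler is obtained, are assumptions.

func uploadPartWithETag(ctx context.Context, u *streams.Uploader, sched segmentupload.Scheduler,
	bucket, key string, streamID storj.StreamID, partNumber int32,
	data io.Reader, etag []byte) (*streams.Meta, error) {

	etagCh := make(chan []byte, 1)
	up, err := u.UploadPart(ctx, bucket, key, streamID, partNumber, etagCh, sched)
	if err != nil {
		return nil, err
	}

	if _, err := io.Copy(up, data); err != nil {
		_ = up.Abort()
		return nil, err
	}

	// Send the eTag only after all part data has been written, but before Commit.
	etagCh <- etag

	if err := up.Commit(); err != nil {
		return nil, err
	}
	return up.Meta(), nil
}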
