Documentation ¶
Index ¶
- func DisableDeleteOnCancel(ctx context.Context) context.Context
- type EOFReader
- type Meta
- type Metadata
- type MetainfoUpload
- type Part
- type PeekThresholdReader
- type SizedReader
- type Store
- func (s *Store) Close() error
- func (s *Store) Get(ctx context.Context, bucket, unencryptedKey string, ...) (rr ranger.Ranger, err error)
- func (s *Store) Put(ctx context.Context, bucket, unencryptedKey string, data io.Reader, ...) (_ Meta, err error)
- func (s *Store) PutPart(ctx context.Context, bucket, unencryptedKey string, streamID storj.StreamID, ...) (_ Part, err error)
- func (s *Store) Ranger(ctx context.Context, response metaclient.DownloadSegmentWithRSResponse, ...) (rr ranger.Ranger, err error)
- type Upload
- type Uploader
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func DisableDeleteOnCancel ¶
func DisableDeleteOnCancel(ctx context.Context) context.Context
DisableDeleteOnCancel returns a context that disables deleting the object when an upload is canceled.
Types ¶
type EOFReader ¶
type EOFReader struct {
// contains filtered or unexported fields
}
EOFReader holds a reader and tracks whether it has reached EOF.
func NewEOFReader ¶
NewEOFReader keeps track of whether the internal reader has reached EOF.
func (*EOFReader) HasError ¶ added in v1.4.6
HasError returns true if an error was returned during reading.
type Meta ¶
type Meta struct {
	Modified   time.Time
	Expiration time.Time
	Size       int64
	Data       []byte
	Version    []byte
	Retention  *metaclient.Retention
}
Meta info about a stream.
type MetainfoUpload ¶ added in v1.11.0
type MetainfoUpload interface {
	metaclient.Batcher
	RetryBeginSegmentPieces(ctx context.Context, params metaclient.RetryBeginSegmentPiecesParams) (metaclient.RetryBeginSegmentPiecesResponse, error)
	io.Closer
}
MetainfoUpload are the metainfo methods needed to upload a stream.
type PeekThresholdReader ¶ added in v1.0.6
type PeekThresholdReader struct {
// contains filtered or unexported fields
}
PeekThresholdReader allows checking whether the size of a given reader exceeds the maximum inline segment size.
func NewPeekThresholdReader ¶ added in v1.0.6
func NewPeekThresholdReader(r io.Reader) (pt *PeekThresholdReader)
NewPeekThresholdReader creates a new instance of PeekThresholdReader.
func (*PeekThresholdReader) IsLargerThan ¶ added in v1.0.6
func (pt *PeekThresholdReader) IsLargerThan(thresholdSize int) (bool, error)
IsLargerThan reports whether the reader's size is larger than the given threshold.
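A minimal usage sketch. The import path and the package name streams are assumptions based on the types shown on this page, and maxInline is a hypothetical threshold value:

package main

import (
	"fmt"
	"strings"

	"storj.io/uplink/private/storage/streams" // assumed import path
)

func main() {
	pt := streams.NewPeekThresholdReader(strings.NewReader("hello world"))

	const maxInline = 4096 // hypothetical inline-segment threshold
	larger, err := pt.IsLargerThan(maxInline)
	if err != nil {
		panic(err)
	}
	fmt.Println(larger) // false: 11 bytes does not exceed 4096
}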
type SizedReader ¶ added in v1.2.0
type SizedReader struct {
// contains filtered or unexported fields
}
SizedReader allows checking the total number of bytes read so far.
func SizeReader ¶
func SizeReader(r io.Reader) *SizedReader
SizeReader creates a new instance of SizedReader.
func (*SizedReader) Read ¶ added in v1.2.0
func (r *SizedReader) Read(p []byte) (n int, err error)
Read implements io.Reader.Read.
func (*SizedReader) Size ¶ added in v1.2.0
func (r *SizedReader) Size() int64
Size returns the total number of bytes read so far.
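A short usage sketch, under the same assumed import path as above:

package main

import (
	"fmt"
	"io"
	"strings"

	"storj.io/uplink/private/storage/streams" // assumed import path
)

func main() {
	sr := streams.SizeReader(strings.NewReader("some payload"))

	// SizedReader counts bytes as they pass through Read.
	if _, err := io.Copy(io.Discard, sr); err != nil {
		panic(err)
	}
	fmt.Println(sr.Size()) // 12
}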
type Store ¶
type Store struct {
	*Uploader
	// contains filtered or unexported fields
}
Store is a store for streams. It implements typedStore as part of an ongoing migration to use typed paths. See the shim for the store that the rest of the world interacts with.
func NewStreamStore ¶
func NewStreamStore(metainfo *metaclient.Client, ec ecclient.Client, segmentSize int64, encStore *encryption.Store, encryptionParameters storj.EncryptionParameters, inlineThreshold, longTailMargin int) (*Store, error)
NewStreamStore constructs a stream store.
func (*Store) Close ¶ added in v1.4.0
func (s *Store) Close() error
Close closes the underlying resources passed to the metainfo DB.
func (*Store) Get ¶
func (s *Store) Get(ctx context.Context, bucket, unencryptedKey string, info metaclient.DownloadInfo, nextSegmentErrorDetection bool) (rr ranger.Ranger, err error)
Get returns a ranger that knows what the overall size is (from l/<key>) and then returns the appropriate data from segments s0/<key>, s1/<key>, ..., l/<key>.
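A hedged sketch of reading an object through Get. Constructing the metaclient.DownloadInfo via the metainfo client is elided, and the ranger.Ranger methods (Size, Range) and the import paths are assumptions:

package main

import (
	"context"
	"io"

	"storj.io/common/ranger"                  // assumed import path
	"storj.io/uplink/private/metaclient"      // assumed import path
	"storj.io/uplink/private/storage/streams" // assumed import path
)

func readObject(ctx context.Context, s *streams.Store, info metaclient.DownloadInfo, dst io.Writer) error {
	// The final false disables the optional next-segment error detection.
	rr, err := s.Get(ctx, "my-bucket", "path/to/object", info, false)
	if err != nil {
		return err
	}
	return readAll(ctx, rr, dst)
}

// readAll streams the full range; a ranger also supports partial ranges.
func readAll(ctx context.Context, rr ranger.Ranger, dst io.Writer) error {
	rc, err := rr.Range(ctx, 0, rr.Size())
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(dst, rc)
	return err
}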
func (*Store) Put ¶
func (s *Store) Put(ctx context.Context, bucket, unencryptedKey string, data io.Reader, metadata Metadata, opts *metaclient.UploadOptions) (_ Meta, err error)
Put breaks up data as it comes in into s.segmentSize length pieces, storing the first piece at s0/<key>, the second piece at s1/<key>, and the *last* piece at l/<key>. It stores the given metadata, along with the number of segments, in a new protobuf in the metadata of l/<key>.
If there is an error, it cleans up any uploaded segment before returning.
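A hedged sketch of a whole-object upload via Put. Constructing the Store and the Metadata value is elided, and passing a nil *metaclient.UploadOptions is assumed to select default behavior:

package main

import (
	"context"
	"io"

	"storj.io/uplink/private/storage/streams" // assumed import path
)

func putObject(ctx context.Context, s *streams.Store, data io.Reader, md streams.Metadata) error {
	meta, err := s.Put(ctx, "my-bucket", "path/to/object", data, md, nil)
	if err != nil {
		// Any segments uploaded so far have already been cleaned up.
		return err
	}
	_ = meta.Size // Meta carries stream details such as Size and Expiration
	return nil
}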
type Upload ¶ added in v1.11.0
type Upload struct {
// contains filtered or unexported fields
}
Upload represents an object or part upload and is returned by PutWriter or PutWriterPart. Data to be uploaded is written using the Write call. Either Commit or Abort must be called to either complete the upload or otherwise free up resources related to the upload.
func (*Upload) Abort ¶ added in v1.11.0
Abort aborts the upload. If called more than once, or after Commit, it will return an error.
func (*Upload) Commit ¶ added in v1.11.0
Commit commits the upload and must be called for the upload to complete successfully. It should only be called after all of the data has been written.
type Uploader ¶ added in v1.11.0
type Uploader struct {
// contains filtered or unexported fields
}
Uploader uploads object or part streams.
func NewUploader ¶ added in v1.11.0
func NewUploader(metainfo MetainfoUpload, piecePutter pieceupload.PiecePutter, segmentSize int64, encStore *encryption.Store, encryptionParameters storj.EncryptionParameters, inlineThreshold, longTailMargin int) (*Uploader, error)
NewUploader constructs a new stream putter.
func (*Uploader) UploadObject ¶ added in v1.11.0
func (u *Uploader) UploadObject(ctx context.Context, bucket, unencryptedKey string, metadata Metadata, sched segmentupload.Scheduler, opts *metaclient.UploadOptions) (_ *Upload, err error)
UploadObject starts an upload of an object to the given location. The object contents can be written to the returned upload, which can then be committed.
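A hedged sketch of the UploadObject flow, assuming Upload's Write method satisfies io.Writer as the description above suggests; the scheduler is taken as a parameter and the import paths are assumptions:

package main

import (
	"context"
	"io"

	"storj.io/uplink/private/storage/streams"               // assumed import path
	"storj.io/uplink/private/storage/streams/segmentupload" // assumed import path
)

func uploadObject(ctx context.Context, u *streams.Uploader, sched segmentupload.Scheduler, md streams.Metadata, data io.Reader) error {
	upload, err := u.UploadObject(ctx, "my-bucket", "path/to/object", md, sched, nil)
	if err != nil {
		return err
	}
	// Write all the data, then Commit; on failure, Abort frees the
	// resources related to the upload.
	if _, err := io.Copy(upload, data); err != nil {
		_ = upload.Abort()
		return err
	}
	return upload.Commit()
}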
func (*Uploader) UploadPart ¶ added in v1.11.0
func (u *Uploader) UploadPart(ctx context.Context, bucket, unencryptedKey string, streamID storj.StreamID, partNumber int32, eTag <-chan []byte, sched segmentupload.Scheduler) (_ *Upload, err error)
UploadPart starts an upload of a part to the given location for the given multipart upload stream. The eTag is an optional channel used to provide the eTag to be encrypted and included in the final segment of the part. The eTag should be sent on the channel only after the contents of the part have been fully written to the returned upload, but before calling Commit.
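A hedged sketch of the part-upload flow with the eTag channel, under the same assumptions as the sketch above:

package main

import (
	"context"
	"io"

	"storj.io/common/storj"                                 // assumed import path
	"storj.io/uplink/private/storage/streams"               // assumed import path
	"storj.io/uplink/private/storage/streams/segmentupload" // assumed import path
)

func uploadPart(ctx context.Context, u *streams.Uploader, sched segmentupload.Scheduler, streamID storj.StreamID, partNumber int32, data io.Reader, etag []byte) error {
	etagCh := make(chan []byte, 1)
	upload, err := u.UploadPart(ctx, "my-bucket", "path/to/object", streamID, partNumber, etagCh, sched)
	if err != nil {
		return err
	}
	if _, err := io.Copy(upload, data); err != nil {
		_ = upload.Abort()
		return err
	}
	// Per the contract above: send the eTag only after all part data has
	// been written, but before Commit.
	etagCh <- etag
	return upload.Commit()
}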