package storage

v1.40.1
Published: Sep 6, 2024 License: Apache-2.0 Imports: 17 Imported by: 57

Documentation

Overview

Package storage implements a simple storage abstraction.

This is meant to abstract filesystem calls, as well as be a wrapper for in-memory or remote storage. It also reduces the attack surface, as implementations can verify what is and is not accessed.

Constants

This section is empty.

Variables

var (
	// ErrClosed is the error returned if a bucket or object is already closed.
	ErrClosed = errors.New("already closed")
	// ErrSetExternalPathUnsupported is the error returned if a bucket does not support SetExternalPath.
	ErrSetExternalPathUnsupported = errors.New("setting the external path is unsupported for this bucket")
	// ErrSetLocalPathUnsupported is the error returned if a bucket does not support SetLocalPath.
	ErrSetLocalPathUnsupported = errors.New("setting the local path is unsupported for this bucket")
)

Functions

func AllPaths

func AllPaths(ctx context.Context, readBucket ReadBucket, prefix string) ([]string, error)

AllPaths walks the bucket and gets all the paths.

The returned paths are sorted.
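
For example, a minimal sketch of listing all paths under a prefix (not from the package source; assumes this package is imported as storage, the usual context and fmt imports, and an existing ReadBucket such as one from the storageos or storagemem subpackages):

// listPaths prints every path under the "proto" prefix, in sorted order.
func listPaths(ctx context.Context, readBucket storage.ReadBucket) error {
	paths, err := storage.AllPaths(ctx, readBucket, "proto")
	if err != nil {
		return err
	}
	for _, path := range paths {
		fmt.Println(path)
	}
	return nil
}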

func Copy

func Copy(
	ctx context.Context,
	from ReadBucket,
	to WriteBucket,
	options ...CopyOption,
) (int, error)

Copy copies the bucket at from to the bucket at to.

Copies are done concurrently. Returns the number of files copied.
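
A minimal sketch of mirroring one bucket into another (illustrative helper name; assumes this package is imported as storage and uses the CopyWithAtomic option documented below):

// mirror copies everything from one bucket to another, writing each file
// atomically, and returns how many files were copied.
func mirror(ctx context.Context, from storage.ReadBucket, to storage.WriteBucket) (int, error) {
	return storage.Copy(ctx, from, to, storage.CopyWithAtomic())
}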

func CopyPath added in v1.29.0

func CopyPath(
	ctx context.Context,
	from ReadBucket,
	fromPath string,
	to WriteBucket,
	toPath string,
	options ...CopyOption,
) error

CopyPath copies the fromPath from the ReadBucket to the toPath on the WriteBucket.

func CopyReadObject

func CopyReadObject(
	ctx context.Context,
	writeBucket WriteBucket,
	readObject ReadObject,
	options ...CopyOption,
) (retErr error)

CopyReadObject copies the contents of the ReadObject into the WriteBucket at the ReadObject's path.

func CopyReader

func CopyReader(
	ctx context.Context,
	writeBucket WriteBucket,
	reader io.Reader,
	path string,
) (retErr error)

CopyReader copies the contents of the Reader into the WriteBucket at the path.

func Diff

func Diff(
	ctx context.Context,
	runner command.Runner,
	writer io.Writer,
	one ReadBucket,
	two ReadBucket,
	options ...DiffOption,
) error

Diff writes a diff of the ReadBuckets to the Writer.

func DiffBytes

func DiffBytes(
	ctx context.Context,
	runner command.Runner,
	one ReadBucket,
	two ReadBucket,
	options ...DiffOption,
) ([]byte, error)

DiffBytes does a diff of the ReadBuckets.

func DiffWithFilenames added in v1.33.0

func DiffWithFilenames(
	ctx context.Context,
	runner command.Runner,
	writer io.Writer,
	one ReadBucket,
	two ReadBucket,
	options ...DiffOption,
) ([]string, error)

DiffWithFilenames writes a diff of the ReadBuckets to the Writer and returns the names of any file paths that contained differences. The returned paths are in sorted (ascending) order.

Note that the returned paths are determined by comparing the before and after bytes, not just based on whether the configured diff tool reports something. This can be used to avoid re-writing files whose contents don't actually need to change.

func Exists

func Exists(ctx context.Context, readBucket ReadBucket, path string) (bool, error)

Exists returns true if the path exists, false otherwise.

func ForReadObject added in v1.32.0

func ForReadObject(ctx context.Context, readBucket ReadBucket, path string, f func(ReadObject) error) (retErr error)

ForReadObject gets a ReadObjectCloser at the given path, calls f on it, and then closes the ReadObjectCloser.
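
For example, a sketch of hashing a single object without managing the ReadObjectCloser by hand (illustrative; assumes the standard crypto/sha256 and io imports and that this package is imported as storage):

// digest computes the SHA-256 digest of the object at path. ForReadObject
// opens the object, calls the function, and closes the object afterwards.
func digest(ctx context.Context, readBucket storage.ReadBucket, path string) ([]byte, error) {
	h := sha256.New()
	if err := storage.ForReadObject(ctx, readBucket, path, func(readObject storage.ReadObject) error {
		_, err := io.Copy(h, readObject)
		return err
	}); err != nil {
		return nil, err
	}
	return h.Sum(nil), nil
}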

func ForWriteObject added in v1.32.0

func ForWriteObject(ctx context.Context, writeBucket WriteBucket, path string, f func(WriteObject) error, options ...PutOption) (retErr error)

ForWriteObject gets a WriteObjectCloser at the given path, calls f on it, and then closes the WriteObjectCloser.

func IsEmpty

func IsEmpty(ctx context.Context, readBucket ReadBucket, prefix string) (bool, error)

IsEmpty returns true if the bucket is empty under the prefix.

A prefix of "" or "." will check if the entire bucket is empty.

func IsExistsMultipleLocations

func IsExistsMultipleLocations(err error) bool

IsExistsMultipleLocations returns true if the error is for a path existing in multiple locations.

func IsNotExist deprecated

func IsNotExist(err error) bool

IsNotExist returns true for an error that indicates a path does not exist.

Deprecated: Use errors.Is(err, fs.ErrNotExist) instead.

func IsWriteLimitReached added in v1.8.0

func IsWriteLimitReached(err error) bool

IsWriteLimitReached returns true if the error indicates that writes exceeded the bucket's write limit.

func NewErrExistsMultipleLocations

func NewErrExistsMultipleLocations(path string, externalPaths ...string) error

NewErrExistsMultipleLocations returns a new error for a path that exists in multiple locations.

func NewErrNotExist deprecated

func NewErrNotExist(path string) error

NewErrNotExist returns a new error for a path not existing.

Deprecated: use &fs.PathError{Op: "Operation", Path: path, Err: fs.ErrNotExist} instead.

func PutPath

func PutPath(ctx context.Context, writeBucket WriteBucket, path string, data []byte, options ...PutOption) (retErr error)

PutPath puts the data at the path.
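
A minimal sketch of writing a file atomically (illustrative helper and path; assumes this package is imported as storage):

// writeConfig writes data to config/app.yaml, using an atomic write so the
// object never appears partially written.
func writeConfig(ctx context.Context, writeBucket storage.WriteBucket, data []byte) error {
	return storage.PutPath(ctx, writeBucket, "config/app.yaml", data, storage.PutWithAtomic())
}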

func ReadPath

func ReadPath(ctx context.Context, readBucket ReadBucket, path string) (_ []byte, retErr error)

ReadPath is analogous to os.ReadFile.

Returns an error that fulfills IsNotExist if the path does not exist.
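
Since IsNotExist is deprecated, a sketch of handling the not-exist case with errors.Is (illustrative; assumes the standard errors and io/fs imports and that this package is imported as storage):

// readOrDefault reads the object at path, returning defaultData if the path
// does not exist.
func readOrDefault(ctx context.Context, readBucket storage.ReadBucket, path string, defaultData []byte) ([]byte, error) {
	data, err := storage.ReadPath(ctx, readBucket, path)
	if errors.Is(err, fs.ErrNotExist) {
		return defaultData, nil
	}
	if err != nil {
		return nil, err
	}
	return data, nil
}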

func WalkReadObjects

func WalkReadObjects(
	ctx context.Context,
	readBucket ReadBucket,
	prefix string,
	f func(ReadObject) error,
) error

WalkReadObjects walks the bucket, calling f on each object and closing the resulting ReadObjectCloser when done.
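
For example, a sketch of summing the size of every object under a prefix (illustrative; assumes the standard io import and that this package is imported as storage):

// totalSize reads every object under prefix and returns the total number of
// bytes read; WalkReadObjects closes each ReadObjectCloser for us.
func totalSize(ctx context.Context, readBucket storage.ReadBucket, prefix string) (int64, error) {
	var total int64
	if err := storage.WalkReadObjects(ctx, readBucket, prefix, func(readObject storage.ReadObject) error {
		n, err := io.Copy(io.Discard, readObject)
		total += n
		return err
	}); err != nil {
		return 0, err
	}
	return total, nil
}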

Types

type CopyOption

type CopyOption func(*copyOptions)

CopyOption is an option for Copy.

func CopyWithAtomic added in v1.29.0

func CopyWithAtomic() CopyOption

CopyWithAtomic returns a new CopyOption that applies PutWithAtomic when copying each file.

See the documentation on PutWithAtomic for more details.

func CopyWithExternalAndLocalPaths added in v1.32.0

func CopyWithExternalAndLocalPaths() CopyOption

CopyWithExternalAndLocalPaths returns a new CopyOption that says to copy external and local paths.

The to WriteBucket must support setting external and local paths.

type DiffOption

type DiffOption func(*diffOptions)

DiffOption is an option for Diff.

func DiffWithExternalPathPrefixes

func DiffWithExternalPathPrefixes(
	oneExternalPathPrefix string,
	twoExternalPathPrefix string,
) DiffOption

DiffWithExternalPathPrefixes returns a new DiffOption that sets the external path prefixes for the buckets.

If a file is in one bucket but not the other, it will be assumed that the file begins with the given prefix, and this prefix should be substituted for the other prefix.

For example, when diffing the directories "test/a" and "test/b", use the prefixes "test/a/" and "test/b/". A file present only in one, with path "test/a/foo.txt", will be shown as not existing at "test/b/foo.txt" in two.

Note that the prefixes are directly concatenated, so they should generally end with "/".

This option has no effect if DiffWithExternalPaths is not set. This option is not required if the prefixes are equal.

func DiffWithExternalPaths

func DiffWithExternalPaths() DiffOption

DiffWithExternalPaths returns a new DiffOption that prints diffs with external paths instead of paths.

func DiffWithSuppressCommands

func DiffWithSuppressCommands() DiffOption

DiffWithSuppressCommands returns a new DiffOption that suppresses printing of commands.

func DiffWithSuppressTimestamps

func DiffWithSuppressTimestamps() DiffOption

DiffWithSuppressTimestamps returns a new DiffOption that suppresses printing of timestamps.

func DiffWithTransform added in v1.0.0

func DiffWithTransform(
	transform func(side string, filename string, content []byte) []byte,
) DiffOption

DiffWithTransform returns a DiffOption that adds a transform function. The transform function is run on each file being compared, before it is diffed. transform takes the arguments:

side: "one" or "two", indicating whether the file comes from the first or second bucket in the diff
filename: the filename, including its path
content: the file content

transform returns the transformed content of filename.

TODO: this needs to be refactored or removed, especially the implicit side enum. Perhaps provide a transform function for a given bucket and apply it there.

type Mapper

type Mapper interface {
	// MapPath maps the path to the full path.
	//
	// The path is expected to be normalized and validated.
	// The returned path is expected to be normalized and validated.
	// If the path cannot be mapped, this returns false.
	MapPath(path string) (string, bool)
	// UnmapFullPath maps the full path to the path.
	//
	// Returns false if the full path does not apply.
	// The path is expected to be normalized and validated.
	// The returned path is expected to be normalized and validated.
	UnmapFullPath(fullPath string) (string, bool, error)
	// contains filtered or unexported methods
}

Mapper is a path mapper.

This will cause a Bucket to operate as if the Mapper has all paths mapped.

func MapChain

func MapChain(mappers ...Mapper) Mapper

MapChain chains the mappers.

If any Mapper does not match, this stops checking Mappers and returns an empty path and false. This is in contrast to MatchAnd, which runs every Matcher regardless.

If the Mappers are empty, a no-op Mapper is returned. If there is more than one Mapper, the Mappers are called in order for UnmapFullPath, with the order reversed for MapPath.

That is, order these assuming you are starting with a full path and working to a path.

func MapOnPrefix

func MapOnPrefix(prefix string) Mapper

MapOnPrefix returns a Mapper that will map the Bucket as if it was created on the given prefix.

The prefix is expected to be normalized and validated.
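
A sketch of re-rooting a bucket under a nested prefix via MapChain and MapReadBucket (documented below); the helper name is illustrative, and the ordering follows the MapChain documentation above:

// underVendorProtos returns a view of readBucket as if it had been created at
// vendor/protos: per the MapChain ordering, the "vendor" Mapper is applied
// first when unmapping full paths.
func underVendorProtos(readBucket storage.ReadBucket) storage.ReadBucket {
	return storage.MapReadBucket(
		readBucket,
		storage.MapChain(
			storage.MapOnPrefix("vendor"),
			storage.MapOnPrefix("protos"),
		),
	)
}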

type Matcher

type Matcher interface {
	// MatchPath returns true if the path matches.
	//
	// The path is expected to be normalized and validated.
	MatchPath(string) bool
	// contains filtered or unexported methods
}

Matcher is a path matcher.

This will cause a Bucket to operate as if it only contains matching paths.

func MatchAnd

func MatchAnd(matchers ...Matcher) Matcher

MatchAnd returns an And of the Matchers.

func MatchNot

func MatchNot(matcher Matcher) Matcher

MatchNot returns an Not of the Matcher.

func MatchOr

func MatchOr(matchers ...Matcher) Matcher

MatchOr returns an Or of the Matchers.

func MatchPathBase added in v1.8.0

func MatchPathBase(base string) Matcher

MatchPathBase returns a Matcher for the base.

func MatchPathContained

func MatchPathContained(containingDir string) Matcher

MatchPathContained returns a Matcher that matches paths contained by containingDir.

func MatchPathEqual

func MatchPathEqual(equalPath string) Matcher

MatchPathEqual returns a Matcher for the path.

func MatchPathEqualOrContained

func MatchPathEqualOrContained(equalOrContainingPath string) Matcher

MatchPathEqualOrContained returns a Matcher for the path that matches on paths equal or contained by equalOrContainingPath.

func MatchPathExt

func MatchPathExt(ext string) Matcher

MatchPathExt returns a Matcher for the extension.
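
A sketch of composing Matchers with FilterReadBucket (documented below, where multiple Matchers are anded together); the helper name is illustrative, and it assumes MatchPathExt takes the extension with its leading dot:

// protoFilesOutsideVendor returns a view of readBucket containing only .proto
// files that are not under the vendor directory.
func protoFilesOutsideVendor(readBucket storage.ReadBucket) storage.ReadBucket {
	return storage.FilterReadBucket(
		readBucket,
		storage.MatchPathExt(".proto"),
		storage.MatchNot(storage.MatchPathContained("vendor")),
	)
}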

type ObjectInfo

type ObjectInfo interface {
	// Path is the path of the object.
	//
	// This will always correspond to a path within the Bucket. For sub-buckets, this is the sub-path, but the
	// external path will include the sub-bucket path.
	//
	// This path will always be normalized, validated, and non-empty.
	Path() string
	// ExternalPath is the path that identifies the object externally.
	//
	// This path is not necessarily a file path, and should only be used to
	// uniquely identify this file as compared to other assets, and for display
	// to users.
	//
	// The path will be unnormalized, if it is a file path.
	// The path will never be empty. If a given implementation has no external path, this falls back to path.
	//
	// Example:
	//   Directory: /foo/bar
	//   Path: baz/bat.proto
	//   ExternalPath: /foo/bar/baz/bat.proto
	//
	// Example:
	//   Directory: .
	//   Path: baz/bat.proto
	//   ExternalPath: baz/bat.proto
	//
	// Example:
	//   S3 Bucket: https://s3.amazonaws.com/foo
	//   Path: baz/bat.proto
	//   ExternalPath: s3://foo/baz/bat.proto
	ExternalPath() string

	// LocalPath is the path on disk of the object, if the object originated from a local disk.
	//
	// This will be unnormalized if present.
	//
	// Will not be present if the path did not originate from disk. For example, objects that originated
	// from archives, git repositories, or object stores will not have this present.
	LocalPath() string
}

ObjectInfo contains object info.

An ObjectInfo will always be the same for a given path within a given Bucket; that is, an ObjectInfo is cacheable for a given Bucket.

func AllObjectInfos added in v1.32.0

func AllObjectInfos(ctx context.Context, readBucket ReadBucket, prefix string) ([]ObjectInfo, error)

AllObjectInfos walks the bucket and gets all the ObjectInfos.

The returned ObjectInfos are sorted by path.

type PutOption added in v1.14.0

type PutOption func(*putOptions)

PutOption is an option passed when putting an object in a bucket.

func PutWithAtomic added in v1.15.0

func PutWithAtomic() PutOption

PutWithAtomic ensures that the Put fully writes the file before making it available to readers. This happens by default for some implementations, while others may need to perform a sequence of operations to ensure atomic writes.

The Put operation is complete and the path will be readable once the returned WriteObjectCloser is written and closed (without an error). Any errors will cause the Put to be skipped (no path will be created).
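
A sketch of the Put workflow with this option (illustrative helper; PutPath with PutWithAtomic achieves the same in one call):

// putAtomic writes data to path so that the object only becomes readable once
// the WriteObjectCloser has been written and successfully closed.
func putAtomic(ctx context.Context, writeBucket storage.WriteBucket, path string, data []byte) (retErr error) {
	writeObjectCloser, err := writeBucket.Put(ctx, path, storage.PutWithAtomic())
	if err != nil {
		return err
	}
	defer func() {
		if err := writeObjectCloser.Close(); err != nil && retErr == nil {
			retErr = err
		}
	}()
	_, err = writeObjectCloser.Write(data)
	return err
}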

func PutWithSuggestedChunkSize added in v1.29.0

func PutWithSuggestedChunkSize(suggestedChunkSize int) PutOption

PutWithSuggestedChunkSize sets the given size in bytes as a suggested chunk size to use by the Bucket implementation for this Put call. Some implementations of Put allow multi-part upload, and allow customizing the chunk size of each part upload, or even disabling multi-part upload.

Setting a suggestedChunkSize of 0 suggests disabling chunking entirely. Negative values are ignored.

This is a suggestion, implementations may choose to ignore this option.

type PutOptions added in v1.14.0

type PutOptions interface {
	// Atomic ensures that the Put fully writes the file before making it
	// available to readers. This happens by default for some implementations,
	// while others may need to perform a sequence of operations to ensure
	// atomic writes.
	//
	// The Put operation is complete and the path will be readable once the
	// returned WriteObjectCloser is written and closed (without an error).
	// Any errors will cause the Put to be skipped (no path will be created).
	Atomic() bool
	// SuggestedDisableChunking suggests disabling chunking entirely.
	//
	// If SuggestedChunkSize() > 0, this will always be false.
	//
	// This is a suggestion, implementations may choose to ignore this option.
	SuggestedDisableChunking() bool
	// SuggestedChunkSize sets the given size in bytes as a suggested chunk
	// size to use by the Bucket implementation for this Put call.
	// Some implementations of Put allow multi-part upload, and allow customizing the
	// chunk size of each part upload, or even disabling multi-part upload.
	//
	// This is a suggestion, implementations may choose to ignore this option.
	SuggestedChunkSize() int
	// contains filtered or unexported methods
}

PutOptions are the possible options that can be passed to a Put operation.

func NewPutOptions added in v1.29.0

func NewPutOptions(options []PutOption) PutOptions

NewPutOptions returns a new PutOptions.

This is used by Bucket implementations.
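
A sketch of how a custom Bucket implementation might consume these options in its Put method; the myBucket type is hypothetical and not part of this package:

// Put shows a hypothetical WriteBucket implementation inspecting PutOptions.
func (b *myBucket) Put(ctx context.Context, path string, options ...storage.PutOption) (storage.WriteObjectCloser, error) {
	putOptions := storage.NewPutOptions(options)
	if putOptions.Atomic() {
		// Write to a temporary location and move into place on Close.
	}
	if chunkSize := putOptions.SuggestedChunkSize(); chunkSize > 0 {
		// Configure multi-part upload with the suggested chunk size.
	}
	// ... actual implementation elided ...
	return nil, errors.New("not implemented")
}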

type ReadBucket

type ReadBucket interface {
	// Get gets the path.
	//
	// The behavior of concurrently Getting and Putting an object is undefined.
	// The returned ReadObjectCloser is not thread-safe.
	//
	// Returns ErrNotExist if the path does not exist, other error
	// if there is a system error.
	Get(ctx context.Context, path string) (ReadObjectCloser, error)
	// Stat gets info about the object.
	//
	// Returns ErrNotExist if the path does not exist, other error
	// if there is a system error.
	Stat(ctx context.Context, path string) (ObjectInfo, error)
	// Walk walks the bucket with the prefix, calling f on each path.
	// If the prefix doesn't exist, this is a no-op.
	//
	// Note that with a prefix of foo/bar, foo/barbaz will not be walked,
	// but foo/bar/baz will be.
	//
	// Note that a prefix can also be equal to a path in the bucket, in which
	// case Walk will walk this single file. That is, if file a/b/c.txt is in
	// the bucket, Walk(ctx, "a/b/c.txt", ...) will result in a single iteration,
	// calling f on the file "a/b/c.txt".
	//
	// All paths given to f are normalized and validated.
	// If f returns error, Walk will stop short and return this error.
	// Returns other error on system error.
	Walk(ctx context.Context, prefix string, f func(ObjectInfo) error) error
}

ReadBucket is a simple read-only bucket.

All paths are regular files - Buckets do not handle directories. All paths must be relative. All paths are cleaned and ToSlash'ed by each function. Paths must not jump the bucket context; that is, after cleaning, they cannot contain "..".
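
A sketch of walking a bucket and printing each object's path alongside its external path (illustrative; assumes the standard fmt import, that this package is imported as storage, and that an empty prefix walks the entire bucket, as with IsEmpty):

// printPaths walks the whole bucket, printing each object's in-bucket path
// and its external path.
func printPaths(ctx context.Context, readBucket storage.ReadBucket) error {
	return readBucket.Walk(ctx, "", func(objectInfo storage.ObjectInfo) error {
		fmt.Printf("%s (%s)\n", objectInfo.Path(), objectInfo.ExternalPath())
		return nil
	})
}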

func FilterReadBucket added in v1.38.0

func FilterReadBucket(readBucket ReadBucket, matchers ...Matcher) ReadBucket

FilterReadBucket filters the ReadBucket.

If the Matchers are empty, the original ReadBucket is returned. If there is more than one Matcher, the Matchers are anded together.

func MapReadBucket

func MapReadBucket(readBucket ReadBucket, mappers ...Mapper) ReadBucket

MapReadBucket maps the ReadBucket.

If the Mappers are empty, the original ReadBucket is returned. If there is more than one Mapper, the Mappers are called in order for UnmapFullPath, with the order reversed for MapPath.

That is, order these assuming you are starting with a full path and working to a path.

func MultiReadBucket

func MultiReadBucket(readBuckets ...ReadBucket) ReadBucket

MultiReadBucket takes the union of logically-unique ReadBuckets.

This expects and validates that no paths overlap between the ReadBuckets.

If no readBuckets are given, this returns a no-op ReadBucket. If one readBucket is given, this returns the original ReadBucket. Otherwise, this returns a ReadBucket that will get from all buckets.

func OverlayReadBucket added in v1.29.0

func OverlayReadBucket(readBuckets ...ReadBucket) ReadBucket

OverlayReadBucket takes the union of the ReadBuckets, overlaying earlier ReadBuckets on top of later ones.

If two ReadBuckets have the same path, the first ReadBucket with the given path will be used.

If no readBuckets are given, this returns a no-op ReadBucket. If one readBucket is given, this returns the original ReadBucket. Otherwise, this returns a ReadBucket that will get from all buckets in the order they are given.
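
A minimal sketch (illustrative helper name) of layering one bucket over another:

// withOverrides returns a bucket in which any path present in overrides
// shadows the same path in base.
func withOverrides(overrides storage.ReadBucket, base storage.ReadBucket) storage.ReadBucket {
	return storage.OverlayReadBucket(overrides, base)
}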

func StripReadBucketExternalPaths added in v1.32.0

func StripReadBucketExternalPaths(readBucket ReadBucket) ReadBucket

StripReadBucketExternalPaths strips the differentiated ExternalPaths from objects returned from the ReadBucket, replacing them with the Paths.

This is used in situations where the ExternalPath points at, for example, a location in a cache, and you don't want to expose this information to callers.

type ReadBucketCloser

type ReadBucketCloser interface {
	io.Closer
	ReadBucket
}

ReadBucketCloser is a read-only bucket that must be closed.

func FilterReadBucketCloser added in v1.38.0

func FilterReadBucketCloser(readBucketCloser ReadBucketCloser, matchers ...Matcher) ReadBucketCloser

FilterReadBucketCloser filters the ReadBucketCloser.

If the Matchers are empty, the original ReadBucketCloser is returned. If there is more than one Matcher, the Matchers are anded together.

func MapReadBucketCloser added in v1.32.0

func MapReadBucketCloser(readBucketCloser ReadBucketCloser, mappers ...Mapper) ReadBucketCloser

MapReadBucketCloser maps the ReadBucketCloser.

If the Mappers are empty, the original ReadBucketCloser is returned. If there is more than one Mapper, the Mappers are called in order for UnmapFullPath, with the order reversed for MapPath.

That is, order these assuming you are starting with a full path and working to a path.

func NopReadBucketCloser

func NopReadBucketCloser(readBucket ReadBucket) ReadBucketCloser

NopReadBucketCloser returns a ReadBucketCloser for the ReadBucket.

type ReadObject

type ReadObject interface {
	ObjectInfo
	io.Reader
}

ReadObject is an object read from a bucket.

type ReadObjectCloser

type ReadObjectCloser interface {
	ReadObject
	io.Closer
}

ReadObjectCloser is a ReadObject with a closer.

It must be closed when done.

type ReadWriteBucket

type ReadWriteBucket interface {
	ReadBucket
	WriteBucket
}

ReadWriteBucket is a simple read/write bucket.

func MapReadWriteBucket

func MapReadWriteBucket(readWriteBucket ReadWriteBucket, mappers ...Mapper) ReadWriteBucket

MapReadWriteBucket maps the ReadWriteBucket.

If the Mappers are empty, the original ReadWriteBucket is returned. If there is more than one Mapper, the Mappers are called in order for UnmapFullPath, with the order reversed for MapPath.

That is, order these assuming you are starting with a full path and working to a path.

type ReadWriteBucketCloser

type ReadWriteBucketCloser interface {
	io.Closer
	ReadWriteBucket
}

ReadWriteBucketCloser is a read/write bucket that must be closed.

func MapReadWriteBucketCloser added in v1.32.0

func MapReadWriteBucketCloser(readWriteBucketCloser ReadWriteBucketCloser, mappers ...Mapper) ReadWriteBucketCloser

MapReadWriteBucketCloser maps the ReadWriteBucketCloser.

If the Mappers are empty, the original ReadWriteBucketCloser is returned. If there is more than one Mapper, the Mappers are called in order for UnmapFullPath, with the order reversed for MapPath.

That is, order these assuming you are starting with a full path and working to a path.

func NopReadWriteBucketCloser

func NopReadWriteBucketCloser(readWriteBucket ReadWriteBucket) ReadWriteBucketCloser

NopReadWriteBucketCloser returns a ReadWriteBucketCloser for the ReadWriteBucket.

type WriteBucket

type WriteBucket interface {
	// Put returns a WriteObjectCloser to write to the path.
	//
	// The path is truncated on close.
	// The behavior of concurrently Getting and Putting an object is undefined.
	// The returned WriteObjectCloser is not thread-safe.
	//
	// Note that an object may appear via Get and Stat calls before the WriteObjectCloser
	// has closed. To guarantee that an object will only appear once the WriteObjectCloser
	// is closed, pass PutWithAtomic.
	//
	// Returns error on system error.
	Put(ctx context.Context, path string, options ...PutOption) (WriteObjectCloser, error)
	// Delete deletes the object at the path.
	//
	// Returns ErrNotExist if the path does not exist, other error
	// if there is a system error.
	Delete(ctx context.Context, path string) error
	// DeleteAll deletes all objects with the prefix.
	// If the prefix doesn't exist, this is a no-op.
	//
	// Note that the prefix is used as a filepath prefix, and
	// NOT a string prefix. For example, the prefix "foo/bar"
	// will delete "foo/bar/baz", but NOT "foo/barbaz".
	DeleteAll(ctx context.Context, prefix string) error
	// SetExternalAndLocalPathsSupported returns true if SetExternalPath and SetLocalPath are supported.
	//
	// For example, in-memory buckets may choose to return true so that object sources
	// are preserved, but filesystem buckets may choose to return false as they have
	// their own external paths.
	SetExternalAndLocalPathsSupported() bool
}

WriteBucket is a write-only bucket.

func LimitWriteBucket added in v1.8.0

func LimitWriteBucket(writeBucket WriteBucket, limit int) WriteBucket

LimitWriteBucket returns a WriteBucket that writes to [writeBucket] but stops with an error after [limit] bytes are written.

The error can be checked using IsWriteLimitReached.

A negative [limit] is treated the same as a limit of 0.
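
A sketch of enforcing a size limit while copying (illustrative; assumes the standard fmt import and that this package is imported as storage):

// copyWithLimit copies from into to, refusing to write more than maxBytes in
// total, and distinguishes the limit error from other failures.
func copyWithLimit(ctx context.Context, from storage.ReadBucket, to storage.WriteBucket, maxBytes int) error {
	limited := storage.LimitWriteBucket(to, maxBytes)
	if _, err := storage.Copy(ctx, from, limited); err != nil {
		if storage.IsWriteLimitReached(err) {
			return fmt.Errorf("bucket exceeds %d bytes: %w", maxBytes, err)
		}
		return err
	}
	return nil
}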

func MapWriteBucket

func MapWriteBucket(writeBucket WriteBucket, mappers ...Mapper) WriteBucket

MapWriteBucket maps the WriteBucket.

If the Mappers are empty, the original WriteBucket is returned. If there is more than one Mapper, the Mappers are called in order for UnmapFullPath, with the order reversed for MapPath.

That is, order these assuming you are starting with a full path and working to a path.

If a path that does not match is called for Put, an error is returned.

type WriteBucketCloser

type WriteBucketCloser interface {
	io.Closer
	WriteBucket
}

WriteBucketCloser is a write-only bucket that must be closed.

func MapWriteBucketCloser added in v1.32.0

func MapWriteBucketCloser(writeBucketCloser WriteBucketCloser, mappers ...Mapper) WriteBucketCloser

MapWriteBucketCloser maps the WriteBucketCloser.

If the Mappers are empty, the original WriteBucketCloser is returned. If there is more than one Mapper, the Mappers are called in order for UnmapFullPath, with the order reversed for MapPath.

That is, order these assuming you are starting with a full path and working to a path.

If a path that does not match is called for Put, an error is returned.

func NopWriteBucketCloser

func NopWriteBucketCloser(writeBucket WriteBucket) WriteBucketCloser

NopWriteBucketCloser returns a WriteBucketCloser for the WriteBucket.

type WriteObject

type WriteObject interface {
	io.Writer

	// SetExternalPath attempts to explicitly set the external path for the new object.
	//
	// If SetExternalAndLocalPathsSupported returns false, this returns an error.
	SetExternalPath(externalPath string) error
	// SetLocalPath attempts to explicitly set the local path for the new object.
	//
	// If SetExternalAndLocalPathsSupported returns false, this returns an error.
	SetLocalPath(localPath string) error
}

WriteObject is an object written to a bucket.

type WriteObjectCloser

type WriteObjectCloser interface {
	WriteObject
	io.Closer
}

WriteObjectCloser is a WriteObject with a closer.

It must be closed when done.

Directories

Path	Synopsis
cmd
	ddiff	Package main implements the ddiff command that diffs two directories.
storagearchive	Package storagearchive implements archive utilities.
storagemem	Package storagemem implements an in-memory storage Bucket.
	internal	Package internal splits out ImmutableObject into a separate package from storagemem to make it impossible to modify ImmutableObject via direct field access.
storageos	Package storageos implements an os-backed storage Bucket.
storagetesting	Package storagetesting implements testing utilities and integration tests for storage.
storageutil	Package storageutil provides helpers for storage implementations.
