package content

v1.2.6
Published: Jul 23, 2024 License: Apache-2.0 Imports: 29 Imported by: 50

Documentation

Overview

Copyright The ORAS Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Index

Constants

const (
	// DefaultBlobMediaType specifies the default blob media type
	DefaultBlobMediaType = ocispec.MediaTypeImageLayer
	// DefaultBlobDirMediaType specifies the default blob directory media type
	DefaultBlobDirMediaType = ocispec.MediaTypeImageLayerGzip
)
const (
	// AnnotationDigest is the annotation key for the digest of the uncompressed content
	AnnotationDigest = "io.deis.oras.content.digest"
	// AnnotationUnpack is the annotation key for indication of unpacking
	AnnotationUnpack = "io.deis.oras.content.unpack"
)
const (
	// what you get for a blank digest
	BlankHash = digest.Digest("sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
)
const (
	// DefaultBlocksize is the default size of each slice of bytes read in each write through in gunzip and untar.
	// Simply uses the same size as io.Copy()
	DefaultBlocksize = 32768
)
const (
	// OCIImageIndexFile is the file name of the index from the OCI Image Layout Specification
	// Reference: https://github.com/opencontainers/image-spec/blob/master/image-layout.md#indexjson-file
	OCIImageIndexFile = "index.json"
)
const (
	// TempFilePattern specifies the pattern to create temporary files
	TempFilePattern = "oras"
)

Variables

var (
	ErrNotFound           = errors.New("not_found")
	ErrNoName             = errors.New("no_name")
	ErrUnsupportedSize    = errors.New("unsupported_size")
	ErrUnsupportedVersion = errors.New("unsupported_version")
	ErrInvalidReference   = errors.New("invalid_reference")
)

Common errors

var (
	ErrPathTraversalDisallowed = errors.New("path_traversal_disallowed")
	ErrOverwriteDisallowed     = errors.New("overwrite_disallowed")
)

FileStore errors

Functions

func GenerateConfig added in v0.5.0

func GenerateConfig(annotations map[string]string) ([]byte, ocispec.Descriptor, error)

GenerateConfig generates a blank config with optional annotations.

func GenerateManifest added in v0.5.0

func GenerateManifest(config *ocispec.Descriptor, annotations map[string]string, descs ...ocispec.Descriptor) ([]byte, ocispec.Descriptor, error)

GenerateManifest generates a manifest. The manifest will include the provided config and the descs as layers. The raw bytes of the manifest are returned.

func GenerateManifestAndConfig added in v0.5.0

func GenerateManifestAndConfig(manifestAnnotations map[string]string, configAnnotations map[string]string, descs ...ocispec.Descriptor) (manifest []byte, manifestDesc ocispec.Descriptor, config []byte, configDesc ocispec.Descriptor, err error)

GenerateManifestAndConfig generates a config and then a manifest. Raw bytes will be returned.
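
For illustration, here is a minimal sketch of that flow: stage a layer in a Memory store, generate a blank config plus a manifest over it, and link the manifest to a ref. The import path oras.land/oras-go/pkg/content, the ref, and the file name are assumptions for the example.

package main

import (
	"fmt"

	"oras.land/oras-go/pkg/content" // assumed import path for this package
)

func main() {
	// Stage a layer blob in memory to obtain its descriptor.
	memStore := content.NewMemory()
	layerDesc, err := memStore.Add("hello.txt", content.DefaultBlobMediaType, []byte("hello world"))
	if err != nil {
		panic(err)
	}

	// Generate a blank config and a manifest that references the layer.
	manifestBytes, manifestDesc, configBytes, configDesc, err := content.GenerateManifestAndConfig(nil, nil, layerDesc)
	if err != nil {
		panic(err)
	}

	// Register the generated content so the manifest can be resolved later.
	memStore.Set(configDesc, configBytes)
	if err := memStore.StoreManifest("localhost:5000/hello:latest", manifestDesc, manifestBytes); err != nil {
		panic(err)
	}
	fmt.Println("manifest digest:", manifestDesc.Digest)
}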

func NewGunzipWriter

func NewGunzipWriter(writer content.Writer, opts ...WriterOpt) content.Writer

NewGunzipWriter wraps a writer with a gunzip, so that the stream is gunzipped.

By default, it calculates the hash when writing. If the option `skipHash` is true, it will skip doing the hash. Skipping the hash is intended to be used only if you are confident about the validity of the data being passed to the writer, and wish to save on the hashing time.
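
As a sketch of how this composes (assuming the import path oras.land/oras-go/pkg/content), the example below gunzips a compressed payload on the way into an IoContentWriter backed by a bytes.Buffer; a real pull path would typically finish with Commit rather than a bare Close.

package main

import (
	"bytes"
	"compress/gzip"

	"oras.land/oras-go/pkg/content" // assumed import path
)

func main() {
	// Destination for the decompressed bytes.
	var out bytes.Buffer

	// NewIoContentWriter turns the io.Writer into a content.Writer, and
	// NewGunzipWriter decompresses the stream as it is written.
	w := content.NewGunzipWriter(content.NewIoContentWriter(&out))

	// A gzipped payload, standing in for a real compressed layer.
	var compressed bytes.Buffer
	gz := gzip.NewWriter(&compressed)
	gz.Write([]byte("hello world"))
	gz.Close()

	if _, err := w.Write(compressed.Bytes()); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}
	// out now holds the decompressed payload: "hello world"
}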

func NewIoContentWriter

func NewIoContentWriter(writer io.Writer, opts ...WriterOpt) content.Writer

NewIoContentWriter creates a new IoContentWriter.

By default, it calculates the hash when writing. If the option `skipHash` is true, it will skip doing the hash. Skipping the hash is intended to be used only if you are confident about the validity of the data being passed to the writer, and wish to save on the hashing time.
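
The sketch below shows the documented use case of streaming a blob to a file: a blob staged in a Memory store is fetched and copied into an IoContentWriter wrapping an os.File. The import path and the file name are assumptions.

package main

import (
	"context"
	"io"
	"os"

	"oras.land/oras-go/pkg/content" // assumed import path
)

func main() {
	// A blob to pull, staged in a memory store for the example.
	memStore := content.NewMemory()
	desc, err := memStore.Add("greeting.txt", content.DefaultBlobMediaType, []byte("hello world"))
	if err != nil {
		panic(err)
	}

	// Open a destination file and wrap it as a content.Writer.
	f, err := os.Create("greeting.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	w := content.NewIoContentWriter(f)

	// Stream the blob from the store into the file-backed writer.
	rc, err := memStore.Fetch(context.Background(), desc)
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	if _, err := io.Copy(w, rc); err != nil {
		panic(err)
	}
	w.Close()
}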

func NewPassthroughMultiWriter

func NewPassthroughMultiWriter(writers func(name string) (content.Writer, error), f func(r io.Reader, getwriter func(name string) io.Writer, done chan<- error), opts ...WriterOpt) content.Writer

func NewPassthroughWriter

func NewPassthroughWriter(writer content.Writer, f func(r io.Reader, w io.Writer, done chan<- error), opts ...WriterOpt) content.Writer

NewPassthroughWriter creates a pass-through writer that allows for processing the content via an arbitrary function. The function should do whatever processing it wants, reading from the Reader and writing to the Writer. When done, it must indicate completion by sending an error or nil on the done channel.
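
A minimal sketch of that contract (assuming the import path oras.land/oras-go/pkg/content): the processing function reads from r, writes to w, and reports completion on done. Here the "processing" is just counting the bytes copied.

package main

import (
	"bytes"
	"fmt"
	"io"

	"oras.land/oras-go/pkg/content" // assumed import path
)

func main() {
	var out bytes.Buffer
	var total int64

	// The passthrough function reads from r, processes, and writes to w.
	// It must signal completion (or an error) on the done channel.
	process := func(r io.Reader, w io.Writer, done chan<- error) {
		n, err := io.Copy(w, r) // "processing" here is just counting bytes
		total = n
		done <- err
	}

	pw := content.NewPassthroughWriter(content.NewIoContentWriter(&out), process)
	if _, err := pw.Write([]byte("hello world")); err != nil {
		panic(err)
	}
	if err := pw.Close(); err != nil {
		panic(err)
	}
	fmt.Printf("passed through %d bytes\n", total)
}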

func NewUntarWriter

func NewUntarWriter(writer content.Writer, opts ...WriterOpt) content.Writer

NewUntarWriter wraps a writer with an untar, so that the stream is untarred.

By default, it calculates the hash when writing. If the option `skipHash` is true, it will skip doing the hash. Skipping the hash is intended to be used only if you are confident about the validity of the data being passed to the writer, and wish to save on the hashing time.

func NewUntarWriterByName

func NewUntarWriterByName(writers func(string) (content.Writer, error), opts ...WriterOpt) content.Writer

NewUntarWriterByName wraps multiple writers with an untar, so that the stream is untarred and passed to the appropriate writer, based on the filename. If a filename is not found, it is up to the provided writers func to determine how to process it.

func NopCloserAt added in v0.5.0

func NopCloserAt(r io.ReaderAt) nopCloserAt

func ResolveName

func ResolveName(desc ocispec.Descriptor) (string, bool)

ResolveName resolves the name from a descriptor.

Types

type Decompress added in v0.5.0

type Decompress struct {
	// contains filtered or unexported fields
}

Decompress is a store that decompresses content and extracts it from tar, if needed, wrapping another store. By default, a File store simply writes each artifact to a file, just as a Memory store writes it into memory. If the artifact is gzipped or tarred, you might want to store the actual object that is inside the tar or gzip. Wrap your store with Decompress, and it will check the media type and, if relevant, gunzip and/or untar the content.

For example:

fileStore := NewFile(rootPath)
decompressStore := NewDecompress(fileStore, WithBlocksize(blocksize))

The above example works if there is no tar, i.e. each artifact is just a single file, perhaps gzipped, or if there is only one file in each tar archive. In other words, when each content.Writer has only one target output stream. However, if you have multiple files in each tar archive, each archive of which is an artifact layer, then you need a way to select how to handle each file in the tar archive. In other words, when each content.Writer has more than one target output stream. In that case, use the following example:

multiStore := NewMultiStore(rootPath) // some hypothetical store that can handle different filenames
decompressStore := NewDecompress(multiStore, WithBlocksize(blocksize), WithMultiWriterIngester())

func NewDecompress added in v0.5.0

func NewDecompress(pusher remotes.Pusher, opts ...WriterOpt) Decompress

func (Decompress) Push added in v0.5.0

Push gets a content.Writer

type File added in v0.5.0

type File struct {
	DisableOverwrite          bool
	AllowPathTraversalOnWrite bool

	// Reproducible enables stripping times from added files
	Reproducible bool
	// contains filtered or unexported fields
}

File provides content via files from the file system

func NewFile added in v0.5.0

func NewFile(rootPath string, opts ...WriterOpt) *File

NewFile creates a new file target. It represents a single root reference and all of its components.

func (*File) Add added in v0.5.0

func (s *File) Add(name, mediaType, path string) (ocispec.Descriptor, error)

Add adds a file reference from a path, either directory or single file, and returns the reference descriptor.

func (*File) Close added in v0.5.0

func (s *File) Close() error

Close frees up resources used by the file store

func (*File) Fetch added in v0.5.0

func (s *File) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error)

Fetch gets an io.ReadCloser for the specified content

func (*File) Fetcher added in v0.5.0

func (s *File) Fetcher(ctx context.Context, ref string) (remotes.Fetcher, error)

func (*File) Load added in v0.5.0

func (s *File) Load(desc ocispec.Descriptor, data []byte) error

Load is a lower-level memory-only version of Add. Rather than taking a path, generating a descriptor and creating a reference, it takes raw data and a descriptor that describes that data and stores it in memory. It will disappear at process termination.

It is especially useful for adding ephemeral data, such as config, that must exist in order to walk a manifest.

func (*File) MapPath added in v0.5.0

func (s *File) MapPath(name, path string) string

MapPath maps name to path

func (*File) Pusher added in v0.5.0

func (s *File) Pusher(ctx context.Context, ref string) (remotes.Pusher, error)

func (*File) Ref added in v0.5.0

func (s *File) Ref(ref string) (ocispec.Descriptor, []byte, error)

Ref gets a reference's descriptor and content

func (*File) Resolve added in v0.5.0

func (s *File) Resolve(ctx context.Context, ref string) (name string, desc ocispec.Descriptor, err error)

func (*File) ResolvePath added in v0.5.0

func (s *File) ResolvePath(name string) string

ResolvePath returns the path by name

func (*File) Resolver added in v0.5.0

func (s *File) Resolver() remotes.Resolver

func (*File) StoreManifest added in v0.5.0

func (s *File) StoreManifest(ref string, desc ocispec.Descriptor, manifest []byte) error

StoreManifest stores a manifest linked to by the provided ref. The children of the manifest, such as layers and config, should already exist in the file store, either as files linked via Add(), or via Load(). If they do not exist, then a typical Fetcher that walks the manifest will hit an unresolved hash.

StoreManifest does *not* validate their presence.
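
Putting the pieces together, the sketch below adds an on-disk file as a layer, Loads an ephemeral generated config, and then stores a generated manifest under a ref. The import path, the root path, the file path, and the ref are assumptions for the example.

package main

import (
	"oras.land/oras-go/pkg/content" // assumed import path
)

func main() {
	// File store rooted at the current directory (placeholder layout).
	fileStore := content.NewFile(".")
	defer fileStore.Close()

	// Add an on-disk file as a layer; "artifact.txt" is a placeholder path.
	layerDesc, err := fileStore.Add("artifact.txt", content.DefaultBlobMediaType, "artifact.txt")
	if err != nil {
		panic(err)
	}

	// The config is ephemeral, so Load it into memory rather than adding a file.
	configBytes, configDesc, err := content.GenerateConfig(nil)
	if err != nil {
		panic(err)
	}
	if err := fileStore.Load(configDesc, configBytes); err != nil {
		panic(err)
	}

	// Generate a manifest over the config and layer, then link it to a ref.
	manifestBytes, manifestDesc, err := content.GenerateManifest(&configDesc, nil, layerDesc)
	if err != nil {
		panic(err)
	}
	if err := fileStore.StoreManifest("localhost:5000/artifact:v1", manifestDesc, manifestBytes); err != nil {
		panic(err)
	}
}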

type IoContentWriter

type IoContentWriter struct {
	// contains filtered or unexported fields
}

IoContentWriter is a writer that wraps an io.Writer, so that results can be streamed to an open io.Writer. For example, it can be used to pull a layer and write it to a file or device.

func (*IoContentWriter) Close

func (w *IoContentWriter) Close() error

func (*IoContentWriter) Commit

func (w *IoContentWriter) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error

Commit commits the blob (but no roll-back is guaranteed on an error). size and expected can be zero-value when unknown. Commit always closes the writer, even on error. ErrAlreadyExists aborts the writer.

func (*IoContentWriter) Digest

func (w *IoContentWriter) Digest() digest.Digest

Digest may return an empty digest or panic until committed.

func (*IoContentWriter) Status

func (w *IoContentWriter) Status() (content.Status, error)

Status returns the current state of the write

func (*IoContentWriter) Truncate

func (w *IoContentWriter) Truncate(size int64) error

Truncate updates the size of the target blob

func (*IoContentWriter) Write

func (w *IoContentWriter) Write(p []byte) (n int, err error)

type Memory added in v0.5.0

type Memory struct {
	// contains filtered or unexported fields
}

Memory provides content from memory

func NewMemory added in v0.5.0

func NewMemory() *Memory

NewMemory creates a new memory store

func (*Memory) Add added in v0.5.0

func (s *Memory) Add(name, mediaType string, content []byte) (ocispec.Descriptor, error)

Add adds content, generating a descriptor and returning it.

func (*Memory) Fetch added in v0.5.0

func (s *Memory) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error)

Fetch gets an io.ReadCloser for the specified content

func (*Memory) Fetcher added in v0.5.0

func (s *Memory) Fetcher(ctx context.Context, ref string) (remotes.Fetcher, error)

func (*Memory) Get added in v0.5.0

func (s *Memory) Get(desc ocispec.Descriptor) (ocispec.Descriptor, []byte, bool)

Get finds the content from the store

func (*Memory) GetByName added in v0.5.0

func (s *Memory) GetByName(name string) (ocispec.Descriptor, []byte, bool)

GetByName finds the content from the store by name (i.e. AnnotationTitle)
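
A short sketch of lookup by digest versus by name (import path assumed, file name is a placeholder):

package main

import (
	"fmt"

	"oras.land/oras-go/pkg/content" // assumed import path
)

func main() {
	memStore := content.NewMemory()
	desc, err := memStore.Add("notes.txt", content.DefaultBlobMediaType, []byte("remember the milk"))
	if err != nil {
		panic(err)
	}

	// Look up by descriptor, or by name (the title annotation set by Add).
	if _, data, ok := memStore.Get(desc); ok {
		fmt.Printf("by digest: %q\n", data)
	}
	if foundDesc, data, ok := memStore.GetByName("notes.txt"); ok {
		fmt.Printf("by name %s: %q\n", foundDesc.Digest, data)
	}
}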

func (*Memory) Pusher added in v0.5.0

func (s *Memory) Pusher(ctx context.Context, ref string) (remotes.Pusher, error)

func (*Memory) Resolve added in v0.5.0

func (s *Memory) Resolve(ctx context.Context, ref string) (name string, desc ocispec.Descriptor, err error)

func (*Memory) Resolver added in v0.5.0

func (s *Memory) Resolver() remotes.Resolver

func (*Memory) Set added in v0.5.0

func (s *Memory) Set(desc ocispec.Descriptor, content []byte)

Set adds the content to the store

func (*Memory) StoreManifest added in v0.5.0

func (s *Memory) StoreManifest(ref string, desc ocispec.Descriptor, manifest []byte) error

StoreManifest stores a manifest linked to by the provided ref. The children of the manifest, such as layers and config, should already exist in the memory store, either added via Add() or via Set(). If they do not exist, then a typical Fetcher that walks the manifest will hit an unresolved hash.

StoreManifest does *not* validate their presence.

type MultiReader

type MultiReader struct {
	// contains filtered or unexported fields
}

MultiReader is a store that reads content from multiple stores. It finds content by asking each underlying store for it, looked up by hash.

Example:

fileStore := NewFile(rootPath)
memoryStore := NewMemory()
// load up content in fileStore and memoryStore
multiReader := MultiReader{}
multiReader.AddStore(fileStore, memoryStore)

You can now use multiReader anywhere that a remotes.Fetcher is accepted

func (*MultiReader) AddStore

func (m *MultiReader) AddStore(store ...remotes.Fetcher)

AddStore adds one or more stores to read from

func (MultiReader) Fetch added in v0.5.0

Fetch gets a reader for the given descriptor

type MultiWriterIngester

type MultiWriterIngester interface {
	ctrcontent.Ingester
	Writers(ctx context.Context, opts ...ctrcontent.WriterOpt) (func(string) (ctrcontent.Writer, error), error)
}

MultiWriterIngester is an ingester that can provide a single writer or multiple writers for a single descriptor. It is useful when the target of a descriptor can have multiple items within it, e.g. a layer that is a tar file with multiple files, each of which should go to a different stream, some of which should not be handled at all.

type MultiWriterPusher added in v0.5.0

type MultiWriterPusher interface {
	remotes.Pusher
	Pushers(ctx context.Context, desc ocispec.Descriptor) (func(string) (ctrcontent.Writer, error), error)
}

MultiWriterPusher is a pusher that can provide a single writer or multiple writers for a single descriptor. It is useful when the target of a descriptor can have multiple items within it, e.g. a layer that is a tar file with multiple files, each of which should go to a different stream, some of which should not be handled at all.

type OCI added in v0.5.0

type OCI struct {
	content.Store
	// contains filtered or unexported fields
}

OCI provides content from the file system with the OCI-Image layout. Reference: https://github.com/opencontainers/image-spec/blob/master/image-layout.md

func NewOCI added in v0.5.0

func NewOCI(rootPath string) (*OCI, error)

NewOCI creates a new OCI store

func (*OCI) Abort added in v0.5.0

func (s *OCI) Abort(ctx context.Context, ref string) error

Abort completely cancels the ingest operation targeted by ref. TODO: implement (needed to create a content.Store).

func (*OCI) AddReference added in v0.5.0

func (s *OCI) AddReference(name string, desc ocispec.Descriptor)

AddReference adds or updates a reference in the index.

func (*OCI) Delete added in v0.5.0

func (s *OCI) Delete(ctx context.Context, dgst digest.Digest) error

Delete removes the content from the store.

func (*OCI) DeleteReference added in v0.5.0

func (s *OCI) DeleteReference(name string)

DeleteReference deletes a reference from the index.

func (*OCI) Fetch added in v0.5.0

func (s *OCI) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error)

Fetch gets an io.ReadCloser for the specified content

func (*OCI) Fetcher added in v0.5.0

func (s *OCI) Fetcher(ctx context.Context, ref string) (remotes.Fetcher, error)

func (*OCI) Info added in v0.5.0

func (s *OCI) Info(ctx context.Context, dgst digest.Digest) (content.Info, error)

Info returns metadata about content available in the content store.

func (*OCI) ListReferences added in v0.5.0

func (s *OCI) ListReferences() map[string]ocispec.Descriptor

ListReferences lists all references in index.

func (*OCI) ListStatuses added in v0.5.0

func (s *OCI) ListStatuses(ctx context.Context, filters ...string) ([]content.Status, error)

ListStatuses returns the status of any active ingestions whose ref matches the provided regular expression. If empty, all active ingestions will be returned. TODO: implement (needed to create a content.Store).

func (*OCI) LoadIndex added in v0.5.0

func (s *OCI) LoadIndex() error

LoadIndex reads the index.json from the file system

func (*OCI) Pusher added in v0.5.0

func (s *OCI) Pusher(ctx context.Context, ref string) (remotes.Pusher, error)

Pusher gets a remotes.Pusher for the given ref

func (*OCI) ReaderAt added in v0.5.0

func (s *OCI) ReaderAt(ctx context.Context, desc ocispec.Descriptor) (content.ReaderAt, error)

ReaderAt returns a content.ReaderAt for the given descriptor

func (*OCI) Resolve added in v0.5.0

func (s *OCI) Resolve(ctx context.Context, ref string) (name string, desc ocispec.Descriptor, err error)

func (*OCI) Resolver added in v0.5.0

func (s *OCI) Resolver() remotes.Resolver

func (*OCI) SaveIndex added in v0.5.0

func (s *OCI) SaveIndex() error

SaveIndex writes the index.json to the file system
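
The sketch below opens an OCI layout, tags a descriptor in the index, persists index.json, and lists the references. The layout path, tag, and descriptor are placeholders; in practice the descriptor would come from content pushed into the store. The import path is assumed.

package main

import (
	"fmt"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"

	"oras.land/oras-go/pkg/content" // assumed import path
)

func main() {
	// Open (or initialize) an OCI image layout rooted at ./layout.
	ociStore, err := content.NewOCI("./layout")
	if err != nil {
		panic(err)
	}

	// Tag a descriptor in the index; this one is a stand-in (the empty blob).
	desc := ocispec.Descriptor{
		MediaType: ocispec.MediaTypeImageManifest,
		Digest:    "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
	}
	ociStore.AddReference("v1", desc)

	// Persist index.json and list what the layout now references.
	if err := ociStore.SaveIndex(); err != nil {
		panic(err)
	}
	for name, d := range ociStore.ListReferences() {
		fmt.Println(name, d.Digest)
	}
}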

func (*OCI) Status added in v0.5.0

func (s *OCI) Status(ctx context.Context, ref string) (content.Status, error)

TODO: implement (needed to create a content.Store)

func (*OCI) Update added in v0.5.0

func (s *OCI) Update(ctx context.Context, info content.Info, fieldpaths ...string) (content.Info, error)

Update updates mutable information related to content. If one or more fieldpaths are provided, only those fields will be updated. Mutable fields:

labels.*

TODO: implement (needed to create a content.Store).

func (*OCI) Walk added in v0.5.0

func (s *OCI) Walk(ctx context.Context, fn content.WalkFunc, filters ...string) error

Walk calls fn for each item in the content store that matches the provided filters. If no filters are given, all items will be walked. TODO: implement (needed to create a content.Store).

type PassthroughMultiWriter

type PassthroughMultiWriter struct {
	// contains filtered or unexported fields
}

PassthroughMultiWriter is a single writer that passes through to multiple writers, allowing the passthrough function to select which writer to use.

func (*PassthroughMultiWriter) Close

func (pmw *PassthroughMultiWriter) Close() error

func (*PassthroughMultiWriter) Commit

func (pmw *PassthroughMultiWriter) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error

Commit commits the blob (but no roll-back is guaranteed on an error). size and expected can be zero-value when unknown. Commit always closes the writer, even on error. ErrAlreadyExists aborts the writer.

func (*PassthroughMultiWriter) Digest

func (pmw *PassthroughMultiWriter) Digest() digest.Digest

Digest may return an empty digest or panic until committed.

func (*PassthroughMultiWriter) Status

func (pmw *PassthroughMultiWriter) Status() (content.Status, error)

Status returns the current state of the write

func (*PassthroughMultiWriter) Truncate

func (pmw *PassthroughMultiWriter) Truncate(size int64) error

Truncate updates the size of the target blob, but cannot do anything with a multiwriter

func (*PassthroughMultiWriter) Write

func (pmw *PassthroughMultiWriter) Write(p []byte) (n int, err error)

type PassthroughWriter

type PassthroughWriter struct {
	// contains filtered or unexported fields
}

PassthroughWriter takes an input stream and passes it through to an underlying writer, while providing the ability to manipulate the stream before it gets passed through

func (*PassthroughWriter) Close

func (pw *PassthroughWriter) Close() error

func (*PassthroughWriter) Commit

func (pw *PassthroughWriter) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error

Commit commits the blob (but no roll-back is guaranteed on an error). size and expected can be zero-value when unknown. Commit always closes the writer, even on error. ErrAlreadyExists aborts the writer.

func (*PassthroughWriter) Digest

func (pw *PassthroughWriter) Digest() digest.Digest

Digest may return an empty digest or panic until committed.

func (*PassthroughWriter) Status

func (pw *PassthroughWriter) Status() (content.Status, error)

Status returns the current state of the write

func (*PassthroughWriter) Truncate

func (pw *PassthroughWriter) Truncate(size int64) error

Truncate updates the size of the target blob

func (*PassthroughWriter) Write

func (pw *PassthroughWriter) Write(p []byte) (n int, err error)

type ReaderAtWrapper added in v0.5.0

type ReaderAtWrapper struct {
	// contains filtered or unexported fields
}

ReaderAtWrapper wraps an io.ReaderAt to provide an io.Reader

func NewReaderAtWrapper added in v0.5.0

func NewReaderAtWrapper(readerAt io.ReaderAt) *ReaderAtWrapper

func (*ReaderAtWrapper) Read added in v0.5.0

func (r *ReaderAtWrapper) Read(p []byte) (n int, err error)

type Registry added in v0.5.0

type Registry struct {
	remotes.Resolver
}

Registry provides content from a spec-compliant registry. Create and use a new one for each registry with a unique configuration of RegistryOptions.

func NewRegistry added in v0.5.0

func NewRegistry(opts RegistryOptions) (*Registry, error)

NewRegistry creates a new Registry store

type RegistryOptions added in v0.5.0

type RegistryOptions struct {
	Configs   []string
	Username  string
	Password  string
	Insecure  bool
	PlainHTTP bool
}

RegistryOptions provides configuration options to a Registry
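
A minimal sketch of creating and using a Registry as a remotes.Resolver; the registry address, reference, and credentials are placeholders, and the import path is assumed.

package main

import (
	"context"
	"fmt"

	"oras.land/oras-go/pkg/content" // assumed import path
)

func main() {
	// One Registry per unique registry configuration.
	reg, err := content.NewRegistry(content.RegistryOptions{
		Username:  "user",   // placeholder credentials
		Password:  "secret", // placeholder credentials
		PlainHTTP: true,     // e.g. a local registry without TLS
	})
	if err != nil {
		panic(err)
	}

	// Registry embeds remotes.Resolver, so it can resolve references directly.
	name, desc, err := reg.Resolve(context.Background(), "localhost:5000/hello:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println(name, desc.Digest)
}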

type Store added in v0.5.0

type Store interface {
	remotes.Pusher
	remotes.Fetcher
}

Store is the interface that groups the basic push and fetch methods.

type WriterOpt

type WriterOpt func(*WriterOpts) error

func WithBlocksize

func WithBlocksize(blocksize int) WriterOpt

WithBlocksize sets the blocksize used by the processor of data. The default is DefaultBlocksize, which is the same as that used by io.Copy. Includes a safety check to ensure the caller doesn't actively set it to <= 0.

func WithErrorOnNoName

func WithErrorOnNoName() WriterOpt

Some ingesters, when creating a Writer, do not return an error if the descriptor does not have a valid name. Passing WithErrorOnNoName tells the writer to return an error instead of passing the data to a nil writer.

func WithIgnoreNoName deprecated

func WithIgnoreNoName() WriterOpt

Some ingesters, when creating a Writer, return an error if the descriptor does not have a valid name. Passing WithIgnoreNoName tells the writer not to return an error, but rather to pass the data to a nil writer.

Deprecated: Use WithErrorOnNoName

func WithInputHash

func WithInputHash(hash digest.Digest) WriterOpt

WithInputHash provides the expected input hash to a writer. Writers may suppress their own calculation of a hash on the stream, taking this hash instead. If the Writer processes the data before passing it on to another Writer layer, this is the hash of the *input* stream.

To have a blank hash, use WithInputHash(BlankHash).

func WithMultiWriterIngester

func WithMultiWriterIngester() WriterOpt

WithMultiWriterIngester indicates that the passed ingester also implements MultiWriter and should be used as such. If this is set, but the ingester does not implement MultiWriter, calling Writer should return an error.

func WithOutputHash

func WithOutputHash(hash digest.Digest) WriterOpt

WithOutputHash provides the expected output hash to a writer. Writers may suppress their own calculation of a hash on the stream, taking this hash instead. If the Writer processes the data before passing it on to another Writer layer, this is the hash of the *output* stream.

To have a blank hash, use WithOutputHash(BlankHash).
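
For example, a writer can be constructed with a known output digest and a larger blocksize; this is a sketch with an assumed import path, and the digest here is simply computed up front from the payload.

package main

import (
	"bytes"

	"github.com/opencontainers/go-digest"

	"oras.land/oras-go/pkg/content" // assumed import path
)

func main() {
	payload := []byte("hello world")
	expected := digest.FromBytes(payload)

	var out bytes.Buffer
	// Supply the known output digest so the writer can skip recomputing it,
	// and raise the blocksize above the 32768-byte default.
	w := content.NewIoContentWriter(&out,
		content.WithOutputHash(expected),
		content.WithBlocksize(64*1024),
	)
	if _, err := w.Write(payload); err != nil {
		panic(err)
	}
	w.Close()
}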

type WriterOpts

type WriterOpts struct {
	InputHash           *digest.Digest
	OutputHash          *digest.Digest
	Blocksize           int
	MultiWriterIngester bool
	IgnoreNoName        bool
}

func DefaultWriterOpts

func DefaultWriterOpts() WriterOpts
