archiver

package module
v4.0.5
Published: Sep 18, 2024 License: MIT Imports: 37 Imported by: 3

README

archiver

Introducing Archiver 4.0 - a cross-platform, multi-format archive utility and Go library. A powerful and flexible library meets an elegant CLI in this generic replacement for several platform-specific or format-specific archive utilities.

⚠ v4 is in ALPHA. The core library APIs work pretty well but the command has not been implemented yet, nor have most automated tests. If you need the arc command, stick with v3 for now.

Features

  • Stream-oriented APIs
  • Automatically identify archive and compression formats:
    • By file name
    • By header
  • Traverse directories, archive files, and any other file uniformly as io/fs file systems:
    • DirFS
    • FileFS
    • ArchiveFS
  • Compress and decompress files
  • Create and extract archive files
  • Walk or traverse into archive files
  • Extract only specific files from archives
  • Insert (append) into .tar and .zip archives
  • Read from password-protected 7-Zip files
  • Numerous archive and compression formats supported
  • Extensible (add more formats just by registering them)
  • Cross-platform, static binary
  • Pure Go (no cgo)
  • Multithreaded Gzip
  • Adjust compression levels
  • Automatically add compressed files to zip archives without re-compressing
  • Open password-protected RAR archives
Supported compression formats
  • brotli (.br)
  • bzip2 (.bz2)
  • flate (.zip)
  • gzip (.gz)
  • lz4 (.lz4)
  • lzip (.lz)
  • snappy (.sz)
  • xz (.xz)
  • zlib (.zz)
  • zstandard (.zst)
Supported archive formats
  • .zip
  • .tar (including any compressed variants like .tar.gz)
  • .rar (read-only)
  • .7z (read-only)

Tar files can optionally be compressed using any compression format.

Command use

Coming soon for v4. See the last v3 docs.

Library use

$ go get github.com/mholt/archiver/v4
Create archive

Creating archives can be done entirely without needing a real disk or storage device since all you need is a list of File structs to pass in.

However, creating archives from files on disk is very common, so you can use the FilesFromDisk() function to help you map filenames on disk to their paths in the archive. Then create and customize the format type.

In this example, we add 4 files and a directory (which includes its contents recursively) to a .tar.gz file:

// map files on disk to their paths in the archive
files, err := archiver.FilesFromDisk(nil, map[string]string{
	"/path/on/disk/file1.txt": "file1.txt",
	"/path/on/disk/file2.txt": "subfolder/file2.txt",
	"/path/on/disk/file3.txt": "",              // put in root of archive as file3.txt
	"/path/on/disk/file4.txt": "subfolder/",    // put in subfolder as file4.txt
	"/path/on/disk/folder":    "Custom Folder", // contents added recursively
})
if err != nil {
	return err
}

// create the output file we'll write to
out, err := os.Create("example.tar.gz")
if err != nil {
	return err
}
defer out.Close()

// we can use the CompressedArchive type to gzip a tarball
// (compression is not required; you could use Tar directly)
format := archiver.CompressedArchive{
	Compression: archiver.Gz{},
	Archival:    archiver.Tar{},
}

// create the archive
err = format.Archive(context.Background(), out, files)
if err != nil {
	return err
}

The first parameter to FilesFromDisk() is an optional options struct, allowing you to customize how files are added.
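For instance, a minimal sketch (using the FromDiskOptions type documented further below) that dereferences symbolic links while gathering files:

// customize file gathering; FollowSymlinks and ClearAttributes
// are the documented FromDiskOptions fields
opts := &archiver.FromDiskOptions{
	FollowSymlinks:  true,  // add link targets as files instead of links
	ClearAttributes: false, // keep file attributes
}
files, err := archiver.FilesFromDisk(opts, map[string]string{
	"/path/on/disk/folder": "folder",
})
if err != nil {
	return err
}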

Extract archive

Extracting an archive, extracting from an archive, and walking an archive are all the same function.

Simply use your format type (e.g. Zip) to call Extract(). You'll pass in a context (for cancellation), the input stream, the list of files you want out of the archive, and a callback function to handle each file.

If you want all the files, pass in a nil list of file paths.

// the type that will be used to read the input stream
format := archiver.Zip{}

// the list of files we want out of the archive; any
// directories will include all their contents unless
// we return fs.SkipDir from our handler
// (leave this nil to walk ALL files from the archive)
fileList := []string{"file1.txt", "subfolder"}

handler := func(ctx context.Context, f archiver.File) error {
	// do something with the file
	return nil
}

err := format.Extract(ctx, input, fileList, handler)
if err != nil {
	return err
}
Identifying formats

Have an input stream with unknown contents? No problem, archiver can identify it for you. It will try matching based on filename and/or the header (which peeks at the stream):

format, input, err := archiver.Identify("filename.tar.zst", input)
if err != nil {
	return err
}
// you can now type-assert format to whatever you need;
// be sure to use the returned stream to re-read the bytes consumed by Identify()

// want to extract something?
if ex, ok := format.(archiver.Extractor); ok {
	// ... proceed to extract
}

// or maybe it's compressed and you want to decompress it?
if decom, ok := format.(archiver.Decompressor); ok {
	rc, err := decom.OpenReader(unknownFile)
	if err != nil {
		return err
	}
	defer rc.Close()

	// read from rc to get decompressed data
}

Identify() works by reading an arbitrary number of bytes from the beginning of the stream (just enough to check for file headers). It buffers those bytes and returns a new reader that lets you read them again along with the rest of the stream.

Virtual file systems

This is my favorite feature.

Let's say you have a file. It could be a real directory on disk, an archive, a compressed archive, or any other regular file. You don't really care; you just want to use it uniformly no matter what it is.

Use archiver to simply create a file system:

// filename could be:
// - a folder ("/home/you/Desktop")
// - an archive ("example.zip")
// - a compressed archive ("example.tar.gz")
// - a regular file ("example.txt")
// - a compressed regular file ("example.txt.gz")
fsys, err := archiver.FileSystem(filename)
if err != nil {
	return err
}

This is a fully-featured fs.FS, so you can open files and read directories, no matter what kind of file the input was.

For example, to open a specific file:

f, err := fsys.Open("file")
if err != nil {
	return err
}
defer f.Close()

If you opened a regular file, you can read from it. If it's a compressed file, reads are automatically decompressed.

If you opened a directory, you can list its contents:

if dir, ok := f.(fs.ReadDirFile); ok {
	// 0 gets all entries, but you can pass > 0 to paginate
	entries, err := dir.ReadDir(0)
	if err != nil {
		return err
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}

Or get a directory listing this way:

entries, err := fsys.ReadDir("Playlists")
if err != nil {
	return err
}
for _, e := range entries {
	fmt.Println(e.Name())
}

Or maybe you want to walk all or part of the file system, but skip a folder named .git:

err := fs.WalkDir(fsys, ".", func(path string, d fs.DirEntry, err error) error {
	if err != nil {
		return err
	}
	if path == ".git" {
		return fs.SkipDir
	}
	fmt.Println("Walking:", path, "Dir?", d.IsDir())
	return nil
})
if err != nil {
	return err
}
Use with http.FileServer

It can be used with http.FileServer to browse archives and directories in a browser. However, due to how http.FileServer works, don't use http.FileServer directly with compressed files; instead, wrap it as follows:

fileServer := http.FileServer(http.FS(archiveFS))
http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
	// disable range request
	writer.Header().Set("Accept-Ranges", "none")
	request.Header.Del("Range")
	
	// disable content-type sniffing
	ctype := mime.TypeByExtension(filepath.Ext(request.URL.Path))
	writer.Header()["Content-Type"] = nil
	if ctype != "" {
		writer.Header().Set("Content-Type", ctype)
	}
	fileServer.ServeHTTP(writer, request)
})

http.FileServer tries to sniff the Content-Type by default if it can't be inferred from the file name. To do this, the http package reads from the file and then seeks back to the start of the file, which this library cannot currently do. The same applies to Range requests. Seeking in archives is not currently supported by archiver due to limitations in its dependencies.

If a Content-Type is desired, you can register it yourself.
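For example, a small sketch using the standard library's mime package (the extension and type here are only illustrative):

// register a content type for an extension the mime package
// doesn't already know; the values are placeholders
if err := mime.AddExtensionType(".md", "text/markdown"); err != nil {
	return err
}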

Compress data

Compression formats let you open writers to compress data:

// wrap underlying writer w
compressor, err := archiver.Zstd{}.OpenWriter(w)
if err != nil {
	return err
}
defer compressor.Close()

// writes to compressor will be compressed
Decompress data

Similarly, compression formats let you open readers to decompress data:

// wrap underlying reader r
decompressor, err := archiver.Brotli{}.OpenReader(r)
if err != nil {
	return err
}
defer decompressor.Close()

// reads from decompressor will be decompressed
Append to tarball and zip archives

Tar and Zip archives can be appended to without creating a whole new archive by calling Insert() on a tar or zip stream. However, for tarballs, this requires that the tarball is not compressed (due to complexities with modifying compression dictionaries).

Here is an example that appends a file to a tarball on disk:

tarball, err := os.OpenFile("example.tar", os.O_RDWR, 0644)
if err != nil {
	return err
}
defer tarball.Close()

// prepare a text file for the root of the archive
files, err := archiver.FilesFromDisk(nil, map[string]string{
	"/home/you/lastminute.txt": "",
})
if err != nil {
	return err
}

err = archiver.Tar{}.Insert(context.Background(), tarball, files)
if err != nil {
	return err
}

The code is similar for inserting into a Zip archive, except you'll call Insert() on the Zip type instead.
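For example, a sketch assuming an existing example.zip on disk (an *os.File satisfies the io.ReadWriteSeeker that Insert requires):

archive, err := os.OpenFile("example.zip", os.O_RDWR, 0644)
if err != nil {
	return err
}
defer archive.Close()

files, err := archiver.FilesFromDisk(nil, map[string]string{
	"/home/you/lastminute.txt": "",
})
if err != nil {
	return err
}

err = archiver.Zip{}.Insert(context.Background(), archive, files)
if err != nil {
	return err
}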

Documentation

Index

Constants

const (
	ZipMethodBzip2 = 12
	// TODO: LZMA: Disabled - because 7z isn't able to unpack ZIP+LZMA ZIP+LZMA2 archives made this way - and vice versa.
	// ZipMethodLzma     = 14
	ZipMethodZstd = 93
	ZipMethodXz   = 95
)

Additional compression methods not offered by archive/zip. See https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT section 4.4.5.
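As a sketch, one of these constants can be assigned to the Zip type's Compression field (documented below) to store entries with a non-default method; whatever extracts the archive must also support that method:

// use Zstandard instead of the default method for stored files
format := archiver.Zip{
	Compression: archiver.ZipMethodZstd,
}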

Variables

var ErrNoMatch = fmt.Errorf("no formats matched")

ErrNoMatch is returned if there are no matching formats.

Functions

func FileSystem

func FileSystem(ctx context.Context, root string) (fs.FS, error)

FileSystem opens the file at root as a read-only file system. The root may be a path to a directory, archive file, compressed archive file, compressed file, or any other file on disk.

If root is a directory, its contents are accessed directly from the disk's file system. If root is an archive file, its contents can be accessed like a normal directory; compressed archive files are transparently decompressed as contents are accessed. And if root is any other file, it is the only file in the file system; if the file is compressed, it is transparently decompressed when read from.

This method essentially offers uniform read access to various kinds of files: directories, archives, compressed archives, and individual files are all treated the same way.

Except for zip files, the returned FS values are guaranteed to be fs.ReadDirFS and fs.StatFS types, and may also be fs.SubFS.

func RegisterFormat

func RegisterFormat(format Format)

RegisterFormat registers a format. It should be called during init. Duplicate formats by name are not allowed and will panic.
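A minimal sketch of registering a custom format during init; MyFormat and its behavior are hypothetical placeholders for a type that implements the Format interface:

// MyFormat is a hypothetical custom format.
type MyFormat struct{}

// Name returns the format's name (by convention, its file extension).
func (MyFormat) Name() string { return ".myext" }

// Match reports whether the filename and/or stream look like this format.
func (MyFormat) Match(filename string, stream io.Reader) (archiver.MatchResult, error) {
	// a real implementation would also check the stream's header bytes
	return archiver.MatchResult{ByName: strings.HasSuffix(filename, ".myext")}, nil
}

func init() {
	archiver.RegisterFormat(MyFormat{})
}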

func TopDirOpen

func TopDirOpen(fsys fs.FS, name string) (fs.File, error)

TopDirOpen is a special Open() function that may be useful if a file system root was created by extracting an archive.

It first tries the file name as given, but if that returns an error, it tries the name without the first element of the path. In other words, if "a/b/c" returns an error, then "b/c" will be tried instead.

Consider an archive that contains a file "a/b/c". When the archive is extracted, the contents may be created without a new parent/root folder to contain them, and the path of the same file outside the archive may be lacking an exclusive root or parent container. Thus it is likely for a file system created for the same files extracted to disk to be rooted at one of the top-level files/folders from the archive instead of a parent folder. For example, the file known as "a/b/c" when rooted at the archive becomes "b/c" after extraction when rooted at "a" on disk (because no new, exclusive top-level folder was created). This difference in paths can make it difficult to use archives and directories uniformly. Hence these TopDir* functions which attempt to smooth over the difference.

Some extraction utilities do create a container folder for archive contents when extracting, in which case the user may give that path as the root. In that case, these TopDir* functions are not necessary (but aren't harmful either). They are primarily useful if you are not sure whether the root is an archive file or is an extracted archive file, as they will work with the same filename/path inputs regardless of the presence of a top-level directory.
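A short sketch, assuming the archive contained a file "a/b/c" and it is unknown whether fsys is rooted above "a" or at "a" itself:

// succeeds whether the entry is reachable as "a/b/c" or as "b/c"
f, err := archiver.TopDirOpen(fsys, "a/b/c")
if err != nil {
	return err
}
defer f.Close()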

func TopDirReadDir

func TopDirReadDir(fsys fs.FS, name string) ([]fs.DirEntry, error)

TopDirReadDir is like TopDirOpen but for ReadDir.

func TopDirStat

func TopDirStat(fsys fs.FS, name string) (fs.FileInfo, error)

TopDirStat is like TopDirOpen but for Stat.

Types

type Archival

type Archival interface {
	Format
	Archiver
	Extractor
}

Archival is an archival format with both archive and extract methods.

type ArchiveAsyncJob

type ArchiveAsyncJob struct {
	File   File
	Result chan<- error
}

ArchiveAsyncJob contains a File to be archived and a channel that the result of the archiving should be returned on.

type ArchiveFS

type ArchiveFS struct {
	// set one of these
	Path   string            // path to the archive file on disk, or...
	Stream *io.SectionReader // ...stream from which to read archive

	Format  Archival        // the archive format
	Prefix  string          // optional subdirectory in which to root the fs
	Context context.Context // optional
}

ArchiveFS allows accessing an archive (or a compressed archive) using a consistent file system interface. Essentially, it allows traversal and reading of archive contents the same way as any normal directory on disk. The contents of compressed archives are transparently decompressed.

A valid ArchiveFS value must set either Path or Stream. If Path is set, a literal file will be opened from the disk. If Stream is set, new SectionReaders will be implicitly created to access the stream, enabling safe, concurrent access.

NOTE: Due to Go's file system APIs (see package io/fs), the performance of ArchiveFS when used with fs.WalkDir() is poor for archives with lots of files (see issue #326). The fs.WalkDir() API requires listing each directory's contents in turn, and the only way to ensure we return the complete list of folder contents is to traverse the whole archive and build a slice; so if this is done for the root of an archive with many files, performance tends toward O(n^2) as the entire archive is walked for every folder that is enumerated (WalkDir calls ReadDir recursively). If you do not need each directory's contents walked in order, please prefer calling Extract() from an archive type directly; this will perform a O(n) walk of the contents in archive order, rather than the slower directory tree order.
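For illustration, a minimal sketch that builds an ArchiveFS from a stream (an *os.File wrapped in an io.SectionReader), assuming example.zip exists on disk:

f, err := os.Open("example.zip")
if err != nil {
	return err
}
defer f.Close()

info, err := f.Stat()
if err != nil {
	return err
}

fsys := archiver.ArchiveFS{
	Stream:  io.NewSectionReader(f, 0, info.Size()),
	Format:  archiver.Zip{},
	Context: context.Background(),
}

entries, err := fsys.ReadDir(".")
if err != nil {
	return err
}
for _, e := range entries {
	fmt.Println(e.Name())
}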

func (ArchiveFS) Open

func (f ArchiveFS) Open(name string) (fs.File, error)

Open opens the named file from within the archive. If name is "." then the archive file itself will be opened as a directory file.

func (ArchiveFS) ReadDir

func (f ArchiveFS) ReadDir(name string) ([]fs.DirEntry, error)

ReadDir reads the named directory from within the archive.

func (ArchiveFS) Stat

func (f ArchiveFS) Stat(name string) (fs.FileInfo, error)

Stat stats the named file from within the archive. If name is "." then the archive file itself is statted and treated as a directory file.

func (*ArchiveFS) Sub

func (f *ArchiveFS) Sub(dir string) (fs.FS, error)

Sub returns an FS corresponding to the subtree rooted at dir.

type Archiver

type Archiver interface {
	// Archive writes an archive file to output with the given files.
	//
	// Context cancellation must be honored.
	Archive(ctx context.Context, output io.Writer, files []File) error
}

Archiver can create a new archive.

type ArchiverAsync

type ArchiverAsync interface {
	Archiver

	// Use ArchiveAsync if you can't pre-assemble a list of all
	// the files for the archive. Close the jobs channel after
	// all the files have been sent.
	//
	// This won't return until the channel is closed.
	ArchiveAsync(ctx context.Context, output io.Writer, jobs <-chan ArchiveAsyncJob) error
}

ArchiverAsync is an Archiver that can also create archives asynchronously by pumping files into a channel as they are discovered.
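A sketch of the pattern, assuming files is a slice of File values and out is the output writer (as in the earlier README examples):

jobs := make(chan archiver.ArchiveAsyncJob)

go func() {
	defer close(jobs) // close the channel after all files have been sent
	for _, file := range files {
		result := make(chan error, 1)
		jobs <- archiver.ArchiveAsyncJob{File: file, Result: result}
		if err := <-result; err != nil {
			log.Println("archiving failed:", err)
		}
	}
}()

err := archiver.Tar{}.ArchiveAsync(context.Background(), out, jobs)
if err != nil {
	return err
}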

type Brotli

type Brotli struct {
	Quality int
}

Brotli facilitates brotli compression.

func (Brotli) Match

func (br Brotli) Match(filename string, stream io.Reader) (MatchResult, error)

func (Brotli) Name

func (Brotli) Name() string

func (Brotli) OpenReader

func (Brotli) OpenReader(r io.Reader) (io.ReadCloser, error)

func (Brotli) OpenWriter

func (br Brotli) OpenWriter(w io.Writer) (io.WriteCloser, error)

type Bz2

type Bz2 struct {
	CompressionLevel int
}

Bz2 facilitates bzip2 compression.

func (Bz2) Match

func (bz Bz2) Match(filename string, stream io.Reader) (MatchResult, error)

func (Bz2) Name

func (Bz2) Name() string

func (Bz2) OpenReader

func (Bz2) OpenReader(r io.Reader) (io.ReadCloser, error)

func (Bz2) OpenWriter

func (bz Bz2) OpenWriter(w io.Writer) (io.WriteCloser, error)

type CompressedArchive

type CompressedArchive struct {
	Compression
	Archival
}

CompressedArchive combines a compression format on top of an archive format (e.g. "tar.gz") and provides both functionalities in a single type. It ensures that archive functions are wrapped by compressors and decompressors. However, compressed archives have some limitations; for example, files cannot be inserted/appended because of complexities with modifying existing compression state (perhaps this could be overcome, but I'm not about to try it).

As this type is intended to compose compression and archive formats, both must be specified in order for this value to be valid, or its methods will return errors.

func (CompressedArchive) Archive

func (caf CompressedArchive) Archive(ctx context.Context, output io.Writer, files []File) error

Archive adds files to the output archive while compressing the result.

func (CompressedArchive) ArchiveAsync

func (caf CompressedArchive) ArchiveAsync(ctx context.Context, output io.Writer, jobs <-chan ArchiveAsyncJob) error

ArchiveAsync adds files to the output archive while compressing the result asynchronously.

func (CompressedArchive) Extract

func (caf CompressedArchive) Extract(ctx context.Context, sourceArchive io.Reader, pathsInArchive []string, handleFile FileHandler) error

Extract reads files out of an archive while decompressing the results. If Extract is not called from ArchiveFS.Open, then the FileHandler passed in must close all opened files by the time the Extract walk finishes.

func (CompressedArchive) Match

func (caf CompressedArchive) Match(filename string, stream io.Reader) (MatchResult, error)

Match matches if the input matches both the compression and archive format.

func (CompressedArchive) Name

func (caf CompressedArchive) Name() string

Name returns a concatenation of the archive format name and the compression format name.

type Compression

type Compression interface {
	Format
	Compressor
	Decompressor
}

Compression is a compression format with both compress and decompress methods.

type Compressor

type Compressor interface {
	// OpenWriter wraps w with a new writer that compresses what is written.
	// The writer must be closed when writing is finished.
	OpenWriter(w io.Writer) (io.WriteCloser, error)
}

Compressor can compress data by wrapping a writer.

type Decompressor

type Decompressor interface {
	// OpenReader wraps r with a new reader that decompresses what is read.
	// The reader must be closed when reading is finished.
	OpenReader(r io.Reader) (io.ReadCloser, error)
}

Decompressor can decompress data by wrapping a reader.

type DirFS

type DirFS string

DirFS allows accessing a directory on disk with a consistent file system interface. It is almost the same as os.DirFS, except for some reason os.DirFS only implements Open() and Stat(), but we also need ReadDir(). Seems like an obvious miss (as of Go 1.17) and I have questions: https://twitter.com/mholt6/status/1476058551432876032

func (DirFS) Open

func (f DirFS) Open(name string) (fs.File, error)

Open opens the named file.

func (DirFS) ReadDir

func (f DirFS) ReadDir(name string) ([]fs.DirEntry, error)

ReadDir returns a listing of all the files in the named directory.

func (DirFS) Stat

func (f DirFS) Stat(name string) (fs.FileInfo, error)

Stat returns info about the named file.

func (DirFS) Sub

func (f DirFS) Sub(dir string) (fs.FS, error)

Sub returns an FS corresponding to the subtree rooted at dir.

type Extractor

type Extractor interface {
	// Extract reads the files at pathsInArchive from sourceArchive.
	// If pathsInArchive is nil, all files are extracted without discretion.
	// If pathsInArchive is empty, no files are extracted.
	// If a path refers to a directory, all files within it are extracted.
	// Extracted files are passed to the handleFile callback for handling.
	//
	// Context cancellation must be honored.
	Extract(ctx context.Context, sourceArchive io.Reader, pathsInArchive []string, handleFile FileHandler) error
}

Extractor can extract files from an archive.

type File

type File struct {
	fs.FileInfo

	// The file header as used/provided by the archive format.
	// Typically, you do not need to set this field when creating
	// an archive.
	Header interface{}

	// The path of the file as it appears in the archive.
	// This is equivalent to Header.Name (for most Header
	// types). We require it to be specified here because
	// it is such a common field and we want to preserve
	// format-agnosticism (no type assertions) for basic
	// operations.
	//
	// EXPERIMENTAL: If inserting a file into an archive,
	// and this is left blank, the implementation of the
	// archive format can default to using the file's base
	// name.
	NameInArchive string

	// For symbolic and hard links, the target of the link.
	// Not supported by all archive formats.
	LinkTarget string

	// A callback function that opens the file to read its
	// contents. The file must be closed when reading is
	// complete. Nil for files that don't have content
	// (such as directories and links).
	Open func() (io.ReadCloser, error)
}

File is a virtualized, generalized file abstraction for interacting with archives.

func FilesFromDisk

func FilesFromDisk(options *FromDiskOptions, filenames map[string]string) ([]File, error)

FilesFromDisk returns a list of files by walking the directories in the given filenames map. The keys are the names on disk, and the values are their associated names in the archive.

Map keys that specify directories on disk will be walked and added to the archive recursively, rooted at the named directory. They should use the platform's path separator (backslash on Windows; slash on everything else). For convenience, map keys that end in a separator ('/', or '\' on Windows) will enumerate contents only without adding the folder itself to the archive.

Map values should typically use slash ('/') as the separator regardless of the platform, as most archive formats standardize on that rune as the directory separator for filenames within an archive. For convenience, map values that are empty string are interpreted as the base name of the file (sans path) in the root of the archive; and map values that end in a slash will use the base name of the file in that folder of the archive.

File gathering will adhere to the settings specified in options.

This function is used primarily when preparing a list of files to add to an archive.

func (File) Stat

func (f File) Stat() (fs.FileInfo, error)

type FileFS

type FileFS struct {
	// The path to the file on disk.
	Path string

	// If file is compressed, setting this field will
	// transparently decompress reads.
	Compression Decompressor
}

FileFS allows accessing a file on disk using a consistent file system interface. The value should be the path to a regular file, not a directory. This file will be the only entry in the file system and will be at its root. It can be accessed within the file system by the name of "." or the filename.

If the file is compressed, set the Compression field so that reads from the file will be transparently decompressed.
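A minimal sketch, assuming notes.txt.gz is a gzip-compressed file on disk:

fsys := archiver.FileFS{
	Path:        "/home/you/notes.txt.gz",
	Compression: archiver.Gz{},
}

// per the docs above, the file is accessible as "." or by its filename
f, err := fsys.Open(".")
if err != nil {
	return err
}
defer f.Close()
// reads from f are transparently decompressed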

func (FileFS) Open

func (f FileFS) Open(name string) (fs.File, error)

Open opens the named file, which must be the file used to create the file system.

func (FileFS) ReadDir

func (f FileFS) ReadDir(name string) ([]fs.DirEntry, error)

ReadDir returns a directory listing with the file as the singular entry.

func (FileFS) Stat

func (f FileFS) Stat(name string) (fs.FileInfo, error)

Stat stats the named file, which must be the file used to create the file system.

type FileHandler

type FileHandler func(ctx context.Context, f File) error

FileHandler is a callback function that is used to handle files as they are read from an archive; it is kind of like fs.WalkDirFunc. Handler functions that open their files must not overlap or run concurrently, as files may be read from the same sequential stream; always close the file before returning.

If the special error value fs.SkipDir is returned, the directory of the file (or the file itself if it is a directory) will not be walked. Note that because archive contents are not necessarily ordered, skipping directories requires memory, and skipping lots of directories may run up your memory bill.

Any other returned error will terminate a walk.
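For example, a handler sketch that skips a directory named "vendor" and closes any file it opens before returning:

handler := func(ctx context.Context, f archiver.File) error {
	if f.IsDir() && f.NameInArchive == "vendor" {
		return fs.SkipDir // don't walk this directory's contents
	}
	if f.Open == nil {
		return nil // directories and links have no content to read
	}
	rc, err := f.Open()
	if err != nil {
		return err
	}
	defer rc.Close() // always close before returning

	_, err = io.Copy(io.Discard, rc) // do something with the contents
	return err
}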

type Format

type Format interface {
	// Name returns the name of the format.
	Name() string

	// Match returns true if the given name/stream is recognized.
	// One of the arguments is optional: filename might be empty
	// if working with an unnamed stream, or stream might be
	// empty if only working with a filename. The filename should
	// consist only of the base name, not a path component, and is
	// typically used for matching by file extension. However,
	// matching by reading the stream is preferred. Match reads
	// only as many bytes as needed to determine a match. To
	// preserve the stream through matching, you should either
	// buffer what is read by Match, or seek to the last position
	// before Match was called.
	Match(filename string, stream io.Reader) (MatchResult, error)
}

Format represents either an archive or compression format.

func Identify

func Identify(filename string, stream io.Reader) (Format, io.Reader, error)

Identify iterates the registered formats and returns the one that matches the given filename and/or stream. It is capable of identifying compressed files (.gz, .xz...), archive files (.tar, .zip...), and compressed archive files (tar.gz, tar.bz2...). The returned Format value can be type-asserted to ascertain its capabilities.

If no matching formats were found, special error ErrNoMatch is returned.

If stream is nil then it will only match on file name and the returned io.Reader will be nil.

If stream is non-nil then the returned io.Reader will always be non-nil and will read from the same point as the reader which was passed in; it should be used in place of the input stream after calling Identify() because it preserves and re-reads the bytes that were already read during the identification process.

type FromDiskOptions

type FromDiskOptions struct {
	// If true, symbolic links will be dereferenced, meaning that
	// the link will not be added as a link, but what the link
	// points to will be added as a file.
	FollowSymlinks bool

	// If true, some file attributes will not be preserved.
	// Name, size, type, and permissions will still be preserved.
	ClearAttributes bool
}

FromDiskOptions specifies various options for gathering files from disk.

type Gz

type Gz struct {
	// Gzip compression level. See https://pkg.go.dev/compress/flate#pkg-constants
	// for some predefined constants. If 0, DefaultCompression is assumed rather
	// than no compression.
	CompressionLevel int

	// DisableMultistream controls whether the reader supports multistream files.
	// See https://pkg.go.dev/compress/gzip#example-Reader.Multistream
	DisableMultistream bool

	// Use a fast parallel Gzip implementation. This is only
	// effective for large streams (about 1 MB or greater).
	Multithreaded bool
}

Gz facilitates gzip compression.

func (Gz) Match

func (gz Gz) Match(filename string, stream io.Reader) (MatchResult, error)

func (Gz) Name

func (Gz) Name() string

func (Gz) OpenReader

func (gz Gz) OpenReader(r io.Reader) (io.ReadCloser, error)

func (Gz) OpenWriter

func (gz Gz) OpenWriter(w io.Writer) (io.WriteCloser, error)

type Inserter

type Inserter interface {
	// Insert inserts the files into archive.
	//
	// Context cancellation must be honored.
	Insert(ctx context.Context, archive io.ReadWriteSeeker, files []File) error
}

Inserter can insert files into an existing archive. EXPERIMENTAL: This API is subject to change.

type Lz4

type Lz4 struct {
	CompressionLevel int
}

Lz4 facilitates LZ4 compression.

func (Lz4) Match

func (lz Lz4) Match(filename string, stream io.Reader) (MatchResult, error)

func (Lz4) Name

func (Lz4) Name() string

func (Lz4) OpenReader

func (Lz4) OpenReader(r io.Reader) (io.ReadCloser, error)

func (Lz4) OpenWriter

func (lz Lz4) OpenWriter(w io.Writer) (io.WriteCloser, error)

type Lzip added in v4.0.3

type Lzip struct{}

Lzip facilitates lzip compression.

func (Lzip) Match added in v4.0.3

func (lz Lzip) Match(filename string, stream io.Reader) (MatchResult, error)

func (Lzip) Name added in v4.0.3

func (Lzip) Name() string

func (Lzip) OpenReader added in v4.0.3

func (Lzip) OpenReader(r io.Reader) (io.ReadCloser, error)

func (Lzip) OpenWriter added in v4.0.3

func (Lzip) OpenWriter(w io.Writer) (io.WriteCloser, error)

type MatchResult

type MatchResult struct {
	ByName, ByStream bool
}

MatchResult records whether the format was matched by name, by stream, or both. Name usually refers to matching by file extension, and stream usually refers to reading the first few bytes of the stream (its header). A stream match is generally stronger, as filenames are not always indicative of their contents, if they even exist at all.

func (MatchResult) Matched

func (mr MatchResult) Matched() bool

Matched returns true if a match was made by either name or stream.

type Rar

type Rar struct {
	// If true, errors encountered during reading or writing
	// a file within an archive will be logged and the
	// operation will continue on remaining files.
	ContinueOnError bool

	// Password to open archives.
	Password string
}

func (Rar) AnotherNames

func (Rar) AnotherNames() []string

func (Rar) Archive

func (r Rar) Archive(_ context.Context, _ io.Writer, _ []File) error

Archive is not implemented for RAR, but the method exists so that Rar satisfies the Archival interface.

func (Rar) Extract

func (r Rar) Extract(ctx context.Context, sourceArchive io.Reader, pathsInArchive []string, handleFile FileHandler) error

func (Rar) LsAllFile

func (r Rar) LsAllFile(ctx context.Context, sourceArchive io.Reader, handleFile FileHandler) error

func (Rar) Match

func (r Rar) Match(filename string, stream io.Reader) (MatchResult, error)

func (Rar) Name

func (Rar) Name() string

type SevenZip

type SevenZip struct {
	// If true, errors encountered during reading or writing
	// a file within an archive will be logged and the
	// operation will continue on remaining files.
	ContinueOnError bool

	// The password, if dealing with an encrypted archive.
	Password string
}

func (SevenZip) Archive

func (z SevenZip) Archive(_ context.Context, _ io.Writer, _ []File) error

Archive is not implemented for 7z, but the method exists so that SevenZip satisfies the Archival interface.

func (SevenZip) Extract

func (z SevenZip) Extract(ctx context.Context, sourceArchive io.Reader, pathsInArchive []string, handleFile FileHandler) error

Extract extracts files from z, implementing the Extractor interface. Uniquely, however, sourceArchive must be an io.ReaderAt and io.Seeker, which are oddly disjoint interfaces from io.Reader, which is what the method signature requires. We chose this signature for the interface because we figure you can Read() from anything you can ReadAt() or Seek() with. Due to the nature of the 7z archive format, if sourceArchive is not an io.Seeker and io.ReaderAt, an error is returned.
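A brief sketch, assuming an encrypted example.7z on disk and a handler as shown earlier (an *os.File satisfies io.ReaderAt and io.Seeker):

f, err := os.Open("example.7z")
if err != nil {
	return err
}
defer f.Close()

format := archiver.SevenZip{Password: "secret"} // password is a placeholder
err = format.Extract(context.Background(), f, nil, handler)
if err != nil {
	return err
}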

func (SevenZip) Match

func (z SevenZip) Match(filename string, stream io.Reader) (MatchResult, error)

func (SevenZip) Name

func (z SevenZip) Name() string

type Sz

type Sz struct{}

Sz facilitates Snappy compression.

func (Sz) Match

func (sz Sz) Match(filename string, stream io.Reader) (MatchResult, error)

func (Sz) Name

func (sz Sz) Name() string

func (Sz) OpenReader

func (Sz) OpenReader(r io.Reader) (io.ReadCloser, error)

func (Sz) OpenWriter

func (Sz) OpenWriter(w io.Writer) (io.WriteCloser, error)

type Tar

type Tar struct {
	// If true, preserve only numeric user and group id
	NumericUIDGID bool

	// If true, errors encountered during reading or writing
	// a file within an archive will be logged and the
	// operation will continue on remaining files.
	ContinueOnError bool
}

func (Tar) Archive

func (t Tar) Archive(ctx context.Context, output io.Writer, files []File) error

func (Tar) ArchiveAsync

func (t Tar) ArchiveAsync(ctx context.Context, output io.Writer, jobs <-chan ArchiveAsyncJob) error

func (Tar) Extract

func (t Tar) Extract(ctx context.Context, sourceArchive io.Reader, pathsInArchive []string, handleFile FileHandler) error

func (Tar) Insert

func (t Tar) Insert(ctx context.Context, into io.ReadWriteSeeker, files []File) error

func (Tar) Match

func (t Tar) Match(filename string, stream io.Reader) (MatchResult, error)

func (Tar) Name

func (Tar) Name() string

type Xz

type Xz struct{}

Xz facilitates xz compression.

func (Xz) Match

func (x Xz) Match(filename string, stream io.Reader) (MatchResult, error)

func (Xz) Name

func (Xz) Name() string

func (Xz) OpenReader

func (Xz) OpenReader(r io.Reader) (io.ReadCloser, error)

func (Xz) OpenWriter

func (Xz) OpenWriter(w io.Writer) (io.WriteCloser, error)

type Zip

type Zip struct {
	// Only compress files which are not already in a
	// compressed format (determined simply by examining
	// file extension).
	SelectiveCompression bool

	// The method or algorithm for compressing stored files.
	Compression uint16

	// If true, errors encountered during reading or writing
	// a file within an archive will be logged and the
	// operation will continue on remaining files.
	ContinueOnError bool

	// For files in zip archives that do not have UTF-8
	// encoded filenames and comments, specify the character
	// encoding here.
	TextEncoding string
}

func (Zip) AnotherNames

func (z Zip) AnotherNames() []string

func (Zip) Archive

func (z Zip) Archive(ctx context.Context, output io.Writer, files []File) error

func (Zip) ArchiveAsync

func (z Zip) ArchiveAsync(ctx context.Context, output io.Writer, jobs <-chan ArchiveAsyncJob) error

func (Zip) Extract

func (z Zip) Extract(ctx context.Context, sourceArchive io.Reader, pathsInArchive []string, handleFile FileHandler) error

Extract extracts files from z, implementing the Extractor interface. Uniquely, however, sourceArchive must be an io.ReaderAt and io.Seeker, which are oddly disjoint interfaces from io.Reader, which is what the method signature requires. We chose this signature for the interface because we figure you can Read() from anything you can ReadAt() or Seek() with. Due to the nature of the zip archive format, if sourceArchive is not an io.Seeker and io.ReaderAt, an error is returned.

func (Zip) Insert added in v4.0.3

func (z Zip) Insert(ctx context.Context, into io.ReadWriteSeeker, files []File) error

Insert appends the listed files into the provided Zip archive stream.

func (Zip) LsAllFile

func (z Zip) LsAllFile(ctx context.Context, sourceArchive io.Reader, handleFile FileHandler) (reader *zip.Reader, err error)

func (Zip) Match

func (z Zip) Match(filename string, stream io.Reader) (MatchResult, error)

func (Zip) Name

func (z Zip) Name() string

type Zlib

type Zlib struct {
	CompressionLevel int
}

Zlib facilitates zlib compression.

func (Zlib) Match

func (zz Zlib) Match(filename string, stream io.Reader) (MatchResult, error)

func (Zlib) Name

func (Zlib) Name() string

func (Zlib) OpenReader

func (Zlib) OpenReader(r io.Reader) (io.ReadCloser, error)

func (Zlib) OpenWriter

func (zz Zlib) OpenWriter(w io.Writer) (io.WriteCloser, error)

type Zstd

type Zstd struct {
	EncoderOptions []zstd.EOption
	DecoderOptions []zstd.DOption
}

Zstd facilitates Zstandard compression.
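A small sketch setting encoder options; the zstd option types referenced by these fields come from the github.com/klauspost/compress/zstd package:

format := archiver.Zstd{
	EncoderOptions: []zstd.EOption{
		zstd.WithEncoderLevel(zstd.SpeedBestCompression),
	},
}

// writes to w will be compressed with the configured level
w, err := format.OpenWriter(out)
if err != nil {
	return err
}
defer w.Close()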

func (Zstd) Match

func (zs Zstd) Match(filename string, stream io.Reader) (MatchResult, error)

func (Zstd) Name

func (Zstd) Name() string

func (Zstd) OpenReader

func (zs Zstd) OpenReader(r io.Reader) (io.ReadCloser, error)

func (Zstd) OpenWriter

func (zs Zstd) OpenWriter(w io.Writer) (io.WriteCloser, error)

Directories

Path
cmd/arc
