Documentation ¶
Index ¶
- Constants
- Variables
- func FileSystem(ctx context.Context, filename string, stream ReaderAtSeeker) (fs.FS, error)
- func RegisterFormat(format Format)
- func TopDirOpen(fsys fs.FS, name string) (fs.File, error)
- func TopDirReadDir(fsys fs.FS, name string) ([]fs.DirEntry, error)
- func TopDirStat(fsys fs.FS, name string) (fs.FileInfo, error)
- type Archival
- type Archive
- func (ar Archive) Archive(ctx context.Context, output io.Writer, files []FileInfo) error
- func (ar Archive) ArchiveAsync(ctx context.Context, output io.Writer, jobs <-chan ArchiveAsyncJob) error
- func (ar Archive) Extension() string
- func (ar Archive) Extract(ctx context.Context, sourceArchive io.Reader, handleFile FileHandler) error
- func (ar Archive) Match(ctx context.Context, filename string, stream io.Reader) (MatchResult, error)
- type ArchiveAsyncJob
- type ArchiveFS
- type Archiver
- type ArchiverAsync
- type Brotli
- type Bz2
- type Compression
- type Compressor
- type Decompressor
- type Extraction
- type Extractor
- type FileFS
- type FileHandler
- type FileInfo
- type Format
- type FromDiskOptions
- type Gz
- type Inserter
- type Lz4
- type Lzip
- type MatchResult
- type Rar
- type ReaderAtSeeker
- type SevenZip
- type Sz
- type Tar
- func (t Tar) Archive(ctx context.Context, output io.Writer, files []FileInfo) error
- func (t Tar) ArchiveAsync(ctx context.Context, output io.Writer, jobs <-chan ArchiveAsyncJob) error
- func (Tar) Extension() string
- func (t Tar) Extract(ctx context.Context, sourceArchive io.Reader, handleFile FileHandler) error
- func (t Tar) Insert(ctx context.Context, into io.ReadWriteSeeker, files []FileInfo) error
- func (t Tar) Match(_ context.Context, filename string, stream io.Reader) (MatchResult, error)
- type Xz
- type Zip
- func (z Zip) Archive(ctx context.Context, output io.Writer, files []FileInfo) error
- func (z Zip) ArchiveAsync(ctx context.Context, output io.Writer, jobs <-chan ArchiveAsyncJob) error
- func (z Zip) Extension() string
- func (z Zip) Extract(ctx context.Context, sourceArchive io.Reader, handleFile FileHandler) error
- func (z Zip) Insert(ctx context.Context, into io.ReadWriteSeeker, files []FileInfo) error
- func (z Zip) Match(_ context.Context, filename string, stream io.Reader) (MatchResult, error)
- type Zlib
- type Zstd
Constants ¶
const (
	ZipMethodBzip2 = 12
	// TODO: LZMA: Disabled - because 7z isn't able to unpack ZIP+LZMA ZIP+LZMA2 archives made this way - and vice versa.
	// ZipMethodLzma = 14
	ZipMethodZstd = 93
	ZipMethodXz   = 95
)
Additional compression methods not offered by archive/zip. See https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT section 4.4.5.
Variables ¶
var NoMatch = fmt.Errorf("no formats matched")
NoMatch is a special error returned if there are no matching formats.
Functions ¶
func FileSystem ¶
FileSystem identifies the format of the input and returns a read-only file system. The input can be a filename, stream, or both.
If only a filename is specified, it may be a path to a directory, archive file, compressed archive file, compressed regular file, or any other regular file on disk. If the filename is a directory, its contents are accessed directly from the device's file system. If the filename is an archive file, the contents can be accessed like a normal directory; compressed archive files are transparently decompressed as contents are accessed. And if the filename is any other file, it is the only file in the returned file system; if the file is compressed, it is transparently decompressed when read from.
If a stream is specified, the filename (if available) is used as a hint to help identify its format. Streams of archive files must be able to be made into an io.SectionReader (for safe concurrency) which requires io.ReaderAt and io.Seeker (to efficiently determine size). The automatic format identification requires io.Reader and will use io.Seeker if supported to avoid buffering.
Whether the data comes from disk or a stream, it is peeked at to automatically detect which format to use.
This function essentially offers uniform read access to various kinds of files: directories, archives, compressed archives, individual files, and file streams are all treated the same way.
NOTE: The performance of compressed tar archives is not great due to overhead with decompression. However, the fs.WalkDir() use case has been optimized to create an index on first call to ReadDir().
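For illustration, a minimal sketch of opening an archive as a file system and walking it. The package name "archives" and the filename "example.tar.gz" are assumptions; adjust them for your module.

```go
// Sketch: open an archive file (or directory, or compressed file) as a
// read-only fs.FS and print every path in it.
ctx := context.Background()

fsys, err := archives.FileSystem(ctx, "example.tar.gz", nil)
if err != nil {
	log.Fatal(err)
}

err = fs.WalkDir(fsys, ".", func(path string, d fs.DirEntry, err error) error {
	if err != nil {
		return err
	}
	fmt.Println(path)
	return nil
})
if err != nil {
	log.Fatal(err)
}
```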
func RegisterFormat ¶
func RegisterFormat(format Format)
RegisterFormat registers a format. It should be called during init. Registering a duplicate format (by name) is not allowed and panics.
func TopDirOpen ¶
TopDirOpen is a special Open() function that may be useful if a file system root was created by extracting an archive.
It first tries the file name as given, but if that returns an error, it tries the name without the first element of the path. In other words, if "a/b/c" returns an error, then "b/c" will be tried instead.
Consider an archive that contains a file "a/b/c". When the archive is extracted, the contents may be created without a new parent/root folder to contain them, and the path of the same file outside the archive may be lacking an exclusive root or parent container. Thus it is likely for a file system created for the same files extracted to disk to be rooted at one of the top-level files/folders from the archive instead of a parent folder. For example, the file known as "a/b/c" when rooted at the archive becomes "b/c" after extraction when rooted at "a" on disk (because no new, exclusive top-level folder was created). This difference in paths can make it difficult to use archives and directories uniformly. Hence these TopDir* functions which attempt to smooth over the difference.
Some extraction utilities do create a container folder for archive contents when extracting, in which case the user may give that path as the root. In that case, these TopDir* functions are not necessary (but aren't harmful either). They are primarily useful if you are not sure whether the root is an archive file or is an extracted archive file, as they will work with the same filename/path inputs regardless of the presence of a top-level directory.
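A brief sketch of the fallback behavior described above, assuming fsys was obtained elsewhere (e.g. from FileSystem) and the path "a/b/c" is hypothetical:

```go
// Sketch: fsys may be rooted at the archive itself or at the extracted
// top-level folder "a"; TopDirOpen accepts the path "a/b/c" either way.
f, err := archives.TopDirOpen(fsys, "a/b/c") // on error, falls back to trying "b/c"
if err != nil {
	log.Fatal(err)
}
defer f.Close()
```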
func TopDirReadDir ¶
TopDirReadDir is like TopDirOpen but for ReadDir.
Types ¶
type Archive ¶
type Archive struct {
	Compression
	Archival
	Extraction
}
Archive represents an archive which may be compressed at the outer layer. It combines a compression format on top of an archive/extraction format (e.g. ".tar.gz") and provides both functionalities in a single type. It ensures that archival functions are wrapped by compressors and decompressors. However, compressed archives have some limitations; for example, files cannot be inserted/appended because of complexities with modifying existing compression state (perhaps this could be overcome, but I'm not about to try it).
The embedded Archival and Extraction values are used for writing and reading, respectively. Compression is optional and is only needed if the format is compressed externally (for example, tar archives).
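A sketch of composing a ".tar.gz" format from these parts (the output filename is hypothetical, and files would typically come from FilesFromDisk):

```go
// Sketch: Tar implements both Archival and Extraction, so the same
// value fills both roles; Gz supplies the outer compression layer.
format := archives.Archive{
	Compression: archives.Gz{},
	Archival:    archives.Tar{},
	Extraction:  archives.Tar{},
}

out, err := os.Create("output.tar.gz")
if err != nil {
	log.Fatal(err)
}
defer out.Close()

if err := format.Archive(ctx, out, files); err != nil {
	log.Fatal(err)
}
```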
func (Archive) ArchiveAsync ¶
func (ar Archive) ArchiveAsync(ctx context.Context, output io.Writer, jobs <-chan ArchiveAsyncJob) error
ArchiveAsync adds files to the output archive while compressing the result asynchronously.
func (Archive) Extension ¶
Extension returns a concatenation of the archive and compression format extensions.
type ArchiveAsyncJob ¶
ArchiveAsyncJob contains a File to be archived and a channel on which the result of the archiving should be returned.
type ArchiveFS ¶
type ArchiveFS struct {
	// set one of these
	Path   string            // path to the archive file on disk, or...
	Stream *io.SectionReader // ...stream from which to read archive

	Format  Extractor       // the archive format
	Prefix  string          // optional subdirectory in which to root the fs
	Context context.Context // optional; mainly for cancellation
	// contains filtered or unexported fields
}
ArchiveFS allows reading an archive (or a compressed archive) using a consistent file system interface. Essentially, it allows traversal and reading of archive contents the same way as any normal directory on disk. The contents of compressed archives are transparently decompressed.
A valid ArchiveFS value must set either Path or Stream, but not both. If Path is set, a literal file will be opened from the disk. If Stream is set, new SectionReaders will be implicitly created to access the stream, enabling safe, concurrent access.
NOTE: Due to Go's file system APIs (see package io/fs), the performance of ArchiveFS can suffer when using fs.WalkDir(). To mitigate this, an optimized fs.ReadDirFS has been implemented that indexes the entire archive on the first call to ReadDir() (since the entire archive needs to be walked for every call to ReadDir() anyway, as archive contents are often unordered). The first call to ReadDir(), i.e. near the start of the walk, will be slow for large archives, but should be instantaneous after. If you don't care about walking a file system in directory order, consider calling Extract() on the underlying archive format type directly, which walks the archive in entry order, without needing to do any sorting.
Note that fs.FS implementations, including this one, reject paths starting with "./". This can be problematic sometimes, as it is not uncommon for tarballs to contain a top-level/root directory literally named ".", which can happen if a tarball is created in the same directory it is archiving. The underlying Extract() calls are faithful to entries with this name, but file systems have certain semantics around "." that restrict its use. For example, a file named "." cannot be created on a real file system because it is a special name that means "current directory".
We had to decide whether to honor the true name in the archive, or honor file system semantics. Given that this is a virtual file system and other code using the fs.FS APIs will trip over a literal directory named ".", we choose to honor file system semantics. Files named "." are ignored; directories with this name are effectively transparent; their contents get promoted up a directory/level. This means that for a file at "./x", where "." is a literal directory name, the name passed in to WalkDir callbacks will be "x". If you need the raw, uninterpreted values from an archive, use the formats' Extract() method directly. See https://github.com/golang/go/issues/70155 for a little more background.
This does have one negative edge case: a tar containing entries like [x, ., ./x] will have a conflict on the name "x", because "./x" will also be accessed as "x".
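A sketch of reading an archive's top-level listing through ArchiveFS (the filename "example.zip" is hypothetical):

```go
// Sketch: open a zip file as a file system and list its root entries.
fsys := &archives.ArchiveFS{
	Path:    "example.zip",
	Format:  archives.Zip{},
	Context: ctx, // optional
}

entries, err := fsys.ReadDir(".")
if err != nil {
	log.Fatal(err)
}
for _, entry := range entries {
	fmt.Println(entry.Name())
}
```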
func (ArchiveFS) Open ¶
Open opens the named file from within the archive. If name is "." then the archive file itself will be opened as a directory file.
func (*ArchiveFS) ReadDir ¶
ReadDir reads the named directory from within the archive. If name is "." then the root of the archive content is listed.
type Archiver ¶
type Archiver interface {
	// Archive writes an archive file to output with the given files.
	//
	// Context cancellation must be honored.
	Archive(ctx context.Context, output io.Writer, files []FileInfo) error
}
Archiver can create a new archive.
type ArchiverAsync ¶
type ArchiverAsync interface {
	Archiver

	// Use ArchiveAsync if you can't pre-assemble a list of all
	// the files for the archive. Close the jobs channel after
	// all the files have been sent.
	//
	// This won't return until the channel is closed.
	ArchiveAsync(ctx context.Context, output io.Writer, jobs <-chan ArchiveAsyncJob) error
}
ArchiverAsync is an Archiver that can also create archives asynchronously by pumping files into a channel as they are discovered.
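A sketch of the producer/consumer pattern this implies. The ArchiveAsyncJob field names (File, Result) are assumptions based on its description; discoveredFiles and out are placeholders:

```go
// Sketch: stream files into a tar archive as they are discovered.
jobs := make(chan archives.ArchiveAsyncJob)

go func() {
	defer close(jobs) // ArchiveAsync returns only after the channel is closed
	for _, file := range discoveredFiles {
		result := make(chan error)
		jobs <- archives.ArchiveAsyncJob{File: file, Result: result}
		if err := <-result; err != nil {
			log.Printf("archiving %s: %v", file.NameInArchive, err)
		}
	}
}()

if err := (archives.Tar{}).ArchiveAsync(ctx, out, jobs); err != nil {
	log.Fatal(err)
}
```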
type Brotli ¶
type Brotli struct {
Quality int
}
Brotli facilitates brotli compression.
func (Brotli) OpenReader ¶
func (Brotli) OpenWriter ¶
type Bz2 ¶
type Bz2 struct {
CompressionLevel int
}
Bz2 facilitates bzip2 compression.
func (Bz2) OpenReader ¶
func (Bz2) OpenWriter ¶
type Compression ¶
type Compression interface {
	Format
	Compressor
	Decompressor
}
Compression is a compression format with both compress and decompress methods.
type Compressor ¶
type Compressor interface {
	// OpenWriter wraps w with a new writer that compresses what is written.
	// The writer must be closed when writing is finished.
	OpenWriter(w io.Writer) (io.WriteCloser, error)
}
Compressor can compress data by wrapping a writer.
type Decompressor ¶
type Decompressor interface {
	// OpenReader wraps r with a new reader that decompresses what is read.
	// The reader must be closed when reading is finished.
	OpenReader(r io.Reader) (io.ReadCloser, error)
}
Decompressor can decompress data by wrapping a reader.
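A sketch of a compress/decompress round trip using Gz as the Compression implementation:

```go
// Sketch: write compressed data to a buffer, then read it back.
var buf bytes.Buffer

wc, err := archives.Gz{}.OpenWriter(&buf)
if err != nil {
	log.Fatal(err)
}
if _, err := wc.Write([]byte("hello, world")); err != nil {
	log.Fatal(err)
}
wc.Close() // closing flushes the remaining compressed data

rc, err := archives.Gz{}.OpenReader(&buf)
if err != nil {
	log.Fatal(err)
}
defer rc.Close()

plaintext, err := io.ReadAll(rc)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(plaintext))
```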
type Extraction ¶
Extraction is an archival format that extracts from (reads) archives.
type Extractor ¶
type Extractor interface {
	// Extract walks entries in the archive and calls handleFile for each
	// entry in the archive.
	//
	// Any files opened in the FileHandler should be closed when it returns,
	// as there is no guarantee the files can be read outside the handler
	// or after the walk has proceeded to the next file.
	//
	// Context cancellation must be honored.
	Extract(ctx context.Context, archive io.Reader, handleFile FileHandler) error
}
Extractor can extract files from an archive.
type FileFS ¶
type FileFS struct {
	// The path to the file on disk.
	Path string

	// If file is compressed, setting this field will
	// transparently decompress reads.
	Compression Decompressor
}
FileFS allows accessing a file on disk using a consistent file system interface. The value should be the path to a regular file, not a directory. This file will be the only entry in the file system and will be at its root. It can be accessed within the file system by the name of "." or the filename.
If the file is compressed, set the Compression field so that reads from the file will be transparently decompressed.
func (FileFS) Open ¶
Open opens the named file, which must be the file used to create the file system.
type FileHandler ¶
FileHandler is a callback function that is used to handle files as they are read from an archive; it is kind of like fs.WalkDirFunc. Handler functions that open their files must not overlap or run concurrently, as files may be read from the same sequential stream; always close the file before returning.
If the special error value fs.SkipDir is returned, the directory of the file (or the file itself if it is a directory) will not be walked. Note that because archive contents are not necessarily ordered, skipping directories requires memory, and skipping lots of directories may run up your memory bill.
Any other returned error will terminate a walk and be returned to the caller.
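A sketch of a handler following these rules; the skipped directory name "vendor" is hypothetical:

```go
// Sketch: skip one directory, read every other regular entry, and
// always close opened files before returning.
handler := func(ctx context.Context, info archives.FileInfo) error {
	if info.IsDir() {
		if info.Name() == "vendor" {
			return fs.SkipDir // prunes this directory from the walk
		}
		return nil
	}
	f, err := info.Open()
	if err != nil {
		return err
	}
	defer f.Close() // close before the walk moves to the next file

	_, err = io.Copy(io.Discard, f)
	return err
}
```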
type FileInfo ¶
type FileInfo struct {
	fs.FileInfo

	// The file header as used/provided by the archive format.
	// Typically, you do not need to set this field when creating
	// an archive.
	Header any

	// The path of the file as it appears in the archive.
	// This is equivalent to Header.Name (for most Header
	// types). We require it to be specified here because
	// it is such a common field and we want to preserve
	// format-agnosticism (no type assertions) for basic
	// operations.
	//
	// When extracting, this name or path may not have
	// been sanitized; it should not be trusted at face
	// value. Consider using path.Clean() before using.
	//
	// EXPERIMENTAL: If inserting a file into an archive,
	// and this is left blank, the implementation of the
	// archive format can default to using the file's base
	// name.
	NameInArchive string

	// For symbolic and hard links, the target of the link.
	// Not supported by all archive formats.
	LinkTarget string

	// A callback function that opens the file to read its
	// contents. The file must be closed when reading is
	// complete.
	Open func() (fs.File, error)
}
FileInfo is a virtualized, generalized file abstraction for interacting with archives.
func FilesFromDisk ¶
func FilesFromDisk(options *FromDiskOptions, filenames map[string]string) ([]FileInfo, error)
FilesFromDisk returns a list of files by walking the directories in the given filenames map. The keys are the names on disk, and the values are their associated names in the archive.
Map keys that specify directories on disk will be walked and added to the archive recursively, rooted at the named directory. They should use the platform's path separator (backslash on Windows; slash on everything else). For convenience, map keys that end in a separator ('/', or '\' on Windows) will enumerate contents only without adding the folder itself to the archive.
Map values should typically use slash ('/') as the separator regardless of the platform, as most archive formats standardize on that rune as the directory separator for filenames within an archive. For convenience, map values that are empty string are interpreted as the base name of the file (sans path) in the root of the archive; and map values that end in a slash will use the base name of the file in that folder of the archive.
File gathering will adhere to the settings specified in options.
This function is used primarily when preparing a list of files to add to an archive.
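A sketch of the key/value semantics described above; the disk paths are hypothetical:

```go
// Sketch: gather files from disk for archiving. Keys are disk paths,
// values are names in the archive.
files, err := archives.FilesFromDisk(nil, map[string]string{
	"/home/you/file.txt": "",      // empty value: file.txt at archive root
	"/home/you/notes":    "notes", // directory walked recursively, rooted at "notes"
	"/home/you/img.jpg":  "pics/", // trailing slash: becomes pics/img.jpg
})
if err != nil {
	log.Fatal(err)
}
```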
type Format ¶
type Format interface {
	// Extension returns the conventional file extension for this
	// format.
	Extension() string

	// Match returns true if the given name/stream is recognized.
	// One of the arguments is optional: filename might be empty
	// if working with an unnamed stream, or stream might be
	// empty if only working with a filename. The filename should
	// consist only of the base name, not a path component, and is
	// typically used for matching by file extension. However,
	// matching by reading the stream is preferred. Match reads
	// only as many bytes as needed to determine a match. To
	// preserve the stream through matching, you should either
	// buffer what is read by Match, or seek to the last position
	// before Match was called.
	Match(ctx context.Context, filename string, stream io.Reader) (MatchResult, error)
}
Format represents a way of getting data out of something else. A format usually represents compression or an archive (or both).
func Identify ¶
Identify iterates the registered formats and returns the one that matches the given filename and/or stream. It is capable of identifying compressed files (.gz, .xz...), archive files (.tar, .zip...), and compressed archive files (tar.gz, tar.bz2...). The returned Format value can be type-asserted to ascertain its capabilities.
If no matching format is found, the special error NoMatch is returned.
If stream is nil then it will only match on file name and the returned io.Reader will be nil.
If stream is non-nil then the returned io.Reader will always be non-nil and will read from the same point as the reader which was passed in. If the input stream is not an io.Seeker, the returned io.Reader value should be used in place of the input stream after calling Identify() because it preserves and re-reads the bytes that were already read during the identification process.
If the input stream is an io.Seeker, Seek() must work, and the original input value will be returned instead of a wrapper value.
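A sketch of the pattern described above: identify a stream, then use the returned reader in place of the original. The filename is hypothetical, and whether Identify takes a context is an assumption based on the Match method's signature; adjust for your version:

```go
// Sketch: identify a file's format from its name and header bytes.
f, err := os.Open("example.tar.gz")
if err != nil {
	log.Fatal(err)
}
defer f.Close()

format, input, err := archives.Identify(ctx, "example.tar.gz", f)
if err != nil {
	log.Fatal(err)
}

// Type-assert the returned Format to discover its capabilities, and
// read from input (not f) so the peeked bytes are not lost.
if extractor, ok := format.(archives.Extractor); ok {
	if err := extractor.Extract(ctx, input, handler); err != nil {
		log.Fatal(err)
	}
}
```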
type FromDiskOptions ¶
type FromDiskOptions struct {
	// If true, symbolic links will be dereferenced, meaning that
	// the link will not be added as a link, but what the link
	// points to will be added as a file.
	FollowSymlinks bool

	// If true, some file attributes will not be preserved.
	// Name, size, type, and permissions will still be preserved.
	ClearAttributes bool
}
FromDiskOptions specifies various options for gathering files from disk.
type Gz ¶
type Gz struct {
	// Gzip compression level. See https://pkg.go.dev/compress/flate#pkg-constants
	// for some predefined constants. If 0, DefaultCompression is assumed rather
	// than no compression.
	CompressionLevel int

	// DisableMultistream controls whether the reader supports multistream files.
	// See https://pkg.go.dev/compress/gzip#example-Reader.Multistream
	DisableMultistream bool

	// Use a fast parallel Gzip implementation. This is only
	// effective for large streams (about 1 MB or greater).
	Multithreaded bool
}
Gz facilitates gzip compression.
func (Gz) OpenReader ¶
func (Gz) OpenWriter ¶
type Inserter ¶
type Inserter interface {
	// Insert inserts the files into archive.
	//
	// Context cancellation must be honored.
	Insert(ctx context.Context, archive io.ReadWriteSeeker, files []FileInfo) error
}
Inserter can insert files into an existing archive. EXPERIMENTAL: This API is subject to change.
type Lz4 ¶
type Lz4 struct {
CompressionLevel int
}
Lz4 facilitates LZ4 compression.
func (Lz4) OpenReader ¶
func (Lz4) OpenWriter ¶
type Lzip ¶
type Lzip struct{}
Lzip facilitates lzip compression.
func (Lzip) OpenReader ¶
func (Lzip) OpenWriter ¶
type MatchResult ¶
type MatchResult struct {
ByName, ByStream bool
}
MatchResult indicates whether a format was matched by name, stream, or both. Name usually refers to matching by file extension, and stream usually refers to reading the first few bytes of the stream (its header). A stream match is generally stronger, as filenames are not always indicative of their contents, if they exist at all.
func (MatchResult) Matched ¶
func (mr MatchResult) Matched() bool
Matched returns true if a match was made by either name or stream.
type Rar ¶
type Rar struct {
	// If true, errors encountered during reading or writing
	// a file within an archive will be logged and the
	// operation will continue on remaining files.
	ContinueOnError bool

	// Password to open archives.
	Password string
}
type ReaderAtSeeker ¶
ReaderAtSeeker is a type that can read, read at, and seek. os.File and io.SectionReader both implement this interface.
type SevenZip ¶
type SevenZip struct {
	// If true, errors encountered during reading or writing
	// a file within an archive will be logged and the
	// operation will continue on remaining files.
	ContinueOnError bool

	// The password, if dealing with an encrypted archive.
	Password string
}
func (SevenZip) Extract ¶
func (z SevenZip) Extract(ctx context.Context, sourceArchive io.Reader, handleFile FileHandler) error
Extract extracts files from z, implementing the Extractor interface. Uniquely, however, sourceArchive must be an io.ReaderAt and io.Seeker, which are disjoint from io.Reader, the type the method signature requires. We chose this signature for the interface because we figure you can Read() from anything you can ReadAt() or Seek() with. Due to the nature of the 7z archive format, if sourceArchive is not an io.Seeker and io.ReaderAt, an error is returned.
type Sz ¶
type Sz struct{}
Sz facilitates Snappy compression.
func (Sz) OpenReader ¶
func (Sz) OpenWriter ¶
type Tar ¶
type Tar struct {
	// If true, preserve only numeric user and group id.
	NumericUIDGID bool

	// If true, errors encountered during reading or writing
	// a file within an archive will be logged and the
	// operation will continue on remaining files.
	ContinueOnError bool
}
func (Tar) ArchiveAsync ¶
type Xz ¶
type Xz struct{}
Xz facilitates xz compression.
func (Xz) OpenReader ¶
func (Xz) OpenWriter ¶
type Zip ¶
type Zip struct {
	// Only compress files which are not already in a
	// compressed format (determined simply by examining
	// file extension).
	SelectiveCompression bool

	// The method or algorithm for compressing stored files.
	Compression uint16

	// If true, errors encountered during reading or writing
	// a file within an archive will be logged and the
	// operation will continue on remaining files.
	ContinueOnError bool

	// For files in zip archives that do not have UTF-8
	// encoded filenames and comments, specify the character
	// encoding here.
	TextEncoding string
}
func (Zip) ArchiveAsync ¶
func (Zip) Extract ¶
Extract extracts files from z, implementing the Extractor interface. Uniquely, however, sourceArchive must be an io.ReaderAt and io.Seeker, which are disjoint from io.Reader, the type the method signature requires. We chose this signature for the interface because we figure you can Read() from anything you can ReadAt() or Seek() with. Due to the nature of the zip archive format, if sourceArchive is not an io.Seeker and io.ReaderAt, an error is returned.
type Zlib ¶
type Zlib struct {
CompressionLevel int
}
Zlib facilitates zlib compression.
func (Zlib) OpenReader ¶
func (Zlib) OpenWriter ¶
type Zstd ¶
Zstd facilitates Zstandard compression.