archive

README

This package provides helper functions for dealing with archive files.

Documentation

Index

Constants

const (
	// ChangeModify represents the modify operation.
	ChangeModify = iota
	// ChangeAdd represents the add operation.
	ChangeAdd
	// ChangeDelete represents the delete operation.
	ChangeDelete
)

const (
	// HeaderSize is the size in bytes of a tar header
	HeaderSize = 512
)

const WhiteoutLinkDir = WhiteoutMetaPrefix + "plnk"

WhiteoutLinkDir is a directory AUFS uses for storing hard links to files in other layers. Normally these should not go into exported archives, and all changed hard links should be copied to the top layer.

const WhiteoutMetaPrefix = WhiteoutPrefix + WhiteoutPrefix

The WhiteoutMetaPrefix prefix marks a whiteout entry that has a special meaning and does not remove an actual file. Normally these files are excluded from exported archives.

const WhiteoutOpaqueDir = WhiteoutMetaPrefix + ".opq"

A WhiteoutOpaqueDir file means the directory has been made opaque: readdir calls on the directory do not fall through to lower layers.

const WhiteoutPrefix = ".wh."

The WhiteoutPrefix prefix marks a file as a whiteout. When followed by a filename, it indicates that the named file has been removed from the base layer.

Variables

var (
	ErrNotDirectory      = errors.New("not a directory")
	ErrDirNotExists      = errors.New("no such directory")
	ErrCannotCopyDir     = errors.New("cannot copy directory")
	ErrInvalidCopySource = errors.New("invalid copy source content")
)

Errors used or returned by this package's copy functions.

Functions

func ApplyLayer

func ApplyLayer(dest string, layer io.Reader) (int64, error)

ApplyLayer parses a diff in the standard layer format from `layer`, and applies it to the directory `dest`. The stream `layer` can be compressed or uncompressed. Returns the size in bytes of the contents of the layer.
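
Illustrative sketch only (the layer file name, destination directory, and import are assumptions, not part of this package's documentation):

// Assumes: import "os" and this package imported as archive.
func applyLayerFile(dest, layerPath string) (int64, error) {
	layer, err := os.Open(layerPath) // e.g. a gzip-compressed layer tarball
	if err != nil {
		return 0, err
	}
	defer layer.Close()
	// The layer stream may be compressed or plain tar.
	return archive.ApplyLayer(dest, layer)
}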

func ApplyUncompressedLayer

func ApplyUncompressedLayer(dest string, layer io.Reader, options *TarOptions) (int64, error)

ApplyUncompressedLayer parses a diff in the standard layer format from `layer`, and applies it to the directory `dest`. The stream `layer` can only be uncompressed. Returns the size in bytes of the contents of the layer.

func CanonicalTarNameForPath

func CanonicalTarNameForPath(p string) (string, error)

CanonicalTarNameForPath converts the platform-specific relative path p into a canonical POSIX-style path for tar archival.

func ChangesSize

func ChangesSize(newDir string, changes []Change) int64

ChangesSize calculates the size in bytes of the provided changes, based on newDir.

func CompressStream

func CompressStream(dest io.Writer, compression Compression) (io.WriteCloser, error)

CompressStream returns a WriteCloser that compresses data written to it with the specified compression algorithm and writes the result to dest.
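
A minimal sketch of wrapping a writer in a gzip-compressing stream (assuming this package is imported as archive):

// Assumes: import "io" and this package imported as archive.
func gzipInto(dest io.Writer, src io.Reader) error {
	w, err := archive.CompressStream(dest, archive.Gzip)
	if err != nil {
		return err
	}
	if _, err := io.Copy(w, src); err != nil {
		w.Close()
		return err
	}
	// Closing the writer flushes the remaining compressed data into dest.
	return w.Close()
}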

func CopyFileWithTarAndChown

func CopyFileWithTarAndChown(chownOpts *idtools.IDPair, hasher io.Writer, uidmap []idtools.IDMap, gidmap []idtools.IDMap) func(src, dest string) error

CopyFileWithTarAndChown returns a function which copies a single file from outside of any container into our working container, mapping permissions using the container's ID maps, possibly overridden using the passed-in chownOpts.

func CopyResource

func CopyResource(srcPath, dstPath string, followLink bool) error

CopyResource performs an archive copy from the given source path to the given destination path. The source path MUST exist and the destination path's parent directory must exist.
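
A hedged usage sketch (the paths are made up; both source and destination are local filesystem paths):

// Assumes: this package imported as archive.
func copyConfig() error {
	// Copy a single file into an existing directory, following symlinks
	// in the source path. The destination's parent must already exist.
	return archive.CopyResource("/etc/myapp/config.yaml", "/var/backups/myapp/", true)
}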

func CopyTo

func CopyTo(content io.Reader, srcInfo CopyInfo, dstPath string) error

CopyTo handles extracting the given content whose entries should be sourced from srcInfo to dstPath.

func CopyWithTarAndChown

func CopyWithTarAndChown(chownOpts *idtools.IDPair, hasher io.Writer, uidmap []idtools.IDMap, gidmap []idtools.IDMap) func(src, dest string) error

CopyWithTarAndChown returns a function which copies a directory tree from outside of any container into our working container, mapping permissions using the container's ID maps, possibly overridden using the passed-in chownOpts.

func DecompressStream

func DecompressStream(archive io.Reader) (io.ReadCloser, error)

DecompressStream decompresses the archive and returns a ReadCloser with the decompressed archive.
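
For instance, a sketch that opens a possibly-compressed tarball and hands the decompressed stream to archive/tar (the file name is an assumption):

// Assumes: import "archive/tar", "fmt", "io", "os", and this package imported as archive.
func listEntries(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	rc, err := archive.DecompressStream(f) // handles gzip, bzip2, xz, or an uncompressed stream
	if err != nil {
		return err
	}
	defer rc.Close()
	tr := tar.NewReader(rc)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		fmt.Println(hdr.Name)
	}
}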

func ExportChanges

func ExportChanges(dir string, changes []Change, uidMaps, gidMaps []idtools.IDMap) (io.ReadCloser, error)

ExportChanges produces an Archive from the provided changes, relative to dir.

func FileInfoHeader

func FileInfoHeader(name string, fi os.FileInfo, link string) (*tar.Header, error)

FileInfoHeader creates a populated Header from fi. Compared to the standard library's archive/tar package, this function fills in more information. Also, regardless of Go version, this function fills in the file type bits (e.g. hdr.Mode |= modeISDIR), which were removed from archive/tar in Go 1.9.

func Generate

func Generate(input ...string) (io.Reader, error)

Generate generates a new archive from the content provided as input.

The input is a sequence of path/content pairs. A new file is added to the archive for each pair. If the last pair is incomplete (a path without content), the file is created with empty content. For example:

Generate("foo.txt", "hello world", "emptyfile")

The above call will return an archive with 2 files:

  • ./foo.txt with content "hello world"
  • ./emptyfile with empty content

FIXME: stream content instead of buffering.
FIXME: specify permissions and other archive metadata.

func GetRebaseName

func GetRebaseName(path, resolvedPath string) (string, string)

GetRebaseName normalizes and compares path and resolvedPath, returning the completed resolved path and the rebased file name.

func IsArchive

func IsArchive(header []byte) bool

IsArchive checks for the magic bytes of a tar or any supported compression algorithm.

func IsArchivePath

func IsArchivePath(path string) bool

IsArchivePath checks if the (possibly compressed) file at the given path starts with a tar file header.
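
A short sketch of guarding an unpack step behind this check (the path is hypothetical):

// Assumes: import "fmt" and this package imported as archive.
func describeUpload(path string) {
	if archive.IsArchivePath(path) {
		fmt.Println("looks like a (possibly compressed) tar archive")
	} else {
		fmt.Println("not a tar archive")
	}
}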

func PrepareArchiveCopy

func PrepareArchiveCopy(srcContent io.Reader, srcInfo, dstInfo CopyInfo) (dstDir string, content io.ReadCloser, err error)

PrepareArchiveCopy prepares the given srcContent archive, which should contain the archived resource described by srcInfo, for extraction to the destination described by dstInfo. It returns the possibly modified content archive along with the path of the destination directory into which it should be extracted.

func PreserveTrailingDotOrSeparator

func PreserveTrailingDotOrSeparator(cleanedPath, originalPath string) string

PreserveTrailingDotOrSeparator returns the given cleaned path (as produced by utility functions in the path or filepath standard library packages), appending a trailing `/.` or `/` if the corresponding original path (from before processing) ended with one. If the cleaned path already ends in a `.` path segment or in a path separator, another is not added.

func ReadSecurityXattrToTarHeader

func ReadSecurityXattrToTarHeader(path string, hdr *tar.Header) error

ReadSecurityXattrToTarHeader reads the security.capability xattr from the filesystem into the given tar header.

func RebaseArchiveEntries

func RebaseArchiveEntries(srcContent io.Reader, oldBase, newBase string) io.ReadCloser

RebaseArchiveEntries rewrites the given srcContent archive replacing an occurrence of oldBase with newBase at the beginning of entry names.

func ReplaceFileTarWrapper

func ReplaceFileTarWrapper(inputTarStream io.ReadCloser, mods map[string]TarModifierFunc) io.ReadCloser

ReplaceFileTarWrapper converts inputTarStream to a new tar stream. Files in the tar stream are modified if they match any of the keys in mods.
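
A sketch of rewriting one entry while streaming a tar; the entry name and replacement content are assumptions (layer tars typically store names without a leading slash):

// Assumes: import "archive/tar", "io", and this package imported as archive.
func overrideHostname(in io.ReadCloser) io.ReadCloser {
	mods := map[string]archive.TarModifierFunc{
		"etc/hostname": func(path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
			body := []byte("example-host\n")
			if header == nil {
				// Entry was missing from the input stream; create it.
				header = &tar.Header{Name: path, Mode: 0644, Typeflag: tar.TypeReg}
			}
			header.Size = int64(len(body))
			return header, body, nil
		},
	}
	return archive.ReplaceFileTarWrapper(in, mods)
}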

func ResolveHostSourcePath

func ResolveHostSourcePath(path string, followLink bool) (resolvedPath, rebaseName string, err error)

ResolveHostSourcePath decides which real path needs to be copied, based on whether symlinks should be followed. If followLink is true, resolvedPath is the target of any symlinked file; otherwise only symlinks in the directory portion of the path are resolved, and a symlinked file itself is returned without being resolved.

func SplitPathDirEntry

func SplitPathDirEntry(path string) (dir, base string)

SplitPathDirEntry splits the given path into its directory name and its basename, first cleaning the path but preserving a trailing "." if the original path specified the current directory.

func Tar

func Tar(path string, compression Compression) (io.ReadCloser, error)

Tar creates an archive from the directory at `path`, and returns it as a stream of bytes.
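
For example, a sketch that tars a directory with gzip compression and writes the result to a file (paths assumed):

// Assumes: import "io", "os", and this package imported as archive.
func tarDirToFile(srcDir, outPath string) error {
	rc, err := archive.Tar(srcDir, archive.Gzip)
	if err != nil {
		return err
	}
	defer rc.Close()
	out, err := os.Create(outPath)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, rc)
	return err
}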

func TarPath

func TarPath(uidmap []idtools.IDMap, gidmap []idtools.IDMap) func(path string) (io.ReadCloser, error)

TarPath returns a function which creates an archive of a specified location in the container's filesystem, mapping permissions using the container's ID maps.

func TarResource

func TarResource(sourceInfo CopyInfo) (content io.ReadCloser, err error)

TarResource archives the resource described by the given CopyInfo to a Tar archive. A non-nil error is returned if sourcePath does not exist or is asserted to be a directory but exists as another type of file.

This function acts as a convenient wrapper around TarWithOptions, which requires a directory as the source path. TarResource accepts either a directory or a file path and correctly sets the Tar options.

func TarResourceRebase

func TarResourceRebase(sourcePath, rebaseName string) (content io.ReadCloser, err error)

TarResourceRebase is like TarResource but renames the first path element of items in the resulting tar archive to match the given rebaseName if not "".

func TarWithOptions

func TarWithOptions(srcPath string, options *TarOptions) (io.ReadCloser, error)

TarWithOptions creates an archive from the directory at `srcPath`, only including files whose relative paths are included in `options.IncludeFiles` (if non-nil) or not in `options.ExcludePatterns`.
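
A sketch of archiving with exclude rules (the patterns are only an illustration):

// Assumes: import "io", and this package imported as archive.
func tarSiteWithoutLogs(srcDir string) (io.ReadCloser, error) {
	return archive.TarWithOptions(srcDir, &archive.TarOptions{
		Compression:     archive.Gzip,
		ExcludePatterns: []string{"*.log", "tmp"},
	})
}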

func Unpack

func Unpack(decompressedArchive io.Reader, dest string, options *TarOptions) error

Unpack unpacks the decompressedArchive to dest with options.

func UnpackLayer

func UnpackLayer(dest string, layer io.Reader, options *TarOptions) (size int64, err error)

UnpackLayer unpacks `layer` into the directory `dest`. The stream `layer` can be compressed or uncompressed. Returns the size in bytes of the contents of the layer.

func Untar

func Untar(tarArchive io.Reader, dest string, options *TarOptions) error

Untar reads a stream of bytes from `tarArchive`, parses it as a tar archive, and unpacks it into the directory at `dest`. The archive may be compressed with one of the following algorithms:

identity (uncompressed), gzip, bzip2, xz.

FIXME: specify behavior when target path exists vs. doesn't exist.
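
A sketch of unpacking a (possibly compressed) archive file into a directory; an empty TarOptions is passed for default behavior, and the paths are assumptions:

// Assumes: import "os", and this package imported as archive.
func unpackTo(archivePath, destDir string) error {
	f, err := os.Open(archivePath)
	if err != nil {
		return err
	}
	defer f.Close()
	// Untar detects and undoes any supported compression on the stream.
	return archive.Untar(f, destDir, &archive.TarOptions{})
}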

func UntarPath

func UntarPath(src, dst string) error

UntarPath is a convenience function which looks for an archive at filesystem path `src`, and unpacks it at `dst`.

func UntarPathAndChown

func UntarPathAndChown(chownOpts *idtools.IDPair, hasher io.Writer, uidmap []idtools.IDMap, gidmap []idtools.IDMap) func(src, dest string) error

UntarPathAndChown returns a function which extracts an archive in a specified location into our working container, mapping permissions using the container's ID maps, possibly overridden using the passed-in chownOpts.

func UntarUncompressed

func UntarUncompressed(tarArchive io.Reader, dest string, options *TarOptions) error

UntarUncompressed reads a stream of bytes from `tarArchive`, parses it as a tar archive, and unpacks it into the directory at `dest`. The archive must be an uncompressed stream.

Types

type Archiver

type Archiver struct {
	Untar           func(io.Reader, string, *TarOptions) error
	TarIDMappings   *idtools.IDMappings
	ChownOpts       *idtools.IDPair
	UntarIDMappings *idtools.IDMappings
}

Archiver allows the reuse of most utility functions of this package with a pluggable Untar function. To facilitate the passing of specific id mappings for untar, an archiver can be created with maps which will then be passed to Untar operations. If ChownOpts is set, its values are mapped using UntarIDMappings before being used to create files and directories on disk.

func NewArchiver

func NewArchiver(idMappings *idtools.IDMappings) *Archiver

NewArchiver returns a new Archiver.

func NewArchiverWithChown

func NewArchiverWithChown(tarIDMappings *idtools.IDMappings, chownOpts *idtools.IDPair, untarIDMappings *idtools.IDMappings) *Archiver

NewArchiverWithChown returns a new Archiver which uses Untar and the provided ID mapping configuration on both ends.

func NewDefaultArchiver

func NewDefaultArchiver() *Archiver

NewDefaultArchiver returns a new Archiver without any IDMappings.

func (*Archiver) CopyFileWithTar

func (archiver *Archiver) CopyFileWithTar(src, dst string) (err error)

CopyFileWithTar emulates the behavior of the 'cp' command-line for a single file. It copies a regular file from path `src` to path `dst`, and preserves all its metadata.

func (*Archiver) CopyWithTar

func (archiver *Archiver) CopyWithTar(src, dst string) error

CopyWithTar creates a tar archive of filesystem path `src`, and unpacks it at filesystem path `dst`. The archive is streamed directly with fixed buffering and no intermediary disk IO.
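
For illustration, a sketch that copies a directory tree by piping a tar through the default Archiver (paths are made up):

// Assumes: this package imported as archive.
func copyTree(src, dst string) error {
	a := archive.NewDefaultArchiver() // no ID mappings
	// Streams src as a tar and unpacks it at dst, preserving metadata.
	return a.CopyWithTar(src, dst)
}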

func (*Archiver) MarshalJSON

func (j *Archiver) MarshalJSON() ([]byte, error)

MarshalJSON marshals the receiver to JSON (ffjson-generated).

func (*Archiver) MarshalJSONBuf

func (j *Archiver) MarshalJSONBuf(buf fflib.EncodingBuffer) error

MarshalJSONBuf marshals the receiver to JSON using the provided buffer (ffjson-generated).

func (*Archiver) TarUntar

func (archiver *Archiver) TarUntar(src, dst string) error

TarUntar is a convenience function which calls Tar and Untar, with the output of one piped into the other. If either Tar or Untar fails, TarUntar aborts and returns the error.

func (*Archiver) UnmarshalJSON

func (j *Archiver) UnmarshalJSON(input []byte) error

UnmarshalJSON unmarshals the receiver from JSON (ffjson-generated).

func (*Archiver) UnmarshalJSONFFLexer

func (j *Archiver) UnmarshalJSONFFLexer(fs *fflib.FFLexer, state fflib.FFParseState) error

UnmarshalJSONFFLexer unmarshals the receiver using the ffjson lexer (ffjson-generated).

func (*Archiver) UntarPath

func (archiver *Archiver) UntarPath(src, dst string) error

UntarPath untars the tar file at path src into the destination dst.

type Change

type Change struct {
	Path string
	Kind ChangeType
}

Change represents a single change; it wraps the change type and path. It describes changes to the files in a path with respect to the parent layers. The change can be modify, add, or delete. This is used for layer diffs.

func Changes

func Changes(layers []string, rw string) ([]Change, error)

Changes walks the path rw and determines changes for the files in the path, with respect to the parent layers.
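
A sketch of diffing a read-write layer against its parent layers and printing each change (directory names are assumptions):

// Assumes: import "fmt", and this package imported as archive.
func printLayerDiff(parentLayers []string, rwDir string) error {
	changes, err := archive.Changes(parentLayers, rwDir)
	if err != nil {
		return err
	}
	for _, c := range changes {
		// Each change pairs a kind (add/modify/delete) with a path.
		fmt.Println(c.String())
	}
	return nil
}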

func ChangesDirs

func ChangesDirs(newDir string, newMappings *idtools.IDMappings, oldDir string, oldMappings *idtools.IDMappings) ([]Change, error)

ChangesDirs compares two directories and generates an array of Change objects describing the changes. If oldDir is "", then all files in newDir will be Add-Changes.

func OverlayChanges

func OverlayChanges(layers []string, rw string) ([]Change, error)

OverlayChanges walks the path rw and determines changes for the files in the path, with respect to the parent layers.

func (*Change) String

func (change *Change) String() string

type ChangeType

type ChangeType int

ChangeType represents the change type.

func (ChangeType) String

func (c ChangeType) String() string

type Compression

type Compression int

Compression represents whether, and how, a stream is compressed.

const (
	// Uncompressed represents uncompressed data.
	Uncompressed Compression = iota
	// Bzip2 is the bzip2 compression algorithm.
	Bzip2
	// Gzip is the gzip compression algorithm.
	Gzip
	// Xz is the xz compression algorithm.
	Xz
	// Zstd is the zstd compression algorithm.
	Zstd
)

func DetectCompression

func DetectCompression(source []byte) Compression

DetectCompression detects the compression algorithm of the source.
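
For example, a sketch that sniffs the first bytes of a file to pick a decompression path (the buffer size is an arbitrary choice large enough for the magic bytes):

// Assumes: import "os", and this package imported as archive.
func sniffCompression(path string) (archive.Compression, error) {
	f, err := os.Open(path)
	if err != nil {
		return archive.Uncompressed, err
	}
	defer f.Close()
	buf := make([]byte, 16)
	n, err := f.Read(buf)
	if err != nil {
		return archive.Uncompressed, err
	}
	// DetectCompression returns Uncompressed when no known magic bytes are found.
	return archive.DetectCompression(buf[:n]), nil
}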

func (*Compression) Extension

func (compression *Compression) Extension() string

Extension returns the extension of a file that uses the specified compression algorithm.

type CopyInfo

type CopyInfo struct {
	Path       string
	Exists     bool
	IsDir      bool
	RebaseName string
}

CopyInfo holds basic info about the source or destination path of a copy operation.

func CopyInfoDestinationPath

func CopyInfoDestinationPath(path string) (info CopyInfo, err error)

CopyInfoDestinationPath stats the given path to create a CopyInfo struct representing that resource for the destination of an archive copy operation. The given path should be an absolute local path.

func CopyInfoSourcePath

func CopyInfoSourcePath(path string, followLink bool) (CopyInfo, error)

CopyInfoSourcePath stats the given path to create a CopyInfo struct representing that resource for the source of an archive copy operation. The given path should be an absolute local path. A source path has all symlinks evaluated that appear before the last path separator ("/" on Unix). As it is to be a copy source, the path must exist.

type FileInfo

type FileInfo struct {
	// contains filtered or unexported fields
}

FileInfo describes the information of a file.

func (*FileInfo) Changes

func (info *FileInfo) Changes(oldInfo *FileInfo) []Change

Changes returns the changes between the file tree rooted at info and the one rooted at oldInfo.

func (*FileInfo) LookUp

func (info *FileInfo) LookUp(path string) *FileInfo

LookUp looks up the FileInfo for the file at the given path.

type TarModifierFunc

type TarModifierFunc func(path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error)

TarModifierFunc is a function that can be passed to ReplaceFileTarWrapper to modify the contents or header of an entry in the archive. If the file already exists in the archive, the TarModifierFunc will be called with the Header and a reader that returns the file's content. If the file does not exist, both header and content will be nil.

type TarOptions

type TarOptions struct {
	IncludeFiles     []string
	ExcludePatterns  []string
	Compression      Compression
	NoLchown         bool
	UIDMaps          []idtools.IDMap
	GIDMaps          []idtools.IDMap
	ChownOpts        *idtools.IDPair
	IncludeSourceDir bool
	// WhiteoutFormat is the expected on disk format for whiteout files.
	// This format will be converted to the standard format on pack
	// and from the standard format on unpack.
	WhiteoutFormat WhiteoutFormat
	// This is additional data to be used by the converter.  It will
	// not survive a round trip through JSON, so it's primarily
	// intended for generating archives (i.e., converting writes).
	WhiteoutData interface{}
	// When unpacking, specifies whether overwriting a directory with a
	// non-directory is allowed and vice versa.
	NoOverwriteDirNonDir bool
	// For each include when creating an archive, the included name will be
	// replaced with the matching name from this map.
	RebaseNames map[string]string
	InUserNS    bool
	// CopyPass indicates that the contents of any archive we're creating
	// will instantly be extracted and written to disk, so we can deviate
	// from the traditional behavior/format to get features like subsecond
	// precision in timestamps.
	CopyPass bool
}

TarOptions holds the options used when creating or extracting tar archives.

func (*TarOptions) MarshalJSON

func (j *TarOptions) MarshalJSON() ([]byte, error)

MarshalJSON marshals the receiver to JSON (ffjson-generated).

func (*TarOptions) MarshalJSONBuf

func (j *TarOptions) MarshalJSONBuf(buf fflib.EncodingBuffer) error

MarshalJSONBuf marshals the receiver to JSON using the provided buffer (ffjson-generated).

func (*TarOptions) UnmarshalJSON

func (j *TarOptions) UnmarshalJSON(input []byte) error

UnmarshalJSON unmarshals the receiver from JSON (ffjson-generated).

func (*TarOptions) UnmarshalJSONFFLexer

func (j *TarOptions) UnmarshalJSONFFLexer(fs *fflib.FFLexer, state fflib.FFParseState) error

UnmarshalJSONFFLexer unmarshals the receiver using the ffjson lexer (ffjson-generated).

type TempArchive

type TempArchive struct {
	*os.File
	Size int64 // Pre-computed from Stat().Size() as a convenience
	// contains filtered or unexported fields
}

TempArchive is a temporary archive. The archive can only be read once - as soon as reading completes, the file will be deleted.

func NewTempArchive

func NewTempArchive(src io.Reader, dir string) (*TempArchive, error)

NewTempArchive reads the content of src into a temporary file, and returns the contents of that file as an archive. The archive can only be read once - as soon as reading completes, the file will be deleted.
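
A sketch of buffering a generated archive to disk so its size is known before it is served; passing "" as the directory is assumed to fall back to the default temporary directory:

// Assumes: import "fmt", "io", "io/ioutil", and this package imported as archive.
func bufferArchive(srcDir string) error {
	src, err := archive.Tar(srcDir, archive.Uncompressed)
	if err != nil {
		return err
	}
	defer src.Close()
	tmp, err := archive.NewTempArchive(src, "") // "" assumed to mean the default temp dir
	if err != nil {
		return err
	}
	fmt.Printf("buffered %d bytes\n", tmp.Size)
	// Reading to completion deletes the temporary file.
	_, err = io.Copy(ioutil.Discard, tmp)
	return err
}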

func (*TempArchive) Close

func (archive *TempArchive) Close() error

Close closes the underlying file if it's still open, or does a no-op to allow callers to try to close the TempArchive multiple times safely.

func (*TempArchive) MarshalJSON

func (j *TempArchive) MarshalJSON() ([]byte, error)

MarshalJSON marshals the receiver to JSON (ffjson-generated).

func (*TempArchive) MarshalJSONBuf

func (j *TempArchive) MarshalJSONBuf(buf fflib.EncodingBuffer) error

MarshalJSONBuf marshals the receiver to JSON using the provided buffer (ffjson-generated).

func (*TempArchive) Read

func (archive *TempArchive) Read(data []byte) (int, error)

func (*TempArchive) UnmarshalJSON

func (j *TempArchive) UnmarshalJSON(input []byte) error

UnmarshalJSON unmarshals the receiver from JSON (ffjson-generated).

func (*TempArchive) UnmarshalJSONFFLexer

func (j *TempArchive) UnmarshalJSONFFLexer(fs *fflib.FFLexer, state fflib.FFParseState) error

UnmarshalJSONFFLexer unmarshals the receiver using the ffjson lexer (ffjson-generated).

type WhiteoutFormat

type WhiteoutFormat int

WhiteoutFormat is the on-disk format used to represent whiteouts.

const (
	// AUFSWhiteoutFormat is the default format for whiteouts
	AUFSWhiteoutFormat WhiteoutFormat = iota
	// OverlayWhiteoutFormat formats whiteout according to the overlay
	// standard.
	OverlayWhiteoutFormat
)
