sgzip

package module
v0.1.1
Published: Jan 30, 2021 License: MIT Imports: 11 Imported by: 7

Documentation

Overview

Package sgzip implements a seekable version of gzip format compressed files, compliant with RFC 1952.

This is a drop-in replacement for "compress/gzip". Compression is split into blocks that are compressed in parallel, which can be useful when compressing large amounts of data. The gzip decompression has not been modified, but remains in the package, so you can use it as a complete replacement for "compress/gzip".

See more at https://github.com/klauspost/pgzip
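
As a rough, hedged sketch (the import path below is a placeholder for this module, and the exact seek semantics should be verified against the source), compressing with a Writer, keeping its MetaData, and later seeking in the compressed data might look like this:

package main

import (
	"bytes"
	"fmt"
	"io"
	"log"

	sgzip "path/to/sgzip" // placeholder: replace with this module's import path
)

func main() {
	// Compress some data while recording the seek metadata.
	var compressed bytes.Buffer
	zw := sgzip.NewWriter(&compressed)
	if _, err := zw.Write(bytes.Repeat([]byte("hello sgzip "), 1000)); err != nil {
		log.Fatal(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}
	meta := zw.MetaData() // assumed to be called after Close, once all blocks are flushed

	// Reopen the compressed data with a seeking reader.
	zr, err := sgzip.NewSeekingReader(bytes.NewReader(compressed.Bytes()), &meta)
	if err != nil {
		log.Fatal(err)
	}
	defer zr.Close()

	// Seek to an offset in the uncompressed data (assumed semantics) and read from there.
	if _, err := zr.Seek(6000, io.SeekStart); err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 12)
	if _, err := io.ReadFull(zr, buf); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%q\n", buf)
}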

Index

Constants

const (
	NoCompression       = flate.NoCompression
	BestSpeed           = flate.BestSpeed
	BestCompression     = flate.BestCompression
	DefaultCompression  = flate.DefaultCompression
	ConstantCompression = flate.ConstantCompression
	HuffmanOnly         = flate.HuffmanOnly
)

These constants are copied from the flate package, so that code that imports "compress/gzip" does not also have to import "compress/flate".

Variables

var (
	// ErrUnsupported is returned when attempting an unsupported operation.
	ErrUnsupported = errors.New("gzip: unsupported operation")
	// ErrChecksum is returned when reading GZIP data that has an invalid checksum.
	ErrChecksum = errors.New("gzip: invalid checksum")
	// ErrHeader is returned when reading GZIP data that has an invalid header.
	ErrHeader = errors.New("gzip: invalid header")
	// ErrInvalidSeek is returned when attempting to seek to negative position or beyond the file size.
	ErrInvalidSeek = errors.New("gzip: invalid seek position")
)

Functions

This section is empty.

Types

type GzipMetadata

type GzipMetadata struct {
	BlockSize int
	Size      int64
	BlockData []uint32
}

GzipMetadata stores the metadata necessary to seek in the compressed file.
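
The package does not prescribe how this metadata is stored between runs; as one hedged option (an assumption, not part of the API), it can be serialized with encoding/gob and kept next to the compressed file:

package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"

	sgzip "path/to/sgzip" // placeholder: replace with this module's import path
)

func main() {
	var meta sgzip.GzipMetadata // in practice obtained from (*Writer).MetaData()

	// Encode the metadata so it can be stored alongside the compressed file.
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(meta); err != nil {
		log.Fatal(err)
	}

	// Decode it again before calling NewSeekingReader or NewReaderAt.
	var restored sgzip.GzipMetadata
	if err := gob.NewDecoder(&buf).Decode(&restored); err != nil {
		log.Fatal(err)
	}
	fmt.Println(restored.BlockSize, restored.Size, len(restored.BlockData))
}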

type Header

type Header struct {
	Comment string    // comment
	Extra   []byte    // "extra data"
	ModTime time.Time // modification time
	Name    string    // file name
	OS      byte      // operating system type
}

The gzip file stores a header giving metadata about the compressed file. That header is exposed as the fields of the Writer and Reader structs.

type Reader

type Reader struct {
	Header
	// contains filtered or unexported fields
}

A Reader is an io.Reader that can be read to retrieve uncompressed data from a gzip-format compressed file.

In general, a gzip file can be a concatenation of gzip files, each with its own header. Reads from the Reader return the concatenation of the uncompressed data of each. Only the first header is recorded in the Reader fields.

Gzip files store a length and checksum of the uncompressed data. The Reader will return an ErrChecksum when Read reaches the end of the uncompressed data if it does not have the expected length or checksum. Clients should treat data returned by Read as tentative until they receive the io.EOF marking the end of the data.

func NewReader

func NewReader(r io.Reader) (*Reader, error)

NewReader creates a new Reader reading the given reader. The implementation buffers input and may read more data than necessary from r. It is the caller's responsibility to call Close on the Reader when done.

func NewReaderAt

func NewReaderAt(r io.ReadSeeker, meta *GzipMetadata, pos int64) (*Reader, error)

NewReaderAt creates a new Reader reading the given reader. This is a special reader that starts at an offset and allows seeking in the compressed file using the supplied metadata. It is the caller's responsibility to call Close on the Reader when done.

func NewReaderN

func NewReaderN(r io.Reader, blockSize, blocks int) (*Reader, error)

NewReaderN creates a new Reader reading the given reader. The implementation buffers input and may read more data than necessary from r. It is the caller's responsibility to call Close on the Reader when done.

With this you can control the approximate size of your blocks, as well as how many blocks you want to have prefetched.

The default values are blockSize = 250000 and blocks = 16, meaning up to 16 blocks of at most 250000 bytes each will be prefetched.
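
A minimal sketch (file name and import path are placeholders) reading with smaller blocks and less prefetching than the defaults, for example to limit memory use:

package main

import (
	"io"
	"log"
	"os"

	sgzip "path/to/sgzip" // placeholder: replace with this module's import path
)

func main() {
	f, err := os.Open("data.gz") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Roughly 100 KB blocks with up to 4 blocks prefetched,
	// instead of the defaults of 250000 bytes and 16 blocks.
	zr, err := sgzip.NewReaderN(f, 100000, 4)
	if err != nil {
		log.Fatal(err)
	}
	defer zr.Close()

	if _, err := io.Copy(io.Discard, zr); err != nil {
		log.Fatal(err)
	}
}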

func NewSeekingReader

func NewSeekingReader(r io.ReadSeeker, meta *GzipMetadata) (*Reader, error)

NewSeekingReader creates a new Reader reading the given reader. This is a special reader that allows seeking in the compressed file using the supplied metadata. It is the caller's responsibility to call Close on the Reader when done.

func (*Reader) Close

func (z *Reader) Close() error

Close closes the Reader. It does not close the underlying io.Reader.

func (*Reader) Multistream

func (z *Reader) Multistream(ok bool)

Multistream controls whether the reader supports multistream files.

If enabled (the default), the Reader expects the input to be a sequence of individually gzipped data streams, each with its own header and trailer, ending at EOF. The effect is that the concatenation of a sequence of gzipped files is treated as equivalent to the gzip of the concatenation of the sequence. This is standard behavior for gzip readers.

Calling Multistream(false) disables this behavior; disabling the behavior can be useful when reading file formats that distinguish individual gzip data streams or mix gzip data streams with other data streams. In this mode, when the Reader reaches the end of the data stream, Read returns io.EOF. If the underlying reader implements io.ByteReader, it will be left positioned just after the gzip stream. To start the next stream, call z.Reset(r) followed by z.Multistream(false). If there is no next stream, z.Reset(r) will return io.EOF.
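
The following sketch (import path is a placeholder) mirrors the pattern described above: two independent gzip streams are written to one buffer, then read back one at a time with Multistream(false) and Reset:

package main

import (
	"bytes"
	"io"
	"log"
	"os"

	sgzip "path/to/sgzip" // placeholder: replace with this module's import path
)

func main() {
	// Build an input consisting of two independent gzip streams.
	var buf bytes.Buffer
	for _, s := range []string{"first stream\n", "second stream\n"} {
		zw := sgzip.NewWriter(&buf)
		if _, err := zw.Write([]byte(s)); err != nil {
			log.Fatal(err)
		}
		if err := zw.Close(); err != nil {
			log.Fatal(err)
		}
	}

	zr, err := sgzip.NewReader(&buf)
	if err != nil {
		log.Fatal(err)
	}

	// Read one stream at a time; Reset returns io.EOF when no stream is left.
	for {
		zr.Multistream(false)
		if _, err := io.Copy(os.Stdout, zr); err != nil {
			log.Fatal(err)
		}
		if err := zr.Reset(&buf); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
	}
}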

func (*Reader) Read

func (z *Reader) Read(p []byte) (n int, err error)

func (*Reader) Reset

func (z *Reader) Reset(r io.Reader) error

Reset discards the Reader z's state and makes it equivalent to the result of its original state from NewReader, but reading from r instead. This permits reusing a Reader rather than allocating a new one.

func (*Reader) Seek

func (z *Reader) Seek(offset int64, whence int) (int64, error)

Seek implements io.Seeker on the uncompressed data, setting the offset for the next Read according to whence. It returns ErrInvalidSeek when attempting to seek to a negative position or beyond the file size.

func (*Reader) WriteTo

func (z *Reader) WriteTo(w io.Writer) (n int64, err error)

WriteTo writes data to w until the buffer is drained or an error occurs. The return value n is the number of bytes written; it always fits into an int, but it is int64 to match the io.WriterTo interface. Any error encountered during the write is also returned.

type Writer

type Writer struct {
	Header
	// contains filtered or unexported fields
}

A Writer is an io.WriteCloser. Writes to a Writer are compressed and written to w.

func NewWriter

func NewWriter(w io.Writer) *Writer

NewWriter returns a new Writer. Writes to the returned writer are compressed and written to w.

It is the caller's responsibility to call Close on the WriteCloser when done. Writes may be buffered and not flushed until Close.

Callers that wish to set the fields in Writer.Header must do so before the first call to Write or Close. The Comment and Name header fields are UTF-8 strings in Go, but the underlying format requires NUL-terminated ISO 8859-1 (Latin-1). NUL or non-Latin-1 runes in those strings will lead to an error on Write.

func NewWriterLevel

func NewWriterLevel(w io.Writer, level int) (*Writer, error)

NewWriterLevel is like NewWriter but specifies the compression level instead of assuming DefaultCompression.

The compression level can be DefaultCompression, NoCompression, or any integer value between BestSpeed and BestCompression inclusive. The error returned will be nil if the level is valid.
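
A short sketch (output file name and import path are placeholders) that picks BestSpeed and sets the header fields before the first Write, as required above:

package main

import (
	"log"
	"os"
	"time"

	sgzip "path/to/sgzip" // placeholder: replace with this module's import path
)

func main() {
	out, err := os.Create("out.gz") // hypothetical output file
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	zw, err := sgzip.NewWriterLevel(out, sgzip.BestSpeed)
	if err != nil {
		log.Fatal(err) // only returned for an invalid level
	}

	// Header fields must be set before the first call to Write or Close.
	zw.Name = "report.txt"
	zw.Comment = "nightly export"
	zw.ModTime = time.Now()

	if _, err := zw.Write([]byte("example payload")); err != nil {
		log.Fatal(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}
}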

func (*Writer) BlockData

func (z *Writer) BlockData() []uint32

BlockData returns the block data used for seeking, corresponding to the BlockData field of GzipMetadata.

func (*Writer) Close

func (z *Writer) Close() error

Close closes the Writer, flushing any unwritten data to the underlying io.Writer, but does not close the underlying io.Writer.

func (*Writer) Flush

func (z *Writer) Flush() error

Flush flushes any pending compressed data to the underlying writer.

It is useful mainly in compressed network protocols, to ensure that a remote reader has enough data to reconstruct a packet. Flush does not return until the data has been written. If the underlying writer returns an error, Flush returns that error.

In the terminology of the zlib library, Flush is equivalent to Z_SYNC_FLUSH.

func (*Writer) MetaData

func (z *Writer) MetaData() GzipMetadata

MetaData returns the GzipMetadata needed to seek in the compressed file.

func (*Writer) Reset

func (z *Writer) Reset(w io.Writer)

Reset discards the Writer z's state and makes it equivalent to the result of its original state from NewWriter or NewWriterLevel, but writing to w instead. This permits reusing a Writer rather than allocating a new one.

func (*Writer) SetConcurrency

func (z *Writer) SetConcurrency(blockSize, blocks int) error

Use SetConcurrency to fine-tune the concurrency level if needed.

With this you can control the approximate size of your blocks, as well as how many you want to be processing in parallel.

The default is SetConcurrency(defaultBlockSize, runtime.GOMAXPROCS(0)), meaning blocks are split at 1 MB and up to as many blocks as there are CPU threads can be processed at once before the writer blocks.
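
A hedged sketch (file name and import path are placeholders) capping memory and parallelism with smaller blocks and fewer concurrent workers, set before any data is written:

package main

import (
	"log"
	"os"

	sgzip "path/to/sgzip" // placeholder: replace with this module's import path
)

func main() {
	out, err := os.Create("out.gz") // hypothetical output file
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	zw := sgzip.NewWriter(out)
	// 512 KB blocks and at most 4 blocks compressed in parallel,
	// instead of the 1 MB / GOMAXPROCS defaults described above.
	if err := zw.SetConcurrency(512<<10, 4); err != nil {
		log.Fatal(err)
	}

	if _, err := zw.Write([]byte("example payload")); err != nil {
		log.Fatal(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}
}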

func (*Writer) UncompressedSize

func (z *Writer) UncompressedSize() int64

UncompressedSize returns the number of uncompressed bytes written so far. This is a pgzip/sgzip extension and is not a function in the official gzip package.

func (*Writer) Write

func (z *Writer) Write(p []byte) (int, error)

Write writes a compressed form of p to the underlying io.Writer. The compressed bytes are not necessarily flushed to output until the Writer is closed or Flush() is called.

The function returns quickly if unused buffers are available. The passed slice (p) is copied, so the caller is free to reuse the buffer when the function returns.

Errors that occur during compression are reported later, and a nil error does not signify that the compression succeeded, since it is most likely still running. That means the call that returns an error may not be the call that caused it. Only Flush and Close are guaranteed to return any errors that have occurred up to that point.
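
Because of this, a careful caller checks the results of Flush and Close rather than relying on Write alone. A hedged sketch (helper name, file name, and import path are hypothetical):

package main

import (
	"log"
	"os"

	sgzip "path/to/sgzip" // placeholder: replace with this module's import path
)

// writeAll is a hypothetical helper: Write may return nil even if background
// compression later fails, so the Flush and Close results must be checked.
func writeAll(path string, chunks [][]byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	zw := sgzip.NewWriter(f)
	for _, c := range chunks {
		if _, err := zw.Write(c); err != nil {
			return err // an error from an earlier, already-failed block
		}
	}
	if err := zw.Flush(); err != nil { // surfaces pending compression errors
		return err
	}
	return zw.Close()
}

func main() {
	if err := writeAll("out.gz", [][]byte{[]byte("hello "), []byte("world")}); err != nil {
		log.Fatal(err)
	}
}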
