huff0 v1.7.4
Published: Jul 15, 2019 License: BSD-3-Clause
Huff0 entropy compression

This package provides Huff0 encoding and decoding as used in zstd.

Huff0 is a Huffman codec designed for modern CPUs, featuring OoO (Out of Order) operation on multiple ALUs (Arithmetic Logic Units), achieving extremely fast compression and decompression speeds.

This can be used to compress input with many similar values into the smallest number of bytes. It does not perform any multi-byte dictionary coding like LZ coders do, but it can be used as a secondary step after compressors (like Snappy) that do not do entropy encoding.

THIS PACKAGE IS NOT CONSIDERED STABLE AND API OR ENCODING MAY CHANGE IN THE FUTURE.

News

  • Mar 2018: First implementation released. Consider this beta software for now.

Usage

This package provides a low-level interface for compressing single, independent blocks.

Each block is separate, and there are no built-in integrity checks. This means the caller should keep track of block sizes and also compute checksums if needed.

Compressing a block is done via the Compress1X and Compress4X functions. You provide the input and receive the output and possibly an error.

These error values can be returned:

Error             | Description
<nil>             | Everything ok, output is returned.
ErrIncompressible | Returned when input is judged to be too hard to compress.
ErrUseRLE         | Returned from the compressor when the input is a single byte value repeated.
ErrTooBig         | Returned if the input block exceeds the maximum allowed size (BlockSizeMax, just under 256 KiB).
(error)           | An internal error occurred.

As can be seen above, some of these errors will be returned even under normal operation, so it is important to handle them.

To reduce allocations you can provide a Scratch object that can be re-used for successive calls. Both compression and decompression accept a Scratch object, and the same object can be used for both.

Be aware that when re-using a Scratch object, the output buffer is also re-used, so if you are still using the previous output you must set the Out field in the Scratch to nil. The same buffer is used for compression and decompression output.

The Scratch object retains state that allows re-using previous tables for encoding and decoding.

Tables and re-use

Huff0 allows for reusing tables from the previous block to save space if that is expected to give better/faster results.

The Scratch object allows you to set a ReusePolicy that controls this behaviour. See the documentation for details. This can be altered between each block.

Do note, however, that this information is not stored in the output block, and it is up to users of the package to record whether ReadTable should be called, based on the boolean reported back from the CompressXX call.

If you want to store the table separate from the data, you can access them as OutData and OutTable on the Scratch object.

Decompressing

The first step of decoding is to initialize the decoding tables through ReadTable. You can supply the complete block to ReadTable and it will return the data part of the block, which can be given to the decompressor.

Decompressing is done by calling the Decompress1X or Decompress4X function.

You must provide the output from the compression stage, at exactly the size you got back. If you receive an error, your input was likely corrupted.

It is important to note that a successful decoding does not mean your output matches your original input. There are no integrity checks, so relying on errors from the decompressor does not assure your data is valid.

Contributing

Contributions are always welcome. Be aware that adding public functions will require good justification and breaking changes will likely not be accepted. If in doubt open an issue before writing the PR.

Documentation

Overview

Package huff0 provides fast huffman encoding as used in zstd.

See README.md at https://github.com/klauspost/compress/tree/master/huff0 for details.

Index

Constants

View Source
const (

	// BlockSizeMax is maximum input size for a single block uncompressed.
	BlockSizeMax = 1<<18 - 1
)

Variables

View Source
var (
	// ErrIncompressible is returned when input is judged to be too hard to compress.
	ErrIncompressible = errors.New("input is not compressible")

	// ErrUseRLE is returned from the compressor when the input is a single byte value repeated.
	ErrUseRLE = errors.New("input is single value repeated")

	// ErrTooBig is returned if input is too large for a single block.
	ErrTooBig = errors.New("input too big")
)

Functions

func Compress1X

func Compress1X(in []byte, s *Scratch) (out []byte, reUsed bool, err error)

Compress1X will compress the input. The output can be decoded using Decompress1X. Supply a Scratch object. The Scratch object contains state about re-use, so when sharing across independent encodes, be sure to set the re-use policy.

func Compress4X

func Compress4X(in []byte, s *Scratch) (out []byte, reUsed bool, err error)

Compress4X will compress the input. The input is split into 4 independent blocks and compressed similarly to Compress1X. The output can be decoded using Decompress4X. Supply a Scratch object. The Scratch object contains state about re-use, so when sharing across independent encodes, be sure to set the re-use policy.

Types

type ReusePolicy

type ReusePolicy uint8
const (
	// ReusePolicyAllow will allow reuse if it produces smaller output.
	ReusePolicyAllow ReusePolicy = iota

	// ReusePolicyPrefer will re-use aggressively if possible.
	// This will not check if a new table will produce smaller output,
	// except if the current table is impossible to use or
	// compressed output is bigger than input.
	ReusePolicyPrefer

	// ReusePolicyNone will disable re-use of tables.
	// This is slightly faster than ReusePolicyAllow but may produce larger output.
	ReusePolicyNone
)

type Scratch

type Scratch struct {

	// Out is output buffer.
	// If the scratch is re-used before the caller is done processing the output,
	// set this field to nil.
	// Otherwise the output buffer will be re-used for next Compression/Decompression step
	// and allocation will be avoided.
	Out []byte

	// OutTable will contain the table data only, if a new table has been generated.
	// Slice of the returned data.
	OutTable []byte

	// OutData will contain the compressed data.
	// Slice of the returned data.
	OutData []byte

	// MaxSymbolValue will override the maximum symbol value of the next block.
	MaxSymbolValue uint8

	// TableLog will attempt to override the tablelog for the next block.
	// Must be <= 11.
	TableLog uint8

	// Reuse will specify the reuse policy
	Reuse ReusePolicy
	// contains filtered or unexported fields
}

func ReadTable

func ReadTable(in []byte, s *Scratch) (s2 *Scratch, remain []byte, err error)

ReadTable will read a table from the input. The size of the input may be larger than the table definition. Any content remaining after the table definition will be returned. If no Scratch is provided a new one is allocated. The returned Scratch can be used for decoding input using this table.

func (*Scratch) Decompress1X

func (s *Scratch) Decompress1X(in []byte) (out []byte, err error)

Decompress1X will decompress a 1X encoded stream. The length of the supplied input must match the end of a block exactly. Before this is called, the table must be initialized with ReadTable unless the encoder re-used the table.

func (*Scratch) Decompress4X

func (s *Scratch) Decompress4X(in []byte, dstSize int) (out []byte, err error)

Decompress4X will decompress a 4X encoded stream. Before this is called, the table must be initialized with ReadTable unless the encoder re-used the table. The length of the supplied input must match the end of a block exactly. The destination size of the uncompressed data must be known and provided.
