zstd


Zstd Go Wrapper


C Zstd Homepage

The current headers and C files are from v1.5.6 (Commit 794ea1b).

Usage

There are two main APIs:

  • simple Compress/Decompress
  • streaming API (io.Reader/io.Writer)

The compress/decompress APIs mirror those of lz4, while the streaming API was designed to be a drop-in replacement for zlib.

Building against an external libzstd

By default, the zstd source code is vendored in this repository, and the binding is built against that bundled copy.

If you want to build this binding against an external static or shared libzstd library, use the external_libzstd build tag. This looks up the libzstd pkg-config file and extracts the build and linking parameters from it.

Note that it requires at least libzstd 1.4.0.

go build -tags external_libzstd
Simple Compress/Decompress
// Compress compresses the byte array given in src and writes it to dst.
// If you already have a buffer allocated, you can pass it to prevent allocation.
// If not, you can pass nil as dst.
// If the buffer is too small, it will be reallocated, resized, and returned by the function.
// If dst is nil, this will allocate the worst-case size (CompressBound(src)).
Compress(dst, src []byte) ([]byte, error)
// CompressLevel is the same as Compress but you can pass a different compression level
CompressLevel(dst, src []byte, level int) ([]byte, error)
// Decompress will decompress your payload into dst.
// If you already have a buffer allocated, you can pass it to prevent allocation.
// If not, you can pass nil as dst (a buffer of 4*len(src) is allocated by default).
// If the buffer is too small, it will retry 3 times, doubling the dst size each time.
// After the maximum number of retries, it switches to the slower stream API to
// guarantee decompression. Currently it switches when the compression ratio > 4*2**3 = 32.
Decompress(dst, src []byte) ([]byte, error)
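
A minimal round-trip sketch of the simple API (assuming the github.com/DataDog/zstd import path; the payload is illustrative):

package main

import (
	"fmt"

	"github.com/DataDog/zstd"
)

func main() {
	src := []byte("hello, zstd")

	// Passing nil lets Compress allocate a worst-case buffer (CompressBound).
	compressed, err := zstd.Compress(nil, src)
	if err != nil {
		panic(err)
	}

	// Passing nil again lets Decompress manage the output buffer itself.
	decompressed, err := zstd.Decompress(nil, compressed)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", decompressed)
}
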
Stream API
// NewWriter creates a new object that can optionally be initialized with
// a precomputed dictionary. If dict is nil, compress without a dictionary.
// The dictionary array should not be changed during the use of this object.
// You MUST CALL Close() to write the last bytes of a zstd stream and free C objects.
NewWriter(w io.Writer) *Writer
NewWriterLevel(w io.Writer, level int) *Writer
NewWriterLevelDict(w io.Writer, level int, dict []byte) *Writer

// Write compresses the input data and writes it to the underlying writer
(w *Writer) Write(p []byte) (int, error)

// Flush writes any unwritten data to the underlying writer
(w *Writer) Flush() error

// Close flushes the buffer and frees C zstd objects
(w *Writer) Close() error
// NewReader returns a new io.ReadCloser that will decompress data from the
// underlying reader.  If a dictionary is provided to NewReaderDict, it must
// not be modified until Close is called.  It is the caller's responsibility
// to call Close, which frees up C objects.
NewReader(r io.Reader) io.ReadCloser
NewReaderDict(r io.Reader, dict []byte) io.ReadCloser
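
A minimal streaming round-trip sketch under the same assumptions. Close is required on both ends: on the writer to finish the zstd stream, and on the reader to free the C objects.

package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/DataDog/zstd"
)

func main() {
	var buf bytes.Buffer

	// Compress into an in-memory buffer; any io.Writer works.
	w := zstd.NewWriter(&buf)
	if _, err := w.Write([]byte("streaming payload")); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}

	// Decompress back; any io.Reader works.
	r := zstd.NewReader(&buf)
	defer r.Close()
	out, err := io.ReadAll(r)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", out)
}
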
Benchmarks (benchmarked with v0.5.0)

The author of Zstd also wrote lz4. Zstd is intended to occupy a speed/ratio level similar to what zlib currently provides. In our tests, zstd can always be made to compress better than zlib by choosing an appropriate level, while still keeping compression and decompression faster than zlib.

You can run the benchmarks against your own payloads by using the Go benchmark tool. Just export your payload file path as the PAYLOAD environment variable and run the benchmarks:

go test -bench .

Compression of a 7MB PDF, zstd (this wrapper) vs czlib:

BenchmarkCompression               5     221056624 ns/op      67.34 MB/s
BenchmarkDecompression           100      18370416 ns/op     810.32 MB/s

BenchmarkFzlibCompress             2     610156603 ns/op      24.40 MB/s
BenchmarkFzlibDecompress          20      81195246 ns/op     183.33 MB/s

The compression ratio is also better by a margin of ~20%. Compression speed was better than zlib on every payload we tested; however, czlib has optimisations that make it faster at decompressing small payloads:

Testing with size: 11... czlib: 8.97 MB/s, zstd: 3.26 MB/s
Testing with size: 27... czlib: 23.3 MB/s, zstd: 8.22 MB/s
Testing with size: 62... czlib: 31.6 MB/s, zstd: 19.49 MB/s
Testing with size: 141... czlib: 74.54 MB/s, zstd: 42.55 MB/s
Testing with size: 323... czlib: 155.14 MB/s, zstd: 99.39 MB/s
Testing with size: 739... czlib: 235.9 MB/s, zstd: 216.45 MB/s
Testing with size: 1689... czlib: 116.45 MB/s, zstd: 345.64 MB/s
Testing with size: 3858... czlib: 176.39 MB/s, zstd: 617.56 MB/s
Testing with size: 8811... czlib: 254.11 MB/s, zstd: 824.34 MB/s
Testing with size: 20121... czlib: 197.43 MB/s, zstd: 1339.11 MB/s
Testing with size: 45951... czlib: 201.62 MB/s, zstd: 1951.57 MB/s

zstd starts to shine with payloads > 1KB.

Stability - Current state: STABLE

The C library appears to be quite stable and, according to its author, has been tested and fuzzed.

For the Go wrapper, the tests cover the most common cases, and we have successfully run it against all of our staging and production data.

Documentation

Index

Constants

const (
	BestSpeed          = 1
	BestCompression    = 20
	DefaultCompression = 5
)

These constants define the best and standard compression levels, mirroring the zstd CLI.

Variables

var (
	// ErrEmptyDictionary is returned when the given dictionary is empty
	ErrEmptyDictionary = errors.New("Dictionary is empty")
	// ErrBadDictionary is returned when cannot load the given dictionary
	ErrBadDictionary = errors.New("Cannot load dictionary")
)
var (
	// ErrEmptySlice is returned when there is nothing to compress
	ErrEmptySlice = errors.New("Bytes slice is empty")
)
var ErrNoParallelSupport = errors.New("No parallel support")

Functions

func Compress

func Compress(dst, src []byte) ([]byte, error)

Compress src into dst. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.

func CompressBound

func CompressBound(srcSize int) int

CompressBound returns the worst-case size needed for a destination buffer, which can be used to preallocate a destination buffer or to select a previously allocated buffer from a pool. It mirrors the implementation of ZSTD_COMPRESSBOUND in zstd.h.
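
A short sketch of preallocating with CompressBound (the payload is illustrative):

package main

import (
	"bytes"
	"fmt"

	"github.com/DataDog/zstd"
)

func main() {
	src := bytes.Repeat([]byte("abc"), 1024)

	// Preallocate a worst-case destination once, e.g. for reuse via a pool.
	dst := make([]byte, zstd.CompressBound(len(src)))

	dst, err := zstd.Compress(dst, src)
	if err != nil {
		panic(err)
	}
	fmt.Printf("compressed %d bytes into %d\n", len(src), len(dst))
}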

func CompressLevel

func CompressLevel(dst, src []byte, level int) ([]byte, error)

CompressLevel is the same as Compress but you can pass a compression level

func Decompress

func Decompress(dst, src []byte) ([]byte, error)

Decompress src into dst. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.

func DecompressInto added in v1.5.6

func DecompressInto(dst, src []byte) (int, error)

DecompressInto decompresses src into dst. Unlike Decompress, DecompressInto requires that dst be sufficiently large to hold the decompressed payload. DecompressInto may be used when the caller knows the size of the decompressed payload before attempting decompression.

It returns the number of bytes copied and an error if any is encountered. If dst is too small, DecompressInto errors.
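
A minimal sketch, assuming the caller stored the decompressed size (here 18 bytes) alongside the payload:

package main

import (
	"fmt"

	"github.com/DataDog/zstd"
)

func main() {
	compressed, err := zstd.Compress(nil, []byte("known-size payload"))
	if err != nil {
		panic(err)
	}

	// The caller knows the decompressed size (18 bytes) in advance,
	// e.g. because it was recorded next to the compressed payload.
	dst := make([]byte, 18)
	n, err := zstd.DecompressInto(dst, compressed)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", dst[:n])
}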

func IsDstSizeTooSmallError added in v1.3.4

func IsDstSizeTooSmallError(e error) bool

IsDstSizeTooSmallError returns whether the error corresponds to the standard zstd dstSize_tooSmall error
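
One possible pattern, sketched with a deliberately undersized buffer: try DecompressInto, and on a dstSize-too-small error fall back to Decompress, which manages the buffer itself.

package main

import (
	"fmt"

	"github.com/DataDog/zstd"
)

func main() {
	compressed, err := zstd.Compress(nil, []byte("payload that will not fit"))
	if err != nil {
		panic(err)
	}

	dst := make([]byte, 4) // deliberately undersized for illustration
	n, err := zstd.DecompressInto(dst, compressed)
	if zstd.IsDstSizeTooSmallError(err) {
		// Fall back to Decompress, which grows the buffer as needed.
		dst, err = zstd.Decompress(nil, compressed)
		n = len(dst)
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", dst[:n])
}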

func NewReader

func NewReader(r io.Reader) io.ReadCloser

NewReader creates a new io.ReadCloser. Reads from the returned ReadCloser read and decompress data from r. It is the caller's responsibility to call Close on the ReadCloser when done. If this is not done, underlying objects in the zstd library will not be freed.

func NewReaderDict

func NewReaderDict(r io.Reader, dict []byte) io.ReadCloser

NewReaderDict is like NewReader but uses a preset dictionary. NewReaderDict ignores the dictionary if it is nil.

Types

type BulkProcessor added in v1.5.2

type BulkProcessor struct {
	// contains filtered or unexported fields
}

BulkProcessor implements the bulk processing dictionary API. When compressing multiple messages or blocks using the same dictionary, it is recommended to digest the dictionary only once, since digestion is a costly operation. NewBulkProcessor() creates a state from a digested dictionary; the resulting state can be used for future compression/decompression operations with very limited startup cost. A BulkProcessor can be created once and shared by multiple threads concurrently, since its usage is read-only. The state is freed when the garbage collector reclaims the BulkProcessor.

func NewBulkProcessor added in v1.5.2

func NewBulkProcessor(dictionary []byte, compressionLevel int) (*BulkProcessor, error)

NewBulkProcessor creates a new BulkProcessor with a pre-trained dictionary and compression level

func (*BulkProcessor) Compress added in v1.5.2

func (p *BulkProcessor) Compress(dst, src []byte) ([]byte, error)

Compress compresses `src` into `dst` with the dictionary given when creating the BulkProcessor. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.

func (*BulkProcessor) Decompress added in v1.5.2

func (p *BulkProcessor) Decompress(dst, src []byte) ([]byte, error)

Decompress decompresses `src` into `dst` with the dictionary given when creating the BulkProcessor. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.
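
A short sketch of the bulk API. The raw byte dictionary below is only a stand-in for illustration; a real deployment would load a trained dictionary.

package main

import (
	"fmt"

	"github.com/DataDog/zstd"
)

func main() {
	// Stand-in raw-content dictionary; normally trained and loaded from disk.
	dict := []byte("common prefix shared by many small messages")

	p, err := zstd.NewBulkProcessor(dict, zstd.DefaultCompression)
	if err != nil {
		panic(err)
	}

	// The digested dictionary is reused across messages without re-digesting.
	for _, msg := range []string{"common prefix one", "common prefix two"} {
		c, err := p.Compress(nil, []byte(msg))
		if err != nil {
			panic(err)
		}
		d, err := p.Decompress(nil, c)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s\n", d)
	}
}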

type Ctx added in v1.4.8

type Ctx interface {
	// Compress src into dst.  If you have a buffer to use, you can pass it to
	// prevent allocation.  If it is too small, or if nil is passed, a new buffer
	// will be allocated and returned.
	Compress(dst, src []byte) ([]byte, error)

	// CompressLevel is the same as Compress but you can pass a compression level
	CompressLevel(dst, src []byte, level int) ([]byte, error)

	// Decompress src into dst.  If you have a buffer to use, you can pass it to
	// prevent allocation.  If it is too small, or if nil is passed, a new buffer
	// will be allocated and returned.
	Decompress(dst, src []byte) ([]byte, error)
}

func NewCtx added in v1.4.8

func NewCtx() Ctx

NewCtx creates a new zstd context.

When compressing or decompressing many times, it is recommended to allocate a context just once and reuse it for each successive operation. This makes the workload friendlier to the system's memory. Note: reusing a context is only a speed/resource optimization; it does not change the compression ratio, which remains identical. Note 2: in multi-threaded environments, use a separate context per thread for parallel execution.
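
A minimal sketch of context reuse (payloads are illustrative):

package main

import (
	"fmt"

	"github.com/DataDog/zstd"
)

func main() {
	// One context, reused across calls; this avoids per-call setup cost
	// without changing the compression ratio.
	ctx := zstd.NewCtx()

	for _, payload := range []string{"first", "second", "third"} {
		c, err := ctx.Compress(nil, []byte(payload))
		if err != nil {
			panic(err)
		}
		d, err := ctx.Decompress(nil, c)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s\n", d)
	}
}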

type ErrorCode

type ErrorCode int

ErrorCode is an error returned by the zstd library.

func (ErrorCode) Error

func (e ErrorCode) Error() string

Error returns the error string given by zstd

type Writer

type Writer struct {
	CompressionLevel int
	// contains filtered or unexported fields
}

Writer is an io.WriteCloser that zstd-compresses its input.

func NewWriter

func NewWriter(w io.Writer) *Writer

NewWriter creates a new Writer with default compression options. Writes to the writer will be written in compressed form to w.

func NewWriterLevel

func NewWriterLevel(w io.Writer, level int) *Writer

NewWriterLevel is like NewWriter but specifies the compression level instead of assuming default compression.

The level can be DefaultCompression or any integer value between BestSpeed and BestCompression inclusive.

func NewWriterLevelDict

func NewWriterLevelDict(w io.Writer, level int, dict []byte) *Writer

NewWriterLevelDict is like NewWriterLevel but specifies a dictionary to compress with. If the dictionary is empty or nil it is ignored. The dictionary should not be modified until the writer is closed.
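
A sketch of dictionary-based streaming. The raw dictionary is illustrative; note that the reader must be given the same dictionary the writer used.

package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/DataDog/zstd"
)

func main() {
	// Illustrative raw dictionary; real dictionaries are usually trained.
	dict := []byte("shared sample content")

	var buf bytes.Buffer
	w := zstd.NewWriterLevelDict(&buf, zstd.BestSpeed, dict)
	if _, err := w.Write([]byte("shared sample content, roughly")); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}

	// Decompression requires the same dictionary.
	r := zstd.NewReaderDict(&buf, dict)
	defer r.Close()
	out, err := io.ReadAll(r)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", out)
}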

func (*Writer) Close

func (w *Writer) Close() error

Close closes the Writer, flushing any unwritten data to the underlying io.Writer and freeing objects, but does not close the underlying io.Writer.

func (*Writer) Flush added in v1.4.8

func (w *Writer) Flush() error

Flush writes any unwritten data to the underlying io.Writer.

func (*Writer) SetNbWorkers added in v1.5.5

func (w *Writer) SetNbWorkers(n int) error

SetNbWorkers sets the number of workers used to run compression in parallel across multiple threads. If n > 1, the Write() call becomes asynchronous: data is buffered until processed, so calling Write() faster than the workers can drain it may hold a memory buffer up to as large as your input. Consider calling Flush() periodically if you need to compress a very large file that would not fit entirely in memory. By default only one worker is used.
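
A rough sketch of parallel compression with periodic flushes. The worker count and chunk sizes are illustrative, and SetNbWorkers may return ErrNoParallelSupport if the library was built without multi-thread support.

package main

import (
	"bytes"

	"github.com/DataDog/zstd"
)

func main() {
	var buf bytes.Buffer
	w := zstd.NewWriter(&buf)

	// With more than one worker, Write becomes asynchronous and buffers input.
	if err := w.SetNbWorkers(4); err != nil {
		// e.g. ErrNoParallelSupport if built without multi-thread support.
		panic(err)
	}

	for i := 0; i < 1000; i++ {
		if _, err := w.Write(bytes.Repeat([]byte("chunk "), 100)); err != nil {
			panic(err)
		}
		// Flushing periodically bounds the memory held by the async buffer.
		if i%100 == 0 {
			if err := w.Flush(); err != nil {
				panic(err)
			}
		}
	}
	if err := w.Close(); err != nil {
		panic(err)
	}
}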

func (*Writer) Write

func (w *Writer) Write(p []byte) (int, error)

Write writes a compressed form of p to the underlying io.Writer.
