Documentation ¶
Index ¶
- Constants
- Variables
- func Compress(dst, src []byte) ([]byte, error)
- func CompressBound(srcSize int) int
- func CompressLevel(dst, src []byte, level int) ([]byte, error)
- func Decompress(dst, src []byte) ([]byte, error)
- func DecompressInto(dst, src []byte) (int, error)
- func IsDstSizeTooSmallError(e error) bool
- func NewReader(r io.Reader) io.ReadCloser
- func NewReaderDict(r io.Reader, dict []byte) io.ReadCloser
- type BulkProcessor
- type Ctx
- type ErrorCode
- type Writer
Constants ¶
const (
	BestSpeed          = 1
	BestCompression    = 20
	DefaultCompression = 5
)
Defines the best and standard compression levels, mirroring the zstd CLI.
Variables ¶
var (
	// ErrEmptyDictionary is returned when the given dictionary is empty
	ErrEmptyDictionary = errors.New("Dictionary is empty")
	// ErrBadDictionary is returned when the given dictionary cannot be loaded
	ErrBadDictionary = errors.New("Cannot load dictionary")
)
var (
	// ErrEmptySlice is returned when there is nothing to compress
	ErrEmptySlice = errors.New("Bytes slice is empty")
)
var ErrNoParallelSupport = errors.New("No parallel support")
Functions ¶
func Compress ¶
func Compress(dst, src []byte) ([]byte, error)
Compress src into dst. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.
func CompressBound ¶
func CompressBound(srcSize int) int
CompressBound returns the worst-case size needed for a destination buffer, which can be used to preallocate a destination buffer or select a previously allocated buffer from a pool. It mirrors the ZSTD_COMPRESSBOUND macro from zstd.h.
func CompressLevel ¶
func CompressLevel(dst, src []byte, level int) ([]byte, error)
CompressLevel is the same as Compress but lets you pass a compression level.
func Decompress ¶
func Decompress(dst, src []byte) ([]byte, error)
Decompress src into dst. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.
func DecompressInto ¶ added in v1.5.6
func DecompressInto(dst, src []byte) (int, error)
DecompressInto decompresses src into dst. Unlike Decompress, DecompressInto requires that dst be sufficiently large to hold the decompressed payload. DecompressInto may be used when the caller knows the size of the decompressed payload before attempting decompression.
It returns the number of bytes copied and an error if any is encountered. If dst is too small, DecompressInto errors.
func IsDstSizeTooSmallError ¶ added in v1.3.4
func IsDstSizeTooSmallError(e error) bool
IsDstSizeTooSmallError returns whether the error corresponds to zstd's standard dstSize_tooSmall error.
func NewReader ¶
func NewReader(r io.Reader) io.ReadCloser
NewReader creates a new io.ReadCloser. Reads from the returned ReadCloser read and decompress data from r. It is the caller's responsibility to call Close on the ReadCloser when done. If this is not done, underlying objects in the zstd library will not be freed.
func NewReaderDict ¶
func NewReaderDict(r io.Reader, dict []byte) io.ReadCloser
NewReaderDict is like NewReader but uses a preset dictionary. NewReaderDict ignores the dictionary if it is nil.
Types ¶
type BulkProcessor ¶ added in v1.5.2
type BulkProcessor struct {
// contains filtered or unexported fields
}
BulkProcessor implements the bulk-processing dictionary API. When compressing multiple messages or blocks using the same dictionary, it's recommended to digest the dictionary only once, since digesting is a costly operation. NewBulkProcessor() creates a state from digesting a dictionary; the resulting state can be used for future compression/decompression operations with very limited startup cost. A BulkProcessor can be created once and shared by multiple threads concurrently, since its usage is read-only. The state is freed when the garbage collector cleans up the BulkProcessor.
func NewBulkProcessor ¶ added in v1.5.2
func NewBulkProcessor(dictionary []byte, compressionLevel int) (*BulkProcessor, error)
NewBulkProcessor creates a new BulkProcessor with a pre-trained dictionary and compression level.
func (*BulkProcessor) Compress ¶ added in v1.5.2
func (p *BulkProcessor) Compress(dst, src []byte) ([]byte, error)
Compress compresses `src` into `dst` with the dictionary given when creating the BulkProcessor. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.
func (*BulkProcessor) Decompress ¶ added in v1.5.2
func (p *BulkProcessor) Decompress(dst, src []byte) ([]byte, error)
Decompress decompresses `src` into `dst` with the dictionary given when creating the BulkProcessor. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.
type Ctx ¶ added in v1.4.8
type Ctx interface {
	// Compress src into dst. If you have a buffer to use, you can pass it to
	// prevent allocation. If it is too small, or if nil is passed, a new buffer
	// will be allocated and returned.
	Compress(dst, src []byte) ([]byte, error)

	// CompressLevel is the same as Compress but you can pass a compression level
	CompressLevel(dst, src []byte, level int) ([]byte, error)

	// Decompress src into dst. If you have a buffer to use, you can pass it to
	// prevent allocation. If it is too small, or if nil is passed, a new buffer
	// will be allocated and returned.
	Decompress(dst, src []byte) ([]byte, error)
}
func NewCtx ¶ added in v1.4.8
func NewCtx() Ctx
NewCtx creates a new zstd Context.
When compressing/decompressing many times, it is recommended to allocate a context just once and re-use it for each successive compression operation; this makes the workload friendlier to the system's memory. Note: re-using a context is only a speed/resource optimization; it doesn't change the compression ratio, which remains identical. Note 2: in multi-threaded environments, use one context per thread for parallel execution.
type Writer ¶
type Writer struct {
	CompressionLevel int
	// contains filtered or unexported fields
}
Writer is an io.WriteCloser that zstd-compresses its input.
func NewWriter ¶
func NewWriter(w io.Writer) *Writer
NewWriter creates a new Writer with default compression options. Writes to the writer are written in compressed form to w.
func NewWriterLevel ¶
func NewWriterLevel(w io.Writer, level int) *Writer
NewWriterLevel is like NewWriter but specifies the compression level instead of assuming default compression.
The level can be DefaultCompression or any integer between BestSpeed and BestCompression inclusive.
func NewWriterLevelDict ¶
func NewWriterLevelDict(w io.Writer, level int, dict []byte) *Writer
NewWriterLevelDict is like NewWriterLevel but specifies a dictionary to compress with. If the dictionary is empty or nil it is ignored. The dictionary should not be modified until the writer is closed.
func (*Writer) Close ¶
func (w *Writer) Close() error
Close closes the Writer, flushing any unwritten data to the underlying io.Writer and freeing objects, but does not close the underlying io.Writer.
func (*Writer) SetNbWorkers ¶ added in v1.5.5
func (w *Writer) SetNbWorkers(n int) error
SetNbWorkers sets the number of workers used to run compression in parallel across multiple threads. If n > 1, Write() becomes asynchronous: data is buffered until processed, so calling Write() too fast may accumulate a memory buffer as large as your input. Consider calling Flush() periodically if you need to compress a very large file that would not fit entirely in memory. By default only one worker is used.