Documentation ¶
Overview ¶
Package lz4 implements reading and writing lz4 compressed data (a frame), as specified in http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html, using an io.Reader (decompression) and io.Writer (compression). It is designed to minimize memory usage while maximizing throughput by being able to [de]compress data concurrently.
The Reader and the Writer support concurrent processing provided the supplied buffers are large enough (in multiples of BlockMaxSize) and there is no block dependency. Reader.WriteTo and Writer.ReadFrom leverage this concurrency transparently. Whether concurrency is applied depends on the runtime.GOMAXPROCS() value.
Although the block level compression and decompression functions are exposed and are fully compatible with the lz4 block format definition, they are low level and should not be used directly. For a complete description of an lz4 compressed block, see: http://fastcompression.blogspot.fr/2011/05/lz4-explained.html
See https://github.com/Cyan4973/lz4 for the reference C implementation.
Index ¶
Constants ¶
const (
	// Extension is the LZ4 frame file name extension.
	Extension = ".lz4"
	// Version is the LZ4 frame format version.
	Version = 1
)
Variables ¶
var (
	// ErrInvalidSource is returned by UncompressBlock when a compressed block is corrupted.
	ErrInvalidSource = errors.New("lz4: invalid source")
	// ErrShortBuffer is returned by UncompressBlock, CompressBlock or CompressBlockHC when
	// the supplied buffer for [de]compression is too small.
	ErrShortBuffer = errors.New("lz4: short buffer")
)
var ErrInvalid = errors.New("invalid lz4 data")
ErrInvalid is returned when the data being read is not an LZ4 archive (LZ4 magic number detection failed).
Functions ¶
func CompressBlock ¶
CompressBlock compresses the source buffer starting at soffset into the destination one. This is the fast version of LZ4 compression and also the default one.
The size of the compressed data is returned. If it is 0 and no error, then the data is incompressible.
An error is returned if the destination buffer is too small.
func CompressBlockBound ¶
CompressBlockBound returns the maximum compressed size of a buffer of size n, i.e. the size required when the data is not compressible.
func CompressBlockHC ¶
CompressBlockHC compresses the source buffer starting at soffset into the destination one. It achieves a better compression ratio than CompressBlock but is also slower.
The size of the compressed data is returned. If it is 0 and no error, then the data is not compressible.
An error is returned if the destination buffer is too small.
func UncompressBlock ¶
UncompressBlock decompresses the source buffer into the destination one, starting at the di index and returning the decompressed size.
The destination buffer must be sized appropriately.
An error is returned if the source data is invalid or the destination buffer is too small.
Types ¶
type Header ¶
type Header struct {
	BlockDependency bool   // compressed blocks are dependent (one block depends on the last 64KB of the previous one)
	BlockChecksum   bool   // compressed blocks are checksummed
	NoChecksum      bool   // disable the frame checksum
	BlockMaxSize    int    // the size of the decompressed data block (one of [64KB, 256KB, 1MB, 4MB]). Default=4MB.
	Size            uint64 // the frame total size. It is _not_ computed by the Writer.
	HighCompression bool   // use high compression (only for the Writer)
	// contains filtered or unexported fields
}
Header describes the various flags that can be set on a Writer or obtained from a Reader. The default values match those of the LZ4 frame format definition (http://fastcompression.blogspot.com/2013/04/lz4-streaming-format-final.html).
NB. in a Reader, in case of concatenated frames, the Header values may change between Read() calls. It is the caller's responsibility to check them if necessary (typically when using the Reader concurrently).
type Reader ¶
type Reader struct {
	Pos int64 // position within the source
	Header
	// contains filtered or unexported fields
}
Reader implements the LZ4 frame decoder. The Header is set after the first call to Read(). The Header may change between Read() calls in case of concatenated frames.
func NewReader ¶
NewReader returns a new LZ4 frame decoder. No access to the underlying io.Reader is performed.
func (*Reader) Read ¶
Read decompresses data from the underlying source into the supplied buffer.
Since there can be multiple streams concatenated, Header values may change between calls to Read(). If that is the case, no data is actually read from the underlying io.Reader, to allow for potential input buffer resizing.
Data is buffered internally if the supplied buffer is too small, and the buffered data is drained by successive calls to Read.
If the buffer is large enough (typically in multiples of BlockMaxSize) and there is no block dependency, then the data will be decompressed concurrently based on the GOMAXPROCS value.
type Writer ¶
type Writer struct {
	Header
	// contains filtered or unexported fields
}
Writer implements the LZ4 frame encoder.
func NewWriter ¶
NewWriter returns a new LZ4 frame encoder. No access to the underlying io.Writer is performed. The supplied Header is checked at the first Write. It is ok to change it before the first Write but then not until a Reset() is performed.
func (*Writer) Close ¶
Close closes the Writer, flushing any unwritten data to the underlying io.Writer, but does not close the underlying io.Writer.
func (*Writer) Flush ¶
Flush flushes any pending compressed data to the underlying writer. Flush does not return until the data has been written. If the underlying writer returns an error, Flush returns that error.
Flush is only required in BlockDependency mode, when the total amount of data written is less than 64KB.
func (*Writer) ReadFrom ¶
ReadFrom compresses the data read from the io.Reader and writes it to the underlying io.Writer. Returns the number of bytes read. It does not close the Writer.
func (*Writer) Reset ¶
Reset clears the state of the Writer z such that it is equivalent to its initial state from NewWriter, but writing to w instead. No access to the underlying io.Writer is performed.
func (*Writer) Write ¶
Write compresses data from the supplied buffer into the underlying io.Writer. Write does not return until the data has been written.
If the input buffer is large enough (typically in multiples of BlockMaxSize) the data will be compressed concurrently.
Write never buffers any data, except in BlockDependency mode, where it may do so until it has accumulated 64KB of data; after that it never buffers again.