Documentation ¶
Overview ¶
Package compressio provides parallel compression and decompression, as well as optional SHA-256 hashing. It also provides another storage variant (nocompressio) that does not compress data but tracks its integrity.
The stream format is defined as follows.
/------------------------------------------------------\
|                 chunk size (4-bytes)                  |
+------------------------------------------------------+
|              (optional) hash (32-bytes)               |
+------------------------------------------------------+
|            compressed data size (4-bytes)             |
+------------------------------------------------------+
|                    compressed data                    |
+------------------------------------------------------+
|              (optional) hash (32-bytes)               |
+------------------------------------------------------+
|            compressed data size (4-bytes)             |
+------------------------------------------------------+
|                         ......                        |
\------------------------------------------------------/
where each subsequent hash is calculated from the following items in order
	compressed data
	compressed data size
	previous hash
so the stream integrity cannot be compromised by switching and mixing compressed chunks.
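For illustration, the sketch below mirrors that chaining order in Go. Only the hashed items and their order come from the description above; keying the hash with HMAC-SHA-256, encoding the size as 4 little-endian bytes, and starting the chain from an all-zero hash are assumptions made for this example, not statements about the package's actual construction.

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// chainHash hashes the items named above in order: compressed data, then the
// compressed data size, then the previous hash. The HMAC-SHA-256 keying, the
// little-endian size encoding, and the zeroed initial hash are assumptions.
func chainHash(key, compressed, prev []byte) []byte {
	h := hmac.New(sha256.New, key)
	h.Write(compressed)
	var size [4]byte
	binary.LittleEndian.PutUint32(size[:], uint32(len(compressed)))
	h.Write(size[:])
	h.Write(prev)
	return h.Sum(nil)
}

func main() {
	key := []byte("example-key")
	prev := make([]byte, sha256.Size) // assumed zero value before the first chunk
	h1 := chainHash(key, []byte("chunk one"), prev)
	h2 := chainHash(key, []byte("chunk two"), h1) // depends on h1, so chunks cannot be reordered
	fmt.Printf("h1=%x\nh2=%x\n", h1, h2)
}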
Index ¶
Constants ¶
This section is empty.
Variables ¶
var ErrHashMismatch = errors.New("hash mismatch")
ErrHashMismatch is returned if the hash does not match.
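Because ErrHashMismatch is a sentinel error, callers can detect it with errors.Is. The sketch below assumes the error surfaces from reads of a corrupted hashed stream and that the package is imported from gvisor.dev/gvisor/pkg/compressio; both are assumptions, and only the ErrHashMismatch variable itself is documented above.

package compressioexample

import (
	"errors"
	"fmt"
	"io"

	"gvisor.dev/gvisor/pkg/compressio" // import path is an assumption
)

// copyVerified copies a hashed stream and reports tampering. That the hash
// mismatch surfaces as an error from io.Copy is an assumption of this sketch.
func copyVerified(dst io.Writer, src io.Reader) error {
	if _, err := io.Copy(dst, src); err != nil {
		if errors.Is(err, compressio.ErrHashMismatch) {
			return fmt.Errorf("stream was truncated or tampered with: %w", err)
		}
		return err
	}
	return nil
}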
Functions ¶
This section is empty.
Types ¶
type Reader ¶
type Reader struct {
// contains filtered or unexported fields
}
Reader is a compressed reader.
type SimpleReader ¶
type SimpleReader struct {
// contains filtered or unexported fields
}
SimpleReader is a reader for an uncompressed image containing hashes.
type SimpleWriter ¶
type SimpleWriter struct {
// contains filtered or unexported fields
}
SimpleWriter is a writer that does not compress.
func NewSimpleWriter ¶
func NewSimpleWriter(out io.Writer, key []byte, chunkSize uint32) *SimpleWriter
NewSimpleWriter returns a new non-compressing writer. If key is non-nil, hash values are generated and written out for the written bytes. See package comments for details. chunkSize is the size of the internal write buffer. Writes larger than chunkSize are not buffered and are written out directly as a single chunk.
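A minimal usage sketch of NewSimpleWriter follows. The constructor signature comes from the documentation above; the import path and the presence of a Close method that flushes the final buffered chunk are assumptions.

package main

import (
	"bytes"
	"fmt"

	"gvisor.dev/gvisor/pkg/compressio" // import path is an assumption
)

func main() {
	var out bytes.Buffer
	key := []byte("integrity-key")                      // non-nil key: hashes are generated and written out
	w := compressio.NewSimpleWriter(&out, key, 64*1024) // 64 KiB chunkSize, a hypothetical choice

	// Small writes are buffered until chunkSize worth of data accumulates.
	if _, err := w.Write([]byte("small write")); err != nil {
		panic(err)
	}

	// Writes larger than chunkSize bypass the buffer and go out as one chunk.
	if _, err := w.Write(make([]byte, 128*1024)); err != nil {
		panic(err)
	}

	// Close is assumed to flush any remaining buffered data.
	if err := w.Close(); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes including chunk headers and hashes\n", out.Len())
}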
type Writer ¶
type Writer struct {
// contains filtered or unexported fields
}
Writer is a compressed writer.
func NewWriter ¶
NewWriter returns a new compressed writer. If key is non-nil, hash values are generated and written out for compressed bytes. See package comments for details.
The recommended chunkSize is on the order of 1M. Extra memory may be buffered (in the form of read-ahead, or buffered writes), and is limited to O(chunkSize * [1+GOMAXPROCS]).
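As a rough illustration of that bound, the sketch below estimates the buffered memory for the recommended ~1M chunk size; the constant factor behind the O(...) is unspecified, so this is only an order-of-magnitude figure, not a measurement of the implementation.

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Recommended chunk size from the documentation above: on the order of 1M.
	const chunkSize = 1 << 20
	procs := runtime.GOMAXPROCS(0) // query the current setting without changing it
	// Documented bound: O(chunkSize * [1+GOMAXPROCS]).
	bound := chunkSize * (1 + procs)
	fmt.Printf("GOMAXPROCS=%d: buffered memory on the order of %d MiB\n", procs, bound>>20)
}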