Documentation
Overview
Package compress contains the interfaces and implementations for handling compression and decompression of Parquet data at the column level.
Index
Constants
const DefaultCompressionLevel = flate.DefaultCompression
DefaultCompressionLevel is set to flate.DefaultCompression, since many of the underlying compression libraries use that value to denote "use the default".
Variables
var Codecs = struct {
    Uncompressed Compression
    Snappy       Compression
    Gzip         Compression
    // LZO is unsupported in this library since the LZO license is incompatible with the Apache License
    Lzo    Compression
    Brotli Compression
    // LZ4 is unsupported in this library due to problematic issues between the Hadoop LZ4 spec vs regular lz4
    // see: http://mail-archives.apache.org/mod_mbox/arrow-dev/202007.mbox/%3CCAAri41v24xuA8MGHLDvgSnE+7AAgOhiEukemW_oPNHMvfMmrWw@mail.gmail.com%3E
    Lz4  Compression
    Zstd Compression
}{
    Uncompressed: Compression(parquet.CompressionCodec_UNCOMPRESSED),
    Snappy:       Compression(parquet.CompressionCodec_SNAPPY),
    Gzip:         Compression(parquet.CompressionCodec_GZIP),
    Lzo:          Compression(parquet.CompressionCodec_LZO),
    Brotli:       Compression(parquet.CompressionCodec_BROTLI),
    Lz4:          Compression(parquet.CompressionCodec_LZ4),
    Zstd:         Compression(parquet.CompressionCodec_ZSTD),
}
Codecs provides namespaced enum values for specifying the compression type to use. Because they are initialized to the same constant values as the thrift enum, swapping between the two internally is trivial.
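To illustrate, a minimal sketch of looking up a codec through these namespaced values; the import path assumes an Arrow Go release under github.com/apache/arrow/go/v12, so adjust the module version to match yours:

package main

import (
    "fmt"
    "log"

    "github.com/apache/arrow/go/v12/parquet/compress"
)

func main() {
    // Look up the Codec implementation for Snappy via the namespaced enum value.
    codec, err := compress.GetCodec(compress.Codecs.Snappy)
    if err != nil {
        log.Fatal(err) // unsupported codecs such as Lzo return an error
    }
    fmt.Printf("%T\n", codec)
}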
Functions
This section is empty.
Types
type Codec
type Codec interface {
    // NewReader provides a reader that wraps a stream with compressed data to stream the uncompressed data
    NewReader(io.Reader) io.ReadCloser
    // NewWriter provides a wrapper around a write stream to compress data before writing it.
    NewWriter(io.Writer) io.WriteCloser
    // NewWriterLevel is like NewWriter but allows specifying the compression level
    NewWriterLevel(io.Writer, int) (io.WriteCloser, error)
    // Encode encodes a block of data given by src and returns the compressed block. dst should be either nil
    // or sized large enough to fit the compressed block (use CompressBound to allocate). dst and src should not
    // overlap since some of the compression types don't allow it.
    //
    // The returned slice will be one of the following:
    //  1. If dst was nil or too small to fit the compressed data, it will be a newly allocated slice.
    //  2. If dst was large enough to fit the compressed data (depending on the compression algorithm it might
    //     be required to be at least CompressBound length), then it might be a slice of dst.
    Encode(dst, src []byte) []byte
    // EncodeLevel is like Encode, but specifies a particular encoding level instead of the default.
    EncodeLevel(dst, src []byte, level int) []byte
    // CompressBound returns the maximum possible size of the compressed data for the given input size
    // under the chosen codec.
    CompressBound(int64) int64
    // Decode decodes a single block rather than a stream. As with Encode, dst must be either nil or
    // sized large enough to accommodate the uncompressed data, and should not overlap with src.
    //
    // The returned slice *might* be a slice of dst.
    Decode(dst, src []byte) []byte
}
Codec is the interface implemented for each compression type, so that interactions with the underlying compression libraries are uniform. Most consumers won't be calling GetCodec directly.
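The block-oriented half of the interface can be sketched as a roundtrip through Encode and Decode, sizing dst via CompressBound; this assumes the same module path as above and is illustrative rather than canonical:

package main

import (
    "bytes"
    "fmt"
    "log"

    "github.com/apache/arrow/go/v12/parquet/compress"
)

func main() {
    codec, err := compress.GetCodec(compress.Codecs.Zstd)
    if err != nil {
        log.Fatal(err)
    }

    src := []byte("some repetitive payload, some repetitive payload")

    // Sizing dst with CompressBound lets Encode reuse the buffer instead of
    // allocating a new one, which matters when compressing many column chunks.
    dst := make([]byte, codec.CompressBound(int64(len(src))))
    compressed := codec.Encode(dst, src)

    // For Decode, dst must be nil or large enough for the uncompressed data.
    roundtrip := codec.Decode(make([]byte, len(src)), compressed)
    fmt.Println(bytes.Equal(src, roundtrip)) // true
}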
func GetCodec
func GetCodec(typ Compression) (Codec, error)
GetCodec returns a Codec interface for the requested Compression type.
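A hedged sketch of the streaming half, obtained the same way; it uses NewWriterLevel with DefaultCompressionLevel and reads the data back through NewReader (module path again assumed):

package main

import (
    "bytes"
    "fmt"
    "io"
    "log"

    "github.com/apache/arrow/go/v12/parquet/compress"
)

func main() {
    codec, err := compress.GetCodec(compress.Codecs.Gzip)
    if err != nil {
        log.Fatal(err)
    }

    // Compress into an in-memory buffer using the default level.
    var buf bytes.Buffer
    w, err := codec.NewWriterLevel(&buf, compress.DefaultCompressionLevel)
    if err != nil {
        log.Fatal(err)
    }
    if _, err := w.Write([]byte("streamed data")); err != nil {
        log.Fatal(err)
    }
    if err := w.Close(); err != nil { // Close flushes any buffered compressed bytes
        log.Fatal(err)
    }

    // Wrap the compressed stream to read the uncompressed data back out.
    r := codec.NewReader(&buf)
    defer r.Close()
    out, err := io.ReadAll(r)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(out))
}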
type Compression
type Compression parquet.CompressionCodec
Compression is an alias for the thrift compression codec enum type, for ease of use.
func (Compression) String
func (c Compression) String() string
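As a small sketch, String yields the thrift enum's name for the codec; the exact output (e.g. "SNAPPY") comes from the generated thrift code:

package main

import (
    "fmt"

    "github.com/apache/arrow/go/v12/parquet/compress"
)

func main() {
    // Compression satisfies fmt.Stringer via the underlying thrift enum.
    fmt.Println(compress.Codecs.Snappy.String()) // likely "SNAPPY"
}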