brotli

package module
v0.0.0-...-89700a6
Published: Oct 19, 2024 License: MIT Imports: 13 Imported by: 2

README

This package is a brotli compressor and decompressor implemented in Go. It was translated from the reference implementation (https://github.com/google/brotli) with the c2go tool at https://github.com/andybalholm/c2go.

I have been working on new compression algorithms (not translated from C) in the matchfinder package. You can use them with the NewWriterV2 function. Currently they give better results than the old implementation (at least for compressing my test file, Newton’s Opticks) on levels 2 to 6.

I am using it in production with https://github.com/andybalholm/redwood.

API documentation is found at https://pkg.go.dev/github.com/andybalholm/brotli?tab=doc.

Documentation

Index

Examples

Constants

const (
	BestSpeed          = 0
	BestCompression    = 11
	DefaultCompression = 6
)

Variables

This section is empty.

Functions

func HTTPCompressor

func HTTPCompressor(w http.ResponseWriter, r *http.Request) io.WriteCloser

HTTPCompressor chooses a compression method (brotli, gzip, or none) based on the Accept-Encoding header, sets the Content-Encoding header, and returns a WriteCloser that implements that compression. The Close method must be called before the current HTTP handler returns.
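
A minimal sketch (in the style of the package example below, with imports omitted) of how HTTPCompressor might be used inside an HTTP handler; the route and response text are illustrative, not part of the package:

func handler(w http.ResponseWriter, r *http.Request) {
	// Choose brotli, gzip, or no compression from the Accept-Encoding header,
	// and set the Content-Encoding header to match.
	cw := HTTPCompressor(w, r)
	defer cw.Close() // must run before the handler returns
	io.WriteString(cw, "hello, compressed world\n")
}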

func MemClrNoHeapPointers

func MemClrNoHeapPointers(ptr unsafe.Pointer, n uintptr)

func NewWriterV2

func NewWriterV2(dst io.Writer, level int) *matchfinder.Writer

NewWriterV2 is like NewWriterLevel, but it uses the new implementation based on the matchfinder package. It currently supports up to level 7; if a higher level is specified, level 7 will be used.
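
A sketch of compressing into a buffer with the matchfinder-based writer; level 5 and the input string are arbitrary choices for illustration:

var buf bytes.Buffer
w := NewWriterV2(&buf, 5)
if _, err := w.Write([]byte("some data to compress")); err != nil {
	log.Fatal(err)
}
if err := w.Close(); err != nil {
	log.Fatal(err)
}
// buf now holds a complete brotli stream that NewReader can decode.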

Types

type Encoder

type Encoder struct {
	// contains filtered or unexported fields
}

An Encoder implements the matchfinder.Encoder interface, writing in Brotli format.

func (*Encoder) Encode

func (e *Encoder) Encode(dst []byte, src []byte, matches []matchfinder.Match, lastBlock bool) []byte

func (*Encoder) Reset

func (e *Encoder) Reset()

type Reader

type Reader struct {
	// contains filtered or unexported fields
}

func NewReader

func NewReader(src io.Reader) *Reader

NewReader creates a new Reader that reads compressed input from src.
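
A sketch of decoding a stream; here compressed stands for any io.Reader supplying brotli data (a file, a network connection, or an HTTP response body sent with Content-Encoding: br):

r := NewReader(compressed)
decoded, err := io.ReadAll(r)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("decompressed %d bytes\n", len(decoded))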

func (*Reader) Close

func (r *Reader) Close() error

func (*Reader) NilSrc

func (r *Reader) NilSrc()

func (*Reader) Read

func (r *Reader) Read(p []byte) (n int, err error)

func (*Reader) Reset

func (r *Reader) Reset(src io.Reader) error

Reset discards the Reader's state and makes it equivalent to a Reader newly created by NewReader, but reading from src instead. This permits reusing a Reader rather than allocating a new one. The returned error is always nil.

type Writer

type Writer struct {
	// contains filtered or unexported fields
}

func NewWriter

func NewWriter(dst io.Writer) *Writer

NewWriter returns a new Writer that compresses data written to it and writes the compressed output to dst. It is the caller's responsibility to call Close on the Writer when done; writes may be buffered and not flushed until Close.

func NewWriterLevel

func NewWriterLevel(dst io.Writer, level int) *Writer

NewWriterLevel is like NewWriter but specifies the compression level instead of assuming DefaultCompression. The compression level can be DefaultCompression or any integer value between BestSpeed and BestCompression inclusive.
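
A sketch choosing a level explicitly; here BestCompression trades encoding speed for smaller output, and data stands for any []byte:

var buf bytes.Buffer
w := NewWriterLevel(&buf, BestCompression)
if _, err := w.Write(data); err != nil {
	log.Fatal(err)
}
if err := w.Close(); err != nil {
	log.Fatal(err)
}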

func NewWriterOptions

func NewWriterOptions(dst io.Writer, options WriterOptions) *Writer

NewWriterOptions is like NewWriter but specifies the WriterOptions to use.
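
A sketch with explicit options, writing to a bytes.Buffer as in the earlier sketches; the Quality and LGWin values are illustrative, and leaving LGWin at 0 lets the window size be chosen from Quality:

w := NewWriterOptions(&buf, WriterOptions{
	Quality: 9,  // 0 to 11; higher is slower but denser
	LGWin:   22, // 2^22-byte (4 MiB) sliding window
})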

func (*Writer) Close

func (w *Writer) Close() error

Close flushes any remaining buffered data to the underlying writer and completes the brotli stream.

func (*Writer) Flush

func (w *Writer) Flush() error

Flush outputs encoded data for all input provided to Write. The resulting output can be decoded to match all input before Flush, but the stream is not yet complete until after Close. Flush has a negative impact on compression.
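
A sketch of a streaming pattern where Flush makes each message decodable by the receiver without ending the stream; conn and messages are hypothetical (any io.Writer and any slice of byte slices):

w := NewWriter(conn)
for _, msg := range messages {
	if _, err := w.Write(msg); err != nil {
		log.Fatal(err)
	}
	// Make everything written so far decodable on the other end,
	// at some cost in compression ratio.
	if err := w.Flush(); err != nil {
		log.Fatal(err)
	}
}
if err := w.Close(); err != nil { // finish the stream when done
	log.Fatal(err)
}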

func (*Writer) Reset

func (w *Writer) Reset(dst io.Writer)

Reset discards the Writer's state and makes it equivalent to a Writer newly created by NewWriter or NewWriterLevel, but writing to dst instead. This permits reusing a Writer rather than allocating a new one.

Example
proverbs := []string{
	"Don't communicate by sharing memory, share memory by communicating.\n",
	"Concurrency is not parallelism.\n",
	"The bigger the interface, the weaker the abstraction.\n",
	"Documentation is for users.\n",
}

var b bytes.Buffer

bw := NewWriter(nil)
br := NewReader(nil)

for _, s := range proverbs {
	b.Reset()

	// Reset the compressor and encode from some input stream.
	bw.Reset(&b)
	if _, err := io.WriteString(bw, s); err != nil {
		log.Fatal(err)
	}
	if err := bw.Close(); err != nil {
		log.Fatal(err)
	}

	// Reset the decompressor and decode to some output stream.
	if err := br.Reset(&b); err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(os.Stdout, br); err != nil {
		log.Fatal(err)
	}
}
Output:

Don't communicate by sharing memory, share memory by communicating.
Concurrency is not parallelism.
The bigger the interface, the weaker the abstraction.
Documentation is for users.

func (*Writer) Write

func (w *Writer) Write(p []byte) (n int, err error)

Write implements io.Writer. Flush or Close must be called to ensure that the encoded bytes are actually flushed to the underlying Writer.

type WriterOptions

type WriterOptions struct {
	// Quality controls the compression-speed vs compression-density trade-offs.
	// The higher the quality, the slower the compression. Range is 0 to 11.
	Quality int
	// LGWin is the base 2 logarithm of the sliding window size.
	// Range is 10 to 24. 0 indicates automatic configuration based on Quality.
	LGWin int
}

WriterOptions configures Writer.

Directories

Path Synopsis
matchfinder: The matchfinder package defines reusable components for data compression.
