gzhttp


README

Gzip Middleware

This Go package wraps HTTP server handlers to transparently gzip the response body for clients that support it.

For HTTP clients we provide a transport wrapper that will do gzip decompression faster than what the standard library offers.

Both the client and server wrappers are fully compatible with other servers and clients.

This package is a fork of the unmaintained nytimes/gziphandler and extends its functionality.

Install

go get -u github.com/klauspost/compress

Documentation

Go Reference

Usage

There are two main parts: one for HTTP servers and one for HTTP clients.

Client

The standard library automatically adds gzip compression to most requests and handles decompression of the responses.

However, by wrapping the transport we are able to override this and provide our own (faster) decompressor.

Wrapping is done on the Transport of the http client:

func ExampleTransport() {
	// Get an HTTP client.
	client := http.Client{
		// Wrap the transport:
		Transport: gzhttp.Transport(http.DefaultTransport),
	}

	resp, err := client.Get("https://google.com")
	if err != nil {
		return
	}
	defer resp.Body.Close()
	
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println("body:", string(body))
}

Speed compared to the standard library DefaultTransport for an approximately 127KB payload:

BenchmarkTransport

Single core:
BenchmarkTransport/gzhttp-32         	    1995	    609791 ns/op	 214.14 MB/s	   10129 B/op	      73 allocs/op
BenchmarkTransport/stdlib-32         	    1567	    772161 ns/op	 169.11 MB/s	   53950 B/op	      99 allocs/op

Multi Core:
BenchmarkTransport/gzhttp-par-32     	   29113	     36802 ns/op	3548.27 MB/s	   11061 B/op	      73 allocs/op
BenchmarkTransport/stdlib-par-32     	   16114	     66442 ns/op	1965.38 MB/s	   54971 B/op	      99 allocs/op

This includes serving the HTTP request, parsing it and decompressing the response.

Server

For the simplest usage, call GzipHandler with any handler (an object that implements the http.Handler interface), and it will return a new handler that gzips the response. For example:

package main

import (
	"io"
	"net/http"
	"github.com/klauspost/compress/gzhttp"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		io.WriteString(w, "Hello, World")
	})
    
	http.Handle("/", gzhttp.GzipHandler(handler))
	http.ListenAndServe("0.0.0.0:8000", nil)
}

This will wrap a handler using the default options.

To specify custom options, create a reusable wrapper that can then be used to wrap any number of handlers.

package main

import (
	"io"
	"log"
	"net/http"
	
	"github.com/klauspost/compress/gzhttp"
	"github.com/klauspost/compress/gzip"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		io.WriteString(w, "Hello, World")
	})
	
	// Create a reusable wrapper with custom options.
	wrapper, err := gzhttp.NewWrapper(gzhttp.MinSize(2000), gzhttp.CompressionLevel(gzip.BestSpeed))
	if err != nil {
		log.Fatalln(err)
	}
	
	http.Handle("/", wrapper(handler))
	http.ListenAndServe("0.0.0.0:8000", nil)
}

Performance

Speed compared to nytimes/gziphandler with default settings on 2KB, 20KB and 100KB payloads:

λ benchcmp before.txt after.txt
benchmark                         old ns/op     new ns/op     delta
BenchmarkGzipHandler_S2k-32       51302         23679         -53.84%
BenchmarkGzipHandler_S20k-32      301426        156331        -48.14%
BenchmarkGzipHandler_S100k-32     1546203       818981        -47.03%
BenchmarkGzipHandler_P2k-32       3973          1522          -61.69%
BenchmarkGzipHandler_P20k-32      20319         9397          -53.75%
BenchmarkGzipHandler_P100k-32     96079         46361         -51.75%

benchmark                         old MB/s     new MB/s     speedup
BenchmarkGzipHandler_S2k-32       39.92        86.49        2.17x
BenchmarkGzipHandler_S20k-32      67.94        131.00       1.93x
BenchmarkGzipHandler_S100k-32     66.23        125.03       1.89x
BenchmarkGzipHandler_P2k-32       515.44       1345.31      2.61x
BenchmarkGzipHandler_P20k-32      1007.92      2179.47      2.16x
BenchmarkGzipHandler_P100k-32     1065.79      2208.75      2.07x

benchmark                         old allocs     new allocs     delta
BenchmarkGzipHandler_S2k-32       22             16             -27.27%
BenchmarkGzipHandler_S20k-32      25             19             -24.00%
BenchmarkGzipHandler_S100k-32     28             21             -25.00%
BenchmarkGzipHandler_P2k-32       22             16             -27.27%
BenchmarkGzipHandler_P20k-32      25             19             -24.00%
BenchmarkGzipHandler_P100k-32     27             21             -22.22%

benchmark                         old bytes     new bytes     delta
BenchmarkGzipHandler_S2k-32       8836          2980          -66.27%
BenchmarkGzipHandler_S20k-32      69034         20562         -70.21%
BenchmarkGzipHandler_S100k-32     356582        86682         -75.69%
BenchmarkGzipHandler_P2k-32       9062          2971          -67.21%
BenchmarkGzipHandler_P20k-32      67799         20051         -70.43%
BenchmarkGzipHandler_P100k-32     300972        83077         -72.40%
Stateless compression

In cases where you expect to run many thousands of compressors concurrently, but with very little activity, you can use stateless compression. This is not intended for regular web servers serving individual requests.

Use CompressionLevel(-3) or CompressionLevel(gzip.StatelessCompression) to enable. Consider adding a bufio.Writer with a small buffer.

See more details on stateless compression.
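
A minimal sketch of what this might look like, assuming the handler batches its own small writes through a bufio.Writer before they reach the compressor (the 4KB buffer size is an arbitrary choice):

package main

import (
	"bufio"
	"io"
	"log"
	"net/http"

	"github.com/klauspost/compress/gzhttp"
	"github.com/klauspost/compress/gzip"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		// Stateless compression works per Write call, so buffer small
		// writes and flush once at the end of the handler.
		bw := bufio.NewWriterSize(w, 4096)
		defer bw.Flush()
		io.WriteString(bw, "Hello, World")
	})

	wrapper, err := gzhttp.NewWrapper(gzhttp.CompressionLevel(gzip.StatelessCompression))
	if err != nil {
		log.Fatalln(err)
	}
	http.Handle("/", wrapper(handler))
	http.ListenAndServe("0.0.0.0:8000", nil)
}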

Migrating from gziphandler

This package removes some of the extra constructors. When migrating, the list below can be used to find a replacement.

  • GzipHandler(h) -> GzipHandler(h) (keep as-is)
  • GzipHandlerWithOpts(opts...) -> NewWrapper(opts...)
  • MustNewGzipLevelHandler(n) -> NewWrapper(CompressionLevel(n))
  • NewGzipLevelAndMinSize(n, s) -> NewWrapper(CompressionLevel(n), MinSize(s))

By default, some mime types will now be excluded. To re-enable compression of all types, use the ContentTypeFilter(gzhttp.CompressAllContentTypeFilter) option.
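
As a sketch of a typical migration, a handler that was previously wrapped with MustNewGzipLevelHandler(6) and compressed every content type could be recreated roughly like this (the level and the file server handler are placeholders):

package main

import (
	"log"
	"net/http"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	// h stands in for whatever handler was previously wrapped.
	var h http.Handler = http.FileServer(http.Dir("."))

	wrapper, err := gzhttp.NewWrapper(
		gzhttp.CompressionLevel(6),
		// Restore the old behavior of compressing all content types.
		gzhttp.ContentTypeFilter(gzhttp.CompressAllContentTypeFilter),
	)
	if err != nil {
		log.Fatalln(err)
	}
	http.Handle("/", wrapper(h))
	http.ListenAndServe("0.0.0.0:8000", nil)
}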

Range Requests

Ranged requests are not well supported with compression. Therefore any request with a "Content-Range" header is not compressed.

To signify that range requests are not supported, any "Accept-Ranges" header that has been set is removed when data is compressed. If you do not want this behavior, use the KeepAcceptRanges() option.
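
If you serve ranged content behind the wrapper and want to keep advertising range support anyway, a sketch might look like this (the directory path is a placeholder):

package main

import (
	"log"
	"net/http"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	// Keep the Accept-Ranges header even when responses are compressed.
	wrapper, err := gzhttp.NewWrapper(gzhttp.KeepAcceptRanges())
	if err != nil {
		log.Fatalln(err)
	}
	http.Handle("/files/", wrapper(http.FileServer(http.Dir("/var/www"))))
	http.ListenAndServe("0.0.0.0:8000", nil)
}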

Flushing data

The wrapper supports the http.Flusher interface.

The only caveat is that the writer may not yet have received enough bytes to determine if MinSize has been reached. In this case it will assume that the minimum size has been reached.

If nothing has been written to the response writer, nothing will be flushed.
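
A small sketch of flushing from a wrapped handler; the chunk count and delay are arbitrary:

package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		for i := 0; i < 3; i++ {
			fmt.Fprintf(w, "chunk %d\n", i)
			// The wrapped writer implements http.Flusher, so this pushes
			// the compressed data written so far to the client.
			if f, ok := w.(http.Flusher); ok {
				f.Flush()
			}
			time.Sleep(time.Second)
		}
	})
	http.Handle("/", gzhttp.GzipHandler(handler))
	http.ListenAndServe("0.0.0.0:8000", nil)
}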

License

Apache 2.0

Documentation

Index

Examples

Constants

const (
	// DefaultQValue is the default qvalue to assign to an encoding if no explicit qvalue is set.
	// This is actually kind of ambiguous in RFC 2616, so hopefully it's correct.
	// The examples seem to indicate that it is.
	DefaultQValue = 1.0

	// DefaultMinSize is the default minimum size until we enable gzip compression.
	// 1500 bytes is the MTU size for the internet since that is the largest size allowed at the network layer.
	// If you take a file that is 1300 bytes and compress it to 800 bytes, it’s still transmitted in that same 1500 byte packet regardless, so you’ve gained nothing.
	// That being the case, you should restrict the gzip compression to files with a size (plus header) greater than a single packet,
	// 1024 bytes (1KB) is therefore default.
	DefaultMinSize = 1024
)
const (
	// HeaderNoCompression can be used to disable compression.
	// Any header value will disable compression.
	// The Header is always removed from output.
	HeaderNoCompression = "No-Gzip-Compression"
)
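
As a sketch, a handler can opt a single response out of compression by setting this header before writing the body:

package main

import (
	"io"
	"net/http"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Any value disables compression for this response; the header
		// itself is removed before the response is sent.
		w.Header().Set(gzhttp.HeaderNoCompression, "1")
		w.Header().Set("Content-Type", "text/plain")
		io.WriteString(w, "this response is sent uncompressed")
	})
	http.Handle("/raw", gzhttp.GzipHandler(handler))
	http.ListenAndServe("0.0.0.0:8000", nil)
}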

Variables

This section is empty.

Functions

func CompressAllContentTypeFilter

func CompressAllContentTypeFilter(ct string) bool

CompressAllContentTypeFilter will compress all mime types.

func CompressionLevel

func CompressionLevel(level int) option

CompressionLevel sets the compression level.

func ContentTypeFilter

func ContentTypeFilter(compress func(ct string) bool) option

ContentTypeFilter allows adding a custom content type filter.

The supplied function must return true/false to indicate if content should be compressed.

When called, no parsing of the content type 'ct' has been done. It may have been set explicitly or auto-detected.

Setting this will override the default and any previous Content Type settings.
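
A sketch of a custom filter that only compresses JSON and plain text responses (the prefix matching is a simplification; 'ct' may still contain directives such as a charset):

package main

import (
	"log"
	"net/http"
	"strings"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	wrapper, err := gzhttp.NewWrapper(gzhttp.ContentTypeFilter(func(ct string) bool {
		// Compress only JSON and plain text responses.
		return strings.HasPrefix(ct, "application/json") || strings.HasPrefix(ct, "text/plain")
	}))
	if err != nil {
		log.Fatalln(err)
	}
	http.Handle("/", wrapper(http.FileServer(http.Dir("."))))
	http.ListenAndServe("0.0.0.0:8000", nil)
}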

func ContentTypes

func ContentTypes(types []string) option

ContentTypes specifies a list of content types to compare the Content-Type header to before compressing. If none match, the response will be returned as-is.

Content types are compared in a case-insensitive, whitespace-ignored manner.

A MIME type without any other directive will match a content type that has the same MIME type, regardless of that content type's other directives. I.e., "text/html" will match both "text/html" and "text/html; charset=utf-8".

A MIME type with any other directive will only match a content type that has the same MIME type and other directives. I.e., "text/html; charset=utf-8" will only match "text/html; charset=utf-8".

By default, common compressed audio, video and archive formats are excluded; see DefaultContentTypeFilter.

Setting this will override the default and any previous Content Type settings.
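
For example, a sketch that compresses only HTML and JSON responses; per the matching rules above, "text/html" will also match "text/html; charset=utf-8":

package main

import (
	"log"
	"net/http"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	wrapper, err := gzhttp.NewWrapper(gzhttp.ContentTypes([]string{
		"text/html",
		"application/json",
	}))
	if err != nil {
		log.Fatalln(err)
	}
	http.Handle("/", wrapper(http.FileServer(http.Dir("."))))
	http.ListenAndServe("0.0.0.0:8000", nil)
}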

func DefaultContentTypeFilter

func DefaultContentTypeFilter(ct string) bool

DefaultContentTypeFilter excludes common compressed audio, video and archive formats.

func ExceptContentTypes

func ExceptContentTypes(types []string) option

ExceptContentTypes specifies a list of content types to compare the Content-Type header to before compressing. If none match, the response will be compressed.

Content types are compared in a case-insensitive, whitespace-ignored manner.

A MIME type without any other directive will match a content type that has the same MIME type, regardless of that content type's other directives. I.e., "text/html" will match both "text/html" and "text/html; charset=utf-8".

A MIME type with any other directive will only match a content type that has the same MIME type and other directives. I.e., "text/html; charset=utf-8" will only match "text/html; charset=utf-8".

By default, common compressed audio, video and archive formats are excluded; see DefaultContentTypeFilter.

Setting this will override the default and any previous Content Type settings.

func GzipHandler

func GzipHandler(h http.Handler) http.HandlerFunc

GzipHandler makes it easy to wrap an http.Handler with the default settings.

func Implementation

func Implementation(writer writer.GzipWriterFactory) option

Implementation changes the implementation of GzipWriter.

The default implementation is writer/stdlib/NewWriter, which is backed by the standard library's compress/gzip.

func KeepAcceptRanges

func KeepAcceptRanges() option

KeepAcceptRanges will keep the Accept-Ranges header on gzipped responses. This will likely break ranged requests, since they cannot be transparently handled by the filter.

func MinSize

func MinSize(size int) option

func NewWrapper

func NewWrapper(opts ...option) (func(http.Handler) http.HandlerFunc, error)

NewWrapper returns a reusable wrapper with the supplied options.

Example
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"net/http"
	"net/http/httptest"

	"github.com/klauspost/compress/gzhttp"
	"github.com/klauspost/compress/gzip"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		io.WriteString(w, "Hello, World, Welcome to the jungle...")
	})
	handler2 := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "Hello, Another World.................")
	})

	// Create a reusable wrapper with custom options.
	wrapper, err := gzhttp.NewWrapper(gzhttp.MinSize(20), gzhttp.CompressionLevel(gzip.BestSpeed))
	if err != nil {
		log.Fatalln(err)
	}
	server := http.NewServeMux()
	server.Handle("/a", wrapper(handler))
	server.Handle("/b", wrapper(handler2))

	test := httptest.NewServer(server)
	defer test.Close()

	resp, err := http.Get(test.URL + "/a")
	if err != nil {
		log.Fatalln(err)
	}
	content, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(content))

	resp, err = http.Get(test.URL + "/b")
	if err != nil {
		log.Fatalln(err)
	}
	content, _ = ioutil.ReadAll(resp.Body)
	fmt.Println(string(content))
}
Output:

Hello, World, Welcome to the jungle...
Hello, Another World.................

func Transport

func Transport(parent http.RoundTripper) http.RoundTripper

Transport will wrap a transport with a custom gzip handler that will request gzip and automatically decompress it. Using this is significantly faster than using the default transport.

Example
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"

	"github.com/klauspost/compress/gzhttp"
)

func main() {
	// Get a client.
	client := http.Client{
		// Wrap the transport:
		Transport: gzhttp.Transport(http.DefaultTransport),
	}

	resp, err := client.Get("https://google.com")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println("body:", string(body))
}
Output:

Types

type GzipResponseWriter

type GzipResponseWriter struct {
	http.ResponseWriter
	// contains filtered or unexported fields
}

GzipResponseWriter provides an http.ResponseWriter interface, which gzips bytes before writing them to the underlying response. This doesn't close the writers, so don't forget to do that. It can be configured to skip responses smaller than minSize.

func (*GzipResponseWriter) Close

func (w *GzipResponseWriter) Close() error

Close will close the gzip.Writer and will put it back in the gzipWriterPool.

func (*GzipResponseWriter) Flush

func (w *GzipResponseWriter) Flush()

Flush flushes the underlying *gzip.Writer and then the underlying http.ResponseWriter if it is an http.Flusher. This makes GzipResponseWriter an http.Flusher. If not enough bytes have been written to determine whether the minimum size has been reached, the flush will be ignored. If nothing has been written yet, nothing will be flushed.

func (*GzipResponseWriter) Hijack

func (w *GzipResponseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error)

Hijack implements http.Hijacker. If the underlying ResponseWriter is a Hijacker, its Hijack method is called. Otherwise an error is returned.

func (*GzipResponseWriter) Write

func (w *GzipResponseWriter) Write(b []byte) (int, error)

Write appends data to the gzip writer.

func (*GzipResponseWriter) WriteHeader

func (w *GzipResponseWriter) WriteHeader(code int)

WriteHeader just saves the response code until Close or the first effective gzip write.

type GzipResponseWriterWithCloseNotify

type GzipResponseWriterWithCloseNotify struct {
	*GzipResponseWriter
}

func (GzipResponseWriterWithCloseNotify) CloseNotify

func (w GzipResponseWriterWithCloseNotify) CloseNotify() <-chan bool

