goavro

package module
v0.2.0
Published: May 31, 2017 License: Apache-2.0 Imports: 21 Imported by: 14

README

goavro

Description

Goavro is a library written in Go that supports translating binary and textual Avro data to Go native data types, and conversely translating Go native data types to binary or textual Avro data. It encodes by appending to an existing or empty Go byte slice, and decodes by consuming bytes from an existing Go byte slice.

A goavro Codec is created as a stateless structure that can be safely used by multiple goroutines simultaneously.

With the exception of features not yet supported, goavro attempts to be fully compliant with the most recent version of the Avro specification.


Usage

Documentation is available via GoDoc.

Also see the example programs in the examples directory for reference.

package main

import (
	"fmt"

	"github.com/karrick/goavro"
)

func main() {
	codec, err := goavro.NewCodec(`
{
  "type": "record",
  "name": "LongList",
  "fields" : [
	{"name": "next", "type": ["null", "LongList"], "default": null}
  ]
}
`)
	if err != nil {
		fmt.Println(err)
	}

	// NOTE: May omit fields when using default value
	textual := []byte(`{"next":{"LongList":{}}}`)

	// Convert textual Avro data (in Avro JSON format) to native Go form
	native, _, err := codec.NativeFromTextual(textual)
	if err != nil {
		fmt.Println(err)
	}

	// Convert native Go form to binary Avro data
	binary, err := codec.BinaryFromNative(nil, native)
	if err != nil {
		fmt.Println(err)
	}

	// Convert binary Avro data back to native Go form
	native, _, err = codec.NativeFromBinary(binary)
	if err != nil {
		fmt.Println(err)
	}

	// Convert native Go form to textual Avro data
	textual, err = codec.TextualFromNative(nil, native)
	if err != nil {
		fmt.Println(err)
	}

	// NOTE: Textual encoding will show all fields, even those with values that
	// match their default values
	fmt.Println(string(textual))
	// Output: {"next":{"LongList":{"next":null}}}
}
Translating Data

A Codec provides four methods for translating between a byte slice of either binary or textual Avro data and native Go data.

The following methods convert data between native Go data and byte slices of the binary Avro representation:

BinaryFromNative
NativeFromBinary

The following methods convert data between native Go data and byte slices of the textual Avro representation:

NativeFromTextual
TextualFromNative

Each Codec also exposes the Schema method to return a simplified version of the JSON schema string used to create the Codec.

Translating From Avro to Go Data

Goavro does not use Go's structure tags to translate data between native Go types and Avro encoded data.

When translating from either binary or textual Avro to native Go data, goavro returns primitive Go data values for the corresponding Avro data values: a Go nil for an Avro null; a Go bool for an Avro boolean; a Go []byte for an Avro bytes; a Go float32 for an Avro float; a Go float64 for an Avro double; a Go int64 for an Avro long; a Go int32 for an Avro int; and a Go string for an Avro string.

For complex Avro data types, a Go []interface{} is returned for an Avro array; a Go string for an Avro enum; a Go []byte for an Avro fixed; and a Go map[string]interface{} for both an Avro map and an Avro record.

Because of the encoding rules for Avro unions, when a union's value is null, a simple Go nil is returned. When a union's value is non-nil, a Go map[string]interface{} with a single key is returned for the union: the key is the Avro type name and its value is the datum's value.

Translating From Go to Avro Data

Goavro does not use Go's structure tags to translate data between native Go types and Avro encoded data.

When translating from native Go to either binary or textual Avro data, goavro generally requires the same native Go data types as the decoder would provide, with some exceptions for programmer convenience. Goavro will accept any numerical data type provided there is no precision lost when encoding the value. For instance, providing float64(3.0) to an encoder expecting an Avro int would succeed, while sending float64(3.5) to the same encoder would return an error.

When providing a slice of items for an encoder, the encoder will accept either []interface{}, or any slice of the required type. For instance, when the Avro schema specifies: {"type":"array","items":"string"}, the encoder will accept either []interface{}, or []string. If given []int, the encoder will return an error when it attempts to encode the first non-string array value using the string encoder.

When providing a value for an Avro union, the encoder will accept nil for a null value. If the value is non-nil, it must be a map[string]interface{} with a single key-value pair, where the key is the Avro type name and the value is the datum's value. As a convenience, the Union function wraps any datum value in a map as specified above.

func ExampleUnion() {
	codec, err := goavro.NewCodec(`["null","string","int"]`)
	if err != nil {
		fmt.Println(err)
	}
	buf, err := codec.TextualFromNative(nil, goavro.Union("string", "some string"))
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println(string(buf))
	// Output: {"string":"some string"}
}

Implementation Notes

API

In general it is poor form for a library API to reuse function or method names from an accepted standard while providing different signatures. Go places particularly strong emphasis on what a Reader and a Writer are, and those definitions conflict with what the Avro specification describes as a reader and a writer.

In Go, an io.Reader reads data from the stream specified at object instantiation time into a preallocated slice of bytes and returns both the number of bytes read along with an error. In the Avro specification, a reader is a function that decodes Avro data and returns data in native form.

A Go io.Writer writes bytes from a slice of bytes to a stream specified at its instantiation time and returns the number of bytes written along with an error. In the Avro specification, a writer is a function that encodes data from native form to either binary or text Avro bytes.

Record Field Default Values

The Avro specification allows for providing default values for each Avro Record field. The default value is to be used when reading instances that lack the respective field.

When reading binary Avro data, a Record is decoded by reading bytes for the first Record field, immediately followed by the second Record field, and so on. No fields may be skipped in a Record's binary encoding, so default values cannot apply. If this assessment is wrong, please open an issue with one or more suitable examples, and the developers will be happy to revisit it.

When decoding from textual Avro data that is missing a particular record field name, if the record field has a default value, it will be used in place of the missing value.

When encoding from native Go data that is missing a particular record field name, if the record field has a default value, it will be used in place of the missing value.

Limitations

With the exception of features not yet supported, goavro attempts to be fully compliant with the most recent version of the Avro specification.

Default maximum block count and block size for OCF

To prevent over-allocation of memory when decoding OCF data, goavro returns an error whenever an OCF block count exceeds MaxBlockCount or a block size exceeds MaxBlockSize. Both default to math.MaxInt32 (roughly two billion items, or about 2 GiB), but are declared as variables so a user can change the limits if deemed necessary.

Aliases

The Avro specification allows an implementation to optionally map a writer's schema to a reader's schema using aliases. Although goavro can compile schemas with aliases, it does not implement this feature.

Canonicalization of Schemas

The Avro specification describes the process by which schemas are canonicalized. Goavro does not canonicalize schema strings when creating a Codec, although it does eliminate extra whitespace.

Logical Types

Goavro does not implement Logical Types as required by the Avro specification.

Kafka Streams

Kafka is the reason goavro was written. Similar to Avro Object Container Files being a layer of abstraction above Avro Data Serialization format, Kafka's use of Avro is a layer of abstraction that also sits above Avro Data Serialization format, but has its own schema.

RPC Support

Goavro does not implement any high level RPC mechanics required by the Avro specification. Avro protocol declarations, messages, message transports, message framing, handshakes, and call format are all unsupported by this library.

Record Field Order

The Avro specification allows for providing a sort order string, either ascending, descending, or ignore, for use when sorting records. While goavro can create Codec instances from schemas that specify a sort order, those values are not used.

Record Field Aliases

The Avro specification allows for providing a JSON array of strings as alternate names for a Record field. While goavro can create Codec instances that specify aliases, that list is ignored.

Documentation

Index

Examples

Constants

const (
	// CompressionNullLabel is used when OCF blocks are not compressed.
	CompressionNullLabel = "null"

	// CompressionDeflateLabel is used when OCF blocks are compressed using the
	// deflate algorithm.
	CompressionDeflateLabel = "deflate"

	// CompressionSnappyLabel is used when OCF blocks are compressed using the
	// snappy algorithm.
	CompressionSnappyLabel = "snappy"
)

Variables

var (
	// MaxBlockCount is the maximum number of data items allowed in a single
	// binary block that will be decoded from a binary stream. This check is to
	// ensure decoding binary data will not cause the library to over allocate
	// RAM, potentially creating a denial of service on the system.
	//
	// If a particular application needs to decode binary Avro data that
	// potentially has more data items in a single block, then this variable may
	// be modified at your discretion.
	MaxBlockCount = int64(math.MaxInt32)

	// MaxBlockSize is the maximum number of bytes that will be allocated for a
	// single block of data items when decoding from a binary stream. This check
	// is to ensure decoding binary data will not cause the library to over
	// allocate RAM, potentially creating a denial of service on the system.
	//
	// If a particular application needs to decode binary Avro data that
	// potentially has more bytes in a single block, then this variable may be
	// modified at your discretion.
	MaxBlockSize = int64(math.MaxInt32)
)

Functions

func Union added in v0.0.2

func Union(name string, datum interface{}) interface{}

Union wraps a datum value in a map for encoding as an Avro union, as required by the union encoder.

Example
package main

import (
	"fmt"

	"github.com/karrick/goavro"
)

func main() {
	codec, err := goavro.NewCodec(`["null","string","int"]`)
	if err != nil {
		fmt.Println(err)
	}
	buf, err := codec.TextualFromNative(nil, goavro.Union("string", "some string"))
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println(string(buf))
}
Output:

{"string":"some string"}

Types

type Codec added in v0.0.6

type Codec struct {
	// contains filtered or unexported fields
}

Codec supports decoding binary and text Avro data to Go native data types, and conversely encoding Go native data types to binary or text Avro data. A Codec is created as a stateless structure that can be safely used by multiple goroutines simultaneously.

func NewCodec

func NewCodec(schemaSpecification string) (*Codec, error)

NewCodec returns a Codec used to translate between a byte slice of either binary or textual Avro data and native Go data.

Creating a `Codec` is fast, but ought to be performed exactly once per Avro schema to process. Once a `Codec` is created, it may be used multiple times to convert data between native form and binary Avro representation, or between native form and textual Avro representation.

A particular `Codec` can work with only one Avro schema. However, there is no practical limit to how many `Codec`s may be created and used in a program. Internally a `Codec` is merely a named tuple of four function pointers, and maintains no runtime state that is mutated after instantiation. In other words, `Codec`s may be safely used by many goroutines simultaneously, as your program requires.

codec, err := goavro.NewCodec(`
{
  "type": "record",
  "name": "LongList",
  "fields" : [
    {"name": "next", "type": ["null", "LongList"], "default": null}
  ]
}
`)
if err != nil {
        fmt.Println(err)
}

func (Codec) BinaryFromNative added in v0.2.0

func (c Codec) BinaryFromNative(buf []byte, datum interface{}) ([]byte, error)

BinaryFromNative appends the binary encoded byte slice representation of the provided native datum value to the provided byte slice in accordance with the Avro schema supplied when creating the Codec. It is supplied a byte slice to which to append the binary encoded data along with the actual data to encode. On success, it returns a new byte slice with the encoded bytes appended, and a nil error value. On error, it returns the original byte slice, and the error message.

func ExampleBinaryFromNative() {
    codec, err := goavro.NewCodec(`
{
  "type": "record",
  "name": "LongList",
  "fields" : [
    {"name": "next", "type": ["null", "LongList"], "default": null}
  ]
}
`)
    if err != nil {
        fmt.Println(err)
    }

    // Convert native Go form to binary Avro data
    binary, err := codec.BinaryFromNative(nil, map[string]interface{}{
        "next": map[string]interface{}{
            "LongList": map[string]interface{}{
                "next": map[string]interface{}{
                    "LongList": map[string]interface{}{
                    // NOTE: May omit fields when using default value
                    },
                },
            },
        },
    })
    if err != nil {
        fmt.Println(err)
    }

    fmt.Printf("%#v", binary)
    // Output: []byte{0x2, 0x2, 0x0}
}

func (Codec) NativeFromBinary added in v0.2.0

func (c Codec) NativeFromBinary(buf []byte) (interface{}, []byte, error)

NativeFromBinary returns a native datum value from the binary encoded byte slice in accordance with the Avro schema supplied when creating the Codec. On success, it returns the decoded datum, along with a new byte slice with the decoded bytes consumed, and a nil error value. On error, it returns nil for the datum value, the original byte slice, and the error message.

func ExampleNativeFromBinary() {
    codec, err := goavro.NewCodec(`
{
  "type": "record",
  "name": "LongList",
  "fields" : [
    {"name": "next", "type": ["null", "LongList"], "default": null}
  ]
}
`)
    if err != nil {
        fmt.Println(err)
    }

    // Binary Avro data to convert to native Go form
    binary := []byte{0x2, 0x2, 0x0}

    native, _, err := codec.NativeFromBinary(binary)
    if err != nil {
        fmt.Println(err)
    }

    fmt.Printf("%v", native)
    // Output: map[next:map[LongList:map[next:map[LongList:map[next:<nil>]]]]]
}

func (Codec) NativeFromTextual added in v0.2.0

func (c Codec) NativeFromTextual(buf []byte) (interface{}, []byte, error)

NativeFromTextual converts Avro data in JSON text format from the provided byte slice to Go native data types in accordance with the Avro schema supplied when creating the Codec. On success, it returns the decoded datum, along with a new byte slice with the decoded bytes consumed, and a nil error value. On error, it returns nil for the datum value, the original byte slice, and the error message.

func ExampleNativeFromTextual() {
    codec, err := goavro.NewCodec(`
{
  "type": "record",
  "name": "LongList",
  "fields" : [
    {"name": "next", "type": ["null", "LongList"], "default": null}
  ]
}
`)
    if err != nil {
        fmt.Println(err)
    }

    // Textual Avro data to convert to native Go form
    text := []byte(`{"next":{"LongList":{"next":{"LongList":{"next":null}}}}}`)

    native, _, err := codec.NativeFromTextual(text)
    if err != nil {
        fmt.Println(err)
    }

    fmt.Printf("%v", native)
    // Output: map[next:map[LongList:map[next:map[LongList:map[next:<nil>]]]]]
}

func (Codec) Schema added in v0.1.7

func (c Codec) Schema() string

Schema returns the compact schema used to create the Codec.

func ExampleCodecSchema() {
    schema := `{"type":"map","values":{"type":"enum","name":"foo","symbols":["alpha","bravo"]}}`
    codec, err := goavro.NewCodec(schema)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println(codec.Schema())
    // Output: {"type":"map","values":{"name":"foo","type":"enum","symbols":["alpha","bravo"]}}
}

func (Codec) TextualFromNative added in v0.2.0

func (c Codec) TextualFromNative(buf []byte, datum interface{}) ([]byte, error)

TextualFromNative converts Go native data types to Avro data in JSON text format in accordance with the Avro schema supplied when creating the Codec. It is supplied a byte slice to which to append the encoded data and the actual data to encode. On success, it returns a new byte slice with the encoded bytes appended, and a nil error value. On error, it returns the original byte slice, and the error message.

func ExampleTextualFromNative() {
    codec, err := goavro.NewCodec(`
{
  "type": "record",
  "name": "LongList",
  "fields" : [
    {"name": "next", "type": ["null", "LongList"], "default": null}
  ]
}
`)
    if err != nil {
        fmt.Println(err)
    }

    // Convert native Go form to text Avro data
    text, err := codec.TextualFromNative(nil, map[string]interface{}{
        "next": map[string]interface{}{
            "LongList": map[string]interface{}{
                "next": map[string]interface{}{
                    "LongList": map[string]interface{}{
                    // NOTE: May omit fields when using default value
                    },
                },
            },
        },
    })
    if err != nil {
        fmt.Println(err)
    }

    fmt.Printf("%s", text)
    // Output: {"next":{"LongList":{"next":{"LongList":{"next":null}}}}}
}

type Compression added in v0.0.8

type Compression uint8

Compression values specify the compression algorithm used to compress and decompress Avro Object Container File (OCF) streams.

const (
	// CompressionNull is used when OCF blocks are not compressed.
	CompressionNull Compression = iota

	// CompressionDeflate is used when OCF blocks are compressed using the
	// deflate algorithm.
	CompressionDeflate

	// CompressionSnappy is used when OCF blocks are compressed using the snappy
	// algorithm.
	CompressionSnappy
)

type ErrInvalidName

type ErrInvalidName struct {
	Message string
}

ErrInvalidName is the error returned when one or more parts of an Avro name are invalid.

func (ErrInvalidName) Error

func (e ErrInvalidName) Error() string

type OCFReader added in v0.0.8

type OCFReader struct {
	// contains filtered or unexported fields
}

OCFReader structure is used to read Object Container Files (OCF).

func NewOCFReader added in v0.0.8

func NewOCFReader(ior io.Reader) (*OCFReader, error)

NewOCFReader initializes and returns a new structure used to read an Avro Object Container File (OCF).

func example(ior io.Reader) error {
	ocfr, err := goavro.NewOCFReader(ior)
	if err != nil {
		return err
	}
	for ocfr.Scan() {
		datum, err := ocfr.Read()
		if err != nil {
			return err
		}
		fmt.Println(datum)
	}
	return ocfr.Err()
}

func (*OCFReader) Codec added in v0.1.0

func (ocfr *OCFReader) Codec() *Codec

Codec returns the codec found within the OCF file.

func (*OCFReader) CompressionID added in v0.1.2

func (ocfr *OCFReader) CompressionID() Compression

CompressionID returns the ID of the compression algorithm found within the OCF file.

func (*OCFReader) CompressionName added in v0.1.2

func (ocfr *OCFReader) CompressionName() string

CompressionName returns the name of the compression algorithm found within the OCF file.

func (*OCFReader) Err added in v0.0.8

func (ocfr *OCFReader) Err() error

Err returns the last error encountered while reading the OCF file. It does not reset the read error.

func (*OCFReader) Read added in v0.0.8

func (ocfr *OCFReader) Read() (interface{}, error)

Read consumes one data item from the Avro OCF stream and returns it. Read is designed to be called only once after each invocation of the Scan method. See the documentation for goavro.NewOCFReader for an example of how to use Read.

func (*OCFReader) Scan added in v0.0.8

func (ocfr *OCFReader) Scan() bool

Scan returns true when there is at least one more data item to be read from the Avro OCF. Scan ought to be called prior to each invocation of the Read method. See the documentation for goavro.NewOCFReader for an example of how to use Scan.

func (*OCFReader) Schema added in v0.0.8

func (ocfr *OCFReader) Schema() string

Schema returns the schema found within the OCF file.

type OCFWriter added in v0.0.8

type OCFWriter struct {
	// contains filtered or unexported fields
}

OCFWriter is used to create an Avro Object Container File (OCF).

func NewOCFWriter added in v0.0.8

func NewOCFWriter(config OCFWriterConfig) (*OCFWriter, error)

NewOCFWriter returns a newly created OCFWriter which may be used to create an Avro Object Container File (OCF).

func (*OCFWriter) Append added in v0.0.8

func (ocf *OCFWriter) Append(data []interface{}) error

Append appends one or more data items to an OCF file in a block. If there are more data items in the slice than MaxBlockCount allows, the data slice will be chunked into multiple blocks, each not having more than MaxBlockCount items.

type OCFWriterConfig added in v0.0.8

type OCFWriterConfig struct {
	// W specifies the io.Writer to which the encoded data will be written (required).
	W io.Writer

	// Schema specifies the Avro schema for the data to be encoded (required).
	Schema string

	// Compression specifies the compression codec used (optional). If omitted,
	// it defaults to the "null" codec.
	Compression Compression
}

OCFWriterConfig is used to specify creation parameters for OCFWriter.

Directories

examples
