gojsonlex

package module
v0.2.0

Published: Jul 19, 2020 · License: MIT · Imports: 8 · Imported by: 0

Documentation

Constants

This section is empty.

Variables

This section is empty.

Functions

func IsDelim

func IsDelim(c rune) bool

IsDelim reports whether the given rune is a JSON delimiter.

func IsHexDigit added in v0.2.0

func IsHexDigit(c rune) bool

func IsValidEscapedSymbol added in v0.2.0

func IsValidEscapedSymbol(c rune) bool
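None of these three predicates carries a doc comment in this version. The sketch below shows plausible uses inferred from their names and the JSON grammar; the exact rune sets they accept are assumptions, and the import path example.com/gojsonlex is a placeholder.

package main

import (
	"fmt"

	"example.com/gojsonlex" // placeholder import path
)

func main() {
	// Assumed: delimiters are the structural characters { } [ ] : ,
	for _, c := range `{"key":1}` {
		if gojsonlex.IsDelim(c) {
			fmt.Printf("%q is a JSON delimiter\n", c)
		}
	}

	// Assumed: helpers for validating escapes such as \n and \u00e9.
	fmt.Println(gojsonlex.IsHexDigit('f'))           // true
	fmt.Println(gojsonlex.IsValidEscapedSymbol('n')) // assumed true: \n is a valid JSON escape
}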

func StringDeepCopy added in v0.2.0

func StringDeepCopy(s string) string

StringDeepCopy creates a copy of the given string with its own underlying byte array. Use this function to make a copy of a string returned by Token().
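A minimal sketch of the intended pattern, assuming the placeholder import path example.com/gojsonlex: copy each token string before the next Token call invalidates it.

package main

import (
	"fmt"
	"strings"

	"example.com/gojsonlex" // placeholder import path
)

func main() {
	l, err := gojsonlex.NewJSONLexer(strings.NewReader(`["a","b"]`))
	if err != nil {
		panic(err)
	}
	var saved []string
	for {
		tok, err := l.Token()
		if err != nil {
			break // io.EOF assumed at end of input
		}
		if s, ok := tok.(string); ok {
			// Without StringDeepCopy, s may be overwritten by the next Token call.
			saved = append(saved, gojsonlex.StringDeepCopy(s))
		}
	}
	fmt.Println(saved)
}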

Types

type JSONLexer

type JSONLexer struct {
	// contains filtered or unexported fields
}

JSONLexer is a JSON lexical analyzer with a streaming API, where the stream is a sequence of JSON tokens. JSONLexer does its own IO buffering, so prefer low-level (unbuffered) readers if you want to minimize the memory footprint.

JSONLexer uses a ring buffer for parsing tokens; every token must fit in the buffer, which is grown automatically when necessary. The initial buffer size is 4096 bytes, but you can tweak it with SetBufSize() if you know that most tokens are going to be long.

JSONLexer uses unsafe pointers into the underlying buffer to minimize allocations; see Token() for the guarantees provided.

func NewJSONLexer

func NewJSONLexer(r io.Reader) (*JSONLexer, error)

NewJSONLexer creates a new JSONLexer with the given reader.
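For example, with the placeholder import path example.com/gojsonlex; any io.Reader works, e.g. a *os.File or a strings.Reader:

package main

import (
	"fmt"
	"strings"

	"example.com/gojsonlex" // placeholder import path
)

func main() {
	l, err := gojsonlex.NewJSONLexer(strings.NewReader(`{"a":1}`))
	if err != nil {
		panic(err)
	}
	tok, err := l.Token()
	if err != nil {
		panic(err)
	}
	fmt.Println(tok) // expected to be the opening-brace delimiter
}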

func (*JSONLexer) SetBufSize

func (l *JSONLexer) SetBufSize(bufSize int)

SetBufSize creates a new buffer of the given size. MUST be called before parsing starts.
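A sketch, assuming the placeholder import path example.com/gojsonlex: enlarge the buffer immediately after construction, before the first Token call.

package main

import (
	"os"

	"example.com/gojsonlex" // placeholder import path
)

func main() {
	l, err := gojsonlex.NewJSONLexer(os.Stdin)
	if err != nil {
		panic(err)
	}
	// Most tokens expected to be long: start at 64 KiB instead of the 4096-byte default.
	l.SetBufSize(64 * 1024)
	// ... parse with l.Token() or l.TokenFast() ...
}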

func (*JSONLexer) SetDebug added in v0.2.0

func (l *JSONLexer) SetDebug(debug bool)

SetDebug enables debug logging.

func (*JSONLexer) SetSkipDelims

func (l *JSONLexer) SetSkipDelims(mustSkip bool)

SetSkipDelims tells JSONLexer to skip delimiters and return only keys and values. This can be useful when you simply want to match the input against some specific grammar and have no intention of doing full syntax analysis.
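For instance, to scan only keys and values while ignoring structural characters (a sketch, assuming the placeholder import path example.com/gojsonlex):

package main

import (
	"fmt"
	"strings"

	"example.com/gojsonlex" // placeholder import path
)

func main() {
	l, err := gojsonlex.NewJSONLexer(strings.NewReader(`{"name":"gopher","age":2}`))
	if err != nil {
		panic(err)
	}
	l.SetSkipDelims(true) // Token now yields only keys and values
	for {
		tok, err := l.Token()
		if err != nil {
			break // io.EOF assumed at end of input
		}
		fmt.Println(tok) // name, gopher, age, 2 — no { } : ,
	}
}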

func (*JSONLexer) Token

func (l *JSONLexer) Token() (json.Token, error)

Token returns the next JSON token. All strings returned by Token are valid only until the next Token call; if you need to retain one, you MUST make a deep copy (see StringDeepCopy).
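A typical read loop, assuming the placeholder import path example.com/gojsonlex and that end of input is signaled with io.EOF (the usual Go convention; not stated explicitly in these docs):

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"example.com/gojsonlex" // placeholder import path
)

func main() {
	l, err := gojsonlex.NewJSONLexer(strings.NewReader(`{"a":[1,2]}`))
	if err != nil {
		panic(err)
	}
	for {
		tok, err := l.Token()
		if errors.Is(err, io.EOF) { // assumed EOF convention
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%T %v\n", tok, tok)
	}
}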

func (*JSONLexer) TokenFast added in v0.2.0

func (l *JSONLexer) TokenFast() (TokenGeneric, error)

TokenFast is a more efficient version of Token() that returns a concrete TokenGeneric instead of a json.Token interface value. All strings returned by TokenFast are valid only until the next Token call; if you need to retain one, you MUST make a deep copy.
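With TokenFast you branch on Type() instead of type-asserting a json.Token. A sketch of a dispatch loop, under the same assumptions as above (placeholder import path, io.EOF at end of input):

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"example.com/gojsonlex" // placeholder import path
)

func main() {
	l, err := gojsonlex.NewJSONLexer(strings.NewReader(`{"n":3.14,"ok":true}`))
	if err != nil {
		panic(err)
	}
	for {
		t, err := l.TokenFast()
		if errors.Is(err, io.EOF) { // assumed EOF convention
			break
		}
		if err != nil {
			panic(err)
		}
		switch t.Type() {
		case gojsonlex.LexerTokenTypeString:
			fmt.Println("string:", t.String()) // valid only until the next Token call
		case gojsonlex.LexerTokenTypeNumber:
			fmt.Println("number:", t.Number())
		case gojsonlex.LexerTokenTypeBool:
			fmt.Println("bool:", t.Bool())
		case gojsonlex.LexerTokenTypeDelim:
			fmt.Println("delim:", string(t.Delim()))
		case gojsonlex.LexerTokenTypeNull:
			fmt.Println("null")
		}
	}
}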

type TokenGeneric added in v0.2.0

type TokenGeneric struct {
	// contains filtered or unexported fields
}

func (*TokenGeneric) Bool added in v0.2.0

func (t *TokenGeneric) Bool() bool

func (*TokenGeneric) Delim added in v0.2.0

func (t *TokenGeneric) Delim() byte

func (*TokenGeneric) IsNull added in v0.2.0

func (t *TokenGeneric) IsNull() bool

func (*TokenGeneric) Number added in v0.2.0

func (t *TokenGeneric) Number() float64

func (*TokenGeneric) String added in v0.2.0

func (t *TokenGeneric) String() string

String returns a string that points into the lexer's internal buffer. It is valid only until the next Token call; if you need to retain it, you MUST make a deep copy (see StringCopy).

func (*TokenGeneric) StringCopy added in v0.2.0

func (t *TokenGeneric) StringCopy() string

StringCopy returns a deep copy of the token string.
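The difference between String and StringCopy mirrors Token versus StringDeepCopy: use StringCopy when the value must outlive the next Token call. A minimal sketch, again assuming the placeholder import path and io.EOF at end of input:

package main

import (
	"fmt"
	"strings"

	"example.com/gojsonlex" // placeholder import path
)

func main() {
	l, err := gojsonlex.NewJSONLexer(strings.NewReader(`["a","b","c"]`))
	if err != nil {
		panic(err)
	}
	var keys []string
	for {
		t, err := l.TokenFast()
		if err != nil {
			break // io.EOF assumed at end of input
		}
		if t.Type() == gojsonlex.LexerTokenTypeString {
			// StringCopy survives subsequent Token calls; t.String() would not.
			keys = append(keys, t.StringCopy())
		}
	}
	fmt.Println(keys) // [a b c]
}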

func (*TokenGeneric) Type added in v0.2.0

func (t *TokenGeneric) Type() TokenType

type TokenType

type TokenType byte

const (
	LexerTokenTypeDelim TokenType = iota
	LexerTokenTypeString
	LexerTokenTypeNumber
	LexerTokenTypeBool
	LexerTokenTypeNull
)

func (TokenType) String added in v0.2.0

func (t TokenType) String() string
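Because TokenType implements fmt.Stringer, the constants print as readable names; the exact strings are not documented here, and the import path below is a placeholder.

package main

import (
	"fmt"

	"example.com/gojsonlex" // placeholder import path
)

func main() {
	t := gojsonlex.LexerTokenTypeNumber
	fmt.Println(t) // fmt calls TokenType.String(); exact output text not documented
}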
