tokenizers

package
v1.1.0
Published: Jan 12, 2023 License: MIT Imports: 2 Imported by: 0

Documentation

Index

Constants

View Source
const (
	Unknown = iota
	Eof
	Eol
	Float
	Integer
	HexDecimal
	Number
	Symbol
	Quoted
	Word
	Keyword
	Whitespace
	Comment
	Special
)

Types (categories) of tokens such as "number", "symbol" or "word".
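The constant block above can be mirrored in plain Go to show how the iota-based token types line up with readable names. The `tokenTypeName` helper below is purely illustrative (it is not part of this package):

```go
package main

import "fmt"

// Mirror of the package's token type constants; the numeric values
// follow the same iota order as the constant block above.
const (
	Unknown = iota
	Eof
	Eol
	Float
	Integer
	HexDecimal
	Number
	Symbol
	Quoted
	Word
	Keyword
	Whitespace
	Comment
	Special
)

// tokenTypeName is a hypothetical debug helper that maps a token type
// constant to a readable name.
func tokenTypeName(t int) string {
	names := []string{
		"Unknown", "Eof", "Eol", "Float", "Integer", "HexDecimal",
		"Number", "Symbol", "Quoted", "Word", "Keyword",
		"Whitespace", "Comment", "Special",
	}
	if t < 0 || t >= len(names) {
		return "Invalid"
	}
	return names[t]
}

func main() {
	fmt.Println(tokenTypeName(Float))  // Float
	fmt.Println(tokenTypeName(Symbol)) // Symbol
}
```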

Variables

This section is empty.

Functions

This section is empty.

Types

type AbstractTokenizer

type AbstractTokenizer struct {
	Overrides ITokenizerOverrides

	Scanner        io.IScanner
	NextTokenValue *Token
	LastTokenType  int
	// contains filtered or unexported fields
}

Implements an abstract tokenizer class.

func InheritAbstractTokenizer

func InheritAbstractTokenizer(overrides ITokenizerOverrides) *AbstractTokenizer

func (*AbstractTokenizer) ClearCharacterStates

func (c *AbstractTokenizer) ClearCharacterStates()

func (*AbstractTokenizer) CommentState

func (c *AbstractTokenizer) CommentState() ICommentState

func (*AbstractTokenizer) DecodeStrings

func (c *AbstractTokenizer) DecodeStrings() bool

func (*AbstractTokenizer) GetCharacterState

func (c *AbstractTokenizer) GetCharacterState(symbol rune) ITokenizerState

func (*AbstractTokenizer) HasNextToken

func (c *AbstractTokenizer) HasNextToken() bool

func (*AbstractTokenizer) MergeWhitespaces

func (c *AbstractTokenizer) MergeWhitespaces() bool

func (*AbstractTokenizer) NextToken

func (c *AbstractTokenizer) NextToken() *Token

func (*AbstractTokenizer) NumberState

func (c *AbstractTokenizer) NumberState() INumberState

func (*AbstractTokenizer) QuoteState

func (c *AbstractTokenizer) QuoteState() IQuoteState

func (*AbstractTokenizer) ReadNextToken

func (c *AbstractTokenizer) ReadNextToken() *Token

func (*AbstractTokenizer) Reader

func (c *AbstractTokenizer) Reader() io.IScanner

func (*AbstractTokenizer) SetCharacterState

func (c *AbstractTokenizer) SetCharacterState(fromSymbol rune, toSymbol rune, state ITokenizerState)

func (*AbstractTokenizer) SetCommentState

func (c *AbstractTokenizer) SetCommentState(value ICommentState)

func (*AbstractTokenizer) SetDecodeStrings

func (c *AbstractTokenizer) SetDecodeStrings(value bool)

func (*AbstractTokenizer) SetMergeWhitespaces

func (c *AbstractTokenizer) SetMergeWhitespaces(value bool)

func (*AbstractTokenizer) SetNumberState

func (c *AbstractTokenizer) SetNumberState(value INumberState)

func (*AbstractTokenizer) SetQuoteState

func (c *AbstractTokenizer) SetQuoteState(value IQuoteState)

func (*AbstractTokenizer) SetReader

func (c *AbstractTokenizer) SetReader(value io.IScanner)

func (*AbstractTokenizer) SetSkipComments

func (c *AbstractTokenizer) SetSkipComments(value bool)

func (*AbstractTokenizer) SetSkipEof

func (c *AbstractTokenizer) SetSkipEof(value bool)

func (*AbstractTokenizer) SetSkipUnknown

func (c *AbstractTokenizer) SetSkipUnknown(value bool)

func (*AbstractTokenizer) SetSkipWhitespaces

func (c *AbstractTokenizer) SetSkipWhitespaces(value bool)

func (*AbstractTokenizer) SetSymbolState

func (c *AbstractTokenizer) SetSymbolState(value ISymbolState)

func (*AbstractTokenizer) SetUnifyNumbers

func (c *AbstractTokenizer) SetUnifyNumbers(value bool)

func (*AbstractTokenizer) SetWhitespaceState

func (c *AbstractTokenizer) SetWhitespaceState(value IWhitespaceState)

func (*AbstractTokenizer) SetWordState

func (c *AbstractTokenizer) SetWordState(value IWordState)

func (*AbstractTokenizer) SkipComments

func (c *AbstractTokenizer) SkipComments() bool

func (*AbstractTokenizer) SkipEof

func (c *AbstractTokenizer) SkipEof() bool

func (*AbstractTokenizer) SkipUnknown

func (c *AbstractTokenizer) SkipUnknown() bool

func (*AbstractTokenizer) SkipWhitespaces

func (c *AbstractTokenizer) SkipWhitespaces() bool

func (*AbstractTokenizer) SymbolState

func (c *AbstractTokenizer) SymbolState() ISymbolState

func (*AbstractTokenizer) TokenizeBuffer

func (c *AbstractTokenizer) TokenizeBuffer(buffer string) []*Token

func (*AbstractTokenizer) TokenizeBufferToStrings

func (c *AbstractTokenizer) TokenizeBufferToStrings(buffer string) []string

func (*AbstractTokenizer) TokenizeStream

func (c *AbstractTokenizer) TokenizeStream(scanner io.IScanner) []*Token

func (*AbstractTokenizer) TokenizeStreamToStrings

func (c *AbstractTokenizer) TokenizeStreamToStrings(scanner io.IScanner) []string

func (*AbstractTokenizer) UnifyNumbers

func (c *AbstractTokenizer) UnifyNumbers() bool

func (*AbstractTokenizer) WhitespaceState

func (c *AbstractTokenizer) WhitespaceState() IWhitespaceState

func (*AbstractTokenizer) WordState

func (c *AbstractTokenizer) WordState() IWordState

type ICommentState

type ICommentState interface {
	ITokenizerState
}

Defines an interface for tokenizer state that processes comments.

type INumberState

type INumberState interface {
	ITokenizerState
}

Defines an interface for tokenizer state that processes numbers: integers, floats, and hexadecimals.

type IQuoteState

type IQuoteState interface {
	ITokenizerState

	// Encodes a string value.
	//
	// Parameters:
	//   - value: A string value to be encoded.
	//   - quoteSymbol: A string quote character.
	// Returns: An encoded string.
	EncodeString(value string, quoteSymbol rune) string

	// Decodes a string value.
	//
	// Parameters:
	//   - value: A string value to be decoded.
	//   - quoteSymbol: A string quote character.
	// Returns: A decoded string.
	DecodeString(value string, quoteSymbol rune) string
}

Defines an interface for tokenizer state that processes quoted strings.

type ISymbolState

type ISymbolState interface {
	ITokenizerState

	// Adds a multi-character symbol.
	//
	// Parameters:
	//   - value: The symbol to add, such as "=:=".
	//   - tokenType: The token type to assign to the symbol.
	Add(value string, tokenType int)
}

Defines an interface for tokenizer state that processes delimiters.
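The point of registering multi-character symbols via `Add` is that the tokenizer can greedily match the longest registered symbol at the current position. A minimal sketch of that idea, with illustrative names that are not part of this package's API:

```go
package main

import "fmt"

// symbolTable is a toy stand-in for ISymbolState: symbols like "<=" or
// "=:=" are registered with a token type, and Match greedily returns the
// longest registered symbol that prefixes the input.
type symbolTable struct {
	symbols map[string]int // symbol text -> token type
}

func (t *symbolTable) Add(value string, tokenType int) {
	t.symbols[value] = tokenType
}

// Match returns the longest registered symbol that prefixes s, or "".
func (t *symbolTable) Match(s string) string {
	best := ""
	for sym := range t.symbols {
		if len(sym) > len(best) && len(sym) <= len(s) && s[:len(sym)] == sym {
			best = sym
		}
	}
	return best
}

func main() {
	table := &symbolTable{symbols: map[string]int{}}
	table.Add("<", 1)
	table.Add("<=", 2)
	table.Add("=:=", 3)
	fmt.Println(table.Match("<= 12.3")) // <=
}
```

Longest-prefix matching is what lets "<=" win over "<" when both are registered.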

type ITokenizer

type ITokenizer interface {
	// Gets skip unknown characters flag.
	SkipUnknown() bool

	// Sets skip unknown characters flag.
	SetSkipUnknown(value bool)

	// Gets skip whitespaces flag.
	SkipWhitespaces() bool

	// Sets skip whitespaces flag.
	SetSkipWhitespaces(value bool)

	// Gets skip comments flag.
	SkipComments() bool

	// Sets skip comments flag.
	SetSkipComments(value bool)

	// Gets skip End-Of-File token at the end of stream flag.
	SkipEof() bool

	// Sets skip End-Of-File token at the end of stream flag.
	SetSkipEof(value bool)

	// Gets the merge whitespaces flag.
	MergeWhitespaces() bool

	// Sets the merge whitespaces flag.
	SetMergeWhitespaces(value bool)

	// Gets the unify numbers flag: when set, "Integer" and "Float" tokens are reported as "Number".
	UnifyNumbers() bool

	// Sets the unify numbers flag: when set, "Integer" and "Float" tokens are reported as "Number".
	SetUnifyNumbers(value bool)

	// Gets decodes quoted strings flag.
	DecodeStrings() bool

	// Sets decodes quoted strings flag.
	SetDecodeStrings(value bool)

	// Gets a token state to process comments.
	CommentState() ICommentState

	// Gets a token state to process numbers.
	NumberState() INumberState

	// Gets a token state to process quoted strings.
	QuoteState() IQuoteState

	// Gets a token state to process symbols (single-character like "=" or multi-character like "<>").
	SymbolState() ISymbolState

	// Gets a token state to process white space delimiters.
	WhitespaceState() IWhitespaceState

	// Gets a token state to process words or identifiers.
	WordState() IWordState

	// Gets the stream scanner to tokenize.
	Reader() io.IScanner

	// Sets the stream scanner to tokenize.
	SetReader(scanner io.IScanner)

	// Checks if the next token exists.
	//
	// Returns: <code>true</code> if the scanner has a next token.
	HasNextToken() bool

	// Gets the next token from the scanner.
	//
	// Returns: The next token, or <code>nil</code> if there are no more tokens left.
	NextToken() *Token

	// Tokenizes a textual stream into a list of token structures.
	//
	// Parameters:
	//   - scanner: A textual stream to be tokenized.
	// Returns: A list of token structures.
	TokenizeStream(scanner io.IScanner) []*Token

	// Tokenizes a string buffer into a list of token structures.
	//
	// Parameters:
	//   - buffer: A string buffer to be tokenized.
	// Returns: A list of token structures.
	TokenizeBuffer(buffer string) []*Token

	// Tokenizes a textual stream into a list of strings.
	//
	// Parameters:
	//   - scanner: A textual stream to be tokenized.
	// Returns: A list of token strings.
	TokenizeStreamToStrings(scanner io.IScanner) []string

	// Tokenizes a string buffer into a list of strings.
	//
	// Parameters:
	//   - buffer: A string buffer to be tokenized.
	// Returns: A list of token strings.
	TokenizeBufferToStrings(buffer string) []string
}

A tokenizer divides a string into tokens. This class is highly customizable with regard to exactly how this division occurs, but it also has defaults that are suitable for many languages. This class assumes that the character values read from the string lie in the range 0-255; for example, the character with code 65 is a capital A.

The behavior of a tokenizer depends on its character state table. This table is an array of 256 <code>TokenizerState</code> states. The state table decides which state to enter upon reading a character from the input string.

For example, by default, upon reading an 'A', a tokenizer will enter a "word" state. This means the tokenizer will ask a <code>WordState</code> object to consume the 'A', along with the characters after the 'A' that form a word. The state's responsibility is to consume characters and return a complete token.

The default table sets a SymbolState for every character from 0 to 255, and then overrides this with:

	From  To    State
	0     ' '   whitespaceState
	'a'   'z'   wordState
	'A'   'Z'   wordState
	160   255   wordState
	'0'   '9'   numberState
	'-'   '-'   numberState
	'.'   '.'   numberState
	'"'   '"'   quoteState
	'\''  '\''  quoteState
	'/'   '/'   slashState

In addition to allowing modification of the state table, this class makes each of the states above available. Some of these states are customizable. For example, wordState allows customization of what characters can be part of a word, after the first character.
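The default state table described above can be sketched in plain Go as an array of 256 entries, with a symbol default and range overrides applied in order. States here are just labels; in the real package they are ITokenizerState implementations:

```go
package main

import "fmt"

// buildStateTable sketches the default character state table: SymbolState
// everywhere, then the range overrides listed in the description above.
func buildStateTable() [256]string {
	var table [256]string
	set := func(from, to byte, state string) {
		// iterate as int so the loop terminates when to == 255
		for c := int(from); c <= int(to); c++ {
			table[c] = state
		}
	}
	set(0, 255, "symbol") // default: SymbolState for every character
	set(0, ' ', "whitespace")
	set('a', 'z', "word")
	set('A', 'Z', "word")
	set(160, 255, "word")
	set('0', '9', "number")
	set('-', '-', "number")
	set('.', '.', "number")
	set('"', '"', "quote")
	set('\'', '\'', "quote")
	set('/', '/', "slash")
	return table
}

func main() {
	table := buildStateTable()
	fmt.Println(table['A']) // word
	fmt.Println(table['7']) // number
	fmt.Println(table['+']) // symbol
}
```

Reading one character and indexing this table is the entire dispatch step; everything after that is the chosen state's job.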

type ITokenizerOverrides

type ITokenizerOverrides interface {
	ReadNextToken() *Token
}

type ITokenizerState

type ITokenizerState interface {
	// Gets the next token from the stream started from the character linked to this state.
	//
	// Parameters:
	//   - scanner: A textual string to be tokenized.
	//   - tokenizer: A tokenizer class that controls the process.
	// Returns: The next token from the top of the stream.
	NextToken(scanner io.IScanner, tokenizer ITokenizer) *Token
}

A TokenizerState returns a token, given a scanner, an initial character read from the scanner, and a tokenizer that is conducting an overall tokenization of the scanner. The tokenizer will typically have a character state table that decides which state to use, depending on an initial character. If a single character is insufficient, a state such as <code>SlashState</code> will read a second character, and may delegate to another state, such as <code>SlashStarState</code>. This prospect of delegation is the reason that the <code>NextToken()</code> method has a tokenizer argument.
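The delegation described above can be sketched with a toy "slash" decision: after consuming '/', the state peeks at the next character to choose between a line comment, a block comment, or a plain symbol. The function name and labels are illustrative, not the package's API:

```go
package main

import "fmt"

// classifyAfterSlash sketches SlashState-style delegation: the input is
// the text immediately after an initial '/', and one more character is
// enough to pick the state that should consume the rest of the token.
func classifyAfterSlash(input string) string {
	if len(input) == 0 {
		return "symbol" // a lone '/' at end of input is just a symbol
	}
	switch input[0] {
	case '/':
		return "line-comment" // "//" -> delegate to a line-comment state
	case '*':
		return "block-comment" // "/*" -> delegate to a SlashStar-style state
	default:
		return "symbol" // '/' followed by anything else, e.g. division
	}
}

func main() {
	fmt.Println(classifyAfterSlash("/ rest of line")) // line-comment
	fmt.Println(classifyAfterSlash("* ... */"))       // block-comment
	fmt.Println(classifyAfterSlash("2"))              // symbol
}
```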

type IWhitespaceState

type IWhitespaceState interface {
	ITokenizerState

	// Establish the given characters as whitespace to ignore.
	//
	// Parameters:
	//   - fromSymbol: First character index of the interval.
	//   - toSymbol: Last character index of the interval.
	//   - enable: <code>true</code> if this state should ignore characters in the given range.
	SetWhitespaceChars(fromSymbol rune, toSymbol rune, enable bool)

	// Clears definitions of whitespace characters.
	ClearWhitespaceChars()
}

Defines an interface for tokenizer state that processes whitespace characters (' ', '\t').

type IWordState

type IWordState interface {
	ITokenizerState

	// Establish characters in the given range as valid characters for part of a word after
	// the first character. Note that the tokenizer must determine which characters are valid
	// as the beginning character of a word.
	//
	// Parameters:
	//   - fromSymbol: First character index of the interval.
	//   - toSymbol: Last character index of the interval.
	//   - enable: <code>true</code> if this state should use characters in the given range.
	SetWordChars(fromSymbol rune, toSymbol rune, enable bool)

	// Clears definitions of word chars.
	ClearWordChars()
}

Defines an interface for tokenizer state that processes words, identifiers, or keywords.
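Both SetWordChars and SetWhitespaceChars share the same shape: a per-character boolean flag toggled over a rune range. A minimal sketch of that mechanism, with an illustrative type that is not the package's implementation:

```go
package main

import "fmt"

// charFlags sketches the range-based character flags behind SetWordChars
// and SetWhitespaceChars: each rune in [fromSymbol, toSymbol] is marked
// enabled or disabled.
type charFlags map[rune]bool

func (f charFlags) SetChars(fromSymbol, toSymbol rune, enable bool) {
	for c := fromSymbol; c <= toSymbol; c++ {
		f[c] = enable
	}
}

func main() {
	wordChars := charFlags{}
	wordChars.SetChars('a', 'z', true)
	wordChars.SetChars('0', '9', true)  // digits allowed after the first char
	wordChars.SetChars('e', 'e', false) // ranges can also be disabled again
	fmt.Println(wordChars['b'], wordChars['e'], wordChars['!']) // true false false
}
```

Note that, as the comment on SetWordChars says, this table only governs characters *after* the first one; which characters can start a word is decided by the tokenizer's state table.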

type Token

type Token struct {
	// contains filtered or unexported fields
}

A token represents a logical chunk of a string. For example, a typical tokenizer would break the string "1.23 <= 12.3" into three tokens: the number 1.23, a less-than-or-equal symbol, and the number 12.3. A token is a receptacle, and relies on a tokenizer to decide precisely how to divide a string into tokens.
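The "1.23 <= 12.3" example can be reproduced with a toy splitter that classifies characters as number-like, whitespace, or symbol, and groups maximal runs of each. This is a simplified illustration, not the package's tokenizer:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// splitTokens breaks a string into number and symbol tokens, skipping
// whitespace. It is a toy: real number states handle signs, exponents, etc.
func splitTokens(s string) []string {
	var tokens []string
	runes := []rune(s)
	isNum := func(r rune) bool { return unicode.IsDigit(r) || r == '.' }
	i := 0
	for i < len(runes) {
		switch {
		case unicode.IsSpace(runes[i]):
			i++ // skip whitespace
		case isNum(runes[i]):
			j := i
			for j < len(runes) && isNum(runes[j]) {
				j++
			}
			tokens = append(tokens, string(runes[i:j]))
			i = j
		default: // a run of symbol characters becomes one token
			j := i
			for j < len(runes) && !unicode.IsSpace(runes[j]) && !isNum(runes[j]) {
				j++
			}
			tokens = append(tokens, string(runes[i:j]))
			i = j
		}
	}
	return tokens
}

func main() {
	fmt.Println(strings.Join(splitTokens("1.23 <= 12.3"), " | ")) // 1.23 | <= | 12.3
}
```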

func NewToken

func NewToken(typ int, value string, line int, column int) *Token

Constructs this token with type and value.

Parameters:

  • typ: The type of this token.
  • value: The token string value.
  • line: The line number where the token is.
  • column: The column number where the token is.

Returns: The created token.

func (*Token) Column

func (c *Token) Column() int

The column number where the token is.

func (*Token) Equals

func (c *Token) Equals(obj interface{}) bool

func (*Token) Line

func (c *Token) Line() int

The line number where the token is.

func (*Token) Type

func (c *Token) Type() int

The token type.

func (*Token) Value

func (c *Token) Value() string

The token value.

Directories

Path Synopsis
