Documentation
Index
Constants
This section is empty.
Variables
var (
	ErrForceStopped = errors.New("force stopped")
)
Error values returned by this package.
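A minimal sketch of how a caller might test for this sentinel value; the exact condition under which the lexer returns ErrForceStopped is an assumption here, not stated above. Comparison should go through errors.Is rather than ==, so wrapped errors are still matched:

	if errors.Is(err, ErrForceStopped) {
		// Assumed: lexing was aborted before the input was fully consumed.
	}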
Functions
This section is empty.
Types
type Lexer
type Lexer struct {
	// contains filtered or unexported fields
}
Lexer represents a lexical analyzer.
type Token
type Token struct {
	// contains filtered or unexported fields
}
Token represents a known sequence of characters (a lexical unit).
func Tokenize
Tokenize takes a slice of bytes and returns all the tokens within it, or an error if a token cannot be identified.
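A short usage sketch. The signature is inferred from the description above (a byte slice in, a slice of Token and an error out) and the import path is hypothetical; neither is confirmed by this page:

	package main

	import (
		"fmt"
		"log"

		lexer "example.com/lexer" // hypothetical import path for this package
	)

	func main() {
		// Assumed signature: func Tokenize(buf []byte) ([]Token, error).
		// The sample input uses "[" and "]" as expression delimiters,
		// following the TokenType constants documented below.
		tokens, err := lexer.Tokenize([]byte(`[print "hello" 42]`))
		if err != nil {
			log.Fatalf("could not tokenize input: %v", err)
		}
		fmt.Printf("got %d tokens\n", len(tokens))
	}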
type TokenType
type TokenType uint8
TokenType enumerates the possible types of a lexical unit.
const (
	TokenInvalid         TokenType = iota
	TokenOpenExpression  // Open square bracket: "["
	TokenCloseExpression // Close square bracket: "]"
	TokenOpenList        // Open parenthesis: "("
	TokenCloseList       // Close parenthesis: ")"
	TokenOpenMap         // Open curly bracket: "{"
	TokenCloseMap        // Close curly bracket: "}"
	TokenNewLine         // Newline: "\n"
	TokenDoubleQuote     // Double quote: '"'
	TokenHash            // Hash: "#"
	TokenWhitespace      // Space, form feed, tab or carriage return: " \f\t\r"
	TokenWord            // Letters ([a-zA-Z]) and underscore
	TokenInteger         // Integers
	TokenSequence        // Extended sequence
	TokenColon           // Colon: ":"
	TokenDot             // Dot: "."
	TokenBackslash       // Backslash: "\"
	TokenEOF             // End of file
)
List of the types of lexical units recognized by the lexer.
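To make the character-to-token mapping above concrete, here is an illustrative sketch. classifyDelimiter is a hypothetical helper written for this documentation, not an exported function of the package; the mapping itself is taken directly from the constant comments:

	// classifyDelimiter (hypothetical) maps a single delimiter byte to
	// its TokenType, following the constants documented above.
	func classifyDelimiter(c byte) TokenType {
		switch c {
		case '[':
			return TokenOpenExpression
		case ']':
			return TokenCloseExpression
		case '(':
			return TokenOpenList
		case ')':
			return TokenCloseList
		case '{':
			return TokenOpenMap
		case '}':
			return TokenCloseMap
		case '\n':
			return TokenNewLine
		case '"':
			return TokenDoubleQuote
		case '#':
			return TokenHash
		case ':':
			return TokenColon
		case '.':
			return TokenDot
		case '\\':
			return TokenBackslash
		default:
			return TokenInvalid
		}
	}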