Documentation ¶
Overview ¶
Package lex provides all the lexing functions that transform text into lexical tokens, using token types defined in the pi/token package. It also provides basic file-source and position/region management functionality.
Index ¶
- Variables
- func DigitVal(ch rune) int
- func IsDigit(ch rune) bool
- func IsLetter(ch rune) bool
- func IsWhiteSpace(ch rune) bool
- func PrintError(w io.Writer, err error)
- func RunesFromBytes(b []byte) [][]rune
- func RunesFromString(str string) [][]rune
- type Actions
- type EosPos
- type Error
- type ErrorList
- func (p *ErrorList) Add(pos Pos, fname, msg string, srcln string, rule ki.Ki) *Error
- func (p ErrorList) Err() error
- func (p ErrorList) Error() string
- func (p ErrorList) Len() int
- func (p ErrorList) Less(i, j int) bool
- func (p *ErrorList) RemoveMultiples()
- func (p ErrorList) Report(maxN int, basepath string, showSrc, showRule bool) string
- func (p *ErrorList) Reset()
- func (p ErrorList) Sort()
- func (p ErrorList) Swap(i, j int)
- type File
- func (fl *File) AllocLines()
- func (fl *File) EnsureFinalEos(ln int)
- func (fl *File) InitFromLine(sfl *File, ln int) bool
- func (fl *File) InitFromString(str string, fname string, sup filecat.Supported) bool
- func (fl *File) InsertEos(cp Pos) Pos
- func (fl *File) IsLexPosValid(pos Pos) bool
- func (fl *File) LexAt(cp Pos) *Lex
- func (fl *File) LexAtSafe(cp Pos) Lex
- func (fl *File) LexLine(ln int) Line
- func (fl *File) LexTagSrc() string
- func (fl *File) LexTagSrcLn(ln int) string
- func (fl *File) LinesDeleted(stln, edln int)
- func (fl *File) LinesInserted(stln, nsz int)
- func (fl *File) NLines() int
- func (fl *File) NTokens(ln int) int
- func (fl *File) NextEos(stpos Pos, depth int) (Pos, bool)
- func (fl *File) NextEosAnyDepth(stpos Pos) (Pos, bool)
- func (fl *File) NextTokenPos(pos Pos) (Pos, bool)
- func (fl *File) OpenFile(fname string) error
- func (fl *File) PrevDepth(ln int) int
- func (fl *File) PrevStack(ln int) Stack
- func (fl *File) PrevTokenPos(pos Pos) (Pos, bool)
- func (fl *File) RegSrc(reg Reg) string
- func (fl *File) ReplaceEos(cp Pos)
- func (fl *File) SetLine(ln int, lexs, comments Line, stack Stack)
- func (fl *File) SetSrc(src *[][]rune, fname string, sup filecat.Supported)
- func (fl *File) SrcLine(ln int) string
- func (fl *File) Token(pos Pos) token.KeyToken
- func (fl *File) TokenMapReg(reg Reg) TokenMap
- func (fl *File) TokenRegSrc(reg Reg) string
- func (fl *File) TokenSrc(pos Pos) []rune
- func (fl *File) TokenSrcPos(pos Pos) Reg
- func (fl *File) TokenSrcReg(reg Reg) Reg
- func (fl *File) ValidTokenPos(pos Pos) (Pos, bool)
- type LangLexer
- type Lex
- type Lexer
- type Line
- type MatchPos
- type Matches
- type PassTwo
- func (pt *PassTwo) EosDetect(ts *TwoState)
- func (pt *PassTwo) EosDetectPos(ts *TwoState, pos Pos, nln int)
- func (pt *PassTwo) Error(ts *TwoState, msg string)
- func (pt *PassTwo) HasErrs(ts *TwoState) bool
- func (pt *PassTwo) MismatchError(ts *TwoState, tok token.Tokens)
- func (pt *PassTwo) NestDepth(ts *TwoState)
- func (pt *PassTwo) NestDepthLine(line Line, initDepth int)
- func (pt *PassTwo) PopNest(ts *TwoState, tok token.Tokens)
- func (pt *PassTwo) PushNest(ts *TwoState, tok token.Tokens)
- type Pos
- type Reg
- type Rule
- func (lr *Rule) AsLexRule() *Rule
- func (lr *Rule) BaseIface() reflect.Type
- func (lr *Rule) Compile(ls *State) bool
- func (lr *Rule) CompileAll(ls *State) bool
- func (lr *Rule) CompileNameMap(ls *State) bool
- func (lr *Rule) ComputeMatchLen(ls *State)
- func (lr *Rule) DoAct(ls *State, act Actions, tok *token.KeyToken)
- func (lr *Rule) Find(find string) []*Rule
- func (lr *Rule) IsMatch(ls *State) bool
- func (lr *Rule) IsMatchPos(ls *State) bool
- func (lr *Rule) Lex(ls *State) *Rule
- func (lr *Rule) LexStart(ls *State) *Rule
- func (lr *Rule) TargetLen(ls *State) int
- func (lr *Rule) Validate(ls *State) bool
- func (lr *Rule) WriteGrammar(writer io.Writer, depth int)
- type Stack
- type State
- func (ls *State) Add(tok token.KeyToken, st, ed int)
- func (ls *State) AtEol() bool
- func (ls *State) CurRune() bool
- func (ls *State) CurState() string
- func (ls *State) Error(pos int, msg string, rule *Rule)
- func (ls *State) Init()
- func (ls *State) LineString() string
- func (ls *State) MatchState(st string) bool
- func (ls *State) Next(inc int) bool
- func (ls *State) NextRune() bool
- func (ls *State) NextSrcLine() string
- func (ls *State) PopState() string
- func (ls *State) PushState(st string)
- func (ls *State) ReadEscape(quote rune) bool
- func (ls *State) ReadName()
- func (ls *State) ReadNameTmp(off int) string
- func (ls *State) ReadNumber() token.Tokens
- func (ls *State) ReadQuoted()
- func (ls *State) ReadUntil(until string)
- func (ls *State) Rune(off int) (rune, bool)
- func (ls *State) ScanMantissa(base int)
- func (ls *State) SetLine(src []rune)
- func (ls *State) String(off, sz int) (string, bool)
- type TokenMap
- type TwoState
Constants ¶
This section is empty.
Variables ¶
var KiT_Rule = kit.Types.AddType(&Rule{}, RuleProps)
var PosErr = Pos{-1, -1}
PosErr represents an error text position (-1 for both line and char) used as a return value for cases where error positions are possible
var PosZero = Pos{}
PosZero is the uninitialized zero text position (which is still a valid position)
var RegZero = Reg{}
RegZero is the zero region
var RuleProps = ki.Props{}
Functions ¶
func IsWhiteSpace ¶
func PrintError ¶
PrintError is a utility function that prints a list of errors to w, one error per line, if the err parameter is an ErrorList. Otherwise it prints the err string.
func RunesFromBytes ¶ added in v0.5.5
RunesFromBytes returns the lines of runes from a basic byte array
func RunesFromString ¶ added in v0.5.5
RunesFromString returns the lines of runes from a string (more efficient than converting to bytes)
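The lines-of-runes representation described above can be sketched as a small self-contained re-implementation (this is an illustration of the idea, not the package's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// runesFromString splits src into lines and converts each line to a
// rune slice, so that character positions index runes, not bytes.
func runesFromString(src string) [][]rune {
	lines := strings.Split(src, "\n")
	rns := make([][]rune, len(lines))
	for i, ln := range lines {
		rns[i] = []rune(ln)
	}
	return rns
}

func main() {
	rns := runesFromString("héllo\nwörld")
	fmt.Println(len(rns), len(rns[0])) // 2 5 -- "héllo" is 5 runes despite 6 bytes
}
```

Working in runes is what makes positions directly convertible to indexes even for multi-byte UTF-8 source.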
Types ¶
type Actions ¶
type Actions int
Actions are lexing actions to perform
const (
	// Next means advance input position to the next character(s) after the matched characters
	Next Actions = iota

	// Name means read in an entire name, which is letters, _ and digits after first letter;
	// position will be advanced to just after
	Name

	// Number means read in an entire number -- the token type will automatically be
	// set to the actual type of number that was read in, and position advanced to just after
	Number

	// Quoted means read in an entire string enclosed in quote delimiter
	// that is present at current position, with proper skipping of escaped characters.
	// Position advanced to just after
	Quoted

	// QuotedRaw means read in an entire string enclosed in quote delimiter
	// that is present at start position, with proper skipping of escaped characters.
	// Position advanced to just after.
	// Raw version supports multi-line and includes CR etc at end of lines (e.g., back-tick
	// in various languages)
	QuotedRaw

	// EOL means read till the end of the line (e.g., for single-line comments)
	EOL

	// ReadUntil reads until string(s) in the Until field are found,
	// or until the EOL if none are found
	ReadUntil

	// PushState means push the given state value onto the state stack
	PushState

	// PopState means pop given state value off the state stack
	PopState

	// SetGuestLex means install the Name (must be a prior action) as the guest
	// lexer -- it will take over lexing until PopGuestLex is called
	SetGuestLex

	// PopGuestLex removes the current guest lexer and returns to the original
	// language lexer
	PopGuestLex

	ActionsN
)
The lexical acts
func (*Actions) FromString ¶
func (Actions) MarshalJSON ¶
func (*Actions) UnmarshalJSON ¶
type EosPos ¶ added in v0.5.5
type EosPos []int
EosPos is a line of EOS token positions, always sorted low-to-high
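Because EOS positions are kept sorted low-to-high, lookups like "next EOS at or after this token" can use binary search. A minimal sketch of that idea (an illustrative re-implementation, not the package's code):

```go
package main

import (
	"fmt"
	"sort"
)

// eosPos mimics the described EosPos: a sorted list of EOS token
// positions within a line.
type eosPos []int

// findGtEq returns the first EOS position >= tp, and ok=false if none,
// using binary search on the sorted slice.
func (ep eosPos) findGtEq(tp int) (int, bool) {
	i := sort.SearchInts(ep, tp)
	if i == len(ep) {
		return -1, false
	}
	return ep[i], true
}

func main() {
	ep := eosPos{3, 8, 15}
	p, ok := ep.findGtEq(9)
	fmt.Println(p, ok) // 15 true
}
```

This is why maintaining the separate sorted EOS list gives "very fast access" for statement scoping during parsing.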
type Error ¶
type Error struct {
	Pos      Pos    `desc:"position where the error occurred in the source"`
	Filename string `desc:"full filename with path"`
	Msg      string `desc:"brief error message"`
	Src      string `desc:"line of source where error was"`
	Rule     ki.Ki  `desc:"lexer or parser rule that emitted the error"`
}
In an ErrorList, an error is represented by an *Error. The position Pos, if valid, points to the beginning of the offending token, and the error condition is described by Msg.
func (Error) Error ¶
Error implements the error interface -- gives the minimal version of error string
type ErrorList ¶
type ErrorList []*Error
ErrorList is a list of *Errors. The zero value for an ErrorList is an empty ErrorList ready to use.
func (ErrorList) Err ¶
Err returns an error equivalent to this error list. If the list is empty, Err returns nil.
func (*ErrorList) RemoveMultiples ¶
func (p *ErrorList) RemoveMultiples()
RemoveMultiples sorts an ErrorList and removes all but the first error per line.
func (ErrorList) Report ¶ added in v0.5.5
Report returns all (or up to maxN if > 0) errors in the list in one string with customizable output options for viewing errors:
- basepath if non-empty shows filename relative to that path.
- showSrc shows the source line on a second line -- truncated to 30 chars around err
- showRule prints the rule name
type File ¶
type File struct {
	Filename   string            `desc:"the current file being lex'd"`
	Sup        filecat.Supported `desc:"the supported file type, if supported (typically only supported files are processed)"`
	BasePath   string            `desc:"base path for reporting file names -- this must be set externally e.g., by gide for the project root path"`
	Lines      *[][]rune         `desc:"contents of the file as lines of runes"`
	Lexs       []Line            `desc:"lex'd version of the lines -- allocated to size of Lines"`
	Comments   []Line            `` /* 148-byte string literal not displayed */
	LastStacks []Stack           `desc:"stack present at the end of each line -- needed for contextualizing line-at-time lexing while editing"`
	EosPos     []EosPos          `desc:"token positions per line for the EOS (end of statement) tokens -- very important for scoping top-down parsing"`
}
File contains the contents of the file being parsed -- all kept in memory, and represented by Line as runes, so that positions in the file are directly convertible to indexes in Lines structure
func (*File) AllocLines ¶
func (fl *File) AllocLines()
AllocLines allocates the data per line: lex outputs and stack. We reset state so stale state is not hanging around.
func (*File) EnsureFinalEos ¶ added in v0.5.5
EnsureFinalEos makes sure that the given line ends with an EOS (if it has tokens). Used for line-at-time parsing to ensure matching works even if the end of the line has not yet been processed.
func (*File) InitFromLine ¶ added in v0.5.5
InitFromLine initializes from one line of source file
func (*File) InitFromString ¶ added in v0.5.5
InitFromString initializes from given string. Returns false if string is empty
func (*File) InsertEos ¶ added in v0.5.5
InsertEos inserts an EOS just after the given token position (e.g., cp = last token in line)
func (*File) IsLexPosValid ¶
IsLexPosValid returns true if given lexical token position is valid
func (*File) LexAtSafe ¶
LexAtSafe returns the Lex item at given position, or last lex item if beyond end
func (*File) LexLine ¶
LexLine returns the lexing output for given line, combining comments and all other tokens and allocating new memory using clone
func (*File) LexTagSrcLn ¶
LexTagSrcLn returns the lex'd tagged source line for given line
func (*File) LinesDeleted ¶
LinesDeleted deletes lines -- called e.g., by giv.TextBuf to sync the markup with ongoing edits
func (*File) LinesInserted ¶
LinesInserted inserts new lines -- called e.g., by giv.TextBuf to sync the markup with ongoing edits
func (*File) NextEos ¶ added in v0.5.5
NextEos finds the next EOS position at given depth, false if none
func (*File) NextEosAnyDepth ¶ added in v0.5.5
NextEosAnyDepth finds the next EOS at any depth
func (*File) NextTokenPos ¶
NextTokenPos returns the next token position, false if at end of tokens
func (*File) PrevTokenPos ¶
PrevTokenPos returns the previous token position, false if at beginning of tokens
func (*File) ReplaceEos ¶ added in v0.5.5
ReplaceEos replaces given token with an EOS
func (*File) SetSrc ¶
SetSrc sets the source to given content, and alloc Lexs -- if basepath is empty then it is set to the path for the filename
func (*File) SrcLine ¶ added in v0.5.5
SrcLine returns given line of source, as a string, or "" if out of range
func (*File) TokenMapReg ¶
TokenMapReg creates a TokenMap of tokens in region, including their Cat and SubCat levels -- errs on the side of inclusiveness -- used for optimizing token matching
func (*File) TokenRegSrc ¶
TokenRegSrc returns the source code associated with the given token region
func (*File) TokenSrcPos ¶
TokenSrcPos returns source reg associated with lex token at given token position
func (*File) TokenSrcReg ¶
TokenSrcReg translates a region of tokens into a region of source
type LangLexer ¶
type LangLexer interface {
	// LexerByName returns the top-level lex.Rule for given language (case invariant lookup)
	LexerByName(lang string) *Rule
}
LangLexer looks up the lexer for a given language -- implemented in the parent pi package, so an interface is needed here
var TheLangLexer LangLexer
TheLangLexer is the instance of LangLexer interface used to lookup lexers for languages -- is set in pi/langs.go
type Lex ¶
type Lex struct {
	Tok  token.KeyToken `` /* 261-byte string literal not displayed */
	St   int            `desc:"start rune index within original source line for this token"`
	Ed   int            `desc:"end rune index within original source line for this token (exclusive -- ends one before this)"`
	Time nptime.Time    `` /* 129-byte string literal not displayed */
}
Lex represents a single lexical element, with a token, and start and end rune positions within a line of a file. Critically it also contains the nesting depth computed from all the parens, brackets, braces. Todo: also support XML < > </ > tag depth.
func (*Lex) ContainsPos ¶
ContainsPos returns true if the Lex element contains given character position
func (*Lex) OverlapsReg ¶
OverlapsReg returns true if the two regions overlap
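The start-inclusive / end-exclusive convention documented for Lex's St and Ed fields makes containment and overlap checks simple half-open interval tests. A self-contained sketch of that logic (illustrative, not the package's actual methods):

```go
package main

import "fmt"

// lexSpan models the documented Lex convention: St is inclusive,
// Ed is exclusive, i.e., the span covers [St, Ed).
type lexSpan struct{ St, Ed int }

// containsPos reports whether rune position p falls inside the span.
func (lx lexSpan) containsPos(p int) bool {
	return p >= lx.St && p < lx.Ed
}

// overlaps reports whether two half-open spans share any position.
func (a lexSpan) overlaps(b lexSpan) bool {
	return a.St < b.Ed && b.St < a.Ed
}

func main() {
	tok := lexSpan{St: 4, Ed: 7}
	fmt.Println(tok.containsPos(6), tok.containsPos(7)) // true false -- Ed is excluded
	fmt.Println(tok.overlaps(lexSpan{St: 6, Ed: 10}))   // true
}
```

Half-open intervals also make adjacent tokens compose cleanly: a token ending at Ed and the next starting at the same index neither overlap nor leave a gap.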
type Lexer ¶
type Lexer interface {
	ki.Ki

	// Compile performs any one-time compilation steps on the rule
	Compile(ls *State) bool

	// Validate checks for any errors in the rules and issues warnings,
	// returns true if valid (no err) and false if invalid (errs)
	Validate(ls *State) bool

	// Lex tries to apply rule to given input state, returns true if matched, false if not
	Lex(ls *State) *Rule

	// AsLexRule returns object as a lex.Rule
	AsLexRule() *Rule
}
Lexer is the interface type for lexers -- needed primarily for defining the BaseIface for the gui when making new nodes
type Line ¶
type Line []Lex
Line is one line of Lex'd text
func MergeLines ¶
MergeLines merges the two lines of lex regions into a combined list properly ordered by sequence of tags within the line.
func (*Line) AddLex ¶
AddLex adds one element to the lex line with given params, returns pointer to that new lex
func (*Line) AddSort ¶
AddSort adds a new lex element in sorted order to list, sorted by start position, and if at the same start position, then sorted by end position
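The sorted-insert behavior described for AddSort (order by start position, ties broken by end position) can be sketched with a minimal stand-in type -- the ordering rule and field names here are assumptions for illustration, not the package's code:

```go
package main

import (
	"fmt"
	"sort"
)

// span is a minimal stand-in for a Lex element with just start/end.
type span struct{ St, Ed int }

// addSort inserts s into line keeping it sorted by St, then by Ed.
func addSort(line []span, s span) []span {
	i := sort.Search(len(line), func(i int) bool {
		if line[i].St != s.St {
			return line[i].St > s.St
		}
		return line[i].Ed > s.Ed
	})
	line = append(line, span{}) // grow by one
	copy(line[i+1:], line[i:])  // shift tail right
	line[i] = s
	return line
}

func main() {
	line := []span{{0, 3}, {8, 10}}
	line = addSort(line, span{4, 7})
	fmt.Println(line) // [{0 3} {4 7} {8 10}]
}
```

`sort.Search` finds the insertion point in O(log n), and the shift-and-assign keeps the slice sorted without a full re-sort.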
type MatchPos ¶
type MatchPos int
MatchPos are special positions for a match to occur
const (
	// AnyPos matches at any position
	AnyPos MatchPos = iota

	// StartOfLine matches at start of line
	StartOfLine

	// EndOfLine matches at end of line
	EndOfLine

	// MiddleOfLine matches not at the start or end
	MiddleOfLine

	// StartOfWord matches at start of word
	StartOfWord

	// EndOfWord matches at end of word
	EndOfWord

	// MiddleOfWord matches not at the start or end
	MiddleOfWord

	MatchPosN
)
Matching position rules
func (*MatchPos) FromString ¶
func (MatchPos) MarshalJSON ¶
func (*MatchPos) UnmarshalJSON ¶
type Matches ¶
type Matches int
Matches are what kind of lexing matches to make
const (
	// String means match a specific string as given in the rule.
	// Note: this only looks for the string with no constraints on
	// what happens after this string -- use StrName to match entire names
	String Matches = iota

	// StrName means match a specific string that is a complete alpha-numeric
	// string (including underbar _) with some other char at the end --
	// must use this for all keyword matches to ensure that it isn't just
	// the start of a longer name
	StrName

	// Match any letter, including underscore
	Letter

	// Match digit 0-9
	Digit

	// Match any white space (space, tab) -- input is already broken into lines
	WhiteSpace

	// CurState means match current state value set by a PushState action, using String value in rule;
	// all CurState cases must generally be first in list of rules so they can preempt
	// other rules when the state is active
	CurState

	// AnyRune means match any rune -- use this as the last condition where other terminators
	// come first!
	AnyRune

	MatchesN
)
Matching rules
func (*Matches) FromString ¶
func (Matches) MarshalJSON ¶
func (*Matches) UnmarshalJSON ¶
type PassTwo ¶
type PassTwo struct {
	DoEos     bool               `desc:"should we perform EOS detection on this type of file?"`
	Eol       bool               `desc:"use end-of-line as a default EOS, if nesting depth is same as start of line (python) -- see also EolToks"`
	Semi      bool               `desc:"replace all semicolons with EOS to keep it consistent (C, Go..)"`
	Backslash bool               `desc:"use backslash as a line continuer (python)"`
	RBraceEos bool               `` /* 167-byte string literal not displayed */
	EolToks   token.KeyTokenList `desc:"specific tokens to recognize at the end of a line that trigger an EOS (Go)"`
}
PassTwo performs second pass(es) through the lexicalized version of the source, computing nesting depth for every token once and for all -- this is essential for properly matching tokens and also for colorization in syntax highlighting. Optionally, a subsequent pass finds end-of-statement (EOS) tokens, which are essential for parsing to first break the source down into statement-sized chunks. A separate list of EOS token positions is maintained for very fast access.
func (*PassTwo) EosDetectPos ¶ added in v0.5.5
EosDetectPos performs EOS detection at the given starting position, for the given number of lines
func (*PassTwo) MismatchError ¶
MismatchError reports a mismatch for given type of parentheses / bracket
func (*PassTwo) NestDepthLine ¶
NestDepthLine performs nesting depth computation on only one line, starting at given initial depth -- updates the given line
type Pos ¶
Pos is a position within the source file. It is always recorded internally in 0-based offsets, but converted into 1-based offsets for public consumption. Ch positions are always in runes, not bytes. Also used for lex token indexes.
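The 0-based internal / 1-based display convention for positions can be sketched as follows (field names mirror the docs; the String rendering is an assumption for illustration):

```go
package main

import "fmt"

// pos sketches the documented convention: stored 0-based internally,
// shown 1-based to users.
type pos struct{ Ln, Ch int }

// String renders the position 1-based for human consumption.
func (p pos) String() string {
	return fmt.Sprintf("%d:%d", p.Ln+1, p.Ch+1)
}

func main() {
	p := pos{Ln: 0, Ch: 4} // first line, fifth rune -- 0-based internally
	fmt.Println(p)         // 1:5
}
```

Keeping storage 0-based makes positions usable directly as slice indexes, while the 1-based rendering matches what editors display.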
type Reg ¶
type Reg struct {
	St Pos `desc:"starting position of region"`
	Ed Pos `desc:"ending position of region"`
}
Reg is a contiguous region within the source file
type Rule ¶
type Rule struct {
	ki.Node
	Off       bool             `desc:"disable this rule -- useful for testing and exploration"`
	Desc      string           `desc:"description / comments about this rule"`
	Token     token.Tokens     `desc:"the token value that this rule generates -- use None for non-terminals"`
	Match     Matches          `desc:"the lexical match that we look for to engage this rule"`
	Pos       MatchPos         `desc:"position where match can occur"`
	String    string           `desc:"if action is LexMatch, this is the string we match"`
	Offset    int              `desc:"offset into the input to look for a match: 0 = current char, 1 = next one, etc"`
	SizeAdj   int              `` /* 151-byte string literal not displayed */
	Acts      []Actions        `desc:"the action(s) to perform, in order, if there is a match -- these are performed prior to iterating over child nodes"`
	Until     string           `` /* 260-byte string literal not displayed */
	PushState string           `desc:"the state to push if our action is PushState -- note that State matching is on String, not this value"`
	NameMap   bool             `` /* 336-byte string literal not displayed */
	MatchLen  int              `view:"-" json:"-" xml:"-" desc:"length of source that matched -- if Next is called, this is what will be skipped to"`
	NmMap     map[string]*Rule `inactive:"+" json:"-" xml:"-" desc:"NameMap lookup map -- created during Compile"`
}
lex.Rule operates on the text input to produce the lexical tokens.
Lexing is done line-by-line -- you must push and pop states to coordinate across multiple lines, e.g., for multi-line comments.
There is full access to entire line and you can decide based on future (offset) characters.
In general it is best to keep lexing as simple as possible and leave the more complex things for the parsing step.
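The push/pop-state coordination across lines mentioned above can be illustrated with a tiny stand-alone sketch for block comments -- the state name and scanning logic here are simplified assumptions, not the package's actual rules:

```go
package main

import "fmt"

// lexLine scans one line, pushing a "comment" state when a block
// comment opens and popping it when one closes, so the next line's
// lexing starts from the right context. (Simplified: does not skip
// over consumed delimiter characters.)
func lexLine(line string, stack []string) []string {
	inComment := len(stack) > 0 && stack[len(stack)-1] == "comment"
	for i := 0; i+1 < len(line); i++ {
		switch {
		case !inComment && line[i] == '/' && line[i+1] == '*':
			stack = append(stack, "comment") // PushState
			inComment = true
		case inComment && line[i] == '*' && line[i+1] == '/':
			stack = stack[:len(stack)-1] // PopState
			inComment = false
		}
	}
	return stack
}

func main() {
	var stack []string
	stack = lexLine("x := 1 /* start of", stack)
	fmt.Println(len(stack)) // 1: still inside the comment at end of line
	stack = lexLine("   still going */ y := 2", stack)
	fmt.Println(len(stack)) // 0: comment closed
}
```

This is why File keeps LastStacks per line: re-lexing an edited line needs the state stack from the end of the previous line.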
func (*Rule) Compile ¶ added in v0.5.5
Compile performs any one-time compilation steps on the rule returns false if there are any problems.
func (*Rule) CompileAll ¶ added in v0.5.5
CompileAll is called on the top-level Rule to compile all nodes. returns true if everything is ok
func (*Rule) CompileNameMap ¶ added in v0.5.5
CompileNameMap compiles name map -- returns false if there are problems.
func (*Rule) ComputeMatchLen ¶ added in v0.5.5
ComputeMatchLen computes MatchLen based on match type
func (*Rule) Find ¶
Find looks for rules in the tree that contain given string in String or Name fields
func (*Rule) IsMatch ¶
IsMatch tests if the rule matches for current input state, returns true if so, false if not
func (*Rule) IsMatchPos ¶
IsMatchPos tests if the rule matches position
func (*Rule) Lex ¶
Lex tries to apply rule to given input state, returns lowest-level rule that matched, nil if none
func (*Rule) LexStart ¶
LexStart is called on the top-level lex node to start lexing process for one step
type State ¶
type State struct {
	Filename  string      `desc:"the current file being lex'd"`
	KeepWS    bool        `desc:"if true, record whitespace tokens -- else ignore"`
	Src       []rune      `desc:"the current line of source being processed"`
	Lex       Line        `desc:"the lex output for this line"`
	Comments  Line        `desc:"the comments output for this line -- kept separately"`
	Pos       int         `desc:"the current rune char position within the line"`
	Ln        int         `desc:"the line within overall source that we're operating on (0 indexed)"`
	Ch        rune        `desc:"the current rune read by NextRune"`
	Stack     Stack       `desc:"state stack"`
	LastName  string      `desc:"the last name that was read"`
	GuestLex  *Rule       `desc:"a guest lexer that can be installed for managing a different language type, e.g., quoted text in markdown files"`
	SaveStack Stack       `desc:"copy of stack at point when guest lexer was installed -- restore when popped"`
	Time      nptime.Time `desc:"time stamp for lexing -- set at start of new lex process"`
	Errs      ErrorList   `desc:"any error messages accumulated during lexing specifically"`
}
lex.State is the state maintained for lexing
func (*State) LineString ¶
LineString returns the current lex output as tagged source
func (*State) MatchState ¶ added in v0.5.5
MatchState returns true if the current state matches the string
func (*State) Next ¶
Next moves to next position using given increment in source line -- returns false if at end
func (*State) NextSrcLine ¶
NextSrcLine returns the next line of text
func (*State) ReadEscape ¶
ReadEscape parses an escape sequence where quote is the accepted escaped quote character. In case of a syntax error, it stops at the offending character (without consuming it) and returns false. Otherwise it returns true.
func (*State) ReadName ¶
func (ls *State) ReadName()
ReadName reads a standard alpha-numeric_ name -- saves in LastName
func (*State) ReadNameTmp ¶ added in v0.5.5
ReadNameTmp reads a standard alpha-numeric_ name and returns it. Does not update the lexing position -- a "lookahead" name read
func (*State) ReadNumber ¶
ReadNumber reads a number of any sort, returning the type of the number
func (*State) ReadQuoted ¶
func (ls *State) ReadQuoted()
func (*State) ReadUntil ¶ added in v0.5.5
ReadUntil reads until given string(s) -- does do depth tracking if looking for a bracket open / close kind of symbol. For multiple "until" string options, separate each by | and use empty to match a single | or || in combination with other options. Terminates at end of line without error
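One reading of the "|"-separated Until syntax described above -- where an empty field stands for a literal "|" target -- can be sketched as follows. This interpretation of the option parsing is an assumption for illustration, not the package's actual behavior:

```go
package main

import (
	"fmt"
	"strings"
)

// untilOptions splits an Until spec on "|", treating an empty field
// as a request to match a literal "|" (assumed interpretation).
func untilOptions(until string) []string {
	var opts []string
	for _, f := range strings.Split(until, "|") {
		if f == "" {
			f = "|"
		}
		opts = append(opts, f)
	}
	return opts
}

func main() {
	fmt.Println(untilOptions("}|;"))  // [} ;]
	fmt.Println(untilOptions("}||;")) // [} | ;] -- empty field becomes literal |
}
```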
func (*State) Rune ¶
Rune gets the rune at given offset from current position, returns false if out of range
func (*State) ScanMantissa ¶
type TokenMap ¶
TokenMap is a token map, for optimizing token exclusion
type TwoState ¶
type TwoState struct {
	Pos       Pos            `desc:"position in lex tokens we're on"`
	Src       *File          `desc:"file that we're operating on"`
	NestStack []token.Tokens `desc:"stack of nesting tokens"`
	Errs      ErrorList      `desc:"any error messages accumulated during lexing specifically"`
}
TwoState is the state maintained for the PassTwo process
func (*TwoState) Init ¶
func (ts *TwoState) Init()
Init initializes state for a new pass -- called at start of NestDepth
func (*TwoState) NestStackStr ¶
NestStackStr returns the token stack as strings