Documentation ¶
Overview ¶
Package antlr implements the Go version of the ANTLR 4 runtime.
The ANTLR Tool ¶
ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build parse trees and also generates a listener interface (or visitor) that makes it easy to respond to the recognition of phrases of interest.
Go Runtime ¶
Through version 4.11.x, the Go runtime was not properly versioned for Go modules. After that point, the runtime source code to be imported was held in the `runtime/Go/antlr/v4` directory, and the go.mod file was updated to reflect the version of ANTLR4 with which it is compatible (i.e. it uses the /v4 path).
However, this was found to be problematic: with the runtime embedded so far underneath the root of the repo, `go get` and related commands could not properly resolve the location of the Go runtime source code. This meant that the reference to the runtime in your `go.mod` file would refer to the correct source code, but would not list a release tag such as @v4.13.1 - this was confusing, to say the least.
As of 4.13.0, the runtime is now available as a go module in its own repo, and can be imported as `github.com/antlr4-go/antlr` (the go get command should also be used with this path). See the main documentation for the ANTLR4 project for more information, which is available at ANTLR docs. The documentation for using the Go runtime is available at Go runtime docs.
This means that if you are using the source code without modules, you should also use the source code in the new repo, though we highly recommend that you use go modules, as they are now idiomatic for Go.
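As a sketch of the module setup (the module name `example.com/myrecognizer` here is just a placeholder), the dependency is added with:

```
go mod init example.com/myrecognizer
go get github.com/antlr4-go/antlr/v4
```

after which your Go source imports the runtime as `antlr "github.com/antlr4-go/antlr/v4"`.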
I am aware that this change will prove Hyrum's Law, but am prepared to live with it for the common good.
Go runtime author: Jim Idle jimi@idle.ws
Code Generation ¶
ANTLR supports the generation of code in a number of target languages; the generated code is supported by a runtime library written specifically for each target language. This library is the runtime for the Go target.
To generate code for the Go target, it is generally recommended to place the source grammar files in a package of their own, and to use the `.sh` script method of generating code via the go generate directive. In that same directory it is usual, though not required, to place the antlr tool that should be used to generate the code. That does mean that the antlr tool JAR file will be checked in to your source code control; you are, of course, free to specify the version of the ANTLR tool in any other way, such as an alias in `.zshrc` or equivalent, a profile in your IDE, or configuration in your CI system. Checking in the JAR does mean that it is easy to reproduce the build as it was at any point in its history.
Here is a general/recommended template for an ANTLR based recognizer in Go:
.
├── parser
│   ├── mygrammar.g4
│   ├── antlr-4.13.1-complete.jar
│   ├── generate.go
│   └── generate.sh
├── parsing  - generated code goes here
│   └── error_listeners.go
├── go.mod
├── go.sum
├── main.go
└── main_test.go
Make sure that the package statement in your grammar file(s) reflects the go package the generated code will exist in.
The generate.go file then looks like this:
package parser

//go:generate ./generate.sh
And the generate.sh file will look similar to this:
#!/bin/sh

alias antlr4='java -Xmx500M -cp "./antlr-4.13.1-complete.jar:$CLASSPATH" org.antlr.v4.Tool'
antlr4 -Dlanguage=Go -no-visitor -package parsing *.g4
depending on whether you want visitors or listeners or any other ANTLR options. Note that another option here is to generate the code into a different directory and package of your choosing.
From the command line at the root of your source package (the location of go.mod) you can then simply issue the command:
go generate ./...
This will generate the code for the parser and place it in the parsing package. You can then use the generated code by importing the parsing package.
There are no hard and fast rules on this. It is just a recommendation. You can generate the code in any way and to anywhere you like.
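As an illustrative sketch only - the constructors NewMyGrammarLexer and NewMyGrammarParser, the start rule Root, and the module path example.com/myrecognizer are hypothetical stand-ins for whatever your grammar actually generates into the parsing package - wiring the generated code up usually looks something like this:

```
package main

import (
	"github.com/antlr4-go/antlr/v4"

	"example.com/myrecognizer/parsing" // hypothetical path for the generated package
)

func main() {
	// Build the usual pipeline: input stream -> lexer -> token stream -> parser.
	input := antlr.NewInputStream("some input to parse")
	lexer := parsing.NewMyGrammarLexer(input) // hypothetical generated constructor
	stream := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)
	p := parsing.NewMyGrammarParser(stream) // hypothetical generated constructor

	// Invoke the start rule; the returned context is the root of the parse tree.
	tree := p.Root() // hypothetical start rule
	_ = tree
}
```

A listener or visitor generated alongside the parser can then be driven over the returned tree in the usual ANTLR fashion.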
Copyright Notice ¶
Copyright (c) 2012-2023 The ANTLR Project. All rights reserved.
Use of this file is governed by the BSD 3-clause license, which can be found in the LICENSE.txt file in the project root.
Index ¶
- Constants
- Variables
- func ConfigureRuntime(options ...runtimeOption) error
- func EscapeWhitespace(s string, escapeSpaces bool) string
- func InitBaseParserRuleContext(prc *BaseParserRuleContext, parent ParserRuleContext, invokingStateNumber int)
- func PredictionModeallConfigsInRuleStopStates(configs *ATNConfigSet) bool
- func PredictionModeallSubsetsConflict(altsets []*BitSet) bool
- func PredictionModeallSubsetsEqual(altsets []*BitSet) bool
- func PredictionModegetSingleViableAlt(altsets []*BitSet) int
- func PredictionModegetUniqueAlt(altsets []*BitSet) int
- func PredictionModehasConfigInRuleStopState(configs *ATNConfigSet) bool
- func PredictionModehasConflictingAltSet(altsets []*BitSet) bool
- func PredictionModehasNonConflictingAltSet(altsets []*BitSet) bool
- func PredictionModehasSLLConflictTerminatingPrediction(mode int, configs *ATNConfigSet) bool
- func PredictionModehasStateAssociatedWithOneAlt(configs *ATNConfigSet) bool
- func PredictionModeresolvesToJustOneViableAlt(altsets []*BitSet) int
- func PrintArrayJavaStyle(sa []string) string
- func TerminalNodeToStringArray(sa []TerminalNode) []string
- func TreesGetNodeText(t Tree, ruleNames []string, recog Parser) string
- func TreesStringTree(tree Tree, ruleNames []string, recog Recognizer) string
- func WithLRLoopEntryBranchOpt(off bool) runtimeOption
- func WithLexerATNSimulatorDFADebug(debug bool) runtimeOption
- func WithLexerATNSimulatorDebug(debug bool) runtimeOption
- func WithMemoryManager(use bool) runtimeOption
- func WithParserATNSimulatorDFADebug(debug bool) runtimeOption
- func WithParserATNSimulatorDebug(debug bool) runtimeOption
- func WithParserATNSimulatorRetryDebug(debug bool) runtimeOption
- func WithParserATNSimulatorTraceATNSim(trace bool) runtimeOption
- func WithStatsTraceStacks(trace bool) runtimeOption
- func WithTopN(topN int) statsOption
- type AND
- type ATN
- type ATNAltConfigComparator
- type ATNConfig
- func NewATNConfig(c *ATNConfig, state ATNState, context *PredictionContext, ...) *ATNConfig
- func NewATNConfig1(c *ATNConfig, state ATNState, context *PredictionContext) *ATNConfig
- func NewATNConfig2(c *ATNConfig, semanticContext SemanticContext) *ATNConfig
- func NewATNConfig3(c *ATNConfig, state ATNState, semanticContext SemanticContext) *ATNConfig
- func NewATNConfig4(c *ATNConfig, state ATNState) *ATNConfig
- func NewATNConfig5(state ATNState, alt int, context *PredictionContext, ...) *ATNConfig
- func NewATNConfig6(state ATNState, alt int, context *PredictionContext) *ATNConfig
- func NewLexerATNConfig1(state ATNState, alt int, context *PredictionContext) *ATNConfig
- func NewLexerATNConfig2(c *ATNConfig, state ATNState, context *PredictionContext) *ATNConfig
- func NewLexerATNConfig3(c *ATNConfig, state ATNState, lexerActionExecutor *LexerActionExecutor) *ATNConfig
- func NewLexerATNConfig4(c *ATNConfig, state ATNState) *ATNConfig
- func NewLexerATNConfig6(state ATNState, alt int, context *PredictionContext) *ATNConfig
- func (a *ATNConfig) Equals(o Collectable[*ATNConfig]) bool
- func (a *ATNConfig) GetAlt() int
- func (a *ATNConfig) GetContext() *PredictionContext
- func (a *ATNConfig) GetReachesIntoOuterContext() int
- func (a *ATNConfig) GetSemanticContext() SemanticContext
- func (a *ATNConfig) GetState() ATNState
- func (a *ATNConfig) Hash() int
- func (a *ATNConfig) InitATNConfig(c *ATNConfig, state ATNState, alt int, context *PredictionContext, ...)
- func (a *ATNConfig) LEquals(other Collectable[*ATNConfig]) bool
- func (a *ATNConfig) LHash() int
- func (a *ATNConfig) PEquals(o Collectable[*ATNConfig]) bool
- func (a *ATNConfig) PHash() int
- func (a *ATNConfig) SetContext(v *PredictionContext)
- func (a *ATNConfig) SetReachesIntoOuterContext(v int)
- func (a *ATNConfig) String() string
- type ATNConfigComparator
- type ATNConfigSet
- func (b *ATNConfigSet) Add(config *ATNConfig, mergeCache *JPCMap) bool
- func (b *ATNConfigSet) AddAll(coll []*ATNConfig) bool
- func (b *ATNConfigSet) Alts() *BitSet
- func (b *ATNConfigSet) Clear()
- func (b *ATNConfigSet) Compare(bs *ATNConfigSet) bool
- func (b *ATNConfigSet) Contains(item *ATNConfig) bool
- func (b *ATNConfigSet) ContainsFast(item *ATNConfig) bool
- func (b *ATNConfigSet) Equals(other Collectable[ATNConfig]) bool
- func (b *ATNConfigSet) GetPredicates() []SemanticContext
- func (b *ATNConfigSet) GetStates() *JStore[ATNState, Comparator[ATNState]]
- func (b *ATNConfigSet) Hash() int
- func (b *ATNConfigSet) OptimizeConfigs(interpreter *BaseATNSimulator)
- func (b *ATNConfigSet) String() string
- type ATNConfigSetPair
- type ATNDeserializationOptions
- func (opts *ATNDeserializationOptions) GenerateRuleBypassTransitions() bool
- func (opts *ATNDeserializationOptions) ReadOnly() bool
- func (opts *ATNDeserializationOptions) SetGenerateRuleBypassTransitions(generateRuleBypassTransitions bool)
- func (opts *ATNDeserializationOptions) SetReadOnly(readOnly bool)
- func (opts *ATNDeserializationOptions) SetVerifyATN(verifyATN bool)
- func (opts *ATNDeserializationOptions) VerifyATN() bool
- type ATNDeserializer
- type ATNState
- type AbstractPredicateTransition
- type ActionTransition
- type AltDict
- type AtomTransition
- type BailErrorStrategy
- type BaseATNConfigComparator
- type BaseATNSimulator
- type BaseATNState
- func (as *BaseATNState) AddTransition(trans Transition, index int)
- func (as *BaseATNState) Equals(other Collectable[ATNState]) bool
- func (as *BaseATNState) GetATN() *ATN
- func (as *BaseATNState) GetEpsilonOnlyTransitions() bool
- func (as *BaseATNState) GetNextTokenWithinRule() *IntervalSet
- func (as *BaseATNState) GetRuleIndex() int
- func (as *BaseATNState) GetStateNumber() int
- func (as *BaseATNState) GetStateType() int
- func (as *BaseATNState) GetTransitions() []Transition
- func (as *BaseATNState) Hash() int
- func (as *BaseATNState) SetATN(atn *ATN)
- func (as *BaseATNState) SetNextTokenWithinRule(v *IntervalSet)
- func (as *BaseATNState) SetRuleIndex(v int)
- func (as *BaseATNState) SetStateNumber(stateNumber int)
- func (as *BaseATNState) SetTransitions(t []Transition)
- func (as *BaseATNState) String() string
- type BaseAbstractPredicateTransition
- type BaseBlockStartState
- type BaseDecisionState
- type BaseInterpreterRuleContext
- type BaseLexer
- func (b *BaseLexer) Emit() Token
- func (b *BaseLexer) EmitEOF() Token
- func (b *BaseLexer) EmitToken(token Token)
- func (b *BaseLexer) GetATN() *ATN
- func (b *BaseLexer) GetAllTokens() []Token
- func (b *BaseLexer) GetCharIndex() int
- func (b *BaseLexer) GetCharPositionInLine() int
- func (b *BaseLexer) GetInputStream() CharStream
- func (b *BaseLexer) GetInterpreter() ILexerATNSimulator
- func (b *BaseLexer) GetLine() int
- func (b *BaseLexer) GetSourceName() string
- func (b *BaseLexer) GetText() string
- func (b *BaseLexer) GetTokenFactory() TokenFactory
- func (b *BaseLexer) GetTokenSourceCharStreamPair() *TokenSourceCharStreamPair
- func (b *BaseLexer) GetType() int
- func (b *BaseLexer) More()
- func (b *BaseLexer) NextToken() Token
- func (b *BaseLexer) PopMode() int
- func (b *BaseLexer) PushMode(m int)
- func (b *BaseLexer) Recover(re RecognitionException)
- func (b *BaseLexer) Reset()
- func (b *BaseLexer) SetChannel(v int)
- func (b *BaseLexer) SetInputStream(input CharStream)
- func (b *BaseLexer) SetMode(m int)
- func (b *BaseLexer) SetText(text string)
- func (b *BaseLexer) SetType(t int)
- func (b *BaseLexer) Skip()
- type BaseLexerAction
- type BaseParseTreeListener
- type BaseParseTreeVisitor
- type BaseParser
- func (p *BaseParser) AddParseListener(listener ParseTreeListener)
- func (p *BaseParser) Consume() Token
- func (p *BaseParser) DumpDFA()
- func (p *BaseParser) EnterOuterAlt(localctx ParserRuleContext, altNum int)
- func (p *BaseParser) EnterRecursionRule(localctx ParserRuleContext, state, _, precedence int)
- func (p *BaseParser) EnterRule(localctx ParserRuleContext, state, _ int)
- func (p *BaseParser) ExitRule()
- func (p *BaseParser) GetATN() *ATN
- func (p *BaseParser) GetATNWithBypassAlts()
- func (p *BaseParser) GetCurrentToken() Token
- func (p *BaseParser) GetDFAStrings() string
- func (p *BaseParser) GetErrorHandler() ErrorStrategy
- func (p *BaseParser) GetExpectedTokens() *IntervalSet
- func (p *BaseParser) GetExpectedTokensWithinCurrentRule() *IntervalSet
- func (p *BaseParser) GetInputStream() IntStream
- func (p *BaseParser) GetInterpreter() *ParserATNSimulator
- func (p *BaseParser) GetInvokingContext(ruleIndex int) ParserRuleContext
- func (p *BaseParser) GetParseListeners() []ParseTreeListener
- func (p *BaseParser) GetParserRuleContext() ParserRuleContext
- func (p *BaseParser) GetPrecedence() int
- func (p *BaseParser) GetRuleIndex(ruleName string) int
- func (p *BaseParser) GetRuleInvocationStack(c ParserRuleContext) []string
- func (p *BaseParser) GetSourceName() string
- func (p *BaseParser) GetTokenFactory() TokenFactory
- func (p *BaseParser) GetTokenStream() TokenStream
- func (p *BaseParser) IsExpectedToken(symbol int) bool
- func (p *BaseParser) Match(ttype int) Token
- func (p *BaseParser) MatchWildcard() Token
- func (p *BaseParser) NotifyErrorListeners(msg string, offendingToken Token, err RecognitionException)
- func (p *BaseParser) Precpred(_ RuleContext, precedence int) bool
- func (p *BaseParser) PushNewRecursionContext(localctx ParserRuleContext, state, _ int)
- func (p *BaseParser) RemoveParseListener(listener ParseTreeListener)
- func (p *BaseParser) SetErrorHandler(e ErrorStrategy)
- func (p *BaseParser) SetInputStream(input TokenStream)
- func (p *BaseParser) SetParserRuleContext(v ParserRuleContext)
- func (p *BaseParser) SetTokenStream(input TokenStream)
- func (p *BaseParser) SetTrace(trace *TraceListener)
- func (p *BaseParser) TriggerEnterRuleEvent()
- func (p *BaseParser) TriggerExitRuleEvent()
- func (p *BaseParser) UnrollRecursionContexts(parentCtx ParserRuleContext)
- type BaseParserRuleContext
- func (prc *BaseParserRuleContext) Accept(visitor ParseTreeVisitor) interface{}
- func (prc *BaseParserRuleContext) AddChild(child RuleContext) RuleContext
- func (prc *BaseParserRuleContext) AddErrorNode(badToken Token) *ErrorNodeImpl
- func (prc *BaseParserRuleContext) AddTokenNode(token Token) *TerminalNodeImpl
- func (prc *BaseParserRuleContext) CopyFrom(ctx *BaseParserRuleContext)
- func (prc *BaseParserRuleContext) EnterRule(_ ParseTreeListener)
- func (prc *BaseParserRuleContext) ExitRule(_ ParseTreeListener)
- func (prc *BaseParserRuleContext) GetAltNumber() int
- func (prc *BaseParserRuleContext) GetChild(i int) Tree
- func (prc *BaseParserRuleContext) GetChildCount() int
- func (prc *BaseParserRuleContext) GetChildOfType(i int, childType reflect.Type) RuleContext
- func (prc *BaseParserRuleContext) GetChildren() []Tree
- func (prc *BaseParserRuleContext) GetInvokingState() int
- func (prc *BaseParserRuleContext) GetParent() Tree
- func (prc *BaseParserRuleContext) GetPayload() interface{}
- func (prc *BaseParserRuleContext) GetRuleContext() RuleContext
- func (prc *BaseParserRuleContext) GetRuleIndex() int
- func (prc *BaseParserRuleContext) GetSourceInterval() Interval
- func (prc *BaseParserRuleContext) GetStart() Token
- func (prc *BaseParserRuleContext) GetStop() Token
- func (prc *BaseParserRuleContext) GetText() string
- func (prc *BaseParserRuleContext) GetToken(ttype int, i int) TerminalNode
- func (prc *BaseParserRuleContext) GetTokens(ttype int) []TerminalNode
- func (prc *BaseParserRuleContext) GetTypedRuleContext(ctxType reflect.Type, i int) RuleContext
- func (prc *BaseParserRuleContext) GetTypedRuleContexts(ctxType reflect.Type) []RuleContext
- func (prc *BaseParserRuleContext) IsEmpty() bool
- func (prc *BaseParserRuleContext) RemoveLastChild()
- func (prc *BaseParserRuleContext) SetAltNumber(_ int)
- func (prc *BaseParserRuleContext) SetException(e RecognitionException)
- func (prc *BaseParserRuleContext) SetInvokingState(t int)
- func (prc *BaseParserRuleContext) SetParent(v Tree)
- func (prc *BaseParserRuleContext) SetStart(t Token)
- func (prc *BaseParserRuleContext) SetStop(t Token)
- func (prc *BaseParserRuleContext) String(ruleNames []string, stop RuleContext) string
- func (prc *BaseParserRuleContext) ToStringTree(ruleNames []string, recog Recognizer) string
- type BaseRecognitionException
- type BaseRecognizer
- func (b *BaseRecognizer) Action(_ RuleContext, _, _ int)
- func (b *BaseRecognizer) AddErrorListener(listener ErrorListener)
- func (b *BaseRecognizer) GetError() RecognitionException
- func (b *BaseRecognizer) GetErrorHeader(e RecognitionException) string
- func (b *BaseRecognizer) GetErrorListenerDispatch() ErrorListener
- func (b *BaseRecognizer) GetLiteralNames() []string
- func (b *BaseRecognizer) GetRuleIndexMap() map[string]int
- func (b *BaseRecognizer) GetRuleNames() []string
- func (b *BaseRecognizer) GetState() int
- func (b *BaseRecognizer) GetSymbolicNames() []string
- func (b *BaseRecognizer) GetTokenErrorDisplay(t Token) string (deprecated)
- func (b *BaseRecognizer) GetTokenNames() []string
- func (b *BaseRecognizer) GetTokenType(_ string) int
- func (b *BaseRecognizer) HasError() bool
- func (b *BaseRecognizer) Precpred(_ RuleContext, _ int) bool
- func (b *BaseRecognizer) RemoveErrorListeners()
- func (b *BaseRecognizer) Sempred(_ RuleContext, _ int, _ int) bool
- func (b *BaseRecognizer) SetError(err RecognitionException)
- func (b *BaseRecognizer) SetState(v int)
- type BaseRewriteOperation
- func (op *BaseRewriteOperation) Execute(_ *bytes.Buffer) int
- func (op *BaseRewriteOperation) GetIndex() int
- func (op *BaseRewriteOperation) GetInstructionIndex() int
- func (op *BaseRewriteOperation) GetOpName() string
- func (op *BaseRewriteOperation) GetText() string
- func (op *BaseRewriteOperation) GetTokens() TokenStream
- func (op *BaseRewriteOperation) SetIndex(val int)
- func (op *BaseRewriteOperation) SetInstructionIndex(val int)
- func (op *BaseRewriteOperation) SetOpName(val string)
- func (op *BaseRewriteOperation) SetText(val string)
- func (op *BaseRewriteOperation) SetTokens(val TokenStream)
- func (op *BaseRewriteOperation) String() string
- type BaseToken
- func (b *BaseToken) GetChannel() int
- func (b *BaseToken) GetColumn() int
- func (b *BaseToken) GetInputStream() CharStream
- func (b *BaseToken) GetLine() int
- func (b *BaseToken) GetSource() *TokenSourceCharStreamPair
- func (b *BaseToken) GetStart() int
- func (b *BaseToken) GetStop() int
- func (b *BaseToken) GetText() string
- func (b *BaseToken) GetTokenIndex() int
- func (b *BaseToken) GetTokenSource() TokenSource
- func (b *BaseToken) GetTokenType() int
- func (b *BaseToken) SetText(text string)
- func (b *BaseToken) SetTokenIndex(v int)
- func (b *BaseToken) String() string
- type BaseTransition
- type BasicBlockStartState
- type BasicState
- type BitSet
- type BlockEndState
- type BlockStartState
- type CharStream
- type ClosureBusy
- type Collectable
- type CollectionDescriptor
- type CollectionSource
- type CommonToken
- type CommonTokenFactory
- type CommonTokenStream
- func (c *CommonTokenStream) Consume()
- func (c *CommonTokenStream) Fill()
- func (c *CommonTokenStream) Get(index int) Token
- func (c *CommonTokenStream) GetAllText() string
- func (c *CommonTokenStream) GetAllTokens() []Token
- func (c *CommonTokenStream) GetHiddenTokensToLeft(tokenIndex, channel int) []Token
- func (c *CommonTokenStream) GetHiddenTokensToRight(tokenIndex, channel int) []Token
- func (c *CommonTokenStream) GetSourceName() string
- func (c *CommonTokenStream) GetTextFromInterval(interval Interval) string
- func (c *CommonTokenStream) GetTextFromRuleContext(interval RuleContext) string
- func (c *CommonTokenStream) GetTextFromTokens(start, end Token) string
- func (c *CommonTokenStream) GetTokenSource() TokenSource
- func (c *CommonTokenStream) GetTokens(start int, stop int, types *IntervalSet) []Token
- func (c *CommonTokenStream) Index() int
- func (c *CommonTokenStream) LA(i int) int
- func (c *CommonTokenStream) LB(k int) Token
- func (c *CommonTokenStream) LT(k int) Token
- func (c *CommonTokenStream) Mark() int
- func (c *CommonTokenStream) NextTokenOnChannel(i, _ int) int
- func (c *CommonTokenStream) Release(_ int)
- func (c *CommonTokenStream) Reset()
- func (c *CommonTokenStream) Seek(index int)
- func (c *CommonTokenStream) SetTokenSource(tokenSource TokenSource)
- func (c *CommonTokenStream) Size() int
- func (c *CommonTokenStream) Sync(i int) bool
- type Comparator
- type ConsoleErrorListener
- type DFA
- type DFASerializer
- type DFAState
- type DecisionState
- type DefaultErrorListener
- func (d *DefaultErrorListener) ReportAmbiguity(_ Parser, _ *DFA, _, _ int, _ bool, _ *BitSet, _ *ATNConfigSet)
- func (d *DefaultErrorListener) ReportAttemptingFullContext(_ Parser, _ *DFA, _, _ int, _ *BitSet, _ *ATNConfigSet)
- func (d *DefaultErrorListener) ReportContextSensitivity(_ Parser, _ *DFA, _, _, _ int, _ *ATNConfigSet)
- func (d *DefaultErrorListener) SyntaxError(_ Recognizer, _ interface{}, _, _ int, _ string, _ RecognitionException)
- type DefaultErrorStrategy
- func (d *DefaultErrorStrategy) GetErrorRecoverySet(recognizer Parser) *IntervalSet
- func (d *DefaultErrorStrategy) GetExpectedTokens(recognizer Parser) *IntervalSet
- func (d *DefaultErrorStrategy) GetMissingSymbol(recognizer Parser) Token
- func (d *DefaultErrorStrategy) GetTokenErrorDisplay(t Token) string
- func (d *DefaultErrorStrategy) InErrorRecoveryMode(_ Parser) bool
- func (d *DefaultErrorStrategy) Recover(recognizer Parser, _ RecognitionException)
- func (d *DefaultErrorStrategy) RecoverInline(recognizer Parser) Token
- func (d *DefaultErrorStrategy) ReportError(recognizer Parser, e RecognitionException)
- func (d *DefaultErrorStrategy) ReportFailedPredicate(recognizer Parser, e *FailedPredicateException)
- func (d *DefaultErrorStrategy) ReportInputMisMatch(recognizer Parser, e *InputMisMatchException)
- func (d *DefaultErrorStrategy) ReportMatch(recognizer Parser)
- func (d *DefaultErrorStrategy) ReportMissingToken(recognizer Parser)
- func (d *DefaultErrorStrategy) ReportNoViableAlternative(recognizer Parser, e *NoViableAltException)
- func (d *DefaultErrorStrategy) ReportUnwantedToken(recognizer Parser)
- func (d *DefaultErrorStrategy) SingleTokenDeletion(recognizer Parser) Token
- func (d *DefaultErrorStrategy) SingleTokenInsertion(recognizer Parser) bool
- func (d *DefaultErrorStrategy) Sync(recognizer Parser)
- type DiagnosticErrorListener
- func (d *DiagnosticErrorListener) ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ...)
- func (d *DiagnosticErrorListener) ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, _ *BitSet, ...)
- func (d *DiagnosticErrorListener) ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, _ int, _ *ATNConfigSet)
- type EpsilonTransition
- type ErrorListener
- type ErrorNode
- type ErrorNodeImpl
- type ErrorStrategy
- type FailedPredicateException
- type FileStream
- type IATNSimulator
- type ILexerATNSimulator
- type InputMisMatchException
- type InputStream
- func (is *InputStream) Consume()
- func (*InputStream) GetSourceName() string
- func (is *InputStream) GetText(start int, stop int) string
- func (is *InputStream) GetTextFromInterval(i Interval) string
- func (is *InputStream) GetTextFromTokens(start, stop Token) string
- func (is *InputStream) Index() int
- func (is *InputStream) LA(offset int) int
- func (is *InputStream) LT(offset int) int
- func (is *InputStream) Mark() int
- func (is *InputStream) Release(_ int)
- func (is *InputStream) Seek(index int)
- func (is *InputStream) Size() int
- func (is *InputStream) String() string
- type InsertAfterOp
- type InsertBeforeOp
- type IntStack
- type IntStream
- type InterpreterRuleContext
- type Interval
- type IntervalSet
- type IterativeParseTreeWalker
- type JMap
- type JPCEntry
- type JPCMap
- type JPCMap2
- type JStatRec
- type JStore
- func (s *JStore[T, C]) Contains(key T) bool
- func (s *JStore[T, C]) Each(f func(T) bool)
- func (s *JStore[T, C]) Get(key T) (T, bool)
- func (s *JStore[T, C]) Len() int
- func (s *JStore[T, C]) Put(value T) (v T, exists bool)
- func (s *JStore[T, C]) SortedSlice(less func(i, j T) bool) []T
- func (s *JStore[T, C]) Values() []T
- type LL1Analyzer
- type Lexer
- type LexerATNSimulator
- func (l *LexerATNSimulator) Consume(input CharStream)
- func (l *LexerATNSimulator) GetCharPositionInLine() int
- func (l *LexerATNSimulator) GetLine() int
- func (l *LexerATNSimulator) GetText(input CharStream) string
- func (l *LexerATNSimulator) GetTokenName(tt int) string
- func (l *LexerATNSimulator) Match(input CharStream, mode int) int
- func (l *LexerATNSimulator) MatchATN(input CharStream) int
- type LexerAction
- type LexerActionExecutor
- type LexerChannelAction
- type LexerCustomAction
- type LexerDFASerializer
- type LexerIndexedCustomAction
- type LexerModeAction
- type LexerMoreAction
- type LexerNoViableAltException
- type LexerPopModeAction
- type LexerPushModeAction
- type LexerSkipAction
- type LexerTypeAction
- type LoopEndState
- type Mutex
- type NoViableAltException
- type NotSetTransition
- type OR
- type ObjEqComparator
- type ParseCancellationException
- type ParseTree
- type ParseTreeListener
- type ParseTreeVisitor
- type ParseTreeWalker
- type Parser
- type ParserATNSimulator
- func (p *ParserATNSimulator) AdaptivePredict(parser *BaseParser, input TokenStream, decision int, ...) int
- func (p *ParserATNSimulator) GetAltThatFinishedDecisionEntryRule(configs *ATNConfigSet) int
- func (p *ParserATNSimulator) GetPredictionMode() int
- func (p *ParserATNSimulator) GetTokenName(t int) string
- func (p *ParserATNSimulator) ReportAmbiguity(dfa *DFA, _ *DFAState, startIndex, stopIndex int, exact bool, ...)
- func (p *ParserATNSimulator) ReportAttemptingFullContext(dfa *DFA, conflictingAlts *BitSet, configs *ATNConfigSet, ...)
- func (p *ParserATNSimulator) ReportContextSensitivity(dfa *DFA, prediction int, configs *ATNConfigSet, startIndex, stopIndex int)
- func (p *ParserATNSimulator) SetPredictionMode(v int)
- type ParserRuleContext
- type PlusBlockStartState
- type PlusLoopbackState
- type PrecedencePredicate
- type PrecedencePredicateTransition
- type PredPrediction
- type Predicate
- type PredicateTransition
- type PredictionContext
- func NewArrayPredictionContext(parents []*PredictionContext, returnStates []int) *PredictionContext
- func NewBaseSingletonPredictionContext(parent *PredictionContext, returnState int) *PredictionContext
- func NewEmptyPredictionContext() *PredictionContext
- func SingletonBasePredictionContextCreate(parent *PredictionContext, returnState int) *PredictionContext
- func (p *PredictionContext) ArrayEquals(o Collectable[*PredictionContext]) bool
- func (p *PredictionContext) Equals(other Collectable[*PredictionContext]) bool
- func (p *PredictionContext) GetParent(i int) *PredictionContext
- func (p *PredictionContext) GetReturnStates() []int
- func (p *PredictionContext) Hash() int
- func (p *PredictionContext) SingletonEquals(other Collectable[*PredictionContext]) bool
- func (p *PredictionContext) String() string
- func (p *PredictionContext) Type() int
- type PredictionContextCache
- type ProxyErrorListener
- func (p *ProxyErrorListener) ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ...)
- func (p *ProxyErrorListener) ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, ...)
- func (p *ProxyErrorListener) ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, prediction int, ...)
- func (p *ProxyErrorListener) SyntaxError(recognizer Recognizer, offendingSymbol interface{}, line, column int, ...)
- type RWMutex
- type RangeTransition
- type RecognitionException
- type Recognizer
- type ReplaceOp
- type RewriteOperation
- type RuleContext
- type RuleNode
- type RuleStartState
- type RuleStopState
- type RuleTransition
- type SemCComparator
- type SemanticContext
- type SetTransition
- type SimState
- type StarBlockStartState
- type StarLoopEntryState
- type StarLoopbackState
- type SyntaxTree
- type TerminalNode
- type TerminalNodeImpl
- func (t *TerminalNodeImpl) Accept(v ParseTreeVisitor) interface{}
- func (t *TerminalNodeImpl) GetChild(_ int) Tree
- func (t *TerminalNodeImpl) GetChildCount() int
- func (t *TerminalNodeImpl) GetChildren() []Tree
- func (t *TerminalNodeImpl) GetParent() Tree
- func (t *TerminalNodeImpl) GetPayload() interface{}
- func (t *TerminalNodeImpl) GetSourceInterval() Interval
- func (t *TerminalNodeImpl) GetSymbol() Token
- func (t *TerminalNodeImpl) GetText() string
- func (t *TerminalNodeImpl) SetChildren(_ []Tree)
- func (t *TerminalNodeImpl) SetParent(tree Tree)
- func (t *TerminalNodeImpl) String() string
- func (t *TerminalNodeImpl) ToStringTree(_ []string, _ Recognizer) string
- type Token
- type TokenFactory
- type TokenSource
- type TokenSourceCharStreamPair
- type TokenStream
- type TokenStreamRewriter
- func (tsr *TokenStreamRewriter) AddToProgram(name string, op RewriteOperation)
- func (tsr *TokenStreamRewriter) Delete(programName string, from, to int)
- func (tsr *TokenStreamRewriter) DeleteDefault(from, to int)
- func (tsr *TokenStreamRewriter) DeleteDefaultPos(index int)
- func (tsr *TokenStreamRewriter) DeleteProgram(programName string)
- func (tsr *TokenStreamRewriter) DeleteProgramDefault()
- func (tsr *TokenStreamRewriter) DeleteToken(programName string, from, to Token)
- func (tsr *TokenStreamRewriter) DeleteTokenDefault(from, to Token)
- func (tsr *TokenStreamRewriter) GetLastRewriteTokenIndex(programName string) int
- func (tsr *TokenStreamRewriter) GetLastRewriteTokenIndexDefault() int
- func (tsr *TokenStreamRewriter) GetProgram(name string) []RewriteOperation
- func (tsr *TokenStreamRewriter) GetText(programName string, interval Interval) string
- func (tsr *TokenStreamRewriter) GetTextDefault() string
- func (tsr *TokenStreamRewriter) GetTokenStream() TokenStream
- func (tsr *TokenStreamRewriter) InitializeProgram(name string) []RewriteOperation
- func (tsr *TokenStreamRewriter) InsertAfter(programName string, index int, text string)
- func (tsr *TokenStreamRewriter) InsertAfterDefault(index int, text string)
- func (tsr *TokenStreamRewriter) InsertAfterToken(programName string, token Token, text string)
- func (tsr *TokenStreamRewriter) InsertBefore(programName string, index int, text string)
- func (tsr *TokenStreamRewriter) InsertBeforeDefault(index int, text string)
- func (tsr *TokenStreamRewriter) InsertBeforeToken(programName string, token Token, text string)
- func (tsr *TokenStreamRewriter) Replace(programName string, from, to int, text string)
- func (tsr *TokenStreamRewriter) ReplaceDefault(from, to int, text string)
- func (tsr *TokenStreamRewriter) ReplaceDefaultPos(index int, text string)
- func (tsr *TokenStreamRewriter) ReplaceToken(programName string, from, to Token, text string)
- func (tsr *TokenStreamRewriter) ReplaceTokenDefault(from, to Token, text string)
- func (tsr *TokenStreamRewriter) ReplaceTokenDefaultPos(index Token, text string)
- func (tsr *TokenStreamRewriter) Rollback(programName string, instructionIndex int)
- func (tsr *TokenStreamRewriter) RollbackDefault(instructionIndex int)
- func (tsr *TokenStreamRewriter) SetLastRewriteTokenIndex(programName string, i int)
- type TokensStartState
- type TraceListener
- type Transition
- type Tree
- type VisitEntry
- type VisitList
- type VisitRecord
- type WildcardTransition
Constants ¶
const (
	ATNStateInvalidType    = 0
	ATNStateBasic          = 1
	ATNStateRuleStart      = 2
	ATNStateBlockStart     = 3
	ATNStatePlusBlockStart = 4
	ATNStateStarBlockStart = 5
	ATNStateTokenStart     = 6
	ATNStateRuleStop       = 7
	ATNStateBlockEnd       = 8
	ATNStateStarLoopBack   = 9
	ATNStateStarLoopEntry  = 10
	ATNStatePlusLoopBack   = 11
	ATNStateLoopEnd        = 12

	ATNStateInvalidStateNumber = -1
)
Constants for serialization.
const (
	ATNTypeLexer  = 0
	ATNTypeParser = 1
)
Represents the type of recognizer an ATN applies to.
const (
	LexerDefaultMode = 0
	LexerMore        = -2
	LexerSkip        = -3
)
const (
	LexerDefaultTokenChannel = TokenDefaultChannel
	LexerHidden              = TokenHiddenChannel
	LexerMinCharValue        = 0x0000
	LexerMaxCharValue        = 0x10FFFF
)
const (
	// LexerActionTypeChannel represents a [LexerChannelAction] action.
	LexerActionTypeChannel = 0

	// LexerActionTypeCustom represents a [LexerCustomAction] action.
	LexerActionTypeCustom = 1

	// LexerActionTypeMode represents a [LexerModeAction] action.
	LexerActionTypeMode = 2

	// LexerActionTypeMore represents a [LexerMoreAction] action.
	LexerActionTypeMore = 3

	// LexerActionTypePopMode represents a [LexerPopModeAction] action.
	LexerActionTypePopMode = 4

	// LexerActionTypePushMode represents a [LexerPushModeAction] action.
	LexerActionTypePushMode = 5

	// LexerActionTypeSkip represents a [LexerSkipAction] action.
	LexerActionTypeSkip = 6

	// LexerActionTypeType represents a [LexerTypeAction] action.
	LexerActionTypeType = 7
)
const (
	PredictionContextEmpty = iota
	PredictionContextSingleton
	PredictionContextArray
)
const (
	// PredictionModeSLL represents the SLL(*) prediction mode.
	// This prediction mode ignores the current parser context when making
	// predictions. This is the fastest prediction mode, and it provides correct
	// results for many grammars. This prediction mode is more powerful than the
	// prediction mode provided by ANTLR 3, but may result in syntax errors for
	// grammar and input combinations which are not SLL.
	//
	// When using this prediction mode, the parser will either return a correct
	// parse tree (i.e. the same parse tree that would be returned with the
	// [PredictionModeLL] prediction mode), or it will report a syntax error. If a
	// syntax error is encountered when using the SLL prediction mode,
	// it may be due to either an actual syntax error in the input or it may indicate
	// that the particular combination of grammar and input requires the more
	// powerful LL prediction abilities to complete successfully.
	//
	// This prediction mode does not provide any guarantees for prediction
	// behavior for syntactically-incorrect inputs.
	PredictionModeSLL = 0

	// PredictionModeLL represents the LL(*) prediction mode.
	// This prediction mode allows the current parser context to be used for
	// resolving SLL conflicts that occur during prediction. This is the fastest
	// prediction mode that guarantees correct parse results for all combinations
	// of grammars with syntactically correct inputs.
	//
	// When using this prediction mode, the parser will make correct decisions
	// for all syntactically-correct grammar and input combinations. However, in
	// cases where the grammar is truly ambiguous this prediction mode might not
	// report a precise answer for exactly which alternatives are ambiguous.
	//
	// This prediction mode does not provide any guarantees for prediction
	// behavior for syntactically-incorrect inputs.
	PredictionModeLL = 1

	// PredictionModeLLExactAmbigDetection represents the LL(*) prediction mode
	// with exact ambiguity detection.
	//
	// In addition to the correctness guarantees provided by the [PredictionModeLL]
	// prediction mode, this prediction mode instructs the prediction algorithm to
	// determine the complete and exact set of ambiguous alternatives for every
	// ambiguous decision encountered while parsing.
	//
	// This prediction mode may be used for diagnosing ambiguities during
	// grammar development. Due to the performance overhead of calculating sets
	// of ambiguous alternatives, this prediction mode should be avoided when
	// the exact results are not necessary.
	//
	// This prediction mode does not provide any guarantees for prediction
	// behavior for syntactically-incorrect inputs.
	PredictionModeLLExactAmbigDetection = 2
)
const (
	TokenInvalidType = 0

	// TokenEpsilon - during lookahead operations, this "token" signifies we hit the rule end [ATN] state
	// and did not follow it despite needing to.
	TokenEpsilon = -2

	TokenMinUserTokenType = 1
	TokenEOF              = -1

	// TokenDefaultChannel is the default channel upon which tokens are sent to the parser.
	//
	// All tokens go to the parser (unless [Skip] is called in the lexer rule)
	// on a particular "channel". The parser tunes to a particular channel
	// so that whitespace etc... can go to the parser on a "hidden" channel.
	TokenDefaultChannel = 0

	// TokenHiddenChannel defines the normal hidden channel - the parser will not see tokens that are not on [TokenDefaultChannel].
	//
	// Anything on a channel other than TokenDefaultChannel is not parsed by the parser.
	TokenHiddenChannel = 1
)
const (
	DefaultProgramName = "default"
	ProgramInitSize    = 100
	MinTokenIndex      = 0
)
const (
	TransitionEPSILON    = 1
	TransitionRANGE      = 2
	TransitionRULE       = 3
	TransitionPREDICATE  = 4 // e.g., {isType(input.LT(1))}?
	TransitionATOM       = 5
	TransitionACTION     = 6
	TransitionSET        = 7 // ~(A|B) or ~atom, wildcard, which convert to next 2
	TransitionNOTSET     = 8
	TransitionWILDCARD   = 9
	TransitionPRECEDENCE = 10
)
const (
	// BasePredictionContextEmptyReturnState represents $ in an array in full context mode - $
	// doesn't mean wildcard:
	//
	//	$ + x = [$,x]
	//
	// Here, $ = EmptyReturnState
	BasePredictionContextEmptyReturnState = 0x7FFFFFFF
)
const (
	// LL1AnalyzerHitPred is a special value added to the lookahead sets to indicate that we hit
	// a predicate during analysis if seeThruPreds==false
	LL1AnalyzerHitPred = TokenInvalidType
)
Variables ¶
var (
	LexerATNSimulatorMinDFAEdge = 0
	LexerATNSimulatorMaxDFAEdge = 127 // forces unicode to stay in ATN

	LexerATNSimulatorMatchCalls = 0
)
var (
	BasePredictionContextglobalNodeCount = 1
	BasePredictionContextid              = BasePredictionContextglobalNodeCount
)
TODO: JI These are meant to be atomics - this does not seem to match the Java runtime here
var ATNInvalidAltNumber int
ATNInvalidAltNumber is used to represent an ALT number that has yet to be calculated or which is invalid for a particular struct such as *antlr.BaseRuleContext
var ATNSimulatorError = NewDFAState(0x7FFFFFFF, NewATNConfigSet(false))
var ATNStateInitialNumTransitions = 4
var BasePredictionContextEMPTY = &PredictionContext{
	cachedHash:  calculateEmptyHash(),
	pcType:      PredictionContextEmpty,
	returnState: BasePredictionContextEmptyReturnState,
}
var CollectionDescriptors = map[CollectionSource]CollectionDescriptor{
	UnknownCollection: {
		SybolicName: "UnknownCollection",
		Description: "Unknown collection type. Only used if the target author thought it was an unimportant collection.",
	},
	ATNConfigCollection: {
		SybolicName: "ATNConfigCollection",
		Description: "ATNConfig collection. Used to store the ATNConfigs for a particular state in the ATN." +
			"For instance, it is used to store the results of the closure() operation in the ATN.",
	},
	ATNConfigLookupCollection: {
		SybolicName: "ATNConfigLookupCollection",
		Description: "ATNConfigLookup collection. Used to store the ATNConfigs for a particular state in the ATN." +
			"This is used to prevent duplicating equivalent states in an ATNConfigurationSet.",
	},
	ATNStateCollection: {
		SybolicName: "ATNStateCollection",
		Description: "ATNState collection. This is used to store the states of the ATN.",
	},
	DFAStateCollection: {
		SybolicName: "DFAStateCollection",
		Description: "DFAState collection. This is used to store the states of the DFA.",
	},
	PredictionContextCollection: {
		SybolicName: "PredictionContextCollection",
		Description: "PredictionContext collection. This is used to store the prediction contexts of the ATN and cache computes.",
	},
	SemanticContextCollection: {
		SybolicName: "SemanticContextCollection",
		Description: "SemanticContext collection. This is used to store the semantic contexts of the ATN.",
	},
	ClosureBusyCollection: {
		SybolicName: "ClosureBusyCollection",
		Description: "ClosureBusy collection. This is used to check and prevent infinite recursion right recursive rules." +
			"It stores ATNConfigs that are currently being processed in the closure() operation.",
	},
	PredictionVisitedCollection: {
		SybolicName: "PredictionVisitedCollection",
		Description: "A map that records whether we have visited a particular context when searching through cached entries.",
	},
	MergeCacheCollection: {
		SybolicName: "MergeCacheCollection",
		Description: "A map that records whether we have already merged two particular contexts and can save effort by not repeating it.",
	},
	PredictionContextCacheCollection: {
		SybolicName: "PredictionContextCacheCollection",
		Description: "A map that records whether we have already created a particular context and can save effort by not computing it again.",
	},
	AltSetCollection: {
		SybolicName: "AltSetCollection",
		Description: "Used to eliminate duplicate alternatives in an ATN config set.",
	},
	ReachSetCollection: {
		SybolicName: "ReachSetCollection",
		Description: "Used as merge cache to prevent us needing to compute the merge of two states if we have already done it.",
	},
}
var CommonTokenFactoryDEFAULT = NewCommonTokenFactory(false)
CommonTokenFactoryDEFAULT is the default CommonTokenFactory. It does not explicitly copy token text when constructing tokens.
var ConsoleErrorListenerINSTANCE = NewConsoleErrorListener()
ConsoleErrorListenerINSTANCE provides a default instance of ConsoleErrorListener.
var ErrEmptyStack = errors.New("stack is empty")
var LexerMoreActionINSTANCE = NewLexerMoreAction()
var LexerPopModeActionINSTANCE = NewLexerPopModeAction()
var LexerSkipActionINSTANCE = NewLexerSkipAction()
LexerSkipActionINSTANCE provides a singleton instance of this parameterless lexer action.
var ParseTreeWalkerDefault = NewParseTreeWalker()
var ParserRuleContextEmpty = NewBaseParserRuleContext(nil, -1)
var SemanticContextNone = NewPredicate(-1, -1, false)
var Statistics = &goRunStats{}
var TransitionserializationNames = []string{
"INVALID",
"EPSILON",
"RANGE",
"RULE",
"PREDICATE",
"ATOM",
"ACTION",
"SET",
"NOT_SET",
"WILDCARD",
"PRECEDENCE",
}
var TreeInvalidInterval = NewInterval(-1, -2)
Functions ¶
func ConfigureRuntime ¶ added in v4.13.0
func ConfigureRuntime(options ...runtimeOption) error
ConfigureRuntime allows the runtime to be configured globally, setting things like trace and statistics options. It uses the functional options pattern for Go. This is a package-global function as it operates on the runtime configuration regardless of the instantiation of anything higher up, such as a parser or lexer. Generally this is used for debugging/tracing/statistics options, which are usually used by the runtime maintainers (or rather the only maintainer). However, it is possible that you might want to use this to set a global option concerning the memory allocation type used by the runtime, such as sync.Pool or not.
The options are applied in the order they are passed in, so the last option will override any previous options.
For example, if you want to turn on the collection create-point stack flag, you can do:
antlr.ConfigureRuntime(antlr.WithStatsTraceStacks(true))
If you want to turn it off, you can do:
antlr.ConfigureRuntime(antlr.WithStatsTraceStacks(false))
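The functional options pattern used by ConfigureRuntime is easy to reproduce. The following is a minimal, self-contained sketch of the pattern (the config struct and option names here are stand-ins for illustration, not the runtime's real internals), showing why the last option wins when options repeat:

```go
package main

import "fmt"

// runtimeConfig is a hypothetical stand-in for the runtime's private
// configuration struct; the real field names are not part of the public API.
type runtimeConfig struct {
	statsTraceStacks bool
	memoryManager    bool
}

// option mirrors the shape of the runtime's functional options.
type option func(*runtimeConfig) error

func withStatsTraceStacks(trace bool) option {
	return func(c *runtimeConfig) error {
		c.statsTraceStacks = trace
		return nil
	}
}

func withMemoryManager(use bool) option {
	return func(c *runtimeConfig) error {
		c.memoryManager = use
		return nil
	}
}

// configure applies options in order, so a later option overrides an earlier one.
func configure(opts ...option) (*runtimeConfig, error) {
	c := &runtimeConfig{}
	for _, opt := range opts {
		if err := opt(c); err != nil {
			return nil, err
		}
	}
	return c, nil
}

func main() {
	// The second withStatsTraceStacks overrides the first.
	c, _ := configure(withStatsTraceStacks(true), withStatsTraceStacks(false), withMemoryManager(true))
	fmt.Println(c.statsTraceStacks, c.memoryManager) // false true
}
```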
func EscapeWhitespace ¶
func InitBaseParserRuleContext ¶ added in v4.13.0
func InitBaseParserRuleContext(prc *BaseParserRuleContext, parent ParserRuleContext, invokingStateNumber int)
func PredictionModeallConfigsInRuleStopStates ¶
func PredictionModeallConfigsInRuleStopStates(configs *ATNConfigSet) bool
PredictionModeallConfigsInRuleStopStates checks if all configurations in configs are in a RuleStopState. Configurations meeting this condition have reached the end of the decision rule (local context) or end of start rule (full context).
The func returns true if all configurations in configs are in a RuleStopState.
func PredictionModeallSubsetsConflict ¶
PredictionModeallSubsetsConflict determines if every alternative subset in altsets contains more than one alternative.
The func returns true if every BitSet in altsets has a cardinality greater than 1.
func PredictionModeallSubsetsEqual ¶
PredictionModeallSubsetsEqual determines if every alternative subset in altsets is equivalent.
The func returns true if every member of altsets is equal to the others.
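As a rough illustration of these two checks (using a simple map-based set as a stand-in for the runtime's BitSet), the logic amounts to:

```go
package main

import "fmt"

// altSet is a simplified stand-in for the runtime's BitSet of alternatives.
type altSet map[int]bool

// allSubsetsConflict reports whether every alternative subset contains more
// than one alternative.
func allSubsetsConflict(altsets []altSet) bool {
	for _, s := range altsets {
		if len(s) <= 1 {
			return false
		}
	}
	return true
}

// allSubsetsEqual reports whether every alternative subset is the same set.
func allSubsetsEqual(altsets []altSet) bool {
	if len(altsets) == 0 {
		return true
	}
	first := altsets[0]
	for _, s := range altsets[1:] {
		if len(s) != len(first) {
			return false
		}
		for alt := range s {
			if !first[alt] {
				return false
			}
		}
	}
	return true
}

func main() {
	conflicting := []altSet{{1: true, 2: true}, {1: true, 2: true}}
	mixed := []altSet{{1: true, 2: true}, {3: true}}
	fmt.Println(allSubsetsConflict(conflicting), allSubsetsEqual(conflicting)) // true true
	fmt.Println(allSubsetsConflict(mixed), allSubsetsEqual(mixed))             // false false
}
```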
func PredictionModegetSingleViableAlt ¶
PredictionModegetSingleViableAlt gets the single alternative predicted by all alternative subsets in altsets if there is one.
TODO: JI - Review this code - it does not seem to do the same thing as the Java code - maybe because BitSet is not like the Java utils BitSet
func PredictionModegetUniqueAlt ¶
PredictionModegetUniqueAlt returns the unique alternative predicted by all alternative subsets in altsets. If no such alternative exists, this method returns ATNInvalidAltNumber.
altsets is a collection of alternative subsets.
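The reduction is simply a union of the subsets: if the union is a singleton, that alternative is returned; otherwise the result is ATNInvalidAltNumber (which, per the variable declared above, is the zero value). A map-based sketch, with atnInvalidAltNumber standing in for the package variable:

```go
package main

import "fmt"

// atnInvalidAltNumber is a stand-in for antlr.ATNInvalidAltNumber, which is
// declared as an uninitialized int and therefore zero.
const atnInvalidAltNumber = 0

// getUniqueAlt returns the single alternative predicted by all subsets, or
// atnInvalidAltNumber if the union contains zero or more than one alternative.
func getUniqueAlt(altsets []map[int]bool) int {
	union := map[int]bool{}
	for _, s := range altsets {
		for alt := range s {
			union[alt] = true
		}
	}
	if len(union) == 1 {
		for alt := range union {
			return alt
		}
	}
	return atnInvalidAltNumber
}

func main() {
	fmt.Println(getUniqueAlt([]map[int]bool{{2: true}, {2: true}})) // 2
	fmt.Println(getUniqueAlt([]map[int]bool{{1: true}, {2: true}})) // 0
}
```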
func PredictionModehasConfigInRuleStopState ¶
func PredictionModehasConfigInRuleStopState(configs *ATNConfigSet) bool
PredictionModehasConfigInRuleStopState checks if any configuration in the given configs is in a RuleStopState. Configurations meeting this condition have reached the end of the decision rule (local context) or end of start rule (full context).
The func returns true if any configuration in the supplied configs is in a RuleStopState
func PredictionModehasConflictingAltSet ¶
PredictionModehasConflictingAltSet determines if any single alternative subset in altsets contains more than one alternative.
The func returns true if altsets contains a BitSet with a cardinality greater than 1, otherwise false.
func PredictionModehasNonConflictingAltSet ¶
PredictionModehasNonConflictingAltSet determines if any single alternative subset in altsets contains exactly one alternative.
The func returns true if altsets contains at least one BitSet with a cardinality of exactly 1.
func PredictionModehasSLLConflictTerminatingPrediction ¶
func PredictionModehasSLLConflictTerminatingPrediction(mode int, configs *ATNConfigSet) bool
PredictionModehasSLLConflictTerminatingPrediction computes the SLL prediction termination condition.
This method computes the SLL prediction termination condition for both of the following cases:
- The usual SLL+LL fallback upon SLL conflict
- Pure SLL without LL fallback
Combined SLL+LL Parsing ¶
When LL-fallback is enabled upon SLL conflict, correct predictions are ensured regardless of how the termination condition is computed by this method. Due to the substantially higher cost of LL prediction, the prediction should only fall back to LL when the additional lookahead cannot lead to a unique SLL prediction.
Assuming combined SLL+LL parsing, an SLL configuration set with only conflicting subsets should fall back to full LL, even if the configuration sets don't resolve to the same alternative, e.g.
{1,2} and {3,4}
If there is at least one non-conflicting configuration, SLL could continue with the hopes that more lookahead will resolve via one of those non-conflicting configurations.
Here's the prediction termination rule then: SLL (for SLL+LL parsing) stops when it sees only conflicting configuration subsets. In contrast, full LL keeps going when there is uncertainty.
Heuristic ¶
As a heuristic, we stop prediction when we see any conflicting subset unless we see a state that only has one alternative associated with it. The single-alt-state thing lets prediction continue upon rules like (otherwise, it would admit defeat too soon):
[12|1|[], 6|2|[], 12|2|[]]. s : (ID | ID ID?) ;
When the ATN simulation reaches the state before ';', it has a DFA state that looks like:
[12|1|[], 6|2|[], 12|2|[]]
Naturally
12|1|[] and 12|2|[]
conflict, but we cannot stop processing this node because alternative two has another way to continue, via
[6|2|[]]
It also lets us continue for this rule:
[1|1|[], 1|2|[], 8|3|[]] a : A | A | A B ;
After matching input A, we reach the stop state for rule A, state 1. State 8 is the state immediately before B. Clearly alternatives 1 and 2 conflict and no amount of further lookahead will separate the two. However, alternative 3 will be able to continue, and so we do not stop working on this state. In the previous example, we're concerned with states associated with the conflicting alternatives. Here alt 3 is not associated with the conflicting configs, but since we can continue looking for input reasonably, we don't declare the state done.
Pure SLL Parsing ¶
To handle pure SLL parsing, all we have to do is make sure that we combine stack contexts for configurations that differ only by semantic predicate. From there, we can do the usual SLL termination heuristic.
Predicates in SLL+LL Parsing ¶
SLL decisions don't evaluate predicates until after they reach DFA stop states because they need to create the DFA cache that works in all semantic situations. In contrast, full LL evaluates predicates collected during start state computation, so it can ignore predicates thereafter. This means that SLL termination detection can totally ignore semantic predicates.
Implementation-wise, ATNConfigSet combines stack contexts but not semantic predicate contexts, so we might see two configurations like the following:
(s, 1, x, {}), (s, 1, x', {p})
Before testing these configurations against others, we have to merge x and x' (without modifying the existing configurations). For example, we test (x+x')==x'' when looking for conflicts in the following configurations:
(s, 1, x, {}), (s, 1, x', {p}), (s, 2, x'', {})
If the configuration set has predicates (as indicated by [ATNConfigSet.hasSemanticContext]), this algorithm makes a copy of the configurations to strip out all the predicates so that a standard ATNConfigSet will merge everything ignoring predicates.
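That predicate-stripping step can be sketched as follows, with a simplified value-type tuple standing in for ATNConfig (the real configs carry graph-structured contexts, represented here as plain strings):

```go
package main

import "fmt"

// config is a simplified (state, alt, context, predicate) tuple.
type config struct {
	state, alt int
	ctx        string // stand-in for a merged stack context
	pred       string // semantic predicate; "" when none
}

// stripPredicates copies the set with predicates removed so that configs
// differing only by semantic predicate collapse to a single entry, letting a
// standard config-set comparison ignore predicates.
func stripPredicates(configs []config) []config {
	seen := map[config]bool{}
	var out []config
	for _, c := range configs {
		c.pred = "" // discard the semantic context
		if !seen[c] {
			seen[c] = true
			out = append(out, c)
		}
	}
	return out
}

func main() {
	in := []config{
		{state: 1, alt: 1, ctx: "x", pred: ""},
		{state: 1, alt: 1, ctx: "x", pred: "p"}, // differs only by predicate
		{state: 1, alt: 2, ctx: "x", pred: ""},
	}
	fmt.Println(len(stripPredicates(in))) // 2
}
```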
func PredictionModehasStateAssociatedWithOneAlt ¶
func PredictionModehasStateAssociatedWithOneAlt(configs *ATNConfigSet) bool
func PredictionModeresolvesToJustOneViableAlt ¶
PredictionModeresolvesToJustOneViableAlt checks full LL prediction termination.
Can we stop looking ahead during ATN simulation or is there some uncertainty as to which alternative we will ultimately pick, after consuming more input? Even if there are partial conflicts, we might know that everything is going to resolve to the same minimum alternative. That means we can stop since no more lookahead will change that fact. On the other hand, there might be multiple conflicts that resolve to different minimums. That means we need more lookahead to decide which of those alternatives we should predict.
The basic idea is to split the set of configurations 'C', into conflicting subsets (s, _, ctx, _) and singleton subsets with non-conflicting configurations. Two configurations conflict if they have identical ATNConfig.state and ATNConfig.context values but a different ATNConfig.alt value, e.g.
(s, i, ctx, _)
and
(s, j, ctx, _) ; for i != j
Reduce these configuration subsets to the set of possible alternatives. You can compute the alternative subsets in one pass as follows:
A_s,ctx = {i | (s, i, ctx, _)}
for each configuration in C holding s and ctx fixed.
Or in pseudo-code:
for each configuration c in C:
  map[c] U= c.alt  // map hash/equals uses s and x, not alt and not pred
The values in map are the set of
A_s,ctx
sets.
If
|A_s,ctx| = 1
then there is no conflict associated with s and ctx.
Reduce the subsets to singletons by choosing a minimum of each subset. If the union of these alternative subsets is a singleton, then no amount of further lookahead will help us. We will always pick that alternative. If, however, there is more than one alternative, then we are uncertain which alternative to predict and must continue looking for resolution. We may or may not discover an ambiguity in the future, even if there are no conflicting subsets this round.
The biggest sin is to terminate early because it means we've made a decision but were uncertain as to the eventual outcome. We haven't used enough lookahead. On the other hand, announcing a conflict too late is no big deal; you will still have the conflict. It's just inefficient. It might even look until the end of file.
No special consideration for semantic predicates is required because predicates are evaluated on-the-fly for full LL prediction, ensuring that no configuration contains a semantic context during the termination check.
Conflicting Configs ¶
Two configurations:
(s, i, x) and (s, j, x')
conflict when i != j but x = x'. Because we merge all (s, i, _) configurations together, that means that there are at most n configurations associated with state s for n possible alternatives in the decision. The merged stacks complicate the comparison of configuration contexts x and x'.
Sam checks to see if one is a subset of the other by calling merge and checking to see if the merged result is either x or x'. If the x associated with lowest alternative i is the superset, then i is the only possible prediction since the others resolve to min(i) as well. However, if x is associated with j > i then at least one stack configuration for j is not in conflict with alternative i. The algorithm should keep going, looking for more lookahead due to the uncertainty.
For simplicity, I'm doing an equality check between x and x', which lets the algorithm continue to consume lookahead longer than necessary. The reason I like the equality is of course the simplicity but also because that is the test you need to detect the alternatives that are actually in conflict.
Continue/Stop Rule ¶
Continue if the union of resolved alternative sets from non-conflicting and conflicting alternative subsets has more than one alternative. We are uncertain about which alternative to predict.
The complete set of alternatives,
[i for (_, i, _)]
tells us which alternatives are still in the running for the amount of input we've consumed at this point. The conflicting sets let us strip away configurations that won't lead to more states because we resolve conflicts to the configuration with a minimum alternative for the conflicting set.
Cases
- no conflicts and more than 1 alternative in set => continue
- (s, 1, x), (s, 2, x), (s, 3, z), (s', 1, y), (s', 2, y) yields non-conflicting set {3} ∪ conflicting sets min({1,2}) ∪ min({1,2}) = {1,3} => continue
- (s, 1, x), (s, 2, x), (s', 1, y), (s', 2, y), (s'', 1, z) yields non-conflicting set {1} ∪ conflicting sets min({1,2}) ∪ min({1,2}) = {1} => stop and predict 1
- (s, 1, x), (s, 2, x), (s', 1, y), (s', 2, y) yields conflicting, reduced sets {1} ∪ {1} = {1} => stop and predict 1, can announce ambiguity {1,2}
- (s, 1, x), (s, 2, x), (s', 2, y), (s', 3, y) yields conflicting, reduced sets {1} ∪ {2} = {1,2} => continue
- (s, 1, x), (s, 2, x), (s', 3, y), (s', 4, y) yields conflicting, reduced sets {1} ∪ {3} = {1,3} => continue
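The cases above can be reproduced with a small sketch that groups configurations by (state, context), reduces each group to its minimum alternative, and unions the results. The types are deliberately simplified (the real algorithm compares merged PredictionContext graphs, not strings):

```go
package main

import "fmt"

// cfg is a simplified (state, alt, context) tuple; the real ATNConfig context
// is a graph-structured stack, represented here as a plain string.
type cfg struct {
	state int
	alt   int
	ctx   string
}

type subsetKey struct {
	state int
	ctx   string
}

// viableAlts groups configurations by (state, ctx), reduces each subset to
// its minimum alternative, and returns the union of those minimums.
// Prediction can stop when the returned union is a singleton.
func viableAlts(configs []cfg) map[int]bool {
	mins := map[subsetKey]int{}
	for _, c := range configs {
		k := subsetKey{c.state, c.ctx}
		if m, ok := mins[k]; !ok || c.alt < m {
			mins[k] = c.alt
		}
	}
	union := map[int]bool{}
	for _, alt := range mins {
		union[alt] = true
	}
	return union
}

func main() {
	// (s,1,x), (s,2,x), (s',1,y), (s',2,y), (s'',1,z): union {1} => stop, predict 1
	stop := viableAlts([]cfg{{1, 1, "x"}, {1, 2, "x"}, {2, 1, "y"}, {2, 2, "y"}, {3, 1, "z"}})
	// (s,1,x), (s,2,x), (s,3,z), (s',1,y), (s',2,y): union {1,3} => continue
	cont := viableAlts([]cfg{{1, 1, "x"}, {1, 2, "x"}, {1, 3, "z"}, {2, 1, "y"}, {2, 2, "y"}})
	fmt.Println(len(stop) == 1, len(cont) > 1) // true true
}
```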
Exact Ambiguity Detection ¶
If all states report the same conflicting set of alternatives, then we know we have the exact ambiguity set:
|A_i| > 1
and
A_i = A_j ; for all i, j
In other words, we continue examining lookahead until all A_i have more than one alternative and all A_i are the same. If
A={{1,2}, {1,3}}
then regular LL prediction would terminate because the resolved set is {1}. To determine what the real ambiguity is, we have to know whether the ambiguity is between one and two or one and three so we keep going. We can only stop prediction when we need exact ambiguity detection when the sets look like:
A={{1,2}}
or
{{1,2},{1,2}}, etc...
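That stop condition - every subset identical and each of cardinality greater than 1 - can be sketched with the same map-based stand-in for BitSets:

```go
package main

import "fmt"

// sameAmbiguousSet reports whether all alternative subsets are identical and
// each contains more than one alternative - the exact-ambiguity-detection
// stop condition described above.
func sameAmbiguousSet(altsets []map[int]bool) bool {
	if len(altsets) == 0 {
		return false
	}
	first := altsets[0]
	for _, s := range altsets {
		if len(s) <= 1 || len(s) != len(first) {
			return false
		}
		for alt := range s {
			if !first[alt] {
				return false
			}
		}
	}
	return true
}

func main() {
	// A = {{1,2},{1,2}}: stop, exact ambiguity {1,2}
	fmt.Println(sameAmbiguousSet([]map[int]bool{{1: true, 2: true}, {1: true, 2: true}})) // true
	// A = {{1,2},{1,3}}: keep going to pin down the exact ambiguity
	fmt.Println(sameAmbiguousSet([]map[int]bool{{1: true, 2: true}, {1: true, 3: true}})) // false
}
```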
func PrintArrayJavaStyle ¶
func TerminalNodeToStringArray ¶
func TerminalNodeToStringArray(sa []TerminalNode) []string
func TreesStringTree ¶
func TreesStringTree(tree Tree, ruleNames []string, recog Recognizer) string
TreesStringTree prints out a whole tree in LISP form. [getNodeText] is used on the node payloads to get the text for the nodes. Detects parse trees and extracts data appropriately.
func WithLRLoopEntryBranchOpt ¶ added in v4.13.0
func WithLRLoopEntryBranchOpt(off bool) runtimeOption
WithLRLoopEntryBranchOpt sets the global flag indicating whether left-recursive loop operations should be optimized. This is useful for debugging parser issues by comparing the output with the Java runtime. It turns off the functionality of [canDropLoopEntryEdgeInLeftRecursiveRule] in ParserATNSimulator.
Note that the default is to use this optimization.
To turn the optimization off:
antlr.ConfigureRuntime(antlr.WithLRLoopEntryBranchOpt(true))
You can turn it back on at any time using:
antlr.ConfigureRuntime(antlr.WithLRLoopEntryBranchOpt(false))
func WithLexerATNSimulatorDFADebug ¶ added in v4.13.0
func WithLexerATNSimulatorDFADebug(debug bool) runtimeOption
WithLexerATNSimulatorDFADebug sets the global flag indicating whether to log debug information from the lexer ATN DFA simulator. This is useful for debugging lexer issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.
Use:
antlr.ConfigureRuntime(antlr.WithLexerATNSimulatorDFADebug(true))
You can turn it off at any time using:
antlr.ConfigureRuntime(antlr.WithLexerATNSimulatorDFADebug(false))
func WithLexerATNSimulatorDebug ¶ added in v4.13.0
func WithLexerATNSimulatorDebug(debug bool) runtimeOption
WithLexerATNSimulatorDebug sets the global flag indicating whether to log debug information from the lexer ATN simulator. This is useful for debugging lexer issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.
Use:
antlr.ConfigureRuntime(antlr.WithLexerATNSimulatorDebug(true))
You can turn it off at any time using:
antlr.ConfigureRuntime(antlr.WithLexerATNSimulatorDebug(false))
func WithMemoryManager ¶ added in v4.13.0
func WithMemoryManager(use bool) runtimeOption
WithMemoryManager sets the global flag indicating whether to use the memory manager. This is useful for poorly constructed grammars that create a lot of garbage. It turns on the functionality of [memoryManager], which will intercept garbage collection and cause available memory to be reused. At the end of the day, this is no substitute for fixing your grammar by ridding yourself of extreme ambiguity. But if you are just trying to reuse an open-source grammar, this may help make it more practical.
Note that the default is to use normal Go memory allocation and not pool memory.
Use:
antlr.ConfigureRuntime(antlr.WithMemoryManager(true))
Note that if you turn this on, you should probably leave it on. You should use only one memory strategy or the other and should remember to nil out any references to the parser or lexer when you are done with them.
func WithParserATNSimulatorDFADebug ¶ added in v4.13.0
func WithParserATNSimulatorDFADebug(debug bool) runtimeOption
WithParserATNSimulatorDFADebug sets the global flag indicating whether to log debug information from the parser ATN DFA simulator. This is useful for debugging parser issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.
Use:
antlr.ConfigureRuntime(antlr.WithParserATNSimulatorDFADebug(true))
You can turn it off at any time using:
antlr.ConfigureRuntime(antlr.WithParserATNSimulatorDFADebug(false))
func WithParserATNSimulatorDebug ¶ added in v4.13.0
func WithParserATNSimulatorDebug(debug bool) runtimeOption
WithParserATNSimulatorDebug sets the global flag indicating whether to log debug information from the parser ATN simulator. This is useful for debugging parser issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.
Use:
antlr.ConfigureRuntime(antlr.WithParserATNSimulatorDebug(true))
You can turn it off at any time using:
antlr.ConfigureRuntime(antlr.WithParserATNSimulatorDebug(false))
func WithParserATNSimulatorRetryDebug ¶ added in v4.13.0
func WithParserATNSimulatorRetryDebug(debug bool) runtimeOption
WithParserATNSimulatorRetryDebug sets the global flag indicating whether to log debug information from the parser ATN DFA simulator when retrying a decision. This is useful for debugging parser issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.
Use:
antlr.ConfigureRuntime(antlr.WithParserATNSimulatorRetryDebug(true))
You can turn it off at any time using:
antlr.ConfigureRuntime(antlr.WithParserATNSimulatorRetryDebug(false))
func WithParserATNSimulatorTraceATNSim ¶ added in v4.13.0
func WithParserATNSimulatorTraceATNSim(trace bool) runtimeOption
WithParserATNSimulatorTraceATNSim sets the global flag indicating whether to log trace information from the parser ATN simulator DFA. This is useful for debugging parser issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.
Use:
antlr.ConfigureRuntime(antlr.WithParserATNSimulatorTraceATNSim(true))
You can turn it off at any time using:
antlr.ConfigureRuntime(antlr.WithParserATNSimulatorTraceATNSim(false))
func WithStatsTraceStacks ¶ added in v4.13.0
func WithStatsTraceStacks(trace bool) runtimeOption
WithStatsTraceStacks sets the global flag indicating whether to collect stack traces at the create-point of certain structs, such as collections, or the use point of certain methods such as Put(). Because this can be expensive, it is turned off by default. However, it can be useful to track down exactly where memory is being created and used.
Use:
antlr.ConfigureRuntime(antlr.WithStatsTraceStacks(true))
You can turn it off at any time using:
antlr.ConfigureRuntime(antlr.WithStatsTraceStacks(false))
Types ¶
type AND ¶
type AND struct {
// contains filtered or unexported fields
}
func NewAND ¶
func NewAND(a, b SemanticContext) *AND
func (*AND) Equals ¶
func (a *AND) Equals(other Collectable[SemanticContext]) bool
type ATN ¶
type ATN struct {
	// DecisionToState is the decision points for all rules, sub-rules, optional
	// blocks, ()+, ()*, etc. Each sub-rule/rule is a decision point, and we must track them, so we
	// can go back later and build DFA predictors for them. This includes
	// all the rules, sub-rules, optional blocks, ()+, ()* etc...
	DecisionToState []DecisionState
	// contains filtered or unexported fields
}
ATN represents an “Augmented Transition Network”, though in general ANTLR uses the term “Augmented Recursive Transition Network”; some descriptions of plain “Recursive Transition Networks” also exist.
ATNs represent the main networks in the system and are serialized by the code generator and support ALL(*).
func NewATN ¶
NewATN returns a new ATN struct representing the given grammarType and is used for runtime deserialization of ATNs from the code generated by the ANTLR tool
func (*ATN) NextTokens ¶
func (a *ATN) NextTokens(s ATNState, ctx RuleContext) *IntervalSet
NextTokens computes and returns the set of valid tokens starting in state s, by calling either [NextTokensNoContext] (ctx == nil) or [NextTokensInContext] (ctx != nil).
func (*ATN) NextTokensInContext ¶
func (a *ATN) NextTokensInContext(s ATNState, ctx RuleContext) *IntervalSet
NextTokensInContext computes and returns the set of valid tokens that can occur starting in state s. If ctx is nil, the set of tokens will not include what can follow the rule surrounding s. In other words, the set will be restricted to tokens reachable staying within the rule of s.
func (*ATN) NextTokensNoContext ¶
func (a *ATN) NextTokensNoContext(s ATNState) *IntervalSet
NextTokensNoContext computes and returns the set of valid tokens that can occur starting in state s and staying in the same rule. TokenEpsilon is in the set if we reach the end of the rule.
type ATNAltConfigComparator ¶
type ATNAltConfigComparator[T Collectable[T]] struct { }
ATNAltConfigComparator is used as the comparator for mapping configs to Alt Bitsets
func (*ATNAltConfigComparator[T]) Equals2 ¶
func (c *ATNAltConfigComparator[T]) Equals2(o1, o2 *ATNConfig) bool
Equals2 is a custom comparator for ATNConfigs specifically for configLookup
func (*ATNAltConfigComparator[T]) Hash1 ¶
func (c *ATNAltConfigComparator[T]) Hash1(o *ATNConfig) int
Hash1 is custom hash implementation for ATNConfigs specifically for configLookup
type ATNConfig ¶
type ATNConfig struct {
// contains filtered or unexported fields
}
ATNConfig is a tuple: (ATN state, predicted alt, syntactic, semantic context). The syntactic context is a graph-structured stack node whose path(s) to the root is the rule invocation(s) chain used to arrive in the state. The semantic context is the tree of semantic predicates encountered before reaching an ATN state.
func NewATNConfig ¶ added in v4.13.0
func NewATNConfig(c *ATNConfig, state ATNState, context *PredictionContext, semanticContext SemanticContext) *ATNConfig
NewATNConfig creates a new ATNConfig instance given an existing config, a state, a context and a semantic context; the other 'constructors' are just wrappers around this one.
func NewATNConfig1 ¶ added in v4.13.0
func NewATNConfig1(c *ATNConfig, state ATNState, context *PredictionContext) *ATNConfig
NewATNConfig1 creates a new ATNConfig instance given an existing config, a state, and a context only
func NewATNConfig2 ¶ added in v4.13.0
func NewATNConfig2(c *ATNConfig, semanticContext SemanticContext) *ATNConfig
NewATNConfig2 creates a new ATNConfig instance given an existing config and a semantic context only
func NewATNConfig3 ¶ added in v4.13.0
func NewATNConfig3(c *ATNConfig, state ATNState, semanticContext SemanticContext) *ATNConfig
NewATNConfig3 creates a new ATNConfig instance given an existing config, a state and a semantic context
func NewATNConfig4 ¶ added in v4.13.0
NewATNConfig4 creates a new ATNConfig instance given an existing config, and a state only
func NewATNConfig5 ¶ added in v4.13.0
func NewATNConfig5(state ATNState, alt int, context *PredictionContext, semanticContext SemanticContext) *ATNConfig
NewATNConfig5 creates a new ATNConfig instance given a state, alt, context and semantic context
func NewATNConfig6 ¶ added in v4.13.0
func NewATNConfig6(state ATNState, alt int, context *PredictionContext) *ATNConfig
NewATNConfig6 creates a new ATNConfig instance given a state, alt and context only
func NewLexerATNConfig1 ¶
func NewLexerATNConfig1(state ATNState, alt int, context *PredictionContext) *ATNConfig
func NewLexerATNConfig2 ¶
func NewLexerATNConfig2(c *ATNConfig, state ATNState, context *PredictionContext) *ATNConfig
func NewLexerATNConfig3 ¶
func NewLexerATNConfig3(c *ATNConfig, state ATNState, lexerActionExecutor *LexerActionExecutor) *ATNConfig
func NewLexerATNConfig4 ¶
func NewLexerATNConfig6 ¶
func NewLexerATNConfig6(state ATNState, alt int, context *PredictionContext) *ATNConfig
func (*ATNConfig) Equals ¶
func (a *ATNConfig) Equals(o Collectable[*ATNConfig]) bool
Equals is the default comparison function for an ATNConfig when no specialist implementation is required for a collection.
An ATN configuration is equal to another if both have the same state, they predict the same alternative, and syntactic/semantic contexts are the same.
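The tuple comparison described above can be sketched with a simplified stand-in struct (hypothetical types for illustration, not the runtime's actual fields):

```go
package main

import "fmt"

// config is a hypothetical, simplified stand-in for an ATN configuration:
// equality is field-by-field over (state, alt, context, semantic context),
// mirroring the tuple comparison described above.
type config struct {
	state, alt int
	context    string // stand-in for the prediction context
	semCtx     string // stand-in for the semantic context
}

// equals reports whether two configurations agree on every tuple element.
func equals(a, b config) bool {
	return a.state == b.state &&
		a.alt == b.alt &&
		a.context == b.context &&
		a.semCtx == b.semCtx
}

func main() {
	a := config{state: 7, alt: 1, context: "$", semCtx: "true"}
	b := config{state: 7, alt: 1, context: "$", semCtx: "true"}
	c := config{state: 7, alt: 2, context: "$", semCtx: "true"} // differs in alt
	fmt.Println(equals(a, b), equals(a, c))
}
```

The real Equals additionally handles nil receivers and the Collectable interface; the sketch only shows the tuple semantics.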
func (*ATNConfig) GetContext ¶
func (a *ATNConfig) GetContext() *PredictionContext
GetContext returns the rule invocation stack associated with this configuration
func (*ATNConfig) GetReachesIntoOuterContext ¶
GetReachesIntoOuterContext returns the count of references to an outer context from this configuration
func (*ATNConfig) GetSemanticContext ¶
func (a *ATNConfig) GetSemanticContext() SemanticContext
GetSemanticContext returns the semantic context associated with this configuration
func (*ATNConfig) Hash ¶
Hash is the default hash function for a parser ATNConfig, when no specialist hash function is required for a collection
func (*ATNConfig) InitATNConfig ¶ added in v4.13.0
func (a *ATNConfig) InitATNConfig(c *ATNConfig, state ATNState, alt int, context *PredictionContext, semanticContext SemanticContext)
func (*ATNConfig) LEquals ¶ added in v4.13.0
func (a *ATNConfig) LEquals(other Collectable[*ATNConfig]) bool
LEquals is the default comparison function for Lexer ATNConfig objects; it can be used directly or via the default comparator ObjEqComparator.
func (*ATNConfig) LHash ¶ added in v4.13.0
LHash is the default hash function for Lexer ATNConfig objects; it can be used directly or via the default comparator ObjEqComparator.
func (*ATNConfig) PEquals ¶ added in v4.13.0
func (a *ATNConfig) PEquals(o Collectable[*ATNConfig]) bool
PEquals is the default comparison function for a Parser ATNConfig when no specialist implementation is required for a collection.
An ATN configuration is equal to another if both have the same state, they predict the same alternative, and syntactic/semantic contexts are the same.
func (*ATNConfig) PHash ¶ added in v4.13.0
PHash is the default hash function for a parser ATNConfig, when no specialist hash function is required for a collection
func (*ATNConfig) SetContext ¶
func (a *ATNConfig) SetContext(v *PredictionContext)
SetContext sets the rule invocation stack associated with this configuration
func (*ATNConfig) SetReachesIntoOuterContext ¶
SetReachesIntoOuterContext sets the count of references to an outer context from this configuration
type ATNConfigComparator ¶
type ATNConfigComparator[T Collectable[T]] struct { }
ATNConfigComparator is used as the comparator for the configLookup field of an ATNConfigSet and has a custom Equals() and Hash() implementation, because equality is not based on the standard Hash() and Equals() methods of the ATNConfig type.
func (*ATNConfigComparator[T]) Equals2 ¶
func (c *ATNConfigComparator[T]) Equals2(o1, o2 *ATNConfig) bool
Equals2 is a custom comparator for ATNConfigs specifically for configLookup
func (*ATNConfigComparator[T]) Hash1 ¶
func (c *ATNConfigComparator[T]) Hash1(o *ATNConfig) int
Hash1 is a custom hash implementation for ATNConfigs specifically for configLookup
type ATNConfigSet ¶
type ATNConfigSet struct {
// contains filtered or unexported fields
}
ATNConfigSet is a specialized set of ATNConfig that tracks information about its elements and can combine similar configurations using a graph-structured stack.
func NewATNConfigSet ¶ added in v4.13.0
func NewATNConfigSet(fullCtx bool) *ATNConfigSet
NewATNConfigSet creates a new ATNConfigSet instance.
func NewOrderedATNConfigSet ¶
func NewOrderedATNConfigSet() *ATNConfigSet
NewOrderedATNConfigSet creates a config set with a slightly different Hash/Equal pair for use in lexers.
func (*ATNConfigSet) Add ¶
func (b *ATNConfigSet) Add(config *ATNConfig, mergeCache *JPCMap) bool
Add merges contexts with existing configs for (s, i, pi, _), where 's' is the ATNConfig.state, 'i' is the ATNConfig.alt, and 'pi' is the ATNConfig.semanticContext.
We use (s,i,pi) as the key. Updates dipsIntoOuterContext and hasSemanticContext when necessary.
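A much-simplified sketch of the (s, i, pi) keying (hypothetical stand-in types; the runtime merges graph-structured context stacks rather than appending to a slice):

```go
package main

import "fmt"

// key is a hypothetical stand-in for the (state, alt, semanticContext)
// triple that ATNConfigSet.Add uses to decide which configs to merge.
type key struct {
	state, alt int
	semCtx     string
}

// addConfig groups contexts under their (s, i, pi) key; configs sharing a
// key have their contexts combined, others get their own entry.
func addConfig(set map[key][]string, state, alt int, semCtx, context string) {
	k := key{state, alt, semCtx}
	set[k] = append(set[k], context)
}

func main() {
	set := map[key][]string{}
	addConfig(set, 3, 1, "true", "ctxA")
	addConfig(set, 3, 1, "true", "ctxB") // same key: contexts are combined
	addConfig(set, 3, 2, "true", "ctxC") // different alt: separate entry
	fmt.Println(len(set), len(set[key{3, 1, "true"}]))
}
```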
func (*ATNConfigSet) AddAll ¶
func (b *ATNConfigSet) AddAll(coll []*ATNConfig) bool
func (*ATNConfigSet) Alts ¶
func (b *ATNConfigSet) Alts() *BitSet
Alts returns the combined set of alts for all the configurations in this set.
func (*ATNConfigSet) Clear ¶
func (b *ATNConfigSet) Clear()
func (*ATNConfigSet) Compare ¶ added in v4.13.0
func (b *ATNConfigSet) Compare(bs *ATNConfigSet) bool
Compare returns true only if the configs are in the same order and their Equals function returns true. Java uses ArrayList.equals(), which requires the same order.
func (*ATNConfigSet) Contains ¶
func (b *ATNConfigSet) Contains(item *ATNConfig) bool
func (*ATNConfigSet) ContainsFast ¶
func (b *ATNConfigSet) ContainsFast(item *ATNConfig) bool
func (*ATNConfigSet) Equals ¶
func (b *ATNConfigSet) Equals(other Collectable[ATNConfig]) bool
func (*ATNConfigSet) GetPredicates ¶
func (b *ATNConfigSet) GetPredicates() []SemanticContext
func (*ATNConfigSet) GetStates ¶
func (b *ATNConfigSet) GetStates() *JStore[ATNState, Comparator[ATNState]]
GetStates returns the set of states represented by all configurations in this config set
func (*ATNConfigSet) Hash ¶
func (b *ATNConfigSet) Hash() int
func (*ATNConfigSet) OptimizeConfigs ¶
func (b *ATNConfigSet) OptimizeConfigs(interpreter *BaseATNSimulator)
func (*ATNConfigSet) String ¶
func (b *ATNConfigSet) String() string
type ATNConfigSetPair ¶
type ATNConfigSetPair struct {
// contains filtered or unexported fields
}
type ATNDeserializationOptions ¶
type ATNDeserializationOptions struct {
// contains filtered or unexported fields
}
func DefaultATNDeserializationOptions ¶
func DefaultATNDeserializationOptions() *ATNDeserializationOptions
func NewATNDeserializationOptions ¶
func NewATNDeserializationOptions(other *ATNDeserializationOptions) *ATNDeserializationOptions
func (*ATNDeserializationOptions) GenerateRuleBypassTransitions ¶
func (opts *ATNDeserializationOptions) GenerateRuleBypassTransitions() bool
func (*ATNDeserializationOptions) ReadOnly ¶
func (opts *ATNDeserializationOptions) ReadOnly() bool
func (*ATNDeserializationOptions) SetGenerateRuleBypassTransitions ¶
func (opts *ATNDeserializationOptions) SetGenerateRuleBypassTransitions(generateRuleBypassTransitions bool)
func (*ATNDeserializationOptions) SetReadOnly ¶
func (opts *ATNDeserializationOptions) SetReadOnly(readOnly bool)
func (*ATNDeserializationOptions) SetVerifyATN ¶
func (opts *ATNDeserializationOptions) SetVerifyATN(verifyATN bool)
func (*ATNDeserializationOptions) VerifyATN ¶
func (opts *ATNDeserializationOptions) VerifyATN() bool
type ATNDeserializer ¶
type ATNDeserializer struct {
// contains filtered or unexported fields
}
func NewATNDeserializer ¶
func NewATNDeserializer(options *ATNDeserializationOptions) *ATNDeserializer
func (*ATNDeserializer) Deserialize ¶
func (a *ATNDeserializer) Deserialize(data []int32) *ATN
type ATNState ¶
type ATNState interface { GetEpsilonOnlyTransitions() bool GetRuleIndex() int SetRuleIndex(int) GetNextTokenWithinRule() *IntervalSet SetNextTokenWithinRule(*IntervalSet) GetATN() *ATN SetATN(*ATN) GetStateType() int GetStateNumber() int SetStateNumber(int) GetTransitions() []Transition SetTransitions([]Transition) AddTransition(Transition, int) String() string Hash() int Equals(Collectable[ATNState]) bool }
type AbstractPredicateTransition ¶
type AbstractPredicateTransition interface { Transition IAbstractPredicateTransitionFoo() }
type ActionTransition ¶
type ActionTransition struct { BaseTransition // contains filtered or unexported fields }
func NewActionTransition ¶
func NewActionTransition(target ATNState, ruleIndex, actionIndex int, isCtxDependent bool) *ActionTransition
func (*ActionTransition) Matches ¶
func (t *ActionTransition) Matches(_, _, _ int) bool
func (*ActionTransition) String ¶
func (t *ActionTransition) String() string
type AltDict ¶
type AltDict struct {
// contains filtered or unexported fields
}
func NewAltDict ¶
func NewAltDict() *AltDict
func PredictionModeGetStateToAltMap ¶
func PredictionModeGetStateToAltMap(configs *ATNConfigSet) *AltDict
PredictionModeGetStateToAltMap gets a map from state to alt subset from a configuration set.
for each configuration c in configs: map[c.state] U= c.alt
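The accumulation above can be sketched in plain Go (hypothetical stand-in types; the runtime uses an AltDict and BitSets rather than nested maps):

```go
package main

import "fmt"

// conf is a hypothetical stand-in for the (state, alt) pair carried by
// each configuration in a config set.
type conf struct{ state, alt int }

// stateToAltMap accumulates map[c.state] U= c.alt over all configs, the
// operation described for PredictionModeGetStateToAltMap.
func stateToAltMap(configs []conf) map[int]map[int]bool {
	m := map[int]map[int]bool{}
	for _, c := range configs {
		if m[c.state] == nil {
			m[c.state] = map[int]bool{}
		}
		m[c.state][c.alt] = true // union in this config's alt
	}
	return m
}

func main() {
	// state 1 predicts alts {1, 2}; state 2 predicts alt {1}.
	m := stateToAltMap([]conf{{1, 1}, {1, 2}, {2, 1}, {1, 1}})
	fmt.Println(len(m[1]), len(m[2]))
}
```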
type AtomTransition ¶
type AtomTransition struct {
BaseTransition
}
AtomTransition TODO: make all transitions sets? no, should remove set edges
func NewAtomTransition ¶
func NewAtomTransition(target ATNState, intervalSet int) *AtomTransition
func (*AtomTransition) Matches ¶
func (t *AtomTransition) Matches(symbol, _, _ int) bool
func (*AtomTransition) String ¶
func (t *AtomTransition) String() string
type BailErrorStrategy ¶
type BailErrorStrategy struct {
*DefaultErrorStrategy
}
The BailErrorStrategy implementation of ANTLRErrorStrategy responds to syntax errors by immediately canceling the parse operation with a ParseCancellationException. The implementation ensures that the [ParserRuleContext] exception field is set for all parse tree nodes that were not completed prior to encountering the error.
This error strategy is useful in the following scenarios.
Two-stage parsing: This error strategy allows the first stage of two-stage parsing to immediately terminate if an error is encountered, and immediately fall back to the second stage. In addition to avoiding wasted work by attempting to recover from errors here, the empty implementation of BailErrorStrategy.Sync improves the performance of the first stage.
Silent validation: When syntax errors are not being Reported or logged, and the parse result is simply ignored if errors occur, the BailErrorStrategy avoids wasting work on recovering from errors when the result will be ignored either way.
myparser.SetErrorHandler(NewBailErrorStrategy())
See also: [Parser.SetErrorHandler(ANTLRErrorStrategy)]
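The two-stage pattern can be sketched with stdlib panic/recover (hypothetical function names; a real first stage would configure the generated parser with NewBailErrorStrategy and catch the resulting panic):

```go
package main

import "fmt"

// firstStage is a hypothetical fast first pass that, like a parser using
// BailErrorStrategy, panics on the first syntax error instead of recovering.
func firstStage(input string) string {
	if input == "bad" {
		panic("parse cancelled") // stands in for ParseCancellationException
	}
	return "fast:" + input
}

// parse drives the two stages: try the bail-out pass first, and only on a
// panic fall back to a second, fully error-recovering pass.
func parse(input string) (result string) {
	defer func() {
		if r := recover(); r != nil {
			result = "slow:" + input // second stage with full error recovery
		}
	}()
	return firstStage(input)
}

func main() {
	fmt.Println(parse("ok"))
	fmt.Println(parse("bad"))
}
```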
func NewBailErrorStrategy ¶
func NewBailErrorStrategy() *BailErrorStrategy
func (*BailErrorStrategy) Recover ¶
func (b *BailErrorStrategy) Recover(recognizer Parser, e RecognitionException)
Recover, instead of recovering from exception e, re-panics with the exception wrapped in a ParseCancellationException so it is not caught by the rule function catch blocks. Use Exception.GetCause() to get the original RecognitionException.
func (*BailErrorStrategy) RecoverInline ¶
func (b *BailErrorStrategy) RecoverInline(recognizer Parser) Token
RecoverInline makes sure we don't attempt to recover inline; if the parser successfully recovers, it won't panic with an exception.
func (*BailErrorStrategy) Sync ¶
func (b *BailErrorStrategy) Sync(_ Parser)
Sync makes sure we don't attempt to recover from problems in sub-rules.
type BaseATNConfigComparator ¶
type BaseATNConfigComparator[T Collectable[T]] struct { }
BaseATNConfigComparator is used as the comparator for the configLookup field of an ATNConfigSet and has a custom Equals() and Hash() implementation, because equality is not based on the standard Hash() and Equals() methods of the ATNConfig type.
func (*BaseATNConfigComparator[T]) Equals2 ¶
func (c *BaseATNConfigComparator[T]) Equals2(o1, o2 *ATNConfig) bool
Equals2 is a custom comparator for ATNConfigs specifically for baseATNConfigSet
func (*BaseATNConfigComparator[T]) Hash1 ¶
func (c *BaseATNConfigComparator[T]) Hash1(o *ATNConfig) int
Hash1 is a custom hash implementation for ATNConfigs specifically for configLookup, but in fact just delegates to the standard Hash() method of the ATNConfig type.
type BaseATNSimulator ¶
type BaseATNSimulator struct {
// contains filtered or unexported fields
}
func (*BaseATNSimulator) ATN ¶
func (b *BaseATNSimulator) ATN() *ATN
func (*BaseATNSimulator) DecisionToDFA ¶
func (b *BaseATNSimulator) DecisionToDFA() []*DFA
func (*BaseATNSimulator) SharedContextCache ¶
func (b *BaseATNSimulator) SharedContextCache() *PredictionContextCache
type BaseATNState ¶
type BaseATNState struct { // NextTokenWithinRule caches lookahead during parsing. Not used during construction. NextTokenWithinRule *IntervalSet // contains filtered or unexported fields }
func NewATNState ¶ added in v4.13.0
func NewATNState() *BaseATNState
func (*BaseATNState) AddTransition ¶
func (as *BaseATNState) AddTransition(trans Transition, index int)
func (*BaseATNState) Equals ¶
func (as *BaseATNState) Equals(other Collectable[ATNState]) bool
func (*BaseATNState) GetATN ¶
func (as *BaseATNState) GetATN() *ATN
func (*BaseATNState) GetEpsilonOnlyTransitions ¶
func (as *BaseATNState) GetEpsilonOnlyTransitions() bool
func (*BaseATNState) GetNextTokenWithinRule ¶
func (as *BaseATNState) GetNextTokenWithinRule() *IntervalSet
func (*BaseATNState) GetRuleIndex ¶
func (as *BaseATNState) GetRuleIndex() int
func (*BaseATNState) GetStateNumber ¶
func (as *BaseATNState) GetStateNumber() int
func (*BaseATNState) GetStateType ¶
func (as *BaseATNState) GetStateType() int
func (*BaseATNState) GetTransitions ¶
func (as *BaseATNState) GetTransitions() []Transition
func (*BaseATNState) Hash ¶
func (as *BaseATNState) Hash() int
func (*BaseATNState) SetATN ¶
func (as *BaseATNState) SetATN(atn *ATN)
func (*BaseATNState) SetNextTokenWithinRule ¶
func (as *BaseATNState) SetNextTokenWithinRule(v *IntervalSet)
func (*BaseATNState) SetRuleIndex ¶
func (as *BaseATNState) SetRuleIndex(v int)
func (*BaseATNState) SetStateNumber ¶
func (as *BaseATNState) SetStateNumber(stateNumber int)
func (*BaseATNState) SetTransitions ¶
func (as *BaseATNState) SetTransitions(t []Transition)
func (*BaseATNState) String ¶
func (as *BaseATNState) String() string
type BaseAbstractPredicateTransition ¶
type BaseAbstractPredicateTransition struct {
BaseTransition
}
func NewBasePredicateTransition ¶
func NewBasePredicateTransition(target ATNState) *BaseAbstractPredicateTransition
func (*BaseAbstractPredicateTransition) IAbstractPredicateTransitionFoo ¶
func (a *BaseAbstractPredicateTransition) IAbstractPredicateTransitionFoo()
type BaseBlockStartState ¶
type BaseBlockStartState struct { BaseDecisionState // contains filtered or unexported fields }
BaseBlockStartState is the start of a regular (...) block.
func NewBlockStartState ¶
func NewBlockStartState() *BaseBlockStartState
type BaseDecisionState ¶
type BaseDecisionState struct { BaseATNState // contains filtered or unexported fields }
func NewBaseDecisionState ¶
func NewBaseDecisionState() *BaseDecisionState
type BaseInterpreterRuleContext ¶
type BaseInterpreterRuleContext struct {
*BaseParserRuleContext
}
func NewBaseInterpreterRuleContext ¶
func NewBaseInterpreterRuleContext(parent BaseInterpreterRuleContext, invokingStateNumber, ruleIndex int) *BaseInterpreterRuleContext
type BaseLexer ¶
type BaseLexer struct { *BaseRecognizer Interpreter ILexerATNSimulator TokenStartCharIndex int TokenStartLine int TokenStartColumn int ActionType int Virt Lexer // The most derived lexer implementation. Allows virtual method calls. // contains filtered or unexported fields }
func NewBaseLexer ¶
func NewBaseLexer(input CharStream) *BaseLexer
func (*BaseLexer) Emit ¶
Emit is the standard method called to automatically emit a token at the outermost lexical rule. The token object should point into the char buffer start..stop. If there is a text override in 'text', use that to set the token's text. Override this method to emit custom Token objects or provide a new factory.
func (*BaseLexer) EmitToken ¶
EmitToken by default does not support multiple emits per [NextToken] invocation for efficiency reasons. Subclass and override this func, [NextToken], and [GetToken] (to push tokens into a list and pull from that list rather than a single variable as this implementation does).
func (*BaseLexer) GetAllTokens ¶
GetAllTokens returns a list of all Token objects in input char stream. Forces a load of all tokens that can be made from the input char stream.
Does not include EOF token.
func (*BaseLexer) GetCharIndex ¶
GetCharIndex returns the index of the current character of lookahead
func (*BaseLexer) GetCharPositionInLine ¶
GetCharPositionInLine returns the current position in the current line as far as the lexer is concerned.
func (*BaseLexer) GetInputStream ¶
func (b *BaseLexer) GetInputStream() CharStream
func (*BaseLexer) GetInterpreter ¶
func (b *BaseLexer) GetInterpreter() ILexerATNSimulator
func (*BaseLexer) GetSourceName ¶
func (*BaseLexer) GetText ¶
GetText returns the text Matched so far for the current token or any text override.
func (*BaseLexer) GetTokenFactory ¶
func (b *BaseLexer) GetTokenFactory() TokenFactory
func (*BaseLexer) GetTokenSourceCharStreamPair ¶
func (b *BaseLexer) GetTokenSourceCharStreamPair() *TokenSourceCharStreamPair
func (*BaseLexer) NextToken ¶
NextToken returns a token from the lexer input source, i.e., matches a token on the source char stream.
func (*BaseLexer) PopMode ¶
PopMode restores the lexer mode saved by a call to [PushMode]. It is a panic error if there is no saved mode to return to.
func (*BaseLexer) PushMode ¶
PushMode saves the current lexer mode so that it can be restored later (see [PopMode]), then sets the current lexer mode to the supplied mode m.
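The PushMode/PopMode discipline is a simple stack; a minimal stdlib sketch (hypothetical types, not the runtime's BaseLexer):

```go
package main

import "fmt"

// lexer is a hypothetical stand-in showing the mode-stack discipline that
// BaseLexer uses for PushMode and PopMode.
type lexer struct {
	mode      string
	modeStack []string
}

// PushMode saves the current mode on the stack, then switches to m.
func (l *lexer) PushMode(m string) {
	l.modeStack = append(l.modeStack, l.mode)
	l.mode = m
}

// PopMode restores the most recently saved mode; like the runtime, it
// panics when there is no saved mode to return to.
func (l *lexer) PopMode() {
	if len(l.modeStack) == 0 {
		panic("no saved mode to restore")
	}
	l.mode = l.modeStack[len(l.modeStack)-1]
	l.modeStack = l.modeStack[:len(l.modeStack)-1]
}

func main() {
	l := &lexer{mode: "DEFAULT_MODE"}
	l.PushMode("STRING_MODE") // e.g. on seeing an opening quote
	fmt.Println(l.mode)
	l.PopMode() // e.g. on the closing quote
	fmt.Println(l.mode)
}
```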
func (*BaseLexer) Recover ¶
func (b *BaseLexer) Recover(re RecognitionException)
Recover: a lexer can normally match any char in its vocabulary after matching a token, so here we do the easy thing and just kill a character and hope it all works out. You can instead use the rule invocation stack to do sophisticated error recovery if you are in a fragment rule.
In general, lexers should not need to recover and should have rules that cover any eventuality, such as a character that makes no sense to the recognizer.
func (*BaseLexer) SetChannel ¶
func (*BaseLexer) SetInputStream ¶
func (b *BaseLexer) SetInputStream(input CharStream)
SetInputStream resets the lexer input stream and associated lexer state.
func (*BaseLexer) SetMode ¶
SetMode changes the lexer to a new mode. The lexer will use this mode from hereon in and the rules for that mode will be in force.
func (*BaseLexer) SetText ¶
SetText sets the complete text of this token; it wipes any previous changes to the text.
func (*BaseLexer) Skip ¶
func (b *BaseLexer) Skip()
Skip instructs the lexer to skip creating a token for the current lexer rule and look for another token. [NextToken] knows to keep looking when a lexer rule finishes with the token set to [SKIPTOKEN]. Recall that if token==nil at the end of any token rule, it creates one for you and emits it.
type BaseLexerAction ¶
type BaseLexerAction struct {
// contains filtered or unexported fields
}
func NewBaseLexerAction ¶
func NewBaseLexerAction(action int) *BaseLexerAction
func (*BaseLexerAction) Equals ¶
func (b *BaseLexerAction) Equals(other LexerAction) bool
func (*BaseLexerAction) Hash ¶
func (b *BaseLexerAction) Hash() int
type BaseParseTreeListener ¶
type BaseParseTreeListener struct{}
func (*BaseParseTreeListener) EnterEveryRule ¶
func (l *BaseParseTreeListener) EnterEveryRule(_ ParserRuleContext)
func (*BaseParseTreeListener) ExitEveryRule ¶
func (l *BaseParseTreeListener) ExitEveryRule(_ ParserRuleContext)
func (*BaseParseTreeListener) VisitErrorNode ¶
func (l *BaseParseTreeListener) VisitErrorNode(_ ErrorNode)
func (*BaseParseTreeListener) VisitTerminal ¶
func (l *BaseParseTreeListener) VisitTerminal(_ TerminalNode)
type BaseParseTreeVisitor ¶
type BaseParseTreeVisitor struct{}
func (*BaseParseTreeVisitor) Visit ¶
func (v *BaseParseTreeVisitor) Visit(tree ParseTree) interface{}
func (*BaseParseTreeVisitor) VisitChildren ¶
func (v *BaseParseTreeVisitor) VisitChildren(_ RuleNode) interface{}
func (*BaseParseTreeVisitor) VisitErrorNode ¶
func (v *BaseParseTreeVisitor) VisitErrorNode(_ ErrorNode) interface{}
func (*BaseParseTreeVisitor) VisitTerminal ¶
func (v *BaseParseTreeVisitor) VisitTerminal(_ TerminalNode) interface{}
type BaseParser ¶
type BaseParser struct { *BaseRecognizer Interpreter *ParserATNSimulator BuildParseTrees bool // contains filtered or unexported fields }
func NewBaseParser ¶
func NewBaseParser(input TokenStream) *BaseParser
NewBaseParser contains all the parsing support code to embed in parsers. Essentially most of it is error recovery stuff.
func (*BaseParser) AddParseListener ¶
func (p *BaseParser) AddParseListener(listener ParseTreeListener)
AddParseListener registers listener to receive events during the parsing process.
To support output-preserving grammar transformations (including but not limited to left-recursion removal, automated left-factoring, and optimized code generation), calls to listener methods during the parse may differ substantially from calls made by [ParseTreeWalker.DEFAULT] used after the parse is complete. In particular, rule entry and exit events may occur in a different order during the parse than after the parse. In addition, calls to certain rule entry methods may be omitted.
With the following specific exceptions, calls to listener events are deterministic, i.e. for identical input the calls to listener methods will be the same.
- Alterations to the grammar used to generate code may change the behavior of the listener calls.
- Alterations to the command line options passed to ANTLR 4 when generating the parser may change the behavior of the listener calls.
- Changing the version of the ANTLR Tool used to generate the parser may change the behavior of the listener calls.
func (*BaseParser) Consume ¶
func (p *BaseParser) Consume() Token
func (*BaseParser) DumpDFA ¶
func (p *BaseParser) DumpDFA()
DumpDFA prints the whole of the DFA for debugging
func (*BaseParser) EnterOuterAlt ¶
func (p *BaseParser) EnterOuterAlt(localctx ParserRuleContext, altNum int)
func (*BaseParser) EnterRecursionRule ¶
func (p *BaseParser) EnterRecursionRule(localctx ParserRuleContext, state, _, precedence int)
func (*BaseParser) EnterRule ¶
func (p *BaseParser) EnterRule(localctx ParserRuleContext, state, _ int)
func (*BaseParser) ExitRule ¶
func (p *BaseParser) ExitRule()
func (*BaseParser) GetATN ¶
func (p *BaseParser) GetATN() *ATN
func (*BaseParser) GetATNWithBypassAlts ¶
func (p *BaseParser) GetATNWithBypassAlts()
GetATNWithBypassAlts - the ATN with bypass alternatives is expensive to create, so we create it lazily.
func (*BaseParser) GetCurrentToken ¶
func (p *BaseParser) GetCurrentToken() Token
GetCurrentToken returns the current token at LT(1).
[Match] needs to return the current input symbol, which gets put into the label for the associated token ref e.g., x=ID.
func (*BaseParser) GetDFAStrings ¶
func (p *BaseParser) GetDFAStrings() string
GetDFAStrings returns a list of all DFA states used for debugging purposes
func (*BaseParser) GetErrorHandler ¶
func (p *BaseParser) GetErrorHandler() ErrorStrategy
func (*BaseParser) GetExpectedTokens ¶
func (p *BaseParser) GetExpectedTokens() *IntervalSet
GetExpectedTokens computes and returns the set of input symbols which could follow the current parser state and context, as given by [GetState] and [GetContext], respectively.
func (*BaseParser) GetExpectedTokensWithinCurrentRule ¶
func (p *BaseParser) GetExpectedTokensWithinCurrentRule() *IntervalSet
func (*BaseParser) GetInputStream ¶
func (p *BaseParser) GetInputStream() IntStream
func (*BaseParser) GetInterpreter ¶
func (p *BaseParser) GetInterpreter() *ParserATNSimulator
func (*BaseParser) GetInvokingContext ¶
func (p *BaseParser) GetInvokingContext(ruleIndex int) ParserRuleContext
func (*BaseParser) GetParseListeners ¶
func (p *BaseParser) GetParseListeners() []ParseTreeListener
func (*BaseParser) GetParserRuleContext ¶
func (p *BaseParser) GetParserRuleContext() ParserRuleContext
func (*BaseParser) GetPrecedence ¶
func (p *BaseParser) GetPrecedence() int
func (*BaseParser) GetRuleIndex ¶
func (p *BaseParser) GetRuleIndex(ruleName string) int
GetRuleIndex gets a rule's index (i.e., the RULE_ruleName field) or -1 if not found.
func (*BaseParser) GetRuleInvocationStack ¶
func (p *BaseParser) GetRuleInvocationStack(c ParserRuleContext) []string
GetRuleInvocationStack returns a list of the rule names in your parser instance leading up to a call to the current rule. You could override if you want more details such as the file/line info of where in the ATN a rule is invoked.
func (*BaseParser) GetSourceName ¶
func (p *BaseParser) GetSourceName() string
func (*BaseParser) GetTokenFactory ¶
func (p *BaseParser) GetTokenFactory() TokenFactory
func (*BaseParser) GetTokenStream ¶
func (p *BaseParser) GetTokenStream() TokenStream
func (*BaseParser) IsExpectedToken ¶
func (p *BaseParser) IsExpectedToken(symbol int) bool
IsExpectedToken checks whether symbol can follow the current state in the [ATN]. The behavior of this method is equivalent to the following, but is implemented such that the complete context-sensitive follow set does not need to be explicitly constructed.
return getExpectedTokens().contains(symbol)
func (*BaseParser) Match ¶
func (p *BaseParser) Match(ttype int) Token
func (*BaseParser) MatchWildcard ¶
func (p *BaseParser) MatchWildcard() Token
func (*BaseParser) NotifyErrorListeners ¶
func (p *BaseParser) NotifyErrorListeners(msg string, offendingToken Token, err RecognitionException)
func (*BaseParser) Precpred ¶
func (p *BaseParser) Precpred(_ RuleContext, precedence int) bool
func (*BaseParser) PushNewRecursionContext ¶
func (p *BaseParser) PushNewRecursionContext(localctx ParserRuleContext, state, _ int)
func (*BaseParser) RemoveParseListener ¶
func (p *BaseParser) RemoveParseListener(listener ParseTreeListener)
RemoveParseListener removes listener from the list of parse listeners.
If listener is nil or has not been added as a parse listener, this func does nothing.
func (*BaseParser) SetErrorHandler ¶
func (p *BaseParser) SetErrorHandler(e ErrorStrategy)
func (*BaseParser) SetInputStream ¶
func (p *BaseParser) SetInputStream(input TokenStream)
func (*BaseParser) SetParserRuleContext ¶
func (p *BaseParser) SetParserRuleContext(v ParserRuleContext)
func (*BaseParser) SetTokenStream ¶
func (p *BaseParser) SetTokenStream(input TokenStream)
SetTokenStream installs input as the token stream and resets the parser.
func (*BaseParser) SetTrace ¶
func (p *BaseParser) SetTrace(trace *TraceListener)
SetTrace installs a trace listener for the parse.
During a parse it is sometimes useful to listen in on the rule entry and exit events as well as token Matches. This is for quick and dirty debugging.
func (*BaseParser) TriggerEnterRuleEvent ¶
func (p *BaseParser) TriggerEnterRuleEvent()
TriggerEnterRuleEvent notifies all parse listeners of an enter rule event.
func (*BaseParser) TriggerExitRuleEvent ¶
func (p *BaseParser) TriggerExitRuleEvent()
TriggerExitRuleEvent notifies any parse listeners of an exit rule event.
func (*BaseParser) UnrollRecursionContexts ¶
func (p *BaseParser) UnrollRecursionContexts(parentCtx ParserRuleContext)
type BaseParserRuleContext ¶
type BaseParserRuleContext struct { RuleIndex int // contains filtered or unexported fields }
func NewBaseParserRuleContext ¶
func NewBaseParserRuleContext(parent ParserRuleContext, invokingStateNumber int) *BaseParserRuleContext
func (*BaseParserRuleContext) Accept ¶
func (prc *BaseParserRuleContext) Accept(visitor ParseTreeVisitor) interface{}
func (*BaseParserRuleContext) AddChild ¶
func (prc *BaseParserRuleContext) AddChild(child RuleContext) RuleContext
func (*BaseParserRuleContext) AddErrorNode ¶
func (prc *BaseParserRuleContext) AddErrorNode(badToken Token) *ErrorNodeImpl
func (*BaseParserRuleContext) AddTokenNode ¶
func (prc *BaseParserRuleContext) AddTokenNode(token Token) *TerminalNodeImpl
func (*BaseParserRuleContext) CopyFrom ¶
func (prc *BaseParserRuleContext) CopyFrom(ctx *BaseParserRuleContext)
func (*BaseParserRuleContext) EnterRule ¶
func (prc *BaseParserRuleContext) EnterRule(_ ParseTreeListener)
EnterRule is called when any rule is entered.
func (*BaseParserRuleContext) ExitRule ¶
func (prc *BaseParserRuleContext) ExitRule(_ ParseTreeListener)
ExitRule is called when any rule is exited.
func (*BaseParserRuleContext) GetAltNumber ¶ added in v4.13.0
func (prc *BaseParserRuleContext) GetAltNumber() int
func (*BaseParserRuleContext) GetChild ¶
func (prc *BaseParserRuleContext) GetChild(i int) Tree
func (*BaseParserRuleContext) GetChildCount ¶
func (prc *BaseParserRuleContext) GetChildCount() int
func (*BaseParserRuleContext) GetChildOfType ¶
func (prc *BaseParserRuleContext) GetChildOfType(i int, childType reflect.Type) RuleContext
func (*BaseParserRuleContext) GetChildren ¶
func (prc *BaseParserRuleContext) GetChildren() []Tree
func (*BaseParserRuleContext) GetInvokingState ¶ added in v4.13.0
func (prc *BaseParserRuleContext) GetInvokingState() int
func (*BaseParserRuleContext) GetParent ¶ added in v4.13.0
func (prc *BaseParserRuleContext) GetParent() Tree
GetParent returns the parent context of this rule context in the parse tree, or nil if this context has no parent.
func (*BaseParserRuleContext) GetPayload ¶
func (prc *BaseParserRuleContext) GetPayload() interface{}
func (*BaseParserRuleContext) GetRuleContext ¶
func (prc *BaseParserRuleContext) GetRuleContext() RuleContext
func (*BaseParserRuleContext) GetRuleIndex ¶ added in v4.13.0
func (prc *BaseParserRuleContext) GetRuleIndex() int
func (*BaseParserRuleContext) GetSourceInterval ¶
func (prc *BaseParserRuleContext) GetSourceInterval() Interval
func (*BaseParserRuleContext) GetStart ¶
func (prc *BaseParserRuleContext) GetStart() Token
func (*BaseParserRuleContext) GetStop ¶
func (prc *BaseParserRuleContext) GetStop() Token
func (*BaseParserRuleContext) GetText ¶
func (prc *BaseParserRuleContext) GetText() string
func (*BaseParserRuleContext) GetToken ¶
func (prc *BaseParserRuleContext) GetToken(ttype int, i int) TerminalNode
func (*BaseParserRuleContext) GetTokens ¶
func (prc *BaseParserRuleContext) GetTokens(ttype int) []TerminalNode
func (*BaseParserRuleContext) GetTypedRuleContext ¶
func (prc *BaseParserRuleContext) GetTypedRuleContext(ctxType reflect.Type, i int) RuleContext
func (*BaseParserRuleContext) GetTypedRuleContexts ¶
func (prc *BaseParserRuleContext) GetTypedRuleContexts(ctxType reflect.Type) []RuleContext
func (*BaseParserRuleContext) IsEmpty ¶ added in v4.13.0
func (prc *BaseParserRuleContext) IsEmpty() bool
IsEmpty returns true if the context is empty.
A context is empty if there is no invoking state, meaning that no rule invoked the current context.
func (*BaseParserRuleContext) RemoveLastChild ¶
func (prc *BaseParserRuleContext) RemoveLastChild()
RemoveLastChild is used by [EnterOuterAlt] to toss out a RuleContext previously added as we entered a rule. If we have a label, we will need to remove the generic ruleContext object.
func (*BaseParserRuleContext) SetAltNumber ¶ added in v4.13.0
func (prc *BaseParserRuleContext) SetAltNumber(_ int)
func (*BaseParserRuleContext) SetException ¶
func (prc *BaseParserRuleContext) SetException(e RecognitionException)
func (*BaseParserRuleContext) SetInvokingState ¶ added in v4.13.0
func (prc *BaseParserRuleContext) SetInvokingState(t int)
func (*BaseParserRuleContext) SetParent ¶ added in v4.13.0
func (prc *BaseParserRuleContext) SetParent(v Tree)
func (*BaseParserRuleContext) SetStart ¶
func (prc *BaseParserRuleContext) SetStart(t Token)
func (*BaseParserRuleContext) SetStop ¶
func (prc *BaseParserRuleContext) SetStop(t Token)
func (*BaseParserRuleContext) String ¶
func (prc *BaseParserRuleContext) String(ruleNames []string, stop RuleContext) string
func (*BaseParserRuleContext) ToStringTree ¶
func (prc *BaseParserRuleContext) ToStringTree(ruleNames []string, recog Recognizer) string
type BaseRecognitionException ¶
type BaseRecognitionException struct {
// contains filtered or unexported fields
}
func NewBaseRecognitionException ¶
func NewBaseRecognitionException(message string, recognizer Recognizer, input IntStream, ctx RuleContext) *BaseRecognitionException
func (*BaseRecognitionException) GetInputStream ¶
func (b *BaseRecognitionException) GetInputStream() IntStream
func (*BaseRecognitionException) GetMessage ¶
func (b *BaseRecognitionException) GetMessage() string
func (*BaseRecognitionException) GetOffendingToken ¶
func (b *BaseRecognitionException) GetOffendingToken() Token
func (*BaseRecognitionException) String ¶
func (b *BaseRecognitionException) String() string
type BaseRecognizer ¶
type BaseRecognizer struct {
	RuleNames       []string
	LiteralNames    []string
	SymbolicNames   []string
	GrammarFileName string
	SynErr          RecognitionException
	// contains filtered or unexported fields
}
func NewBaseRecognizer ¶
func NewBaseRecognizer() *BaseRecognizer
func (*BaseRecognizer) Action ¶
func (b *BaseRecognizer) Action(_ RuleContext, _, _ int)
func (*BaseRecognizer) AddErrorListener ¶
func (b *BaseRecognizer) AddErrorListener(listener ErrorListener)
func (*BaseRecognizer) GetError ¶ added in v4.13.0
func (b *BaseRecognizer) GetError() RecognitionException
func (*BaseRecognizer) GetErrorHeader ¶
func (b *BaseRecognizer) GetErrorHeader(e RecognitionException) string
GetErrorHeader returns the error header, normally line/character position information.
Can be overridden in sub structs embedding BaseRecognizer.
func (*BaseRecognizer) GetErrorListenerDispatch ¶
func (b *BaseRecognizer) GetErrorListenerDispatch() ErrorListener
func (*BaseRecognizer) GetLiteralNames ¶
func (b *BaseRecognizer) GetLiteralNames() []string
func (*BaseRecognizer) GetRuleIndexMap ¶
func (b *BaseRecognizer) GetRuleIndexMap() map[string]int
GetRuleIndexMap returns a map from rule names to rule indexes.
Used for XPath and tree pattern compilation.
TODO: JI This is not yet implemented in the Go runtime. Maybe not needed.
func (*BaseRecognizer) GetRuleNames ¶
func (b *BaseRecognizer) GetRuleNames() []string
func (*BaseRecognizer) GetState ¶
func (b *BaseRecognizer) GetState() int
func (*BaseRecognizer) GetSymbolicNames ¶
func (b *BaseRecognizer) GetSymbolicNames() []string
func (*BaseRecognizer) GetTokenErrorDisplay
deprecated
func (b *BaseRecognizer) GetTokenErrorDisplay(t Token) string
GetTokenErrorDisplay shows how a token should be displayed in an error message.
The default is to display just the text, but during development you might want to have a lot of information spit out. Override in that case to use t.String() (which, for CommonToken, dumps everything about the token). This is better than forcing you to override a method in your token objects because you don't have to go modify your lexer so that it creates a new type.
Deprecated: This method is not called by the ANTLR 4 Runtime. Specific implementations of [ANTLRErrorStrategy] may provide a similar feature when necessary. For example, see DefaultErrorStrategy.GetTokenErrorDisplay()
func (*BaseRecognizer) GetTokenNames ¶
func (b *BaseRecognizer) GetTokenNames() []string
func (*BaseRecognizer) GetTokenType ¶
func (b *BaseRecognizer) GetTokenType(_ string) int
GetTokenType returns the token type based upon its name.
func (*BaseRecognizer) HasError ¶ added in v4.13.0
func (b *BaseRecognizer) HasError() bool
func (*BaseRecognizer) Precpred ¶
func (b *BaseRecognizer) Precpred(_ RuleContext, _ int) bool
Precpred: embedding structs need to override this if there are precedence predicates that the ATN interpreter needs to execute.
func (*BaseRecognizer) RemoveErrorListeners ¶
func (b *BaseRecognizer) RemoveErrorListeners()
func (*BaseRecognizer) Sempred ¶
func (b *BaseRecognizer) Sempred(_ RuleContext, _ int, _ int) bool
Sempred: embedding structs need to override this if there are semantic predicates or actions that the ATN interpreter needs to execute.
func (*BaseRecognizer) SetError ¶ added in v4.13.0
func (b *BaseRecognizer) SetError(err RecognitionException)
func (*BaseRecognizer) SetState ¶
func (b *BaseRecognizer) SetState(v int)
type BaseRewriteOperation ¶
type BaseRewriteOperation struct {
// contains filtered or unexported fields
}
func (*BaseRewriteOperation) GetIndex ¶
func (op *BaseRewriteOperation) GetIndex() int
func (*BaseRewriteOperation) GetInstructionIndex ¶
func (op *BaseRewriteOperation) GetInstructionIndex() int
func (*BaseRewriteOperation) GetOpName ¶
func (op *BaseRewriteOperation) GetOpName() string
func (*BaseRewriteOperation) GetText ¶
func (op *BaseRewriteOperation) GetText() string
func (*BaseRewriteOperation) GetTokens ¶
func (op *BaseRewriteOperation) GetTokens() TokenStream
func (*BaseRewriteOperation) SetIndex ¶
func (op *BaseRewriteOperation) SetIndex(val int)
func (*BaseRewriteOperation) SetInstructionIndex ¶
func (op *BaseRewriteOperation) SetInstructionIndex(val int)
func (*BaseRewriteOperation) SetOpName ¶
func (op *BaseRewriteOperation) SetOpName(val string)
func (*BaseRewriteOperation) SetText ¶
func (op *BaseRewriteOperation) SetText(val string)
func (*BaseRewriteOperation) SetTokens ¶
func (op *BaseRewriteOperation) SetTokens(val TokenStream)
func (*BaseRewriteOperation) String ¶
func (op *BaseRewriteOperation) String() string
type BaseToken ¶
type BaseToken struct {
// contains filtered or unexported fields
}
func (*BaseToken) GetChannel ¶
func (*BaseToken) GetInputStream ¶
func (b *BaseToken) GetInputStream() CharStream
func (*BaseToken) GetSource ¶
func (b *BaseToken) GetSource() *TokenSourceCharStreamPair
func (*BaseToken) GetTokenIndex ¶
func (*BaseToken) GetTokenSource ¶
func (b *BaseToken) GetTokenSource() TokenSource
func (*BaseToken) GetTokenType ¶
func (*BaseToken) SetTokenIndex ¶
type BaseTransition ¶
type BaseTransition struct {
// contains filtered or unexported fields
}
func NewBaseTransition ¶
func NewBaseTransition(target ATNState) *BaseTransition
func (*BaseTransition) Matches ¶
func (t *BaseTransition) Matches(_, _, _ int) bool
type BasicBlockStartState ¶
type BasicBlockStartState struct {
BaseBlockStartState
}
func NewBasicBlockStartState ¶
func NewBasicBlockStartState() *BasicBlockStartState
type BasicState ¶
type BasicState struct {
BaseATNState
}
func NewBasicState ¶
func NewBasicState() *BasicState
type BitSet ¶
type BitSet struct {
// contains filtered or unexported fields
}
func NewBitSet ¶
func NewBitSet() *BitSet
NewBitSet creates a new bitwise set.
TODO: See if we can replace this with an implementation from the standard library.
func PredictionModeGetAlts ¶
PredictionModeGetAlts returns the complete set of represented alternatives for a collection of alternative subsets. This method returns the union of each BitSet in altsets, being the set of represented alternatives in altsets.
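The union described above can be sketched with a toy word-based bit set. The runtime's own BitSet API is not shown here, so `altSet` below is a hypothetical one-word stand-in used only to illustrate the operation:

```go
package main

import "fmt"

// altSet is a hypothetical stand-in for the runtime's BitSet: one uint64
// word is enough to illustrate the union of represented alternatives.
type altSet uint64

func (s *altSet) add(alt int)     { *s |= 1 << alt }
func (s altSet) has(alt int) bool { return s&(1<<alt) != 0 }

// unionAlts mirrors what PredictionModeGetAlts is documented to do:
// return the union of every set in altsets.
func unionAlts(altsets []altSet) altSet {
	var all altSet
	for _, s := range altsets {
		all |= s
	}
	return all
}

func main() {
	var a, b altSet
	a.add(1)
	b.add(3)
	all := unionAlts([]altSet{a, b})
	fmt.Println(all.has(1), all.has(2), all.has(3)) // true false true
}
```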
func PredictionModegetConflictingAltSubsets ¶
func PredictionModegetConflictingAltSubsets(configs *ATNConfigSet) []*BitSet
PredictionModegetConflictingAltSubsets gets the conflicting alt subsets from a configuration set.
for each configuration c in configs:
    map[c] U= c.ATNConfig.alt // map hash/equals uses s and x, not alt and not pred
type BlockEndState ¶
type BlockEndState struct {
	BaseATNState
	// contains filtered or unexported fields
}
BlockEndState is a terminal node of a simple (a|b|c) block.
func NewBlockEndState ¶
func NewBlockEndState() *BlockEndState
type BlockStartState ¶
type BlockStartState interface {
	DecisionState
	// contains filtered or unexported methods
}
type CharStream ¶
type ClosureBusy ¶ added in v4.13.0
type ClosureBusy struct {
// contains filtered or unexported fields
}
ClosureBusy is a store of ATNConfigs and is a tiny abstraction layer over a standard JStore so that we can use Lazy instantiation of the JStore, mostly to avoid polluting the stats module with a ton of JStore instances with nothing in them.
func NewClosureBusy ¶ added in v4.13.0
func NewClosureBusy(desc string) *ClosureBusy
NewClosureBusy creates a new ClosureBusy instance used to avoid infinite recursion for right-recursive rules
type Collectable ¶
type Collectable[T any] interface {
	Hash() int
	Equals(other Collectable[T]) bool
}
Collectable is an interface that a struct should implement if it is to be usable as a key in these collections.
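A minimal sketch of a type satisfying this contract may help. The interface shape below mirrors the one documented above; the `point` key type is purely hypothetical and used only to illustrate the Hash/Equals pairing that the runtime's collections rely on:

```go
package main

import "fmt"

// Collectable mirrors the documented interface: any key type for the
// runtime's collections must provide Hash and Equals.
type Collectable[T any] interface {
	Hash() int
	Equals(other Collectable[T]) bool
}

// point is a hypothetical key type, for illustration only.
type point struct{ x, y int }

// Hash must agree with Equals: equal values hash equally.
func (p *point) Hash() int { return p.x*31 + p.y }

func (p *point) Equals(other Collectable[*point]) bool {
	q, ok := other.(*point)
	return ok && p.x == q.x && p.y == q.y
}

func main() {
	a, b := &point{1, 2}, &point{1, 2}
	fmt.Println(a.Hash() == b.Hash(), a.Equals(b))
}
```

The invariant to preserve is the usual one for hashed collections: `Equals` implies equal `Hash` values, otherwise lookups in a JStore-style set will miss.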
type CollectionDescriptor ¶ added in v4.13.0
type CollectionSource ¶ added in v4.13.0
type CollectionSource int
const (
	UnknownCollection CollectionSource = iota
	ATNConfigLookupCollection
	ATNStateCollection
	DFAStateCollection
	ATNConfigCollection
	PredictionContextCollection
	SemanticContextCollection
	ClosureBusyCollection
	PredictionVisitedCollection
	MergeCacheCollection
	PredictionContextCacheCollection
	AltSetCollection
	ReachSetCollection
)
type CommonToken ¶
type CommonToken struct {
BaseToken
}
func NewCommonToken ¶
func NewCommonToken(source *TokenSourceCharStreamPair, tokenType, channel, start, stop int) *CommonToken
type CommonTokenFactory ¶
type CommonTokenFactory struct {
// contains filtered or unexported fields
}
CommonTokenFactory is the default TokenFactory implementation.
func NewCommonTokenFactory ¶
func NewCommonTokenFactory(copyText bool) *CommonTokenFactory
func (*CommonTokenFactory) Create ¶
func (c *CommonTokenFactory) Create(source *TokenSourceCharStreamPair, ttype int, text string, channel, start, stop, line, column int) Token
type CommonTokenStream ¶
type CommonTokenStream struct {
// contains filtered or unexported fields
}
CommonTokenStream is an implementation of TokenStream that loads tokens from a TokenSource on-demand and places the tokens in a buffer to provide access to any previous token by index. This token stream ignores the value of Token.getChannel. If your parser requires the token stream to filter tokens to only those on a particular channel, such as Token.DEFAULT_CHANNEL or Token.HIDDEN_CHANNEL, use a filtering token stream such as CommonTokenStream.
func NewCommonTokenStream ¶
func NewCommonTokenStream(lexer Lexer, channel int) *CommonTokenStream
NewCommonTokenStream creates a new CommonTokenStream instance using the supplied lexer to produce tokens and will pull tokens from the given lexer channel.
func (*CommonTokenStream) Consume ¶
func (c *CommonTokenStream) Consume()
func (*CommonTokenStream) Fill ¶
func (c *CommonTokenStream) Fill()
Fill gets all tokens from the lexer until EOF.
func (*CommonTokenStream) Get ¶
func (c *CommonTokenStream) Get(index int) Token
func (*CommonTokenStream) GetAllText ¶
func (c *CommonTokenStream) GetAllText() string
func (*CommonTokenStream) GetAllTokens ¶
func (c *CommonTokenStream) GetAllTokens() []Token
GetAllTokens returns all tokens currently pulled from the token source.
func (*CommonTokenStream) GetHiddenTokensToLeft ¶
func (c *CommonTokenStream) GetHiddenTokensToLeft(tokenIndex, channel int) []Token
GetHiddenTokensToLeft collects all tokens on channel to the left of the current token until we see a token on DEFAULT_TOKEN_CHANNEL. If channel is -1, it finds any non-default channel token.
func (*CommonTokenStream) GetHiddenTokensToRight ¶
func (c *CommonTokenStream) GetHiddenTokensToRight(tokenIndex, channel int) []Token
GetHiddenTokensToRight collects all tokens on a specified channel to the right of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL or EOF. If channel is -1, it finds any non-default channel token.
func (*CommonTokenStream) GetSourceName ¶
func (c *CommonTokenStream) GetSourceName() string
func (*CommonTokenStream) GetTextFromInterval ¶
func (c *CommonTokenStream) GetTextFromInterval(interval Interval) string
func (*CommonTokenStream) GetTextFromRuleContext ¶
func (c *CommonTokenStream) GetTextFromRuleContext(interval RuleContext) string
func (*CommonTokenStream) GetTextFromTokens ¶
func (c *CommonTokenStream) GetTextFromTokens(start, end Token) string
func (*CommonTokenStream) GetTokenSource ¶
func (c *CommonTokenStream) GetTokenSource() TokenSource
func (*CommonTokenStream) GetTokens ¶
func (c *CommonTokenStream) GetTokens(start int, stop int, types *IntervalSet) []Token
GetTokens gets all tokens from start to stop inclusive.
func (*CommonTokenStream) Index ¶
func (c *CommonTokenStream) Index() int
func (*CommonTokenStream) LA ¶
func (c *CommonTokenStream) LA(i int) int
func (*CommonTokenStream) LB ¶
func (c *CommonTokenStream) LB(k int) Token
func (*CommonTokenStream) LT ¶
func (c *CommonTokenStream) LT(k int) Token
func (*CommonTokenStream) Mark ¶
func (c *CommonTokenStream) Mark() int
func (*CommonTokenStream) NextTokenOnChannel ¶
func (c *CommonTokenStream) NextTokenOnChannel(i, _ int) int
NextTokenOnChannel returns the index of the next token on channel given a starting index. Returns i if tokens[i] is on channel. Returns -1 if there are no tokens on channel between 'i' and TokenEOF.
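The scan just described (return i itself if tokens[i] is already on channel, otherwise the next on-channel index, or -1 once EOF is reached) can be sketched over a plain slice. The `tok` struct here is a toy, not the runtime's Token interface:

```go
package main

import "fmt"

// tok is a toy token carrying only the fields the scan needs.
type tok struct {
	typ     int // tokenEOF marks the end of the stream
	channel int
}

const tokenEOF = -1

// nextTokenOnChannel mirrors the documented behavior: return i if
// tokens[i] is on channel, otherwise the index of the next token on
// channel, or -1 if EOF is reached first.
func nextTokenOnChannel(tokens []tok, i, channel int) int {
	for ; i < len(tokens); i++ {
		if tokens[i].typ == tokenEOF {
			return -1
		}
		if tokens[i].channel == channel {
			return i
		}
	}
	return -1
}

func main() {
	tokens := []tok{{1, 0}, {2, 1}, {3, 0}, {tokenEOF, 0}}
	fmt.Println(nextTokenOnChannel(tokens, 1, 0)) // 2: skips the hidden token at index 1
	fmt.Println(nextTokenOnChannel(tokens, 3, 1)) // -1: only EOF remains
}
```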
func (*CommonTokenStream) Release ¶
func (c *CommonTokenStream) Release(_ int)
func (*CommonTokenStream) Reset ¶ added in v4.13.0
func (c *CommonTokenStream) Reset()
func (*CommonTokenStream) Seek ¶
func (c *CommonTokenStream) Seek(index int)
func (*CommonTokenStream) SetTokenSource ¶
func (c *CommonTokenStream) SetTokenSource(tokenSource TokenSource)
SetTokenSource resets the c token stream by setting its token source.
func (*CommonTokenStream) Size ¶
func (c *CommonTokenStream) Size() int
func (*CommonTokenStream) Sync ¶
func (c *CommonTokenStream) Sync(i int) bool
Sync makes sure index i in tokens has a token and returns true if a token is located at index i and otherwise false.
type Comparator ¶
type ConsoleErrorListener ¶
type ConsoleErrorListener struct {
*DefaultErrorListener
}
func NewConsoleErrorListener ¶
func NewConsoleErrorListener() *ConsoleErrorListener
func (*ConsoleErrorListener) SyntaxError ¶
func (c *ConsoleErrorListener) SyntaxError(_ Recognizer, _ interface{}, line, column int, msg string, _ RecognitionException)
SyntaxError prints messages to the standard error stream (os.Stderr) containing the values of line, charPositionInLine, and msg using the following format:
line <line>:<charPositionInLine> <msg>
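A one-line sketch of how that layout is produced, for instance inside a custom listener that wants to match the console listener's output:

```go
package main

import "fmt"

// formatSyntaxError reproduces the documented layout:
//
//	line <line>:<charPositionInLine> <msg>
func formatSyntaxError(line, column int, msg string) string {
	return fmt.Sprintf("line %d:%d %s", line, column, msg)
}

func main() {
	fmt.Println(formatSyntaxError(3, 14, "missing ';' at '}'"))
}
```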
type DFA ¶
type DFA struct {
// contains filtered or unexported fields
}
DFA represents the Deterministic Finite Automaton used by the recognizer, including all the states it can reach and the transitions between them.
func NewDFA ¶
func NewDFA(atnStartState DecisionState, decision int) *DFA
func (*DFA) Get ¶ added in v4.13.0
Get returns a state that matches s if it is present in the DFA state set. We defer to this function instead of accessing states directly so that we can implement lazy instantiation of the states JMap.
func (*DFA) Len ¶ added in v4.13.0
Len returns the number of states in d. We use this instead of accessing states directly so that we can implement lazy instantiation of the states JMap.
func (*DFA) ToLexerString ¶
type DFASerializer ¶
type DFASerializer struct {
// contains filtered or unexported fields
}
DFASerializer is a DFA walker that knows how to dump the DFA states to serialized strings.
func NewDFASerializer ¶
func NewDFASerializer(dfa *DFA, literalNames, symbolicNames []string) *DFASerializer
func (*DFASerializer) GetStateString ¶
func (d *DFASerializer) GetStateString(s *DFAState) string
func (*DFASerializer) String ¶
func (d *DFASerializer) String() string
type DFAState ¶
type DFAState struct {
// contains filtered or unexported fields
}
DFAState represents a set of possible ATN configurations. As Aho, Sethi, Ullman p. 117 says: "The DFA uses its state to keep track of all possible states the ATN can be in after reading each input symbol. That is to say, after reading input a1, a2,..an, the DFA is in a state that represents the subset T of the states of the ATN that are reachable from the ATN's start state along some path labeled a1a2..an."
In conventional NFA-to-DFA conversion, therefore, the subset T would be a bitset representing the set of states the ATN could be in. We need to track the alt predicted by each state as well, however. More importantly, we need to maintain a stack of states, tracking the closure operations as they jump from rule to rule, emulating rule invocations (method calls). I have to add a stack to simulate the proper lookahead sequences for the underlying LL grammar from which the ATN was derived.
I use a set of ATNConfig objects, not simple states. An ATNConfig is both a state (ala normal conversion) and a RuleContext describing the chain of rules (if any) followed to arrive at that state.
A DFAState may have multiple references to a particular state, but with different ATN contexts (with same or different alts) meaning that state was reached via a different set of rule invocations.
func NewDFAState ¶
func NewDFAState(stateNumber int, configs *ATNConfigSet) *DFAState
func (*DFAState) Equals ¶
func (d *DFAState) Equals(o Collectable[*DFAState]) bool
Equals returns whether d equals other. Two DFAStates are equal if their ATN configuration sets are the same. This method is used to see if a state already exists.
Because the number of alternatives and number of ATN configurations are finite, there is a finite number of DFA states that can be processed. This is necessary to show that the algorithm terminates.
Cannot test the DFA state numbers here because in ParserATNSimulator.addDFAState we need to know if any other state exists that has the exact same set of ATN configurations. The stateNumber is irrelevant.
type DecisionState ¶
type DecisionState interface {
	ATNState
	// contains filtered or unexported methods
}
type DefaultErrorListener ¶
type DefaultErrorListener struct { }
func NewDefaultErrorListener ¶
func NewDefaultErrorListener() *DefaultErrorListener
func (*DefaultErrorListener) ReportAmbiguity ¶
func (d *DefaultErrorListener) ReportAmbiguity(_ Parser, _ *DFA, _, _ int, _ bool, _ *BitSet, _ *ATNConfigSet)
func (*DefaultErrorListener) ReportAttemptingFullContext ¶
func (d *DefaultErrorListener) ReportAttemptingFullContext(_ Parser, _ *DFA, _, _ int, _ *BitSet, _ *ATNConfigSet)
func (*DefaultErrorListener) ReportContextSensitivity ¶
func (d *DefaultErrorListener) ReportContextSensitivity(_ Parser, _ *DFA, _, _, _ int, _ *ATNConfigSet)
func (*DefaultErrorListener) SyntaxError ¶
func (d *DefaultErrorListener) SyntaxError(_ Recognizer, _ interface{}, _, _ int, _ string, _ RecognitionException)
type DefaultErrorStrategy ¶
type DefaultErrorStrategy struct {
// contains filtered or unexported fields
}
DefaultErrorStrategy is the default implementation of ANTLRErrorStrategy used for error reporting and recovery in ANTLR parsers.
func NewDefaultErrorStrategy ¶
func NewDefaultErrorStrategy() *DefaultErrorStrategy
func (*DefaultErrorStrategy) GetErrorRecoverySet ¶ added in v4.13.0
func (d *DefaultErrorStrategy) GetErrorRecoverySet(recognizer Parser) *IntervalSet
GetErrorRecoverySet computes the error recovery set for the current rule. During rule invocation, the parser pushes the set of tokens that can follow that rule reference on the stack. This amounts to computing FIRST of what follows the rule reference in the enclosing rule. See LinearApproximator.FIRST().
This local follow set only includes tokens from within the rule, i.e., the FIRST computation done by ANTLR stops at the end of a rule.
Example ¶
When you find a "no viable alt exception", the input is not consistent with any of the alternatives for rule r. The best thing to do is to consume tokens until you see something that can legally follow a call to r or any rule that called r. You don't want the exact set of viable next tokens because the input might just be missing a token--you might consume the rest of the input looking for one of the missing tokens.
Consider the grammar:
a : '[' b ']' | '(' b ')' ;
b : c '^' INT ;
c : ID | INT ;
At each rule invocation, the set of tokens that could follow that rule is pushed on a stack. Here are the various context-sensitive follow sets:
FOLLOW(b1_in_a) = FIRST(']') = ']'
FOLLOW(b2_in_a) = FIRST(')') = ')'
FOLLOW(c_in_b)  = FIRST('^') = '^'
Upon erroneous input “[]”, the call chain is
a → b → c
and, hence, the follow context stack is:
Depth  Follow set  Start of rule execution
  0    <EOF>       a (from main())
  1    ']'         b
  2    '^'         c
Notice that ')' is not included, because b would have to have been called from a different context in rule a for ')' to be included.
For error recovery, we cannot consider FOLLOW(c) (context-sensitive or otherwise). We need the combined set of all context-sensitive FOLLOW sets - the set of all tokens that could follow any reference in the call chain. We need to reSync to one of those tokens. Note that FOLLOW(c)='^' and if we reSync'd to that token, we'd consume until EOF. We need to Sync to context-sensitive FOLLOWs for a, b, and c:
{']','^'}
In this case, for input "[]", LA(1) is ']' and is in the set, so we would not consume anything. After printing an error, rule c would return normally. Rule b would not find the required '^' though. At this point, it gets a mismatched token error and panics with an exception (since LA(1) is not in the viable following token set). The rule exception handler tries to recover, but finds the same recovery set and doesn't consume anything. Rule b exits normally, returning to rule a. Now it finds the ']' (and with the successful Match exits errorRecovery mode).
So, you can see that the parser walks up the call chain looking for the token that was a member of the recovery set.
Errors are not generated in errorRecovery mode.
ANTLR's error recovery mechanism is based upon original ideas:
Algorithms + Data Structures = Programs by Niklaus Wirth and A note on error recovery in recursive descent parsers.
Later, Josef Grosch had some good ideas in Efficient and Comfortable Error Recovery in Recursive Descent Parsers
Like Grosch, I implement context-sensitive FOLLOW sets that are combined at run-time upon error to avoid overhead during parsing. Later, the runtime Sync was improved for loops/sub-rules; see the [Sync] docs.
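The combination step in the walkthrough above, i.e. taking the union of the context-sensitive FOLLOW sets held on the rule-invocation stack, can be sketched in a few lines. The stack representation here is a toy (plain string slices), not the runtime's IntervalSet machinery:

```go
package main

import "fmt"

// recoverySet mirrors the combination step described above: union the
// context-sensitive FOLLOW sets pushed on the rule-invocation stack.
func recoverySet(followStack [][]string) map[string]bool {
	combined := map[string]bool{}
	for _, follow := range followStack {
		for _, t := range follow {
			combined[t] = true
		}
	}
	return combined
}

func main() {
	// The stack from the walkthrough: a pushed ']' for b, b pushed '^' for c.
	stack := [][]string{{"<EOF>"}, {"]"}, {"^"}}
	set := recoverySet(stack)
	fmt.Println(set["]"], set["^"], set[")"]) // true true false: ')' is not in the set
}
```

Note how `)` stays out of the result, matching the walkthrough: b was invoked from the `'[' b ']'` alternative, so only `']'` was pushed.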
func (*DefaultErrorStrategy) GetExpectedTokens ¶
func (d *DefaultErrorStrategy) GetExpectedTokens(recognizer Parser) *IntervalSet
func (*DefaultErrorStrategy) GetMissingSymbol ¶
func (d *DefaultErrorStrategy) GetMissingSymbol(recognizer Parser) Token
GetMissingSymbol conjures up a missing token during error recovery.
The recognizer attempts to recover from single missing symbols. But, actions might refer to that missing symbol. For example:
x=ID {f($x)}.
The action clearly assumes that there has been an identifier Matched previously and that $x points at that token. If that token is missing, but the next token in the stream is what we want, we assume that this token is missing, and we keep going. Because we have to return some token to replace the missing token, we have to conjure one up. This method gives the user control over the tokens returned for missing tokens. Mostly, you will want to create something special for identifier tokens. For literals such as '{' and ',', the default action in the parser or tree parser works. It simply creates a CommonToken of the appropriate type. The text will be the token name. If you need to change which tokens must be created by the lexer, override this method to create the appropriate tokens.
func (*DefaultErrorStrategy) GetTokenErrorDisplay ¶
func (d *DefaultErrorStrategy) GetTokenErrorDisplay(t Token) string
GetTokenErrorDisplay determines how a token should be displayed in an error message. The default is to display just the text, but during development you might want to have a lot of information spit out. Override this func in that case to use t.String() (which, for CommonToken, dumps everything about the token). This is better than forcing you to override a method in your token objects because you don't have to go modify your lexer so that it creates a new type.
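The override mechanism the paragraph above describes is Go struct embedding: a sub-struct embeds the strategy and shadows the method. The sketch below uses a toy `base` type standing in for DefaultErrorStrategy (the real type is not imported here) to illustrate the pattern:

```go
package main

import "fmt"

// base stands in for a runtime struct such as DefaultErrorStrategy;
// this is only an illustration of the embedding pattern.
type base struct{}

// The default shows just the token text, quoted.
func (base) GetTokenErrorDisplay(text string) string { return "'" + text + "'" }

// verboseStrategy embeds base and shadows GetTokenErrorDisplay, the same
// way an embedding struct would override the runtime's default.
type verboseStrategy struct{ base }

func (verboseStrategy) GetTokenErrorDisplay(text string) string {
	return fmt.Sprintf("token(text=%q)", text)
}

func main() {
	fmt.Println(base{}.GetTokenErrorDisplay("if"))            // 'if'
	fmt.Println(verboseStrategy{}.GetTokenErrorDisplay("if")) // token(text="if")
}
```

Because method promotion resolves the shadowed name on the outer type, callers holding the embedding struct get the verbose display with no other wiring.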
func (*DefaultErrorStrategy) InErrorRecoveryMode ¶
func (d *DefaultErrorStrategy) InErrorRecoveryMode(_ Parser) bool
func (*DefaultErrorStrategy) Recover ¶
func (d *DefaultErrorStrategy) Recover(recognizer Parser, _ RecognitionException)
Recover is the default recovery implementation. It reSynchronizes the parser by consuming tokens until we find one in the reSynchronization set - loosely the set of tokens that can follow the current rule.
func (*DefaultErrorStrategy) RecoverInline ¶
func (d *DefaultErrorStrategy) RecoverInline(recognizer Parser) Token
The RecoverInline default implementation attempts to recover from the mismatched input by using single token insertion and deletion as described below. If the recovery attempt fails, this method panics with [InputMisMatchException]. TODO: Not sure that panic() is the right thing to do here - JI
EXTRA TOKEN (single token deletion) ¶
LA(1) is not what we are looking for. If LA(2) has the right token, however, then assume LA(1) is some extra spurious token and delete it. Then consume and return the next token (which was the LA(2) token) as the successful result of the Match operation.
This recovery strategy is implemented by [SingleTokenDeletion].
MISSING TOKEN (single token insertion) ¶
If the current token (at LA(1)) is consistent with what could come after the expected LA(1) token, then assume the token is missing and use the parser's TokenFactory to create it on the fly. The “insertion” is performed by returning the created token as the successful result of the Match operation.
This recovery strategy is implemented by [SingleTokenInsertion].
Example ¶
For example, the input i=(3 is clearly missing the ')'. When the parser returns from the nested call to expr, it will have called the chain:
stat → expr → atom
and it will be trying to Match the ')' at this point in the derivation:
: ID '=' '(' INT ')' ('+' atom)* ';'
                  ^
The attempt to [Match] ')' will fail when it sees ';' and call [RecoverInline]. To recover, it sees that LA(1)==';' is in the set of tokens that can follow the ')' token reference in rule atom. It can assume that you forgot the ')'.
func (*DefaultErrorStrategy) ReportError ¶
func (d *DefaultErrorStrategy) ReportError(recognizer Parser, e RecognitionException)
ReportError is the default implementation of error reporting. It returns immediately if the handler is already in error recovery mode. Otherwise, it calls [beginErrorCondition] and dispatches the Reporting task based on the runtime type of e according to the following table.
[NoViableAltException]: dispatches the call to [ReportNoViableAlternative]
[InputMisMatchException]: dispatches the call to [ReportInputMisMatch]
[FailedPredicateException]: dispatches the call to [ReportFailedPredicate]
All other types: calls [NotifyErrorListeners] to Report the exception
func (*DefaultErrorStrategy) ReportFailedPredicate ¶
func (d *DefaultErrorStrategy) ReportFailedPredicate(recognizer Parser, e *FailedPredicateException)
ReportFailedPredicate is called by [ReportError] when the exception is a FailedPredicateException.
See also: [ReportError]
func (*DefaultErrorStrategy) ReportInputMisMatch ¶
func (d *DefaultErrorStrategy) ReportInputMisMatch(recognizer Parser, e *InputMisMatchException)
ReportInputMisMatch is called by [ReportError] when the exception is an InputMisMatchException
See also: [ReportError]
func (*DefaultErrorStrategy) ReportMatch ¶
func (d *DefaultErrorStrategy) ReportMatch(recognizer Parser)
ReportMatch is the default implementation of error matching and simply calls endErrorCondition.
func (*DefaultErrorStrategy) ReportMissingToken ¶
func (d *DefaultErrorStrategy) ReportMissingToken(recognizer Parser)
ReportMissingToken is called to report a syntax error which requires the insertion of a missing token into the input stream. At the time this method is called, the missing token has not yet been inserted. When this method returns, recognizer is in error recovery mode.
This method is called when singleTokenInsertion identifies single-token insertion as a viable recovery strategy for a mismatched input error.
The default implementation simply returns if the handler is already in error recovery mode. Otherwise, it calls beginErrorCondition to enter error recovery mode, followed by calling [NotifyErrorListeners]
func (*DefaultErrorStrategy) ReportNoViableAlternative ¶
func (d *DefaultErrorStrategy) ReportNoViableAlternative(recognizer Parser, e *NoViableAltException)
ReportNoViableAlternative is called by [ReportError] when the exception is a NoViableAltException.
See also [ReportError]
func (*DefaultErrorStrategy) ReportUnwantedToken ¶
func (d *DefaultErrorStrategy) ReportUnwantedToken(recognizer Parser)
ReportUnwantedToken is called to report a syntax error that requires the removal of a token from the input stream. At the time this method is called, the erroneous symbol is the current LT(1) symbol and has not yet been removed from the input stream. When this method returns, recognizer is in error recovery mode.
This method is called when singleTokenDeletion identifies single-token deletion as a viable recovery strategy for a mismatched input error.
The default implementation simply returns if the handler is already in error recovery mode. Otherwise, it calls beginErrorCondition to enter error recovery mode, followed by calling [NotifyErrorListeners]
func (*DefaultErrorStrategy) SingleTokenDeletion ¶
func (d *DefaultErrorStrategy) SingleTokenDeletion(recognizer Parser) Token
SingleTokenDeletion implements the single-token deletion inline error recovery strategy. It is called by [RecoverInline] to attempt to recover from mismatched input. If this method returns nil, the parser and error handler state will not have changed. If this method returns non-nil, recognizer will not be in error recovery mode since the returned token was a successful Match.
If the single-token deletion is successful, this method calls [ReportUnwantedToken] to Report the error, followed by [Consume] to actually “delete” the extraneous token. Then, before returning, [ReportMatch] is called to signal a successful Match.
The func returns the successfully Matched Token instance if single-token deletion successfully recovers from the mismatched input, otherwise nil.
func (*DefaultErrorStrategy) SingleTokenInsertion ¶
func (d *DefaultErrorStrategy) SingleTokenInsertion(recognizer Parser) bool
SingleTokenInsertion implements the single-token insertion inline error recovery strategy. It is called by [RecoverInline] if the single-token deletion strategy fails to recover from the mismatched input. If this method returns true, recognizer will be in error recovery mode.
This method determines whether single-token insertion is viable by checking if the LA(1) input symbol could be successfully Matched if it were instead the LA(2) symbol. If this method returns true, the caller is responsible for creating and inserting a token with the correct type to produce this behavior.
This func returns true if single-token insertion is a viable recovery strategy for the current mismatched input.
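The two inline recovery checks can be sketched in miniature over a plain token slice. This is a self-contained illustration, not the runtime's implementation: the function names and the integer-token representation are assumptions made for the example.

```go
package main

import "fmt"

// deletionViable reports whether dropping the current token LA(1)
// would let the next token LA(2) match the expected token type.
func deletionViable(tokens []int, pos, expected int) bool {
	return pos+1 < len(tokens) && tokens[pos+1] == expected
}

// insertionViable reports whether the current token LA(1) could match
// as if it were LA(2) - that is, whether conjuring the expected token
// in front of it would repair the input.
func insertionViable(tokens []int, pos, follow int) bool {
	return pos < len(tokens) && tokens[pos] == follow
}

func main() {
	// Suppose the grammar expects token type 1, then token type 2.
	fmt.Println(deletionViable([]int{9, 1}, 0, 1)) // extra token 9: delete it
	fmt.Println(insertionViable([]int{2}, 0, 2))   // missing token 1: insert it
}
```

The real strategies perform the same shape of check against the ATN's expected-token sets rather than a single token type.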
func (*DefaultErrorStrategy) Sync ¶
func (d *DefaultErrorStrategy) Sync(recognizer Parser)
Sync is the default implementation of error strategy synchronization.
This Sync makes sure that the current lookahead symbol is consistent with what we were expecting at this point in the ATN. You can call this anytime, but ANTLR only generates code to check before sub-rules/loops and each iteration.
Implements Jim Idle's magic Sync mechanism in closures and optional sub-rules. E.g.:
a    : Sync ( stuff Sync )* ;
Sync : {consume to what can follow Sync} ;
At the start of a sub-rule upon error, Sync performs single-token deletion, if possible. If it can't do that, it bails on the current rule and uses the default error recovery, which consumes until the resynchronization set of the current rule.
If the sub-rule is optional ((...)?, (...)*, or a block with an empty alternative), then the expected set includes what follows the sub-rule.
During loop iteration, it consumes until it sees a token that can start a sub-rule or what follows the loop. Yes, that is pretty aggressive. We opt to stay in the loop as long as possible.
Origins ¶
Previous versions of ANTLR did a poor job of their recovery within loops. A single mismatched token or missing token would force the parser to bail out of the entire rule surrounding the loop. So, for the rule:
classfunc : 'class' ID '{' member* '}'
input with an extra token between members would force the parser to consume until it found the next class definition rather than the next member definition of the current class.
This functionality cost a bit of effort because the parser has to compare the token set at the start of the loop and at each iteration. If for some reason speed is suffering for you, you can turn off this functionality by simply overriding this method as empty:
{ }
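In Go, "overriding this method as empty" means embedding the default strategy and shadowing Sync. A minimal self-contained sketch of the pattern follows; the Parser and DefaultErrorStrategy types here are local stand-ins for the runtime's, and the counter exists only to make the override's effect observable.

```go
package main

import "fmt"

// Parser stands in for the runtime's Parser interface.
type Parser interface{}

// defaultSyncCalls counts invocations of the default Sync so the
// effect of the empty override is visible.
var defaultSyncCalls int

// DefaultErrorStrategy stands in for the runtime's type of the same name.
type DefaultErrorStrategy struct{}

func (d *DefaultErrorStrategy) Sync(recognizer Parser) {
	defaultSyncCalls++ // the real Sync checks the lookahead against the ATN
}

// FastErrorStrategy embeds the default strategy and shadows Sync with
// an empty body, disabling the per-iteration follow-set comparison.
type FastErrorStrategy struct {
	DefaultErrorStrategy
}

func (f *FastErrorStrategy) Sync(recognizer Parser) {
	// intentionally empty: skip adaptive resynchronization
}

func main() {
	strategies := []interface{ Sync(Parser) }{
		&DefaultErrorStrategy{},
		&FastErrorStrategy{},
	}
	for _, s := range strategies {
		s.Sync(nil)
	}
	fmt.Println(defaultSyncCalls) // only the default strategy incremented it
}
```

With the real runtime, an embedded *DefaultErrorStrategy with a shadowed Sync would be installed via the parser's SetErrorHandler.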
type DiagnosticErrorListener ¶
type DiagnosticErrorListener struct { *DefaultErrorListener // contains filtered or unexported fields }
func NewDiagnosticErrorListener ¶
func NewDiagnosticErrorListener(exactOnly bool) *DiagnosticErrorListener
func (*DiagnosticErrorListener) ReportAmbiguity ¶
func (d *DiagnosticErrorListener) ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs *ATNConfigSet)
func (*DiagnosticErrorListener) ReportAttemptingFullContext ¶
func (d *DiagnosticErrorListener) ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, _ *BitSet, _ *ATNConfigSet)
func (*DiagnosticErrorListener) ReportContextSensitivity ¶
func (d *DiagnosticErrorListener) ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, _ int, _ *ATNConfigSet)
type EpsilonTransition ¶
type EpsilonTransition struct { BaseTransition // contains filtered or unexported fields }
func NewEpsilonTransition ¶
func NewEpsilonTransition(target ATNState, outermostPrecedenceReturn int) *EpsilonTransition
func (*EpsilonTransition) Matches ¶
func (t *EpsilonTransition) Matches(_, _, _ int) bool
func (*EpsilonTransition) String ¶
func (t *EpsilonTransition) String() string
type ErrorListener ¶
type ErrorListener interface { SyntaxError(recognizer Recognizer, offendingSymbol interface{}, line, column int, msg string, e RecognitionException) ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs *ATNConfigSet) ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, conflictingAlts *BitSet, configs *ATNConfigSet) ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, prediction int, configs *ATNConfigSet) }
type ErrorNode ¶
type ErrorNode interface { TerminalNode // contains filtered or unexported methods }
type ErrorNodeImpl ¶
type ErrorNodeImpl struct {
*TerminalNodeImpl
}
func NewErrorNodeImpl ¶
func NewErrorNodeImpl(token Token) *ErrorNodeImpl
func (*ErrorNodeImpl) Accept ¶
func (e *ErrorNodeImpl) Accept(v ParseTreeVisitor) interface{}
type ErrorStrategy ¶
type ErrorStrategy interface { RecoverInline(Parser) Token Recover(Parser, RecognitionException) Sync(Parser) InErrorRecoveryMode(Parser) bool ReportError(Parser, RecognitionException) ReportMatch(Parser) // contains filtered or unexported methods }
type FailedPredicateException ¶
type FailedPredicateException struct { *BaseRecognitionException // contains filtered or unexported fields }
FailedPredicateException indicates that a semantic predicate failed during validation. Validation of predicates occurs when normally parsing the alternative just like Matching a token. Disambiguating predicate evaluation occurs when we test a predicate during prediction.
func NewFailedPredicateException ¶
func NewFailedPredicateException(recognizer Parser, predicate string, message string) *FailedPredicateException
type FileStream ¶
type FileStream struct { InputStream // contains filtered or unexported fields }
func NewFileStream ¶
func NewFileStream(fileName string) (*FileStream, error)
func (*FileStream) GetSourceName ¶
func (f *FileStream) GetSourceName() string
type IATNSimulator ¶
type ILexerATNSimulator ¶
type ILexerATNSimulator interface { IATNSimulator Match(input CharStream, mode int) int GetCharPositionInLine() int GetLine() int GetText(input CharStream) string Consume(input CharStream) // contains filtered or unexported methods }
type InputMisMatchException ¶
type InputMisMatchException struct {
*BaseRecognitionException
}
func NewInputMisMatchException ¶
func NewInputMisMatchException(recognizer Parser) *InputMisMatchException
NewInputMisMatchException creates an exception that signifies any kind of mismatched input exceptions such as when the current input does not Match the expected token.
type InputStream ¶
type InputStream struct {
// contains filtered or unexported fields
}
func NewInputStream ¶
func NewInputStream(data string) *InputStream
NewInputStream creates a new input stream from the given string
func NewIoStream ¶ added in v4.13.0
func NewIoStream(reader io.Reader) *InputStream
NewIoStream creates a new input stream from the given io.Reader reader. Note that the reader is read completely into memory and so it must actually have a stopping point - you cannot pass in a reader on an open-ended source such as a socket for instance.
func (*InputStream) Consume ¶
func (is *InputStream) Consume()
Consume moves the input pointer to the next character in the input stream
func (*InputStream) GetSourceName ¶
func (*InputStream) GetSourceName() string
func (*InputStream) GetText ¶
func (is *InputStream) GetText(start int, stop int) string
GetText returns the text from the input stream from the start to the stop index
func (*InputStream) GetTextFromInterval ¶
func (is *InputStream) GetTextFromInterval(i Interval) string
func (*InputStream) GetTextFromTokens ¶
func (is *InputStream) GetTextFromTokens(start, stop Token) string
GetTextFromTokens returns the text from the input stream from the first character of the start token to the last character of the stop token
func (*InputStream) Index ¶
func (is *InputStream) Index() int
Index returns the current offset into the input stream
func (*InputStream) LA ¶
func (is *InputStream) LA(offset int) int
LA returns the character at the given lookahead offset, relative to the current position in the input stream; LA(1) is the current character
func (*InputStream) LT ¶
func (is *InputStream) LT(offset int) int
LT returns the character at the given lookahead offset, relative to the current position; for an InputStream it behaves identically to [LA]
func (*InputStream) Mark ¶
func (is *InputStream) Mark() int
Mark does nothing here as we have entire buffer
func (*InputStream) Release ¶
func (is *InputStream) Release(_ int)
Release does nothing here as we have entire buffer
func (*InputStream) Seek ¶
func (is *InputStream) Seek(index int)
Seek the input point to the provided index offset
func (*InputStream) Size ¶
func (is *InputStream) Size() int
Size returns the total number of characters in the input stream
func (*InputStream) String ¶
func (is *InputStream) String() string
String returns the entire input stream as a string
type InsertAfterOp ¶
type InsertAfterOp struct {
BaseRewriteOperation
}
InsertAfterOp distinguishes between insert after/before so that the "insert after" instructions are executed first, followed by the "insert before" instructions at the same index. Implementation of "insert after" is "insert before index+1".
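The "insert after is insert before index+1" identity can be sketched over a plain slice of token texts. This is a self-contained illustration with assumed helper names, not the rewriter's actual code.

```go
package main

import "fmt"

// insertBefore splices text into the token texts at index.
func insertBefore(texts []string, index int, text string) []string {
	out := append([]string{}, texts[:index]...)
	out = append(out, text)
	return append(out, texts[index:]...)
}

// insertAfter is defined exactly as the doc describes: an insert
// before the following index.
func insertAfter(texts []string, index int, text string) []string {
	return insertBefore(texts, index+1, text)
}

func main() {
	tokens := []string{"a", "b", "c"}
	fmt.Println(insertAfter(tokens, 0, "X")) // [a X b c]
}
```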
func NewInsertAfterOp ¶
func NewInsertAfterOp(index int, text string, stream TokenStream) *InsertAfterOp
func (*InsertAfterOp) String ¶
func (op *InsertAfterOp) String() string
type InsertBeforeOp ¶
type InsertBeforeOp struct {
BaseRewriteOperation
}
func NewInsertBeforeOp ¶
func NewInsertBeforeOp(index int, text string, stream TokenStream) *InsertBeforeOp
func (*InsertBeforeOp) String ¶
func (op *InsertBeforeOp) String() string
type InterpreterRuleContext ¶
type InterpreterRuleContext interface { ParserRuleContext }
type Interval ¶
func NewInterval ¶
NewInterval creates a new interval with the given start and stop values.
func (Interval) Contains ¶
Contains returns true if the given item is contained within the interval.
type IntervalSet ¶
type IntervalSet struct {
// contains filtered or unexported fields
}
IntervalSet represents a collection of [Intervals], which may be read-only.
func NewIntervalSet ¶
func NewIntervalSet() *IntervalSet
NewIntervalSet creates a new empty, writable, interval set.
func (*IntervalSet) Equals ¶ added in v4.13.0
func (i *IntervalSet) Equals(other *IntervalSet) bool
func (*IntervalSet) GetIntervals ¶
func (i *IntervalSet) GetIntervals() []Interval
func (*IntervalSet) String ¶
func (i *IntervalSet) String() string
func (*IntervalSet) StringVerbose ¶
func (i *IntervalSet) StringVerbose(literalNames []string, symbolicNames []string, elemsAreChar bool) string
type IterativeParseTreeWalker ¶ added in v4.13.0
type IterativeParseTreeWalker struct {
*ParseTreeWalker
}
func NewIterativeParseTreeWalker ¶ added in v4.13.0
func NewIterativeParseTreeWalker() *IterativeParseTreeWalker
func (*IterativeParseTreeWalker) Walk ¶ added in v4.13.0
func (i *IterativeParseTreeWalker) Walk(listener ParseTreeListener, t Tree)
type JMap ¶
type JMap[K, V any, C Comparator[K]] struct { // contains filtered or unexported fields }
func NewJMap ¶
func NewJMap[K, V any, C Comparator[K]](comparator Comparator[K], cType CollectionSource, desc string) *JMap[K, V, C]
type JPCMap ¶ added in v4.13.0
type JPCMap struct {
// contains filtered or unexported fields
}
func NewJPCMap ¶ added in v4.13.0
func NewJPCMap(cType CollectionSource, desc string) *JPCMap
func (*JPCMap) Get ¶ added in v4.13.0
func (pcm *JPCMap) Get(k1, k2 *PredictionContext) (*PredictionContext, bool)
func (*JPCMap) Put ¶ added in v4.13.0
func (pcm *JPCMap) Put(k1, k2, v *PredictionContext)
type JPCMap2 ¶ added in v4.13.0
type JPCMap2 struct {
// contains filtered or unexported fields
}
func NewJPCMap2 ¶ added in v4.13.0
func NewJPCMap2(cType CollectionSource, desc string) *JPCMap2
func (*JPCMap2) Get ¶ added in v4.13.0
func (pcm *JPCMap2) Get(k1, k2 *PredictionContext) (*PredictionContext, bool)
func (*JPCMap2) Put ¶ added in v4.13.0
func (pcm *JPCMap2) Put(k1, k2, v *PredictionContext) (*PredictionContext, bool)
type JStatRec ¶ added in v4.13.0
type JStatRec struct { Source CollectionSource MaxSize int CurSize int Gets int GetHits int GetMisses int GetHashConflicts int GetNoEnt int Puts int PutHits int PutMisses int PutHashConflicts int MaxSlotSize int Description string CreateStack []byte }
A JStatRec is a record of a particular use of a JStore, JMap or JPCMap collection. Typically, it will be used to look for unused collections that were allocated anyway, problems with hash bucket clashes, and anomalies such as huge numbers of Gets with no entries found (GetNoEnt). You can refer to the CollectionAnomalies() function for ideas on what can be gleaned from these statistics about collections.
type JStore ¶
type JStore[T any, C Comparator[T]] struct { // contains filtered or unexported fields }
JStore implements a container that allows the use of a struct to calculate the key for a collection of values akin to map. This is not meant to be a full-blown HashMap but just serve the needs of the ANTLR Go runtime.
For ease of porting the logic of the runtime from the master target (Java), this collection operates in a similar way to Java, in that it can use any struct that supplies a Hash() and Equals() function as the key. The values are stored in a standard go map which internally is a form of hashmap itself, the key for the go map is the hash supplied by the key object. The collection is able to deal with hash conflicts by using a simple slice of values associated with the hash code indexed bucket. That isn't particularly efficient, but it is simple, and it works. As this is specifically for the ANTLR runtime, and we understand the requirements, then this is fine - this is not a general purpose collection.
func NewJStore ¶
func NewJStore[T any, C Comparator[T]](comparator Comparator[T], cType CollectionSource, desc string) *JStore[T, C]
func (*JStore[T, C]) Get ¶
Get will return the value associated with the key. The type of the key is the same as the type of the value, which would not generally be useful, but this is a specific requirement of ANTLR, where the key is generated using the object we are going to store.
func (*JStore[T, C]) Put ¶
Put will store given value in the collection. Note that the key for storage is generated from the value itself - this is specifically because that is what ANTLR needs - this would not be useful as any kind of general collection.
If the key has a hash conflict, then the value will be added to the slice of values associated with the hash, unless the value is already in the slice, in which case the existing value is returned. Value equivalence is tested by calling the equals() method on the key.
If the given value is already present in the store, then the existing value is returned as v and exists is set to true.
If the given value is not present in the store, then the value is added to the store and returned as v and exists is set to false.
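The bucket-per-hash scheme that JStore uses can be sketched self-contained. The entry and store types below are illustrative stand-ins (with a deliberately weak hash to force conflicts), not the runtime's generic implementation.

```go
package main

import "fmt"

// entry is anything that can hash and compare itself, mirroring the
// Hash()/Equals() contract JStore requires of its stored values.
type entry struct{ id, payload int }

func (e entry) hash() int           { return e.id % 4 } // weak on purpose: forces conflicts
func (e entry) equals(o entry) bool { return e.id == o.id }

// store keeps a slice of entries per hash bucket, resolving conflicts
// by a linear scan with equals(), as the JStore doc describes.
type store struct{ buckets map[int][]entry }

// put returns the existing entry and true if an equal entry is already
// present; otherwise it inserts and returns the new entry and false.
func (s *store) put(e entry) (entry, bool) {
	h := e.hash()
	for _, have := range s.buckets[h] {
		if have.equals(e) {
			return have, true
		}
	}
	s.buckets[h] = append(s.buckets[h], e)
	return e, false
}

func main() {
	s := &store{buckets: map[int][]entry{}}
	s.put(entry{1, 10})
	s.put(entry{5, 50})              // hash conflict: 1 and 5 share bucket 1
	_, exists := s.put(entry{1, 99}) // equal to the first entry
	fmt.Println(exists)              // true: the existing value wins
}
```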
func (*JStore[T, C]) SortedSlice ¶
type LL1Analyzer ¶
type LL1Analyzer struct {
// contains filtered or unexported fields
}
func NewLL1Analyzer ¶
func NewLL1Analyzer(atn *ATN) *LL1Analyzer
func (*LL1Analyzer) Look ¶
func (la *LL1Analyzer) Look(s, stopState ATNState, ctx RuleContext) *IntervalSet
Look computes the set of tokens that can follow s in the ATN in the specified ctx.
If ctx is nil and the end of the rule containing s is reached, [EPSILON] is added to the result set.
If ctx is not nil and the end of the outermost rule is reached, [EOF] is added to the result set.
Parameter s the ATN state, and stopState is the ATN state to stop at. This can be a BlockEndState to detect epsilon paths through a closure.
Parameter ctx is the complete parser context, or nil if the context should be ignored
The func returns the set of tokens that can follow s in the ATN in the specified ctx.
type Lexer ¶
type Lexer interface { TokenSource Recognizer Emit() Token SetChannel(int) PushMode(int) PopMode() int SetType(int) SetMode(int) }
type LexerATNSimulator ¶
type LexerATNSimulator struct { BaseATNSimulator Line int CharPositionInLine int MatchCalls int // contains filtered or unexported fields }
func NewLexerATNSimulator ¶
func NewLexerATNSimulator(recog Lexer, atn *ATN, decisionToDFA []*DFA, sharedContextCache *PredictionContextCache) *LexerATNSimulator
func (*LexerATNSimulator) Consume ¶
func (l *LexerATNSimulator) Consume(input CharStream)
func (*LexerATNSimulator) GetCharPositionInLine ¶
func (l *LexerATNSimulator) GetCharPositionInLine() int
func (*LexerATNSimulator) GetLine ¶
func (l *LexerATNSimulator) GetLine() int
func (*LexerATNSimulator) GetText ¶
func (l *LexerATNSimulator) GetText(input CharStream) string
GetText returns the text [Match]ed so far for the current token.
func (*LexerATNSimulator) GetTokenName ¶
func (l *LexerATNSimulator) GetTokenName(tt int) string
func (*LexerATNSimulator) Match ¶
func (l *LexerATNSimulator) Match(input CharStream, mode int) int
func (*LexerATNSimulator) MatchATN ¶
func (l *LexerATNSimulator) MatchATN(input CharStream) int
type LexerAction ¶
type LexerAction interface { Hash() int Equals(other LexerAction) bool // contains filtered or unexported methods }
type LexerActionExecutor ¶
type LexerActionExecutor struct {
// contains filtered or unexported fields
}
func LexerActionExecutorappend ¶
func LexerActionExecutorappend(lexerActionExecutor *LexerActionExecutor, lexerAction LexerAction) *LexerActionExecutor
LexerActionExecutorappend creates a LexerActionExecutor which executes the actions for the input LexerActionExecutor followed by a specified LexerAction. TODO: This does not match the Java code
func NewLexerActionExecutor ¶
func NewLexerActionExecutor(lexerActions []LexerAction) *LexerActionExecutor
func (*LexerActionExecutor) Equals ¶
func (l *LexerActionExecutor) Equals(other interface{}) bool
func (*LexerActionExecutor) Hash ¶
func (l *LexerActionExecutor) Hash() int
type LexerChannelAction ¶
type LexerChannelAction struct { *BaseLexerAction // contains filtered or unexported fields }
LexerChannelAction implements the channel lexer action by calling [Lexer.setChannel] with the assigned channel.
func NewLexerChannelAction ¶
func NewLexerChannelAction(channel int) *LexerChannelAction
NewLexerChannelAction creates a channel lexer action by calling [Lexer.setChannel] with the assigned channel.
func (*LexerChannelAction) Equals ¶
func (l *LexerChannelAction) Equals(other LexerAction) bool
func (*LexerChannelAction) Hash ¶
func (l *LexerChannelAction) Hash() int
func (*LexerChannelAction) String ¶
func (l *LexerChannelAction) String() string
type LexerCustomAction ¶
type LexerCustomAction struct { *BaseLexerAction // contains filtered or unexported fields }
func NewLexerCustomAction ¶
func NewLexerCustomAction(ruleIndex, actionIndex int) *LexerCustomAction
func (*LexerCustomAction) Equals ¶
func (l *LexerCustomAction) Equals(other LexerAction) bool
func (*LexerCustomAction) Hash ¶
func (l *LexerCustomAction) Hash() int
type LexerDFASerializer ¶
type LexerDFASerializer struct {
*DFASerializer
}
func NewLexerDFASerializer ¶
func NewLexerDFASerializer(dfa *DFA) *LexerDFASerializer
func (*LexerDFASerializer) String ¶
func (l *LexerDFASerializer) String() string
type LexerIndexedCustomAction ¶
type LexerIndexedCustomAction struct { *BaseLexerAction // contains filtered or unexported fields }
func NewLexerIndexedCustomAction ¶
func NewLexerIndexedCustomAction(offset int, lexerAction LexerAction) *LexerIndexedCustomAction
NewLexerIndexedCustomAction constructs a new indexed custom action by associating a character offset with a LexerAction.
Note: This class is only required for lexer actions for which [LexerAction.isPositionDependent] returns true.
The offset points into the input CharStream, relative to the token start index, at which the specified lexerAction should be executed.
func (*LexerIndexedCustomAction) Hash ¶
func (l *LexerIndexedCustomAction) Hash() int
type LexerModeAction ¶
type LexerModeAction struct { *BaseLexerAction // contains filtered or unexported fields }
LexerModeAction implements the mode lexer action by calling [Lexer.mode] with the assigned mode.
func NewLexerModeAction ¶
func NewLexerModeAction(mode int) *LexerModeAction
func (*LexerModeAction) Equals ¶
func (l *LexerModeAction) Equals(other LexerAction) bool
func (*LexerModeAction) Hash ¶
func (l *LexerModeAction) Hash() int
func (*LexerModeAction) String ¶
func (l *LexerModeAction) String() string
type LexerMoreAction ¶
type LexerMoreAction struct {
*BaseLexerAction
}
func NewLexerMoreAction ¶
func NewLexerMoreAction() *LexerMoreAction
func (*LexerMoreAction) String ¶
func (l *LexerMoreAction) String() string
type LexerNoViableAltException ¶
type LexerNoViableAltException struct { *BaseRecognitionException // contains filtered or unexported fields }
func NewLexerNoViableAltException ¶
func NewLexerNoViableAltException(lexer Lexer, input CharStream, startIndex int, deadEndConfigs *ATNConfigSet) *LexerNoViableAltException
func (*LexerNoViableAltException) String ¶
func (l *LexerNoViableAltException) String() string
type LexerPopModeAction ¶
type LexerPopModeAction struct {
*BaseLexerAction
}
LexerPopModeAction implements the popMode lexer action by calling [Lexer.popMode].
The popMode command does not have any parameters, so this action is implemented as a singleton instance exposed by LexerPopModeActionINSTANCE.
func NewLexerPopModeAction ¶
func NewLexerPopModeAction() *LexerPopModeAction
func (*LexerPopModeAction) String ¶
func (l *LexerPopModeAction) String() string
type LexerPushModeAction ¶
type LexerPushModeAction struct { *BaseLexerAction // contains filtered or unexported fields }
LexerPushModeAction implements the pushMode lexer action by calling [Lexer.pushMode] with the assigned mode.
func NewLexerPushModeAction ¶
func NewLexerPushModeAction(mode int) *LexerPushModeAction
func (*LexerPushModeAction) Equals ¶
func (l *LexerPushModeAction) Equals(other LexerAction) bool
func (*LexerPushModeAction) Hash ¶
func (l *LexerPushModeAction) Hash() int
func (*LexerPushModeAction) String ¶
func (l *LexerPushModeAction) String() string
type LexerSkipAction ¶
type LexerSkipAction struct {
*BaseLexerAction
}
LexerSkipAction implements the [BaseLexerAction.Skip] lexer action by calling [Lexer.Skip].
The Skip command does not have any parameters, so this action is implemented as a singleton instance exposed by the LexerSkipActionINSTANCE.
func NewLexerSkipAction ¶
func NewLexerSkipAction() *LexerSkipAction
func (*LexerSkipAction) Equals ¶ added in v4.13.0
func (b *LexerSkipAction) Equals(other LexerAction) bool
func (*LexerSkipAction) String ¶
func (l *LexerSkipAction) String() string
String returns a string representation of the current LexerSkipAction.
type LexerTypeAction ¶
type LexerTypeAction struct { *BaseLexerAction // contains filtered or unexported fields }
LexerTypeAction implements the type lexer action by calling [Lexer.setType] with the assigned type.
func NewLexerTypeAction ¶
func NewLexerTypeAction(thetype int) *LexerTypeAction
func (*LexerTypeAction) Equals ¶
func (l *LexerTypeAction) Equals(other LexerAction) bool
func (*LexerTypeAction) Hash ¶
func (l *LexerTypeAction) Hash() int
func (*LexerTypeAction) String ¶
func (l *LexerTypeAction) String() string
type LoopEndState ¶
type LoopEndState struct { BaseATNState // contains filtered or unexported fields }
LoopEndState marks the end of a * or + loop.
func NewLoopEndState ¶
func NewLoopEndState() *LoopEndState
type Mutex ¶ added in v4.13.1
type Mutex struct {
// contains filtered or unexported fields
}
Mutex is a simple mutex implementation that just delegates to sync.Mutex. It provides the mutex implementation for the antlr package, which users can turn off with the build tag -tags antlr.nomutex
type NoViableAltException ¶
type NoViableAltException struct { *BaseRecognitionException // contains filtered or unexported fields }
func NewNoViableAltException ¶
func NewNoViableAltException(recognizer Parser, input TokenStream, startToken Token, offendingToken Token, deadEndConfigs *ATNConfigSet, ctx ParserRuleContext) *NoViableAltException
NewNoViableAltException creates an exception indicating that the parser could not decide which of two or more paths to take based upon the remaining input. It tracks the starting token of the offending input and also knows where the parser was in the various paths when the error occurred.
Reported by [ReportNoViableAlternative]
type NotSetTransition ¶
type NotSetTransition struct {
SetTransition
}
func NewNotSetTransition ¶
func NewNotSetTransition(target ATNState, set *IntervalSet) *NotSetTransition
func (*NotSetTransition) Matches ¶
func (t *NotSetTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool
func (*NotSetTransition) String ¶
func (t *NotSetTransition) String() string
type OR ¶
type OR struct {
// contains filtered or unexported fields
}
func NewOR ¶
func NewOR(a, b SemanticContext) *OR
func (*OR) Equals ¶
func (o *OR) Equals(other Collectable[SemanticContext]) bool
type ObjEqComparator ¶
type ObjEqComparator[T Collectable[T]] struct{}
ObjEqComparator is the equivalent of the Java ObjectEqualityComparator, which is the default instance of Equality comparator. We do not have inheritance in Go, only interfaces, so we use generics to enforce some type safety and avoid having to implement this for every type that we want to perform comparison on.
This comparator works by using the standard Hash() and Equals() methods of the type T that is being compared. Which allows us to use it in any collection instance that does not require a special hash or equals implementation.
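The Hash()/Equals() delegation can be shown in a self-contained sketch. The Collectable interface and objEq type below are local stand-ins modelled on the contract the doc describes, not the runtime's definitions; point is a sample key type.

```go
package main

import "fmt"

// Collectable mirrors the key contract: anything stored must supply
// its own Hash and Equals.
type Collectable[T any] interface {
	Hash() int
	Equals(other T) bool
}

// objEq adds no logic of its own; like ObjEqComparator, it simply
// forwards to the methods of T.
type objEq[T Collectable[T]] struct{}

func (objEq[T]) Hash1(o T) int       { return o.Hash() }
func (objEq[T]) Equals2(a, b T) bool { return a.Equals(b) }

// point is a sample key type satisfying the contract.
type point struct{ x, y int }

func (p point) Hash() int           { return p.x*31 + p.y }
func (p point) Equals(o point) bool { return p.x == o.x && p.y == o.y }

func main() {
	var c objEq[point]
	fmt.Println(c.Hash1(point{2, 3}))                      // 65
	fmt.Println(c.Equals2(point{2, 3}, point{2, 3}))       // true
}
```

Because the comparator carries no state, one generic definition serves every key type, which is how the runtime avoids re-implementing equality per collection.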
func (*ObjEqComparator[T]) Equals2 ¶
func (c *ObjEqComparator[T]) Equals2(o1, o2 T) bool
Equals2 delegates to the Equals() method of type T
func (*ObjEqComparator[T]) Hash1 ¶
func (c *ObjEqComparator[T]) Hash1(o T) int
Hash1 delegates to the Hash() method of type T
type ParseCancellationException ¶
type ParseCancellationException struct { }
func NewParseCancellationException ¶
func NewParseCancellationException() *ParseCancellationException
func (ParseCancellationException) GetInputStream ¶ added in v4.13.0
func (p ParseCancellationException) GetInputStream() IntStream
func (ParseCancellationException) GetMessage ¶ added in v4.13.0
func (p ParseCancellationException) GetMessage() string
func (ParseCancellationException) GetOffendingToken ¶ added in v4.13.0
func (p ParseCancellationException) GetOffendingToken() Token
type ParseTree ¶
type ParseTree interface { SyntaxTree Accept(Visitor ParseTreeVisitor) interface{} GetText() string ToStringTree([]string, Recognizer) string }
func TreesDescendants ¶
func TreesFindAllTokenNodes ¶
func TreesfindAllNodes ¶
func TreesfindAllRuleNodes ¶
type ParseTreeListener ¶
type ParseTreeListener interface { VisitTerminal(node TerminalNode) VisitErrorNode(node ErrorNode) EnterEveryRule(ctx ParserRuleContext) ExitEveryRule(ctx ParserRuleContext) }
type ParseTreeVisitor ¶
type ParseTreeVisitor interface { Visit(tree ParseTree) interface{} VisitChildren(node RuleNode) interface{} VisitTerminal(node TerminalNode) interface{} VisitErrorNode(node ErrorNode) interface{} }
type ParseTreeWalker ¶
type ParseTreeWalker struct { }
func NewParseTreeWalker ¶
func NewParseTreeWalker() *ParseTreeWalker
func (*ParseTreeWalker) EnterRule ¶
func (p *ParseTreeWalker) EnterRule(listener ParseTreeListener, r RuleNode)
EnterRule enters a grammar rule by first triggering the generic event ParseTreeListener.[EnterEveryRule] then by triggering the event specific to the given parse tree node
func (*ParseTreeWalker) ExitRule ¶
func (p *ParseTreeWalker) ExitRule(listener ParseTreeListener, r RuleNode)
ExitRule exits a grammar rule by first triggering the event specific to the given parse tree node then by triggering the generic event ParseTreeListener.ExitEveryRule
func (*ParseTreeWalker) Walk ¶
func (p *ParseTreeWalker) Walk(listener ParseTreeListener, t Tree)
Walk performs a walk on the given parse tree starting at the root and going down recursively with depth-first search. On each node, [EnterRule] is called before recursively walking down into child nodes, then [ExitRule] is called after the recursive call to wind up.
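The enter-then-recurse-then-exit shape of Walk can be sketched self-contained over a toy tree. The node type and walk function here are illustrative stand-ins for the runtime's Tree and ParseTreeWalker, not its actual API.

```go
package main

import "fmt"

// node is a minimal stand-in for a parse tree: a label and children.
type node struct {
	label    string
	children []*node
}

// walk mirrors ParseTreeWalker.Walk: enter the node, recurse into each
// child depth-first, then exit the node on the way back up.
func walk(n *node, enter, exit func(*node)) {
	enter(n)
	for _, c := range n.children {
		walk(c, enter, exit)
	}
	exit(n)
}

func main() {
	tree := &node{"expr", []*node{{"term", nil}, {"term", nil}}}
	var trace []string
	walk(tree,
		func(n *node) { trace = append(trace, "enter "+n.label) },
		func(n *node) { trace = append(trace, "exit "+n.label) },
	)
	fmt.Println(trace)
	// [enter expr enter term exit term enter term exit term exit expr]
}
```

The real walker fires the generic EnterEveryRule/ExitEveryRule events plus the rule-specific listener methods at the same two points in the traversal.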
type Parser ¶
type Parser interface { Recognizer GetInterpreter() *ParserATNSimulator GetTokenStream() TokenStream GetTokenFactory() TokenFactory GetParserRuleContext() ParserRuleContext SetParserRuleContext(ParserRuleContext) Consume() Token GetParseListeners() []ParseTreeListener GetErrorHandler() ErrorStrategy SetErrorHandler(ErrorStrategy) GetInputStream() IntStream GetCurrentToken() Token GetExpectedTokens() *IntervalSet NotifyErrorListeners(string, Token, RecognitionException) IsExpectedToken(int) bool GetPrecedence() int GetRuleInvocationStack(ParserRuleContext) []string }
type ParserATNSimulator ¶
type ParserATNSimulator struct { BaseATNSimulator // contains filtered or unexported fields }
func NewParserATNSimulator ¶
func NewParserATNSimulator(parser Parser, atn *ATN, decisionToDFA []*DFA, sharedContextCache *PredictionContextCache) *ParserATNSimulator
func (*ParserATNSimulator) AdaptivePredict ¶
func (p *ParserATNSimulator) AdaptivePredict(parser *BaseParser, input TokenStream, decision int, outerContext ParserRuleContext) int
func (*ParserATNSimulator) GetAltThatFinishedDecisionEntryRule ¶
func (p *ParserATNSimulator) GetAltThatFinishedDecisionEntryRule(configs *ATNConfigSet) int
func (*ParserATNSimulator) GetPredictionMode ¶
func (p *ParserATNSimulator) GetPredictionMode() int
func (*ParserATNSimulator) GetTokenName ¶
func (p *ParserATNSimulator) GetTokenName(t int) string
func (*ParserATNSimulator) ReportAmbiguity ¶
func (p *ParserATNSimulator) ReportAmbiguity(dfa *DFA, _ *DFAState, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs *ATNConfigSet)
ReportAmbiguity reports an ambiguity in the parse, which shows that the parser will explore a different route.
With context-sensitive parsing, we know it is an ambiguity and not a conflict or error, but we can report it to the developer so that they can see that this is happening and can take action if they want to.
func (*ParserATNSimulator) ReportAttemptingFullContext ¶
func (p *ParserATNSimulator) ReportAttemptingFullContext(dfa *DFA, conflictingAlts *BitSet, configs *ATNConfigSet, startIndex, stopIndex int)
func (*ParserATNSimulator) ReportContextSensitivity ¶
func (p *ParserATNSimulator) ReportContextSensitivity(dfa *DFA, prediction int, configs *ATNConfigSet, startIndex, stopIndex int)
func (*ParserATNSimulator) SetPredictionMode ¶
func (p *ParserATNSimulator) SetPredictionMode(v int)
type ParserRuleContext ¶
type ParserRuleContext interface { RuleContext SetException(RecognitionException) AddTokenNode(token Token) *TerminalNodeImpl AddErrorNode(badToken Token) *ErrorNodeImpl EnterRule(listener ParseTreeListener) ExitRule(listener ParseTreeListener) SetStart(Token) GetStart() Token SetStop(Token) GetStop() Token AddChild(child RuleContext) RuleContext RemoveLastChild() }
type PlusBlockStartState ¶
type PlusBlockStartState struct { BaseBlockStartState // contains filtered or unexported fields }
PlusBlockStartState is the start of a (A|B|...)+ loop. Technically it is a decision state; we don't use it for code generation. Somebody might need it, it is included for completeness. In reality, PlusLoopbackState is the real decision-making node for A+.
func NewPlusBlockStartState ¶
func NewPlusBlockStartState() *PlusBlockStartState
type PlusLoopbackState ¶
type PlusLoopbackState struct {
BaseDecisionState
}
PlusLoopbackState is a decision state for A+ and (A|B)+. It has two transitions: one to the loop back to start of the block, and one to exit.
func NewPlusLoopbackState ¶
func NewPlusLoopbackState() *PlusLoopbackState
type PrecedencePredicate ¶
type PrecedencePredicate struct {
// contains filtered or unexported fields
}
func NewPrecedencePredicate ¶
func NewPrecedencePredicate(precedence int) *PrecedencePredicate
func PrecedencePredicatefilterPrecedencePredicates ¶
func PrecedencePredicatefilterPrecedencePredicates(set *JStore[SemanticContext, Comparator[SemanticContext]]) []*PrecedencePredicate
func (*PrecedencePredicate) Equals ¶
func (p *PrecedencePredicate) Equals(other Collectable[SemanticContext]) bool
func (*PrecedencePredicate) Hash ¶
func (p *PrecedencePredicate) Hash() int
func (*PrecedencePredicate) String ¶
func (p *PrecedencePredicate) String() string
type PrecedencePredicateTransition ¶
type PrecedencePredicateTransition struct { BaseAbstractPredicateTransition // contains filtered or unexported fields }
func NewPrecedencePredicateTransition ¶
func NewPrecedencePredicateTransition(target ATNState, precedence int) *PrecedencePredicateTransition
func (*PrecedencePredicateTransition) Matches ¶
func (t *PrecedencePredicateTransition) Matches(_, _, _ int) bool
func (*PrecedencePredicateTransition) String ¶
func (t *PrecedencePredicateTransition) String() string
type PredPrediction ¶
type PredPrediction struct {
// contains filtered or unexported fields
}
PredPrediction maps a predicate to a predicted alternative.
func NewPredPrediction ¶
func NewPredPrediction(pred SemanticContext, alt int) *PredPrediction
func (*PredPrediction) String ¶
func (p *PredPrediction) String() string
type Predicate ¶
type Predicate struct {
// contains filtered or unexported fields
}
func NewPredicate ¶
func (*Predicate) Equals ¶
func (p *Predicate) Equals(other Collectable[SemanticContext]) bool
type PredicateTransition ¶
type PredicateTransition struct {
	BaseAbstractPredicateTransition
	// contains filtered or unexported fields
}
func NewPredicateTransition ¶
func NewPredicateTransition(target ATNState, ruleIndex, predIndex int, isCtxDependent bool) *PredicateTransition
func (*PredicateTransition) Matches ¶
func (t *PredicateTransition) Matches(_, _, _ int) bool
func (*PredicateTransition) String ¶
func (t *PredicateTransition) String() string
type PredictionContext ¶
type PredictionContext struct {
// contains filtered or unexported fields
}
PredictionContext is a Go-idiomatic implementation of PredictionContext that does not try to emulate inheritance from Java, and can be used without an interface definition. An interface is not required because no user code will ever need to provide its own implementation.
func NewArrayPredictionContext ¶
func NewArrayPredictionContext(parents []*PredictionContext, returnStates []int) *PredictionContext
func NewBaseSingletonPredictionContext ¶
func NewBaseSingletonPredictionContext(parent *PredictionContext, returnState int) *PredictionContext
func NewEmptyPredictionContext ¶
func NewEmptyPredictionContext() *PredictionContext
func SingletonBasePredictionContextCreate ¶
func SingletonBasePredictionContextCreate(parent *PredictionContext, returnState int) *PredictionContext
func (*PredictionContext) ArrayEquals ¶ added in v4.13.0
func (p *PredictionContext) ArrayEquals(o Collectable[*PredictionContext]) bool
func (*PredictionContext) Equals ¶
func (p *PredictionContext) Equals(other Collectable[*PredictionContext]) bool
func (*PredictionContext) GetParent ¶
func (p *PredictionContext) GetParent(i int) *PredictionContext
func (*PredictionContext) GetReturnStates ¶ added in v4.13.0
func (p *PredictionContext) GetReturnStates() []int
func (*PredictionContext) Hash ¶
func (p *PredictionContext) Hash() int
func (*PredictionContext) SingletonEquals ¶ added in v4.13.0
func (p *PredictionContext) SingletonEquals(other Collectable[*PredictionContext]) bool
func (*PredictionContext) String ¶
func (p *PredictionContext) String() string
func (*PredictionContext) Type ¶ added in v4.13.0
func (p *PredictionContext) Type() int
type PredictionContextCache ¶
type PredictionContextCache struct {
// contains filtered or unexported fields
}
PredictionContextCache is used to cache PredictionContext objects. It is used for the shared context cache associated with contexts in DFA states. This cache can be used for both lexers and parsers.
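The cache's job is canonicalization: contexts with equal contents should resolve to a single shared instance so that later comparisons are cheap. The following stdlib-only sketch illustrates that idea; the `ctx`, `cache`, `newCache`, and `get` names are illustrative stand-ins, not the antlr API.

```go
package main

import "fmt"

// ctx is a stand-in for PredictionContext, identified here only by its
// return state (the real type carries parents and a hash as well).
type ctx struct {
	returnState int
}

// cache canonicalizes contexts so that equal contents share one pointer,
// mirroring the idea behind PredictionContextCache.Get.
type cache struct {
	entries map[int]*ctx
}

func newCache() *cache { return &cache{entries: map[int]*ctx{}} }

// get returns the canonical instance for c, adding c on first sight.
// The second result reports whether an equal context was already cached.
func (ca *cache) get(c *ctx) (*ctx, bool) {
	if existing, ok := ca.entries[c.returnState]; ok {
		return existing, true
	}
	ca.entries[c.returnState] = c
	return c, false
}

func main() {
	ca := newCache()
	a := &ctx{returnState: 7}
	b := &ctx{returnState: 7} // same content, different allocation
	canonical, _ := ca.get(a)
	again, hit := ca.get(b)
	fmt.Println(canonical == again, hit) // duplicate resolves to one shared instance
}
```

Sharing one instance per distinct context is what makes the cache useful for both lexer and parser simulation: pointer comparison then stands in for structural comparison.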
func NewPredictionContextCache ¶
func NewPredictionContextCache() *PredictionContextCache
func (*PredictionContextCache) Get ¶
func (p *PredictionContextCache) Get(ctx *PredictionContext) (*PredictionContext, bool)
type ProxyErrorListener ¶
type ProxyErrorListener struct {
	*DefaultErrorListener
	// contains filtered or unexported fields
}
func NewProxyErrorListener ¶
func NewProxyErrorListener(delegates []ErrorListener) *ProxyErrorListener
func (*ProxyErrorListener) ReportAmbiguity ¶
func (p *ProxyErrorListener) ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs *ATNConfigSet)
func (*ProxyErrorListener) ReportAttemptingFullContext ¶
func (p *ProxyErrorListener) ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, conflictingAlts *BitSet, configs *ATNConfigSet)
func (*ProxyErrorListener) ReportContextSensitivity ¶
func (p *ProxyErrorListener) ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, prediction int, configs *ATNConfigSet)
func (*ProxyErrorListener) SyntaxError ¶
func (p *ProxyErrorListener) SyntaxError(recognizer Recognizer, offendingSymbol interface{}, line, column int, msg string, e RecognitionException)
type RangeTransition ¶
type RangeTransition struct {
	BaseTransition
	// contains filtered or unexported fields
}
func NewRangeTransition ¶
func NewRangeTransition(target ATNState, start, stop int) *RangeTransition
func (*RangeTransition) Matches ¶
func (t *RangeTransition) Matches(symbol, _, _ int) bool
func (*RangeTransition) String ¶
func (t *RangeTransition) String() string
type RecognitionException ¶
type Recognizer ¶
type Recognizer interface {
	GetLiteralNames() []string
	GetSymbolicNames() []string
	GetRuleNames() []string
	Sempred(RuleContext, int, int) bool
	Precpred(RuleContext, int) bool
	GetState() int
	SetState(int)
	Action(RuleContext, int, int)
	AddErrorListener(ErrorListener)
	RemoveErrorListeners()
	GetATN() *ATN
	GetErrorListenerDispatch() ErrorListener
	HasError() bool
	GetError() RecognitionException
	SetError(RecognitionException)
}
type ReplaceOp ¶
type ReplaceOp struct {
	BaseRewriteOperation
	LastIndex int
}
ReplaceOp tries to replace the range x..y with (y-x)+1 ReplaceOp instructions.
func NewReplaceOp ¶
func NewReplaceOp(from, to int, text string, stream TokenStream) *ReplaceOp
type RewriteOperation ¶
type RewriteOperation interface {
	// Execute the rewrite operation by possibly adding to the buffer.
	// Return the index of the next token to operate on.
	Execute(buffer *bytes.Buffer) int
	String() string
	GetInstructionIndex() int
	GetIndex() int
	GetText() string
	GetOpName() string
	GetTokens() TokenStream
	SetInstructionIndex(val int)
	SetIndex(int)
	SetText(string)
	SetOpName(string)
	SetTokens(TokenStream)
}
type RuleContext ¶
type RuleContext interface {
	RuleNode
	GetInvokingState() int
	SetInvokingState(int)
	GetRuleIndex() int
	IsEmpty() bool
	GetAltNumber() int
	SetAltNumber(altNumber int)
	String([]string, RuleContext) string
}
RuleContext is a record of a single rule invocation. It knows which context invoked it, if any. If there is no parent context, then naturally the invoking state is not valid. The parent link provides a chain upwards from the current rule invocation to the root of the invocation tree, forming a stack.
We actually carry no information about the rule associated with this context (except when parsing). We keep only the state number of the invoking state from the ATN submachine that invoked this. Contrast this with the s pointer inside ParserRuleContext that tracks the current state being "executed" for the current rule.
The parent contexts are useful for computing lookahead sets and getting error information.
These objects are used during parsing and prediction. For the special case of parsers, we use the struct ParserRuleContext, which embeds a RuleContext.
See also ParserRuleContext.
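The parent-link-as-stack idea above can be sketched with plain structs. The `node` and `stack` names below are illustrative stand-ins for the runtime's context types, not the antlr API.

```go
package main

import "fmt"

// node is a stand-in for a rule-invocation record: it keeps only the
// invoking state number and a link to its parent, as the RuleContext
// doc describes.
type node struct {
	invokingState int
	parent        *node
}

// stack walks the parent chain from the current invocation up to the
// root, collecting the invoking states - effectively the call stack.
func stack(n *node) []int {
	var states []int
	for ; n != nil; n = n.parent {
		states = append(states, n.invokingState)
	}
	return states
}

func main() {
	root := &node{invokingState: -1} // no parent, so the invoking state is not valid
	expr := &node{invokingState: 5, parent: root}
	term := &node{invokingState: 9, parent: expr}
	fmt.Println(stack(term)) // innermost invocation first, root last
}
```

Walking this chain upward is exactly what lookahead-set computation and error reporting need: the states along the way identify where each enclosing rule will resume.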
type RuleNode ¶
type RuleNode interface {
	ParseTree
	GetRuleContext() RuleContext
}
type RuleStartState ¶
type RuleStartState struct {
	BaseATNState
	// contains filtered or unexported fields
}
func NewRuleStartState ¶
func NewRuleStartState() *RuleStartState
type RuleStopState ¶
type RuleStopState struct {
BaseATNState
}
RuleStopState is the last node in the ATN for a rule, unless that rule is the start symbol. In that case, there is one transition to EOF. Later, we might encode references to all calls to this rule to compute FOLLOW sets for error handling.
func NewRuleStopState ¶
func NewRuleStopState() *RuleStopState
type RuleTransition ¶
type RuleTransition struct {
	BaseTransition
	// contains filtered or unexported fields
}
func NewRuleTransition ¶
func NewRuleTransition(ruleStart ATNState, ruleIndex, precedence int, followState ATNState) *RuleTransition
func (*RuleTransition) Matches ¶
func (t *RuleTransition) Matches(_, _, _ int) bool
type SemCComparator ¶
type SemCComparator[T Collectable[T]] struct{}
type SemanticContext ¶
type SemanticContext interface {
	Equals(other Collectable[SemanticContext]) bool
	Hash() int
	String() string
	// contains filtered or unexported methods
}
SemanticContext is a tree structure used to record the semantic context in which an ATN configuration is valid. It's either a single predicate, a conjunction p1 && p2, or a sum of products p1 || p2. I have scoped the AND, OR, and Predicate subclasses of SemanticContext within the scope of this outer "class".
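The predicate-tree shape described above can be sketched with a small interface and two operator types. The `pred`, `leaf`, `and`, and `or` names are illustrative only; the runtime's actual AND/OR types are unexported and evaluate predicates against parser state rather than stored booleans.

```go
package main

import "fmt"

// pred is a stand-in semantic context: a leaf predicate or an AND/OR
// node over operands.
type pred interface{ eval() bool }

type leaf struct{ v bool }
type and struct{ ops []pred }
type or struct{ ops []pred }

func (l leaf) eval() bool { return l.v }

// and is a conjunction p1 && p2 && ...: valid only if every operand holds.
func (a and) eval() bool {
	for _, p := range a.ops {
		if !p.eval() {
			return false
		}
	}
	return true
}

// or is a sum of products p1 || p2 || ...: valid if any operand holds.
func (o or) eval() bool {
	for _, p := range o.ops {
		if p.eval() {
			return true
		}
	}
	return false
}

func main() {
	// (true && false) || true
	c := or{ops: []pred{and{ops: []pred{leaf{true}, leaf{false}}}, leaf{true}}}
	fmt.Println(c.eval())
}
```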
func SemanticContextandContext ¶
func SemanticContextandContext(a, b SemanticContext) SemanticContext
func SemanticContextorContext ¶
func SemanticContextorContext(a, b SemanticContext) SemanticContext
type SetTransition ¶
type SetTransition struct {
BaseTransition
}
func NewSetTransition ¶
func NewSetTransition(target ATNState, set *IntervalSet) *SetTransition
func (*SetTransition) Matches ¶
func (t *SetTransition) Matches(symbol, _, _ int) bool
func (*SetTransition) String ¶
func (t *SetTransition) String() string
type SimState ¶
type SimState struct {
// contains filtered or unexported fields
}
func NewSimState ¶
func NewSimState() *SimState
type StarBlockStartState ¶
type StarBlockStartState struct {
BaseBlockStartState
}
StarBlockStartState is the block that begins a closure loop.
func NewStarBlockStartState ¶
func NewStarBlockStartState() *StarBlockStartState
type StarLoopEntryState ¶
type StarLoopEntryState struct {
	BaseDecisionState
	// contains filtered or unexported fields
}
func NewStarLoopEntryState ¶
func NewStarLoopEntryState() *StarLoopEntryState
type StarLoopbackState ¶
type StarLoopbackState struct {
BaseATNState
}
func NewStarLoopbackState ¶
func NewStarLoopbackState() *StarLoopbackState
type SyntaxTree ¶
type TerminalNode ¶
type TerminalNodeImpl ¶
type TerminalNodeImpl struct {
// contains filtered or unexported fields
}
func NewTerminalNodeImpl ¶
func NewTerminalNodeImpl(symbol Token) *TerminalNodeImpl
func (*TerminalNodeImpl) Accept ¶
func (t *TerminalNodeImpl) Accept(v ParseTreeVisitor) interface{}
func (*TerminalNodeImpl) GetChild ¶
func (t *TerminalNodeImpl) GetChild(_ int) Tree
func (*TerminalNodeImpl) GetChildCount ¶
func (t *TerminalNodeImpl) GetChildCount() int
func (*TerminalNodeImpl) GetChildren ¶
func (t *TerminalNodeImpl) GetChildren() []Tree
func (*TerminalNodeImpl) GetParent ¶
func (t *TerminalNodeImpl) GetParent() Tree
func (*TerminalNodeImpl) GetPayload ¶
func (t *TerminalNodeImpl) GetPayload() interface{}
func (*TerminalNodeImpl) GetSourceInterval ¶
func (t *TerminalNodeImpl) GetSourceInterval() Interval
func (*TerminalNodeImpl) GetSymbol ¶
func (t *TerminalNodeImpl) GetSymbol() Token
func (*TerminalNodeImpl) GetText ¶
func (t *TerminalNodeImpl) GetText() string
func (*TerminalNodeImpl) SetChildren ¶
func (t *TerminalNodeImpl) SetChildren(_ []Tree)
func (*TerminalNodeImpl) SetParent ¶
func (t *TerminalNodeImpl) SetParent(tree Tree)
func (*TerminalNodeImpl) String ¶
func (t *TerminalNodeImpl) String() string
func (*TerminalNodeImpl) ToStringTree ¶
func (t *TerminalNodeImpl) ToStringTree(_ []string, _ Recognizer) string
type Token ¶
type Token interface {
	GetSource() *TokenSourceCharStreamPair
	GetTokenType() int
	GetChannel() int
	GetStart() int
	GetStop() int
	GetLine() int
	GetColumn() int
	GetText() string
	SetText(s string)
	GetTokenIndex() int
	SetTokenIndex(v int)
	GetTokenSource() TokenSource
	GetInputStream() CharStream
	String() string
}
type TokenFactory ¶
type TokenFactory interface {
Create(source *TokenSourceCharStreamPair, ttype int, text string, channel, start, stop, line, column int) Token
}
TokenFactory creates CommonToken objects.
type TokenSource ¶
type TokenSource interface {
	NextToken() Token
	Skip()
	More()
	GetLine() int
	GetCharPositionInLine() int
	GetInputStream() CharStream
	GetSourceName() string
	GetTokenFactory() TokenFactory
	// contains filtered or unexported methods
}
type TokenSourceCharStreamPair ¶
type TokenSourceCharStreamPair struct {
// contains filtered or unexported fields
}
type TokenStream ¶
type TokenStream interface {
	IntStream
	LT(k int) Token
	Reset()
	Get(index int) Token
	GetTokenSource() TokenSource
	SetTokenSource(TokenSource)
	GetAllText() string
	GetTextFromInterval(Interval) string
	GetTextFromRuleContext(RuleContext) string
	GetTextFromTokens(Token, Token) string
}
type TokenStreamRewriter ¶
type TokenStreamRewriter struct {
// contains filtered or unexported fields
}
func NewTokenStreamRewriter ¶
func NewTokenStreamRewriter(tokens TokenStream) *TokenStreamRewriter
func (*TokenStreamRewriter) AddToProgram ¶
func (tsr *TokenStreamRewriter) AddToProgram(name string, op RewriteOperation)
func (*TokenStreamRewriter) Delete ¶
func (tsr *TokenStreamRewriter) Delete(programName string, from, to int)
func (*TokenStreamRewriter) DeleteDefault ¶
func (tsr *TokenStreamRewriter) DeleteDefault(from, to int)
func (*TokenStreamRewriter) DeleteDefaultPos ¶
func (tsr *TokenStreamRewriter) DeleteDefaultPos(index int)
func (*TokenStreamRewriter) DeleteProgram ¶
func (tsr *TokenStreamRewriter) DeleteProgram(programName string)
DeleteProgram resets the program so that no instructions exist.
func (*TokenStreamRewriter) DeleteProgramDefault ¶
func (tsr *TokenStreamRewriter) DeleteProgramDefault()
func (*TokenStreamRewriter) DeleteToken ¶
func (tsr *TokenStreamRewriter) DeleteToken(programName string, from, to Token)
func (*TokenStreamRewriter) DeleteTokenDefault ¶
func (tsr *TokenStreamRewriter) DeleteTokenDefault(from, to Token)
func (*TokenStreamRewriter) GetLastRewriteTokenIndex ¶
func (tsr *TokenStreamRewriter) GetLastRewriteTokenIndex(programName string) int
func (*TokenStreamRewriter) GetLastRewriteTokenIndexDefault ¶
func (tsr *TokenStreamRewriter) GetLastRewriteTokenIndexDefault() int
func (*TokenStreamRewriter) GetProgram ¶
func (tsr *TokenStreamRewriter) GetProgram(name string) []RewriteOperation
func (*TokenStreamRewriter) GetText ¶
func (tsr *TokenStreamRewriter) GetText(programName string, interval Interval) string
GetText returns the text from the original tokens altered per the instructions given to this rewriter.
func (*TokenStreamRewriter) GetTextDefault ¶
func (tsr *TokenStreamRewriter) GetTextDefault() string
GetTextDefault returns the text from the original tokens altered per the instructions given to this rewriter.
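The rewriter's design is worth noting: the Insert/Replace/Delete methods never touch the original token stream; they only record instructions in a named program, and the program is applied lazily when GetText is called. The following stdlib-only sketch shows that deferred-execution idea in miniature; the `op` and `render` names are illustrative, not the antlr API, and real programs also handle overlapping and out-of-order operations.

```go
package main

import (
	"fmt"
	"strings"
)

// op is a stand-in rewrite instruction recorded against a token index;
// the real rewriter buffers RewriteOperation values per named program.
type op struct {
	index   int
	replace bool // replace the token at index; otherwise insert before it
	text    string
}

// render leaves the original tokens untouched and applies the recorded
// program only when the text is requested, as GetText does.
func render(tokens []string, program []op) string {
	byIndex := map[int]op{}
	for _, o := range program {
		byIndex[o.index] = o
	}
	var b strings.Builder
	for i, tok := range tokens {
		if o, ok := byIndex[i]; ok {
			if o.replace {
				b.WriteString(o.text) // emit replacement, skip the token
				continue
			}
			b.WriteString(o.text) // insert-before, then the token itself
		}
		b.WriteString(tok)
	}
	return b.String()
}

func main() {
	tokens := []string{"int", " ", "x", ";"}
	program := []op{{index: 2, replace: true, text: "y"}}
	fmt.Println(render(tokens, program)) // original tokens remain unchanged
}
```

Because nothing is executed until rendering, multiple independent programs can be kept against one stream, which is why most rewriter methods take a programName and the *Default variants operate on the default program.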
func (*TokenStreamRewriter) GetTokenStream ¶
func (tsr *TokenStreamRewriter) GetTokenStream() TokenStream
func (*TokenStreamRewriter) InitializeProgram ¶
func (tsr *TokenStreamRewriter) InitializeProgram(name string) []RewriteOperation
func (*TokenStreamRewriter) InsertAfter ¶
func (tsr *TokenStreamRewriter) InsertAfter(programName string, index int, text string)
func (*TokenStreamRewriter) InsertAfterDefault ¶
func (tsr *TokenStreamRewriter) InsertAfterDefault(index int, text string)
func (*TokenStreamRewriter) InsertAfterToken ¶
func (tsr *TokenStreamRewriter) InsertAfterToken(programName string, token Token, text string)
func (*TokenStreamRewriter) InsertBefore ¶
func (tsr *TokenStreamRewriter) InsertBefore(programName string, index int, text string)
func (*TokenStreamRewriter) InsertBeforeDefault ¶
func (tsr *TokenStreamRewriter) InsertBeforeDefault(index int, text string)
func (*TokenStreamRewriter) InsertBeforeToken ¶
func (tsr *TokenStreamRewriter) InsertBeforeToken(programName string, token Token, text string)
func (*TokenStreamRewriter) Replace ¶
func (tsr *TokenStreamRewriter) Replace(programName string, from, to int, text string)
func (*TokenStreamRewriter) ReplaceDefault ¶
func (tsr *TokenStreamRewriter) ReplaceDefault(from, to int, text string)
func (*TokenStreamRewriter) ReplaceDefaultPos ¶
func (tsr *TokenStreamRewriter) ReplaceDefaultPos(index int, text string)
func (*TokenStreamRewriter) ReplaceToken ¶
func (tsr *TokenStreamRewriter) ReplaceToken(programName string, from, to Token, text string)
func (*TokenStreamRewriter) ReplaceTokenDefault ¶
func (tsr *TokenStreamRewriter) ReplaceTokenDefault(from, to Token, text string)
func (*TokenStreamRewriter) ReplaceTokenDefaultPos ¶
func (tsr *TokenStreamRewriter) ReplaceTokenDefaultPos(index Token, text string)
func (*TokenStreamRewriter) Rollback ¶
func (tsr *TokenStreamRewriter) Rollback(programName string, instructionIndex int)
Rollback rolls back the instruction stream for a program so that the indicated instruction (via instructionIndex) is no longer in the stream. UNTESTED!
func (*TokenStreamRewriter) RollbackDefault ¶
func (tsr *TokenStreamRewriter) RollbackDefault(instructionIndex int)
func (*TokenStreamRewriter) SetLastRewriteTokenIndex ¶
func (tsr *TokenStreamRewriter) SetLastRewriteTokenIndex(programName string, i int)
type TokensStartState ¶
type TokensStartState struct {
BaseDecisionState
}
TokensStartState is the Tokens rule start state linking to each lexer rule start state.
func NewTokensStartState ¶
func NewTokensStartState() *TokensStartState
type TraceListener ¶
type TraceListener struct {
// contains filtered or unexported fields
}
func NewTraceListener ¶
func NewTraceListener(parser *BaseParser) *TraceListener
func (*TraceListener) EnterEveryRule ¶
func (t *TraceListener) EnterEveryRule(ctx ParserRuleContext)
func (*TraceListener) ExitEveryRule ¶
func (t *TraceListener) ExitEveryRule(ctx ParserRuleContext)
func (*TraceListener) VisitErrorNode ¶
func (t *TraceListener) VisitErrorNode(_ ErrorNode)
func (*TraceListener) VisitTerminal ¶
func (t *TraceListener) VisitTerminal(node TerminalNode)
type Transition ¶
type Tree ¶
type Tree interface {
	GetParent() Tree
	SetParent(Tree)
	GetPayload() interface{}
	GetChild(i int) Tree
	GetChildCount() int
	GetChildren() []Tree
}
func TreesGetChildren ¶
TreesGetChildren returns an ordered list of all children of this node.
func TreesgetAncestors ¶
TreesgetAncestors returns a list of all ancestors of this node. The first node of the list is the root and the last node is the parent of this node.
type VisitEntry ¶ added in v4.13.0
type VisitEntry struct {
// contains filtered or unexported fields
}
type VisitList ¶ added in v4.13.0
type VisitList struct {
// contains filtered or unexported fields
}
type VisitRecord ¶ added in v4.13.0
type VisitRecord struct {
// contains filtered or unexported fields
}
func NewVisitRecord ¶ added in v4.13.0
func NewVisitRecord() *VisitRecord
NewVisitRecord returns a new VisitRecord instance from the pool if available. Note that this "map" uses a pointer as a key because we are emulating the behavior of Java's IdentityHashMap, which compares keys with the `==` operator: a key matches only if it is the same reference to an object, not merely .equals() to another object.
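Go maps with pointer-typed keys give exactly this identity semantics out of the box: two separately allocated values with equal contents are still distinct keys. A minimal sketch (the `payload` type is illustrative, not an antlr type):

```go
package main

import "fmt"

// payload stands in for any struct used as a map key by pointer.
type payload struct{ n int }

func main() {
	a := &payload{n: 1}
	b := &payload{n: 1} // equal contents, different reference
	m := map[*payload]string{a: "first"}
	m[b] = "second"
	// Both keys survive: the map compares references, not contents,
	// which is the IdentityHashMap behavior the doc describes.
	fmt.Println(len(m))
}
```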
func (*VisitRecord) Get ¶ added in v4.13.0
func (vr *VisitRecord) Get(k *PredictionContext) (*PredictionContext, bool)
func (*VisitRecord) Put ¶ added in v4.13.0
func (vr *VisitRecord) Put(k, v *PredictionContext) (*PredictionContext, bool)
func (*VisitRecord) Release ¶ added in v4.13.0
func (vr *VisitRecord) Release()
type WildcardTransition ¶
type WildcardTransition struct {
BaseTransition
}
func NewWildcardTransition ¶
func NewWildcardTransition(target ATNState) *WildcardTransition
func (*WildcardTransition) Matches ¶
func (t *WildcardTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool
func (*WildcardTransition) String ¶
func (t *WildcardTransition) String() string
Source Files ¶
- antlrdoc.go
- atn.go
- atn_config.go
- atn_config_set.go
- atn_deserialization_options.go
- atn_deserializer.go
- atn_simulator.go
- atn_state.go
- atn_type.go
- char_stream.go
- common_token_factory.go
- common_token_stream.go
- comparators.go
- configuration.go
- dfa.go
- dfa_serializer.go
- dfa_state.go
- diagnostic_error_listener.go
- error_listener.go
- error_strategy.go
- errors.go
- file_stream.go
- input_stream.go
- int_stream.go
- interval_set.go
- jcollect.go
- lexer.go
- lexer_action.go
- lexer_action_executor.go
- lexer_atn_simulator.go
- ll1_analyzer.go
- mutex.go
- nostatistics.go
- parser.go
- parser_atn_simulator.go
- parser_rule_context.go
- prediction_context.go
- prediction_context_cache.go
- prediction_mode.go
- recognizer.go
- rule_context.go
- semantic_context.go
- stats_data.go
- token.go
- token_source.go
- token_stream.go
- tokenstream_rewriter.go
- trace_listener.go
- transition.go
- tree.go
- trees.go
- utils.go