Documentation ¶
Overview ¶
Package lex implements lexical analysis for the assembler.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func HistLine ¶
func HistLine() int32
HistLine reports the cumulative source line number of the token, for use in the Prog structure for the linker. (It's always handling the instruction from the current lex line.) It returns int32 because that's what type ../asm prefers.
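The point of a cumulative count is that line numbers keep increasing across #include boundaries instead of resetting per file, so every instruction gets a unique line value. A minimal sketch of that idea (the `histCounter` type here is hypothetical, not part of this package):

```go
package main

import "fmt"

// histCounter is a hypothetical cumulative line counter: the count
// keeps increasing across #include boundaries, so each instruction
// gets a unique int32 line number for the linker.
type histCounter struct {
	line int32
}

// NextLine advances to the next source line and reports its
// cumulative number.
func (h *histCounter) NextLine() int32 {
	h.line++
	return h.line
}

func main() {
	var h histCounter
	// Lines from the top-level file.
	fmt.Println(h.NextLine(), h.NextLine()) // 1 2
	// A line from an included file continues the count
	// rather than resetting to 1.
	fmt.Println(h.NextLine()) // 3
}
```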
func IsRegisterShift ¶
func IsRegisterShift(r ScanToken) bool
IsRegisterShift reports whether the token is one of the ARM register shift operators.
Types ¶
type Input ¶
type Input struct {
	Stack
	// contains filtered or unexported fields
}
Input is the main input: a stack of readers and some macro definitions. It also handles #include processing (by pushing onto the input stack) and parses and instantiates macro definitions.
func (*Input) Push ¶
func (in *Input) Push(r TokenReader)
type Macro ¶
type Macro struct {
// contains filtered or unexported fields
}
A Macro represents the definition of a #defined macro.
type ScanToken ¶
type ScanToken rune
A ScanToken represents an input item. It is a simple wrapping of rune, as returned by text/scanner.Scanner, plus a couple of extra values.
const (
	// Asm defines some two-character lexemes. We make up
	// a rune/ScanToken value for them - ugly but simple.
	LSH ScanToken = -1000 - iota // << Left shift.
	RSH                          // >> Logical right shift.
	ARR                          // -> Used on ARM for shift type 3, arithmetic right shift.
	ROT                          // @> Used on ARM for shift type 4, rotate right.
)
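Because ScanToken is a rune, ordinary characters serve as their own token values, and the made-up negative values can never collide with a real rune (or with text/scanner's own negative token codes, which stay above -1000). A self-contained sketch of the trick, mirroring the constants above:

```go
package main

import "fmt"

// ScanToken mirrors the package's trick: a rune, so ordinary
// characters are their own token values, while two-character lexemes
// get made-up negative values no real rune can take.
type ScanToken rune

const (
	LSH ScanToken = -1000 - iota // << Left shift.
	RSH                          // >> Logical right shift.
	ARR                          // -> Arithmetic right shift (ARM).
	ROT                          // @> Rotate right (ARM).
)

// String renders the made-up lexemes readably; plain runes print
// as themselves.
func (t ScanToken) String() string {
	switch t {
	case LSH:
		return "<<"
	case RSH:
		return ">>"
	case ARR:
		return "->"
	case ROT:
		return "@>"
	}
	return string(rune(t))
}

func main() {
	fmt.Println(LSH, RSH)       // << >>
	fmt.Println(ScanToken('+')) // +
}
```

Note how `-1000 - iota` counts downward: LSH is -1000, RSH is -1001, and so on.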
type Slice ¶
type Slice struct {
// contains filtered or unexported fields
}
A Slice reads from a slice of Tokens.
type Stack ¶
type Stack struct {
// contains filtered or unexported fields
}
A Stack is a stack of TokenReaders. As the top TokenReader hits EOF, it resumes reading the next one down.
func (*Stack) Push ¶
func (s *Stack) Push(tr TokenReader)
Push adds tr to the top (end) of the input stack. (Popping happens automatically.)
type Token ¶
type Token struct {
	ScanToken
	// contains filtered or unexported fields
}
A Token is a scan token plus its string value. A macro is stored as a sequence of Tokens with spaces stripped.
type TokenReader ¶
type TokenReader interface {
	// Next returns the next token.
	Next() ScanToken
	// The following methods all refer to the most recent token
	// returned by Next.

	// Text returns the original string representation of the token.
	Text() string
	// File reports the source file name of the token.
	File() string
	// Line reports the source line number of the token.
	Line() int
	// Col reports the source column number of the token.
	Col() int
	// SetPos sets the file and line number.
	SetPos(line int, file string)
	// Close does any teardown required.
	Close()
}
A TokenReader is like a reader, but returns lex tokens of type ScanToken. It can also tell you the text of the most recently returned token and where it was found. The underlying scanner elides all spaces except newline, so the input looks like a stream of Tokens; original spacing is lost, but we don't need it.