lexer

package v0.10.3
Published: May 19, 2024 License: MIT Imports: 4 Imported by: 0

Documentation

Overview

The goal for our lexer is not to create a compiler or interpreter. Its primary purpose is to scan the raw source code as a series of characters and group them into tokens. The implementation will ignore many tokens that other scanners/lexers would not.

The types of tokens that we are concerned about: single-line comment tokens (such as // for languages that have adopted C-like comment syntax, or # for Python), multi-line comment tokens (such as /* for C-adopted languages, and ''' or """ for Python), and end-of-file tokens.

We should check for string tokens, but we do not need to create the token or store the lexeme. The reason for checking strings is to prevent certain edge cases. For example, a string may contain characters that would otherwise be read as comment syntax: in C-like languages, "/*" or "//". We don't want the lexer to create comment tokens for strings like these.
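The string edge case above can be sketched with a minimal, self-contained scanner. This is only an illustration of the idea, not the package's implementation: it skips over string literals (honoring backslash escapes) so that a "//" inside a string is never mistaken for a comment.

```go
package main

import "fmt"

// firstCommentIndex returns the byte offset where the first "//"
// comment begins, or -1 if there is none. String literals are skipped
// wholesale so comment syntax inside them is ignored.
func firstCommentIndex(src []byte) int {
	for i := 0; i+1 < len(src); i++ {
		switch src[i] {
		case '"', '\'':
			// Skip the whole string literal, honoring escapes.
			delim := src[i]
			for i++; i < len(src) && src[i] != delim; i++ {
				if src[i] == '\\' {
					i++ // skip the escaped character
				}
			}
		case '/':
			if src[i+1] == '/' {
				return i
			}
		}
	}
	return -1
}

func main() {
	src := []byte(`s := "//not a comment" // real comment`)
	fmt.Println(firstCommentIndex(src)) // prints 23
}
```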

Each supported language will need to satisfy the LexingManager interface and provide tokenizing methods for comments and strings. This allows each implementation to use the comment notation specific to its language.
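The per-language idea can be illustrated with a pared-down stand-in for the interface (the real LexingManager is shown in the index below; the `commenter` interface and both implementations here are hypothetical, for illustration only):

```go
package main

import "fmt"

// commenter is a toy stand-in for a per-language lexing interface:
// each language supplies its own comment notation.
type commenter interface {
	IsLineComment(src []byte, i int) bool
}

// pyCommenter recognizes '#' line comments, as a Python lexer would.
type pyCommenter struct{}

func (pyCommenter) IsLineComment(src []byte, i int) bool {
	return src[i] == '#'
}

// cCommenter recognizes "//" line comments for C-family languages.
type cCommenter struct{}

func (cCommenter) IsLineComment(src []byte, i int) bool {
	return src[i] == '/' && i+1 < len(src) && src[i+1] == '/'
}

func main() {
	var m commenter = pyCommenter{}
	fmt.Println(m.IsLineComment([]byte("# note"), 0)) // prints true
}
```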

Index

Constants

View Source
const (
	SINGLE_LINE_COMMENT = iota
	MULTI_LINE_COMMENT
	STRING
	EOF
)

Variables

View Source
var (
	ASTERISK       byte = '*'
	BACK_TICK      byte = '`'
	BACKWARD_SLASH byte = '\\'
	FORWARD_SLASH  byte = '/'
	HASH           byte = '#'
	QUOTE          byte = '\''
	DOUBLE_QUOTE   byte = '"'
	NEWLINE        byte = '\n'
	TAB            byte = '\t'
	WHITESPACE     byte = ' '
)

Functions

func IsAdoptedFromC

func IsAdoptedFromC(ext string) bool
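A plausible shape for this helper, sketched for illustration: report whether a file extension belongs to a language using C-style comment syntax. The extension set below is an assumption, not the package's actual list.

```go
package main

import "fmt"

// isAdoptedFromC sketches what a check like IsAdoptedFromC might look
// like. The extensions listed are illustrative assumptions only.
func isAdoptedFromC(ext string) bool {
	switch ext {
	case ".c", ".h", ".cpp", ".go", ".js", ".ts", ".java":
		return true
	}
	return false
}

func main() {
	fmt.Println(isAdoptedFromC(".go"), isAdoptedFromC(".py")) // prints true false
}
```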

Types

type CLexer

type CLexer struct{}

func (*CLexer) AnalyzeToken

func (cl *CLexer) AnalyzeToken(lex *Lexer) error

func (*CLexer) Comment

func (cl *CLexer) Comment(lex *Lexer) error

func (*CLexer) MultiLineComment

func (cl *CLexer) MultiLineComment(lex *Lexer) error

func (*CLexer) ParseCommentTokens

func (cl *CLexer) ParseCommentTokens(lex *Lexer, annotation []byte) ([]Comment, error)

func (*CLexer) SingleLineComment

func (cl *CLexer) SingleLineComment(lex *Lexer) error

func (*CLexer) String

func (cl *CLexer) String(lex *Lexer, delim byte) error

type Comment

type Comment struct {
	Title          []byte
	Description    []byte
	TokenIndex     int
	Source         []byte
	SourceFileName string
}

func (*Comment) Prepare

func (c *Comment) Prepare(fileName string, index int)

func (*Comment) Push added in v0.10.3

func (c *Comment) Push(comments *[]Comment, fileName string, tokenIndex int)

func (*Comment) Validate

func (c *Comment) Validate() bool

type Lexer

type Lexer struct {
	Source   []byte
	FileName string
	Tokens   []Token
	Start    int
	Current  int
	Line     int
	Manager  LexingManager
}

func NewLexer

func NewLexer(src []byte, fileName string) (*Lexer, error)

func (*Lexer) AnalyzeTokens

func (l *Lexer) AnalyzeTokens() ([]Token, error)

type LexingManager

type LexingManager interface {
	AnalyzeToken(lexer *Lexer) error
	String(lexer *Lexer, delim byte) error
	Comment(lexer *Lexer) error
	ParseCommentTokens(lexer *Lexer, annotation []byte) ([]Comment, error)
}

func NewLexingManager

func NewLexingManager(ext string) (LexingManager, error)
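A constructor like this typically dispatches on the file extension. The sketch below mirrors that likely shape with hypothetical stand-in types (`manager`, `cManager`, `pyManager`, `newManager` are all assumptions for illustration, not the package's API):

```go
package main

import (
	"errors"
	"fmt"
)

// manager is a toy stand-in for the LexingManager interface.
type manager interface {
	Name() string
}

type cManager struct{}

func (cManager) Name() string { return "c-style" }

type pyManager struct{}

func (pyManager) Name() string { return "python" }

// newManager picks an implementation from the file extension,
// or fails for unsupported ones.
func newManager(ext string) (manager, error) {
	switch ext {
	case ".c", ".go", ".js":
		return cManager{}, nil
	case ".py":
		return pyManager{}, nil
	}
	return nil, errors.New("unsupported extension: " + ext)
}

func main() {
	m, err := newManager(".go")
	if err != nil {
		panic(err)
	}
	fmt.Println(m.Name()) // prints c-style
}
```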

type Token

type Token struct {
	TokenType      TokenType
	Lexeme         []byte
	Line           int
	StartByteIndex int
	EndByteIndex   int
}

func (*Token) ParseMultiLineCommentToken added in v0.10.3

func (t *Token) ParseMultiLineCommentToken(annotation []byte, trim func(r rune) bool) Comment

func (*Token) ParseSingleLineCommentToken added in v0.10.3

func (t *Token) ParseSingleLineCommentToken(annotation []byte, trim func(r rune) bool) Comment

type TokenType

type TokenType = int
