Documentation ¶
Index ¶
- func ReadFile(filename string) []rune
- func RememberSourceFile(contents []rune, filename string)
- type Lexer
- type Token
- func CreateToken(value string, kind TokenKind, span print2.TextSpan) Token
- func CreateTokenReal(buffer string, real interface{}, kind TokenKind, span print2.TextSpan) Token
- func CreateTokenSpaced(value string, kind TokenKind, span print2.TextSpan, spaced bool) Token
- func Lex(code []rune, filename string) []Token
- func LexInternal(code []rune, filename string, treatHashtagsAsComments bool) []Token
- type TokenKind
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ReadFile ¶
ReadFile reads the named file and returns its contents as a []rune. Only file-not-found (os.IsNotExist) and permission (os.IsPermission) errors are handled.
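The implementation is not shown on this page. A minimal sketch of a rune-based reader, assuming os.ReadFile and the error handling described above (the real function may report errors differently), could look like:

```go
package main

import (
	"fmt"
	"os"
)

// ReadFile reads a file and converts its bytes to []rune so the lexer
// can index by character rather than by byte (correct for multi-byte
// UTF-8 sequences). Sketch only; error handling is an assumption.
func ReadFile(filename string) []rune {
	data, err := os.ReadFile(filename)
	if err != nil {
		if os.IsNotExist(err) || os.IsPermission(err) {
			fmt.Fprintln(os.Stderr, "cannot read file:", err)
			os.Exit(1)
		}
		panic(err)
	}
	return []rune(string(data))
}

func main() {
	// The []rune conversion keeps each character as one element:
	// "héllo" is 6 bytes but 5 runes.
	fmt.Println(len([]rune("héllo")))
}
```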
func RememberSourceFile ¶
Types ¶
type Lexer ¶
type Lexer struct {
	Code                  []rune
	File                  string
	Line                  int
	Column                int
	Index                 int
	Tokens                []Token
	TreatHashtagAsComment bool
}
Lexer holds the state used while lexing a source file.
type Token ¶
type Token struct {
	Value      string
	RealValue  interface{}
	Kind       TokenKind
	Span       print2.TextSpan
	SpaceAfter bool
}
Token stores information about lexical structures in the text
func CreateToken ¶
CreateToken returns a Token created from the arguments provided
func CreateTokenReal ¶
CreateTokenReal stores a Token's converted ("real") value in addition to its string value. The majority of the code base uses CreateToken, but the Token struct has a RealValue field that should hold the true value of a Token. For example, a NumberToken created with CreateToken stores only its string value, not its numeric value; CreateTokenReal stores the converted type as well, so a NumberToken actually carries a number.
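The difference can be sketched with minimal stand-ins for the package's types (the TextSpan fields below are assumptions, as the real print2.TextSpan is not shown on this page):

```go
package main

import (
	"fmt"
	"strconv"
)

type TokenKind string

// TextSpan is a stand-in; the real print2.TextSpan may differ.
type TextSpan struct{ Line, Column int }

type Token struct {
	Value      string
	RealValue  interface{}
	Kind       TokenKind
	Span       TextSpan
	SpaceAfter bool
}

const NumberToken TokenKind = "Number"

// CreateToken stores only the string value; RealValue stays nil.
func CreateToken(value string, kind TokenKind, span TextSpan) Token {
	return Token{Value: value, Kind: kind, Span: span}
}

// CreateTokenReal additionally stores the converted value.
func CreateTokenReal(buffer string, real interface{}, kind TokenKind, span TextSpan) Token {
	return Token{Value: buffer, RealValue: real, Kind: kind, Span: span}
}

func main() {
	span := TextSpan{Line: 1, Column: 4}

	// With CreateToken, "42" is just text:
	t1 := CreateToken("42", NumberToken, span)
	fmt.Println(t1.RealValue == nil)

	// With CreateTokenReal, the token also carries the parsed number:
	n, _ := strconv.Atoi("42")
	t2 := CreateTokenReal("42", n, NumberToken, span)
	fmt.Println(t2.RealValue)
}
```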
func CreateTokenSpaced ¶
CreateTokenSpaced is a variant of CreateToken that also sets the SpaceAfter field, so callers that do not care about spacing can use CreateToken without passing the spaced bool every time.
func LexInternal ¶
type TokenKind ¶
type TokenKind string
TokenKind is effectively an enum containing all token types. TokenKind was changed from int to string to make debugging output more readable.
const (
	// Keywords
	VarKeyword       TokenKind = "var (Keyword)"
	SetKeyword       TokenKind = "set (Keyword)"
	ToKeyword        TokenKind = "to (Keyword)"
	IfKeyword        TokenKind = "if (Keyword)"
	ElseKeyword      TokenKind = "else (Keyword)"
	TrueKeyword      TokenKind = "true (Keyword)"
	FalseKeyword     TokenKind = "false (Keyword)"
	FunctionKeyword  TokenKind = "function (Keyword)"
	ClassKeyword     TokenKind = "class (Keyword)"
	FromKeyword      TokenKind = "from (Keyword)"
	ForKeyword       TokenKind = "for (Keyword)"
	ReturnKeyword    TokenKind = "return (Keyword)"
	WhileKeyword     TokenKind = "while (Keyword)"
	ContinueKeyword  TokenKind = "continue (keyword)"
	BreakKeyword     TokenKind = "break (Keyword)"
	MakeKeyword      TokenKind = "make (Keyword)"
	PackageKeyword   TokenKind = "package (keyword)"
	UseKeyword       TokenKind = "use (keyword)"
	AliasKeyword     TokenKind = "alias (keyword)"
	ExternalKeyword  TokenKind = "external (keyword)"
	CVariadicKeyword TokenKind = "c_variadic (keyword)"
	CAdaptedKeyword  TokenKind = "c_adapted (keyword)"
	RefKeyword       TokenKind = "ref (keyword)"
	DerefKeyword     TokenKind = "deref (keyword)"
	StructKeyword    TokenKind = "struct (keyword)"
	LambdaKeyword    TokenKind = "lambda (keyword)"
	ThisKeyword      TokenKind = "this (keyword)"
	MainKeyword      TokenKind = "main (keyword)"
	EnumKeyword      TokenKind = "enum (keyword)"

	// Tokens
	EOF               TokenKind = "EndOfFile"
	IdToken           TokenKind = "Identifier"
	StringToken       TokenKind = "String"
	NativeStringToken TokenKind = "NativeString"
	NumberToken       TokenKind = "Number"

	// Symbol Tokens
	PlusToken             TokenKind = "Plus '+'"
	ModulusToken          TokenKind = "Modulus '%'"
	MinusToken            TokenKind = "Minus '-'"
	StarToken             TokenKind = "Star '*'"
	SlashToken            TokenKind = "Slash '/'"
	EqualsToken           TokenKind = "Equals '='"
	NotToken              TokenKind = "Not '!'"
	NotEqualsToken        TokenKind = "Not Equals '!='"
	CommaToken            TokenKind = "Comma ','"
	GreaterThanToken      TokenKind = "GreaterThanToken '>'"
	LessThanToken         TokenKind = "LessThanToken '<'"
	GreaterEqualsToken    TokenKind = "GreaterEqualsToken '>='"
	LessEqualsToken       TokenKind = "LessEqualsToken '<='"
	AmpersandToken        TokenKind = "AmpersandToken '&'"
	AmpersandsToken       TokenKind = "AmpersandsToken '&&'"
	PipeToken             TokenKind = "PipeToken '|'"
	PipesToken            TokenKind = "PipesToken '||'"
	HatToken              TokenKind = "HatToken '^'"
	AssignToken           TokenKind = "AssignToken '<-'"
	AccessToken           TokenKind = "AccessToken '->'"
	ShiftLeftToken        TokenKind = "ShiftLeftToken '<<'"
	ShiftRightToken       TokenKind = "ShiftRightToken '>>'"
	OpenBraceToken        TokenKind = "OpenBrace '{'"
	CloseBraceToken       TokenKind = "Closebrace '}'"
	OpenBracketToken      TokenKind = "OpenBracket '['"
	CloseBracketToken     TokenKind = "CloseBracket ']'"
	OpenParenthesisToken  TokenKind = "OpenParenthesis '('"
	CloseParenthesisToken TokenKind = "CloseParenthesis ')'"
	QuestionMarkToken     TokenKind = "QuestionMark '?'"
	ColonToken            TokenKind = "Colon ':'"
	PackageToken          TokenKind = "Package '::'"
	HashtagToken          TokenKind = "Hashtag '#'"
	BadToken              TokenKind = "Token Error (BadToken)"
	Semicolon             TokenKind = "Semicolon ';'" // Used to separate statements (for now...)
)
Note that every constant needs an explicit TokenKind type: a bare string literal in a const block is an untyped string constant, so Go would otherwise default these values to plain string rather than TokenKind.
func CheckIfKeyword ¶
CheckIfKeyword is used by Lexer.getId to convert an identifier Token into the matching keyword Token.
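The signature is not shown on this page, so the following is only a sketch of the usual pattern: look the identifier's text up in a keyword table and fall back to IdToken. The map-based lookup and the function shape are assumptions; the real lexer may use a switch or take different arguments.

```go
package main

import "fmt"

type TokenKind string

const (
	IdToken    TokenKind = "Identifier"
	IfKeyword  TokenKind = "if (Keyword)"
	VarKeyword TokenKind = "var (Keyword)"
)

// keywords is a hypothetical lookup table from identifier text
// to the corresponding keyword kind.
var keywords = map[string]TokenKind{
	"if":  IfKeyword,
	"var": VarKeyword,
}

// CheckIfKeyword returns the keyword kind for value, or IdToken
// when value is an ordinary identifier.
func CheckIfKeyword(value string) TokenKind {
	if kind, ok := keywords[value]; ok {
		return kind
	}
	return IdToken
}

func main() {
	fmt.Println(CheckIfKeyword("if"))  // if (Keyword)
	fmt.Println(CheckIfKeyword("foo")) // Identifier
}
```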