shlex

package
v0.13.0
Warning: This package is not in the latest version of its module.
Published: Nov 22, 2024 License: Apache-2.0, MIT Imports: 5 Imported by: 0

Documentation

Overview

Package shlex provides simple lexical analysis like a Unix shell.

Index

Constants

This section is empty.

Variables

var (
	ErrNoClosing = errors.New("no closing quotation")
	ErrNoEscaped = errors.New("no escaped character")
)

Functions

func Split

func Split(s string, posix bool) ([]string, error)

Split splits a string into tokens according to POSIX or non-POSIX shell rules.
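The POSIX-style behavior can be illustrated with a minimal self-contained sketch: whitespace separates tokens, quotes group words, and an unterminated quote yields an error (mirroring ErrNoClosing). This is an illustration of the rules only, not the package's implementation, and it omits escape handling:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"unicode"
)

// splitPosix sketches POSIX-style shell splitting.
func splitPosix(s string) ([]string, error) {
	var tokens []string
	var b strings.Builder
	inToken := false
	var quote rune // 0 when not inside quotes
	for _, r := range s {
		switch {
		case quote != 0:
			if r == quote {
				quote = 0 // closing quote; quoted text stays in the token
			} else {
				b.WriteRune(r)
			}
		case r == '\'' || r == '"':
			quote = r
			inToken = true // an empty quoted string is still a token
		case unicode.IsSpace(r):
			if inToken {
				tokens = append(tokens, b.String())
				b.Reset()
				inToken = false
			}
		default:
			b.WriteRune(r)
			inToken = true
		}
	}
	if quote != 0 {
		return nil, errors.New("no closing quotation")
	}
	if inToken {
		tokens = append(tokens, b.String())
	}
	return tokens, nil
}

func main() {
	tokens, err := splitPosix(`cp "my file.txt" /tmp`)
	fmt.Println(tokens, err) // [cp my file.txt /tmp] <nil>
}
```

Note that the quoted argument survives as a single token with its spaces intact, which is the property that distinguishes shell-style splitting from strings.Fields.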

Types

type DefaultTokenizer

type DefaultTokenizer struct{}

DefaultTokenizer implements a simple tokenizer like a Unix shell.

func (*DefaultTokenizer) IsEscape

func (t *DefaultTokenizer) IsEscape(r rune) bool

func (*DefaultTokenizer) IsEscapedQuote

func (t *DefaultTokenizer) IsEscapedQuote(r rune) bool

func (*DefaultTokenizer) IsQuote

func (t *DefaultTokenizer) IsQuote(r rune) bool

func (*DefaultTokenizer) IsWhitespace

func (t *DefaultTokenizer) IsWhitespace(r rune) bool

func (*DefaultTokenizer) IsWord

func (t *DefaultTokenizer) IsWord(r rune) bool

type Lexer

type Lexer struct {
	// contains filtered or unexported fields
}

Lexer represents a lexical analyzer.

func NewLexer

func NewLexer(r io.Reader, posix, whitespacesplit bool) *Lexer

NewLexer creates a new Lexer reading from an io.Reader. The Lexer uses a DefaultTokenizer configured by the posix and whitespacesplit flags.

func NewLexerString

func NewLexerString(s string, posix, whitespacesplit bool) *Lexer

NewLexerString creates a new Lexer reading from a string. The Lexer uses a DefaultTokenizer configured by the posix and whitespacesplit flags.

func (*Lexer) SetTokenizer

func (l *Lexer) SetTokenizer(t Tokenizer)

SetTokenizer sets a Tokenizer.

func (*Lexer) Split

func (l *Lexer) Split() ([]string, error)

Split splits the Lexer's input into tokens.

type Tokenizer

type Tokenizer interface {
	IsWord(rune) bool
	IsWhitespace(rune) bool
	IsQuote(rune) bool
	IsEscape(rune) bool
	IsEscapedQuote(rune) bool
}

Tokenizer is the interface that classifies each rune as a word, whitespace, quote, escape, or escaped-quote character.
