segment

package module

v0.9.1

Published: Dec 19, 2022 License: Apache-2.0 Imports: 4 Imported by: 161

README

segment

A Go library for performing Unicode Text Segmentation as described in Unicode Standard Annex #29

Features

  • Currently only segmentation at Word Boundaries is supported.

License

Apache License Version 2.0

Usage

The functionality is exposed in two ways:

  1. You can use a bufio.Scanner with the SplitWords implementation of SplitFunc. The SplitWords function will identify the appropriate word boundaries in the input text and the Scanner will return tokens at the appropriate place.

    scanner := bufio.NewScanner(...)
    scanner.Split(segment.SplitWords)
    for scanner.Scan() {
    	tokenBytes := scanner.Bytes()
    }
    if err := scanner.Err(); err != nil {
    	t.Fatal(err)
    }
    
  2. Sometimes you also want information about the type of each token. For this, the package introduces a new type named Segmenter. It works just like Scanner, but additionally returns a token type.

    segmenter := segment.NewWordSegmenter(...)
    for segmenter.Segment() {
    	tokenBytes := segmenter.Bytes()
    	tokenType := segmenter.Type()
    }
    if err := segmenter.Err(); err != nil {
    	t.Fatal(err)
    }
    

Choosing Implementation

By default segment does NOT use the fastest runtime implementation. The reason is that the fastest implementation adds approximately 5 seconds to compilation time and may require more than 1 GB of RAM on the machine performing the compilation.

However, you can choose to build with the fastest runtime implementation by passing the following build tag to the go tool (for example, to go build or go test):

	-tags 'prod'

Generating Code

Several components in this package are generated.

  1. Several Ragel rules files are generated from Unicode properties files.
  2. Ragel machine is generated from the Ragel rules.
  3. Test tables are generated from the Unicode test files.

All of these can be generated by running:

	go generate

Fuzzing

There is support for fuzzing the segment library with go-fuzz.

  1. Install go-fuzz if you haven't already:

    go get github.com/dvyukov/go-fuzz/go-fuzz
    go get github.com/dvyukov/go-fuzz/go-fuzz-build
    
  2. Build the package with go-fuzz:

    go-fuzz-build github.com/blevesearch/segment
    
  3. Convert the Unicode provided test cases into the initial corpus for go-fuzz:

    go test -v -run=TestGenerateWordSegmentFuzz -tags gofuzz_generate
    
  4. Run go-fuzz:

    go-fuzz -bin=segment-fuzz.zip -workdir=workdir
    


Documentation

Overview

Package segment is a library for performing Unicode Text Segmentation as described in Unicode Standard Annex #29 http://www.unicode.org/reports/tr29/

Currently only segmentation at Word Boundaries is supported.

The functionality is exposed in two ways:

1. You can use a bufio.Scanner with the SplitWords implementation of SplitFunc. The SplitWords function will identify the appropriate word boundaries in the input text and the Scanner will return tokens at the appropriate place.

scanner := bufio.NewScanner(...)
scanner.Split(segment.SplitWords)
for scanner.Scan() {
	tokenBytes := scanner.Bytes()
}
if err := scanner.Err(); err != nil {
	t.Fatal(err)
}

2. Sometimes you also want information about the type of each token. For this, the package introduces a new type named Segmenter. It works just like Scanner, but additionally returns a token type.

segmenter := segment.NewWordSegmenter(...)
for segmenter.Segment() {
	tokenBytes := segmenter.Bytes()
	tokenType := segmenter.Type()
}
if err := segmenter.Err(); err != nil {
	t.Fatal(err)
}

Index

Constants

const (
	None = iota
	Number
	Letter
	Kana
	Ideo
)

Word Types
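The word types are plain iota constants, so mapping a Type() result to a readable label is a simple switch. The sketch below mirrors the package's constants locally so it is self-contained; typeName is a hypothetical helper, not part of the segment API.

```go
package main

import "fmt"

// Mirrors the segment package's word-type constants so this
// sketch compiles without the package.
const (
	None = iota
	Number
	Letter
	Kana
	Ideo
)

// typeName is a hypothetical helper mapping a token type to a label.
func typeName(t int) string {
	switch t {
	case Number:
		return "Number"
	case Letter:
		return "Letter"
	case Kana:
		return "Kana"
	case Ideo:
		return "Ideo"
	default:
		return "None"
	}
}

func main() {
	fmt.Println(typeName(Letter)) // prints "Letter"
}
```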

const (
	// Maximum size used to buffer a token. The actual maximum token size
	// may be smaller as the buffer may need to include, for instance, a newline.
	MaxScanTokenSize = 64 * 1024
)

Variables

var (
	ErrTooLong         = errors.New("bufio.Segmenter: token too long")
	ErrNegativeAdvance = errors.New("bufio.Segmenter: SplitFunc returns negative advance count")
	ErrAdvanceTooFar   = errors.New("bufio.Segmenter: SplitFunc returns advance count beyond input")
)

Errors returned by Segmenter.

var ParseError = fmt.Errorf("unicode word segmentation parse error")

var RagelFlags = "-T1"

Functions

func SegmentWords

func SegmentWords(data []byte, atEOF bool) (int, []byte, int, error)

SegmentWords is a SegmentFunc that splits input at Unicode word boundaries and reports the type of each token. It is the default segmenting function used by NewSegmenter.

func SegmentWordsDirect

func SegmentWordsDirect(data []byte, val [][]byte, types []int) ([][]byte, []int, int, error)

func SplitWords

func SplitWords(data []byte, atEOF bool) (int, []byte, error)

SplitWords is an implementation of bufio.SplitFunc that splits input at Unicode word boundaries, for use with bufio.Scanner.

Types

type SegmentFunc

type SegmentFunc func(data []byte, atEOF bool) (advance int, token []byte, segmentType int, err error)

SegmentFunc is the signature of the segmenting function used to tokenize the input. The arguments are an initial substring of the remaining unprocessed data and a flag, atEOF, that reports whether the Reader has no more data to give. The return values are the number of bytes to advance the input, the next token to return to the user, the segment type of that token, and an error, if any. If the data does not yet hold a complete token, for instance if it has no newline while scanning lines, SegmentFunc can return (0, nil, 0, nil) to signal the Segmenter to read more data into the slice and try again with a longer slice starting at the same point in the input.

If the returned error is non-nil, segmenting stops and the error is returned to the client.

The function is never called with an empty data slice unless atEOF is true. If atEOF is true, however, data may be non-empty and, as always, holds unprocessed text.
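The contract above can be illustrated with a minimal custom segmenting function. The sketch below mirrors the SegmentFunc signature locally so it compiles without the package; spaceSegmenter is a hypothetical example that splits on single spaces with a fixed segment type of 0, returning (0, nil, 0, nil) to request more data when no delimiter has been seen and atEOF is false.

```go
package main

import (
	"bytes"
	"fmt"
)

// SegmentFunc mirrors segment.SegmentFunc's signature for this sketch.
type SegmentFunc func(data []byte, atEOF bool) (advance int, token []byte, segmentType int, err error)

// spaceSegmenter is a hypothetical SegmentFunc that returns space-delimited
// tokens with a fixed segment type of 0.
func spaceSegmenter(data []byte, atEOF bool) (int, []byte, int, error) {
	if i := bytes.IndexByte(data, ' '); i >= 0 {
		// Advance past the token and the delimiter itself.
		return i + 1, data[:i], 0, nil
	}
	if atEOF && len(data) > 0 {
		// No delimiter left; the remainder is the final token.
		return len(data), data, 0, nil
	}
	// No complete token yet: ask the Segmenter for more data.
	return 0, nil, 0, nil
}

func main() {
	var f SegmentFunc = spaceSegmenter
	adv, tok, typ, err := f([]byte("hello world"), true)
	fmt.Println(adv, string(tok), typ, err) // prints "6 hello 0 <nil>"
}
```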

type Segmenter

type Segmenter struct {
	// contains filtered or unexported fields
}

Segmenter provides a convenient interface for reading data such as a file of newline-delimited lines of text. Successive calls to the Segment method will step through the 'tokens' of a file, skipping the bytes between the tokens. The specification of a token is defined by a segment function of type SegmentFunc; the default segment function breaks the input into tokens at Unicode word boundaries. The client may instead provide a custom segment function via SetSegmenter.

Segmenting stops unrecoverably at EOF, the first I/O error, or a token too large to fit in the buffer. When a scan stops, the reader may have advanced arbitrarily far past the last token. Programs that need more control over error handling or large tokens, or must run sequential scans on a reader, should use bufio.Reader instead.

func NewSegmenter

func NewSegmenter(r io.Reader) *Segmenter

NewSegmenter returns a new Segmenter to read from r. It defaults to segmenting using SegmentWords.

func NewSegmenterDirect

func NewSegmenterDirect(buf []byte) *Segmenter

NewSegmenterDirect returns a new Segmenter to work directly with buf. It defaults to segmenting using SegmentWords.

func NewWordSegmenter

func NewWordSegmenter(r io.Reader) *Segmenter

NewWordSegmenter returns a new Segmenter to read from r.

func NewWordSegmenterDirect

func NewWordSegmenterDirect(buf []byte) *Segmenter

NewWordSegmenterDirect returns a new Segmenter to work directly with buf.

func (*Segmenter) Bytes

func (s *Segmenter) Bytes() []byte

Bytes returns the most recent token generated by a call to Segment. The underlying array may point to data that will be overwritten by a subsequent call to Segment. It does no allocation.

func (*Segmenter) Err

func (s *Segmenter) Err() error

Err returns the first non-EOF error that was encountered by the Segmenter.

func (*Segmenter) Segment

func (s *Segmenter) Segment() bool

Segment advances the Segmenter to the next token, which will then be available through the Bytes or Text method. It returns false when the scan stops, either by reaching the end of the input or an error. After Segment returns false, the Err method will return any error that occurred during scanning, except that if it was io.EOF, Err will return nil.

func (*Segmenter) SetSegmenter

func (s *Segmenter) SetSegmenter(segmenter SegmentFunc)

SetSegmenter sets the segment function for the Segmenter. If called, it must be called before Segment.

func (*Segmenter) Text

func (s *Segmenter) Text() string

Text returns the most recent token generated by a call to Segment as a newly allocated string holding its bytes.

func (*Segmenter) Type

func (s *Segmenter) Type() int

Type returns the type of the most recent token generated by a call to Segment, as one of the word-type constants (None, Number, Letter, Kana, Ideo).
