Documentation ¶
Index ¶
- type ColumnScanner
- type Option
- func Comma(comma rune) Option
- func Comment(comment rune) Option
- func ContinueOnError(shouldContinue bool) Option
- func FieldsPerRecord(fields int) Option
- func LazyQuotes(lazy bool) Option
- func ReuseRecord(reuse bool) Option
- func SkipHeaderRecord() Option
- func SkipRecords(count int) Option
- func TrimLeadingSpace(trim bool) Option
- type Scanner
- type StructScanner
- type Writer
- func (w *Writer) WriteFields(fields ...interface{}) error
- func (w *Writer) WriteFormattedFields(format string, fields ...interface{}) error
- func (w *Writer) WriteStream(records chan []string) error
- func (w *Writer) WriteStringers(fields ...fmt.Stringer) error
- func (w *Writer) WriteStrings(fields ...string) error
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type ColumnScanner ¶
type ColumnScanner struct {
	*Scanner
	// contains filtered or unexported fields
}
func NewColumnScanner ¶
func NewColumnScanner(reader io.Reader, options ...Option) (*ColumnScanner, error)
func (*ColumnScanner) Column ¶
func (cs *ColumnScanner) Column(column string) string
func (*ColumnScanner) Header ¶
func (cs *ColumnScanner) Header() []string
type Option ¶
type Option func(*Scanner)
Option is a func type received by NewScanner. Each one allows configuration of the scanner and/or its internal *csv.Reader.
func Comma ¶
func Comma(comma rune) Option
func Comment ¶
func Comment(comment rune) Option
func ContinueOnError ¶
func ContinueOnError(shouldContinue bool) Option
ContinueOnError controls scanner behavior in error scenarios. If true is passed, scanning continues until io.EOF is reached. If false is passed (the default), any error encountered during scanning will cause the next call to Scan to return false, and the Scanner may be considered dead. See Scanner.Error() for the exact error (before the next call to Scanner.Scan()). See https://golang.org/pkg/encoding/csv/#pkg-variables and https://golang.org/pkg/encoding/csv/#ParseError for more information regarding possible error values.
func FieldsPerRecord ¶
func FieldsPerRecord(fields int) Option
func LazyQuotes ¶
func LazyQuotes(lazy bool) Option
func ReuseRecord ¶
func ReuseRecord(reuse bool) Option
func SkipHeaderRecord ¶
func SkipHeaderRecord() Option
func SkipRecords ¶
func SkipRecords(count int) Option
func TrimLeadingSpace ¶
func TrimLeadingSpace(trim bool) Option
type Scanner ¶
type Scanner struct {
// contains filtered or unexported fields
}
Scanner wraps a csv.Reader via an API similar to that of bufio.Scanner.
Example ¶
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/i3oges/scanners/csv"
)

func main() {
	in := strings.Join([]string{
		`first_name,last_name,username`,
		`"Rob","Pike",rob`,
		`Ken,Thompson,ken`,
		`"Robert","Griesemer","gri"`,
	}, "\n")

	scanner := csv.NewScanner(strings.NewReader(in))
	for scanner.Scan() {
		fmt.Println(scanner.Record())
	}
	if err := scanner.Error(); err != nil {
		log.Panic(err)
	}
}

Output:

[first_name last_name username]
[Rob Pike rob]
[Ken Thompson ken]
[Robert Griesemer gri]
Example (Options) ¶
This example shows how csv.Scanner can be configured to handle other types of CSV files.
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/i3oges/scanners/csv"
)

func main() {
	in := strings.Join([]string{
		`first_name;last_name;username`,
		`"Rob";"Pike";rob`,
		`# lines beginning with a # character are ignored`,
		`Ken;Thompson;ken`,
		`"Robert";"Griesemer";"gri"`,
	}, "\n")

	scanner := csv.NewScanner(strings.NewReader(in),
		csv.Comma(';'), csv.Comment('#'))
	for scanner.Scan() {
		fmt.Println(scanner.Record())
	}
	if err := scanner.Error(); err != nil {
		log.Panic(err)
	}
}

Output:

[first_name last_name username]
[Rob Pike rob]
[Ken Thompson ken]
[Robert Griesemer gri]
func NewScanner ¶
NewScanner returns a scanner configured with the provided options.
func (*Scanner) Error ¶
Error returns the last non-nil error produced by Scan (if there was one). It will never return io.EOF. This method may be called at any point during or after scanning, but the underlying error will be reset by each call to Scan.
func (*Scanner) Record ¶
Record returns the most recent record generated by a call to Scan as a []string. See *csv.Reader.ReuseRecord for details on the strategy for reusing the underlying array: https://golang.org/pkg/encoding/csv/#Reader
func (*Scanner) Scan ¶
Scan advances the Scanner to the next record, which will then be available through the Record method. It returns false when the scan stops, either by reaching the end of the input or an error. After Scan returns false, the Error method will return any error that occurred during scanning, except that if it was io.EOF, Error will return nil.
type StructScanner ¶
type StructScanner struct {
*ColumnScanner
}
StructScanner embeds a *ColumnScanner and scans CSV records into caller-supplied structs (see Populate).
func NewStructScanner ¶
func NewStructScanner(reader io.Reader, options ...Option) (*StructScanner, error)
NewStructScanner returns a StructScanner that reads from the provided io.Reader, configured with the provided options.
func (*StructScanner) Populate ¶
func (scanner *StructScanner) Populate(v interface{}) error
Populate scans the values of the current record into the struct pointed to by v.
type Writer ¶
Writer wraps a csv.Writer.
func NewWriter ¶
NewWriter accepts a target io.Writer and an optional comma rune and builds a Writer with an internal csv.Writer.
func (*Writer) WriteFields ¶
WriteFields accepts zero or more interface{} values and converts them to strings using fmt.Sprint and writes them as a single record to the underlying csv.Writer. Make sure you are comfortable with whatever the default format is for each field value you provide.
func (*Writer) WriteFormattedFields ¶
WriteFormattedFields accepts a format string for zero or more fields which will be passed to fmt.Sprintf before being written as a single record to the underlying csv.Writer.
func (*Writer) WriteStream ¶
WriteStream accepts a chan []string and ranges over it, passing each []string as a record to the underlying csv.Writer. Like its counterpart (csv.Writer.WriteAll), it calls Flush() if all records are written without error. The channel must be closed by the caller or a separate goroutine; otherwise the call will block indefinitely.
func (*Writer) WriteStringers ¶
WriteStringers accepts zero or more fmt.Stringer values, converts them to strings by calling their String() method, and writes them as a single record to the underlying csv.Writer.
func (*Writer) WriteStrings ¶
WriteStrings accepts zero or more string values and writes them as a single record to the underlying csv.Writer. IMHO, it's how csv.Writer.Write should have been defined.