csv

package
v1.0.0-beta15
Published: Jun 30, 2022 License: Apache-2.0 Imports: 11 Imported by: 1

Documentation


Constants

const CSV = ls.LS + "csv/"

Variables

var ErrMultipleNodesMatched = errors.New("Multiple nodes match query")

Functions

func Import

func Import(attributeID string, terms []TermSpec, startRow, nRows int, idRows []int, entityID string, required string, input [][]string) (*ls.Layer, error)

Import imports a CSV schema. The CSV file is organized as columns: one column gives the base attribute names, and the remaining columns define overlays. CSV does not support nested attributes. Returns a Layer object.
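
Example

A minimal sketch of a call; the import path, attribute ID, term, and parameter values are illustrative assumptions, not part of the API:

package main

import (
	"encoding/csv"
	"fmt"
	"os"

	lscsv "github.com/cloudprivacylabs/lsa/pkg/csv" // assumed import path
)

func main() {
	f, err := os.Open("person-schema.csv") // hypothetical input file
	if err != nil {
		panic(err)
	}
	defer f.Close()
	records, err := csv.NewReader(f).ReadAll()
	if err != nil {
		panic(err)
	}
	// All parameter values below are illustrative; consult the source
	// for the exact semantics of each parameter.
	layer, err := lscsv.Import(
		"https://example.org/Person", // attributeID
		[]lscsv.TermSpec{{Term: "https://lschema.org/description"}}, // terms
		1,              // startRow: skip the header row
		len(records)-1, // nRows
		nil,            // idRows
		"Person",       // entityID
		"",             // required
		records,        // input
	)
	if err != nil {
		panic(err)
	}
	fmt.Printf("imported layer: %v\n", layer)
}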

func ImportSchema

func ImportSchema(ctx *ls.Context, rows [][]string, context map[string]interface{}) ([]*ls.Layer, error)

ImportSchema imports a schema from a CSV file. The CSV file is organized as follows:

The valueType row marks the start of the schema header:

valueType, v
entityIdFields, f, f, ...
@id,      @type,    <term>,     <term>
layerId,  Schema,   ,           ...
layerId,  Overlay,  true,       true        --> true means include this attribute in the overlay
attrId,   Object,   termValue,  termValue
attrId,   Value,    termValue,  termValue
...

The terms are expanded using the JSON-LD context given.
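
Example

A hedged sketch of reading a schema CSV and passing it to ImportSchema. The import paths, the ls.DefaultContext constructor, and the JSON-LD context map are assumptions:

package main

import (
	"encoding/csv"
	"fmt"
	"os"

	lscsv "github.com/cloudprivacylabs/lsa/pkg/csv" // assumed import path
	"github.com/cloudprivacylabs/lsa/pkg/ls"        // assumed import path
)

func main() {
	f, err := os.Open("schema.csv") // hypothetical input file
	if err != nil {
		panic(err)
	}
	defer f.Close()
	r := csv.NewReader(f)
	r.FieldsPerRecord = -1 // schema rows have varying numbers of fields
	rows, err := r.ReadAll()
	if err != nil {
		panic(err)
	}
	// ls.DefaultContext is an assumed constructor for *ls.Context.
	ctx := ls.DefaultContext()
	// JSON-LD context used to expand the <term> columns; illustrative.
	jsonldCtx := map[string]interface{}{"@vocab": "https://lschema.org/"}
	layers, err := lscsv.ImportSchema(ctx, rows, jsonldCtx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("imported %d layers\n", len(layers))
}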

Types

type ErrColIndexOutOfBounds

type ErrColIndexOutOfBounds struct {
	For   string
	Index int
}

func (ErrColIndexOutOfBounds) Error

func (e ErrColIndexOutOfBounds) Error() string

type ErrInvalidID

type ErrInvalidID struct {
	Row int
}

func (ErrInvalidID) Error

func (e ErrInvalidID) Error() string

type Parser

type Parser struct {
	OnlySchemaAttributes bool
	IngestNullValues     bool
	SchemaNode           graph.Node
	ColumnNames          []string
}

func (Parser) ParseDoc

func (ing Parser) ParseDoc(context *ls.Context, baseID string, row []string) (ls.ParsedDocNode, error)
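
Example

ParseDoc parses one row of data against the schema and returns the resulting parsed document node. A hedged sketch of row-by-row parsing; the import paths, schema node, and column names are assumptions:

package ingest

import (
	"fmt"

	lscsv "github.com/cloudprivacylabs/lsa/pkg/csv" // assumed import path
	"github.com/cloudprivacylabs/lsa/pkg/ls"        // assumed import path
	"github.com/cloudprivacylabs/opencypher/graph"  // assumed import path
)

// parseRows parses each data row into an ls.ParsedDocNode. schemaRoot
// would normally come from a compiled schema layer; the column names
// are illustrative.
func parseRows(ctx *ls.Context, schemaRoot graph.Node, rows [][]string) ([]ls.ParsedDocNode, error) {
	p := lscsv.Parser{
		SchemaNode:  schemaRoot,
		ColumnNames: []string{"id", "name", "email"},
	}
	out := make([]ls.ParsedDocNode, 0, len(rows))
	for i, row := range rows {
		node, err := p.ParseDoc(ctx, fmt.Sprintf("https://example.org/row/%d", i), row)
		if err != nil {
			return nil, err
		}
		out = append(out, node)
	}
	return out, nil
}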

type TermSpec

type TermSpec struct {
	// The term
	Term string `json:"term"`
	// If nonempty, this Go template is used to build the term
	// contents. The template context provides {{.term}}, which gives
	// the Term, and {{.row}}, which gives the cells of the current row
	TermTemplate string `json:"template"`
	// Whether the property is an array
	Array bool `json:"array"`
	// Array separator character
	ArraySeparator string `json:"separator"`
}
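
Example

Two illustrative TermSpec values, one plain and one templated; the term URIs and import path are assumptions:

package terms

import lscsv "github.com/cloudprivacylabs/lsa/pkg/csv" // assumed import path

// A term whose value is taken from the cell as-is.
var plain = lscsv.TermSpec{Term: "https://lschema.org/description"}

// A templated, multi-valued term: {{.term}} expands to Term, and
// {{.row}} to the cells of the current row ("index" is the standard
// Go template function).
var tagged = lscsv.TermSpec{
	Term:           "https://example.org/tags",
	TermTemplate:   "{{.term}}={{index .row 2}}",
	Array:          true,
	ArraySeparator: ";",
}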

type Writer

type Writer struct {
	// openCypher query giving the root nodes for each row of data. This
	// should be of the form:
	//
	//  match (n ...) return n
	//
	// If empty, all root nodes of the graph are included in the output
	RowRootQuery string `json:"rowQuery" yaml:"rowQuery"`

	// The columns of the output. If a column does not have a query,
	// the column query is assumed to be
	//
	//  match (root)-[]->(n:DocumentNode {attributeName: <attributeName>}) return n
	Columns []WriterColumn `json:"columns" yaml:"columns"`
}

Writer writes CSV output.

The Writer specifies how to interpret the input graph: RowRootQuery is an openCypher query that selects the root node for each row of data, and Columns determines the cells of that row.

func (*Writer) BuildRow

func (wr *Writer) BuildRow(root graph.Node) ([]string, error)

func (*Writer) WriteHeader

func (wr *Writer) WriteHeader(writer *csv.Writer) error

WriteHeader writes the header row to the given writer.

func (*Writer) WriteRow

func (wr *Writer) WriteRow(writer *csv.Writer, root graph.Node) error

func (*Writer) WriteRows

func (wr *Writer) WriteRows(writer *csv.Writer, g graph.Graph) error
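
Example

A hedged end-to-end sketch: configure a Writer, write the header, then one CSV row per root node matched by RowRootQuery. The import paths, query, and column names are assumptions:

package export

import (
	"encoding/csv"
	"os"

	lscsv "github.com/cloudprivacylabs/lsa/pkg/csv" // assumed import path
	"github.com/cloudprivacylabs/opencypher/graph"  // assumed import path
)

// writeGraph writes a header row, then one CSV row for every root node
// matched by RowRootQuery.
func writeGraph(g graph.Graph) error {
	wr := lscsv.Writer{
		RowRootQuery: "match (n:Person) return n",
		Columns: []lscsv.WriterColumn{
			{Name: "id"},   // no Query: uses the default attributeName lookup
			{Name: "name"}, // ditto
			{Name: "email", Query: `match (root)-[]->(n {attributeName:"email"}) return n`},
		},
	}
	out := csv.NewWriter(os.Stdout)
	defer out.Flush()
	if err := wr.WriteHeader(out); err != nil {
		return err
	}
	return wr.WriteRows(out, g)
}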

type WriterColumn

type WriterColumn struct {
	Name string `json:"name" yaml:"name"`
	// Optional openCypher query for the column, evaluated with the
	// `root` variable set to the current row's root node. The query
	// result provides the column value.
	Query string `json:"query" yaml:"query"`
	// contains filtered or unexported fields
}
