Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func MergeResultStreams ¶
func MergeResultStreams(ctx context.Context, args QueryArgs, vshardResults []*ResultStream, resultStream stream.ServerStream)
TODO: take context

MergeResultStreams is responsible for (1) merging result streams and (2) maintaining sort order (if sorted) across the streams (each stream is assumed to be in-order already).
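Conceptually this is a k-way merge over inputs that are each already sorted. The sketch below illustrates that idea over plain in-memory slices rather than the package's ResultStream/ServerStream types; the record map type and the mergeSorted helper are illustrative stand-ins, not part of this package.

    package main

    import "fmt"

    // record stands in for record.Record (assumed to behave like a map of field -> value).
    type record map[string]interface{}

    // mergeSorted merges per-shard result slices that are each already sorted by
    // sortField ascending, repeatedly taking the smallest head element. This mirrors
    // the merge step MergeResultStreams performs over its vshardResults, but over
    // in-memory slices instead of streams.
    func mergeSorted(sortField string, shards ...[]record) []record {
        heads := make([]int, len(shards))
        var out []record
        for {
            best := -1
            for i, s := range shards {
                if heads[i] >= len(s) {
                    continue // this shard is exhausted
                }
                if best == -1 || s[heads[i]][sortField].(int) < shards[best][heads[best]][sortField].(int) {
                    best = i
                }
            }
            if best == -1 {
                return out // all shards exhausted
            }
            out = append(out, shards[best][heads[best]])
            heads[best]++
        }
    }

    func main() {
        a := []record{{"id": 1}, {"id": 4}}
        b := []record{{"id": 2}, {"id": 3}}
        fmt.Println(mergeSorted("id", a, b)) // [map[id:1] map[id:2] map[id:3] map[id:4]]
    }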
func MergeResultStreamsUnique ¶
func MergeResultStreamsUnique(ctx context.Context, args QueryArgs, pkeyFields []string, vshardResults []*ResultStream, resultStream stream.ServerStream)
MergeResultStreamsUnique is responsible for (1) merging result streams uniquely based on pkey and (2) maintaining sort order (if sorted) across the streams (each stream is assumed to be in-order already).
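The "unique" variant layers deduplication keyed on the primary-key fields on top of the ordinary merge. A minimal sketch of that dedup step, again over in-memory slices and with an illustrative record stand-in rather than record.Record:

    package main

    import "fmt"

    // record stands in for record.Record (assumed to behave like a map of field -> value).
    type record map[string]interface{}

    // dedupeByPKey drops records whose primary-key fields have already been seen,
    // keeping the first occurrence -- the uniqueness guarantee MergeResultStreamsUnique
    // adds on top of the merge.
    func dedupeByPKey(pkeyFields []string, in []record) []record {
        seen := map[string]bool{}
        var out []record
        for _, r := range in {
            key := ""
            for _, f := range pkeyFields {
                key += fmt.Sprintf("%v|", r[f]) // composite key built from the pkey fields
            }
            if seen[key] {
                continue
            }
            seen[key] = true
            out = append(out, r)
        }
        return out
    }

    func main() {
        merged := []record{{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}}
        fmt.Println(dedupeByPKey([]string{"id"}, merged)) // keeps id=1 once, then id=2
    }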
Types ¶
type Query ¶
Query is the struct that contains the entire query to run; this includes both the function to run and the associated args.
type QueryArgs ¶
type QueryArgs struct {
    // Shared options
    DB            string `json:"db"`
    Collection    string `json:"collection"`
    ShardInstance string `json:"shard_instance,omitempty"`

    // Fields defines a list of fields for Projections
    Fields []string `json:"fields"`

    // TODO: rename?
    AggregationFields map[string][]aggregation.AggregationType `json:"aggregation_fields"`

    // Sort + SortReverse control the ordering of results
    Sort []string `json:"sort"`
    // TODO: change to ints?
    SortReverse []bool `json:"sort_reverse"`

    // Limit is how many records will be returned in the result
    Limit uint64 `json:"limit"`

    // TODO: name skip?
    // TODO: if offset is set without a sort, then it is meaningless -- we need to error out
    // Offset controls the offset for returning results. This will exclude `Offset`
    // number of records from the "front" of the results
    Offset uint64 `json:"offset"`

    // Record types (TODO: record struct)
    PKey    record.Record   `json:"pkey"`
    Record  record.Record   `json:"record,omitempty"`
    Records []record.Record `json:"records,omitempty"`

    // TODO struct?
    // RecordOp is a map of operations to apply to the record (incr, decr, etc.)
    RecordOp map[string]interface{} `json:"record_op"`

    // TODO: type for the filter itself
    // Filter is the conditions to match data on
    Filter interface{} `json:"filter"`

    // Join defines what data we should pull in addition to the record defined in `Collection`
    Join interface{} `json:"join"`
}
TODO: add meta

TODO: func to validate the mix of arguments
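For illustration, here is how a filtered, sorted, paged read might be expressed with QueryArgs. Only fields shown in the struct above are used; the import path and the concrete Filter shape are assumptions (Filter is typed interface{} and its format is not documented in this listing).

    package main

    import (
        "fmt"

        "github.com/jacksontj/dataman/query" // import path assumed; adjust to where this package lives
    )

    func main() {
        // A read that projects two fields, sorts ascending by name, and pages with Limit/Offset.
        // The Filter value is only a placeholder -- its concrete shape is not documented here.
        args := query.QueryArgs{
            DB:          "example_db",
            Collection:  "users",
            Fields:      []string{"id", "name"},
            Sort:        []string{"name"},
            SortReverse: []bool{false},
            Limit:       10,
            Offset:      20, // meaningful only because Sort is set (see the TODO on Offset)
            Filter:      map[string]interface{}{"name": []interface{}{"=", "alice"}},
        }
        fmt.Printf("%+v\n", args)
    }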
type QueryType ¶
type QueryType string
TODO: method to know if it is stream or not

QueryType is the list of all query functions dataman supports
const (
    Get        QueryType = "get"
    Set        QueryType = "set"
    Insert     QueryType = "insert"
    InsertMany QueryType = "insert_many"
    Update     QueryType = "update"
    Delete     QueryType = "delete"
    Filter     QueryType = "filter"
    Aggregate  QueryType = "aggregate"

    // Stream types: responses that will return a stream of results
    FilterStream QueryType = "filter_stream"
)
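The TODO on QueryType mentions a method to tell stream queries apart from the rest; no such method exists in this listing. A minimal sketch of what such a helper could look like, assuming the package is imported as query (the isStreamQuery function itself is hypothetical):

    package main

    import (
        "fmt"

        "github.com/jacksontj/dataman/query" // import path assumed
    )

    // isStreamQuery reports whether a query type returns its results as a stream.
    // This sketch just checks the one documented stream type, FilterStream.
    func isStreamQuery(t query.QueryType) bool {
        return t == query.FilterStream
    }

    func main() {
        fmt.Println(isStreamQuery(query.Filter))       // false
        fmt.Println(isStreamQuery(query.FilterStream)) // true
    }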
type Result ¶
type Result struct {
    Return []record.Record `json:"return"`
    Errors []string        `json:"errors,omitempty"`
    // TODO: pointer to the right thing
    ValidationError interface{}            `json:"validation_error,omitempty"`
    Meta            map[string]interface{} `json:"meta,omitempty"`
}
Result encapsulates a result from the datastore.
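A small usage sketch for Result: check Errors before trusting Return. The import paths and the assumption that record.Record behaves like a map of field name to value are mine, not from this listing.

    package main

    import (
        "fmt"
        "strings"

        "github.com/jacksontj/dataman/query"  // import paths assumed
        "github.com/jacksontj/dataman/record"
    )

    // checkResult surfaces any errors a Result carries before handing back its records.
    func checkResult(r *query.Result) ([]record.Record, error) {
        if len(r.Errors) > 0 {
            return nil, fmt.Errorf("query failed: %s", strings.Join(r.Errors, "; "))
        }
        return r.Return, nil
    }

    func main() {
        // record.Record is assumed to behave like a map of field name -> value.
        res := &query.Result{Return: []record.Record{{"id": 1, "name": "alice"}}}
        records, err := checkResult(res)
        fmt.Println(records, err)
    }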
func MergeAggregateResult ¶
MergeAggregateResult merges multiple aggregation results together.
func MergeResult ¶
MergeResult merges multiple results together.
type ResultStream ¶
type ResultStream struct {
    // ClientStream -- this is the actual data that is coming back from the DB
    Stream stream.ClientStream `json:"-"`

    // TODO: do we need?
    Errors []string `json:"errors,omitempty"`

    // TODO: does this make sense in the result itself?
    Meta map[string]interface{} `json:"meta,omitempty"`
    // contains filtered or unexported fields
}
ResultStream encapsulates a streaming result from the datastore.
func (*ResultStream) AddTransformation ¶
func (r *ResultStream) AddTransformation(t ResultStreamItemTransformation) error
func (*ResultStream) Close ¶
func (r *ResultStream) Close() error
func (*ResultStream) Err ¶
func (r *ResultStream) Err() error
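ResultStream's documented surface here is the Stream field plus Close and Err (and AddTransformation, whose transformation type is not shown in this listing). A minimal cleanup sketch, assuming the same import path as above; reading individual records goes through the embedded stream.ClientStream and is elided:

    // drain shows the documented cleanup pattern for a ResultStream: always Close
    // the stream, and check Err for any error it hit while being consumed.
    func drain(rs *query.ResultStream) error {
        defer rs.Close() // release the underlying client stream
        // ... consume records via rs.Stream here ...
        return rs.Err()
    }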