reindexer

v1.10.0
Published: Oct 30, 2018 License: Apache-2.0 Imports: 17 Imported by: 25

README

Reindexer


Reindexer is an embeddable, in-memory, document-oriented database with a high-level Query builder interface.

Reindexer's goal is to provide fast search with complex queries. We at Restream weren't happy with Elasticsearch and created Reindexer as a more performant alternative.

The core is written in C++ and the application-level API is in Go.


Features

Key features:

  • Sortable indices
  • Aggregation queries
  • Indices on array fields
  • Complex primary keys
  • Composite indices
  • Join operations
  • Full-text search
  • Up to 64 indices for one namespace
  • ORM-like query interface
  • SQL queries
Performance

Performance has been our top priority from the start, and we think we achieved it: benchmarks show that Reindexer's performance is on par with a typical key-value database. On a single CPU core, we get:

  • up to 500K queries/sec for queries SELECT * FROM items WHERE id='?'
  • up to 50K queries/sec for queries SELECT * FROM items WHERE year > 2010 AND name = 'string' AND id IN (....)
  • up to 20K queries/sec for queries SELECT * FROM items WHERE year > 2010 AND name = 'string' JOIN subitems ON ...

See benchmarking results and more details in the benchmarking section.

Memory Consumption

Reindexer aims to consume as little memory as possible; most queries are processed without memory allocs at all.

To achieve that, several optimizations are employed, both on the C++ and Go level:

  • Documents and indices are stored in dense binary C++ structs, so they don't impose any load on Go's garbage collector.

  • String duplicates are merged.

  • Memory overhead is about 32 bytes per document + ≈4-16 bytes per each search index.

  • There is an object cache at the Go level for deserialized documents produced by query execution. Subsequent queries reuse these pre-deserialized documents, which cuts repeated deserialization and allocation costs.

  • The Query interface uses sync.Pool to reuse internal structures and buffers. The combination of these techniques lets Reindexer execute most queries without any allocations.
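The sync.Pool technique mentioned above is a standard Go pattern. The following is a minimal, self-contained sketch of the idea (it is not Reindexer's actual internals; the buffer pool and `serialize` helper are illustrative only):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool recycles serialization buffers between calls instead of
// allocating a fresh buffer each time, which keeps GC pressure low.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func serialize(id int) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf) // return the buffer to the pool for reuse
	buf.Reset()            // a pooled buffer may hold old data
	fmt.Fprintf(buf, "item-%d", id)
	return buf.String()
}

func main() {
	fmt.Println(serialize(42)) // prints "item-42"
}
```

After warm-up, repeated calls reuse the same buffers, so the steady-state allocation count approaches zero.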

Reindexer has an internal full-text search engine. Full-text search documentation and examples are here.

Disk Storage

Reindexer can persist documents to disk and load them back via LevelDB. Documents are written to the storage backend asynchronously, automatically batched in the background.

When a namespace is created, all its documents are loaded into RAM, so queries on these documents run entirely in in-memory mode.

Usage

Here is a complete example of basic Reindexer usage:

package main

// Import package
import (
	"fmt"
	"math/rand"

	"github.com/restream/reindexer"
	// choose how the Reindexer binds to the app (in this case "builtin," which means link Reindexer as a static library)
	_ "github.com/restream/reindexer/bindings/builtin"

	// OR link Reindexer as static library with bundled server.
	// _ "github.com/restream/reindexer/bindings/builtinserver"
	// "github.com/restream/reindexer/bindings/builtinserver/config"

)

// Define struct with reindex tags
type Item struct {
	ID       int64  `reindex:"id,,pk"`    // 'id' is primary key
	Name     string `reindex:"name"`      // add index by 'name' field
	Articles []int  `reindex:"articles"`  // add index on the 'articles' array field
	Year     int    `reindex:"year,tree"` // add sortable index by 'year' field
}

func main() {
	// Init a database instance and choose the binding (builtin)
	db := reindexer.NewReindex("builtin:///tmp/reindex/testdb")

	// OR - Init a database instance and choose the binding (connect to server)
	// db := reindexer.NewReindex("cproto://127.0.0.1:6534/testdb")

	// OR - Init a database instance and choose the binding (builtin, with bundled server)
	// serverConfig := config.DefaultServerConfig()
	// db := reindexer.NewReindex("builtinserver://testdb",reindexer.WithServerConfig(100*time.Second, serverConfig))

	// Create new namespace with name 'items', which will store structs of type 'Item'
	db.OpenNamespace("items", reindexer.DefaultNamespaceOptions(), Item{})

	// Generate dataset
	for i := 0; i < 100000; i++ {
		err := db.Upsert("items", &Item{
			ID:       int64(i),
			Name:     "Vasya",
			Articles: []int{rand.Int() % 100, rand.Int() % 100},
			Year:     2000 + rand.Int()%50,
		})
		if err != nil {
			panic(err)
		}
	}

	// Query a single document
	elem, found := db.Query("items").
		Where("id", reindexer.EQ, 40).
		Get()

	if found {
		item := elem.(*Item)
		fmt.Println("Found document:", *item)
	}

	// Query multiple documents
	query := db.Query("items").
		Sort("year", false).                          // Sort results by 'year' field in ascending order
		WhereString("name", reindexer.EQ, "Vasya").   // 'name' must be 'Vasya'
		WhereInt("year", reindexer.GT, 2020).         // 'year' must be greater than 2020
		WhereInt("articles", reindexer.SET, 6, 1, 8). // 'articles' must contain one of [6,1,8]
		Limit(10).                                    // Return maximum 10 documents
		Offset(0).                                    // from 0 position
		ReqTotal()                                    // Calculate the total count of matching documents

	// Execute the query and return an iterator
	iterator := query.Exec()
	// Iterator must be closed
	defer iterator.Close()

	fmt.Println("Found", iterator.TotalCount(), "total documents, first", iterator.Count(), "documents:")

	// Iterate over results
	for iterator.Next() {
		// Get the next document and cast it to a pointer
		elem := iterator.Object().(*Item)
		fmt.Println(*elem)
	}
	// Check the error
	if err := iterator.Error(); err != nil {
		panic(err)
	}
}
SQL compatible interface

As an alternative to the Query builder, Reindexer provides an SQL-compatible query interface. Here is a sample of its usage:

    ...
	iterator := db.ExecSQL("SELECT * FROM items WHERE name='Vasya' AND year > 2020 AND articles IN (6,1,8) ORDER BY year LIMIT 10")
    ...

Please note that the Query builder interface is the preferred way: it has more features and is faster than the SQL interface.

Installation

Reindexer can run in 3 different modes:

  • embedded (builtin): Reindexer is embedded into the application as a static library and does not require a separate server process.
  • embedded with server (builtinserver): Reindexer is embedded into the application as a static library and also starts a server. In this mode, other clients can connect to the application via cproto or http.
  • standalone: Reindexer runs as a standalone server; the application connects to it via the network.
Installation for server mode
  1. Install Reindexer Server
  2. go get -a github.com/restream/reindexer
Official docker image

The simplest way to get reindexer server is to pull and run the docker image from dockerhub:

docker run -p9088:9088 -p6534:6534 -it reindexer/reindexer

Dockerfile

Installation for embedded mode
Prerequisites

Reindexer's core is written in C++11 and uses LevelDB as the storage backend, so CMake, a C++11 toolchain, and LevelDB must be installed before installing Reindexer.

To build Reindexer, g++ 4.8+, clang 3.3+ or mingw64 is required.

Get Reindexer
go get -a github.com/restream/reindexer
bash $GOPATH/src/github.com/restream/reindexer/dependencies.sh
go generate github.com/restream/reindexer/bindings/builtin
# Optional (build builtin server binding)
go generate github.com/restream/reindexer/bindings/builtinserver

Advanced Usage

Index Types and Their Capabilities

Internally, structs are split into two parts:

  • indexed fields, marked with reindex struct tag
  • tuple of non-indexed fields

Queries are possible only on the indexed fields, marked with reindex tag. The reindex tag contains the index name, type, and additional options:

reindex:"<name>[[,<type>],<opts>]"

  • name – index name.
  • type – index type:
    • hash – fast select by EQ and SET match. Used by default. Sorting results by a hash-indexed field is possible, but slow and inefficient.
    • tree – fast select by RANGE, GT, and LT matches. A bit slower for EQ and SET matches than a hash index. Allows fast sorting of results by the field.
    • text – full-text search index. Usage details of full-text search are described here.
    • - – column index. Can't perform fast selects, because it is implemented with a full-scan technique. Has the smallest memory overhead.
  • opts – additional index options:
    • pk – field is part of the primary key. The struct must have at least 1 field tagged with pk.
    • composite – create a composite index. The field type must be an empty struct: struct{}.
    • joined – field is a recipient for a join. The field type must be []*SubitemType.
    • dense – reduce index size. For hash and tree it saves 8 bytes per unique key value; for - it saves 4-8 bytes per element. Useful for indexes with high selectivity, but for tree and hash indexes with low selectivity it can seriously decrease update performance. dense also slows down wide full-scan queries on - indexes, due to lack of CPU cache optimization.
    • sparse – a row (document) contains a value for a sparse index only if it was set on purpose; there are no empty (or default) records for this type of index in the row (document). This saves RAM, but costs performance: sparse indexes work a bit slower than regular ones.
    • collate_numeric – create a string index that orders values in numeric sequence. The field type must be a string.
    • collate_ascii – create a case-insensitive string index that works with ASCII. The field type must be a string.
    • collate_utf8 – create a case-insensitive string index that works with UTF-8. The field type must be a string.
    • collate_custom=<ORDER> – create a custom-order string index. The field type must be a string. <ORDER> is a sequence of letters, which defines the sort order.
Nested Structs

By default, Reindexer scans all nested structs and adds their fields to the namespace (along with the indexes specified).

type Actor struct {
	Name string `reindex:"actor_name"`
}

type BaseItem struct {
	ID int64 `reindex:"id,hash,pk"`
}

type ComplexItem struct {
	BaseItem         // Index fields of BaseItem will be added to reindex
	actor    []Actor // Index fields of Actor will be added to reindex as arrays
	Name     string  `reindex:"name"`
	Year     int     `reindex:"year,tree"`
	parent   *Item   `reindex:"-"` // Index fields of parent will NOT be added to reindex
}
Join

Reindexer can join documents from multiple namespaces into a single result:

type Actor struct {
	ID        int    `reindex:"id"`
	Name      string `reindex:"name"`
	IsVisible bool   `reindex:"is_visible"`
}

type ItemWithJoin struct {
	ID        int      `reindex:"id"`
	Name      string   `reindex:"name"`
	ActorsIDs []int    `reindex:"actors_ids"`
	Actors    []*Actor `reindex:"actors,,joined"`
}
....
    
	query := db.Query("items_with_join").Join(
		db.Query("actors").
			WhereBool("is_visible", reindexer.EQ, true),
		"actors",
	).On("id", reindexer.SET, "actors_ids")

	query.Exec()

In this example, Reindexer uses reflection under the hood to create the Actors slice and copy the Actor structs.

Joinable interface

To avoid using reflection, Item can implement the Joinable interface. If it is implemented, Reindexer uses it instead of the slow reflection-based implementation. This increases overall performance by 10-20% and reduces the number of allocations.

// Joinable interface implementation.
// Join adds items from the joined namespace to the `ItemWithJoin` object.
// When the Joinable interface is called, an additional context variable can be passed to implement extra logic in Join.
func (item *ItemWithJoin) Join(field string, subitems []interface{}, context interface{}) {

	switch field {
	case "actors":
		for _, joinItem := range subitems {
			item.Actors = append(item.Actors, joinItem.(*Actor))
		}
	}
}
Complex Primary Keys and Composite Indexes

A document can have multiple fields as its primary key. To enable this feature, add a composite index to the struct. A composite index is an index that involves multiple fields; it can be used instead of several separate indexes.

type Item struct {
	ID    int64 `reindex:"id"`     // 'id' is a part of a primary key
	SubID int   `reindex:"sub_id"` // 'sub_id' is a part of a primary key
	// Fields
	//	....
	// Composite index
	_ struct{} `reindex:"id+sub_id,,composite,pk"`
}

OR

type Item struct {
	ID       int64 `reindex:"id,-"`         // 'id' is a part of the primary key, WITHOUT a personal searchable index
	SubID    int   `reindex:"sub_id,-"`     // 'sub_id' is a part of a primary key, WITHOUT a personal searchable index
	SubSubID int   `reindex:"sub_sub_id,-"` // 'sub_sub_id' is a part of a primary key WITHOUT a personal searchable index

	// Fields
	// ....

	// Composite index
	_ struct{} `reindex:"id+sub_id+sub_sub_id,,composite,pk"`
}

Composite indexes are also useful for sorting results by multiple fields:

type Item struct {
	ID     int64 `reindex:"id,,pk"`
	Rating int   `reindex:"rating"`
	Year   int   `reindex:"year"`

	// Composite index
	_ struct{} `reindex:"rating+year,tree,composite"`
}

...
	// Sort query results by rating first, then by year
	query := db.Query("items").Sort("rating+year", true)

	// Sort query results by rating first, then by year, and put items where rating == 5 and year == 2010 first
	query := db.Query("items").Sort("rating+year", true,[]interface{}{5,2010})

To query a composite index, pass []interface{} to the .WhereComposite function of the Query builder:

	// Get results where rating == 5 and year == 2010
	query := db.Query("items").WhereComposite("rating+year", reindexer.EQ,[]interface{}{5,2010})
Aggregations

Reindexer supports aggregation queries. To request an aggregation, call the Query method Aggregate before query execution.

To retrieve aggregation results, the Iterator has the method AggResults; it is available after query execution and returns a slice of results.

There are 5 aggregations available:

  • AggMax – get the maximum field value
  • AggMin – get the minimum field value
  • AggSum – get the sum of field values
  • AggAvg – get the average field value
  • AggFacet – get facet counts of field values

	iterator := db.Query("items").Aggregate("name", reindexer.AggFacet).Exec()

	aggRes := iterator.AggResults()[0]

	for _, facet := range aggRes.Facets {
		fmt.Printf("%s -> %d\n", facet.Value, facet.Count)
	}
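A numeric aggregation such as AggAvg works the same way; its result arrives in the Value field of the AggregationResult (a sketch, reusing the 'items' namespace from the usage example above):

```go
	// Ask Reindexer to compute the average of the 'year' field
	iterator := db.Query("items").Aggregate("year", reindexer.AggAvg).Exec()
	defer iterator.Close()

	// The first (and here, only) aggregation result holds the average
	aggRes := iterator.AggResults()[0]
	fmt.Println("avg(year) =", aggRes.Value)
```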

Atomic on update functions

There are atomic functions, which execute under the namespace lock and therefore guarantee data consistency:

  • serial - an integer sequence, useful for unique ID generation
  • timestamp - the current timestamp of the operation, useful for data synchronization

These functions can be passed to Upsert/Insert/Update as the 3rd and subsequent arguments.

   // set the ID field from the serial generator
   db.Insert("items", &item, "id=serial()")

   // set the current timestamp in nanoseconds in the updated_at field
   db.Update("items", &item, "updated_at=now(NSEC)")

   // set the current timestamp and ID
   db.Upsert("items", &item, "updated_at=now(NSEC)", "id=serial()")

Direct JSON operations
Upsert data in JSON format

If source data is available in JSON format, it is possible to improve the performance of Upsert/Delete operations by passing the JSON directly to Reindexer. JSON deserialization is done by the C++ code, without extra allocations/deserialization in Go code.

The Upsert and Delete functions can process JSON: just pass a []byte argument containing the JSON.

	json := []byte(`{"id":1,"name":"test"}`)
	db.Upsert("items", json)

It is just a faster equivalent of:

	item := &Item{}
	json.Unmarshal([]byte(`{"id":1,"name":"test"}`), item)
	db.Upsert("items", item)
Get Query results in JSON format

If query results need to be serialized to JSON, performance can be improved by obtaining the results directly in JSON format from Reindexer. JSON serialization is done by the C++ code, without extra allocations/serialization in Go code.

...		
	iterator := db.Query("items").
		Select("id", "name").      // Filter the output JSON: select only the "id" and "name" fields; other fields will be omitted
		Limit(1).
		ExecToJson("root_object")  // Name of the root object of the output JSON

	json, err := iterator.FetchAll()
	// Check the error
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", string(json))
...

This code will print something like:

{"root_object":[{"id":1,"name":"test"}]}
Using object cache

To avoid race conditions, the object cache is turned off by default, and all objects are allocated and deserialized from Reindexer's internal format (called CJSON) on each query. The deserialization uses reflection, so its speed is not optimal (in fact, CJSON deserialization is ~3-10x faster than JSON and ~1.2x faster than GOB), but performance is still seriously limited by the reflection overhead.

There are 2 ways to enable the object cache:

  • Provide a DeepCopy interface
  • Ask queries to return shared objects from the cache
DeepCopy interface

If an object implements the DeepCopy interface, Reindexer will turn on the object cache and use DeepCopy to copy objects from the cache into query results. The DeepCopy implementation is responsible for making a deep copy of the source object.

Here is a sample DeepCopy implementation:

func (item *Item) DeepCopy() interface{} {
	copyItem := &Item{
		ID:       item.ID,
		Name:     item.Name,
		Articles: make([]int, len(item.Articles)),
		Year:     item.Year,
	}
	copy(copyItem.Articles, item.Articles)
	return copyItem
}

A code generation tool, gencopy, is available that can automatically generate DeepCopy implementations for structs.

Get shared objects from object cache (USE WITH CAUTION)

To speed up queries and avoid allocating new objects on each query, it is possible to ask queries to return objects directly from the object cache. To enable this behaviour, call AllowUnsafe(true) on the Iterator.

WARNING: with AllowUnsafe(true), queries return shared pointers to structs in the object cache. Therefore, the application MUST NOT modify the returned objects.

	res, err := db.Query("items").WhereInt("id", reindexer.EQ, 1).Exec().AllowUnsafe(true).FetchAll()
	if err != nil {
		panic(err)
	}

	if len(res) > 0 {
		// item is a SHARED pointer to a struct in the object cache
		item := res[0].(*Item)

		// This is OK - fmt.Printf will not modify item
		fmt.Printf("%v", item)

		// This is WRONG - it can race, and will corrupt data in the object cache
		item.Name = "new name"
	}

Logging, debug and profiling

Turn on logger

The Reindexer logger can be turned on with the db.SetLogger() method, as in this snippet:

type Logger struct {
}
func (Logger) Printf(level int, format string, msg ...interface{}) {
	log.Printf(format, msg...)
}
...
	db.SetLogger(Logger{})
Debug queries

Another useful feature is debug printing of processed queries. There are 2 methods to print query details:

  • db.SetDefaultQueryDebug(namespace string, level int) - globally enables printing details of all queries for a namespace

  • query.Debug(level int) - prints details of a single query execution; level is the level of verbosity:

  • reindexer.INFO - will print only query conditions

  • reindexer.TRACE - will print query conditions and execution details with timings

  • query.Explain() - calculates and stores query execution details

  • iterator.GetExplainResults() - returns query execution details
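Put together, Explain and GetExplainResults can be used like this (a sketch; the fields shown come from the ExplainResults struct documented in the API section, and the query itself is illustrative):

```go
	iterator := db.Query("items").
		WhereInt("year", reindexer.GT, 2010).
		Explain(). // collect execution details for this query
		Exec()
	defer iterator.Close()

	explain, err := iterator.GetExplainResults()
	if err != nil {
		panic(err)
	}
	fmt.Println("total us:", explain.TotalUs, "sort index:", explain.SortIndex)
```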

Profiling

Because the Reindexer core is written in C++, calls into Reindexer and their memory consumption are not visible to the Go profiler. To profile the Reindexer core, a cgo profiler is available. The cgo profiler is now part of Reindexer, but it can be used with any other cgo code.

Usage of the cgo profiler is very similar to the Go profiler:

  1. Add an import:
import _ "github.com/restream/reindexer/pprof"
  2. If your application is not already running an http server, you need to start one. Add "net/http" and "log" to your imports and the following code to your main function:
go func() {
	log.Println(http.ListenAndServe("localhost:6060", nil))
}()
  3. Run the application with the environment variable HEAPPROFILE=/tmp/pprof
  4. Then use the pprof tool to look at the heap profile:
pprof -symbolize remote http://localhost:6060/debug/cgo/pprof/heap

Maintenance

For maintenance and working with data stored in a Reindexer database, there are 2 tools available:

  • Web interface
  • Command line tool
Web interface

The Reindexer server and the builtinserver binding mode come with a Web UI out of the box. To open the Web UI, just start a reindexer server or an application in builtinserver mode, and open http://server-ip:9088/face in a browser.

Command line tool

To work with a database from the command line, you can use the reindexer command line tool. The command line tool has the following functions:

  • Backup whole database into text file or console.
  • Make queries to database
  • Modify documents and DB metadata

The command line tool can run in 2 modes: with a server via the network, or in server-less mode, working directly with the storage.

Dump and restore database

To dump and restore a database, the reindexer command line tool is used.

Backup whole database into single backup file:

reindexer_tool --dsn cproto://127.0.0.1:6534/mydb --command '\dump' --output mydb.rxdump

Restore database from backup file:

reindexer_tool --dsn cproto://127.0.0.1:6534/mydb --filename mydb.rxdump

Integration with other program languages

A list of connectors for working with Reindexer from other programming languages (to be continued):

Pyreindexer
  1. Pyreindexer for Python (version >=3.6 is required). To set up, run:
pip3 install git+https://github.com/Restream/reindexer.git

Limitations and known issues

Reindexer is currently stable and production-ready, but it is still a work in progress, so there are some limitations and issues:

  • Internal C++ API is not stabilized and is subject to change.

Getting help

You can get help in several ways:

  1. Join Reindexer Telegram group
  2. Write an issue

Documentation

Index

Constants

View Source
const (
	ConfigNamespaceName           = "#config"
	MemstatsNamespaceName         = "#memstats"
	NamespacesNamespaceName       = "#namespaces"
	PerfstatsNamespaceName        = "#perfstats"
	QueriesperfstatsNamespaceName = "#queriesperfstats"
)
View Source
const (
	QuerySelectFunction = bindings.QuerySelectFunction
	QueryEqualPosition  = bindings.QueryEqualPosition
)

Constants for query serialization

View Source
const (
	CollateNone    = bindings.CollateNone
	CollateASCII   = bindings.CollateASCII
	CollateUTF8    = bindings.CollateUTF8
	CollateNumeric = bindings.CollateNumeric
	CollateCustom  = bindings.CollateCustom
)
View Source
const (
	// Equal '='
	EQ = bindings.EQ
	// Greater '>'
	GT = bindings.GT
	// Lower '<'
	LT = bindings.LT
	// Greater or equal '>=' (GT|EQ)
	GE = bindings.GE
	// Lower or equal '<='
	LE = bindings.LE
	// One of set 'IN []'
	SET = bindings.SET
	// All of set
	ALLSET = bindings.ALLSET
	// In range
	RANGE = bindings.RANGE
	// Any value
	ANY = bindings.ANY
	// Empty value (usually a zero-length array)
	EMPTY = bindings.EMPTY
)

Condition types

View Source
const (
	// ERROR Log level
	ERROR = bindings.ERROR
	// WARNING Log level
	WARNING = bindings.WARNING
	// INFO Log level
	INFO = bindings.INFO
	// TRACE Log level
	TRACE = bindings.TRACE
)
View Source
const (
	AggAvg   = bindings.AggAvg
	AggSum   = bindings.AggSum
	AggFacet = bindings.AggFacet
	AggMin   = bindings.AggMin
	AggMax   = bindings.AggMax
)

Variables

View Source
var (
	ErrEmptyNamespace = errors.New("rq: empty namespace name")
	ErrEmptyFieldName = errors.New("rq: empty field name in filter")
	ErrCondType       = errors.New("rq: cond type not found")
	ErrOpInvalid      = errors.New("rq: op is invalid")
	ErrNoPK           = errors.New("rq: No pk field in struct")
	ErrWrongType      = errors.New("rq: Wrong type of item")
	ErrMustBePointer  = errors.New("rq: Argument must be a pointer to element, not element")
	ErrNotFound       = errors.New("rq: Not found")
	ErrDeepCopyType   = errors.New("rq: DeepCopy() returns wrong type")
)

Functions

func GetCondType

func GetCondType(name string) (int, error)

func WithCgoLimit added in v1.9.3

func WithCgoLimit(cgoLimit int) interface{}

func WithConnPoolSize added in v1.9.3

func WithConnPoolSize(connPoolSize int) interface{}

func WithRetryAttempts added in v1.9.5

func WithRetryAttempts(read int, write int) interface{}

func WithServerConfig added in v1.9.6

func WithServerConfig(startupTimeout time.Duration, serverConfig *config.ServerConfig) interface{}

Types

type AggregationResult added in v1.10.0

type AggregationResult struct {
	Name   string  `json:"name"`
	Value  float64 `json:"value"`
	Facets []struct {
		Value string `json:"value"`
		Count int    `json:"count"`
	} `json:"facets"`
}

type DBConfigItem added in v1.9.3

type DBConfigItem struct {
	Type       string                `json:"type"`
	Profiling  *DBProfilingConfig    `json:"profiling,omitempty"`
	LogQueries *[]DBLogQueriesConfig `json:"log_queries,omitempty"`
}

type DBLogQueriesConfig added in v1.9.3

type DBLogQueriesConfig struct {
	Namespace string `json:"namespace"`
	LogLevel  string `json:"log_level"`
}

type DBProfilingConfig added in v1.9.3

type DBProfilingConfig struct {
	QueriesThresholdUS int  `json:"queries_threshold_us"`
	MemStats           bool `json:"memstats"`
	PerfStats          bool `json:"perfstats"`
	QueriesPerfStats   bool `json:"queriesperfstats"`
}

type DeepCopy

type DeepCopy interface {
	DeepCopy() interface{}
}

type ExplainResults added in v1.10.0

type ExplainResults struct {
	TotalUs       int    `json:"total_us"`
	PrepareUs     int    `json:"prepare_us"`
	IndexesUs     int    `json:"indexes_us"`
	PostprocessUS int    `json:"postprocess_us"`
	LoopUs        int    `json:"loop_us"`
	SortIndex     string `json:"sort_index"`
	Selectors     []struct {
		Field       string  `json:"field"`
		Method      string  `json:"method"`
		Keys        int     `json:"keys"`
		Comparators int     `json:"comparators"`
		Cost        float32 `json:"cost"`
		Matched     int     `json:"matched"`
	} `json:"selectors"`
}

ExplainResults presents query plan

type FtFastConfig

type FtFastConfig struct {
	// boost of bm25 ranking. default value 1.
	Bm25Boost float64 `json:"bm25_boost"`
	// weight of bm25 rank in final rank.
	// 0: bm25 will not change final rank.
	// 1: bm25 will affect to final rank in 0 - 100% range
	Bm25Weight float64 `json:"bm25_weight"`
	// boost of search query term distance in found document. default value 1
	DistanceBoost float64 `json:"distance_boost"`
	// weight of search query terms distance in found document in final rank.
	// 0: distance will not change final rank.
	// 1: distance will affect to final rank in 0 - 100% range
	DistanceWeight float64 `json:"distance_weight"`
	// boost of search query term length. default value 1
	TermLenBoost float64 `json:"term_len_boost"`
	// weight of search query term length in final rank.
	// 0: term length will not change final rank.
	// 1: term length will affect to final rank in 0 - 100% range
	TermLenWeight float64 `json:"term_len_weight"`
	// Minimum rank of found documents
	MinRelevancy float64 `json:"min_relevancy"`
	// Maximum possible typos in word.
	// 0: typos is disabled, words with typos will not match
	// N: words with N possible typos will match
	// It is not recommended to set more than 1 possible typo: it will seriously increase RAM usage and decrease search speed
	MaxTyposInWord int `json:"max_typos_in_word"`
	// Maximum word length for building and matching variants with typos. Default value is 15
	MaxTypoLen int `json:"max_typo_len"`
	// Maximum documents which will be processed in merge query results
	// Default value is 20000. Increasing this value may refine ranking
	// of queries with high frequency words
	MergeLimit int `json:"merge_limit"`
	// List of used stemmers
	Stemmers []string `json:"stemmers"`
	// Enable translit variants processing
	EnableTranslit bool `json:"enable_translit"`
	// Enable wrong keyboard layout variants processing
	EnableKbLayout bool `json:"enable_kb_layout"`
	// List of stop words. Words from this list will be ignored in documents and queries
	StopWords []string `json:"stop_words"`
	// Log level of full text search engine
	LogLevel int `json:"log_level"`
	// Enable search by numbers as words and backwards
	EnableNumbersSearch bool `json:"enable_numbers_search"`
	// Extra symbols, which will be treated as parts of words in addition to letters and digits
	ExtraWordSymbols string `json:"extra_word_symbols"`
}

FtFastConfig is the configuration of the full-text search index

func DefaultFtFastConfig

func DefaultFtFastConfig() FtFastConfig

type FtFuzzyConfig

type FtFuzzyConfig struct {
	// max proc getting from src request
	MaxSrcProc float64 `json:"max_src_proc"`
	// max proc getting from dst request
	// usually maxDstProc = 100 - MaxSrcProc, but this is not necessary
	MaxDstProc float64 `json:"max_dst_proc"`
	// increase proc when a found position is near in the source and dst strings (0.0001-2)
	PosSourceBoost float64 `json:"pos_source_boost"`
	// minimum coefficient for positions that are near in src and dst (0.0001-2)
	PosSourceDistMin float64 `json:"pos_source_dist_min"`
	// increase proc when found positions are near in the source string (0.0001-2)
	PosSourceDistBoost float64 `json:"pos_source_dist_boost"`
	// increase proc when found positions are near in the dst string (0.0001-2)
	PosDstBoost float64 `json:"pos_dst_boost"`
	// decrease proc when an incomplete trigram is found - only start and end (0.0001-2)
	StartDecreeseBoost float64 `json:"start_decreese_boost"`
	// base decrease proc when an incomplete trigram is found - only start and end (0.0001-2)
	StartDefaultDecreese float64 `json:"start_default_decreese"`
	// minimum relevance to show a result
	MinOkProc float64 `json:"min_ok_proc"`
	// size of the gram (1-10) - for example
	// terminator BufferSize=3: __t _te ter erm rmi ...
	// terminator BufferSize=4: __te _ter term ermi rmin
	BufferSize int `json:"buffer_size"`
	// size of the space at the start and end of a gram (0-9) - for example
	// terminator SpaceSize=2: __t _te ter ... tor or_ r__
	// terminator SpaceSize=1: _te ter ... tor or_
	SpaceSize int `json:"space_size"`
	// Maximum documents which will be processed in merge query results
	// Default value is 20000. Increasing this value may refine ranking
	// of queries with high frequency words
	MergeLimit int `json:"merge_limit"`
	// List of used stemmers
	Stemmers []string `json:"stemmers"`
	// Enable translit variants processing
	EnableTranslit bool `json:"enable_translit"`
	// Enable wrong keyboard layout variants processing
	EnableKbLayout bool `json:"enable_kb_layout"`
	// List of stop words. Words from this list will be ignored in documents and queries
	StopWords []string `json:"stop_words"`
	// Log level of full text search engine
	LogLevel int `json:"log_level"`
	// Extra symbols, which will be treated as parts of words in addition to letters and digits
	ExtraWordSymbols string `json:"extra_word_symbols"`
}

FtFuzzyConfig is the configuration of the fuzzy full-text search index

func DefaultFtFuzzyConfig

func DefaultFtFuzzyConfig() FtFuzzyConfig

type IndexDef added in v1.10.0

type IndexDef bindings.IndexDef

Index definition struct

type IndexDescription

type IndexDescription struct {
	IndexDef

	IsSortable bool     `json:"is_sortable"`
	IsFulltext bool     `json:"is_fulltext"`
	Conditions []string `json:"conditions"`
}

type Iterator

type Iterator struct {
	// contains filtered or unexported fields
}

Iterator presents query results

func (*Iterator) AggResults

func (it *Iterator) AggResults() (v []AggregationResult)

AggResults returns aggregation results (if present)

func (*Iterator) AllowUnsafe

func (it *Iterator) AllowUnsafe(allow bool) *Iterator

AllowUnsafe takes a bool that enables or disables unsafe behavior.

When AllowUnsafe is true and the object cache is enabled, resulting objects will not be copied for each query. That means possible race conditions, but it is a good speedup, without the overhead of copying.

By default, Reindexer guarantees that every object is safe to use from multiple threads.

func (*Iterator) Close

func (it *Iterator) Close()

Close closes the iterator and frees CGO resources

func (*Iterator) Count

func (it *Iterator) Count() int

Count returns the number of query results

func (*Iterator) Error

func (it *Iterator) Error() error

Error returns query error if it's present.

func (*Iterator) FetchAll

func (it *Iterator) FetchAll() (items []interface{}, err error)

FetchAll returns all query results as slice []interface{} and closes the iterator.

func (*Iterator) FetchAllWithRank

func (it *Iterator) FetchAllWithRank() (items []interface{}, ranks []int, err error)

FetchAllWithRank returns the resulting slice of objects and a slice of their ranks, then closes the iterator.

func (*Iterator) FetchOne

func (it *Iterator) FetchOne() (item interface{}, err error)

FetchOne returns the first element and closes the iterator. If there are no results (count is 0), err will be ErrNotFound.

func (*Iterator) GetAggreatedValue

func (it *Iterator) GetAggreatedValue(idx int) float64

GetAggreatedValue - Return the aggregated value of the field at the given index

func (*Iterator) GetExplainResults added in v1.10.0

func (it *Iterator) GetExplainResults() (*ExplainResults, error)

GetExplainResults returns JSON bytes with explain results

func (*Iterator) HasRank

func (it *Iterator) HasRank() bool

HasRank indicates if this iterator has info about search ranks.

func (*Iterator) JoinedObjects

func (it *Iterator) JoinedObjects(field string) (objects []interface{}, err error)

JoinedObjects returns the slice of objects resulting from the join on the given field

func (*Iterator) Next

func (it *Iterator) Next() (hasNext bool)

Next moves the iterator to the next element and returns a bool that indicates whether a next element is available.

func (*Iterator) Object

func (it *Iterator) Object() interface{}

Object returns the current object. It panics if the iterator has not been advanced; Next() must be called first.

func (*Iterator) Rank

func (it *Iterator) Rank() int

Rank returns the search rank of the current object. It panics if the iterator has not been advanced; Next() must be called first.

func (*Iterator) TotalCount

func (it *Iterator) TotalCount() int

TotalCount returns the total number of matching objects, ignoring limit and offset conditions
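The Next/Object/Error/Close contract above can be sketched with a standalone stand-in for the iterator. The `fakeResults` type below is hypothetical; a real `*reindexer.Iterator` would come from `(*Query).Exec`:

```go
package main

import "fmt"

// results mirrors the subset of the Iterator contract used in the typical
// consumption loop: Next advances and reports availability, Object returns
// the current element, Error reports a query error, Close frees resources.
type results interface {
	Next() bool
	Object() interface{}
	Error() error
	Close()
}

// fakeResults is a hypothetical stand-in for real query results.
type fakeResults struct {
	items []interface{}
	pos   int
}

func (it *fakeResults) Next() bool {
	if it.pos < len(it.items) {
		it.pos++
		return true
	}
	return false
}
func (it *fakeResults) Object() interface{} { return it.items[it.pos-1] }
func (it *fakeResults) Error() error        { return nil }
func (it *fakeResults) Close()              {}

// consume shows the canonical loop: Close via defer, Next before Object,
// and an Error check after iteration.
func consume(it results) ([]interface{}, error) {
	defer it.Close()
	var out []interface{}
	for it.Next() {
		out = append(out, it.Object())
	}
	return out, it.Error()
}

func main() {
	items, _ := consume(&fakeResults{items: []interface{}{"a", "b"}})
	fmt.Println(len(items)) // → 2
}
```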

type JSONIterator

type JSONIterator struct {
	// contains filtered or unexported fields
}

JSONIterator is an iterator whose results are presented as JSON documents

func (*JSONIterator) Close

func (it *JSONIterator) Close()

Close closes the iterator.

func (*JSONIterator) Count

func (it *JSONIterator) Count() int

Count returns the number of query results

func (*JSONIterator) Error

func (it *JSONIterator) Error() error

Error returns query error if it's present.

func (*JSONIterator) FetchAll

func (it *JSONIterator) FetchAll() (json []byte, err error)

FetchAll returns a byte slice containing a JSON array with the results

func (*JSONIterator) GetExplainResults added in v1.10.0

func (it *JSONIterator) GetExplainResults() (*ExplainResults, error)

GetExplainResults returns JSON bytes with explain results

func (*JSONIterator) JSON

func (it *JSONIterator) JSON() (json []byte)

JSON returns JSON bytes with current document

func (*JSONIterator) Next

func (it *JSONIterator) Next() bool

Next moves the iterator to the next element and returns a bool that indicates whether a next element is available.

type JoinHandler

type JoinHandler func(field string, item interface{}, subitems []interface{}) (isContinue bool)

JoinHandler is a function for handling join results. It returns a bool that indicates whether the joined values will be applied to the structs.
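As a sketch of the handler shape, a JoinHandler can copy joined subitems into the parent struct itself. The `Account`/`Order` types and the "orders" field are hypothetical, and the interpretation of the return value (false meaning the joined values should not additionally be applied) is an assumption based on the doc comment above:

```go
package main

import "fmt"

// JoinHandler mirrors the signature declared by the package; redefined
// locally so this sketch compiles standalone.
type JoinHandler func(field string, item interface{}, subitems []interface{}) (isContinue bool)

// Hypothetical document types for the illustration.
type Order struct{ ID int }

type Account struct {
	ID     int
	Orders []*Order
}

// attachOrders copies joined subitems into the parent Account and returns
// false, signaling (per the doc comment) that the joined values should not
// additionally be applied to the structs.
var attachOrders JoinHandler = func(field string, item interface{}, subitems []interface{}) bool {
	acc := item.(*Account)
	for _, s := range subitems {
		acc.Orders = append(acc.Orders, s.(*Order))
	}
	return false
}

func main() {
	acc := &Account{ID: 1}
	attachOrders("orders", acc, []interface{}{&Order{ID: 10}, &Order{ID: 11}})
	fmt.Println(len(acc.Orders)) // → 2
}
```

In real use the handler would be registered on a query, e.g. `q.JoinHandler("orders", attachOrders)`.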

type Joinable

type Joinable interface {
	Join(field string, subitems []interface{}, context interface{})
}

Joinable is the interface for appending joined items

type Logger

type Logger interface {
	Printf(level int, fmt string, msg ...interface{})
}

Logger interface for reindexer

type NamespaceDescription

type NamespaceDescription struct {
	Name           string             `json:"name"`
	Indexes        []IndexDescription `json:"indexes"`
	StorageEnabled bool               `json:"storage_enabled"`
}

type NamespaceMemStat added in v1.9.3

type NamespaceMemStat struct {
	Name            string `json:"name"`
	StorageError    string `json:"storage_error"`
	StoragePath     string `json:"storage_path"`
	StorageOK       bool   `json:"storage_ok"`
	UpdatedUnixNano int64  `json:"updated_unix_nano"`
	ItemsCount      int64  `json:"items_count,omitempty"`
	EmptyItemsCount int64  `json:"empty_items_count"`
	DataSize        int64  `json:"data_size"`
	Total           struct {
		DataSize    int `json:"data_size"`
		IndexesSize int `json:"indexes_size"`
		CacheSize   int `json:"cache_size"`
	}
}

type NamespaceOptions

type NamespaceOptions struct {
	// contains filtered or unexported fields
}

NamespaceOptions holds options for a namespace

func DefaultNamespaceOptions

func DefaultNamespaceOptions() *NamespaceOptions

DefaultNamespaceOptions returns the default namespace options

func (*NamespaceOptions) CacheAggressive added in v1.9.2

func (opts *NamespaceOptions) CacheAggressive() *NamespaceOptions

func (*NamespaceOptions) CacheOff added in v1.9.2

func (opts *NamespaceOptions) CacheOff() *NamespaceOptions

func (*NamespaceOptions) CacheOn added in v1.9.2

func (opts *NamespaceOptions) CacheOn() *NamespaceOptions

func (*NamespaceOptions) DropOnFileFormatError

func (opts *NamespaceOptions) DropOnFileFormatError() *NamespaceOptions

func (*NamespaceOptions) DropOnIndexesConflict

func (opts *NamespaceOptions) DropOnIndexesConflict() *NamespaceOptions

func (*NamespaceOptions) NoStorage

func (opts *NamespaceOptions) NoStorage() *NamespaceOptions

type NamespacePerfStat added in v1.9.3

type NamespacePerfStat struct {
	Name    string   `json:"name"`
	Updates PerfStat `json:"updates"`
	Selects PerfStat `json:"selects"`
}

type PerfStat added in v1.9.3

type PerfStat struct {
	TotalQueriesCount    int64 `json:"total_queries_count"`
	TotalAvgLatencyUs    int64 `json:"total_avg_latency_us"`
	TotalAvgLockTimeUs   int64 `json:"total_avg_lock_time_us"`
	LastSecQPS           int64 `json:"last_sec_qps"`
	LastSecAvgLatencyUs  int64 `json:"last_sec_avg_latency_us"`
	LastSecAvgLockTimeUs int64 `json:"last_sec_avg_lock_time_us"`
}

type Query

type Query struct {
	Namespace string
	// contains filtered or unexported fields
}

Query is a query object for the DB

func (*Query) Aggregate

func (q *Query) Aggregate(index string, aggType int) *Query

Aggregate - Return aggregation of field

func (*Query) CachedTotal

func (q *Query) CachedTotal(totalNames ...string) *Query

CachedTotal Request cached total items calculation

func (*Query) Debug

func (q *Query) Debug(level int) *Query

Debug - Set debug level

func (*Query) Delete

func (q *Query) Delete() (int, error)

Delete executes the query and deletes the items that match it. On success, it returns the number of deleted elements

func (*Query) Distinct

func (q *Query) Distinct(distinctIndex string) *Query

Distinct - Return only items with a unique value of the field

func (*Query) EqualPosition added in v1.10.0

func (q *Query) EqualPosition(fields ...string) *Query

EqualPosition adds equal position constraints for array fields: conditions on the listed fields must match at the same array position

func (*Query) Exec

func (q *Query) Exec() *Iterator

Exec executes the query and returns an iterator over the resulting items

func (*Query) ExecToJson

func (q *Query) ExecToJson(jsonRoots ...string) *JSONIterator

ExecToJson executes the query and returns a JSON iterator

func (*Query) Explain added in v1.10.0

func (q *Query) Explain() *Query

Explain - Request explain for query

func (*Query) FetchCount added in v1.5.0

func (q *Query) FetchCount(n int) *Query

FetchCount sets the number of items fetched per operation. When n <= 0, the query fetches all results in one operation

func (*Query) Functions added in v1.9.2

func (q *Query) Functions(fields ...string) *Query

Functions adds select functions to the query

func (*Query) Get

func (q *Query) Get() (item interface{}, found bool)

Get executes the query and returns the first item; panics on error

func (*Query) GetJson

func (q *Query) GetJson() (json []byte, found bool)

GetJson executes the query and returns the first item as JSON; panics on error

func (*Query) InnerJoin

func (q *Query) InnerJoin(q2 *Query, field string) *Query

InnerJoin joins two queries - items from the first query are filtered by, and expanded with, data from the joined query

func (*Query) Join

func (q *Query) Join(q2 *Query, field string) *Query

Join joins two queries; alias for LeftJoin

func (*Query) JoinHandler

func (q *Query) JoinHandler(field string, handler JoinHandler) *Query

JoinHandler sets handler for join results

func (*Query) LeftJoin

func (q *Query) LeftJoin(q2 *Query, field string) *Query

LeftJoin joins two queries - items from the first query are expanded with data from the second query

func (*Query) Limit

func (q *Query) Limit(limitItems int) *Query

Limit - Set limit (count) of returned items

func (*Query) Match

func (q *Query) Match(index string, keys ...string) *Query

Match - Add a where condition (EQ) to the DB query with string args

func (*Query) Merge

func (q *Query) Merge(q2 *Query) *Query

Merge 2 queries

func (*Query) MustExec

func (q *Query) MustExec() *Iterator

MustExec executes the query and returns an iterator; panics on error

func (*Query) Not

func (q *Query) Not() *Query

Not - The next condition will be added with AND NOT

func (*Query) Offset

func (q *Query) Offset(startOffset int) *Query

Offset - Set start offset of returned items

func (*Query) On

func (q *Query) On(index string, condition int, joinIndex string) *Query

On - Add a join condition

func (*Query) Or

func (q *Query) Or() *Query

Or - The next condition will be added with OR

func (*Query) ReqTotal

func (q *Query) ReqTotal(totalNames ...string) *Query

ReqTotal Request total items calculation

func (*Query) Select

func (q *Query) Select(fields ...string) *Query

Select filters which fields of the result objects are returned

func (*Query) SetContext

func (q *Query) SetContext(ctx interface{}) *Query

SetContext sets an interface that will be passed to the Joinable interface

func (*Query) Sort

func (q *Query) Sort(sortIndex string, desc bool, values ...interface{}) *Query

Sort - Apply a sort order to the items returned by the query. If the values argument is specified, items equal to those values, if found, will be placed at the top positions. For composite indexes, values must be []interface{} with a value for each subindex
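The forced-values behavior can be sketched with a standalone helper. `forcedSort` is a hypothetical reimplementation of the documented semantics, not the package's code: items equal to the forced values go to the top (in the order given), and the rest are sorted by key:

```go
package main

import (
	"fmt"
	"sort"
)

// forcedSort illustrates Sort's documented semantics: values listed in
// `forced` are placed at the top positions (in the given order), the
// remaining items are sorted by key, descending when desc is true.
func forcedSort(years []int, desc bool, forced ...int) []int {
	rank := func(y int) int {
		for i, f := range forced {
			if y == f {
				return i // forced values first, in given order
			}
		}
		return len(forced)
	}
	out := append([]int(nil), years...)
	sort.SliceStable(out, func(i, j int) bool {
		ri, rj := rank(out[i]), rank(out[j])
		if ri != rj {
			return ri < rj
		}
		if desc {
			return out[i] > out[j]
		}
		return out[i] < out[j]
	})
	return out
}

func main() {
	// 2005 is forced to the top; the rest sort descending.
	fmt.Println(forcedSort([]int{2001, 2010, 1999, 2005}, true, 2005))
	// → [2005 2010 2001 1999]
}
```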

func (*Query) Where

func (q *Query) Where(index string, condition int, keys interface{}) *Query

Where - Add a where condition to the DB query. For composite indexes, keys must be []interface{} with a value for each subindex

func (*Query) WhereBool

func (q *Query) WhereBool(index string, condition int, keys ...bool) *Query

WhereBool - Add a where condition to the DB query with bool args

func (*Query) WhereComposite added in v1.9.2

func (q *Query) WhereComposite(index string, condition int, keys ...interface{}) *Query

WhereComposite - Add where condition to DB query with interface args for composite indexes

func (*Query) WhereDouble

func (q *Query) WhereDouble(index string, condition int, keys ...float64) *Query

WhereDouble - Add where condition to DB query with float args

func (*Query) WhereInt

func (q *Query) WhereInt(index string, condition int, keys ...int) *Query

WhereInt - Add where condition to DB query with int args

func (*Query) WhereInt32 added in v1.10.0

func (q *Query) WhereInt32(index string, condition int, keys ...int32) *Query

WhereInt32 - Add a where condition to the DB query with int32 args

func (*Query) WhereInt64

func (q *Query) WhereInt64(index string, condition int, keys ...int64) *Query

WhereInt64 - Add where condition to DB query with int64 args

func (*Query) WhereString

func (q *Query) WhereString(index string, condition int, keys ...string) *Query

WhereString - Add where condition to DB query with string args

type QueryPerfStat added in v1.9.3

type QueryPerfStat struct {
	Query string `json:"query"`
	PerfStat
}

type Reindexer

type Reindexer struct {
	// contains filtered or unexported fields
}

Reindexer is the reindexer state struct

func NewReindex

func NewReindex(dsn string, options ...interface{}) *Reindexer

NewReindex creates a new instance of the Reindexer DB and returns a pointer to it

func (*Reindexer) AddIndex added in v1.9.7

func (db *Reindexer) AddIndex(namespace string, indexDef ...IndexDef) error

AddIndex - add index.

func (*Reindexer) BeginTx

func (db *Reindexer) BeginTx(namespace string) (*Tx, error)

BeginTx - start update transaction

func (*Reindexer) Close added in v1.9.7

func (db *Reindexer) Close()

func (*Reindexer) CloseNamespace

func (db *Reindexer) CloseNamespace(namespace string) error

CloseNamespace - close namespace, but keep storage

func (*Reindexer) ConfigureIndex

func (db *Reindexer) ConfigureIndex(namespace, index string, config interface{}) error

ConfigureIndex - Configure an index. [[deprecated]] Use UpdateIndex instead. The config argument must be a struct with the index configuration

func (*Reindexer) Delete

func (db *Reindexer) Delete(namespace string, item interface{}, precepts ...string) error

Delete - Remove an item from the namespace. The item must be of the same type as the one passed to OpenNamespace, or a []byte with JSON data

func (*Reindexer) DescribeNamespace

func (db *Reindexer) DescribeNamespace(namespace string) (*NamespaceDescription, error)

DescribeNamespace makes a 'SELECT * FROM #namespaces' query to the database and returns the NamespaceDescription result, or an error

func (*Reindexer) DescribeNamespaces

func (db *Reindexer) DescribeNamespaces() ([]*NamespaceDescription, error)

DescribeNamespaces makes a 'SELECT * FROM #namespaces' query to the database and returns the NamespaceDescription results, or an error

func (*Reindexer) DropIndex added in v1.9.3

func (db *Reindexer) DropIndex(namespace, index string) error

DropIndex - drop index.

func (*Reindexer) DropNamespace

func (db *Reindexer) DropNamespace(namespace string) error

DropNamespace - drop whole namespace from DB

func (*Reindexer) EnableStorage

func (db *Reindexer) EnableStorage(storagePath string) error

EnableStorage enables persistent storage of data. [[deprecated]] The storage path should be passed as part of the DSN to reindexer.NewReindex, e.g. reindexer.NewReindex("builtin:///tmp/reindex")

func (*Reindexer) ExecSQL

func (db *Reindexer) ExecSQL(query string) *Iterator

ExecSQL makes a query to the database from an SQL statement and returns an Iterator

func (*Reindexer) ExecSQLToJSON

func (db *Reindexer) ExecSQLToJSON(query string) *JSONIterator

func (*Reindexer) GetMeta added in v1.10.0

func (db *Reindexer) GetMeta(namespace, key string) ([]byte, error)

func (*Reindexer) GetNamespaceMemStat added in v1.9.3

func (db *Reindexer) GetNamespaceMemStat(namespace string) (*NamespaceMemStat, error)

GetNamespaceMemStat makes a 'SELECT * FROM #memstats' query to the database and returns the NamespaceMemStat result, or an error

func (*Reindexer) GetNamespacesMemStat added in v1.9.3

func (db *Reindexer) GetNamespacesMemStat() ([]*NamespaceMemStat, error)

GetNamespacesMemStat makes a 'SELECT * FROM #memstats' query to the database and returns the NamespaceMemStat results, or an error

func (*Reindexer) GetStats

func (db *Reindexer) GetStats() bindings.Stats

GetStats returns local thread reindexer usage stats. [[deprecated]]

func (*Reindexer) GetUpdatedAt

func (db *Reindexer) GetUpdatedAt(namespace string) (*time.Time, error)

GetUpdatedAt - Get the last update time of the namespace

func (*Reindexer) Insert

func (db *Reindexer) Insert(namespace string, item interface{}, precepts ...string) (int, error)

Insert an item into the namespace. The item must be of the same type as the one passed to OpenNamespace, or a []byte with JSON data. Returns 0 if no item was inserted, 1 if the item was inserted

func (*Reindexer) MustBeginTx

func (db *Reindexer) MustBeginTx(namespace string) *Tx

MustBeginTx - start update transaction, panic on error

func (*Reindexer) OpenNamespace

func (db *Reindexer) OpenNamespace(namespace string, opts *NamespaceOptions, s interface{}) (err error)

OpenNamespace opens or creates a namespace and its indexes based on the passed struct. Index fields of the struct are marked with the `reindex:` tag

func (*Reindexer) Ping

func (db *Reindexer) Ping() error

Ping checks connection with reindexer

func (*Reindexer) PutMeta added in v1.10.0

func (db *Reindexer) PutMeta(namespace, key string, data []byte) error

func (*Reindexer) Query

func (db *Reindexer) Query(namespace string) *Query

Query creates a new Query for building a request

func (*Reindexer) QueryFrom

func (db *Reindexer) QueryFrom(d dsl.DSL) (*Query, error)

func (*Reindexer) ResetStats

func (db *Reindexer) ResetStats()

ResetStats resets local thread reindexer usage stats. [[deprecated]]

func (*Reindexer) SetDefaultQueryDebug

func (db *Reindexer) SetDefaultQueryDebug(namespace string, level int)

SetDefaultQueryDebug sets default debug level for queries to namespaces

func (*Reindexer) SetLogger

func (db *Reindexer) SetLogger(log Logger)

SetLogger sets the logger interface for reindexer log output

func (*Reindexer) Status added in v1.10.0

func (db *Reindexer) Status() error

Status will return current db status

func (*Reindexer) Update

func (db *Reindexer) Update(namespace string, item interface{}, precepts ...string) (int, error)

Update an item in the namespace. The item must be of the same type as the one passed to OpenNamespace, or a []byte with JSON data. Returns 0 if no item was updated, 1 if the item was updated

func (*Reindexer) UpdateIndex added in v1.9.7

func (db *Reindexer) UpdateIndex(namespace string, indexDef IndexDef) error

UpdateIndex - update index.

func (*Reindexer) Upsert

func (db *Reindexer) Upsert(namespace string, item interface{}, precepts ...string) error

Upsert (Insert or Update) an item in the namespace. The item must be of the same type as the one passed to OpenNamespace, or a []byte with JSON data

type Tx

type Tx struct {
	// contains filtered or unexported fields
}

Tx is a pseudo-transaction object. Rollback is not implemented yet, so data will always be updated

func (*Tx) Commit

func (tx *Tx) Commit(updatedAt *time.Time) error

Commit applies the changes

func (*Tx) Delete

func (tx *Tx) Delete(s interface{}) error

Delete - remove item by id from namespace

func (*Tx) DeleteJSON

func (tx *Tx) DeleteJSON(json []byte) error

DeleteJSON - remove item by id from namespace

func (*Tx) Insert

func (tx *Tx) Insert(s interface{}) (int, error)

Insert (only) an item into the namespace

func (*Tx) MustCommit

func (tx *Tx) MustCommit(updatedAt *time.Time)

func (*Tx) Rollback

func (tx *Tx) Rollback() error

Rollback update

func (*Tx) Update

func (tx *Tx) Update(s interface{}) (int, error)

func (*Tx) Upsert

func (tx *Tx) Upsert(s interface{}) error

Upsert (Insert or Update) an item in the namespace

func (*Tx) UpsertJSON

func (tx *Tx) UpsertJSON(json []byte) error

UpsertJSON (Insert or Update) an item in the namespace from JSON data

Directories

Path Synopsis
test
