Documentation ¶
Overview ¶
Package bleve is a library for indexing and searching text.
Example Opening New Index, Indexing Data
message := struct {
    Id   string
    From string
    Body string
}{
    Id:   "example",
    From: "marty.schoch@gmail.com",
    Body: "bleve indexing is easy",
}

mapping := bleve.NewIndexMapping()
index, _ := bleve.New("example.bleve", mapping)
index.Index(message.Id, message)
Example Opening Existing Index, Searching Data
index, _ := bleve.Open("example.bleve")
query := bleve.NewQueryStringQuery("bleve")
searchRequest := bleve.NewSearchRequest(query)
searchResult, _ := index.Search(searchRequest)
Index ¶
- Constants
- Variables
- func DumpQuery(m *IndexMapping, query Query) (string, error)
- func NewBoolFieldQuery(val bool) *boolFieldQuery
- func NewBooleanQuery(must []Query, should []Query, mustNot []Query) *booleanQuery
- func NewBooleanQueryMinShould(must []Query, should []Query, mustNot []Query, minShould float64) *booleanQuery
- func NewConjunctionQuery(conjuncts []Query) *conjunctionQuery
- func NewDateRangeInclusiveQuery(start, end *string, startInclusive, endInclusive *bool) *dateRangeQuery
- func NewDateRangeQuery(start, end *string) *dateRangeQuery
- func NewDisjunctionQuery(disjuncts []Query) *disjunctionQuery
- func NewDisjunctionQueryMin(disjuncts []Query, min float64) *disjunctionQuery
- func NewDocIDQuery(ids []string) *docIDQuery
- func NewFuzzyQuery(term string) *fuzzyQuery
- func NewIndexAlias(indexes ...Index) *indexAliasImpl
- func NewMatchAllQuery() *matchAllQuery
- func NewMatchNoneQuery() *matchNoneQuery
- func NewMatchPhraseQuery(matchPhrase string) *matchPhraseQuery
- func NewMatchQuery(match string) *matchQuery
- func NewMatchQueryOperator(match string, operator MatchQueryOperator) *matchQuery
- func NewNumericRangeInclusiveQuery(min, max *float64, minInclusive, maxInclusive *bool) *numericRangeQuery
- func NewNumericRangeQuery(min, max *float64) *numericRangeQuery
- func NewPhraseQuery(terms []string, field string) *phraseQuery
- func NewPrefixQuery(prefix string) *prefixQuery
- func NewQueryStringQuery(query string) *queryStringQuery
- func NewRegexpQuery(regexp string) *regexpQuery
- func NewTermQuery(term string) *termQuery
- func NewWildcardQuery(wildcard string) *wildcardQuery
- func SetLog(l *log.Logger)
- type Batch
- type Classifier
- type DocumentMapping
- func (dm *DocumentMapping) AddFieldMapping(fm *FieldMapping)
- func (dm *DocumentMapping) AddFieldMappingsAt(property string, fms ...*FieldMapping)
- func (dm *DocumentMapping) AddSubDocumentMapping(property string, sdm *DocumentMapping)
- func (dm *DocumentMapping) UnmarshalJSON(data []byte) error
- func (dm *DocumentMapping) Validate(cache *registry.Cache) error
- type Error
- type FacetRequest
- type FacetsRequest
- type FieldMapping
- type HighlightRequest
- type Index
- func New(path string, mapping *IndexMapping) (Index, error)
- func NewMemOnly(mapping *IndexMapping) (Index, error)
- func NewUsing(path string, mapping *IndexMapping, indexType string, kvstore string, ...) (Index, error)
- func Open(path string) (Index, error)
- func OpenUsing(path string, runtimeConfig map[string]interface{}) (Index, error)
- type IndexAlias
- type IndexErrMap
- type IndexMapping
- func (im *IndexMapping) AddCustomAnalyzer(name string, config map[string]interface{}) error
- func (im *IndexMapping) AddCustomCharFilter(name string, config map[string]interface{}) error
- func (im *IndexMapping) AddCustomDateTimeParser(name string, config map[string]interface{}) error
- func (im *IndexMapping) AddCustomTokenFilter(name string, config map[string]interface{}) error
- func (im *IndexMapping) AddCustomTokenMap(name string, config map[string]interface{}) error
- func (im *IndexMapping) AddCustomTokenizer(name string, config map[string]interface{}) error
- func (im *IndexMapping) AddDocumentMapping(doctype string, dm *DocumentMapping)
- func (im *IndexMapping) AnalyzeText(analyzerName string, text []byte) (analysis.TokenStream, error)
- func (im *IndexMapping) FieldAnalyzer(field string) string
- func (im *IndexMapping) UnmarshalJSON(data []byte) error
- func (im *IndexMapping) Validate() error
- type IndexStat
- type IndexStats
- type MatchQueryOperator
- type Query
- type SearchRequest
- type SearchResult
- type SearchStatus
Examples ¶
- DocumentMapping.AddFieldMapping
- DocumentMapping.AddFieldMappingsAt
- DocumentMapping.AddSubDocumentMapping
- FacetRequest.AddDateTimeRange
- FacetRequest.AddNumericRange
- Index (Indexing)
- New
- NewBooleanQuery
- NewBooleanQueryMinShould
- NewConjunctionQuery
- NewDisjunctionQuery
- NewDisjunctionQueryMin
- NewFacetRequest
- NewHighlight
- NewHighlightWithStyle
- NewMatchAllQuery
- NewMatchNoneQuery
- NewMatchPhraseQuery
- NewMatchQuery
- NewMatchQueryOperator
- NewNumericRangeInclusiveQuery
- NewNumericRangeQuery
- NewPhraseQuery
- NewPrefixQuery
- NewQueryStringQuery
- NewSearchRequest
- NewTermQuery
- SearchRequest.AddFacet
- SearchRequest.SortBy
- SearchRequest.SortByCustom
Constants ¶
const (
    // Document must satisfy AT LEAST ONE of term searches.
    MatchQueryOperatorOr = 0
    // Document must satisfy ALL of term searches.
    MatchQueryOperatorAnd = 1
)
Variables ¶
var (
    IndexDynamic = true
    StoreDynamic = true
)
IndexDynamic and StoreDynamic control the default behavior for dynamic fields (those not explicitly mapped).
var Config *configuration
Config contains library level configuration
var MappingJSONStrict = false
Functions ¶
func DumpQuery ¶
func DumpQuery(m *IndexMapping, query Query) (string, error)
DumpQuery returns a string representation of the query tree, where query string queries have been expanded into base queries. The output format is meant for debugging purpose and may change in the future.
func NewBoolFieldQuery ¶
func NewBoolFieldQuery(val bool) *boolFieldQuery
NewBoolFieldQuery creates a new Query for boolean fields
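A brief sketch of how a bool field query might be used; the "Active" field name is hypothetical and not part of the documents indexed in the examples on this page:

query := NewBoolFieldQuery(true)
query.SetField("Active") // hypothetical boolean field
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
// number of documents whose Active field is true
fmt.Println(searchResults.Total)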
func NewBooleanQuery ¶
NewBooleanQuery creates a compound Query composed of several other Query objects. Result documents must satisfy ALL of the must Queries. Result documents must satisfy NONE of the must not Queries. Result documents that ALSO satisfy any of the should Queries will score higher.
Example ¶
must := make([]Query, 1)
mustNot := make([]Query, 1)
must[0] = NewMatchQuery("one")
mustNot[0] = NewMatchQuery("great")
query := NewBooleanQuery(must, nil, mustNot)
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 1
func NewBooleanQueryMinShould ¶
func NewBooleanQueryMinShould(must []Query, should []Query, mustNot []Query, minShould float64) *booleanQuery
NewBooleanQueryMinShould is the same as NewBooleanQuery, only it offers control of the minimum number of should queries that must be satisfied.
Example ¶
should := make([]Query, 2)
should[0] = NewMatchQuery("great")
should[1] = NewMatchQuery("one")
query := NewBooleanQueryMinShould(nil, should, nil, float64(2))
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 2
func NewConjunctionQuery ¶
func NewConjunctionQuery(conjuncts []Query) *conjunctionQuery
NewConjunctionQuery creates a new compound Query. Result documents must satisfy all of the queries.
Example ¶
conjuncts := make([]Query, 2)
conjuncts[0] = NewMatchQuery("great")
conjuncts[1] = NewMatchQuery("one")
query := NewConjunctionQuery(conjuncts)
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 2
func NewDateRangeInclusiveQuery ¶
func NewDateRangeInclusiveQuery(start, end *string, startInclusive, endInclusive *bool) *dateRangeQuery
NewDateRangeInclusiveQuery creates a new Query for ranges of date values. Date strings are parsed using the DateTimeParser configured in the top-level config.QueryDateTimeParser. Either, but not both, endpoints can be nil. startInclusive and endInclusive control inclusion of the endpoints.
func NewDateRangeQuery ¶
func NewDateRangeQuery(start, end *string) *dateRangeQuery
NewDateRangeQuery creates a new Query for ranges of date values. Date strings are parsed using the DateTimeParser configured in the top-level config.QueryDateTimeParser. Either, but not both, endpoints can be nil.
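A sketch of a date range over the Created field used in the indexing example below, assuming the default DateTimeParser accepts the "2006-01-02" layout:

start := "2010-01-01"
end := "2030-01-01"
query := NewDateRangeQuery(&start, &end)
query.SetField("Created")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
// documents whose Created date falls inside the range
fmt.Println(searchResults.Total)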
func NewDisjunctionQuery ¶
func NewDisjunctionQuery(disjuncts []Query) *disjunctionQuery
NewDisjunctionQuery creates a new compound Query. Result documents satisfy at least one Query.
Example ¶
disjuncts := make([]Query, 2)
disjuncts[0] = NewMatchQuery("great")
disjuncts[1] = NewMatchQuery("named")
query := NewDisjunctionQuery(disjuncts)
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(len(searchResults.Hits))
Output: 2
func NewDisjunctionQueryMin ¶
NewDisjunctionQueryMin creates a new compound Query. Result documents satisfy at least min Queries.
Example ¶
disjuncts := make([]Query, 2)
disjuncts[0] = NewMatchQuery("great")
disjuncts[1] = NewMatchQuery("named")
query := NewDisjunctionQueryMin(disjuncts, float64(2))
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(len(searchResults.Hits))
Output: 0
func NewDocIDQuery ¶
func NewDocIDQuery(ids []string) *docIDQuery
NewDocIDQuery creates a new Query returning only the indexed documents among the specified set of document IDs. Combine it with a ConjunctionQuery to restrict the scope of other queries' output.
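Following the pattern of the other examples on this page, a sketch of restricting a match query to a known set of document IDs via a conjunction:

conjuncts := make([]Query, 2)
conjuncts[0] = NewDocIDQuery([]string{"document id 1", "document id 2"})
conjuncts[1] = NewMatchQuery("one")
query := NewConjunctionQuery(conjuncts)
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
// only hits from the listed IDs are considered
fmt.Println(searchResults.Total)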
func NewFuzzyQuery ¶
func NewFuzzyQuery(term string) *fuzzyQuery
NewFuzzyQuery creates a new Query which finds documents containing terms within a specific fuzziness of the specified term. The default fuzziness is 2.
The current implementation uses Levenshtein edit distance as the fuzziness metric.
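A sketch against the documents from the indexing example below; "nameles" is within the default fuzziness (edit distance 2) of the indexed term "nameless":

query := NewFuzzyQuery("nameles")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Total)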
func NewIndexAlias ¶
func NewIndexAlias(indexes ...Index) *indexAliasImpl
NewIndexAlias creates a new IndexAlias over the provided Index objects.
func NewMatchAllQuery ¶
func NewMatchAllQuery() *matchAllQuery
NewMatchAllQuery creates a Query which will match all documents in the index.
Example ¶
// finds all documents in the index
query := NewMatchAllQuery()
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(len(searchResults.Hits))
Output: 2
func NewMatchNoneQuery ¶
func NewMatchNoneQuery() *matchNoneQuery
NewMatchNoneQuery creates a Query which will not match any documents in the index.
Example ¶
// matches no documents in the index
query := NewMatchNoneQuery()
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(len(searchResults.Hits))
Output: 0
func NewMatchPhraseQuery ¶
func NewMatchPhraseQuery(matchPhrase string) *matchPhraseQuery
NewMatchPhraseQuery creates a new Query object for matching phrases in the index. An Analyzer is chosen based on the field. Input text is analyzed using this analyzer. Token terms resulting from this analysis are used to build a search phrase. Result documents must match this phrase. The queried field must have been indexed with IncludeTermVectors set to true.
Example ¶
// finds all documents with the given phrase in the index
query := NewMatchPhraseQuery("nameless one")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 2
func NewMatchQuery ¶
func NewMatchQuery(match string) *matchQuery
NewMatchQuery creates a Query for matching text. An Analyzer is chosen based on the field. Input text is analyzed using this analyzer. Token terms resulting from this analysis are used to perform term searches. Result documents must satisfy at least one of these term searches.
Example ¶
// finds documents with fields fully matching the given query text
query := NewMatchQuery("named one")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 1
func NewMatchQueryOperator ¶ added in v0.4.0
func NewMatchQueryOperator(match string, operator MatchQueryOperator) *matchQuery
NewMatchQueryOperator creates a Query for matching text. An Analyzer is chosen based on the field. Input text is analyzed using this analyzer. Token terms resulting from this analysis are used to perform term searches. Result documents must satisfy the term searches according to the given operator.
Example ¶
query := NewMatchQueryOperator("great one", MatchQueryOperatorAnd)
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 2
func NewNumericRangeInclusiveQuery ¶
func NewNumericRangeInclusiveQuery(min, max *float64, minInclusive, maxInclusive *bool) *numericRangeQuery
NewNumericRangeInclusiveQuery creates a new Query for ranges of numeric values. Either, but not both, endpoints can be nil. Control endpoint inclusion with minInclusive and maxInclusive.
Example ¶
value1 := float64(10)
value2 := float64(100)
v1incl := false
v2incl := false
query := NewNumericRangeInclusiveQuery(&value1, &value2, &v1incl, &v2incl)
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 3
func NewNumericRangeQuery ¶
func NewNumericRangeQuery(min, max *float64) *numericRangeQuery
NewNumericRangeQuery creates a new Query for ranges of numeric values. Either, but not both, endpoints can be nil. The minimum value is inclusive. The maximum value is exclusive.
Example ¶
value1 := float64(11)
value2 := float64(100)
data := struct{ Priority float64 }{Priority: float64(15)}
data2 := struct{ Priority float64 }{Priority: float64(10)}
err = example_index.Index("document id 3", data)
if err != nil {
    panic(err)
}
err = example_index.Index("document id 4", data2)
if err != nil {
    panic(err)
}
query := NewNumericRangeQuery(&value1, &value2)
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 3
func NewPhraseQuery ¶
NewPhraseQuery creates a new Query for finding exact term phrases in the index. The provided terms must exist in the correct order, at the correct index offsets, in the specified field. The queried field must have been indexed with IncludeTermVectors set to true.
Example ¶
// finds all documents with the given phrases in the given field in the index
query := NewPhraseQuery([]string{"nameless", "one"}, "Name")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 2
func NewPrefixQuery ¶
func NewPrefixQuery(prefix string) *prefixQuery
NewPrefixQuery creates a new Query which finds documents containing terms that start with the specified prefix.
Example ¶
// finds all documents with terms having the given prefix in the index
query := NewPrefixQuery("name")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(len(searchResults.Hits))
Output: 2
func NewQueryStringQuery ¶
func NewQueryStringQuery(query string) *queryStringQuery
NewQueryStringQuery creates a new Query used for finding documents that satisfy a query string. The query string is a small query language for humans.
Example ¶
query := NewQueryStringQuery("+one -great")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 1
func NewRegexpQuery ¶
func NewRegexpQuery(regexp string) *regexpQuery
NewRegexpQuery creates a new Query which finds documents containing terms that match the specified regular expression.
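A sketch; since the regular expression is matched against individual indexed terms, this would match terms such as "named" and "nameless" from the indexing example below:

query := NewRegexpQuery("name.*")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Total)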
func NewTermQuery ¶
func NewTermQuery(term string) *termQuery
NewTermQuery creates a new Query for finding an exact term match in the index.
Example ¶
query := NewTermQuery("great")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 2
func NewWildcardQuery ¶
func NewWildcardQuery(wildcard string) *wildcardQuery
NewWildcardQuery creates a new Query which finds documents containing terms that match the specified wildcard. In the wildcard pattern '*' will match any sequence of 0 or more characters, and '?' will match any single character.
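A sketch matching any term that starts with "name" in the documents from the indexing example below:

query := NewWildcardQuery("name*")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Total)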
Types ¶
type Batch ¶
type Batch struct {
// contains filtered or unexported fields
}
A Batch groups together multiple Index and Delete operations you would like performed at the same time. The Batch structure is NOT thread-safe. You should only perform operations on a batch from a single thread at a time. Once batch execution has started, you may not modify it.
func (*Batch) Delete ¶
Delete adds the specified delete operation to the batch. NOTE: the bleve Index is not updated until the batch is executed.
func (*Batch) DeleteInternal ¶
DeleteInternal adds the specified delete internal operation to the batch. NOTE: the bleve Index is not updated until the batch is executed.
func (*Batch) Index ¶
Index adds the specified index operation to the batch. NOTE: the bleve Index is not updated until the batch is executed.
func (*Batch) Reset ¶
func (b *Batch) Reset()
Reset returns a Batch to the empty state so that it can be re-used in the future.
func (*Batch) SetInternal ¶
SetInternal adds the specified set internal operation to the batch. NOTE: the bleve Index is not updated until the batch is executed.
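A minimal sketch of batching the operations described above, assuming Batch.Index reports mapping errors immediately while the index itself is only updated when the batch is executed:

batch := example_index.NewBatch()
err := batch.Index("document id 5", struct{ Name string }{Name: "batched one"})
if err != nil {
    panic(err)
}
batch.Delete("document id 4")
// the index is unchanged until the batch is executed
err = example_index.Batch(batch)
if err != nil {
    panic(err)
}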
type Classifier ¶
type Classifier interface {
Type() string
}
A Classifier is an interface describing any object which knows how to identify its own type.
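For example, a type can pick its own DocumentMapping by implementing Classifier; the Product type here is hypothetical:

type Product struct {
    Name string
}

// Type satisfies the Classifier interface, so Product values are indexed
// using the DocumentMapping registered for the "product" type.
func (p Product) Type() string {
    return "product"
}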
type DocumentMapping ¶
type DocumentMapping struct {
    Enabled         bool                        `json:"enabled"`
    Dynamic         bool                        `json:"dynamic"`
    Properties      map[string]*DocumentMapping `json:"properties,omitempty"`
    Fields          []*FieldMapping             `json:"fields,omitempty"`
    DefaultAnalyzer string                      `json:"default_analyzer"`
}
A DocumentMapping describes how a type of document should be indexed. As documents can be hierarchical, named sub-sections of documents are mapped using the same structure in the Properties field. Each value inside a document can be indexed 0 or more ways. These index entries are called fields and are stored in the Fields field. Entire sections of a document can be ignored or excluded by setting Enabled to false. If not explicitly mapped, default mapping operations are used. To disable this automatic handling, set Dynamic to false.
func NewDocumentDisabledMapping ¶
func NewDocumentDisabledMapping() *DocumentMapping
NewDocumentDisabledMapping returns a new document mapping that will not perform any indexing.
func NewDocumentMapping ¶
func NewDocumentMapping() *DocumentMapping
NewDocumentMapping returns a new document mapping with all the default values.
func NewDocumentStaticMapping ¶
func NewDocumentStaticMapping() *DocumentMapping
NewDocumentStaticMapping returns a new document mapping that will not automatically index parts of a document without an explicit mapping.
func (*DocumentMapping) AddFieldMapping ¶
func (dm *DocumentMapping) AddFieldMapping(fm *FieldMapping)
AddFieldMapping adds the provided FieldMapping for this section of the document.
Example ¶
// you can only add field mapping to those properties which already have a document mapping
documentMapping := NewDocumentMapping()
subDocumentMapping := NewDocumentMapping()
documentMapping.AddSubDocumentMapping("Property", subDocumentMapping)

fieldMapping := NewTextFieldMapping()
fieldMapping.Analyzer = "en"
subDocumentMapping.AddFieldMapping(fieldMapping)

fmt.Println(len(documentMapping.Properties["Property"].Fields))
Output: 1
func (*DocumentMapping) AddFieldMappingsAt ¶
func (dm *DocumentMapping) AddFieldMappingsAt(property string, fms ...*FieldMapping)
AddFieldMappingsAt adds one or more FieldMappings at the named sub-document. If the named sub-document doesn't yet exist it is created for you. This is a convenience function to make most common mappings more concise. Otherwise, you would:
subMapping := NewDocumentMapping()
subMapping.AddFieldMapping(fieldMapping)
parentMapping.AddSubDocumentMapping(property, subMapping)
Example ¶
// you can only add field mapping to those properties which already have a document mapping
documentMapping := NewDocumentMapping()
subDocumentMapping := NewDocumentMapping()
documentMapping.AddSubDocumentMapping("NestedProperty", subDocumentMapping)

fieldMapping := NewTextFieldMapping()
fieldMapping.Analyzer = "en"
documentMapping.AddFieldMappingsAt("NestedProperty", fieldMapping)

fmt.Println(len(documentMapping.Properties["NestedProperty"].Fields))
Output: 1
func (*DocumentMapping) AddSubDocumentMapping ¶
func (dm *DocumentMapping) AddSubDocumentMapping(property string, sdm *DocumentMapping)
AddSubDocumentMapping adds the provided DocumentMapping as a sub-mapping for the specified named subsection.
Example ¶
// adds a document mapping for a property in a document
// useful for mapping nested documents
documentMapping := NewDocumentMapping()
subDocumentMapping := NewDocumentMapping()
documentMapping.AddSubDocumentMapping("Property", subDocumentMapping)

fmt.Println(len(documentMapping.Properties))
Output: 1
func (*DocumentMapping) UnmarshalJSON ¶
func (dm *DocumentMapping) UnmarshalJSON(data []byte) error
UnmarshalJSON offers custom unmarshaling with optional strict validation
type Error ¶
type Error int
Error represents a more strongly typed bleve error for detecting and handling specific types of errors.
const (
    ErrorIndexPathExists Error = iota
    ErrorIndexPathDoesNotExist
    ErrorIndexMetaMissing
    ErrorIndexMetaCorrupt
    ErrorDisjunctionFewerThanMinClauses
    ErrorBooleanQueryNeedsMustOrShouldOrNotMust
    ErrorNumericQueryNoBounds
    ErrorPhraseQueryNoTerms
    ErrorUnknownQueryType
    ErrorUnknownStorageType
    ErrorIndexClosed
    ErrorAliasMulti
    ErrorAliasEmpty
    ErrorUnknownIndexType
    ErrorEmptyID
    ErrorIndexReadInconsistency
)
Constant Error values which can be compared to determine the type of error
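A hedged sketch of comparing against these constants, assuming Open reports a missing index path as ErrorIndexPathDoesNotExist:

index, err := bleve.Open("example.bleve")
if err == bleve.ErrorIndexPathDoesNotExist {
    // no index there yet, create one instead
    index, err = bleve.New("example.bleve", bleve.NewIndexMapping())
}
if err != nil {
    panic(err)
}
defer index.Close()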
type FacetRequest ¶
type FacetRequest struct {
    Size           int              `json:"size"`
    Field          string           `json:"field"`
    NumericRanges  []*numericRange  `json:"numeric_ranges,omitempty"`
    DateTimeRanges []*dateTimeRange `json:"date_ranges,omitempty"`
}
A FacetRequest describes a facet or aggregation of the result document set you would like to be built.
func NewFacetRequest ¶
func NewFacetRequest(field string, size int) *FacetRequest
NewFacetRequest creates a facet on the specified field that limits the number of entries to the specified size.
Example ¶
facet := NewFacetRequest("Name", 1)
query := NewMatchAllQuery()
searchRequest := NewSearchRequest(query)
searchRequest.AddFacet("facet name", facet)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
// total number of terms
fmt.Println(searchResults.Facets["facet name"].Total)
// number of docs with no value for this field
fmt.Println(searchResults.Facets["facet name"].Missing)
// term with highest occurrences in field name
fmt.Println(searchResults.Facets["facet name"].Terms[0].Term)

Output:

5
2
one
func (*FacetRequest) AddDateTimeRange ¶
func (fr *FacetRequest) AddDateTimeRange(name string, start, end time.Time)
AddDateTimeRange adds a bucket to a field containing date values. Documents with a date value falling into this range are tabulated as part of this bucket/range.
Example ¶
facet := NewFacetRequest("Created", 1)
facet.AddDateTimeRange("range name", time.Unix(0, 0), time.Now())
query := NewMatchAllQuery()
searchRequest := NewSearchRequest(query)
searchRequest.AddFacet("facet name", facet)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
// documents with a Created date between the Unix epoch and now
fmt.Println(searchResults.Facets["facet name"].DateRanges[0].Count)
Output: 2
func (*FacetRequest) AddNumericRange ¶
func (fr *FacetRequest) AddNumericRange(name string, min, max *float64)
AddNumericRange adds a bucket to a field containing numeric values. Documents with a numeric value falling into this range are tabulated as part of this bucket/range.
Example ¶
value1 := float64(11)
facet := NewFacetRequest("Priority", 1)
facet.AddNumericRange("range name", &value1, nil)
query := NewMatchAllQuery()
searchRequest := NewSearchRequest(query)
searchRequest.AddFacet("facet name", facet)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
// number of documents with field Priority in the given range
fmt.Println(searchResults.Facets["facet name"].NumericRanges[0].Count)
Output: 1
func (*FacetRequest) Validate ¶ added in v0.3.0
func (fr *FacetRequest) Validate() error
type FacetsRequest ¶
type FacetsRequest map[string]*FacetRequest
FacetsRequest groups together all the FacetRequest objects for a single query.
func (FacetsRequest) Validate ¶ added in v0.3.0
func (fr FacetsRequest) Validate() error
type FieldMapping ¶
type FieldMapping struct {
    Name string `json:"name,omitempty"`
    Type string `json:"type,omitempty"`

    // Analyzer specifies the name of the analyzer to use for this field. If
    // Analyzer is empty, traverse the DocumentMapping tree toward the root and
    // pick the first non-empty DefaultAnalyzer found. If there is none, use
    // the IndexMapping.DefaultAnalyzer.
    Analyzer string `json:"analyzer,omitempty"`

    // Store indicates whether to store field values in the index. Stored
    // values can be retrieved from search results using SearchRequest.Fields.
    Store bool `json:"store,omitempty"`
    Index bool `json:"index,omitempty"`

    // IncludeTermVectors, if true, causes term occurrences to be recorded for
    // this field. This includes the term position within the term sequence and
    // the term offsets in the source document field. Term vectors are required
    // to perform phrase queries or term highlighting in source documents.
    IncludeTermVectors bool   `json:"include_term_vectors,omitempty"`
    IncludeInAll       bool   `json:"include_in_all,omitempty"`
    DateFormat         string `json:"date_format,omitempty"`
}
A FieldMapping describes how a specific item should be put into the index.
func NewBooleanFieldMapping ¶
func NewBooleanFieldMapping() *FieldMapping
NewBooleanFieldMapping returns a default field mapping for booleans
func NewDateTimeFieldMapping ¶
func NewDateTimeFieldMapping() *FieldMapping
NewDateTimeFieldMapping returns a default field mapping for dates
func NewNumericFieldMapping ¶
func NewNumericFieldMapping() *FieldMapping
NewNumericFieldMapping returns a default field mapping for numbers
func NewTextFieldMapping ¶
func NewTextFieldMapping() *FieldMapping
NewTextFieldMapping returns a default field mapping for text
func (*FieldMapping) Options ¶
func (fm *FieldMapping) Options() document.IndexingOptions
Options returns the indexing options for this field.
func (*FieldMapping) UnmarshalJSON ¶
func (fm *FieldMapping) UnmarshalJSON(data []byte) error
UnmarshalJSON offers custom unmarshaling with optional strict validation
type HighlightRequest ¶
HighlightRequest describes how field matches should be highlighted.
func NewHighlight ¶
func NewHighlight() *HighlightRequest
NewHighlight creates a default HighlightRequest.
Example ¶
query := NewMatchQuery("nameless")
searchRequest := NewSearchRequest(query)
searchRequest.Highlight = NewHighlight()
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].Fragments["Name"][0])
Output: great <mark>nameless</mark> one
func NewHighlightWithStyle ¶
func NewHighlightWithStyle(style string) *HighlightRequest
NewHighlightWithStyle creates a HighlightRequest with an alternate style.
Example ¶
query := NewMatchQuery("nameless")
searchRequest := NewSearchRequest(query)
searchRequest.Highlight = NewHighlightWithStyle(ansi.Name)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].Fragments["Name"][0])
Output: great �[43mnameless�[0m one
func (*HighlightRequest) AddField ¶
func (h *HighlightRequest) AddField(field string)
type Index ¶
type Index interface {
    // Index analyzes, indexes or stores mapped data fields. Supplied
    // identifier is bound to analyzed data and will be retrieved by search
    // requests. See Index interface documentation for details about mapping
    // rules.
    Index(id string, data interface{}) error
    Delete(id string) error

    NewBatch() *Batch
    Batch(b *Batch) error

    // Document returns specified document or nil if the document is not
    // indexed or stored.
    Document(id string) (*document.Document, error)
    // DocCount returns the number of documents in the index.
    DocCount() (uint64, error)

    Search(req *SearchRequest) (*SearchResult, error)
    SearchInContext(ctx context.Context, req *SearchRequest) (*SearchResult, error)

    Fields() ([]string, error)

    FieldDict(field string) (index.FieldDict, error)
    FieldDictRange(field string, startTerm []byte, endTerm []byte) (index.FieldDict, error)
    FieldDictPrefix(field string, termPrefix []byte) (index.FieldDict, error)

    Close() error

    Mapping() *IndexMapping

    Stats() *IndexStat
    StatsMap() map[string]interface{}

    GetInternal(key []byte) ([]byte, error)
    SetInternal(key, val []byte) error
    DeleteInternal(key []byte) error

    // Name returns the name of the index (by default this is the path)
    Name() string

    // SetName lets you assign your own logical name to this index
    SetName(string)

    // Advanced returns the indexer and data store, exposing lower level
    // methods to enumerate records and access data.
    Advanced() (index.Index, store.KVStore, error)
}
An Index implements all the indexing and searching capabilities of bleve. An Index can be created using the New() and Open() methods.
Index() takes an input value, deduces a DocumentMapping for its type, assigns string paths to its fields or values, then applies field mappings to them.
If the value is a []byte, the indexer attempts to convert it to something else using the ByteArrayConverter registered as IndexMapping.ByteArrayConverter. By default, it interprets the value as a JSON payload and unmarshals it to map[string]interface{}.
The DocumentMapping used to index a value is deduced by the following rules:

1. If the value implements the Classifier interface, resolve the mapping from Type().
2. If the value has a string field or value at IndexMapping.TypeField (defaulting to "_type"), use it to resolve the mapping. Field addressing is described below.
3. If IndexMapping.DefaultType is registered, return it.
4. Return IndexMapping.DefaultMapping.
Each field or nested field of the value is identified by a string path, then mapped to one or several FieldMappings which extract the result for analysis.
Struct values fields are identified by their "json:" tag, or by their name. Nested fields are identified by prefixing with their parent identifier, separated by a dot.
Map values entries are identified by their string key. Entries not indexed by strings are ignored. Entry values are identified recursively like struct fields.
Slice and array values are identified by their field name. Their elements are processed sequentially with the same FieldMapping.
String, float64 and time.Time values are identified by their field name. Other types are ignored.
Each value identifier is decomposed into its parts, which recursively address SubDocumentMappings in the tree starting at the root DocumentMapping. If a mapping is found, all of its FieldMappings are applied to the value. If no mapping is found and the root DocumentMapping is dynamic, default mappings are used based on the value type and the IndexMapping default configurations.
Finally, mapped values are analyzed, indexed or stored. See FieldMapping.Analyzer to know how an analyzer is resolved for a given field.
Examples:
type Date struct {
    Day   string `json:"day"`
    Month string
    Year  string
}

type Person struct {
    FirstName string `json:"first_name"`
    LastName  string
    BirthDate Date `json:"birth_date"`
}
A Person value FirstName is mapped by the SubDocumentMapping at "first_name". Its LastName is mapped by the one at "LastName". The day of BirthDate is mapped to the SubDocumentMapping "day" of the root SubDocumentMapping "birth_date". It will appear as the "birth_date.day" field in the index. The month is mapped to "birth_date.Month".
Example (Indexing) ¶
data := struct {
    Name    string
    Created time.Time
    Age     int
}{Name: "named one", Created: time.Now(), Age: 50}
data2 := struct {
    Name    string
    Created time.Time
    Age     int
}{Name: "great nameless one", Created: time.Now(), Age: 25}

// index some data
err = example_index.Index("document id 1", data)
if err != nil {
    panic(err)
}
err = example_index.Index("document id 2", data2)
if err != nil {
    panic(err)
}

// 2 documents have been indexed
count, err := example_index.DocCount()
if err != nil {
    panic(err)
}
fmt.Println(count)
Output: 2
func New ¶
func New(path string, mapping *IndexMapping) (Index, error)
New creates an index at the specified path, which must not already exist. The provided mapping will be used for all Index/Search operations.
Example ¶
mapping = NewIndexMapping()
example_index, err = New("path_to_index", mapping)
if err != nil {
    panic(err)
}
count, err := example_index.DocCount()
if err != nil {
    panic(err)
}
fmt.Println(count)
Output: 0
func NewMemOnly ¶ added in v0.5.0
func NewMemOnly(mapping *IndexMapping) (Index, error)
NewMemOnly creates a memory-only index. The contents of the index is NOT persisted, and will be lost once closed. The provided mapping will be used for all Index/Search operations.
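A short sketch of a throwaway in-memory index:

mapping := bleve.NewIndexMapping()
index, err := bleve.NewMemOnly(mapping)
if err != nil {
    panic(err)
}
defer index.Close()
// indexed data lives only as long as the process
err = index.Index("id 1", struct{ Body string }{Body: "kept in memory only"})
if err != nil {
    panic(err)
}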
func NewUsing ¶
func NewUsing(path string, mapping *IndexMapping, indexType string, kvstore string, kvconfig map[string]interface{}) (Index, error)
NewUsing creates an index at the specified path, which must not already exist. The provided mapping will be used for all Index/Search operations. The specified index type will be used. The specified kvstore implementation will be used, and the provided kvconfig will be passed to its constructor.
func Open ¶
Open opens the index at the specified path, which must exist. The mapping used when it was created will be used for all Index/Search operations.
type IndexAlias ¶
An IndexAlias is a wrapper around one or more Index objects. It has two distinct modes of operation.

1. When it points to a single index, ALL index operations are valid and will be passed through to the underlying index.
2. When it points to more than one index, the only valid operation is Search. In this case the search will be performed across all the underlying indexes and the results merged.

Calls to Add/Remove/Swap the underlying indexes are atomic, so you can safely change the underlying Index objects while other components are performing operations.
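A sketch of searching two indexes through an alias; index1 and index2 stand in for any two already-opened Index values:

alias := bleve.NewIndexAlias(index1, index2)
query := bleve.NewMatchQuery("bleve")
searchRequest := bleve.NewSearchRequest(query)
// the search runs across both indexes and the results are merged
searchResults, err := alias.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Total)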
type IndexErrMap ¶
IndexErrMap tracks errors by the name of the index where each occurred.
func (IndexErrMap) MarshalJSON ¶
func (iem IndexErrMap) MarshalJSON() ([]byte, error)
MarshalJSON serializes the error into a string for JSON consumption
func (IndexErrMap) UnmarshalJSON ¶ added in v0.3.0
func (iem IndexErrMap) UnmarshalJSON(data []byte) error
type IndexMapping ¶
type IndexMapping struct {
    TypeMapping           map[string]*DocumentMapping `json:"types,omitempty"`
    DefaultMapping        *DocumentMapping            `json:"default_mapping"`
    TypeField             string                      `json:"type_field"`
    DefaultType           string                      `json:"default_type"`
    DefaultAnalyzer       string                      `json:"default_analyzer"`
    DefaultDateTimeParser string                      `json:"default_datetime_parser"`
    DefaultField          string                      `json:"default_field"`
    StoreDynamic          bool                        `json:"store_dynamic"`
    IndexDynamic          bool                        `json:"index_dynamic"`
    CustomAnalysis        *customAnalysis             `json:"analysis,omitempty"`
    // contains filtered or unexported fields
}
An IndexMapping controls how objects are placed into an index. First the type of the object is determined. Once the type is known, the appropriate DocumentMapping is selected by type. If no mapping was determined for that type, the DefaultMapping is used.
func NewIndexMapping ¶
func NewIndexMapping() *IndexMapping
NewIndexMapping creates a new IndexMapping that will use all the default indexing rules
func (*IndexMapping) AddCustomAnalyzer ¶
func (im *IndexMapping) AddCustomAnalyzer(name string, config map[string]interface{}) error
AddCustomAnalyzer defines a custom analyzer for use in this mapping. The config map must have a "type" string entry to resolve the analyzer constructor. The constructor is invoked with the remaining entries, and the returned analyzer is registered in the IndexMapping.
bleve comes with predefined analyzers, like github.com/blevesearch/bleve/analysis/analyzers/custom_analyzer. They are available only if their package is imported by client code. To achieve this, use their metadata to fill configuration entries:
import (
    "github.com/blevesearch/bleve/analysis/analyzers/custom_analyzer"
    "github.com/blevesearch/bleve/analysis/char_filters/html_char_filter"
    "github.com/blevesearch/bleve/analysis/token_filters/lower_case_filter"
    "github.com/blevesearch/bleve/analysis/tokenizers/unicode"
)

m := bleve.NewIndexMapping()
err := m.AddCustomAnalyzer("html", map[string]interface{}{
    "type": custom_analyzer.Name,
    "char_filters": []string{
        html_char_filter.Name,
    },
    "tokenizer": unicode.Name,
    "token_filters": []string{
        lower_case_filter.Name,
        ...
    },
})
func (*IndexMapping) AddCustomCharFilter ¶
func (im *IndexMapping) AddCustomCharFilter(name string, config map[string]interface{}) error
AddCustomCharFilter defines a custom char filter for use in this mapping
func (*IndexMapping) AddCustomDateTimeParser ¶
func (im *IndexMapping) AddCustomDateTimeParser(name string, config map[string]interface{}) error
AddCustomDateTimeParser defines a custom date time parser for use in this mapping
func (*IndexMapping) AddCustomTokenFilter ¶
func (im *IndexMapping) AddCustomTokenFilter(name string, config map[string]interface{}) error
AddCustomTokenFilter defines a custom token filter for use in this mapping
func (*IndexMapping) AddCustomTokenMap ¶
func (im *IndexMapping) AddCustomTokenMap(name string, config map[string]interface{}) error
AddCustomTokenMap defines a custom token map for use in this mapping
func (*IndexMapping) AddCustomTokenizer ¶
func (im *IndexMapping) AddCustomTokenizer(name string, config map[string]interface{}) error
AddCustomTokenizer defines a custom tokenizer for use in this mapping
func (*IndexMapping) AddDocumentMapping ¶
func (im *IndexMapping) AddDocumentMapping(doctype string, dm *DocumentMapping)
AddDocumentMapping sets a custom document mapping for the specified type
func (*IndexMapping) AnalyzeText ¶
func (im *IndexMapping) AnalyzeText(analyzerName string, text []byte) (analysis.TokenStream, error)
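AnalyzeText is useful for checking how a given analyzer tokenizes input; a sketch using the "standard" analyzer name, assuming it is registered by default:

mapping := bleve.NewIndexMapping()
tokens, err := mapping.AnalyzeText("standard", []byte("Bleve indexing is EASY"))
if err != nil {
    panic(err)
}
for _, token := range tokens {
    // prints the terms produced by the analyzer
    fmt.Println(string(token.Term))
}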
func (*IndexMapping) FieldAnalyzer ¶
func (im *IndexMapping) FieldAnalyzer(field string) string
FieldAnalyzer returns the name of the analyzer used on a field.
func (*IndexMapping) UnmarshalJSON ¶
func (im *IndexMapping) UnmarshalJSON(data []byte) error
UnmarshalJSON offers custom unmarshaling with optional strict validation
func (*IndexMapping) Validate ¶
func (im *IndexMapping) Validate() error
Validate will walk the entire structure, ensuring that all explicitly named and default analyzers can be built.
type IndexStat ¶
type IndexStat struct {
// contains filtered or unexported fields
}
func (*IndexStat) MarshalJSON ¶
type IndexStats ¶
type IndexStats struct {
// contains filtered or unexported fields
}
func NewIndexStats ¶
func NewIndexStats() *IndexStats
func (*IndexStats) Register ¶
func (i *IndexStats) Register(index Index)
func (*IndexStats) String ¶
func (i *IndexStats) String() string
func (*IndexStats) UnRegister ¶
func (i *IndexStats) UnRegister(index Index)
type MatchQueryOperator ¶ added in v0.4.0
type MatchQueryOperator int
func (MatchQueryOperator) MarshalJSON ¶ added in v0.4.0
func (o MatchQueryOperator) MarshalJSON() ([]byte, error)
func (*MatchQueryOperator) UnmarshalJSON ¶ added in v0.4.0
func (o *MatchQueryOperator) UnmarshalJSON(data []byte) error
type Query ¶
type Query interface {
    Boost() float64
    SetBoost(b float64) Query
    Field() string
    SetField(f string) Query
    Searcher(i index.IndexReader, m *IndexMapping, explain bool) (search.Searcher, error)
    Validate() error
}
A Query represents a description of the type and parameters for a query into the index.
func ParseQuery ¶
ParseQuery deserializes a JSON representation of a Query object.
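A hedged sketch, assuming ParseQuery takes the raw JSON bytes and that this JSON shape describes a match query on the Name field:

queryJSON := []byte(`{"match": "nameless", "field": "Name"}`)
query, err := ParseQuery(queryJSON)
if err != nil {
    panic(err)
}
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Total)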
type SearchRequest ¶
type SearchRequest struct {
    Query     Query             `json:"query"`
    Size      int               `json:"size"`
    From      int               `json:"from"`
    Highlight *HighlightRequest `json:"highlight"`
    Fields    []string          `json:"fields"`
    Facets    FacetsRequest     `json:"facets"`
    Explain   bool              `json:"explain"`
    Sort      search.SortOrder  `json:"sort"`
}
A SearchRequest describes all the parameters needed to search the index. Query is required. Size/From describe how much and which part of the result set to return. Highlight describes optional search result highlighting. Fields describes a list of field values which should be retrieved for result documents, provided they were stored while indexing. Facets describe the set of facets to be computed. Explain triggers inclusion of additional search result score explanations. Sort describes the desired order for the results to be returned.
A special field named "*" can be used to return all fields.
func NewSearchRequest ¶
func NewSearchRequest(q Query) *SearchRequest
NewSearchRequest creates a new SearchRequest for the Query, using default values for all other search parameters.
Example ¶
// finds documents with fields fully matching the given query text
query := NewMatchQuery("named one")
searchRequest := NewSearchRequest(query)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
Output: document id 1
func NewSearchRequestOptions ¶
func NewSearchRequestOptions(q Query, size, from int, explain bool) *SearchRequest
NewSearchRequestOptions creates a new SearchRequest for the Query, with the requested size, from and explanation search parameters. By default results are ordered by score, descending.
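A sketch of paging through results ten at a time (size 10, offset 10), without score explanations:

query := NewMatchAllQuery()
searchRequest := NewSearchRequestOptions(query, 10, 10, false)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
// at most 10 hits, starting from the 11th result
fmt.Println(len(searchResults.Hits))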
func (*SearchRequest) AddFacet ¶
func (r *SearchRequest) AddFacet(facetName string, f *FacetRequest)
AddFacet adds a FacetRequest to this SearchRequest
Example ¶
facet := NewFacetRequest("Name", 1)
query := NewMatchAllQuery()
searchRequest := NewSearchRequest(query)
searchRequest.AddFacet("facet name", facet)
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
// total number of terms
fmt.Println(searchResults.Facets["facet name"].Total)
// number of docs with no value for this field
fmt.Println(searchResults.Facets["facet name"].Missing)
// term with highest occurrences in field name
fmt.Println(searchResults.Facets["facet name"].Terms[0].Term)

Output:

5
2
one
func (*SearchRequest) SortBy ¶ added in v0.4.0
func (r *SearchRequest) SortBy(order []string)
SortBy changes the request to use the requested sort order. This form uses the simplified syntax: an array of strings, where each string is either a field name or one of the magic values _id and _score, which refer to the document ID and search score. Any of these values can optionally be prefixed with - to reverse the order.
Example ¶
// find docs containing "one", order by Age instead of score
query := NewMatchQuery("one")
searchRequest := NewSearchRequest(query)
searchRequest.SortBy([]string{"Age"})
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
fmt.Println(searchResults.Hits[1].ID)

Output:

document id 2
document id 1
func (*SearchRequest) SortByCustom ¶ added in v0.4.0
func (r *SearchRequest) SortByCustom(order search.SortOrder)
SortByCustom changes the request to use the requested sort order
Example ¶
// find all docs, order by Age, with docs missing Age field first
query := NewMatchAllQuery()
searchRequest := NewSearchRequest(query)
searchRequest.SortByCustom(search.SortOrder{
    &search.SortField{
        Field:   "Age",
        Missing: search.SortFieldMissingFirst,
    },
})
searchResults, err := example_index.Search(searchRequest)
if err != nil {
    panic(err)
}
fmt.Println(searchResults.Hits[0].ID)
fmt.Println(searchResults.Hits[1].ID)
fmt.Println(searchResults.Hits[2].ID)
fmt.Println(searchResults.Hits[3].ID)

Output:

document id 3
document id 4
document id 2
document id 1
func (*SearchRequest) UnmarshalJSON ¶
func (r *SearchRequest) UnmarshalJSON(input []byte) error
UnmarshalJSON deserializes a JSON representation of a SearchRequest
func (*SearchRequest) Validate ¶ added in v0.3.0
func (sr *SearchRequest) Validate() error
type SearchResult ¶
type SearchResult struct {
    Status   *SearchStatus                  `json:"status"`
    Request  *SearchRequest                 `json:"request"`
    Hits     search.DocumentMatchCollection `json:"hits"`
    Total    uint64                         `json:"total_hits"`
    MaxScore float64                        `json:"max_score"`
    Took     time.Duration                  `json:"took"`
    Facets   search.FacetResults            `json:"facets"`
}
A SearchResult describes the results of executing a SearchRequest.
func MultiSearch ¶
func MultiSearch(ctx context.Context, req *SearchRequest, indexes ...Index) (*SearchResult, error)
MultiSearch executes a SearchRequest across multiple Index objects, then merges the results.
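A sketch of calling MultiSearch directly (an IndexAlias does this for you); index1 and index2 stand in for any two already-opened Index values, and the context package must be imported:

query := bleve.NewMatchQuery("bleve")
searchRequest := bleve.NewSearchRequest(query)
searchResults, err := bleve.MultiSearch(context.Background(), searchRequest, index1, index2)
if err != nil {
    panic(err)
}
// how many of the underlying indexes answered successfully
fmt.Println(searchResults.Status.Successful)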
func (*SearchResult) Merge ¶
func (sr *SearchResult) Merge(other *SearchResult)
Merge will merge together multiple SearchResults during a MultiSearch
func (*SearchResult) String ¶
func (sr *SearchResult) String() string
type SearchStatus ¶
type SearchStatus struct {
    Total      int         `json:"total"`
    Failed     int         `json:"failed"`
    Successful int         `json:"successful"`
    Errors     IndexErrMap `json:"errors,omitempty"`
}
SearchStatus is a section of the SearchResult reporting how many underlying indexes were queried, how many succeeded or failed, and a map of any errors that were encountered.
func (*SearchStatus) Merge ¶
func (ss *SearchStatus) Merge(other *SearchStatus)
Merge will merge together multiple SearchStatuses during a MultiSearch
Source Files ¶
- config.go
- config_disk.go
- doc.go
- error.go
- index.go
- index_alias.go
- index_alias_impl.go
- index_impl.go
- index_meta.go
- index_stats.go
- mapping_document.go
- mapping_field.go
- mapping_index.go
- query.go
- query_bool_field.go
- query_boolean.go
- query_conjunction.go
- query_date_range.go
- query_disjunction.go
- query_docid.go
- query_fuzzy.go
- query_match.go
- query_match_all.go
- query_match_none.go
- query_match_phrase.go
- query_numeric_range.go
- query_phrase.go
- query_prefix.go
- query_regexp.go
- query_string.go
- query_string.y.go
- query_string_lex.go
- query_string_parser.go
- query_term.go
- query_wildcard.go
- reflect.go
- search.go
Directories ¶
Path | Synopsis
---|---
language/en | Package en implements an analyzer with reasonable defaults for processing English text.
token_filters/lower_case_filter | Package lower_case_filter implements a TokenFilter which converts tokens to lower case according to unicode rules.
token_filters/stop_tokens_filter | package stop_tokens_filter implements a TokenFilter removing tokens found in a TokenMap.
token_map | package token_map implements a generic TokenMap, often used in conjunction with filters to remove or process specific tokens.
tokenizers/exception | package exception implements a Tokenizer which extracts pieces matched by a regular expression from the input data, delegates the rest to another tokenizer, then insert back extracted parts in the token stream.
cmd |
smolder | Package smolder is a generated protocol buffer package.
store/boltdb | Package boltdb implements a store.KVStore on top of BoltDB.
store/gtreap | Package gtreap provides an in-memory implementation of the KVStore interfaces using the gtreap balanced-binary treap, copy-on-write data structure.
store/metrics | Package metrics provides a bleve.store.KVStore implementation that wraps another, real KVStore implementation, and uses go-metrics to track runtime performance metrics.
upside_down | Package upside_down is a generated protocol buffer package.