package search v12.2.0-beta+incompatible
Published: Jan 9, 2018 License: Apache-2.0 Imports: 8 Imported by: 0

Documentation

Overview

Package search implements the Azure ARM Search service API version 2016-09-01.

Search Client

Index

Constants

const (
	// DefaultBaseURI is the default base URI used for the Search service.
	DefaultBaseURI = "https://management.azure.com"
)

Variables

This section is empty.

Functions

func UserAgent

func UserAgent() string

UserAgent returns the UserAgent string to use when sending http.Requests.

func Version

func Version() string

Version returns the semantic version (see http://semver.org) of the client.

Types

type ASCIIFoldingTokenFilter

type ASCIIFoldingTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// PreserveOriginal - A value indicating whether the original token will be kept. Default is false.
	PreserveOriginal *bool `json:"preserveOriginal,omitempty"`
}

ASCIIFoldingTokenFilter converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.
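The `@odata.type` discriminator and the pointer-plus-`omitempty` fields together control what actually goes on the wire. A minimal stdlib-only sketch (the struct below is a local stand-in mirroring the JSON tags shown above, not the SDK type; the discriminator string is the REST API value assumed from the field naming):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// asciiFoldingTokenFilter is an illustrative stand-in for the generated
// ASCIIFoldingTokenFilter, with the same JSON tags as shown in the docs.
type asciiFoldingTokenFilter struct {
	Name             *string `json:"name,omitempty"`
	OdataType        string  `json:"@odata.type,omitempty"`
	PreserveOriginal *bool   `json:"preserveOriginal,omitempty"`
}

// encodeFilter marshals a filter with all three fields set; unset pointer
// fields would be dropped entirely thanks to omitempty.
func encodeFilter() string {
	name := "my_ascii_folding"
	preserve := true
	f := asciiFoldingTokenFilter{
		Name:             &name,
		OdataType:        "#Microsoft.Azure.Search.AsciiFoldingTokenFilter", // assumed discriminator value
		PreserveOriginal: &preserve,
	}
	b, _ := json.Marshal(f)
	return string(b)
}

func main() {
	fmt.Println(encodeFilter())
}
```

The pointer fields are what let the service distinguish "field omitted" (use the default, here `preserveOriginal: false`) from "field explicitly set to the zero value".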

func (ASCIIFoldingTokenFilter) AsASCIIFoldingTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsBasicTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsCjkBigramTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsCommonGramTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsDictionaryDecompounderTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsEdgeNGramTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsEdgeNGramTokenFilterV2

func (aftf ASCIIFoldingTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsElisionTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsKeepTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsKeywordMarkerTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsLengthTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsLimitTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsNGramTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsNGramTokenFilterV2

func (aftf ASCIIFoldingTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsPatternCaptureTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsPatternReplaceTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsPhoneticTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsShingleTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsSnowballTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsStemmerOverrideTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsStemmerTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsStopwordsTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsSynonymTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsTruncateTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsUniqueTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) AsWordDelimiterTokenFilter

func (aftf ASCIIFoldingTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for ASCIIFoldingTokenFilter.

func (ASCIIFoldingTokenFilter) MarshalJSON

func (aftf ASCIIFoldingTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for ASCIIFoldingTokenFilter.

type AnalyzeRequest

type AnalyzeRequest struct {
	// Text - The text to break into tokens.
	Text *string `json:"text,omitempty"`
	// Analyzer - The name of the analyzer to use to break the given text. If this parameter is not specified, you must specify a tokenizer instead. The tokenizer and analyzer parameters are mutually exclusive.
	Analyzer *AnalyzerName `json:"analyzer,omitempty"`
	// Tokenizer - The name of the tokenizer to use to break the given text. If this parameter is not specified, you must specify an analyzer instead. The tokenizer and analyzer parameters are mutually exclusive.
	Tokenizer *TokenizerName `json:"tokenizer,omitempty"`
	// TokenFilters - An optional list of token filters to use when breaking the given text. This parameter can only be set when using the tokenizer parameter.
	TokenFilters *[]TokenFilterName `json:"tokenFilters,omitempty"`
	// CharFilters - An optional list of character filters to use when breaking the given text. This parameter can only be set when using the tokenizer parameter.
	CharFilters *[]CharFilterName `json:"charFilters,omitempty"`
}

AnalyzeRequest specifies some text and analysis components used to break that text into tokens.
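Because Analyzer and Tokenizer are mutually exclusive, the request relies on `omitempty` to leave the unused parameter out of the body. A hedged sketch with a local stand-in struct (the real Analyzer field is an `*AnalyzerName` struct; it is simplified to a string here, and `standard.lucene` is one of the service's built-in analyzer names):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// analyzeRequest is an illustrative stand-in for AnalyzeRequest; unset
// pointer fields are omitted from the JSON body.
type analyzeRequest struct {
	Text      *string `json:"text,omitempty"`
	Analyzer  *string `json:"analyzer,omitempty"`
	Tokenizer *string `json:"tokenizer,omitempty"`
}

// encodeAnalyzeRequest builds a request using an analyzer; Tokenizer stays
// nil and therefore never appears in the payload.
func encodeAnalyzeRequest() string {
	text := "The quick brown fox"
	analyzer := "standard.lucene"
	r := analyzeRequest{Text: &text, Analyzer: &analyzer}
	b, _ := json.Marshal(r)
	return string(b)
}

func main() {
	fmt.Println(encodeAnalyzeRequest())
}
```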

type AnalyzeResult

type AnalyzeResult struct {
	autorest.Response `json:"-"`
	// Tokens - The list of tokens returned by the analyzer specified in the request.
	Tokens *[]TokenInfo `json:"tokens,omitempty"`
}

AnalyzeResult is the result of testing an analyzer on text.

type Analyzer

type Analyzer struct {
	// Name - The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeAnalyzer', 'OdataTypeMicrosoftAzureSearchCustomAnalyzer', 'OdataTypeMicrosoftAzureSearchPatternAnalyzer', 'OdataTypeMicrosoftAzureSearchStandardAnalyzer', 'OdataTypeMicrosoftAzureSearchStopAnalyzer'
	OdataType OdataType `json:"@odata.type,omitempty"`
}

Analyzer is the abstract base class for analyzers.

func (Analyzer) AsAnalyzer

func (a Analyzer) AsAnalyzer() (*Analyzer, bool)

AsAnalyzer is the BasicAnalyzer implementation for Analyzer.

func (Analyzer) AsBasicAnalyzer

func (a Analyzer) AsBasicAnalyzer() (BasicAnalyzer, bool)

AsBasicAnalyzer is the BasicAnalyzer implementation for Analyzer.

func (Analyzer) AsCustomAnalyzer

func (a Analyzer) AsCustomAnalyzer() (*CustomAnalyzer, bool)

AsCustomAnalyzer is the BasicAnalyzer implementation for Analyzer.

func (Analyzer) AsPatternAnalyzer

func (a Analyzer) AsPatternAnalyzer() (*PatternAnalyzer, bool)

AsPatternAnalyzer is the BasicAnalyzer implementation for Analyzer.

func (Analyzer) AsStandardAnalyzer

func (a Analyzer) AsStandardAnalyzer() (*StandardAnalyzer, bool)

AsStandardAnalyzer is the BasicAnalyzer implementation for Analyzer.

func (Analyzer) AsStopAnalyzer

func (a Analyzer) AsStopAnalyzer() (*StopAnalyzer, bool)

AsStopAnalyzer is the BasicAnalyzer implementation for Analyzer.

func (Analyzer) MarshalJSON

func (a Analyzer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for Analyzer.

type AnalyzerName

type AnalyzerName struct {
	Name *string `json:"name,omitempty"`
}

AnalyzerName defines the names of all text analyzers supported by Azure Search.

type BaseClient

type BaseClient struct {
	autorest.Client
	BaseURI string
}

BaseClient is the base client for Search.

func New

func New() BaseClient

New creates an instance of the BaseClient client.

func NewWithBaseURI

func NewWithBaseURI(baseURI string) BaseClient

NewWithBaseURI creates an instance of the BaseClient client using the given base URI.
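The constructor pair follows the usual generated-client pattern: New wires in DefaultBaseURI, while NewWithBaseURI lets callers target a different endpoint (for example a sovereign cloud). A minimal local sketch of that pattern, with stand-in types rather than the SDK's `autorest.Client`-embedding BaseClient:

```go
package main

import "fmt"

// defaultBaseURI stands in for the package's DefaultBaseURI constant
// (assumed to be the public ARM endpoint).
const defaultBaseURI = "https://management.azure.com"

// baseClient is a local stand-in for the generated BaseClient.
type baseClient struct {
	BaseURI string
}

// newClient mirrors New: it simply delegates to the base-URI constructor
// with the default endpoint.
func newClient() baseClient {
	return newClientWithBaseURI(defaultBaseURI)
}

// newClientWithBaseURI mirrors NewWithBaseURI.
func newClientWithBaseURI(baseURI string) baseClient {
	return baseClient{BaseURI: baseURI}
}

func main() {
	fmt.Println(newClient().BaseURI)
}
```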

type BasicAnalyzer

type BasicAnalyzer interface {
	AsCustomAnalyzer() (*CustomAnalyzer, bool)
	AsPatternAnalyzer() (*PatternAnalyzer, bool)
	AsStandardAnalyzer() (*StandardAnalyzer, bool)
	AsStopAnalyzer() (*StopAnalyzer, bool)
	AsAnalyzer() (*Analyzer, bool)
}

BasicAnalyzer is the abstract base class for analyzers.
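The As* methods are how these generated interfaces emulate a discriminated union: every concrete type implements every AsX method, and only the matching one returns a non-nil value and true. A cut-down local sketch of the pattern with two hypothetical analyzer types (not the SDK's own types):

```go
package main

import "fmt"

// Two stand-in concrete analyzer types.
type customAnalyzer struct{ Name string }
type stopAnalyzer struct{ Name string }

// basicAnalyzer is a two-member version of the generated interface.
type basicAnalyzer interface {
	AsCustomAnalyzer() (*customAnalyzer, bool)
	AsStopAnalyzer() (*stopAnalyzer, bool)
}

// Each type answers true only for its own AsX method.
func (c customAnalyzer) AsCustomAnalyzer() (*customAnalyzer, bool) { return &c, true }
func (c customAnalyzer) AsStopAnalyzer() (*stopAnalyzer, bool)     { return nil, false }
func (s stopAnalyzer) AsCustomAnalyzer() (*customAnalyzer, bool)   { return nil, false }
func (s stopAnalyzer) AsStopAnalyzer() (*stopAnalyzer, bool)       { return &s, true }

// describe narrows a basicAnalyzer via the As* methods, the way SDK callers
// do instead of using a Go type switch.
func describe(a basicAnalyzer) string {
	if ca, ok := a.AsCustomAnalyzer(); ok {
		return "custom:" + ca.Name
	}
	if sa, ok := a.AsStopAnalyzer(); ok {
		return "stop:" + sa.Name
	}
	return "unknown"
}

func main() {
	fmt.Println(describe(customAnalyzer{Name: "my_custom"}))
	fmt.Println(describe(stopAnalyzer{Name: "my_stop"}))
}
```

The same narrowing pattern applies to BasicCharFilter, BasicTokenFilter, BasicTokenizer, and the detection-policy and scoring-function interfaces below.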

type BasicCharFilter

type BasicCharFilter interface {
	AsMappingCharFilter() (*MappingCharFilter, bool)
	AsPatternReplaceCharFilter() (*PatternReplaceCharFilter, bool)
	AsCharFilter() (*CharFilter, bool)
}

BasicCharFilter is the abstract base class for character filters.

type BasicDataChangeDetectionPolicy

type BasicDataChangeDetectionPolicy interface {
	AsHighWaterMarkChangeDetectionPolicy() (*HighWaterMarkChangeDetectionPolicy, bool)
	AsSQLIntegratedChangeTrackingPolicy() (*SQLIntegratedChangeTrackingPolicy, bool)
	AsDataChangeDetectionPolicy() (*DataChangeDetectionPolicy, bool)
}

BasicDataChangeDetectionPolicy is the abstract base class for data change detection policies.

type BasicDataDeletionDetectionPolicy

type BasicDataDeletionDetectionPolicy interface {
	AsSoftDeleteColumnDeletionDetectionPolicy() (*SoftDeleteColumnDeletionDetectionPolicy, bool)
	AsDataDeletionDetectionPolicy() (*DataDeletionDetectionPolicy, bool)
}

BasicDataDeletionDetectionPolicy is the abstract base class for data deletion detection policies.

type BasicScoringFunction

type BasicScoringFunction interface {
	AsDistanceScoringFunction() (*DistanceScoringFunction, bool)
	AsFreshnessScoringFunction() (*FreshnessScoringFunction, bool)
	AsMagnitudeScoringFunction() (*MagnitudeScoringFunction, bool)
	AsTagScoringFunction() (*TagScoringFunction, bool)
	AsScoringFunction() (*ScoringFunction, bool)
}

BasicScoringFunction is the abstract base class for functions that can modify document scores during ranking.

type BasicTokenFilter

type BasicTokenFilter interface {
	AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)
	AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)
	AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)
	AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)
	AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)
	AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)
	AsElisionTokenFilter() (*ElisionTokenFilter, bool)
	AsKeepTokenFilter() (*KeepTokenFilter, bool)
	AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)
	AsLengthTokenFilter() (*LengthTokenFilter, bool)
	AsLimitTokenFilter() (*LimitTokenFilter, bool)
	AsNGramTokenFilter() (*NGramTokenFilter, bool)
	AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)
	AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)
	AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)
	AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)
	AsShingleTokenFilter() (*ShingleTokenFilter, bool)
	AsSnowballTokenFilter() (*SnowballTokenFilter, bool)
	AsStemmerTokenFilter() (*StemmerTokenFilter, bool)
	AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)
	AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)
	AsSynonymTokenFilter() (*SynonymTokenFilter, bool)
	AsTruncateTokenFilter() (*TruncateTokenFilter, bool)
	AsUniqueTokenFilter() (*UniqueTokenFilter, bool)
	AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)
	AsTokenFilter() (*TokenFilter, bool)
}

BasicTokenFilter is the abstract base class for token filters.

type BasicTokenizer

type BasicTokenizer interface {
	AsClassicTokenizer() (*ClassicTokenizer, bool)
	AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)
	AsKeywordTokenizer() (*KeywordTokenizer, bool)
	AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)
	AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)
	AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)
	AsNGramTokenizer() (*NGramTokenizer, bool)
	AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)
	AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)
	AsPatternTokenizer() (*PatternTokenizer, bool)
	AsStandardTokenizer() (*StandardTokenizer, bool)
	AsStandardTokenizerV2() (*StandardTokenizerV2, bool)
	AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)
	AsTokenizer() (*Tokenizer, bool)
}

BasicTokenizer is the abstract base class for tokenizers.

type CharFilter

type CharFilter struct {
	// Name - The name of the char filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeCharFilter', 'OdataTypeMicrosoftAzureSearchMappingCharFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceCharFilter'
	OdataType OdataTypeBasicCharFilter `json:"@odata.type,omitempty"`
}

CharFilter is the abstract base class for character filters.

func (CharFilter) AsBasicCharFilter

func (cf CharFilter) AsBasicCharFilter() (BasicCharFilter, bool)

AsBasicCharFilter is the BasicCharFilter implementation for CharFilter.

func (CharFilter) AsCharFilter

func (cf CharFilter) AsCharFilter() (*CharFilter, bool)

AsCharFilter is the BasicCharFilter implementation for CharFilter.

func (CharFilter) AsMappingCharFilter

func (cf CharFilter) AsMappingCharFilter() (*MappingCharFilter, bool)

AsMappingCharFilter is the BasicCharFilter implementation for CharFilter.

func (CharFilter) AsPatternReplaceCharFilter

func (cf CharFilter) AsPatternReplaceCharFilter() (*PatternReplaceCharFilter, bool)

AsPatternReplaceCharFilter is the BasicCharFilter implementation for CharFilter.

func (CharFilter) MarshalJSON

func (cf CharFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for CharFilter.

type CharFilterName

type CharFilterName struct {
	Name *string `json:"name,omitempty"`
}

CharFilterName defines the names of all character filters supported by Azure Search.

type CjkBigramTokenFilter

type CjkBigramTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// IgnoreScripts - The scripts to ignore.
	IgnoreScripts *[]CjkBigramTokenFilterScripts `json:"ignoreScripts,omitempty"`
	// OutputUnigrams - A value indicating whether to output both unigrams and bigrams (if true), or just bigrams (if false). Default is false.
	OutputUnigrams *bool `json:"outputUnigrams,omitempty"`
}

CjkBigramTokenFilter forms bigrams of CJK terms that are generated from StandardTokenizer. This token filter is implemented using Apache Lucene.

func (CjkBigramTokenFilter) AsASCIIFoldingTokenFilter

func (cbtf CjkBigramTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsBasicTokenFilter

func (cbtf CjkBigramTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsCjkBigramTokenFilter

func (cbtf CjkBigramTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsCommonGramTokenFilter

func (cbtf CjkBigramTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsDictionaryDecompounderTokenFilter

func (cbtf CjkBigramTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsEdgeNGramTokenFilter

func (cbtf CjkBigramTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsEdgeNGramTokenFilterV2

func (cbtf CjkBigramTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsElisionTokenFilter

func (cbtf CjkBigramTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsKeepTokenFilter

func (cbtf CjkBigramTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsKeywordMarkerTokenFilter

func (cbtf CjkBigramTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsLengthTokenFilter

func (cbtf CjkBigramTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsLimitTokenFilter

func (cbtf CjkBigramTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsNGramTokenFilter

func (cbtf CjkBigramTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsNGramTokenFilterV2

func (cbtf CjkBigramTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsPatternCaptureTokenFilter

func (cbtf CjkBigramTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsPatternReplaceTokenFilter

func (cbtf CjkBigramTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsPhoneticTokenFilter

func (cbtf CjkBigramTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsShingleTokenFilter

func (cbtf CjkBigramTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsSnowballTokenFilter

func (cbtf CjkBigramTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsStemmerOverrideTokenFilter

func (cbtf CjkBigramTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsStemmerTokenFilter

func (cbtf CjkBigramTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsStopwordsTokenFilter

func (cbtf CjkBigramTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsSynonymTokenFilter

func (cbtf CjkBigramTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsTokenFilter

func (cbtf CjkBigramTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsTruncateTokenFilter

func (cbtf CjkBigramTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsUniqueTokenFilter

func (cbtf CjkBigramTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) AsWordDelimiterTokenFilter

func (cbtf CjkBigramTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for CjkBigramTokenFilter.

func (CjkBigramTokenFilter) MarshalJSON

func (cbtf CjkBigramTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for CjkBigramTokenFilter.

type CjkBigramTokenFilterScripts

type CjkBigramTokenFilterScripts string

CjkBigramTokenFilterScripts enumerates the values for cjk bigram token filter scripts.

const (
	// Han ...
	Han CjkBigramTokenFilterScripts = "han"
	// Hangul ...
	Hangul CjkBigramTokenFilterScripts = "hangul"
	// Hiragana ...
	Hiragana CjkBigramTokenFilterScripts = "hiragana"
	// Katakana ...
	Katakana CjkBigramTokenFilterScripts = "katakana"
)
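Since CjkBigramTokenFilterScripts is a plain string type, an IgnoreScripts slice serializes directly to the lowercase values the service expects. A small stdlib-only sketch using local stand-ins for the type and two of its constants:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cjkScripts is a stand-in for CjkBigramTokenFilterScripts.
type cjkScripts string

const (
	han      cjkScripts = "han"
	hiragana cjkScripts = "hiragana"
)

// encodeScripts shows how a slice of the script constants serializes in an
// ignoreScripts field: as a JSON array of the lowercase string values.
func encodeScripts() string {
	b, _ := json.Marshal([]cjkScripts{han, hiragana})
	return string(b)
}

func main() {
	fmt.Println(encodeScripts())
}
```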

type ClassicTokenizer

type ClassicTokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// MaxTokenLength - The maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
	MaxTokenLength *int32 `json:"maxTokenLength,omitempty"`
}

ClassicTokenizer is a grammar-based tokenizer suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.
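MaxTokenLength is an `*int32`, so setting it takes a pointer; SDK code usually routes that through a small to-pointer helper. A hedged sketch using a local stand-in struct (the `int32Ptr`/`strPtr` helpers and the discriminator string are illustrative assumptions, not SDK exports; 300 is the documented maximum):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// classicTokenizer is an illustrative stand-in mirroring the JSON tags above.
type classicTokenizer struct {
	Name           *string `json:"name,omitempty"`
	OdataType      string  `json:"@odata.type,omitempty"`
	MaxTokenLength *int32  `json:"maxTokenLength,omitempty"`
}

// Small helpers for taking the address of literal values.
func int32Ptr(v int32) *int32 { return &v }
func strPtr(s string) *string { return &s }

// encodeTokenizer configures the documented maximum token length of 300.
func encodeTokenizer() string {
	t := classicTokenizer{
		Name:           strPtr("my_classic"),
		OdataType:      "#Microsoft.Azure.Search.ClassicTokenizer", // assumed discriminator value
		MaxTokenLength: int32Ptr(300),
	}
	b, _ := json.Marshal(t)
	return string(b)
}

func main() {
	fmt.Println(encodeTokenizer())
}
```

Leaving MaxTokenLength nil omits the field, letting the service apply its default of 255.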

func (ClassicTokenizer) AsBasicTokenizer

func (ct ClassicTokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsClassicTokenizer

func (ct ClassicTokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsEdgeNGramTokenizer

func (ct ClassicTokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsKeywordTokenizer

func (ct ClassicTokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsKeywordTokenizerV2

func (ct ClassicTokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsMicrosoftLanguageStemmingTokenizer

func (ct ClassicTokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsMicrosoftLanguageTokenizer

func (ct ClassicTokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsNGramTokenizer

func (ct ClassicTokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsPathHierarchyTokenizer

func (ct ClassicTokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsPathHierarchyTokenizerV2

func (ct ClassicTokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsPatternTokenizer

func (ct ClassicTokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsStandardTokenizer

func (ct ClassicTokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsStandardTokenizerV2

func (ct ClassicTokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsTokenizer

func (ct ClassicTokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) AsUaxURLEmailTokenizer

func (ct ClassicTokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for ClassicTokenizer.

func (ClassicTokenizer) MarshalJSON

func (ct ClassicTokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for ClassicTokenizer.

type CommonGramTokenFilter

type CommonGramTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// CommonWords - The set of common words.
	CommonWords *[]string `json:"commonWords,omitempty"`
	// IgnoreCase - A value indicating whether matching of common words is case-insensitive. Default is false.
	IgnoreCase *bool `json:"ignoreCase,omitempty"`
	// UseQueryMode - A value that indicates whether the token filter is in query mode. When in query mode, the token filter generates bigrams and then removes common words and single terms followed by a common word. Default is false.
	UseQueryMode *bool `json:"queryMode,omitempty"`
}

CommonGramTokenFilter constructs bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene.
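As a sketch of how this filter might be built (the import path below is an assumption based on this module's layout, and the filter name and word list are hypothetical), note that all model fields are pointers, so literal values need addressable variables:

```go
package main

import (
	"fmt"

	// Assumed import path for this package version.
	"github.com/Azure/azure-sdk-for-go/services/search/2016-09-01/search"
)

func main() {
	name := "my_common_grams" // hypothetical filter name
	commonWords := []string{"the", "a", "an"}
	ignoreCase := true

	filter := search.CommonGramTokenFilter{
		Name:        &name,
		OdataType:   search.OdataTypeMicrosoftAzureSearchCommonGramTokenFilter,
		CommonWords: &commonWords,
		IgnoreCase:  &ignoreCase,
	}

	// The custom marshaler emits the @odata.type discriminator the service expects.
	if body, err := filter.MarshalJSON(); err == nil {
		fmt.Println(string(body))
	}
}
```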

func (CommonGramTokenFilter) AsASCIIFoldingTokenFilter

func (cgtf CommonGramTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsBasicTokenFilter

func (cgtf CommonGramTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsCjkBigramTokenFilter

func (cgtf CommonGramTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsCommonGramTokenFilter

func (cgtf CommonGramTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsDictionaryDecompounderTokenFilter

func (cgtf CommonGramTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsEdgeNGramTokenFilter

func (cgtf CommonGramTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsEdgeNGramTokenFilterV2

func (cgtf CommonGramTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsElisionTokenFilter

func (cgtf CommonGramTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsKeepTokenFilter

func (cgtf CommonGramTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsKeywordMarkerTokenFilter

func (cgtf CommonGramTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsLengthTokenFilter

func (cgtf CommonGramTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsLimitTokenFilter

func (cgtf CommonGramTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsNGramTokenFilter

func (cgtf CommonGramTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsNGramTokenFilterV2

func (cgtf CommonGramTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsPatternCaptureTokenFilter

func (cgtf CommonGramTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsPatternReplaceTokenFilter

func (cgtf CommonGramTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsPhoneticTokenFilter

func (cgtf CommonGramTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsShingleTokenFilter

func (cgtf CommonGramTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsSnowballTokenFilter

func (cgtf CommonGramTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsStemmerOverrideTokenFilter

func (cgtf CommonGramTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsStemmerTokenFilter

func (cgtf CommonGramTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsStopwordsTokenFilter

func (cgtf CommonGramTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsSynonymTokenFilter

func (cgtf CommonGramTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsTokenFilter

func (cgtf CommonGramTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsTruncateTokenFilter

func (cgtf CommonGramTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsUniqueTokenFilter

func (cgtf CommonGramTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) AsWordDelimiterTokenFilter

func (cgtf CommonGramTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for CommonGramTokenFilter.

func (CommonGramTokenFilter) MarshalJSON

func (cgtf CommonGramTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for CommonGramTokenFilter.

type CorsOptions

type CorsOptions struct {
	// AllowedOrigins - The list of origins from which JavaScript code will be granted access to your index. Can contain a list of hosts of the form {protocol}://{fully-qualified-domain-name}[:{port#}], or a single '*' to allow all origins (not recommended).
	AllowedOrigins *[]string `json:"allowedOrigins,omitempty"`
	// MaxAgeInSeconds - The duration for which browsers should cache CORS preflight responses. Defaults to 5 minutes.
	MaxAgeInSeconds *int64 `json:"maxAgeInSeconds,omitempty"`
}

CorsOptions defines options to control Cross-Origin Resource Sharing (CORS) for an index.
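A minimal sketch of populating these options (import path assumed; the origin is a hypothetical placeholder):

```go
package main

import (
	"encoding/json"
	"fmt"

	// Assumed import path for this package version.
	"github.com/Azure/azure-sdk-for-go/services/search/2016-09-01/search"
)

func main() {
	// Hypothetical origin; a single "*" entry would allow all origins (not recommended).
	origins := []string{"https://contoso.example"}
	maxAge := int64(300) // seconds; the documented default is 5 minutes

	cors := search.CorsOptions{
		AllowedOrigins:  &origins,
		MaxAgeInSeconds: &maxAge,
	}

	// CorsOptions has no custom marshaler, so standard encoding/json applies.
	body, _ := json.Marshal(cors)
	fmt.Println(string(body))
}
```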

type CustomAnalyzer

type CustomAnalyzer struct {
	// Name - The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeAnalyzer', 'OdataTypeMicrosoftAzureSearchCustomAnalyzer', 'OdataTypeMicrosoftAzureSearchPatternAnalyzer', 'OdataTypeMicrosoftAzureSearchStandardAnalyzer', 'OdataTypeMicrosoftAzureSearchStopAnalyzer'
	OdataType OdataType `json:"@odata.type,omitempty"`
	// Tokenizer - The name of the tokenizer to use to divide continuous text into a sequence of tokens, such as breaking a sentence into words.
	Tokenizer *TokenizerName `json:"tokenizer,omitempty"`
	// TokenFilters - A list of token filters used to filter out or modify the tokens generated by a tokenizer. For example, you can specify a lowercase filter that converts all characters to lowercase. The filters are run in the order in which they are listed.
	TokenFilters *[]TokenFilterName `json:"tokenFilters,omitempty"`
	// CharFilters - A list of character filters used to prepare input text before it is processed by the tokenizer. For instance, they can replace certain characters or symbols. The filters are run in the order in which they are listed.
	CharFilters *[]CharFilterName `json:"charFilters,omitempty"`
}

CustomAnalyzer allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.
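A hedged sketch of assembling a custom analyzer. The import path is an assumption; "standard_v2" and "lowercase" are hypothetical tokenizer/filter names (consult the service documentation for the supported set); and the example assumes TokenizerName and TokenFilterName are name-wrapper structs like the DataSourceType and DataType types shown below:

```go
package main

import (
	"fmt"

	// Assumed import path for this package version.
	"github.com/Azure/azure-sdk-for-go/services/search/2016-09-01/search"
)

func main() {
	name := "my_analyzer" // hypothetical analyzer name

	tok := "standard_v2" // hypothetical tokenizer name
	lower := "lowercase" // hypothetical token filter name
	tokenizer := search.TokenizerName{Name: &tok}
	filters := []search.TokenFilterName{{Name: &lower}}

	analyzer := search.CustomAnalyzer{
		Name:         &name,
		OdataType:    search.OdataTypeMicrosoftAzureSearchCustomAnalyzer,
		Tokenizer:    &tokenizer,
		TokenFilters: &filters, // filters run in the order listed
	}

	if body, err := analyzer.MarshalJSON(); err == nil {
		fmt.Println(string(body))
	}
}
```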

func (CustomAnalyzer) AsAnalyzer

func (ca CustomAnalyzer) AsAnalyzer() (*Analyzer, bool)

AsAnalyzer is the BasicAnalyzer implementation for CustomAnalyzer.

func (CustomAnalyzer) AsBasicAnalyzer

func (ca CustomAnalyzer) AsBasicAnalyzer() (BasicAnalyzer, bool)

AsBasicAnalyzer is the BasicAnalyzer implementation for CustomAnalyzer.

func (CustomAnalyzer) AsCustomAnalyzer

func (ca CustomAnalyzer) AsCustomAnalyzer() (*CustomAnalyzer, bool)

AsCustomAnalyzer is the BasicAnalyzer implementation for CustomAnalyzer.

func (CustomAnalyzer) AsPatternAnalyzer

func (ca CustomAnalyzer) AsPatternAnalyzer() (*PatternAnalyzer, bool)

AsPatternAnalyzer is the BasicAnalyzer implementation for CustomAnalyzer.

func (CustomAnalyzer) AsStandardAnalyzer

func (ca CustomAnalyzer) AsStandardAnalyzer() (*StandardAnalyzer, bool)

AsStandardAnalyzer is the BasicAnalyzer implementation for CustomAnalyzer.

func (CustomAnalyzer) AsStopAnalyzer

func (ca CustomAnalyzer) AsStopAnalyzer() (*StopAnalyzer, bool)

AsStopAnalyzer is the BasicAnalyzer implementation for CustomAnalyzer.

func (CustomAnalyzer) MarshalJSON

func (ca CustomAnalyzer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for CustomAnalyzer.

type DataChangeDetectionPolicy

type DataChangeDetectionPolicy struct {
	// OdataType - Possible values include: 'OdataTypeDataChangeDetectionPolicy', 'OdataTypeMicrosoftAzureSearchHighWaterMarkChangeDetectionPolicy', 'OdataTypeMicrosoftAzureSearchSQLIntegratedChangeTrackingPolicy'
	OdataType OdataTypeBasicDataChangeDetectionPolicy `json:"@odata.type,omitempty"`
}

DataChangeDetectionPolicy abstract base class for data change detection policies.

func (DataChangeDetectionPolicy) AsBasicDataChangeDetectionPolicy

func (dcdp DataChangeDetectionPolicy) AsBasicDataChangeDetectionPolicy() (BasicDataChangeDetectionPolicy, bool)

AsBasicDataChangeDetectionPolicy is the BasicDataChangeDetectionPolicy implementation for DataChangeDetectionPolicy.

func (DataChangeDetectionPolicy) AsDataChangeDetectionPolicy

func (dcdp DataChangeDetectionPolicy) AsDataChangeDetectionPolicy() (*DataChangeDetectionPolicy, bool)

AsDataChangeDetectionPolicy is the BasicDataChangeDetectionPolicy implementation for DataChangeDetectionPolicy.

func (DataChangeDetectionPolicy) AsHighWaterMarkChangeDetectionPolicy

func (dcdp DataChangeDetectionPolicy) AsHighWaterMarkChangeDetectionPolicy() (*HighWaterMarkChangeDetectionPolicy, bool)

AsHighWaterMarkChangeDetectionPolicy is the BasicDataChangeDetectionPolicy implementation for DataChangeDetectionPolicy.

func (DataChangeDetectionPolicy) AsSQLIntegratedChangeTrackingPolicy

func (dcdp DataChangeDetectionPolicy) AsSQLIntegratedChangeTrackingPolicy() (*SQLIntegratedChangeTrackingPolicy, bool)

AsSQLIntegratedChangeTrackingPolicy is the BasicDataChangeDetectionPolicy implementation for DataChangeDetectionPolicy.

func (DataChangeDetectionPolicy) MarshalJSON

func (dcdp DataChangeDetectionPolicy) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for DataChangeDetectionPolicy.

type DataContainer

type DataContainer struct {
	// Name - The name of the table or view (for Azure SQL data source) or collection (for DocumentDB data source) that will be indexed.
	Name *string `json:"name,omitempty"`
	// Query - A query that is applied to this data container. The syntax and meaning of this parameter are datasource-specific. Not supported by Azure SQL datasources.
	Query *string `json:"query,omitempty"`
}

DataContainer represents information about the entity (such as an Azure SQL table or DocumentDB collection) that will be indexed.

type DataDeletionDetectionPolicy

type DataDeletionDetectionPolicy struct {
	// OdataType - Possible values include: 'OdataTypeDataDeletionDetectionPolicy', 'OdataTypeMicrosoftAzureSearchSoftDeleteColumnDeletionDetectionPolicy'
	OdataType OdataTypeBasicDataDeletionDetectionPolicy `json:"@odata.type,omitempty"`
}

DataDeletionDetectionPolicy abstract base class for data deletion detection policies.

func (DataDeletionDetectionPolicy) AsBasicDataDeletionDetectionPolicy

func (dddp DataDeletionDetectionPolicy) AsBasicDataDeletionDetectionPolicy() (BasicDataDeletionDetectionPolicy, bool)

AsBasicDataDeletionDetectionPolicy is the BasicDataDeletionDetectionPolicy implementation for DataDeletionDetectionPolicy.

func (DataDeletionDetectionPolicy) AsDataDeletionDetectionPolicy

func (dddp DataDeletionDetectionPolicy) AsDataDeletionDetectionPolicy() (*DataDeletionDetectionPolicy, bool)

AsDataDeletionDetectionPolicy is the BasicDataDeletionDetectionPolicy implementation for DataDeletionDetectionPolicy.

func (DataDeletionDetectionPolicy) AsSoftDeleteColumnDeletionDetectionPolicy

func (dddp DataDeletionDetectionPolicy) AsSoftDeleteColumnDeletionDetectionPolicy() (*SoftDeleteColumnDeletionDetectionPolicy, bool)

AsSoftDeleteColumnDeletionDetectionPolicy is the BasicDataDeletionDetectionPolicy implementation for DataDeletionDetectionPolicy.

func (DataDeletionDetectionPolicy) MarshalJSON

func (dddp DataDeletionDetectionPolicy) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for DataDeletionDetectionPolicy.

type DataSource

type DataSource struct {
	autorest.Response `json:"-"`
	// Name - The name of the datasource.
	Name *string `json:"name,omitempty"`
	// Description - The description of the datasource.
	Description *string `json:"description,omitempty"`
	// Type - The type of the datasource.
	Type *DataSourceType `json:"type,omitempty"`
	// Credentials - Credentials for the datasource.
	Credentials *DataSourceCredentials `json:"credentials,omitempty"`
	// Container - The data container for the datasource.
	Container *DataContainer `json:"container,omitempty"`
	// DataChangeDetectionPolicy - The data change detection policy for the datasource.
	DataChangeDetectionPolicy BasicDataChangeDetectionPolicy `json:"dataChangeDetectionPolicy,omitempty"`
	// DataDeletionDetectionPolicy - The data deletion detection policy for the datasource.
	DataDeletionDetectionPolicy BasicDataDeletionDetectionPolicy `json:"dataDeletionDetectionPolicy,omitempty"`
	// ETag - The ETag of the DataSource.
	ETag *string `json:"@odata.etag,omitempty"`
}

DataSource represents a datasource definition in Azure Search, which can be used to configure an indexer.
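A sketch of composing a datasource definition from the types above. The import path is an assumption; the names and connection string are hypothetical placeholders, and "azuresql" is an assumed type identifier for an Azure SQL datasource:

```go
package main

import (
	"fmt"

	// Assumed import path for this package version.
	"github.com/Azure/azure-sdk-for-go/services/search/2016-09-01/search"
)

func main() {
	// All names and the connection string are hypothetical placeholders.
	name := "hotels-datasource"
	dsType := "azuresql" // assumed type identifier for an Azure SQL datasource
	conn := "<connection-string>"
	table := "Hotels"

	ds := search.DataSource{
		Name:        &name,
		Type:        &search.DataSourceType{Name: &dsType},
		Credentials: &search.DataSourceCredentials{ConnectionString: &conn},
		Container:   &search.DataContainer{Name: &table},
	}

	// This definition would be passed to DataSourcesClient.Create or CreateOrUpdate.
	fmt.Println(*ds.Name, *ds.Container.Name)
}
```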

func (*DataSource) UnmarshalJSON

func (ds *DataSource) UnmarshalJSON(body []byte) error

UnmarshalJSON is the custom unmarshaler for DataSource struct.

type DataSourceCredentials

type DataSourceCredentials struct {
	// ConnectionString - The connection string for the datasource.
	ConnectionString *string `json:"connectionString,omitempty"`
}

DataSourceCredentials represents credentials that can be used to connect to a datasource.

type DataSourceListResult

type DataSourceListResult struct {
	autorest.Response `json:"-"`
	// DataSources - The datasources in the Search service.
	DataSources *[]DataSource `json:"value,omitempty"`
}

DataSourceListResult response from a List Datasources request. If successful, it includes the full definitions of all datasources.

type DataSourceType

type DataSourceType struct {
	Name *string `json:"name,omitempty"`
}

DataSourceType defines the type of an Azure Search datasource.

type DataSourcesClient

type DataSourcesClient struct {
	BaseClient
}

DataSourcesClient is the client for datasource operations in the Azure Search service.

func NewDataSourcesClient

func NewDataSourcesClient() DataSourcesClient

NewDataSourcesClient creates an instance of the DataSourcesClient client.

func NewDataSourcesClientWithBaseURI

func NewDataSourcesClientWithBaseURI(baseURI string) DataSourcesClient

NewDataSourcesClientWithBaseURI creates an instance of the DataSourcesClient client.

func (DataSourcesClient) Create

func (client DataSourcesClient) Create(ctx context.Context, dataSource DataSource, clientRequestID *uuid.UUID) (result DataSource, err error)

Create creates a new Azure Search datasource.

dataSource is the definition of the datasource to create. clientRequestID is the tracking ID sent with the request to help with debugging.

func (DataSourcesClient) CreateOrUpdate

func (client DataSourcesClient) CreateOrUpdate(ctx context.Context, dataSourceName string, dataSource DataSource, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (result DataSource, err error)

CreateOrUpdate creates a new Azure Search datasource or updates a datasource if it already exists.

dataSourceName is the name of the datasource to create or update. dataSource is the definition of the datasource to create or update. clientRequestID is the tracking ID sent with the request to help with debugging. ifMatch defines the If-Match condition: the operation will be performed only if the ETag on the server matches this value. ifNoneMatch defines the If-None-Match condition: the operation will be performed only if the ETag on the server does not match this value.
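A sketch of calling CreateOrUpdate with the If-None-Match precondition. The import path and endpoint are assumptions, and in practice the client also needs authorization (an admin api-key) configured before requests will succeed:

```go
package main

import (
	"context"
	"fmt"

	// Assumed import path for this package version.
	"github.com/Azure/azure-sdk-for-go/services/search/2016-09-01/search"
)

func main() {
	// Hypothetical service endpoint; authorization setup is omitted here.
	client := search.NewDataSourcesClientWithBaseURI("https://myservice.search.windows.net")

	name := "hotels-datasource"
	ds := search.DataSource{Name: &name} // a full definition would also set Type, Credentials, Container

	// ifNoneMatch "*" asks the service to perform the operation only when no
	// datasource with this name exists yet, i.e. a create-only upsert.
	// clientRequestID may be nil when no request tracking is needed.
	result, err := client.CreateOrUpdate(context.Background(), name, ds, nil, "", "*")
	if err != nil {
		fmt.Println("create-or-update failed:", err)
		return
	}
	fmt.Println("ETag:", *result.ETag)
}
```

Passing the ETag from a previous Get as ifMatch instead gives optimistic concurrency: the update is rejected if the definition changed on the server in the meantime.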

func (DataSourcesClient) CreateOrUpdatePreparer

func (client DataSourcesClient) CreateOrUpdatePreparer(ctx context.Context, dataSourceName string, dataSource DataSource, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (*http.Request, error)

CreateOrUpdatePreparer prepares the CreateOrUpdate request.

func (DataSourcesClient) CreateOrUpdateResponder

func (client DataSourcesClient) CreateOrUpdateResponder(resp *http.Response) (result DataSource, err error)

CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always closes the http.Response Body.

func (DataSourcesClient) CreateOrUpdateSender

func (client DataSourcesClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error)

CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the http.Response Body if it receives an error.

func (DataSourcesClient) CreatePreparer

func (client DataSourcesClient) CreatePreparer(ctx context.Context, dataSource DataSource, clientRequestID *uuid.UUID) (*http.Request, error)

CreatePreparer prepares the Create request.

func (DataSourcesClient) CreateResponder

func (client DataSourcesClient) CreateResponder(resp *http.Response) (result DataSource, err error)

CreateResponder handles the response to the Create request. The method always closes the http.Response Body.

func (DataSourcesClient) CreateSender

func (client DataSourcesClient) CreateSender(req *http.Request) (*http.Response, error)

CreateSender sends the Create request. The method will close the http.Response Body if it receives an error.

func (DataSourcesClient) Delete

func (client DataSourcesClient) Delete(ctx context.Context, dataSourceName string, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (result autorest.Response, err error)

Delete deletes an Azure Search datasource.

dataSourceName is the name of the datasource to delete. clientRequestID is the tracking ID sent with the request to help with debugging. ifMatch defines the If-Match condition: the operation will be performed only if the ETag on the server matches this value. ifNoneMatch defines the If-None-Match condition: the operation will be performed only if the ETag on the server does not match this value.

func (DataSourcesClient) DeletePreparer

func (client DataSourcesClient) DeletePreparer(ctx context.Context, dataSourceName string, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (*http.Request, error)

DeletePreparer prepares the Delete request.

func (DataSourcesClient) DeleteResponder

func (client DataSourcesClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error)

DeleteResponder handles the response to the Delete request. The method always closes the http.Response Body.

func (DataSourcesClient) DeleteSender

func (client DataSourcesClient) DeleteSender(req *http.Request) (*http.Response, error)

DeleteSender sends the Delete request. The method will close the http.Response Body if it receives an error.

func (DataSourcesClient) Get

func (client DataSourcesClient) Get(ctx context.Context, dataSourceName string, clientRequestID *uuid.UUID) (result DataSource, err error)

Get retrieves a datasource definition from Azure Search.

dataSourceName is the name of the datasource to retrieve. clientRequestID is the tracking ID sent with the request to help with debugging.

func (DataSourcesClient) GetPreparer

func (client DataSourcesClient) GetPreparer(ctx context.Context, dataSourceName string, clientRequestID *uuid.UUID) (*http.Request, error)

GetPreparer prepares the Get request.

func (DataSourcesClient) GetResponder

func (client DataSourcesClient) GetResponder(resp *http.Response) (result DataSource, err error)

GetResponder handles the response to the Get request. The method always closes the http.Response Body.

func (DataSourcesClient) GetSender

func (client DataSourcesClient) GetSender(req *http.Request) (*http.Response, error)

GetSender sends the Get request. The method will close the http.Response Body if it receives an error.

func (DataSourcesClient) List

func (client DataSourcesClient) List(ctx context.Context, clientRequestID *uuid.UUID) (result DataSourceListResult, err error)

List lists all datasources available for an Azure Search service.

clientRequestID is the tracking ID sent with the request to help with debugging.
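A sketch of enumerating datasources (import path and endpoint are assumptions; authorization setup is omitted). The result's DataSources field is a pointer to a slice, so it is checked before dereferencing:

```go
package main

import (
	"context"
	"fmt"

	// Assumed import path for this package version.
	"github.com/Azure/azure-sdk-for-go/services/search/2016-09-01/search"
)

func main() {
	// Hypothetical endpoint; the client needs authorization configured in practice.
	client := search.NewDataSourcesClientWithBaseURI("https://myservice.search.windows.net")

	// clientRequestID may be nil when no request tracking is needed.
	result, err := client.List(context.Background(), nil)
	if err != nil || result.DataSources == nil {
		return
	}
	for _, ds := range *result.DataSources {
		fmt.Println(*ds.Name) // full definitions are returned, not just names
	}
}
```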

func (DataSourcesClient) ListPreparer

func (client DataSourcesClient) ListPreparer(ctx context.Context, clientRequestID *uuid.UUID) (*http.Request, error)

ListPreparer prepares the List request.

func (DataSourcesClient) ListResponder

func (client DataSourcesClient) ListResponder(resp *http.Response) (result DataSourceListResult, err error)

ListResponder handles the response to the List request. The method always closes the http.Response Body.

func (DataSourcesClient) ListSender

func (client DataSourcesClient) ListSender(req *http.Request) (*http.Response, error)

ListSender sends the List request. The method will close the http.Response Body if it receives an error.

type DataType

type DataType struct {
	Name *string `json:"name,omitempty"`
}

DataType defines the data type of a field in an Azure Search index.

type DictionaryDecompounderTokenFilter

type DictionaryDecompounderTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// WordList - The list of words to match against.
	WordList *[]string `json:"wordList,omitempty"`
	// MinWordSize - The minimum word size. Only words longer than this get processed. Default is 5. Maximum is 300.
	MinWordSize *int32 `json:"minWordSize,omitempty"`
	// MinSubwordSize - The minimum subword size. Only subwords longer than this are output. Default is 2. Maximum is 300.
	MinSubwordSize *int32 `json:"minSubwordSize,omitempty"`
	// MaxSubwordSize - The maximum subword size. Only subwords shorter than this are output. Default is 15. Maximum is 300.
	MaxSubwordSize *int32 `json:"maxSubwordSize,omitempty"`
	// OnlyLongestMatch - A value indicating whether to add only the longest matching subword to the output. Default is false.
	OnlyLongestMatch *bool `json:"onlyLongestMatch,omitempty"`
}

DictionaryDecompounderTokenFilter decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.
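A sketch of configuring this filter (import path assumed; the filter name and dictionary entries are hypothetical). The word list supplies the component words the decompounder matches against:

```go
package main

import (
	"fmt"

	// Assumed import path for this package version.
	"github.com/Azure/azure-sdk-for-go/services/search/2016-09-01/search"
)

func main() {
	name := "de_decompounder"             // hypothetical filter name
	words := []string{"haus", "boot"}     // hypothetical dictionary of component words
	minWord := int32(5)                   // only words longer than this are processed
	minSub := int32(3)                    // only subwords longer than this are output
	onlyLongest := true                   // emit only the longest matching subword

	filter := search.DictionaryDecompounderTokenFilter{
		Name:             &name,
		OdataType:        search.OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter,
		WordList:         &words,
		MinWordSize:      &minWord,
		MinSubwordSize:   &minSub,
		OnlyLongestMatch: &onlyLongest,
	}

	// The filter would be attached to an index definition's token filter list.
	fmt.Println(*filter.Name)
}
```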

func (DictionaryDecompounderTokenFilter) AsASCIIFoldingTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsBasicTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsCjkBigramTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsCommonGramTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsDictionaryDecompounderTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsEdgeNGramTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsEdgeNGramTokenFilterV2

func (ddtf DictionaryDecompounderTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsElisionTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsKeepTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsKeywordMarkerTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsLengthTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsLimitTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsNGramTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsNGramTokenFilterV2

func (ddtf DictionaryDecompounderTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsPatternCaptureTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsPatternReplaceTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsPhoneticTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsShingleTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsSnowballTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsStemmerOverrideTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsStemmerTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsStopwordsTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsSynonymTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsTruncateTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsUniqueTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) AsWordDelimiterTokenFilter

func (ddtf DictionaryDecompounderTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for DictionaryDecompounderTokenFilter.

func (DictionaryDecompounderTokenFilter) MarshalJSON

func (ddtf DictionaryDecompounderTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for DictionaryDecompounderTokenFilter.

type DistanceScoringFunction

type DistanceScoringFunction struct {
	// FieldName - The name of the field used as input to the scoring function.
	FieldName *string `json:"fieldName,omitempty"`
	// Boost - A multiplier for the raw score. Must be a positive number not equal to 1.0.
	Boost *float64 `json:"boost,omitempty"`
	// Interpolation - A value indicating how boosting will be interpolated across document scores; defaults to "Linear". Possible values include: 'Linear', 'Constant', 'Quadratic', 'Logarithmic'
	Interpolation ScoringFunctionInterpolation `json:"interpolation,omitempty"`
	// Type - Possible values include: 'TypeScoringFunction', 'TypeDistance', 'TypeFreshness', 'TypeMagnitude', 'TypeTag'
	Type Type `json:"type,omitempty"`
	// Parameters - Parameter values for the distance scoring function.
	Parameters *DistanceScoringParameters `json:"distance,omitempty"`
}

DistanceScoringFunction defines a function that boosts scores based on distance from a geographic location.

func (DistanceScoringFunction) AsBasicScoringFunction

func (dsf DistanceScoringFunction) AsBasicScoringFunction() (BasicScoringFunction, bool)

AsBasicScoringFunction is the BasicScoringFunction implementation for DistanceScoringFunction.

func (DistanceScoringFunction) AsDistanceScoringFunction

func (dsf DistanceScoringFunction) AsDistanceScoringFunction() (*DistanceScoringFunction, bool)

AsDistanceScoringFunction is the BasicScoringFunction implementation for DistanceScoringFunction.

func (DistanceScoringFunction) AsFreshnessScoringFunction

func (dsf DistanceScoringFunction) AsFreshnessScoringFunction() (*FreshnessScoringFunction, bool)

AsFreshnessScoringFunction is the BasicScoringFunction implementation for DistanceScoringFunction.

func (DistanceScoringFunction) AsMagnitudeScoringFunction

func (dsf DistanceScoringFunction) AsMagnitudeScoringFunction() (*MagnitudeScoringFunction, bool)

AsMagnitudeScoringFunction is the BasicScoringFunction implementation for DistanceScoringFunction.

func (DistanceScoringFunction) AsScoringFunction

func (dsf DistanceScoringFunction) AsScoringFunction() (*ScoringFunction, bool)

AsScoringFunction is the BasicScoringFunction implementation for DistanceScoringFunction.

func (DistanceScoringFunction) AsTagScoringFunction

func (dsf DistanceScoringFunction) AsTagScoringFunction() (*TagScoringFunction, bool)

AsTagScoringFunction is the BasicScoringFunction implementation for DistanceScoringFunction.

func (DistanceScoringFunction) MarshalJSON

func (dsf DistanceScoringFunction) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for DistanceScoringFunction.

type DistanceScoringParameters

type DistanceScoringParameters struct {
	// ReferencePointParameter - The name of the parameter passed in search queries to specify the reference location.
	ReferencePointParameter *string `json:"referencePointParameter,omitempty"`
	// BoostingDistance - The distance in kilometers from the reference location where the boosting range ends.
	BoostingDistance *float64 `json:"boostingDistance,omitempty"`
}

DistanceScoringParameters provides parameter values to a distance scoring function.


type DocumentIndexResult

type DocumentIndexResult struct {
	// Results - The list of status information for each document in the indexing request.
	Results *[]IndexingResult `json:"value,omitempty"`
}

DocumentIndexResult is the response containing the status of operations for all documents in the indexing request.

type DocumentsProxyClient

type DocumentsProxyClient struct {
	BaseClient
}

DocumentsProxyClient is the Search client for document operations.

func NewDocumentsProxyClient

func NewDocumentsProxyClient() DocumentsProxyClient

NewDocumentsProxyClient creates an instance of the DocumentsProxyClient client.

func NewDocumentsProxyClientWithBaseURI

func NewDocumentsProxyClientWithBaseURI(baseURI string) DocumentsProxyClient

NewDocumentsProxyClientWithBaseURI creates an instance of the DocumentsProxyClient client using the given base URI.

func (DocumentsProxyClient) Count

func (client DocumentsProxyClient) Count(ctx context.Context, clientRequestID *uuid.UUID) (result Int64, err error)

Count queries the number of documents in the Azure Search index.

clientRequestID is the tracking ID sent with the request to help with debugging.

func (DocumentsProxyClient) CountPreparer

func (client DocumentsProxyClient) CountPreparer(ctx context.Context, clientRequestID *uuid.UUID) (*http.Request, error)

CountPreparer prepares the Count request.

func (DocumentsProxyClient) CountResponder

func (client DocumentsProxyClient) CountResponder(resp *http.Response) (result Int64, err error)

CountResponder handles the response to the Count request. The method always closes the http.Response Body.

func (DocumentsProxyClient) CountSender

func (client DocumentsProxyClient) CountSender(req *http.Request) (*http.Response, error)

CountSender sends the Count request. The method will close the http.Response Body if it receives an error.

type EdgeNGramTokenFilter

type EdgeNGramTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// MinGram - The minimum n-gram length. Default is 1. Must be less than the value of maxGram.
	MinGram *int32 `json:"minGram,omitempty"`
	// MaxGram - The maximum n-gram length. Default is 2.
	MaxGram *int32 `json:"maxGram,omitempty"`
	// Side - Specifies which side of the input the n-gram should be generated from. Default is "front". Possible values include: 'Front', 'Back'
	Side EdgeNGramTokenFilterSide `json:"side,omitempty"`
}

EdgeNGramTokenFilter generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.

func (EdgeNGramTokenFilter) AsASCIIFoldingTokenFilter

func (engtf EdgeNGramTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsBasicTokenFilter

func (engtf EdgeNGramTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsCjkBigramTokenFilter

func (engtf EdgeNGramTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsCommonGramTokenFilter

func (engtf EdgeNGramTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsDictionaryDecompounderTokenFilter

func (engtf EdgeNGramTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsEdgeNGramTokenFilter

func (engtf EdgeNGramTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsEdgeNGramTokenFilterV2

func (engtf EdgeNGramTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsElisionTokenFilter

func (engtf EdgeNGramTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsKeepTokenFilter

func (engtf EdgeNGramTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsKeywordMarkerTokenFilter

func (engtf EdgeNGramTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsLengthTokenFilter

func (engtf EdgeNGramTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsLimitTokenFilter

func (engtf EdgeNGramTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsNGramTokenFilter

func (engtf EdgeNGramTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsNGramTokenFilterV2

func (engtf EdgeNGramTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsPatternCaptureTokenFilter

func (engtf EdgeNGramTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsPatternReplaceTokenFilter

func (engtf EdgeNGramTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsPhoneticTokenFilter

func (engtf EdgeNGramTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsShingleTokenFilter

func (engtf EdgeNGramTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsSnowballTokenFilter

func (engtf EdgeNGramTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsStemmerOverrideTokenFilter

func (engtf EdgeNGramTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsStemmerTokenFilter

func (engtf EdgeNGramTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsStopwordsTokenFilter

func (engtf EdgeNGramTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsSynonymTokenFilter

func (engtf EdgeNGramTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsTokenFilter

func (engtf EdgeNGramTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsTruncateTokenFilter

func (engtf EdgeNGramTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsUniqueTokenFilter

func (engtf EdgeNGramTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) AsWordDelimiterTokenFilter

func (engtf EdgeNGramTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilter.

func (EdgeNGramTokenFilter) MarshalJSON

func (engtf EdgeNGramTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for EdgeNGramTokenFilter.

type EdgeNGramTokenFilterSide

type EdgeNGramTokenFilterSide string

EdgeNGramTokenFilterSide enumerates the values for the side of an edge n-gram token filter.

const (
	// Back ...
	Back EdgeNGramTokenFilterSide = "back"
	// Front ...
	Front EdgeNGramTokenFilterSide = "front"
)

type EdgeNGramTokenFilterV2

type EdgeNGramTokenFilterV2 struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// MinGram - The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram.
	MinGram *int32 `json:"minGram,omitempty"`
	// MaxGram - The maximum n-gram length. Default is 2. Maximum is 300.
	MaxGram *int32 `json:"maxGram,omitempty"`
	// Side - Specifies which side of the input the n-gram should be generated from. Default is "front". Possible values include: 'Front', 'Back'
	Side EdgeNGramTokenFilterSide `json:"side,omitempty"`
}

EdgeNGramTokenFilterV2 generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.

func (EdgeNGramTokenFilterV2) AsASCIIFoldingTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsBasicTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsCjkBigramTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsCommonGramTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsDictionaryDecompounderTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsEdgeNGramTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsEdgeNGramTokenFilterV2

func (engtfv EdgeNGramTokenFilterV2) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsElisionTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsKeepTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsKeywordMarkerTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsLengthTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsLimitTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsNGramTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsNGramTokenFilterV2

func (engtfv EdgeNGramTokenFilterV2) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsPatternCaptureTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsPatternReplaceTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsPhoneticTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsShingleTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsSnowballTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsStemmerOverrideTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsStemmerTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsStopwordsTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsSynonymTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsTruncateTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsUniqueTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) AsWordDelimiterTokenFilter

func (engtfv EdgeNGramTokenFilterV2) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for EdgeNGramTokenFilterV2.

func (EdgeNGramTokenFilterV2) MarshalJSON

func (engtfv EdgeNGramTokenFilterV2) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for EdgeNGramTokenFilterV2.

type EdgeNGramTokenizer

type EdgeNGramTokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// MinGram - The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram.
	MinGram *int32 `json:"minGram,omitempty"`
	// MaxGram - The maximum n-gram length. Default is 2. Maximum is 300.
	MaxGram *int32 `json:"maxGram,omitempty"`
	// TokenChars - Character classes to keep in the tokens.
	TokenChars *[]TokenCharacterKind `json:"tokenChars,omitempty"`
}

EdgeNGramTokenizer tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

func (EdgeNGramTokenizer) AsBasicTokenizer

func (engt EdgeNGramTokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsClassicTokenizer

func (engt EdgeNGramTokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsEdgeNGramTokenizer

func (engt EdgeNGramTokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsKeywordTokenizer

func (engt EdgeNGramTokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsKeywordTokenizerV2

func (engt EdgeNGramTokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsMicrosoftLanguageStemmingTokenizer

func (engt EdgeNGramTokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsMicrosoftLanguageTokenizer

func (engt EdgeNGramTokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsNGramTokenizer

func (engt EdgeNGramTokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsPathHierarchyTokenizer

func (engt EdgeNGramTokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsPathHierarchyTokenizerV2

func (engt EdgeNGramTokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsPatternTokenizer

func (engt EdgeNGramTokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsStandardTokenizer

func (engt EdgeNGramTokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsStandardTokenizerV2

func (engt EdgeNGramTokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsTokenizer

func (engt EdgeNGramTokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) AsUaxURLEmailTokenizer

func (engt EdgeNGramTokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for EdgeNGramTokenizer.

func (EdgeNGramTokenizer) MarshalJSON

func (engt EdgeNGramTokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for EdgeNGramTokenizer.

type ElisionTokenFilter

type ElisionTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Articles - The set of articles to remove.
	Articles *[]string `json:"articles,omitempty"`
}

ElisionTokenFilter removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene.

func (ElisionTokenFilter) AsASCIIFoldingTokenFilter

func (etf ElisionTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsBasicTokenFilter

func (etf ElisionTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsCjkBigramTokenFilter

func (etf ElisionTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsCommonGramTokenFilter

func (etf ElisionTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsDictionaryDecompounderTokenFilter

func (etf ElisionTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsEdgeNGramTokenFilter

func (etf ElisionTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsEdgeNGramTokenFilterV2

func (etf ElisionTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsElisionTokenFilter

func (etf ElisionTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsKeepTokenFilter

func (etf ElisionTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsKeywordMarkerTokenFilter

func (etf ElisionTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsLengthTokenFilter

func (etf ElisionTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsLimitTokenFilter

func (etf ElisionTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsNGramTokenFilter

func (etf ElisionTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsNGramTokenFilterV2

func (etf ElisionTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsPatternCaptureTokenFilter

func (etf ElisionTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsPatternReplaceTokenFilter

func (etf ElisionTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsPhoneticTokenFilter

func (etf ElisionTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsShingleTokenFilter

func (etf ElisionTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsSnowballTokenFilter

func (etf ElisionTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsStemmerOverrideTokenFilter

func (etf ElisionTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsStemmerTokenFilter

func (etf ElisionTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsStopwordsTokenFilter

func (etf ElisionTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsSynonymTokenFilter

func (etf ElisionTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsTokenFilter

func (etf ElisionTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsTruncateTokenFilter

func (etf ElisionTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsUniqueTokenFilter

func (etf ElisionTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) AsWordDelimiterTokenFilter

func (etf ElisionTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for ElisionTokenFilter.

func (ElisionTokenFilter) MarshalJSON

func (etf ElisionTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for ElisionTokenFilter.
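The As* methods above implement a discriminated-union pattern: every concrete filter can be asked "are you an X?" and answers with a (*X, bool) pair, mirroring Go's comma-ok type assertion. A self-contained sketch of the same pattern with hypothetical miniature types (not the SDK's actual declarations):

```go
package main

import "fmt"

// basicFilter is a hypothetical miniature of the SDK's BasicTokenFilter union.
type basicFilter interface {
	AsElision() (*elision, bool)
	AsStopwords() (*stopwords, bool)
}

type elision struct{ Articles []string }
type stopwords struct{ Words []string }

// Each concrete type answers true only for its own As* method.
func (e elision) AsElision() (*elision, bool)     { return &e, true }
func (e elision) AsStopwords() (*stopwords, bool) { return nil, false }

func (s stopwords) AsElision() (*elision, bool)     { return nil, false }
func (s stopwords) AsStopwords() (*stopwords, bool) { return &s, true }

// describe dispatches on the concrete type via the comma-ok As* calls.
func describe(f basicFilter) string {
	if e, ok := f.AsElision(); ok {
		return fmt.Sprintf("elision filter with %d articles", len(e.Articles))
	}
	if _, ok := f.AsStopwords(); ok {
		return "stopwords filter"
	}
	return "unknown filter"
}

func main() {
	fmt.Println(describe(elision{Articles: []string{"l", "d"}})) // elision filter with 2 articles
	fmt.Println(describe(stopwords{Words: []string{"the"}}))     // stopwords filter
}
```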

type Field

type Field struct {
	// Name - The name of the field.
	Name *string `json:"name,omitempty"`
	// Type - The data type of the field.
	Type *DataType `json:"type,omitempty"`
	// Analyzer - The name of the analyzer to use for the field at search time and indexing time. This option can be used only with searchable fields and it can't be set together with either searchAnalyzer or indexAnalyzer. Once the analyzer is chosen, it cannot be changed for the field.
	Analyzer *AnalyzerName `json:"analyzer,omitempty"`
	// SearchAnalyzer - The name of the analyzer used at search time for the field. This option can be used only with searchable fields. It must be set together with indexAnalyzer and it cannot be set together with the analyzer option. This analyzer can be updated on an existing field.
	SearchAnalyzer *AnalyzerName `json:"searchAnalyzer,omitempty"`
	// IndexAnalyzer - The name of the analyzer used at indexing time for the field. This option can be used only with searchable fields. It must be set together with searchAnalyzer and it cannot be set together with the analyzer option. Once the analyzer is chosen, it cannot be changed for the field.
	IndexAnalyzer *AnalyzerName `json:"indexAnalyzer,omitempty"`
	// IsKey - A value indicating whether the field is the key of the index. Valid only for string fields. Every index must have exactly one key field.
	IsKey *bool `json:"key,omitempty"`
	// IsSearchable - A value indicating whether the field is included in full-text searches. Valid only for string or string collection fields. Default is false.
	IsSearchable *bool `json:"searchable,omitempty"`
	// IsFilterable - A value indicating whether the field can be used in filter expressions. Default is false.
	IsFilterable *bool `json:"filterable,omitempty"`
	// IsSortable - A value indicating whether the field can be used in orderby expressions. Not valid for string collection fields. Default is false.
	IsSortable *bool `json:"sortable,omitempty"`
	// IsFacetable - A value indicating whether it is possible to facet on this field. Not valid for geo-point fields. Default is false.
	IsFacetable *bool `json:"facetable,omitempty"`
	// IsRetrievable - A value indicating whether the field can be returned in a search result. Default is true.
	IsRetrievable *bool `json:"retrievable,omitempty"`
}

Field represents a field in an index definition in Azure Search, which describes the name, data type, and search behavior of a field.

type FieldMapping

type FieldMapping struct {
	// SourceFieldName - The name of the field in the data source.
	SourceFieldName *string `json:"sourceFieldName,omitempty"`
	// TargetFieldName - The name of the target field in the index. Same as the source field name by default.
	TargetFieldName *string `json:"targetFieldName,omitempty"`
	// MappingFunction - A function to apply to each source field value before indexing.
	MappingFunction *FieldMappingFunction `json:"mappingFunction,omitempty"`
}

FieldMapping defines a mapping between a field in a data source and a target field in an index.

type FieldMappingFunction

type FieldMappingFunction struct {
	// Name - The name of the field mapping function.
	Name *string `json:"name,omitempty"`
	// Parameters - A dictionary of parameter name/value pairs to pass to the function. Each value must be of a primitive type.
	Parameters *map[string]*map[string]interface{} `json:"parameters,omitempty"`
}

FieldMappingFunction represents a function that transforms a value from a data source before indexing.

type FreshnessScoringFunction

type FreshnessScoringFunction struct {
	// FieldName - The name of the field used as input to the scoring function.
	FieldName *string `json:"fieldName,omitempty"`
	// Boost - A multiplier for the raw score. Must be a positive number not equal to 1.0.
	Boost *float64 `json:"boost,omitempty"`
	// Interpolation - A value indicating how boosting will be interpolated across document scores; defaults to "Linear". Possible values include: 'Linear', 'Constant', 'Quadratic', 'Logarithmic'
	Interpolation ScoringFunctionInterpolation `json:"interpolation,omitempty"`
	// Type - Possible values include: 'TypeScoringFunction', 'TypeDistance', 'TypeFreshness', 'TypeMagnitude', 'TypeTag'
	Type Type `json:"type,omitempty"`
	// Parameters - Parameter values for the freshness scoring function.
	Parameters *FreshnessScoringParameters `json:"freshness,omitempty"`
}

FreshnessScoringFunction defines a function that boosts scores based on the value of a date-time field.

func (FreshnessScoringFunction) AsBasicScoringFunction

func (fsf FreshnessScoringFunction) AsBasicScoringFunction() (BasicScoringFunction, bool)

AsBasicScoringFunction is the BasicScoringFunction implementation for FreshnessScoringFunction.

func (FreshnessScoringFunction) AsDistanceScoringFunction

func (fsf FreshnessScoringFunction) AsDistanceScoringFunction() (*DistanceScoringFunction, bool)

AsDistanceScoringFunction is the BasicScoringFunction implementation for FreshnessScoringFunction.

func (FreshnessScoringFunction) AsFreshnessScoringFunction

func (fsf FreshnessScoringFunction) AsFreshnessScoringFunction() (*FreshnessScoringFunction, bool)

AsFreshnessScoringFunction is the BasicScoringFunction implementation for FreshnessScoringFunction.

func (FreshnessScoringFunction) AsMagnitudeScoringFunction

func (fsf FreshnessScoringFunction) AsMagnitudeScoringFunction() (*MagnitudeScoringFunction, bool)

AsMagnitudeScoringFunction is the BasicScoringFunction implementation for FreshnessScoringFunction.

func (FreshnessScoringFunction) AsScoringFunction

func (fsf FreshnessScoringFunction) AsScoringFunction() (*ScoringFunction, bool)

AsScoringFunction is the BasicScoringFunction implementation for FreshnessScoringFunction.

func (FreshnessScoringFunction) AsTagScoringFunction

func (fsf FreshnessScoringFunction) AsTagScoringFunction() (*TagScoringFunction, bool)

AsTagScoringFunction is the BasicScoringFunction implementation for FreshnessScoringFunction.

func (FreshnessScoringFunction) MarshalJSON

func (fsf FreshnessScoringFunction) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for FreshnessScoringFunction.

type FreshnessScoringParameters

type FreshnessScoringParameters struct {
	// BoostingDuration - The expiration period after which boosting will stop for a particular document.
	BoostingDuration *string `json:"boostingDuration,omitempty"`
}

FreshnessScoringParameters provides parameter values to a freshness scoring function.

type HighWaterMarkChangeDetectionPolicy

type HighWaterMarkChangeDetectionPolicy struct {
	// OdataType - Possible values include: 'OdataTypeDataChangeDetectionPolicy', 'OdataTypeMicrosoftAzureSearchHighWaterMarkChangeDetectionPolicy', 'OdataTypeMicrosoftAzureSearchSQLIntegratedChangeTrackingPolicy'
	OdataType OdataTypeBasicDataChangeDetectionPolicy `json:"@odata.type,omitempty"`
	// HighWaterMarkColumnName - The name of the high water mark column.
	HighWaterMarkColumnName *string `json:"highWaterMarkColumnName,omitempty"`
}

HighWaterMarkChangeDetectionPolicy defines a data change detection policy that captures changes based on the value of a high water mark column.
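The idea behind the policy: rows whose marker column exceeds the last stored high water mark are considered changed, and the mark advances to the maximum value seen. A toy stdlib illustration of that logic (not the service's implementation; the row shape is hypothetical):

```go
package main

import "fmt"

// row is a hypothetical source row; TS plays the role of the high water mark
// column, which must only ever increase as rows change.
type row struct {
	ID string
	TS int64
}

// changedSince returns rows newer than the stored mark and the advanced mark.
func changedSince(rows []row, mark int64) (changed []row, newMark int64) {
	newMark = mark
	for _, r := range rows {
		if r.TS > mark {
			changed = append(changed, r)
			if r.TS > newMark {
				newMark = r.TS
			}
		}
	}
	return changed, newMark
}

func main() {
	rows := []row{{"a", 5}, {"b", 9}, {"c", 3}}
	changed, mark := changedSince(rows, 4)
	fmt.Println(len(changed), mark) // 2 9
}
```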

func (HighWaterMarkChangeDetectionPolicy) AsBasicDataChangeDetectionPolicy

func (hwmcdp HighWaterMarkChangeDetectionPolicy) AsBasicDataChangeDetectionPolicy() (BasicDataChangeDetectionPolicy, bool)

AsBasicDataChangeDetectionPolicy is the BasicDataChangeDetectionPolicy implementation for HighWaterMarkChangeDetectionPolicy.

func (HighWaterMarkChangeDetectionPolicy) AsDataChangeDetectionPolicy

func (hwmcdp HighWaterMarkChangeDetectionPolicy) AsDataChangeDetectionPolicy() (*DataChangeDetectionPolicy, bool)

AsDataChangeDetectionPolicy is the BasicDataChangeDetectionPolicy implementation for HighWaterMarkChangeDetectionPolicy.

func (HighWaterMarkChangeDetectionPolicy) AsHighWaterMarkChangeDetectionPolicy

func (hwmcdp HighWaterMarkChangeDetectionPolicy) AsHighWaterMarkChangeDetectionPolicy() (*HighWaterMarkChangeDetectionPolicy, bool)

AsHighWaterMarkChangeDetectionPolicy is the BasicDataChangeDetectionPolicy implementation for HighWaterMarkChangeDetectionPolicy.

func (HighWaterMarkChangeDetectionPolicy) AsSQLIntegratedChangeTrackingPolicy

func (hwmcdp HighWaterMarkChangeDetectionPolicy) AsSQLIntegratedChangeTrackingPolicy() (*SQLIntegratedChangeTrackingPolicy, bool)

AsSQLIntegratedChangeTrackingPolicy is the BasicDataChangeDetectionPolicy implementation for HighWaterMarkChangeDetectionPolicy.

func (HighWaterMarkChangeDetectionPolicy) MarshalJSON

func (hwmcdp HighWaterMarkChangeDetectionPolicy) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for HighWaterMarkChangeDetectionPolicy.

type Index

type Index struct {
	autorest.Response `json:"-"`
	// Name - The name of the index.
	Name *string `json:"name,omitempty"`
	// Fields - The fields of the index.
	Fields *[]Field `json:"fields,omitempty"`
	// ScoringProfiles - The scoring profiles for the index.
	ScoringProfiles *[]ScoringProfile `json:"scoringProfiles,omitempty"`
	// DefaultScoringProfile - The name of the scoring profile to use if none is specified in the query. If this property is not set and no scoring profile is specified in the query, then default scoring (tf-idf) will be used.
	DefaultScoringProfile *string `json:"defaultScoringProfile,omitempty"`
	// CorsOptions - Options to control Cross-Origin Resource Sharing (CORS) for the index.
	CorsOptions *CorsOptions `json:"corsOptions,omitempty"`
	// Suggesters - The suggesters for the index.
	Suggesters *[]Suggester `json:"suggesters,omitempty"`
	// Analyzers - The analyzers for the index.
	Analyzers *[]BasicAnalyzer `json:"analyzers,omitempty"`
	// Tokenizers - The tokenizers for the index.
	Tokenizers *[]BasicTokenizer `json:"tokenizers,omitempty"`
	// TokenFilters - The token filters for the index.
	TokenFilters *[]BasicTokenFilter `json:"tokenFilters,omitempty"`
	// CharFilters - The character filters for the index.
	CharFilters *[]BasicCharFilter `json:"charFilters,omitempty"`
	// ETag - The ETag of the index.
	ETag *string `json:"@odata.etag,omitempty"`
}

Index represents an index definition in Azure Search, which describes the fields and search behavior of an index.

func (*Index) UnmarshalJSON

func (i *Index) UnmarshalJSON(body []byte) error

UnmarshalJSON is the custom unmarshaler for Index struct.

type IndexActionType

type IndexActionType string

IndexActionType enumerates the values for index action type.

const (
	// Delete ...
	Delete IndexActionType = "delete"
	// Merge ...
	Merge IndexActionType = "merge"
	// MergeOrUpload ...
	MergeOrUpload IndexActionType = "mergeOrUpload"
	// Upload ...
	Upload IndexActionType = "upload"
)

type IndexGetStatisticsResult

type IndexGetStatisticsResult struct {
	autorest.Response `json:"-"`
	// DocumentCount - The number of documents in the index.
	DocumentCount *int64 `json:"documentCount,omitempty"`
	// StorageSize - The amount of storage in bytes consumed by the index.
	StorageSize *int64 `json:"storageSize,omitempty"`
}

IndexGetStatisticsResult statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.

type IndexListResult

type IndexListResult struct {
	autorest.Response `json:"-"`
	// Indexes - The indexes in the Search service.
	Indexes *[]Index `json:"value,omitempty"`
}

IndexListResult response from a List Indexes request. If successful, it includes the full definitions of all indexes.

type Indexer

type Indexer struct {
	autorest.Response `json:"-"`
	// Name - The name of the indexer.
	Name *string `json:"name,omitempty"`
	// Description - The description of the indexer.
	Description *string `json:"description,omitempty"`
	// DataSourceName - The name of the datasource from which this indexer reads data.
	DataSourceName *string `json:"dataSourceName,omitempty"`
	// TargetIndexName - The name of the index to which this indexer writes data.
	TargetIndexName *string `json:"targetIndexName,omitempty"`
	// Schedule - The schedule for this indexer.
	Schedule *IndexingSchedule `json:"schedule,omitempty"`
	// Parameters - Parameters for indexer execution.
	Parameters *IndexingParameters `json:"parameters,omitempty"`
	// FieldMappings - Defines mappings between fields in the data source and corresponding target fields in the index.
	FieldMappings *[]FieldMapping `json:"fieldMappings,omitempty"`
	// IsDisabled - A value indicating whether the indexer is disabled. Default is false.
	IsDisabled *bool `json:"disabled,omitempty"`
	// ETag - The ETag of the Indexer.
	ETag *string `json:"@odata.etag,omitempty"`
}

Indexer represents an Azure Search indexer.

type IndexerExecutionInfo

type IndexerExecutionInfo struct {
	autorest.Response `json:"-"`
	// Status - Overall indexer status. Possible values include: 'Unknown', 'Error', 'Running'
	Status IndexerStatus `json:"status,omitempty"`
	// LastResult - The result of the most recent or an in-progress indexer execution.
	LastResult *IndexerExecutionResult `json:"lastResult,omitempty"`
	// ExecutionHistory - History of the recent indexer executions, sorted in reverse chronological order.
	ExecutionHistory *[]IndexerExecutionResult `json:"executionHistory,omitempty"`
}

IndexerExecutionInfo represents the current status and execution history of an indexer.

type IndexerExecutionResult

type IndexerExecutionResult struct {
	// Status - The outcome of this indexer execution. Possible values include: 'TransientFailure', 'Success', 'InProgress', 'Reset'
	Status IndexerExecutionStatus `json:"status,omitempty"`
	// ErrorMessage - The error message indicating the top-level error, if any.
	ErrorMessage *string `json:"errorMessage,omitempty"`
	// StartTime - The start time of this indexer execution.
	StartTime *date.Time `json:"startTime,omitempty"`
	// EndTime - The end time of this indexer execution, if the execution has already completed.
	EndTime *date.Time `json:"endTime,omitempty"`
	// Errors - The item-level indexing errors.
	Errors *[]ItemError `json:"errors,omitempty"`
	// ItemCount - The number of items that were processed during this indexer execution. This includes both successfully processed items and items where indexing was attempted but failed.
	ItemCount *int32 `json:"itemsProcessed,omitempty"`
	// FailedItemCount - The number of items that failed to be indexed during this indexer execution.
	FailedItemCount *int32 `json:"itemsFailed,omitempty"`
	// InitialTrackingState - Change tracking state with which an indexer execution started.
	InitialTrackingState *string `json:"initialTrackingState,omitempty"`
	// FinalTrackingState - Change tracking state with which an indexer execution finished.
	FinalTrackingState *string `json:"finalTrackingState,omitempty"`
}

IndexerExecutionResult represents the result of an individual indexer execution.

type IndexerExecutionStatus

type IndexerExecutionStatus string

IndexerExecutionStatus enumerates the values for indexer execution status.

const (
	// InProgress ...
	InProgress IndexerExecutionStatus = "inProgress"
	// Reset ...
	Reset IndexerExecutionStatus = "reset"
	// Success ...
	Success IndexerExecutionStatus = "success"
	// TransientFailure ...
	TransientFailure IndexerExecutionStatus = "transientFailure"
)

type IndexerListResult

type IndexerListResult struct {
	autorest.Response `json:"-"`
	// Indexers - The indexers in the Search service.
	Indexers *[]Indexer `json:"value,omitempty"`
}

IndexerListResult response from a List Indexers request. If successful, it includes the full definitions of all indexers.

type IndexerStatus

type IndexerStatus string

IndexerStatus enumerates the values for indexer status.

const (
	// Error ...
	Error IndexerStatus = "error"
	// Running ...
	Running IndexerStatus = "running"
	// Unknown ...
	Unknown IndexerStatus = "unknown"
)

type IndexersClient

type IndexersClient struct {
	BaseClient
}

IndexersClient is the Search service client for indexer operations.

func NewIndexersClient

func NewIndexersClient() IndexersClient

NewIndexersClient creates an instance of the IndexersClient client.

func NewIndexersClientWithBaseURI

func NewIndexersClientWithBaseURI(baseURI string) IndexersClient

NewIndexersClientWithBaseURI creates an instance of the IndexersClient client.

func (IndexersClient) Create

func (client IndexersClient) Create(ctx context.Context, indexer Indexer, clientRequestID *uuid.UUID) (result Indexer, err error)

Create creates a new Azure Search indexer.

indexer is the definition of the indexer to create. clientRequestID is the tracking ID sent with the request to help with debugging.
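Like every operation on this client, Create decomposes into the Preparer/Sender/Responder stages listed below: the Preparer builds the http.Request, the Sender executes it, and the Responder decodes the result and closes the body. A self-contained sketch of that three-stage pipeline against a local test server (the types, path, and status handling here are hypothetical, not the SDK's):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// indexerDef is a hypothetical stand-in for the SDK's Indexer type.
type indexerDef struct {
	Name string `json:"name,omitempty"`
}

// prepare builds the request (the *Preparer stage).
func prepare(baseURI string, ix indexerDef) (*http.Request, error) {
	body, err := json.Marshal(ix)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURI+"/indexers", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

// send executes the request (the *Sender stage).
func send(req *http.Request) (*http.Response, error) { return http.DefaultClient.Do(req) }

// respond decodes the result and always closes the body (the *Responder stage).
func respond(resp *http.Response) (indexerDef, error) {
	defer resp.Body.Close()
	var out indexerDef
	err := json.NewDecoder(resp.Body).Decode(&out)
	return out, err
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var ix indexerDef
		json.NewDecoder(r.Body).Decode(&ix)
		w.WriteHeader(http.StatusCreated)
		json.NewEncoder(w).Encode(ix) // echo the created definition back
	}))
	defer srv.Close()

	req, _ := prepare(srv.URL, indexerDef{Name: "hotels-indexer"})
	resp, _ := send(req)
	created, _ := respond(resp)
	fmt.Println(created.Name) // hotels-indexer
}
```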

func (IndexersClient) CreateOrUpdate

func (client IndexersClient) CreateOrUpdate(ctx context.Context, indexerName string, indexer Indexer, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (result Indexer, err error)

CreateOrUpdate creates a new Azure Search indexer or updates an indexer if it already exists.

indexerName is the name of the indexer to create or update. indexer is the definition of the indexer to create or update. clientRequestID is the tracking ID sent with the request to help with debugging. ifMatch defines the If-Match condition: the operation will be performed only if the ETag on the server matches this value. ifNoneMatch defines the If-None-Match condition: the operation will be performed only if the ETag on the server does not match this value.
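The ifMatch/ifNoneMatch parameters implement standard HTTP optimistic concurrency: pass the ETag you last read in If-Match, and a conflicting concurrent update fails instead of being silently overwritten. A self-contained sketch of that exchange using a toy server and per-HTTP-convention status codes (the 412 behavior shown is generic HTTP semantics, not verified against this service):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

const currentETag = `"v2"` // the server's current version of the resource

// newServer returns a toy server enforcing If-Match: writes succeed only
// when the client's ETag matches the server's current one.
func newServer() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if im := r.Header.Get("If-Match"); im != "" && im != currentETag {
			w.WriteHeader(http.StatusPreconditionFailed) // 412: stale ETag
			return
		}
		w.WriteHeader(http.StatusOK)
	}))
}

// update issues a conditional PUT and reports the status code.
func update(srvURL, ifMatch string) int {
	req, _ := http.NewRequest(http.MethodPut, srvURL+"/indexers/hotels-indexer", nil)
	if ifMatch != "" {
		req.Header.Set("If-Match", ifMatch)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0
	}
	resp.Body.Close()
	return resp.StatusCode
}

func main() {
	srv := newServer()
	defer srv.Close()
	fmt.Println(update(srv.URL, `"v1"`)) // 412: someone else changed the indexer first
	fmt.Println(update(srv.URL, `"v2"`)) // 200: ETag matches, update proceeds
}
```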

func (IndexersClient) CreateOrUpdatePreparer

func (client IndexersClient) CreateOrUpdatePreparer(ctx context.Context, indexerName string, indexer Indexer, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (*http.Request, error)

CreateOrUpdatePreparer prepares the CreateOrUpdate request.

func (IndexersClient) CreateOrUpdateResponder

func (client IndexersClient) CreateOrUpdateResponder(resp *http.Response) (result Indexer, err error)

CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always closes the http.Response Body.

func (IndexersClient) CreateOrUpdateSender

func (client IndexersClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error)

CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the http.Response Body if it receives an error.

func (IndexersClient) CreatePreparer

func (client IndexersClient) CreatePreparer(ctx context.Context, indexer Indexer, clientRequestID *uuid.UUID) (*http.Request, error)

CreatePreparer prepares the Create request.

func (IndexersClient) CreateResponder

func (client IndexersClient) CreateResponder(resp *http.Response) (result Indexer, err error)

CreateResponder handles the response to the Create request. The method always closes the http.Response Body.

func (IndexersClient) CreateSender

func (client IndexersClient) CreateSender(req *http.Request) (*http.Response, error)

CreateSender sends the Create request. The method will close the http.Response Body if it receives an error.

func (IndexersClient) Delete

func (client IndexersClient) Delete(ctx context.Context, indexerName string, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (result autorest.Response, err error)

Delete deletes an Azure Search indexer.

indexerName is the name of the indexer to delete. clientRequestID is the tracking ID sent with the request to help with debugging. ifMatch defines the If-Match condition: the operation will be performed only if the ETag on the server matches this value. ifNoneMatch defines the If-None-Match condition: the operation will be performed only if the ETag on the server does not match this value.

func (IndexersClient) DeletePreparer

func (client IndexersClient) DeletePreparer(ctx context.Context, indexerName string, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (*http.Request, error)

DeletePreparer prepares the Delete request.

func (IndexersClient) DeleteResponder

func (client IndexersClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error)

DeleteResponder handles the response to the Delete request. The method always closes the http.Response Body.

func (IndexersClient) DeleteSender

func (client IndexersClient) DeleteSender(req *http.Request) (*http.Response, error)

DeleteSender sends the Delete request. The method will close the http.Response Body if it receives an error.

func (IndexersClient) Get

func (client IndexersClient) Get(ctx context.Context, indexerName string, clientRequestID *uuid.UUID) (result Indexer, err error)

Get retrieves an indexer definition from Azure Search.

indexerName is the name of the indexer to retrieve. clientRequestID is the tracking ID sent with the request to help with debugging.

func (IndexersClient) GetPreparer

func (client IndexersClient) GetPreparer(ctx context.Context, indexerName string, clientRequestID *uuid.UUID) (*http.Request, error)

GetPreparer prepares the Get request.

func (IndexersClient) GetResponder

func (client IndexersClient) GetResponder(resp *http.Response) (result Indexer, err error)

GetResponder handles the response to the Get request. The method always closes the http.Response Body.

func (IndexersClient) GetSender

func (client IndexersClient) GetSender(req *http.Request) (*http.Response, error)

GetSender sends the Get request. The method will close the http.Response Body if it receives an error.

func (IndexersClient) GetStatus

func (client IndexersClient) GetStatus(ctx context.Context, indexerName string, clientRequestID *uuid.UUID) (result IndexerExecutionInfo, err error)

GetStatus returns the current status and execution history of an indexer.

indexerName is the name of the indexer for which to retrieve status. clientRequestID is the tracking ID sent with the request to help with debugging.

func (IndexersClient) GetStatusPreparer

func (client IndexersClient) GetStatusPreparer(ctx context.Context, indexerName string, clientRequestID *uuid.UUID) (*http.Request, error)

GetStatusPreparer prepares the GetStatus request.

func (IndexersClient) GetStatusResponder

func (client IndexersClient) GetStatusResponder(resp *http.Response) (result IndexerExecutionInfo, err error)

GetStatusResponder handles the response to the GetStatus request. The method always closes the http.Response Body.

func (IndexersClient) GetStatusSender

func (client IndexersClient) GetStatusSender(req *http.Request) (*http.Response, error)

GetStatusSender sends the GetStatus request. The method will close the http.Response Body if it receives an error.

func (IndexersClient) List

func (client IndexersClient) List(ctx context.Context, clientRequestID *uuid.UUID) (result IndexerListResult, err error)

List lists all indexers available for an Azure Search service.

clientRequestID is the tracking ID sent with the request to help with debugging.

func (IndexersClient) ListPreparer

func (client IndexersClient) ListPreparer(ctx context.Context, clientRequestID *uuid.UUID) (*http.Request, error)

ListPreparer prepares the List request.

func (IndexersClient) ListResponder

func (client IndexersClient) ListResponder(resp *http.Response) (result IndexerListResult, err error)

ListResponder handles the response to the List request. The method always closes the http.Response Body.

func (IndexersClient) ListSender

func (client IndexersClient) ListSender(req *http.Request) (*http.Response, error)

ListSender sends the List request. The method will close the http.Response Body if it receives an error.

func (IndexersClient) Reset

func (client IndexersClient) Reset(ctx context.Context, indexerName string, clientRequestID *uuid.UUID) (result autorest.Response, err error)

Reset resets the change tracking state associated with an Azure Search indexer.

indexerName is the name of the indexer to reset. clientRequestID is the tracking ID sent with the request to help with debugging.

func (IndexersClient) ResetPreparer

func (client IndexersClient) ResetPreparer(ctx context.Context, indexerName string, clientRequestID *uuid.UUID) (*http.Request, error)

ResetPreparer prepares the Reset request.

func (IndexersClient) ResetResponder

func (client IndexersClient) ResetResponder(resp *http.Response) (result autorest.Response, err error)

ResetResponder handles the response to the Reset request. The method always closes the http.Response Body.

func (IndexersClient) ResetSender

func (client IndexersClient) ResetSender(req *http.Request) (*http.Response, error)

ResetSender sends the Reset request. The method will close the http.Response Body if it receives an error.

func (IndexersClient) Run

func (client IndexersClient) Run(ctx context.Context, indexerName string, clientRequestID *uuid.UUID) (result autorest.Response, err error)

Run runs an Azure Search indexer on-demand.

indexerName is the name of the indexer to run. clientRequestID is the tracking ID sent with the request to help with debugging.

func (IndexersClient) RunPreparer

func (client IndexersClient) RunPreparer(ctx context.Context, indexerName string, clientRequestID *uuid.UUID) (*http.Request, error)

RunPreparer prepares the Run request.

func (IndexersClient) RunResponder

func (client IndexersClient) RunResponder(resp *http.Response) (result autorest.Response, err error)

RunResponder handles the response to the Run request. The method always closes the http.Response Body.

func (IndexersClient) RunSender

func (client IndexersClient) RunSender(req *http.Request) (*http.Response, error)

RunSender sends the Run request. The method will close the http.Response Body if it receives an error.

type IndexesClient

type IndexesClient struct {
	BaseClient
}

IndexesClient is the client for the Indexes methods of the Search service.

func NewIndexesClient

func NewIndexesClient() IndexesClient

NewIndexesClient creates an instance of the IndexesClient client.

func NewIndexesClientWithBaseURI

func NewIndexesClientWithBaseURI(baseURI string) IndexesClient

NewIndexesClientWithBaseURI creates an instance of the IndexesClient client.

func (IndexesClient) Analyze

func (client IndexesClient) Analyze(ctx context.Context, indexName string, request AnalyzeRequest, clientRequestID *uuid.UUID) (result AnalyzeResult, err error)

Analyze shows how an analyzer breaks text into tokens.

indexName is the name of the index for which to test an analyzer. request is the text and analyzer or analysis components to test. clientRequestID is the tracking ID sent with the request to help with debugging.

func (IndexesClient) AnalyzePreparer

func (client IndexesClient) AnalyzePreparer(ctx context.Context, indexName string, request AnalyzeRequest, clientRequestID *uuid.UUID) (*http.Request, error)

AnalyzePreparer prepares the Analyze request.

func (IndexesClient) AnalyzeResponder

func (client IndexesClient) AnalyzeResponder(resp *http.Response) (result AnalyzeResult, err error)

AnalyzeResponder handles the response to the Analyze request. The method always closes the http.Response Body.

func (IndexesClient) AnalyzeSender

func (client IndexesClient) AnalyzeSender(req *http.Request) (*http.Response, error)

AnalyzeSender sends the Analyze request. The method will close the http.Response Body if it receives an error.

func (IndexesClient) Create

func (client IndexesClient) Create(ctx context.Context, indexParameter Index, clientRequestID *uuid.UUID) (result Index, err error)

Create creates a new Azure Search index.

indexParameter is the definition of the index to create. clientRequestID is the tracking ID sent with the request to help with debugging.

func (IndexesClient) CreateOrUpdate

func (client IndexesClient) CreateOrUpdate(ctx context.Context, indexName string, indexParameter Index, allowIndexDowntime *bool, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (result Index, err error)

CreateOrUpdate creates a new Azure Search index or updates an index if it already exists.

indexName is the name of the index to create or update. indexParameter is the definition of the index to create or update. allowIndexDowntime allows new analyzers, tokenizers, token filters, or char filters to be added to an index by taking the index offline for at least a few seconds. This temporarily causes indexing and query requests to fail. Performance and write availability of the index can be impaired for several minutes after the index is updated, or longer for very large indexes. clientRequestID is the tracking ID sent with the request to help with debugging. ifMatch defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value. ifNoneMatch defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.

func (IndexesClient) CreateOrUpdatePreparer

func (client IndexesClient) CreateOrUpdatePreparer(ctx context.Context, indexName string, indexParameter Index, allowIndexDowntime *bool, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (*http.Request, error)

CreateOrUpdatePreparer prepares the CreateOrUpdate request.

func (IndexesClient) CreateOrUpdateResponder

func (client IndexesClient) CreateOrUpdateResponder(resp *http.Response) (result Index, err error)

CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always closes the http.Response Body.

func (IndexesClient) CreateOrUpdateSender

func (client IndexesClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error)

CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the http.Response Body if it receives an error.

func (IndexesClient) CreatePreparer

func (client IndexesClient) CreatePreparer(ctx context.Context, indexParameter Index, clientRequestID *uuid.UUID) (*http.Request, error)

CreatePreparer prepares the Create request.

func (IndexesClient) CreateResponder

func (client IndexesClient) CreateResponder(resp *http.Response) (result Index, err error)

CreateResponder handles the response to the Create request. The method always closes the http.Response Body.

func (IndexesClient) CreateSender

func (client IndexesClient) CreateSender(req *http.Request) (*http.Response, error)

CreateSender sends the Create request. The method will close the http.Response Body if it receives an error.

func (IndexesClient) Delete

func (client IndexesClient) Delete(ctx context.Context, indexName string, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (result autorest.Response, err error)

Delete deletes an Azure Search index and all the documents it contains.

indexName is the name of the index to delete. clientRequestID is the tracking ID sent with the request to help with debugging. ifMatch defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value. ifNoneMatch defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.

func (IndexesClient) DeletePreparer

func (client IndexesClient) DeletePreparer(ctx context.Context, indexName string, clientRequestID *uuid.UUID, ifMatch string, ifNoneMatch string) (*http.Request, error)

DeletePreparer prepares the Delete request.

func (IndexesClient) DeleteResponder

func (client IndexesClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error)

DeleteResponder handles the response to the Delete request. The method always closes the http.Response Body.

func (IndexesClient) DeleteSender

func (client IndexesClient) DeleteSender(req *http.Request) (*http.Response, error)

DeleteSender sends the Delete request. The method will close the http.Response Body if it receives an error.

func (IndexesClient) Get

func (client IndexesClient) Get(ctx context.Context, indexName string, clientRequestID *uuid.UUID) (result Index, err error)

Get retrieves an index definition from Azure Search.

indexName is the name of the index to retrieve. clientRequestID is the tracking ID sent with the request to help with debugging.

func (IndexesClient) GetPreparer

func (client IndexesClient) GetPreparer(ctx context.Context, indexName string, clientRequestID *uuid.UUID) (*http.Request, error)

GetPreparer prepares the Get request.

func (IndexesClient) GetResponder

func (client IndexesClient) GetResponder(resp *http.Response) (result Index, err error)

GetResponder handles the response to the Get request. The method always closes the http.Response Body.

func (IndexesClient) GetSender

func (client IndexesClient) GetSender(req *http.Request) (*http.Response, error)

GetSender sends the Get request. The method will close the http.Response Body if it receives an error.

func (IndexesClient) GetStatistics

func (client IndexesClient) GetStatistics(ctx context.Context, indexName string, clientRequestID *uuid.UUID) (result IndexGetStatisticsResult, err error)

GetStatistics returns statistics for the given index, including a document count and storage usage.

indexName is the name of the index for which to retrieve statistics. clientRequestID is the tracking ID sent with the request to help with debugging.

func (IndexesClient) GetStatisticsPreparer

func (client IndexesClient) GetStatisticsPreparer(ctx context.Context, indexName string, clientRequestID *uuid.UUID) (*http.Request, error)

GetStatisticsPreparer prepares the GetStatistics request.

func (IndexesClient) GetStatisticsResponder

func (client IndexesClient) GetStatisticsResponder(resp *http.Response) (result IndexGetStatisticsResult, err error)

GetStatisticsResponder handles the response to the GetStatistics request. The method always closes the http.Response Body.

func (IndexesClient) GetStatisticsSender

func (client IndexesClient) GetStatisticsSender(req *http.Request) (*http.Response, error)

GetStatisticsSender sends the GetStatistics request. The method will close the http.Response Body if it receives an error.

func (IndexesClient) List

func (client IndexesClient) List(ctx context.Context, selectParameter string, clientRequestID *uuid.UUID) (result IndexListResult, err error)

List lists all indexes available for an Azure Search service.

selectParameter selects which properties of the index definitions to retrieve. Specified as a comma-separated list of JSON property names, or '*' for all properties. The default is all properties. clientRequestID is the tracking ID sent with the request to help with debugging.

func (IndexesClient) ListPreparer

func (client IndexesClient) ListPreparer(ctx context.Context, selectParameter string, clientRequestID *uuid.UUID) (*http.Request, error)

ListPreparer prepares the List request.

func (IndexesClient) ListResponder

func (client IndexesClient) ListResponder(resp *http.Response) (result IndexListResult, err error)

ListResponder handles the response to the List request. The method always closes the http.Response Body.

func (IndexesClient) ListSender

func (client IndexesClient) ListSender(req *http.Request) (*http.Response, error)

ListSender sends the List request. The method will close the http.Response Body if it receives an error.

type IndexingParameters

type IndexingParameters struct {
	// BatchSize - The number of items that are read from the data source and indexed as a single batch in order to improve performance. The default depends on the data source type.
	BatchSize *int32 `json:"batchSize,omitempty"`
	// MaxFailedItems - The maximum number of items that can fail indexing for indexer execution to still be considered successful. -1 means no limit. Default is 0.
	MaxFailedItems *int32 `json:"maxFailedItems,omitempty"`
	// MaxFailedItemsPerBatch - The maximum number of items in a single batch that can fail indexing for the batch to still be considered successful. -1 means no limit. Default is 0.
	MaxFailedItemsPerBatch *int32 `json:"maxFailedItemsPerBatch,omitempty"`
	// Base64EncodeKeys - Whether the indexer will base64-encode all values that are inserted into the key field of the target index. This is needed if document keys can contain characters that are invalid in index keys (such as dot '.'). Default is false.
	Base64EncodeKeys *bool `json:"base64EncodeKeys,omitempty"`
	// Configuration - A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type.
	Configuration *map[string]*map[string]interface{} `json:"configuration,omitempty"`
}

IndexingParameters represents parameters for indexer execution.

type IndexingResult

type IndexingResult struct {
	// Key - The key of a document that was in the indexing request.
	Key *string `json:"key,omitempty"`
	// ErrorMessage - The error message explaining why the indexing operation failed for the document identified by the key; null if indexing succeeded.
	ErrorMessage *string `json:"errorMessage,omitempty"`
	// Succeeded - A value indicating whether the indexing operation succeeded for the document identified by the key.
	Succeeded *bool `json:"status,omitempty"`
	// StatusCode - The status code of the indexing operation. Possible values include: 200 for a successful update or delete, 201 for successful document creation, 400 for a malformed input document, 404 for document not found, 409 for a version conflict, 422 when the index is temporarily unavailable, or 503 when the service is too busy.
	StatusCode *int32 `json:"statusCode,omitempty"`
}

IndexingResult represents the status of an indexing operation for a single document.

type IndexingSchedule

type IndexingSchedule struct {
	// Interval - The interval of time between indexer executions.
	Interval *string `json:"interval,omitempty"`
	// StartTime - The time when an indexer should start running.
	StartTime *date.Time `json:"startTime,omitempty"`
}

IndexingSchedule represents a schedule for indexer execution.

type Int64

type Int64 struct {
	autorest.Response `json:"-"`
	Value             *int64 `json:"value,omitempty"`
}

Int64 is a response wrapper containing a single int64 value.

type ItemError

type ItemError struct {
	// Key - The key of the item for which indexing failed.
	Key *string `json:"key,omitempty"`
	// ErrorMessage - The message describing the error that occurred while attempting to index the item.
	ErrorMessage *string `json:"errorMessage,omitempty"`
}

ItemError represents an item- or document-level indexing error.

type KeepTokenFilter

type KeepTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// KeepWords - The list of words to keep.
	KeepWords *[]string `json:"keepWords,omitempty"`
	// LowerCaseKeepWords - A value indicating whether to lower case all words first. Default is false.
	LowerCaseKeepWords *bool `json:"keepWordsCase,omitempty"`
}

KeepTokenFilter a token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene.

func (KeepTokenFilter) AsASCIIFoldingTokenFilter

func (ktf KeepTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsBasicTokenFilter

func (ktf KeepTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsCjkBigramTokenFilter

func (ktf KeepTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsCommonGramTokenFilter

func (ktf KeepTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsDictionaryDecompounderTokenFilter

func (ktf KeepTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsEdgeNGramTokenFilter

func (ktf KeepTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsEdgeNGramTokenFilterV2

func (ktf KeepTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsElisionTokenFilter

func (ktf KeepTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsKeepTokenFilter

func (ktf KeepTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsKeywordMarkerTokenFilter

func (ktf KeepTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsLengthTokenFilter

func (ktf KeepTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsLimitTokenFilter

func (ktf KeepTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsNGramTokenFilter

func (ktf KeepTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsNGramTokenFilterV2

func (ktf KeepTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsPatternCaptureTokenFilter

func (ktf KeepTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsPatternReplaceTokenFilter

func (ktf KeepTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsPhoneticTokenFilter

func (ktf KeepTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsShingleTokenFilter

func (ktf KeepTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsSnowballTokenFilter

func (ktf KeepTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsStemmerOverrideTokenFilter

func (ktf KeepTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsStemmerTokenFilter

func (ktf KeepTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsStopwordsTokenFilter

func (ktf KeepTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsSynonymTokenFilter

func (ktf KeepTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsTokenFilter

func (ktf KeepTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsTruncateTokenFilter

func (ktf KeepTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsUniqueTokenFilter

func (ktf KeepTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) AsWordDelimiterTokenFilter

func (ktf KeepTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for KeepTokenFilter.

func (KeepTokenFilter) MarshalJSON

func (ktf KeepTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for KeepTokenFilter.

type KeywordMarkerTokenFilter

type KeywordMarkerTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Keywords - A list of words to mark as keywords.
	Keywords *[]string `json:"keywords,omitempty"`
	// IgnoreCase - A value indicating whether to ignore case. If true, all words are converted to lower case first. Default is false.
	IgnoreCase *bool `json:"ignoreCase,omitempty"`
}

KeywordMarkerTokenFilter marks terms as keywords. This token filter is implemented using Apache Lucene.

func (KeywordMarkerTokenFilter) AsASCIIFoldingTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsBasicTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsCjkBigramTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsCommonGramTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsDictionaryDecompounderTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsEdgeNGramTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsEdgeNGramTokenFilterV2

func (kmtf KeywordMarkerTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsElisionTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsKeepTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsKeywordMarkerTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsLengthTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsLimitTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsNGramTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsNGramTokenFilterV2

func (kmtf KeywordMarkerTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsPatternCaptureTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsPatternReplaceTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsPhoneticTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsShingleTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsSnowballTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsStemmerOverrideTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsStemmerTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsStopwordsTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsSynonymTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsTruncateTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsUniqueTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) AsWordDelimiterTokenFilter

func (kmtf KeywordMarkerTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for KeywordMarkerTokenFilter.

func (KeywordMarkerTokenFilter) MarshalJSON

func (kmtf KeywordMarkerTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for KeywordMarkerTokenFilter.

type KeywordTokenizer

type KeywordTokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// BufferSize - The read buffer size in bytes. Default is 256.
	BufferSize *int32 `json:"bufferSize,omitempty"`
}

KeywordTokenizer emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.
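The tokenizer's documented behavior is simple enough to sketch outside the SDK. The helper below is illustrative only (it is not part of this package) and mirrors what the description says: the whole input becomes one token.

```go
package main

import "fmt"

// keywordTokenize mimics the behavior documented for KeywordTokenizer:
// the entire input is emitted as a single token. (BufferSize tunes reading
// in the real tokenizer and does not change the output, so it is omitted.)
func keywordTokenize(input string) []string {
	if input == "" {
		return nil
	}
	return []string{input}
}

func main() {
	fmt.Println(keywordTokenize("Azure Cognitive Search"))
}
```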

func (KeywordTokenizer) AsBasicTokenizer

func (kt KeywordTokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsClassicTokenizer

func (kt KeywordTokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsEdgeNGramTokenizer

func (kt KeywordTokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsKeywordTokenizer

func (kt KeywordTokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsKeywordTokenizerV2

func (kt KeywordTokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsMicrosoftLanguageStemmingTokenizer

func (kt KeywordTokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsMicrosoftLanguageTokenizer

func (kt KeywordTokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsNGramTokenizer

func (kt KeywordTokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsPathHierarchyTokenizer

func (kt KeywordTokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsPathHierarchyTokenizerV2

func (kt KeywordTokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsPatternTokenizer

func (kt KeywordTokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsStandardTokenizer

func (kt KeywordTokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsStandardTokenizerV2

func (kt KeywordTokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsTokenizer

func (kt KeywordTokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) AsUaxURLEmailTokenizer

func (kt KeywordTokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for KeywordTokenizer.

func (KeywordTokenizer) MarshalJSON

func (kt KeywordTokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for KeywordTokenizer.
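The many As* methods above follow one discriminated-union pattern used throughout this package: every concrete type implements the full set of conversions, and only the method matching the receiver's actual type returns a non-nil pointer and true. The sketch below reproduces that pattern with two invented local types (not the SDK's) to show how callers dispatch on it.

```go
package main

import "fmt"

// basicTokenizer stands in for the package's BasicTokenizer interface;
// keyword and pattern are illustrative stand-ins for concrete tokenizers.
type basicTokenizer interface {
	asKeyword() (*keyword, bool)
	asPattern() (*pattern, bool)
}

type keyword struct{ name string }
type pattern struct{ name string }

// Each type answers true only for its own As* method.
func (k keyword) asKeyword() (*keyword, bool) { return &k, true }
func (k keyword) asPattern() (*pattern, bool) { return nil, false }
func (p pattern) asKeyword() (*keyword, bool) { return nil, false }
func (p pattern) asPattern() (*pattern, bool) { return &p, true }

// describe dispatches on the dynamic type via the As* conversions.
func describe(t basicTokenizer) string {
	if kw, ok := t.asKeyword(); ok {
		return "keyword tokenizer: " + kw.name
	}
	return "some other tokenizer"
}

func main() {
	fmt.Println(describe(keyword{name: "kw1"}))
	fmt.Println(describe(pattern{name: "p1"}))
}
```

In Go code that already has a concrete type, a plain type switch works too; the As* methods exist so deserialized polymorphic values can be narrowed without exporting every concrete type's tag.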

type KeywordTokenizerV2

type KeywordTokenizerV2 struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// MaxTokenLength - The maximum token length. Default is 256. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
	MaxTokenLength *int32 `json:"maxTokenLength,omitempty"`
}

KeywordTokenizerV2 emits the entire input as a single token.

func (KeywordTokenizerV2) AsBasicTokenizer

func (ktv KeywordTokenizerV2) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsClassicTokenizer

func (ktv KeywordTokenizerV2) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsEdgeNGramTokenizer

func (ktv KeywordTokenizerV2) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsKeywordTokenizer

func (ktv KeywordTokenizerV2) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsKeywordTokenizerV2

func (ktv KeywordTokenizerV2) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsMicrosoftLanguageStemmingTokenizer

func (ktv KeywordTokenizerV2) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsMicrosoftLanguageTokenizer

func (ktv KeywordTokenizerV2) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsNGramTokenizer

func (ktv KeywordTokenizerV2) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsPathHierarchyTokenizer

func (ktv KeywordTokenizerV2) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsPathHierarchyTokenizerV2

func (ktv KeywordTokenizerV2) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsPatternTokenizer

func (ktv KeywordTokenizerV2) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsStandardTokenizer

func (ktv KeywordTokenizerV2) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsStandardTokenizerV2

func (ktv KeywordTokenizerV2) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsTokenizer

func (ktv KeywordTokenizerV2) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) AsUaxURLEmailTokenizer

func (ktv KeywordTokenizerV2) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for KeywordTokenizerV2.

func (KeywordTokenizerV2) MarshalJSON

func (ktv KeywordTokenizerV2) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for KeywordTokenizerV2.

type LengthTokenFilter

type LengthTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Min - The minimum length in characters. Default is 0. Maximum is 300. Must be less than the value of max.
	Min *int32 `json:"min,omitempty"`
	// Max - The maximum length in characters. Default and maximum is 300.
	Max *int32 `json:"max,omitempty"`
}

LengthTokenFilter removes words that are too long or too short. This token filter is implemented using Apache Lucene.
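As a sketch of the documented semantics (not the SDK or Lucene implementation), the filter keeps only tokens whose character length falls within [min, max]; the service defaults are min 0 and max 300.

```go
package main

import "fmt"

// lengthFilter keeps tokens whose length in characters is between min and
// max inclusive, mirroring the documented LengthTokenFilter behavior.
func lengthFilter(tokens []string, min, max int) []string {
	var out []string
	for _, t := range tokens {
		if n := len([]rune(t)); n >= min && n <= max {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	fmt.Println(lengthFilter([]string{"a", "go", "gopher", "internationalization"}, 2, 10))
}
```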

func (LengthTokenFilter) AsASCIIFoldingTokenFilter

func (ltf LengthTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsBasicTokenFilter

func (ltf LengthTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsCjkBigramTokenFilter

func (ltf LengthTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsCommonGramTokenFilter

func (ltf LengthTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsDictionaryDecompounderTokenFilter

func (ltf LengthTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsEdgeNGramTokenFilter

func (ltf LengthTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsEdgeNGramTokenFilterV2

func (ltf LengthTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsElisionTokenFilter

func (ltf LengthTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsKeepTokenFilter

func (ltf LengthTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsKeywordMarkerTokenFilter

func (ltf LengthTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsLengthTokenFilter

func (ltf LengthTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsLimitTokenFilter

func (ltf LengthTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsNGramTokenFilter

func (ltf LengthTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsNGramTokenFilterV2

func (ltf LengthTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsPatternCaptureTokenFilter

func (ltf LengthTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsPatternReplaceTokenFilter

func (ltf LengthTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsPhoneticTokenFilter

func (ltf LengthTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsShingleTokenFilter

func (ltf LengthTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsSnowballTokenFilter

func (ltf LengthTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsStemmerOverrideTokenFilter

func (ltf LengthTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsStemmerTokenFilter

func (ltf LengthTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsStopwordsTokenFilter

func (ltf LengthTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsSynonymTokenFilter

func (ltf LengthTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsTokenFilter

func (ltf LengthTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsTruncateTokenFilter

func (ltf LengthTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsUniqueTokenFilter

func (ltf LengthTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) AsWordDelimiterTokenFilter

func (ltf LengthTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for LengthTokenFilter.

func (LengthTokenFilter) MarshalJSON

func (ltf LengthTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for LengthTokenFilter.

type LimitTokenFilter

type LimitTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// MaxTokenCount - The maximum number of tokens to produce. Default is 1.
	MaxTokenCount *int32 `json:"maxTokenCount,omitempty"`
	// ConsumeAllTokens - A value indicating whether all tokens from the input must be consumed even if maxTokenCount is reached. Default is false.
	ConsumeAllTokens *bool `json:"consumeAllTokens,omitempty"`
}

LimitTokenFilter limits the number of tokens while indexing. This token filter is implemented using Apache Lucene.
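The effect on the token stream can be sketched as a simple truncation (illustrative, not the SDK's code). ConsumeAllTokens only controls whether the remainder of the input is still read, which does not change what is emitted, so it is left out.

```go
package main

import "fmt"

// limitTokens keeps at most maxTokenCount tokens, mirroring the documented
// LimitTokenFilter behavior (the service default for maxTokenCount is 1).
func limitTokens(tokens []string, maxTokenCount int) []string {
	if len(tokens) <= maxTokenCount {
		return tokens
	}
	return tokens[:maxTokenCount]
}

func main() {
	fmt.Println(limitTokens([]string{"one", "two", "three"}, 1))
}
```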

func (LimitTokenFilter) AsASCIIFoldingTokenFilter

func (ltf LimitTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsBasicTokenFilter

func (ltf LimitTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsCjkBigramTokenFilter

func (ltf LimitTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsCommonGramTokenFilter

func (ltf LimitTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsDictionaryDecompounderTokenFilter

func (ltf LimitTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsEdgeNGramTokenFilter

func (ltf LimitTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsEdgeNGramTokenFilterV2

func (ltf LimitTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsElisionTokenFilter

func (ltf LimitTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsKeepTokenFilter

func (ltf LimitTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsKeywordMarkerTokenFilter

func (ltf LimitTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsLengthTokenFilter

func (ltf LimitTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsLimitTokenFilter

func (ltf LimitTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsNGramTokenFilter

func (ltf LimitTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsNGramTokenFilterV2

func (ltf LimitTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsPatternCaptureTokenFilter

func (ltf LimitTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsPatternReplaceTokenFilter

func (ltf LimitTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsPhoneticTokenFilter

func (ltf LimitTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsShingleTokenFilter

func (ltf LimitTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsSnowballTokenFilter

func (ltf LimitTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsStemmerOverrideTokenFilter

func (ltf LimitTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsStemmerTokenFilter

func (ltf LimitTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsStopwordsTokenFilter

func (ltf LimitTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsSynonymTokenFilter

func (ltf LimitTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsTokenFilter

func (ltf LimitTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsTruncateTokenFilter

func (ltf LimitTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsUniqueTokenFilter

func (ltf LimitTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) AsWordDelimiterTokenFilter

func (ltf LimitTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for LimitTokenFilter.

func (LimitTokenFilter) MarshalJSON

func (ltf LimitTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for LimitTokenFilter.

type MagnitudeScoringFunction

type MagnitudeScoringFunction struct {
	// FieldName - The name of the field used as input to the scoring function.
	FieldName *string `json:"fieldName,omitempty"`
	// Boost - A multiplier for the raw score. Must be a positive number not equal to 1.0.
	Boost *float64 `json:"boost,omitempty"`
	// Interpolation - A value indicating how boosting will be interpolated across document scores; defaults to "Linear". Possible values include: 'Linear', 'Constant', 'Quadratic', 'Logarithmic'
	Interpolation ScoringFunctionInterpolation `json:"interpolation,omitempty"`
	// Type - Possible values include: 'TypeScoringFunction', 'TypeDistance', 'TypeFreshness', 'TypeMagnitude', 'TypeTag'
	Type Type `json:"type,omitempty"`
	// Parameters - Parameter values for the magnitude scoring function.
	Parameters *MagnitudeScoringParameters `json:"magnitude,omitempty"`
}

MagnitudeScoringFunction defines a function that boosts scores based on the magnitude of a numeric field.

func (MagnitudeScoringFunction) AsBasicScoringFunction

func (msf MagnitudeScoringFunction) AsBasicScoringFunction() (BasicScoringFunction, bool)

AsBasicScoringFunction is the BasicScoringFunction implementation for MagnitudeScoringFunction.

func (MagnitudeScoringFunction) AsDistanceScoringFunction

func (msf MagnitudeScoringFunction) AsDistanceScoringFunction() (*DistanceScoringFunction, bool)

AsDistanceScoringFunction is the BasicScoringFunction implementation for MagnitudeScoringFunction.

func (MagnitudeScoringFunction) AsFreshnessScoringFunction

func (msf MagnitudeScoringFunction) AsFreshnessScoringFunction() (*FreshnessScoringFunction, bool)

AsFreshnessScoringFunction is the BasicScoringFunction implementation for MagnitudeScoringFunction.

func (MagnitudeScoringFunction) AsMagnitudeScoringFunction

func (msf MagnitudeScoringFunction) AsMagnitudeScoringFunction() (*MagnitudeScoringFunction, bool)

AsMagnitudeScoringFunction is the BasicScoringFunction implementation for MagnitudeScoringFunction.

func (MagnitudeScoringFunction) AsScoringFunction

func (msf MagnitudeScoringFunction) AsScoringFunction() (*ScoringFunction, bool)

AsScoringFunction is the BasicScoringFunction implementation for MagnitudeScoringFunction.

func (MagnitudeScoringFunction) AsTagScoringFunction

func (msf MagnitudeScoringFunction) AsTagScoringFunction() (*TagScoringFunction, bool)

AsTagScoringFunction is the BasicScoringFunction implementation for MagnitudeScoringFunction.

func (MagnitudeScoringFunction) MarshalJSON

func (msf MagnitudeScoringFunction) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for MagnitudeScoringFunction.

type MagnitudeScoringParameters

type MagnitudeScoringParameters struct {
	// BoostingRangeStart - The field value at which boosting starts.
	BoostingRangeStart *float64 `json:"boostingRangeStart,omitempty"`
	// BoostingRangeEnd - The field value at which boosting ends.
	BoostingRangeEnd *float64 `json:"boostingRangeEnd,omitempty"`
	// ShouldBoostBeyondRangeByConstant - A value indicating whether to apply a constant boost for field values beyond the range end value; default is false.
	ShouldBoostBeyondRangeByConstant *bool `json:"constantBoostBeyondRange,omitempty"`
}

MagnitudeScoringParameters provides parameter values to a magnitude scoring function.
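To make the parameters concrete, the sketch below shows one plausible reading of a magnitude function with "Linear" interpolation: no boost below BoostingRangeStart, a linear ramp up to the full Boost at BoostingRangeEnd, and past the end either a constant boost (when ShouldBoostBeyondRangeByConstant is true) or, by this sketch's assumption, no boost. This is an illustration of what the fields mean, not the service's exact scoring formula.

```go
package main

import "fmt"

// linearMagnitudeBoost is an illustrative model of MagnitudeScoringParameters
// under Linear interpolation. A returned multiplier of 1.0 means "no boost".
func linearMagnitudeBoost(value, rangeStart, rangeEnd, boost float64, boostBeyondRange bool) float64 {
	switch {
	case value <= rangeStart:
		return 1.0 // below the range: no boost
	case value >= rangeEnd:
		if boostBeyondRange {
			return boost // constant boost past the range end
		}
		return 1.0 // assumption in this sketch: boosting stops past the end
	default:
		// linear ramp from 1.0 at rangeStart to boost at rangeEnd
		frac := (value - rangeStart) / (rangeEnd - rangeStart)
		return 1.0 + frac*(boost-1.0)
	}
}

func main() {
	fmt.Println(linearMagnitudeBoost(15, 10, 20, 3, false))
}
```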

type MappingCharFilter

type MappingCharFilter struct {
	// Name - The name of the char filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeCharFilter', 'OdataTypeMicrosoftAzureSearchMappingCharFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceCharFilter'
	OdataType OdataTypeBasicCharFilter `json:"@odata.type,omitempty"`
	// Mappings - A list of mappings of the following format: "a=>b" (all occurrences of the character "a" will be replaced with character "b").
	Mappings *[]string `json:"mappings,omitempty"`
}

MappingCharFilter is a character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene.
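The "a=>b" mapping format and the greedy longest-match rule can be sketched as follows (an illustration of the documented semantics, not the Lucene implementation).

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// applyMappings applies "from=>to" rules to input the way MappingCharFilter
// is documented to: at each position the longest matching pattern wins, and
// the replacement may be empty.
func applyMappings(input string, mappings []string) string {
	type rule struct{ from, to string }
	var rules []rule
	for _, m := range mappings {
		if i := strings.Index(m, "=>"); i >= 0 {
			rules = append(rules, rule{from: m[:i], to: m[i+2:]})
		}
	}
	// Longest patterns first so the greedy match wins.
	sort.Slice(rules, func(i, j int) bool { return len(rules[i].from) > len(rules[j].from) })

	var b strings.Builder
	for i := 0; i < len(input); {
		matched := false
		for _, r := range rules {
			if r.from != "" && strings.HasPrefix(input[i:], r.from) {
				b.WriteString(r.to)
				i += len(r.from)
				matched = true
				break
			}
		}
		if !matched {
			b.WriteByte(input[i])
			i++
		}
	}
	return b.String()
}

func main() {
	fmt.Println(applyMappings("aab", []string{"aa=>x", "a=>y"})) // greedy: prints "xb"
}
```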

func (MappingCharFilter) AsBasicCharFilter

func (mcf MappingCharFilter) AsBasicCharFilter() (BasicCharFilter, bool)

AsBasicCharFilter is the BasicCharFilter implementation for MappingCharFilter.

func (MappingCharFilter) AsCharFilter

func (mcf MappingCharFilter) AsCharFilter() (*CharFilter, bool)

AsCharFilter is the BasicCharFilter implementation for MappingCharFilter.

func (MappingCharFilter) AsMappingCharFilter

func (mcf MappingCharFilter) AsMappingCharFilter() (*MappingCharFilter, bool)

AsMappingCharFilter is the BasicCharFilter implementation for MappingCharFilter.

func (MappingCharFilter) AsPatternReplaceCharFilter

func (mcf MappingCharFilter) AsPatternReplaceCharFilter() (*PatternReplaceCharFilter, bool)

AsPatternReplaceCharFilter is the BasicCharFilter implementation for MappingCharFilter.

func (MappingCharFilter) MarshalJSON

func (mcf MappingCharFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for MappingCharFilter.

type MicrosoftLanguageStemmingTokenizer

type MicrosoftLanguageStemmingTokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// MaxTokenLength - The maximum token length. Tokens longer than the maximum length are split. Maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the max token length set. Default is 255.
	MaxTokenLength *int32 `json:"maxTokenLength,omitempty"`
	// IsSearchTokenizer - A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
	IsSearchTokenizer *bool `json:"isSearchTokenizer,omitempty"`
	// Language - The language to use. The default is English. Possible values include: 'Arabic', 'Bangla', 'Bulgarian', 'Catalan', 'Croatian', 'Czech', 'Danish', 'Dutch', 'English', 'Estonian', 'Finnish', 'French', 'German', 'Greek', 'Gujarati', 'Hebrew', 'Hindi', 'Hungarian', 'Icelandic', 'Indonesian', 'Italian', 'Kannada', 'Latvian', 'Lithuanian', 'Malay', 'Malayalam', 'Marathi', 'NorwegianBokmaal', 'Polish', 'Portuguese', 'PortugueseBrazilian', 'Punjabi', 'Romanian', 'Russian', 'SerbianCyrillic', 'SerbianLatin', 'Slovak', 'Slovenian', 'Spanish', 'Swedish', 'Tamil', 'Telugu', 'Turkish', 'Ukrainian', 'Urdu'
	Language MicrosoftStemmingTokenizerLanguage `json:"language,omitempty"`
}

MicrosoftLanguageStemmingTokenizer divides text using language-specific rules and reduces words to their base forms.

func (MicrosoftLanguageStemmingTokenizer) AsBasicTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsClassicTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsEdgeNGramTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsKeywordTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsKeywordTokenizerV2

func (mlst MicrosoftLanguageStemmingTokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsMicrosoftLanguageStemmingTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsMicrosoftLanguageTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsNGramTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsPathHierarchyTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsPathHierarchyTokenizerV2

func (mlst MicrosoftLanguageStemmingTokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsPatternTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsStandardTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsStandardTokenizerV2

func (mlst MicrosoftLanguageStemmingTokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) AsUaxURLEmailTokenizer

func (mlst MicrosoftLanguageStemmingTokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for MicrosoftLanguageStemmingTokenizer.

func (MicrosoftLanguageStemmingTokenizer) MarshalJSON

func (mlst MicrosoftLanguageStemmingTokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for MicrosoftLanguageStemmingTokenizer.

type MicrosoftLanguageTokenizer

type MicrosoftLanguageTokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// MaxTokenLength - The maximum token length. Tokens longer than the maximum length are split. Maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the max token length set. Default is 255.
	MaxTokenLength *int32 `json:"maxTokenLength,omitempty"`
	// IsSearchTokenizer - A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
	IsSearchTokenizer *bool `json:"isSearchTokenizer,omitempty"`
	// Language - The language to use. The default is English. Possible values include: 'MicrosoftTokenizerLanguageBangla', 'MicrosoftTokenizerLanguageBulgarian', 'MicrosoftTokenizerLanguageCatalan', 'MicrosoftTokenizerLanguageChineseSimplified', 'MicrosoftTokenizerLanguageChineseTraditional', 'MicrosoftTokenizerLanguageCroatian', 'MicrosoftTokenizerLanguageCzech', 'MicrosoftTokenizerLanguageDanish', 'MicrosoftTokenizerLanguageDutch', 'MicrosoftTokenizerLanguageEnglish', 'MicrosoftTokenizerLanguageFrench', 'MicrosoftTokenizerLanguageGerman', 'MicrosoftTokenizerLanguageGreek', 'MicrosoftTokenizerLanguageGujarati', 'MicrosoftTokenizerLanguageHindi', 'MicrosoftTokenizerLanguageIcelandic', 'MicrosoftTokenizerLanguageIndonesian', 'MicrosoftTokenizerLanguageItalian', 'MicrosoftTokenizerLanguageJapanese', 'MicrosoftTokenizerLanguageKannada', 'MicrosoftTokenizerLanguageKorean', 'MicrosoftTokenizerLanguageMalay', 'MicrosoftTokenizerLanguageMalayalam', 'MicrosoftTokenizerLanguageMarathi', 'MicrosoftTokenizerLanguageNorwegianBokmaal', 'MicrosoftTokenizerLanguagePolish', 'MicrosoftTokenizerLanguagePortuguese', 'MicrosoftTokenizerLanguagePortugueseBrazilian', 'MicrosoftTokenizerLanguagePunjabi', 'MicrosoftTokenizerLanguageRomanian', 'MicrosoftTokenizerLanguageRussian', 'MicrosoftTokenizerLanguageSerbianCyrillic', 'MicrosoftTokenizerLanguageSerbianLatin', 'MicrosoftTokenizerLanguageSlovenian', 'MicrosoftTokenizerLanguageSpanish', 'MicrosoftTokenizerLanguageSwedish', 'MicrosoftTokenizerLanguageTamil', 'MicrosoftTokenizerLanguageTelugu', 'MicrosoftTokenizerLanguageThai', 'MicrosoftTokenizerLanguageUkrainian', 'MicrosoftTokenizerLanguageUrdu', 'MicrosoftTokenizerLanguageVietnamese'
	Language MicrosoftTokenizerLanguage `json:"language,omitempty"`
}

MicrosoftLanguageTokenizer divides text using language-specific rules.

func (MicrosoftLanguageTokenizer) AsBasicTokenizer

func (mlt MicrosoftLanguageTokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsClassicTokenizer

func (mlt MicrosoftLanguageTokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsEdgeNGramTokenizer

func (mlt MicrosoftLanguageTokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsKeywordTokenizer

func (mlt MicrosoftLanguageTokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsKeywordTokenizerV2

func (mlt MicrosoftLanguageTokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsMicrosoftLanguageStemmingTokenizer

func (mlt MicrosoftLanguageTokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsMicrosoftLanguageTokenizer

func (mlt MicrosoftLanguageTokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsNGramTokenizer

func (mlt MicrosoftLanguageTokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsPathHierarchyTokenizer

func (mlt MicrosoftLanguageTokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsPathHierarchyTokenizerV2

func (mlt MicrosoftLanguageTokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsPatternTokenizer

func (mlt MicrosoftLanguageTokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsStandardTokenizer

func (mlt MicrosoftLanguageTokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsStandardTokenizerV2

func (mlt MicrosoftLanguageTokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsTokenizer

func (mlt MicrosoftLanguageTokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) AsUaxURLEmailTokenizer

func (mlt MicrosoftLanguageTokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for MicrosoftLanguageTokenizer.

func (MicrosoftLanguageTokenizer) MarshalJSON

func (mlt MicrosoftLanguageTokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for MicrosoftLanguageTokenizer.
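The many `AsXxx` methods above follow one pattern: every concrete tokenizer implements each conversion method, returning `(value, true)` only for its own type and `(nil, false)` for every other type. A minimal self-contained sketch of that pattern, using local stand-in types rather than the SDK's:

```go
package main

import "fmt"

// basicTokenizer plays the role of the SDK's BasicTokenizer interface,
// cut down to two conversion methods for brevity.
type basicTokenizer interface {
	AsMicrosoftLanguageTokenizer() (*microsoftLanguageTokenizer, bool)
	AsStandardTokenizer() (*standardTokenizer, bool)
}

// Local stand-ins for two concrete tokenizer types.
type microsoftLanguageTokenizer struct{ Language string }
type standardTokenizer struct{}

// Each concrete type answers true only for its own AsXxx method.
func (t microsoftLanguageTokenizer) AsMicrosoftLanguageTokenizer() (*microsoftLanguageTokenizer, bool) {
	return &t, true
}
func (t microsoftLanguageTokenizer) AsStandardTokenizer() (*standardTokenizer, bool) {
	return nil, false
}
func (t standardTokenizer) AsMicrosoftLanguageTokenizer() (*microsoftLanguageTokenizer, bool) {
	return nil, false
}
func (t standardTokenizer) AsStandardTokenizer() (*standardTokenizer, bool) {
	return &t, true
}

// describe shows the intended call site: probe with AsXxx and branch on
// the returned bool instead of using a raw type assertion.
func describe(t basicTokenizer) string {
	if mlt, ok := t.AsMicrosoftLanguageTokenizer(); ok {
		return "microsoft language tokenizer: " + mlt.Language
	}
	return "some other tokenizer"
}

func main() {
	fmt.Println(describe(microsoftLanguageTokenizer{Language: "french"}))
	fmt.Println(describe(standardTokenizer{}))
}
```

This is why a type's own method (for example `AsMicrosoftLanguageTokenizer` on `MicrosoftLanguageTokenizer`) is the only one that returns a non-nil pointer.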

type MicrosoftStemmingTokenizerLanguage

type MicrosoftStemmingTokenizerLanguage string

MicrosoftStemmingTokenizerLanguage enumerates the values for microsoft stemming tokenizer language.

const (
	// Arabic ...
	Arabic MicrosoftStemmingTokenizerLanguage = "arabic"
	// Bangla ...
	Bangla MicrosoftStemmingTokenizerLanguage = "bangla"
	// Bulgarian ...
	Bulgarian MicrosoftStemmingTokenizerLanguage = "bulgarian"
	// Catalan ...
	Catalan MicrosoftStemmingTokenizerLanguage = "catalan"
	// Croatian ...
	Croatian MicrosoftStemmingTokenizerLanguage = "croatian"
	// Czech ...
	Czech MicrosoftStemmingTokenizerLanguage = "czech"
	// Danish ...
	Danish MicrosoftStemmingTokenizerLanguage = "danish"
	// Dutch ...
	Dutch MicrosoftStemmingTokenizerLanguage = "dutch"
	// English ...
	English MicrosoftStemmingTokenizerLanguage = "english"
	// Estonian ...
	Estonian MicrosoftStemmingTokenizerLanguage = "estonian"
	// Finnish ...
	Finnish MicrosoftStemmingTokenizerLanguage = "finnish"
	// French ...
	French MicrosoftStemmingTokenizerLanguage = "french"
	// German ...
	German MicrosoftStemmingTokenizerLanguage = "german"
	// Greek ...
	Greek MicrosoftStemmingTokenizerLanguage = "greek"
	// Gujarati ...
	Gujarati MicrosoftStemmingTokenizerLanguage = "gujarati"
	// Hebrew ...
	Hebrew MicrosoftStemmingTokenizerLanguage = "hebrew"
	// Hindi ...
	Hindi MicrosoftStemmingTokenizerLanguage = "hindi"
	// Hungarian ...
	Hungarian MicrosoftStemmingTokenizerLanguage = "hungarian"
	// Icelandic ...
	Icelandic MicrosoftStemmingTokenizerLanguage = "icelandic"
	// Indonesian ...
	Indonesian MicrosoftStemmingTokenizerLanguage = "indonesian"
	// Italian ...
	Italian MicrosoftStemmingTokenizerLanguage = "italian"
	// Kannada ...
	Kannada MicrosoftStemmingTokenizerLanguage = "kannada"
	// Latvian ...
	Latvian MicrosoftStemmingTokenizerLanguage = "latvian"
	// Lithuanian ...
	Lithuanian MicrosoftStemmingTokenizerLanguage = "lithuanian"
	// Malay ...
	Malay MicrosoftStemmingTokenizerLanguage = "malay"
	// Malayalam ...
	Malayalam MicrosoftStemmingTokenizerLanguage = "malayalam"
	// Marathi ...
	Marathi MicrosoftStemmingTokenizerLanguage = "marathi"
	// NorwegianBokmaal ...
	NorwegianBokmaal MicrosoftStemmingTokenizerLanguage = "norwegianBokmaal"
	// Polish ...
	Polish MicrosoftStemmingTokenizerLanguage = "polish"
	// Portuguese ...
	Portuguese MicrosoftStemmingTokenizerLanguage = "portuguese"
	// PortugueseBrazilian ...
	PortugueseBrazilian MicrosoftStemmingTokenizerLanguage = "portugueseBrazilian"
	// Punjabi ...
	Punjabi MicrosoftStemmingTokenizerLanguage = "punjabi"
	// Romanian ...
	Romanian MicrosoftStemmingTokenizerLanguage = "romanian"
	// Russian ...
	Russian MicrosoftStemmingTokenizerLanguage = "russian"
	// SerbianCyrillic ...
	SerbianCyrillic MicrosoftStemmingTokenizerLanguage = "serbianCyrillic"
	// SerbianLatin ...
	SerbianLatin MicrosoftStemmingTokenizerLanguage = "serbianLatin"
	// Slovak ...
	Slovak MicrosoftStemmingTokenizerLanguage = "slovak"
	// Slovenian ...
	Slovenian MicrosoftStemmingTokenizerLanguage = "slovenian"
	// Spanish ...
	Spanish MicrosoftStemmingTokenizerLanguage = "spanish"
	// Swedish ...
	Swedish MicrosoftStemmingTokenizerLanguage = "swedish"
	// Tamil ...
	Tamil MicrosoftStemmingTokenizerLanguage = "tamil"
	// Telugu ...
	Telugu MicrosoftStemmingTokenizerLanguage = "telugu"
	// Turkish ...
	Turkish MicrosoftStemmingTokenizerLanguage = "turkish"
	// Ukrainian ...
	Ukrainian MicrosoftStemmingTokenizerLanguage = "ukrainian"
	// Urdu ...
	Urdu MicrosoftStemmingTokenizerLanguage = "urdu"
)

type MicrosoftTokenizerLanguage

type MicrosoftTokenizerLanguage string

MicrosoftTokenizerLanguage enumerates the values for microsoft tokenizer language.

const (
	// MicrosoftTokenizerLanguageBangla ...
	MicrosoftTokenizerLanguageBangla MicrosoftTokenizerLanguage = "bangla"
	// MicrosoftTokenizerLanguageBulgarian ...
	MicrosoftTokenizerLanguageBulgarian MicrosoftTokenizerLanguage = "bulgarian"
	// MicrosoftTokenizerLanguageCatalan ...
	MicrosoftTokenizerLanguageCatalan MicrosoftTokenizerLanguage = "catalan"
	// MicrosoftTokenizerLanguageChineseSimplified ...
	MicrosoftTokenizerLanguageChineseSimplified MicrosoftTokenizerLanguage = "chineseSimplified"
	// MicrosoftTokenizerLanguageChineseTraditional ...
	MicrosoftTokenizerLanguageChineseTraditional MicrosoftTokenizerLanguage = "chineseTraditional"
	// MicrosoftTokenizerLanguageCroatian ...
	MicrosoftTokenizerLanguageCroatian MicrosoftTokenizerLanguage = "croatian"
	// MicrosoftTokenizerLanguageCzech ...
	MicrosoftTokenizerLanguageCzech MicrosoftTokenizerLanguage = "czech"
	// MicrosoftTokenizerLanguageDanish ...
	MicrosoftTokenizerLanguageDanish MicrosoftTokenizerLanguage = "danish"
	// MicrosoftTokenizerLanguageDutch ...
	MicrosoftTokenizerLanguageDutch MicrosoftTokenizerLanguage = "dutch"
	// MicrosoftTokenizerLanguageEnglish ...
	MicrosoftTokenizerLanguageEnglish MicrosoftTokenizerLanguage = "english"
	// MicrosoftTokenizerLanguageFrench ...
	MicrosoftTokenizerLanguageFrench MicrosoftTokenizerLanguage = "french"
	// MicrosoftTokenizerLanguageGerman ...
	MicrosoftTokenizerLanguageGerman MicrosoftTokenizerLanguage = "german"
	// MicrosoftTokenizerLanguageGreek ...
	MicrosoftTokenizerLanguageGreek MicrosoftTokenizerLanguage = "greek"
	// MicrosoftTokenizerLanguageGujarati ...
	MicrosoftTokenizerLanguageGujarati MicrosoftTokenizerLanguage = "gujarati"
	// MicrosoftTokenizerLanguageHindi ...
	MicrosoftTokenizerLanguageHindi MicrosoftTokenizerLanguage = "hindi"
	// MicrosoftTokenizerLanguageIcelandic ...
	MicrosoftTokenizerLanguageIcelandic MicrosoftTokenizerLanguage = "icelandic"
	// MicrosoftTokenizerLanguageIndonesian ...
	MicrosoftTokenizerLanguageIndonesian MicrosoftTokenizerLanguage = "indonesian"
	// MicrosoftTokenizerLanguageItalian ...
	MicrosoftTokenizerLanguageItalian MicrosoftTokenizerLanguage = "italian"
	// MicrosoftTokenizerLanguageJapanese ...
	MicrosoftTokenizerLanguageJapanese MicrosoftTokenizerLanguage = "japanese"
	// MicrosoftTokenizerLanguageKannada ...
	MicrosoftTokenizerLanguageKannada MicrosoftTokenizerLanguage = "kannada"
	// MicrosoftTokenizerLanguageKorean ...
	MicrosoftTokenizerLanguageKorean MicrosoftTokenizerLanguage = "korean"
	// MicrosoftTokenizerLanguageMalay ...
	MicrosoftTokenizerLanguageMalay MicrosoftTokenizerLanguage = "malay"
	// MicrosoftTokenizerLanguageMalayalam ...
	MicrosoftTokenizerLanguageMalayalam MicrosoftTokenizerLanguage = "malayalam"
	// MicrosoftTokenizerLanguageMarathi ...
	MicrosoftTokenizerLanguageMarathi MicrosoftTokenizerLanguage = "marathi"
	// MicrosoftTokenizerLanguageNorwegianBokmaal ...
	MicrosoftTokenizerLanguageNorwegianBokmaal MicrosoftTokenizerLanguage = "norwegianBokmaal"
	// MicrosoftTokenizerLanguagePolish ...
	MicrosoftTokenizerLanguagePolish MicrosoftTokenizerLanguage = "polish"
	// MicrosoftTokenizerLanguagePortuguese ...
	MicrosoftTokenizerLanguagePortuguese MicrosoftTokenizerLanguage = "portuguese"
	// MicrosoftTokenizerLanguagePortugueseBrazilian ...
	MicrosoftTokenizerLanguagePortugueseBrazilian MicrosoftTokenizerLanguage = "portugueseBrazilian"
	// MicrosoftTokenizerLanguagePunjabi ...
	MicrosoftTokenizerLanguagePunjabi MicrosoftTokenizerLanguage = "punjabi"
	// MicrosoftTokenizerLanguageRomanian ...
	MicrosoftTokenizerLanguageRomanian MicrosoftTokenizerLanguage = "romanian"
	// MicrosoftTokenizerLanguageRussian ...
	MicrosoftTokenizerLanguageRussian MicrosoftTokenizerLanguage = "russian"
	// MicrosoftTokenizerLanguageSerbianCyrillic ...
	MicrosoftTokenizerLanguageSerbianCyrillic MicrosoftTokenizerLanguage = "serbianCyrillic"
	// MicrosoftTokenizerLanguageSerbianLatin ...
	MicrosoftTokenizerLanguageSerbianLatin MicrosoftTokenizerLanguage = "serbianLatin"
	// MicrosoftTokenizerLanguageSlovenian ...
	MicrosoftTokenizerLanguageSlovenian MicrosoftTokenizerLanguage = "slovenian"
	// MicrosoftTokenizerLanguageSpanish ...
	MicrosoftTokenizerLanguageSpanish MicrosoftTokenizerLanguage = "spanish"
	// MicrosoftTokenizerLanguageSwedish ...
	MicrosoftTokenizerLanguageSwedish MicrosoftTokenizerLanguage = "swedish"
	// MicrosoftTokenizerLanguageTamil ...
	MicrosoftTokenizerLanguageTamil MicrosoftTokenizerLanguage = "tamil"
	// MicrosoftTokenizerLanguageTelugu ...
	MicrosoftTokenizerLanguageTelugu MicrosoftTokenizerLanguage = "telugu"
	// MicrosoftTokenizerLanguageThai ...
	MicrosoftTokenizerLanguageThai MicrosoftTokenizerLanguage = "thai"
	// MicrosoftTokenizerLanguageUkrainian ...
	MicrosoftTokenizerLanguageUkrainian MicrosoftTokenizerLanguage = "ukrainian"
	// MicrosoftTokenizerLanguageUrdu ...
	MicrosoftTokenizerLanguageUrdu MicrosoftTokenizerLanguage = "urdu"
	// MicrosoftTokenizerLanguageVietnamese ...
	MicrosoftTokenizerLanguageVietnamese MicrosoftTokenizerLanguage = "vietnamese"
)

type Mode

type Mode string

Mode enumerates the values for mode.

const (
	// All ...
	All Mode = "all"
	// Any ...
	Any Mode = "any"
)

type NGramTokenFilter

type NGramTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// MinGram - The minimum n-gram length. Default is 1. Must be less than the value of maxGram.
	MinGram *int32 `json:"minGram,omitempty"`
	// MaxGram - The maximum n-gram length. Default is 2.
	MaxGram *int32 `json:"maxGram,omitempty"`
}

NGramTokenFilter generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.
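The documented constraints on `MinGram` and `MaxGram` (defaults 1 and 2, with minGram strictly less than maxGram) can be checked client-side before sending a definition to the service. The helper below is hypothetical, not part of the SDK:

```go
package main

import (
	"errors"
	"fmt"
)

// validateNGramRange resolves the documented defaults for NGramTokenFilter
// (minGram 1, maxGram 2) and enforces minGram < maxGram. Nil pointers mean
// "use the default", matching the SDK's omitempty pointer fields.
// This is an illustrative helper, not an exported SDK function.
func validateNGramRange(minGram, maxGram *int32) (int32, int32, error) {
	lo, hi := int32(1), int32(2) // documented defaults
	if minGram != nil {
		lo = *minGram
	}
	if maxGram != nil {
		hi = *maxGram
	}
	if lo >= hi {
		return 0, 0, errors.New("minGram must be less than maxGram")
	}
	return lo, hi, nil
}

func main() {
	min, max := int32(2), int32(4)
	lo, hi, err := validateNGramRange(&min, &max)
	fmt.Println(lo, hi, err)
}
```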

func (NGramTokenFilter) AsASCIIFoldingTokenFilter

func (ngtf NGramTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsBasicTokenFilter

func (ngtf NGramTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsCjkBigramTokenFilter

func (ngtf NGramTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsCommonGramTokenFilter

func (ngtf NGramTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsDictionaryDecompounderTokenFilter

func (ngtf NGramTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsEdgeNGramTokenFilter

func (ngtf NGramTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsEdgeNGramTokenFilterV2

func (ngtf NGramTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsElisionTokenFilter

func (ngtf NGramTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsKeepTokenFilter

func (ngtf NGramTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsKeywordMarkerTokenFilter

func (ngtf NGramTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsLengthTokenFilter

func (ngtf NGramTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsLimitTokenFilter

func (ngtf NGramTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsNGramTokenFilter

func (ngtf NGramTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsNGramTokenFilterV2

func (ngtf NGramTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsPatternCaptureTokenFilter

func (ngtf NGramTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsPatternReplaceTokenFilter

func (ngtf NGramTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsPhoneticTokenFilter

func (ngtf NGramTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsShingleTokenFilter

func (ngtf NGramTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsSnowballTokenFilter

func (ngtf NGramTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsStemmerOverrideTokenFilter

func (ngtf NGramTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsStemmerTokenFilter

func (ngtf NGramTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsStopwordsTokenFilter

func (ngtf NGramTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsSynonymTokenFilter

func (ngtf NGramTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsTokenFilter

func (ngtf NGramTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsTruncateTokenFilter

func (ngtf NGramTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsUniqueTokenFilter

func (ngtf NGramTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) AsWordDelimiterTokenFilter

func (ngtf NGramTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for NGramTokenFilter.

func (NGramTokenFilter) MarshalJSON

func (ngtf NGramTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for NGramTokenFilter.

type NGramTokenFilterV2

type NGramTokenFilterV2 struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// MinGram - The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram.
	MinGram *int32 `json:"minGram,omitempty"`
	// MaxGram - The maximum n-gram length. Default is 2. Maximum is 300.
	MaxGram *int32 `json:"maxGram,omitempty"`
}

NGramTokenFilterV2 generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.

func (NGramTokenFilterV2) AsASCIIFoldingTokenFilter

func (ngtfv NGramTokenFilterV2) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsBasicTokenFilter

func (ngtfv NGramTokenFilterV2) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsCjkBigramTokenFilter

func (ngtfv NGramTokenFilterV2) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsCommonGramTokenFilter

func (ngtfv NGramTokenFilterV2) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsDictionaryDecompounderTokenFilter

func (ngtfv NGramTokenFilterV2) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsEdgeNGramTokenFilter

func (ngtfv NGramTokenFilterV2) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsEdgeNGramTokenFilterV2

func (ngtfv NGramTokenFilterV2) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsElisionTokenFilter

func (ngtfv NGramTokenFilterV2) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsKeepTokenFilter

func (ngtfv NGramTokenFilterV2) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsKeywordMarkerTokenFilter

func (ngtfv NGramTokenFilterV2) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsLengthTokenFilter

func (ngtfv NGramTokenFilterV2) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsLimitTokenFilter

func (ngtfv NGramTokenFilterV2) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsNGramTokenFilter

func (ngtfv NGramTokenFilterV2) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsNGramTokenFilterV2

func (ngtfv NGramTokenFilterV2) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsPatternCaptureTokenFilter

func (ngtfv NGramTokenFilterV2) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsPatternReplaceTokenFilter

func (ngtfv NGramTokenFilterV2) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsPhoneticTokenFilter

func (ngtfv NGramTokenFilterV2) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsShingleTokenFilter

func (ngtfv NGramTokenFilterV2) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsSnowballTokenFilter

func (ngtfv NGramTokenFilterV2) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsStemmerOverrideTokenFilter

func (ngtfv NGramTokenFilterV2) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsStemmerTokenFilter

func (ngtfv NGramTokenFilterV2) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsStopwordsTokenFilter

func (ngtfv NGramTokenFilterV2) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsSynonymTokenFilter

func (ngtfv NGramTokenFilterV2) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsTokenFilter

func (ngtfv NGramTokenFilterV2) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsTruncateTokenFilter

func (ngtfv NGramTokenFilterV2) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsUniqueTokenFilter

func (ngtfv NGramTokenFilterV2) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) AsWordDelimiterTokenFilter

func (ngtfv NGramTokenFilterV2) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for NGramTokenFilterV2.

func (NGramTokenFilterV2) MarshalJSON

func (ngtfv NGramTokenFilterV2) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for NGramTokenFilterV2.
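The As* methods above follow the generated discriminated-union pattern: every concrete filter implements each conversion method, but only the one matching its own type returns a non-nil pointer and true. A standalone sketch of the pattern with local stand-in types (not the SDK's):

```go
package main

import "fmt"

// basicTokenFilter stands in for the SDK's BasicTokenFilter interface.
type basicTokenFilter interface {
	AsNGramTokenFilterV2() (*nGramFilter, bool)
	AsShingleTokenFilter() (*shingleFilter, bool)
}

type nGramFilter struct{ MinGram, MaxGram int32 }
type shingleFilter struct{}

// Only the matching conversion succeeds; every other one returns (nil, false).
func (f nGramFilter) AsNGramTokenFilterV2() (*nGramFilter, bool) { return &f, true }
func (f nGramFilter) AsShingleTokenFilter() (*shingleFilter, bool) {
	return nil, false
}

func (f shingleFilter) AsNGramTokenFilterV2() (*nGramFilter, bool) {
	return nil, false
}
func (f shingleFilter) AsShingleTokenFilter() (*shingleFilter, bool) { return &f, true }

func main() {
	var tf basicTokenFilter = nGramFilter{MinGram: 2, MaxGram: 3}
	if ng, ok := tf.AsNGramTokenFilterV2(); ok {
		fmt.Println("n-gram filter:", ng.MinGram, ng.MaxGram)
	}
	if _, ok := tf.AsShingleTokenFilter(); !ok {
		fmt.Println("not a shingle filter")
	}
}
```

This lets callers recover the concrete type from a BasicTokenFilter value without a raw type assertion.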

type NGramTokenizer

type NGramTokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// MinGram - The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram.
	MinGram *int32 `json:"minGram,omitempty"`
	// MaxGram - The maximum n-gram length. Default is 2. Maximum is 300.
	MaxGram *int32 `json:"maxGram,omitempty"`
	// TokenChars - Character classes to keep in the tokens.
	TokenChars *[]TokenCharacterKind `json:"tokenChars,omitempty"`
}

NGramTokenizer tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

func (NGramTokenizer) AsBasicTokenizer

func (ngt NGramTokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsClassicTokenizer

func (ngt NGramTokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsEdgeNGramTokenizer

func (ngt NGramTokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsKeywordTokenizer

func (ngt NGramTokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsKeywordTokenizerV2

func (ngt NGramTokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsMicrosoftLanguageStemmingTokenizer

func (ngt NGramTokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsMicrosoftLanguageTokenizer

func (ngt NGramTokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsNGramTokenizer

func (ngt NGramTokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsPathHierarchyTokenizer

func (ngt NGramTokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsPathHierarchyTokenizerV2

func (ngt NGramTokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsPatternTokenizer

func (ngt NGramTokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsStandardTokenizer

func (ngt NGramTokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsStandardTokenizerV2

func (ngt NGramTokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsTokenizer

func (ngt NGramTokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) AsUaxURLEmailTokenizer

func (ngt NGramTokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for NGramTokenizer.

func (NGramTokenizer) MarshalJSON

func (ngt NGramTokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for NGramTokenizer.

type OdataType

type OdataType string

OdataType enumerates the values for odata type.

const (
	// OdataTypeAnalyzer ...
	OdataTypeAnalyzer OdataType = "Analyzer"
	// OdataTypeMicrosoftAzureSearchCustomAnalyzer ...
	OdataTypeMicrosoftAzureSearchCustomAnalyzer OdataType = "#Microsoft.Azure.Search.CustomAnalyzer"
	// OdataTypeMicrosoftAzureSearchPatternAnalyzer ...
	OdataTypeMicrosoftAzureSearchPatternAnalyzer OdataType = "#Microsoft.Azure.Search.PatternAnalyzer"
	// OdataTypeMicrosoftAzureSearchStandardAnalyzer ...
	OdataTypeMicrosoftAzureSearchStandardAnalyzer OdataType = "#Microsoft.Azure.Search.StandardAnalyzer"
	// OdataTypeMicrosoftAzureSearchStopAnalyzer ...
	OdataTypeMicrosoftAzureSearchStopAnalyzer OdataType = "#Microsoft.Azure.Search.StopAnalyzer"
)

type OdataTypeBasicCharFilter

type OdataTypeBasicCharFilter string

OdataTypeBasicCharFilter enumerates the values for odata type basic char filter.

const (
	// OdataTypeCharFilter ...
	OdataTypeCharFilter OdataTypeBasicCharFilter = "CharFilter"
	// OdataTypeMicrosoftAzureSearchMappingCharFilter ...
	OdataTypeMicrosoftAzureSearchMappingCharFilter OdataTypeBasicCharFilter = "#Microsoft.Azure.Search.MappingCharFilter"
	// OdataTypeMicrosoftAzureSearchPatternReplaceCharFilter ...
	OdataTypeMicrosoftAzureSearchPatternReplaceCharFilter OdataTypeBasicCharFilter = "#Microsoft.Azure.Search.PatternReplaceCharFilter"
)

type OdataTypeBasicDataChangeDetectionPolicy

type OdataTypeBasicDataChangeDetectionPolicy string

OdataTypeBasicDataChangeDetectionPolicy enumerates the values for odata type basic data change detection policy.

const (
	// OdataTypeDataChangeDetectionPolicy ...
	OdataTypeDataChangeDetectionPolicy OdataTypeBasicDataChangeDetectionPolicy = "DataChangeDetectionPolicy"
	// OdataTypeMicrosoftAzureSearchHighWaterMarkChangeDetectionPolicy ...
	OdataTypeMicrosoftAzureSearchHighWaterMarkChangeDetectionPolicy OdataTypeBasicDataChangeDetectionPolicy = "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy"
	// OdataTypeMicrosoftAzureSearchSQLIntegratedChangeTrackingPolicy ...
	OdataTypeMicrosoftAzureSearchSQLIntegratedChangeTrackingPolicy OdataTypeBasicDataChangeDetectionPolicy = "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
)

type OdataTypeBasicDataDeletionDetectionPolicy

type OdataTypeBasicDataDeletionDetectionPolicy string

OdataTypeBasicDataDeletionDetectionPolicy enumerates the values for odata type basic data deletion detection policy.

const (
	// OdataTypeDataDeletionDetectionPolicy ...
	OdataTypeDataDeletionDetectionPolicy OdataTypeBasicDataDeletionDetectionPolicy = "DataDeletionDetectionPolicy"
	// OdataTypeMicrosoftAzureSearchSoftDeleteColumnDeletionDetectionPolicy ...
	OdataTypeMicrosoftAzureSearchSoftDeleteColumnDeletionDetectionPolicy OdataTypeBasicDataDeletionDetectionPolicy = "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy"
)

type OdataTypeBasicTokenFilter

type OdataTypeBasicTokenFilter string

OdataTypeBasicTokenFilter enumerates the values for odata type basic token filter.

const (
	// OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter ...
	OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.AsciiFoldingTokenFilter"
	// OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter ...
	OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.CjkBigramTokenFilter"
	// OdataTypeMicrosoftAzureSearchCommonGramTokenFilter ...
	OdataTypeMicrosoftAzureSearchCommonGramTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.CommonGramTokenFilter"
	// OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter ...
	OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter"
	// OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter ...
	OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.EdgeNGramTokenFilter"
	// OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2 ...
	OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2 OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.EdgeNGramTokenFilterV2"
	// OdataTypeMicrosoftAzureSearchElisionTokenFilter ...
	OdataTypeMicrosoftAzureSearchElisionTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.ElisionTokenFilter"
	// OdataTypeMicrosoftAzureSearchKeepTokenFilter ...
	OdataTypeMicrosoftAzureSearchKeepTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.KeepTokenFilter"
	// OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter ...
	OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.KeywordMarkerTokenFilter"
	// OdataTypeMicrosoftAzureSearchLengthTokenFilter ...
	OdataTypeMicrosoftAzureSearchLengthTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.LengthTokenFilter"
	// OdataTypeMicrosoftAzureSearchLimitTokenFilter ...
	OdataTypeMicrosoftAzureSearchLimitTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.LimitTokenFilter"
	// OdataTypeMicrosoftAzureSearchNGramTokenFilter ...
	OdataTypeMicrosoftAzureSearchNGramTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.NGramTokenFilter"
	// OdataTypeMicrosoftAzureSearchNGramTokenFilterV2 ...
	OdataTypeMicrosoftAzureSearchNGramTokenFilterV2 OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.NGramTokenFilterV2"
	// OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter ...
	OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.PatternCaptureTokenFilter"
	// OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter ...
	OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.PatternReplaceTokenFilter"
	// OdataTypeMicrosoftAzureSearchPhoneticTokenFilter ...
	OdataTypeMicrosoftAzureSearchPhoneticTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.PhoneticTokenFilter"
	// OdataTypeMicrosoftAzureSearchShingleTokenFilter ...
	OdataTypeMicrosoftAzureSearchShingleTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.ShingleTokenFilter"
	// OdataTypeMicrosoftAzureSearchSnowballTokenFilter ...
	OdataTypeMicrosoftAzureSearchSnowballTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.SnowballTokenFilter"
	// OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter ...
	OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.StemmerOverrideTokenFilter"
	// OdataTypeMicrosoftAzureSearchStemmerTokenFilter ...
	OdataTypeMicrosoftAzureSearchStemmerTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.StemmerTokenFilter"
	// OdataTypeMicrosoftAzureSearchStopwordsTokenFilter ...
	OdataTypeMicrosoftAzureSearchStopwordsTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.StopwordsTokenFilter"
	// OdataTypeMicrosoftAzureSearchSynonymTokenFilter ...
	OdataTypeMicrosoftAzureSearchSynonymTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.SynonymTokenFilter"
	// OdataTypeMicrosoftAzureSearchTruncateTokenFilter ...
	OdataTypeMicrosoftAzureSearchTruncateTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.TruncateTokenFilter"
	// OdataTypeMicrosoftAzureSearchUniqueTokenFilter ...
	OdataTypeMicrosoftAzureSearchUniqueTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.UniqueTokenFilter"
	// OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter ...
	OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter OdataTypeBasicTokenFilter = "#Microsoft.Azure.Search.WordDelimiterTokenFilter"
	// OdataTypeTokenFilter ...
	OdataTypeTokenFilter OdataTypeBasicTokenFilter = "TokenFilter"
)

type OdataTypeBasicTokenizer

type OdataTypeBasicTokenizer string

OdataTypeBasicTokenizer enumerates the values for odata type basic tokenizer.

const (
	// OdataTypeMicrosoftAzureSearchClassicTokenizer ...
	OdataTypeMicrosoftAzureSearchClassicTokenizer OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.ClassicTokenizer"
	// OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer ...
	OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.EdgeNGramTokenizer"
	// OdataTypeMicrosoftAzureSearchKeywordTokenizer ...
	OdataTypeMicrosoftAzureSearchKeywordTokenizer OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.KeywordTokenizer"
	// OdataTypeMicrosoftAzureSearchKeywordTokenizerV2 ...
	OdataTypeMicrosoftAzureSearchKeywordTokenizerV2 OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.KeywordTokenizerV2"
	// OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer ...
	OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer"
	// OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer ...
	OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.MicrosoftLanguageTokenizer"
	// OdataTypeMicrosoftAzureSearchNGramTokenizer ...
	OdataTypeMicrosoftAzureSearchNGramTokenizer OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.NGramTokenizer"
	// OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer ...
	OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.PathHierarchyTokenizer"
	// OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2 ...
	OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2 OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.PathHierarchyTokenizerV2"
	// OdataTypeMicrosoftAzureSearchPatternTokenizer ...
	OdataTypeMicrosoftAzureSearchPatternTokenizer OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.PatternTokenizer"
	// OdataTypeMicrosoftAzureSearchStandardTokenizer ...
	OdataTypeMicrosoftAzureSearchStandardTokenizer OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.StandardTokenizer"
	// OdataTypeMicrosoftAzureSearchStandardTokenizerV2 ...
	OdataTypeMicrosoftAzureSearchStandardTokenizerV2 OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.StandardTokenizerV2"
	// OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer ...
	OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer OdataTypeBasicTokenizer = "#Microsoft.Azure.Search.UaxUrlEmailTokenizer"
	// OdataTypeTokenizer ...
	OdataTypeTokenizer OdataTypeBasicTokenizer = "Tokenizer"
)

type ParametersPayload

type ParametersPayload struct {
	// Count - A value that specifies whether to fetch the total count of results. Default is false. Setting this value to true may have a performance impact. Note that the count returned is an approximation.
	Count *bool `json:"count,omitempty"`
	// Facets - The list of facet expressions to apply to the search query. Each facet expression contains a field name, optionally followed by a comma-separated list of name:value pairs.
	Facets *[]string `json:"facets,omitempty"`
	// Filter - The OData $filter expression to apply to the search query.
	Filter *string `json:"filter,omitempty"`
	// Highlight - The comma-separated list of field names to use for hit highlights. Only searchable fields can be used for hit highlighting.
	Highlight *string `json:"highlight,omitempty"`
	// HighlightPostTag - A string tag that is appended to hit highlights. Must be set with HighlightPreTag. Default is </em>.
	HighlightPostTag *string `json:"highlightPostTag,omitempty"`
	// HighlightPreTag - A string tag that is prepended to hit highlights. Must be set with HighlightPostTag. Default is <em>.
	HighlightPreTag *string `json:"highlightPreTag,omitempty"`
	// MinimumCoverage - A number between 0 and 100 indicating the percentage of the index that must be covered by a search query in order for the query to be reported as a success. This parameter can be useful for ensuring search availability even for services with only one replica. The default is 100.
	MinimumCoverage *float64 `json:"minimumCoverage,omitempty"`
	// OrderBy - The comma-separated list of OData $orderby expressions by which to sort the results. Each expression can be either a field name or a call to the geo.distance() function. Each expression can be followed by asc to indicate ascending, or desc to indicate descending. The default is ascending order. Ties are broken by the match scores of documents. If no $orderby is specified, the default sort order is descending by document match score. There can be at most 32 $orderby clauses.
	OrderBy *string `json:"orderby,omitempty"`
	// QueryType - Gets or sets a value that specifies the syntax of the search query. The default is 'simple'. Use 'full' if your query uses the Lucene query syntax. Possible values include: 'Simple', 'Full'
	QueryType QueryType `json:"queryType,omitempty"`
	// ScoringParameters - The list of parameter values to be used in scoring functions (for example, referencePointParameter) using the format name:value. For example, if the scoring profile defines a function with a parameter called 'mylocation', the parameter string would be "mylocation:-122.2,44.8" (without the quotes).
	ScoringParameters *[]string `json:"scoringParameters,omitempty"`
	// ScoringProfile - The name of a scoring profile to evaluate match scores for matching documents in order to sort the results.
	ScoringProfile *string `json:"scoringProfile,omitempty"`
	// SearchProperty - A full-text search query expression; use null or "*" to match all documents.
	SearchProperty *string `json:"search,omitempty"`
	// SearchFields - The comma-separated list of field names to include in the full-text search.
	SearchFields *string `json:"searchFields,omitempty"`
	// SearchMode - A value that specifies whether any or all of the search terms must be matched in order to count the document as a match. Possible values include: 'Any', 'All'
	SearchMode Mode `json:"searchMode,omitempty"`
	// Select - The comma-separated list of fields to retrieve. If unspecified, all fields marked as retrievable in the schema are included.
	Select *string `json:"select,omitempty"`
	// Skip - The number of search results to skip. This value cannot be greater than 100,000. If you need to scan documents in sequence, but cannot use Skip due to this limitation, consider using OrderBy on a totally-ordered key and Filter with a range query instead.
	Skip *int32 `json:"skip,omitempty"`
	// Top - The number of search results to retrieve. This can be used in conjunction with Skip to implement client-side paging of search results. If results are truncated due to server-side paging, the response will include a continuation token that can be passed to ContinueSearch to retrieve the next page of results. See DocumentSearchResponse.ContinuationToken for more information.
	Top *int32 `json:"top,omitempty"`
}

ParametersPayload contains parameters for filtering, sorting, faceting, paging, and other search query behaviors.

type PathHierarchyTokenizer

type PathHierarchyTokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// Delimiter - The delimiter character to use. Default is "/".
	Delimiter *string `json:"delimiter,omitempty"`
	// Replacement - A value that, if set, replaces the delimiter character. Default is "/".
	Replacement *string `json:"replacement,omitempty"`
	// BufferSize - The buffer size. Default is 1024.
	BufferSize *int32 `json:"bufferSize,omitempty"`
	// ReverseTokenOrder - A value indicating whether to generate tokens in reverse order. Default is false.
	ReverseTokenOrder *bool `json:"reverse,omitempty"`
	// NumberOfTokensToSkip - The number of initial tokens to skip. Default is 0.
	NumberOfTokensToSkip *int32 `json:"skip,omitempty"`
}

PathHierarchyTokenizer tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

func (PathHierarchyTokenizer) AsBasicTokenizer

func (pht PathHierarchyTokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsClassicTokenizer

func (pht PathHierarchyTokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsEdgeNGramTokenizer

func (pht PathHierarchyTokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsKeywordTokenizer

func (pht PathHierarchyTokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsKeywordTokenizerV2

func (pht PathHierarchyTokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsMicrosoftLanguageStemmingTokenizer

func (pht PathHierarchyTokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsMicrosoftLanguageTokenizer

func (pht PathHierarchyTokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsNGramTokenizer

func (pht PathHierarchyTokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsPathHierarchyTokenizer

func (pht PathHierarchyTokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsPathHierarchyTokenizerV2

func (pht PathHierarchyTokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsPatternTokenizer

func (pht PathHierarchyTokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsStandardTokenizer

func (pht PathHierarchyTokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsStandardTokenizerV2

func (pht PathHierarchyTokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsTokenizer

func (pht PathHierarchyTokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) AsUaxURLEmailTokenizer

func (pht PathHierarchyTokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizer.

func (PathHierarchyTokenizer) MarshalJSON

func (pht PathHierarchyTokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for PathHierarchyTokenizer.

type PathHierarchyTokenizerV2

type PathHierarchyTokenizerV2 struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// Delimiter - The delimiter character to use. Default is "/".
	Delimiter *string `json:"delimiter,omitempty"`
	// Replacement - A value that, if set, replaces the delimiter character. Default is "/".
	Replacement *string `json:"replacement,omitempty"`
	// MaxTokenLength - The maximum token length. Default and maximum are 300.
	MaxTokenLength *int32 `json:"maxTokenLength,omitempty"`
	// ReverseTokenOrder - A value indicating whether to generate tokens in reverse order. Default is false.
	ReverseTokenOrder *bool `json:"reverse,omitempty"`
	// NumberOfTokensToSkip - The number of initial tokens to skip. Default is 0.
	NumberOfTokensToSkip *int32 `json:"skip,omitempty"`
}

PathHierarchyTokenizerV2 tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

func (PathHierarchyTokenizerV2) AsBasicTokenizer

func (phtv PathHierarchyTokenizerV2) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsClassicTokenizer

func (phtv PathHierarchyTokenizerV2) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsEdgeNGramTokenizer

func (phtv PathHierarchyTokenizerV2) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsKeywordTokenizer

func (phtv PathHierarchyTokenizerV2) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsKeywordTokenizerV2

func (phtv PathHierarchyTokenizerV2) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsMicrosoftLanguageStemmingTokenizer

func (phtv PathHierarchyTokenizerV2) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsMicrosoftLanguageTokenizer

func (phtv PathHierarchyTokenizerV2) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsNGramTokenizer

func (phtv PathHierarchyTokenizerV2) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsPathHierarchyTokenizer

func (phtv PathHierarchyTokenizerV2) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsPathHierarchyTokenizerV2

func (phtv PathHierarchyTokenizerV2) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsPatternTokenizer

func (phtv PathHierarchyTokenizerV2) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsStandardTokenizer

func (phtv PathHierarchyTokenizerV2) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsStandardTokenizerV2

func (phtv PathHierarchyTokenizerV2) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsTokenizer

func (phtv PathHierarchyTokenizerV2) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) AsUaxURLEmailTokenizer

func (phtv PathHierarchyTokenizerV2) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for PathHierarchyTokenizerV2.

func (PathHierarchyTokenizerV2) MarshalJSON

func (phtv PathHierarchyTokenizerV2) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for PathHierarchyTokenizerV2.

type PatternAnalyzer

type PatternAnalyzer struct {
	// Name - The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeAnalyzer', 'OdataTypeMicrosoftAzureSearchCustomAnalyzer', 'OdataTypeMicrosoftAzureSearchPatternAnalyzer', 'OdataTypeMicrosoftAzureSearchStandardAnalyzer', 'OdataTypeMicrosoftAzureSearchStopAnalyzer'
	OdataType OdataType `json:"@odata.type,omitempty"`
	// LowerCaseTerms - A value indicating whether terms should be lower-cased. Default is true.
	LowerCaseTerms *bool `json:"lowercase,omitempty"`
	// Pattern - A regular expression pattern to match token separators. Default is an expression that matches one or more whitespace characters.
	Pattern *string `json:"pattern,omitempty"`
	// Flags - Regular expression flags.
	Flags *RegexFlags `json:"flags,omitempty"`
	// Stopwords - A list of stopwords.
	Stopwords *[]string `json:"stopwords,omitempty"`
}

PatternAnalyzer flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene.

func (PatternAnalyzer) AsAnalyzer

func (pa PatternAnalyzer) AsAnalyzer() (*Analyzer, bool)

AsAnalyzer is the BasicAnalyzer implementation for PatternAnalyzer.

func (PatternAnalyzer) AsBasicAnalyzer

func (pa PatternAnalyzer) AsBasicAnalyzer() (BasicAnalyzer, bool)

AsBasicAnalyzer is the BasicAnalyzer implementation for PatternAnalyzer.

func (PatternAnalyzer) AsCustomAnalyzer

func (pa PatternAnalyzer) AsCustomAnalyzer() (*CustomAnalyzer, bool)

AsCustomAnalyzer is the BasicAnalyzer implementation for PatternAnalyzer.

func (PatternAnalyzer) AsPatternAnalyzer

func (pa PatternAnalyzer) AsPatternAnalyzer() (*PatternAnalyzer, bool)

AsPatternAnalyzer is the BasicAnalyzer implementation for PatternAnalyzer.

func (PatternAnalyzer) AsStandardAnalyzer

func (pa PatternAnalyzer) AsStandardAnalyzer() (*StandardAnalyzer, bool)

AsStandardAnalyzer is the BasicAnalyzer implementation for PatternAnalyzer.

func (PatternAnalyzer) AsStopAnalyzer

func (pa PatternAnalyzer) AsStopAnalyzer() (*StopAnalyzer, bool)

AsStopAnalyzer is the BasicAnalyzer implementation for PatternAnalyzer.

func (PatternAnalyzer) MarshalJSON

func (pa PatternAnalyzer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for PatternAnalyzer.

type PatternCaptureTokenFilter

type PatternCaptureTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Patterns - A list of patterns to match against each token.
	Patterns *[]string `json:"patterns,omitempty"`
	// PreserveOriginal - A value indicating whether to return the original token even if one of the patterns matches. Default is true.
	PreserveOriginal *bool `json:"preserveOriginal,omitempty"`
}

PatternCaptureTokenFilter uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene.
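The capture-group behavior can be sketched per token with the Go standard library (this is an illustration, not the Lucene implementation; Lucene's edge cases around non-matching patterns differ slightly): each capture group of each matching pattern becomes its own token, and PreserveOriginal keeps the input token as well.

```go
package main

import (
	"fmt"
	"regexp"
)

// patternCapture sketches PatternCaptureTokenFilter for a single token:
// every capture group of every matching pattern is emitted as a token;
// preserveOriginal additionally keeps the unmodified token.
func patternCapture(token string, patterns []string, preserveOriginal bool) []string {
	var out []string
	if preserveOriginal {
		out = append(out, token)
	}
	for _, p := range patterns {
		if m := regexp.MustCompile(p).FindStringSubmatch(token); m != nil {
			out = append(out, m[1:]...) // m[0] is the whole match; groups follow
		}
	}
	return out
}

func main() {
	// One pattern with two groups splits a letters+digits token.
	fmt.Println(patternCapture("abc123", []string{`([a-z]+)(\d+)`}, true))
}
```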

func (PatternCaptureTokenFilter) AsASCIIFoldingTokenFilter

func (pctf PatternCaptureTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsBasicTokenFilter

func (pctf PatternCaptureTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsCjkBigramTokenFilter

func (pctf PatternCaptureTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsCommonGramTokenFilter

func (pctf PatternCaptureTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsDictionaryDecompounderTokenFilter

func (pctf PatternCaptureTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsEdgeNGramTokenFilter

func (pctf PatternCaptureTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsEdgeNGramTokenFilterV2

func (pctf PatternCaptureTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsElisionTokenFilter

func (pctf PatternCaptureTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsKeepTokenFilter

func (pctf PatternCaptureTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsKeywordMarkerTokenFilter

func (pctf PatternCaptureTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsLengthTokenFilter

func (pctf PatternCaptureTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsLimitTokenFilter

func (pctf PatternCaptureTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsNGramTokenFilter

func (pctf PatternCaptureTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsNGramTokenFilterV2

func (pctf PatternCaptureTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsPatternCaptureTokenFilter

func (pctf PatternCaptureTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsPatternReplaceTokenFilter

func (pctf PatternCaptureTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsPhoneticTokenFilter

func (pctf PatternCaptureTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsShingleTokenFilter

func (pctf PatternCaptureTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsSnowballTokenFilter

func (pctf PatternCaptureTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsStemmerOverrideTokenFilter

func (pctf PatternCaptureTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsStemmerTokenFilter

func (pctf PatternCaptureTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsStopwordsTokenFilter

func (pctf PatternCaptureTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsSynonymTokenFilter

func (pctf PatternCaptureTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsTokenFilter

func (pctf PatternCaptureTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsTruncateTokenFilter

func (pctf PatternCaptureTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsUniqueTokenFilter

func (pctf PatternCaptureTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) AsWordDelimiterTokenFilter

func (pctf PatternCaptureTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for PatternCaptureTokenFilter.

func (PatternCaptureTokenFilter) MarshalJSON

func (pctf PatternCaptureTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for PatternCaptureTokenFilter.

type PatternReplaceCharFilter

type PatternReplaceCharFilter struct {
	// Name - The name of the char filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeCharFilter', 'OdataTypeMicrosoftAzureSearchMappingCharFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceCharFilter'
	OdataType OdataTypeBasicCharFilter `json:"@odata.type,omitempty"`
	// Pattern - A regular expression pattern.
	Pattern *string `json:"pattern,omitempty"`
	// Replacement - The replacement text.
	Replacement *string `json:"replacement,omitempty"`
}

PatternReplaceCharFilter a character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene.
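The documented example can be reproduced with Go's standard regexp package (the service itself uses Java regular expressions; Go's RE2 syntax matches for this pattern, and `$1`/`$2` refer to capture groups in both):

```go
package main

import (
	"fmt"
	"regexp"
)

// patternReplaceChars sketches PatternReplaceCharFilter: replace every
// match of pattern in the raw input with the replacement template, in
// which $1, $2, ... expand to capture groups.
func patternReplaceChars(input, pattern, replacement string) string {
	return regexp.MustCompile(pattern).ReplaceAllString(input, replacement)
}

func main() {
	// The example from the description above.
	fmt.Println(patternReplaceChars("aa bb aa bb", `(aa)\s+(bb)`, "$1#$2"))
}
```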

func (PatternReplaceCharFilter) AsBasicCharFilter

func (prcf PatternReplaceCharFilter) AsBasicCharFilter() (BasicCharFilter, bool)

AsBasicCharFilter is the BasicCharFilter implementation for PatternReplaceCharFilter.

func (PatternReplaceCharFilter) AsCharFilter

func (prcf PatternReplaceCharFilter) AsCharFilter() (*CharFilter, bool)

AsCharFilter is the BasicCharFilter implementation for PatternReplaceCharFilter.

func (PatternReplaceCharFilter) AsMappingCharFilter

func (prcf PatternReplaceCharFilter) AsMappingCharFilter() (*MappingCharFilter, bool)

AsMappingCharFilter is the BasicCharFilter implementation for PatternReplaceCharFilter.

func (PatternReplaceCharFilter) AsPatternReplaceCharFilter

func (prcf PatternReplaceCharFilter) AsPatternReplaceCharFilter() (*PatternReplaceCharFilter, bool)

AsPatternReplaceCharFilter is the BasicCharFilter implementation for PatternReplaceCharFilter.

func (PatternReplaceCharFilter) MarshalJSON

func (prcf PatternReplaceCharFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for PatternReplaceCharFilter.

type PatternReplaceTokenFilter

type PatternReplaceTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Pattern - A regular expression pattern.
	Pattern *string `json:"pattern,omitempty"`
	// Replacement - The replacement text.
	Replacement *string `json:"replacement,omitempty"`
}

PatternReplaceTokenFilter a token filter which replaces characters in each token. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene.
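The difference from PatternReplaceCharFilter is where the replacement runs: the char filter rewrites the raw input before tokenization, while this filter rewrites each token in an already-tokenized stream. A standard-library sketch (not part of this package) of the per-token behavior; note that in Go's replacement syntax `${1}` is needed when the group number is followed by a letter:

```go
package main

import (
	"fmt"
	"regexp"
)

// patternReplaceTokens sketches PatternReplaceTokenFilter: the same
// pattern/replacement pair as the char filter, applied to each token
// of a token stream rather than to the raw input string.
func patternReplaceTokens(tokens []string, pattern, replacement string) []string {
	re := regexp.MustCompile(pattern)
	out := make([]string, len(tokens))
	for i, t := range tokens {
		out[i] = re.ReplaceAllString(t, replacement)
	}
	return out
}

func main() {
	// "cat" matches and becomes "dag"; "car" passes through unchanged.
	fmt.Println(patternReplaceTokens([]string{"cat", "car"}, `c(a)t`, "d${1}g"))
}
```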

func (PatternReplaceTokenFilter) AsASCIIFoldingTokenFilter

func (prtf PatternReplaceTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsBasicTokenFilter

func (prtf PatternReplaceTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsCjkBigramTokenFilter

func (prtf PatternReplaceTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsCommonGramTokenFilter

func (prtf PatternReplaceTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsDictionaryDecompounderTokenFilter

func (prtf PatternReplaceTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsEdgeNGramTokenFilter

func (prtf PatternReplaceTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsEdgeNGramTokenFilterV2

func (prtf PatternReplaceTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsElisionTokenFilter

func (prtf PatternReplaceTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsKeepTokenFilter

func (prtf PatternReplaceTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsKeywordMarkerTokenFilter

func (prtf PatternReplaceTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsLengthTokenFilter

func (prtf PatternReplaceTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsLimitTokenFilter

func (prtf PatternReplaceTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsNGramTokenFilter

func (prtf PatternReplaceTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsNGramTokenFilterV2

func (prtf PatternReplaceTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsPatternCaptureTokenFilter

func (prtf PatternReplaceTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsPatternReplaceTokenFilter

func (prtf PatternReplaceTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsPhoneticTokenFilter

func (prtf PatternReplaceTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsShingleTokenFilter

func (prtf PatternReplaceTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsSnowballTokenFilter

func (prtf PatternReplaceTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsStemmerOverrideTokenFilter

func (prtf PatternReplaceTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsStemmerTokenFilter

func (prtf PatternReplaceTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsStopwordsTokenFilter

func (prtf PatternReplaceTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsSynonymTokenFilter

func (prtf PatternReplaceTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsTokenFilter

func (prtf PatternReplaceTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsTruncateTokenFilter

func (prtf PatternReplaceTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsUniqueTokenFilter

func (prtf PatternReplaceTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) AsWordDelimiterTokenFilter

func (prtf PatternReplaceTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for PatternReplaceTokenFilter.

func (PatternReplaceTokenFilter) MarshalJSON

func (prtf PatternReplaceTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for PatternReplaceTokenFilter.

type PatternTokenizer

type PatternTokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// Pattern - A regular expression pattern to match token separators. Default is an expression that matches one or more whitespace characters.
	Pattern *string `json:"pattern,omitempty"`
	// Flags - Regular expression flags.
	Flags *RegexFlags `json:"flags,omitempty"`
	// Group - The zero-based ordinal of the matching group in the regular expression pattern to extract into tokens. Use -1 if you want to use the entire pattern to split the input into tokens, irrespective of matching groups. Default is -1.
	Group *int32 `json:"group,omitempty"`
}

PatternTokenizer tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.

func (PatternTokenizer) AsBasicTokenizer

func (pt PatternTokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsClassicTokenizer

func (pt PatternTokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsEdgeNGramTokenizer

func (pt PatternTokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsKeywordTokenizer

func (pt PatternTokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsKeywordTokenizerV2

func (pt PatternTokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsMicrosoftLanguageStemmingTokenizer

func (pt PatternTokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsMicrosoftLanguageTokenizer

func (pt PatternTokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsNGramTokenizer

func (pt PatternTokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsPathHierarchyTokenizer

func (pt PatternTokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsPathHierarchyTokenizerV2

func (pt PatternTokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsPatternTokenizer

func (pt PatternTokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsStandardTokenizer

func (pt PatternTokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsStandardTokenizerV2

func (pt PatternTokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsTokenizer

func (pt PatternTokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) AsUaxURLEmailTokenizer

func (pt PatternTokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for PatternTokenizer.

func (PatternTokenizer) MarshalJSON

func (pt PatternTokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for PatternTokenizer.

type PhoneticEncoder

type PhoneticEncoder string

PhoneticEncoder enumerates the values for phonetic encoder.

const (
	// BeiderMorse ...
	BeiderMorse PhoneticEncoder = "beiderMorse"
	// Caverphone1 ...
	Caverphone1 PhoneticEncoder = "caverphone1"
	// Caverphone2 ...
	Caverphone2 PhoneticEncoder = "caverphone2"
	// Cologne ...
	Cologne PhoneticEncoder = "cologne"
	// DoubleMetaphone ...
	DoubleMetaphone PhoneticEncoder = "doubleMetaphone"
	// HaasePhonetik ...
	HaasePhonetik PhoneticEncoder = "haasePhonetik"
	// KoelnerPhonetik ...
	KoelnerPhonetik PhoneticEncoder = "koelnerPhonetik"
	// Metaphone ...
	Metaphone PhoneticEncoder = "metaphone"
	// Nysiis ...
	Nysiis PhoneticEncoder = "nysiis"
	// RefinedSoundex ...
	RefinedSoundex PhoneticEncoder = "refinedSoundex"
	// Soundex ...
	Soundex PhoneticEncoder = "soundex"
)
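The encoder names above correspond to well-known phonetic algorithms from Apache Lucene. As a flavor of what such an encoder does, here is a self-contained sketch of classic Soundex (the "soundex" value) in Go; this is a reference implementation of the textbook algorithm, not the SDK's or Lucene's code:

```go
package main

import (
	"fmt"
	"strings"
)

// soundexCode maps a lowercase letter to its classic Soundex digit,
// or 0 for letters that are dropped (vowels, h, w, y).
func soundexCode(r rune) byte {
	switch r {
	case 'b', 'f', 'p', 'v':
		return '1'
	case 'c', 'g', 'j', 'k', 'q', 's', 'x', 'z':
		return '2'
	case 'd', 't':
		return '3'
	case 'l':
		return '4'
	case 'm', 'n':
		return '5'
	case 'r':
		return '6'
	}
	return 0
}

// Soundex computes the classic four-character Soundex code for a word:
// keep the first letter, encode the rest as digits, skip repeats of the
// previous code, and pad with zeros to length four.
func Soundex(word string) string {
	word = strings.ToLower(word)
	if word == "" {
		return ""
	}
	out := []byte{word[0] - 'a' + 'A'}
	prev := soundexCode(rune(word[0]))
	for _, r := range word[1:] {
		c := soundexCode(r)
		if c != 0 && c != prev {
			out = append(out, c)
			if len(out) == 4 {
				break
			}
		}
		// h and w are transparent: they do not reset the previous code.
		if r != 'h' && r != 'w' {
			prev = c
		}
	}
	for len(out) < 4 {
		out = append(out, '0')
	}
	return string(out)
}

func main() {
	// "Robert" and "Rupert" sound alike and share the code R163.
	fmt.Println(Soundex("Robert"), Soundex("Rupert"))
}
```

Phonetically similar names collapsing to the same code is exactly what makes these encoders useful for fuzzy name matching in a search index.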

type PhoneticTokenFilter

type PhoneticTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Encoder - The phonetic encoder to use. Default is "metaphone". Possible values include: 'Metaphone', 'DoubleMetaphone', 'Soundex', 'RefinedSoundex', 'Caverphone1', 'Caverphone2', 'Cologne', 'Nysiis', 'KoelnerPhonetik', 'HaasePhonetik', 'BeiderMorse'
	Encoder PhoneticEncoder `json:"encoder,omitempty"`
	// ReplaceOriginalTokens - A value indicating whether encoded tokens should replace original tokens. If false, encoded tokens are added as synonyms. Default is true.
	ReplaceOriginalTokens *bool `json:"replace,omitempty"`
}

PhoneticTokenFilter creates tokens for phonetic matches. This token filter is implemented using Apache Lucene.

func (PhoneticTokenFilter) AsASCIIFoldingTokenFilter

func (ptf PhoneticTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsBasicTokenFilter

func (ptf PhoneticTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsCjkBigramTokenFilter

func (ptf PhoneticTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsCommonGramTokenFilter

func (ptf PhoneticTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsDictionaryDecompounderTokenFilter

func (ptf PhoneticTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsEdgeNGramTokenFilter

func (ptf PhoneticTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsEdgeNGramTokenFilterV2

func (ptf PhoneticTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsElisionTokenFilter

func (ptf PhoneticTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsKeepTokenFilter

func (ptf PhoneticTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsKeywordMarkerTokenFilter

func (ptf PhoneticTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsLengthTokenFilter

func (ptf PhoneticTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsLimitTokenFilter

func (ptf PhoneticTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsNGramTokenFilter

func (ptf PhoneticTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsNGramTokenFilterV2

func (ptf PhoneticTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsPatternCaptureTokenFilter

func (ptf PhoneticTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsPatternReplaceTokenFilter

func (ptf PhoneticTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsPhoneticTokenFilter

func (ptf PhoneticTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsShingleTokenFilter

func (ptf PhoneticTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsSnowballTokenFilter

func (ptf PhoneticTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsStemmerOverrideTokenFilter

func (ptf PhoneticTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsStemmerTokenFilter

func (ptf PhoneticTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsStopwordsTokenFilter

func (ptf PhoneticTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsSynonymTokenFilter

func (ptf PhoneticTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsTokenFilter

func (ptf PhoneticTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsTruncateTokenFilter

func (ptf PhoneticTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsUniqueTokenFilter

func (ptf PhoneticTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) AsWordDelimiterTokenFilter

func (ptf PhoneticTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for PhoneticTokenFilter.

func (PhoneticTokenFilter) MarshalJSON

func (ptf PhoneticTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for PhoneticTokenFilter.
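The As* methods above follow one pattern across the whole package: every concrete filter type implements an As* method per concrete type, returning a non-nil pointer and true only for its own type, which lets callers recover the concrete type from a BasicTokenFilter without a type switch. The self-contained sketch below illustrates the idiom with two hypothetical mini-types (phonetic, shingle), not the SDK types themselves:

```go
package main

import "fmt"

// basicTokenFilter mimics the SDK's discriminated-union interface:
// one As* accessor per concrete member type.
type basicTokenFilter interface {
	asPhonetic() (*phonetic, bool)
	asShingle() (*shingle, bool)
}

type phonetic struct{ encoder string }
type shingle struct{ maxSize int }

// Each type answers true only for its own accessor.
func (p phonetic) asPhonetic() (*phonetic, bool) { return &p, true }
func (p phonetic) asShingle() (*shingle, bool)   { return nil, false }
func (s shingle) asPhonetic() (*phonetic, bool)  { return nil, false }
func (s shingle) asShingle() (*shingle, bool)    { return &s, true }

// describe recovers the concrete type via the As* accessors,
// the same way SDK callers probe a BasicTokenFilter value.
func describe(f basicTokenFilter) string {
	if p, ok := f.asPhonetic(); ok {
		return "phonetic/" + p.encoder
	}
	if s, ok := f.asShingle(); ok {
		return fmt.Sprintf("shingle/max=%d", s.maxSize)
	}
	return "unknown"
}

func main() {
	fmt.Println(describe(phonetic{encoder: "metaphone"}))
	fmt.Println(describe(shingle{maxSize: 3}))
}
```

The comma-ok return shape is why each method exists even on the "wrong" types: the set of As* methods is what defines membership in the union.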

type QueryType

type QueryType string

QueryType enumerates the values for query type.

const (
	// Full ...
	Full QueryType = "full"
	// Simple ...
	Simple QueryType = "simple"
)

type RegexFlags

type RegexFlags struct {
	Name *string `json:"name,omitempty"`
}

RegexFlags defines flags that can be combined to control how regular expressions are used in the pattern analyzer and pattern tokenizer.

type SQLIntegratedChangeTrackingPolicy

type SQLIntegratedChangeTrackingPolicy struct {
	// OdataType - Possible values include: 'OdataTypeDataChangeDetectionPolicy', 'OdataTypeMicrosoftAzureSearchHighWaterMarkChangeDetectionPolicy', 'OdataTypeMicrosoftAzureSearchSQLIntegratedChangeTrackingPolicy'
	OdataType OdataTypeBasicDataChangeDetectionPolicy `json:"@odata.type,omitempty"`
}

SQLIntegratedChangeTrackingPolicy defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.

func (SQLIntegratedChangeTrackingPolicy) AsBasicDataChangeDetectionPolicy

func (sictp SQLIntegratedChangeTrackingPolicy) AsBasicDataChangeDetectionPolicy() (BasicDataChangeDetectionPolicy, bool)

AsBasicDataChangeDetectionPolicy is the BasicDataChangeDetectionPolicy implementation for SQLIntegratedChangeTrackingPolicy.

func (SQLIntegratedChangeTrackingPolicy) AsDataChangeDetectionPolicy

func (sictp SQLIntegratedChangeTrackingPolicy) AsDataChangeDetectionPolicy() (*DataChangeDetectionPolicy, bool)

AsDataChangeDetectionPolicy is the BasicDataChangeDetectionPolicy implementation for SQLIntegratedChangeTrackingPolicy.

func (SQLIntegratedChangeTrackingPolicy) AsHighWaterMarkChangeDetectionPolicy

func (sictp SQLIntegratedChangeTrackingPolicy) AsHighWaterMarkChangeDetectionPolicy() (*HighWaterMarkChangeDetectionPolicy, bool)

AsHighWaterMarkChangeDetectionPolicy is the BasicDataChangeDetectionPolicy implementation for SQLIntegratedChangeTrackingPolicy.

func (SQLIntegratedChangeTrackingPolicy) AsSQLIntegratedChangeTrackingPolicy

func (sictp SQLIntegratedChangeTrackingPolicy) AsSQLIntegratedChangeTrackingPolicy() (*SQLIntegratedChangeTrackingPolicy, bool)

AsSQLIntegratedChangeTrackingPolicy is the BasicDataChangeDetectionPolicy implementation for SQLIntegratedChangeTrackingPolicy.

func (SQLIntegratedChangeTrackingPolicy) MarshalJSON

func (sictp SQLIntegratedChangeTrackingPolicy) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for SQLIntegratedChangeTrackingPolicy.

type ScoringFunction

type ScoringFunction struct {
	// FieldName - The name of the field used as input to the scoring function.
	FieldName *string `json:"fieldName,omitempty"`
	// Boost - A multiplier for the raw score. Must be a positive number not equal to 1.0.
	Boost *float64 `json:"boost,omitempty"`
	// Interpolation - A value indicating how boosting will be interpolated across document scores; defaults to "Linear". Possible values include: 'Linear', 'Constant', 'Quadratic', 'Logarithmic'
	Interpolation ScoringFunctionInterpolation `json:"interpolation,omitempty"`
	// Type - Possible values include: 'TypeScoringFunction', 'TypeDistance', 'TypeFreshness', 'TypeMagnitude', 'TypeTag'
	Type Type `json:"type,omitempty"`
}

ScoringFunction is the abstract base class for functions that can modify document scores during ranking.

func (ScoringFunction) AsBasicScoringFunction

func (sf ScoringFunction) AsBasicScoringFunction() (BasicScoringFunction, bool)

AsBasicScoringFunction is the BasicScoringFunction implementation for ScoringFunction.

func (ScoringFunction) AsDistanceScoringFunction

func (sf ScoringFunction) AsDistanceScoringFunction() (*DistanceScoringFunction, bool)

AsDistanceScoringFunction is the BasicScoringFunction implementation for ScoringFunction.

func (ScoringFunction) AsFreshnessScoringFunction

func (sf ScoringFunction) AsFreshnessScoringFunction() (*FreshnessScoringFunction, bool)

AsFreshnessScoringFunction is the BasicScoringFunction implementation for ScoringFunction.

func (ScoringFunction) AsMagnitudeScoringFunction

func (sf ScoringFunction) AsMagnitudeScoringFunction() (*MagnitudeScoringFunction, bool)

AsMagnitudeScoringFunction is the BasicScoringFunction implementation for ScoringFunction.

func (ScoringFunction) AsScoringFunction

func (sf ScoringFunction) AsScoringFunction() (*ScoringFunction, bool)

AsScoringFunction is the BasicScoringFunction implementation for ScoringFunction.

func (ScoringFunction) AsTagScoringFunction

func (sf ScoringFunction) AsTagScoringFunction() (*TagScoringFunction, bool)

AsTagScoringFunction is the BasicScoringFunction implementation for ScoringFunction.

func (ScoringFunction) MarshalJSON

func (sf ScoringFunction) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for ScoringFunction.

type ScoringFunctionAggregation

type ScoringFunctionAggregation string

ScoringFunctionAggregation enumerates the values for scoring function aggregation.

const (
	// Average ...
	Average ScoringFunctionAggregation = "average"
	// FirstMatching ...
	FirstMatching ScoringFunctionAggregation = "firstMatching"
	// Maximum ...
	Maximum ScoringFunctionAggregation = "maximum"
	// Minimum ...
	Minimum ScoringFunctionAggregation = "minimum"
	// Sum ...
	Sum ScoringFunctionAggregation = "sum"
)

type ScoringFunctionInterpolation

type ScoringFunctionInterpolation string

ScoringFunctionInterpolation enumerates the values for scoring function interpolation.

const (
	// Constant ...
	Constant ScoringFunctionInterpolation = "constant"
	// Linear ...
	Linear ScoringFunctionInterpolation = "linear"
	// Logarithmic ...
	Logarithmic ScoringFunctionInterpolation = "logarithmic"
	// Quadratic ...
	Quadratic ScoringFunctionInterpolation = "quadratic"
)

type ScoringProfile

type ScoringProfile struct {
	// Name - The name of the scoring profile.
	Name *string `json:"name,omitempty"`
	// TextWeights - Parameters that boost scoring based on text matches in certain index fields.
	TextWeights *TextWeights `json:"text,omitempty"`
	// Functions - The collection of functions that influence the scoring of documents.
	Functions *[]BasicScoringFunction `json:"functions,omitempty"`
	// FunctionAggregation - A value indicating how the results of individual scoring functions should be combined. Defaults to "Sum". Ignored if there are no scoring functions. Possible values include: 'Sum', 'Average', 'Minimum', 'Maximum', 'FirstMatching'
	FunctionAggregation ScoringFunctionAggregation `json:"functionAggregation,omitempty"`
}

ScoringProfile defines parameters for an Azure Search index that influence scoring in search queries.

func (*ScoringProfile) UnmarshalJSON

func (sp *ScoringProfile) UnmarshalJSON(body []byte) error

UnmarshalJSON is the custom unmarshaler for ScoringProfile struct.

type ShingleTokenFilter

type ShingleTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// MaxShingleSize - The maximum shingle size. Default and minimum value is 2.
	MaxShingleSize *int32 `json:"maxShingleSize,omitempty"`
	// MinShingleSize - The minimum shingle size. Default and minimum value is 2. Must be less than the value of maxShingleSize.
	MinShingleSize *int32 `json:"minShingleSize,omitempty"`
	// OutputUnigrams - A value indicating whether the output stream will contain the input tokens (unigrams) as well as shingles. Default is true.
	OutputUnigrams *bool `json:"outputUnigrams,omitempty"`
	// OutputUnigramsIfNoShingles - A value indicating whether to output unigrams for those times when no shingles are available. This property takes precedence when outputUnigrams is set to false. Default is false.
	OutputUnigramsIfNoShingles *bool `json:"outputUnigramsIfNoShingles,omitempty"`
	// TokenSeparator - The string to use when joining adjacent tokens to form a shingle. Default is a single space (" ").
	TokenSeparator *string `json:"tokenSeparator,omitempty"`
	// FilterToken - The string to insert for each position at which there is no token. Default is an underscore ("_").
	FilterToken *string `json:"filterToken,omitempty"`
}

ShingleTokenFilter creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene.
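To make the MinShingleSize/MaxShingleSize settings concrete, here is a self-contained sketch of what shingling produces: every run of adjacent tokens between the min and max size, joined by the separator. This is an illustration of the concept only; the real Lucene filter also handles unigram output and filler tokens, which are omitted here:

```go
package main

import (
	"fmt"
	"strings"
)

// shingles returns every window of adjacent tokens whose length is
// between minSize and maxSize, joined by separator.
func shingles(tokens []string, minSize, maxSize int, separator string) []string {
	var out []string
	for i := range tokens {
		for n := minSize; n <= maxSize && i+n <= len(tokens); n++ {
			out = append(out, strings.Join(tokens[i:i+n], separator))
		}
	}
	return out
}

func main() {
	// With min=max=2 (the defaults), three tokens yield two bigram shingles.
	fmt.Println(shingles([]string{"please", "divide", "this"}, 2, 2, " "))
}
```

Shingles of size 2 give phrase-like matching on adjacent word pairs, which is the most common use of this filter.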

func (ShingleTokenFilter) AsASCIIFoldingTokenFilter

func (stf ShingleTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsBasicTokenFilter

func (stf ShingleTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsCjkBigramTokenFilter

func (stf ShingleTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsCommonGramTokenFilter

func (stf ShingleTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsDictionaryDecompounderTokenFilter

func (stf ShingleTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsEdgeNGramTokenFilter

func (stf ShingleTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsEdgeNGramTokenFilterV2

func (stf ShingleTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsElisionTokenFilter

func (stf ShingleTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsKeepTokenFilter

func (stf ShingleTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsKeywordMarkerTokenFilter

func (stf ShingleTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsLengthTokenFilter

func (stf ShingleTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsLimitTokenFilter

func (stf ShingleTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsNGramTokenFilter

func (stf ShingleTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsNGramTokenFilterV2

func (stf ShingleTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsPatternCaptureTokenFilter

func (stf ShingleTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsPatternReplaceTokenFilter

func (stf ShingleTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsPhoneticTokenFilter

func (stf ShingleTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsShingleTokenFilter

func (stf ShingleTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsSnowballTokenFilter

func (stf ShingleTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsStemmerOverrideTokenFilter

func (stf ShingleTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsStemmerTokenFilter

func (stf ShingleTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsStopwordsTokenFilter

func (stf ShingleTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsSynonymTokenFilter

func (stf ShingleTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsTokenFilter

func (stf ShingleTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsTruncateTokenFilter

func (stf ShingleTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsUniqueTokenFilter

func (stf ShingleTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) AsWordDelimiterTokenFilter

func (stf ShingleTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for ShingleTokenFilter.

func (ShingleTokenFilter) MarshalJSON

func (stf ShingleTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for ShingleTokenFilter.

type SnowballTokenFilter

type SnowballTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Language - The language to use. Possible values include: 'SnowballTokenFilterLanguageArmenian', 'SnowballTokenFilterLanguageBasque', 'SnowballTokenFilterLanguageCatalan', 'SnowballTokenFilterLanguageDanish', 'SnowballTokenFilterLanguageDutch', 'SnowballTokenFilterLanguageEnglish', 'SnowballTokenFilterLanguageFinnish', 'SnowballTokenFilterLanguageFrench', 'SnowballTokenFilterLanguageGerman', 'SnowballTokenFilterLanguageGerman2', 'SnowballTokenFilterLanguageHungarian', 'SnowballTokenFilterLanguageItalian', 'SnowballTokenFilterLanguageKp', 'SnowballTokenFilterLanguageLovins', 'SnowballTokenFilterLanguageNorwegian', 'SnowballTokenFilterLanguagePorter', 'SnowballTokenFilterLanguagePortuguese', 'SnowballTokenFilterLanguageRomanian', 'SnowballTokenFilterLanguageRussian', 'SnowballTokenFilterLanguageSpanish', 'SnowballTokenFilterLanguageSwedish', 'SnowballTokenFilterLanguageTurkish'
	Language SnowballTokenFilterLanguage `json:"language,omitempty"`
}

SnowballTokenFilter is a filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene.

func (SnowballTokenFilter) AsASCIIFoldingTokenFilter

func (stf SnowballTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsBasicTokenFilter

func (stf SnowballTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsCjkBigramTokenFilter

func (stf SnowballTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsCommonGramTokenFilter

func (stf SnowballTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsDictionaryDecompounderTokenFilter

func (stf SnowballTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsEdgeNGramTokenFilter

func (stf SnowballTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsEdgeNGramTokenFilterV2

func (stf SnowballTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsElisionTokenFilter

func (stf SnowballTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsKeepTokenFilter

func (stf SnowballTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsKeywordMarkerTokenFilter

func (stf SnowballTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsLengthTokenFilter

func (stf SnowballTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsLimitTokenFilter

func (stf SnowballTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsNGramTokenFilter

func (stf SnowballTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsNGramTokenFilterV2

func (stf SnowballTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsPatternCaptureTokenFilter

func (stf SnowballTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsPatternReplaceTokenFilter

func (stf SnowballTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsPhoneticTokenFilter

func (stf SnowballTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsShingleTokenFilter

func (stf SnowballTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsSnowballTokenFilter

func (stf SnowballTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsStemmerOverrideTokenFilter

func (stf SnowballTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsStemmerTokenFilter

func (stf SnowballTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsStopwordsTokenFilter

func (stf SnowballTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsSynonymTokenFilter

func (stf SnowballTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsTokenFilter

func (stf SnowballTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsTruncateTokenFilter

func (stf SnowballTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsUniqueTokenFilter

func (stf SnowballTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) AsWordDelimiterTokenFilter

func (stf SnowballTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for SnowballTokenFilter.

func (SnowballTokenFilter) MarshalJSON

func (stf SnowballTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for SnowballTokenFilter.

type SnowballTokenFilterLanguage

type SnowballTokenFilterLanguage string

SnowballTokenFilterLanguage enumerates the values for snowball token filter language.

const (
	// SnowballTokenFilterLanguageArmenian ...
	SnowballTokenFilterLanguageArmenian SnowballTokenFilterLanguage = "armenian"
	// SnowballTokenFilterLanguageBasque ...
	SnowballTokenFilterLanguageBasque SnowballTokenFilterLanguage = "basque"
	// SnowballTokenFilterLanguageCatalan ...
	SnowballTokenFilterLanguageCatalan SnowballTokenFilterLanguage = "catalan"
	// SnowballTokenFilterLanguageDanish ...
	SnowballTokenFilterLanguageDanish SnowballTokenFilterLanguage = "danish"
	// SnowballTokenFilterLanguageDutch ...
	SnowballTokenFilterLanguageDutch SnowballTokenFilterLanguage = "dutch"
	// SnowballTokenFilterLanguageEnglish ...
	SnowballTokenFilterLanguageEnglish SnowballTokenFilterLanguage = "english"
	// SnowballTokenFilterLanguageFinnish ...
	SnowballTokenFilterLanguageFinnish SnowballTokenFilterLanguage = "finnish"
	// SnowballTokenFilterLanguageFrench ...
	SnowballTokenFilterLanguageFrench SnowballTokenFilterLanguage = "french"
	// SnowballTokenFilterLanguageGerman ...
	SnowballTokenFilterLanguageGerman SnowballTokenFilterLanguage = "german"
	// SnowballTokenFilterLanguageGerman2 ...
	SnowballTokenFilterLanguageGerman2 SnowballTokenFilterLanguage = "german2"
	// SnowballTokenFilterLanguageHungarian ...
	SnowballTokenFilterLanguageHungarian SnowballTokenFilterLanguage = "hungarian"
	// SnowballTokenFilterLanguageItalian ...
	SnowballTokenFilterLanguageItalian SnowballTokenFilterLanguage = "italian"
	// SnowballTokenFilterLanguageKp ...
	SnowballTokenFilterLanguageKp SnowballTokenFilterLanguage = "kp"
	// SnowballTokenFilterLanguageLovins ...
	SnowballTokenFilterLanguageLovins SnowballTokenFilterLanguage = "lovins"
	// SnowballTokenFilterLanguageNorwegian ...
	SnowballTokenFilterLanguageNorwegian SnowballTokenFilterLanguage = "norwegian"
	// SnowballTokenFilterLanguagePorter ...
	SnowballTokenFilterLanguagePorter SnowballTokenFilterLanguage = "porter"
	// SnowballTokenFilterLanguagePortuguese ...
	SnowballTokenFilterLanguagePortuguese SnowballTokenFilterLanguage = "portuguese"
	// SnowballTokenFilterLanguageRomanian ...
	SnowballTokenFilterLanguageRomanian SnowballTokenFilterLanguage = "romanian"
	// SnowballTokenFilterLanguageRussian ...
	SnowballTokenFilterLanguageRussian SnowballTokenFilterLanguage = "russian"
	// SnowballTokenFilterLanguageSpanish ...
	SnowballTokenFilterLanguageSpanish SnowballTokenFilterLanguage = "spanish"
	// SnowballTokenFilterLanguageSwedish ...
	SnowballTokenFilterLanguageSwedish SnowballTokenFilterLanguage = "swedish"
	// SnowballTokenFilterLanguageTurkish ...
	SnowballTokenFilterLanguageTurkish SnowballTokenFilterLanguage = "turkish"
)
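
These language constants end up as the filter's `language` property on the wire. As a hedged sketch (the local `snowballFilter` struct below is an illustrative mirror, not the SDK's `SnowballTokenFilter`, which uses pointer fields and a custom `MarshalJSON`), this shows how a chosen language constant serializes into the filter definition JSON:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// snowballFilter mirrors the shape of SnowballTokenFilter for illustration
// only; the real SDK type uses pointer fields and a custom marshaler.
type snowballFilter struct {
	Name      string `json:"name"`
	OdataType string `json:"@odata.type"`
	Language  string `json:"language"`
}

// encodeSnowball builds the JSON payload for a Snowball filter definition.
func encodeSnowball(name, lang string) string {
	f := snowballFilter{
		Name:      name,
		OdataType: "#Microsoft.Azure.Search.SnowballTokenFilter",
		Language:  lang,
	}
	b, err := json.Marshal(f)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	// Pick a language from the SnowballTokenFilterLanguage constants above,
	// e.g. SnowballTokenFilterLanguageEnglish == "english".
	fmt.Println(encodeSnowball("my_snowball", "english"))
}
```

In the SDK itself you would assign the constant directly (e.g. `Language: SnowballTokenFilterLanguageEnglish`) rather than a raw string.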

type SoftDeleteColumnDeletionDetectionPolicy

type SoftDeleteColumnDeletionDetectionPolicy struct {
	// OdataType - Possible values include: 'OdataTypeDataDeletionDetectionPolicy', 'OdataTypeMicrosoftAzureSearchSoftDeleteColumnDeletionDetectionPolicy'
	OdataType OdataTypeBasicDataDeletionDetectionPolicy `json:"@odata.type,omitempty"`
	// SoftDeleteColumnName - The name of the column to use for soft-deletion detection.
	SoftDeleteColumnName *string `json:"softDeleteColumnName,omitempty"`
	// SoftDeleteMarkerValue - The marker value that identifies an item as deleted.
	SoftDeleteMarkerValue *string `json:"softDeleteMarkerValue,omitempty"`
}

SoftDeleteColumnDeletionDetectionPolicy defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column.
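
The decision rule the policy describes is simple: a row counts as deleted when its designated column equals the marker value. A minimal sketch, using a local mirror struct and a hypothetical `isDeleted` helper (neither is part of the SDK):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// softDeletePolicy mirrors SoftDeleteColumnDeletionDetectionPolicy for
// illustration; the SDK type uses pointer fields and a custom marshaler.
type softDeletePolicy struct {
	OdataType             string `json:"@odata.type"`
	SoftDeleteColumnName  string `json:"softDeleteColumnName"`
	SoftDeleteMarkerValue string `json:"softDeleteMarkerValue"`
}

// isDeleted reports whether a source row should be treated as deleted
// under the policy: the designated column equals the marker value.
func isDeleted(p softDeletePolicy, row map[string]string) bool {
	return row[p.SoftDeleteColumnName] == p.SoftDeleteMarkerValue
}

func main() {
	p := softDeletePolicy{
		OdataType:             "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
		SoftDeleteColumnName:  "IsDeleted",
		SoftDeleteMarkerValue: "true",
	}
	b, _ := json.Marshal(p)
	fmt.Println(string(b))
	fmt.Println(isDeleted(p, map[string]string{"IsDeleted": "true"}))
}
```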

func (SoftDeleteColumnDeletionDetectionPolicy) AsBasicDataDeletionDetectionPolicy

func (sdcddp SoftDeleteColumnDeletionDetectionPolicy) AsBasicDataDeletionDetectionPolicy() (BasicDataDeletionDetectionPolicy, bool)

AsBasicDataDeletionDetectionPolicy is the BasicDataDeletionDetectionPolicy implementation for SoftDeleteColumnDeletionDetectionPolicy.

func (SoftDeleteColumnDeletionDetectionPolicy) AsDataDeletionDetectionPolicy

func (sdcddp SoftDeleteColumnDeletionDetectionPolicy) AsDataDeletionDetectionPolicy() (*DataDeletionDetectionPolicy, bool)

AsDataDeletionDetectionPolicy is the BasicDataDeletionDetectionPolicy implementation for SoftDeleteColumnDeletionDetectionPolicy.

func (SoftDeleteColumnDeletionDetectionPolicy) AsSoftDeleteColumnDeletionDetectionPolicy

func (sdcddp SoftDeleteColumnDeletionDetectionPolicy) AsSoftDeleteColumnDeletionDetectionPolicy() (*SoftDeleteColumnDeletionDetectionPolicy, bool)

AsSoftDeleteColumnDeletionDetectionPolicy is the BasicDataDeletionDetectionPolicy implementation for SoftDeleteColumnDeletionDetectionPolicy.

func (SoftDeleteColumnDeletionDetectionPolicy) MarshalJSON

func (sdcddp SoftDeleteColumnDeletionDetectionPolicy) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for SoftDeleteColumnDeletionDetectionPolicy.

type StandardAnalyzer

type StandardAnalyzer struct {
	// Name - The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeAnalyzer', 'OdataTypeMicrosoftAzureSearchCustomAnalyzer', 'OdataTypeMicrosoftAzureSearchPatternAnalyzer', 'OdataTypeMicrosoftAzureSearchStandardAnalyzer', 'OdataTypeMicrosoftAzureSearchStopAnalyzer'
	OdataType OdataType `json:"@odata.type,omitempty"`
	// MaxTokenLength - The maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
	MaxTokenLength *int32 `json:"maxTokenLength,omitempty"`
	// Stopwords - A list of stopwords.
	Stopwords *[]string `json:"stopwords,omitempty"`
}

StandardAnalyzer is the standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter and stop filter.
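
The MaxTokenLength behavior documented above (tokens longer than the limit are split, with a hard cap of 300) can be illustrated with a local approximation; `splitLongToken` below is a sketch of the splitting rule, not the Lucene implementation:

```go
package main

import "fmt"

// splitLongToken illustrates the documented MaxTokenLength behavior:
// tokens longer than max are split into max-length chunks. This is a
// local approximation, not the Lucene analyzer's actual code path.
func splitLongToken(tok string, max int) []string {
	var parts []string
	r := []rune(tok)
	for len(r) > max {
		parts = append(parts, string(r[:max]))
		r = r[max:]
	}
	return append(parts, string(r))
}

func main() {
	// A 20-character token with MaxTokenLength = 8 yields three tokens.
	fmt.Println(splitLongToken("internationalization", 8))
}
```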

func (StandardAnalyzer) AsAnalyzer

func (sa StandardAnalyzer) AsAnalyzer() (*Analyzer, bool)

AsAnalyzer is the BasicAnalyzer implementation for StandardAnalyzer.

func (StandardAnalyzer) AsBasicAnalyzer

func (sa StandardAnalyzer) AsBasicAnalyzer() (BasicAnalyzer, bool)

AsBasicAnalyzer is the BasicAnalyzer implementation for StandardAnalyzer.

func (StandardAnalyzer) AsCustomAnalyzer

func (sa StandardAnalyzer) AsCustomAnalyzer() (*CustomAnalyzer, bool)

AsCustomAnalyzer is the BasicAnalyzer implementation for StandardAnalyzer.

func (StandardAnalyzer) AsPatternAnalyzer

func (sa StandardAnalyzer) AsPatternAnalyzer() (*PatternAnalyzer, bool)

AsPatternAnalyzer is the BasicAnalyzer implementation for StandardAnalyzer.

func (StandardAnalyzer) AsStandardAnalyzer

func (sa StandardAnalyzer) AsStandardAnalyzer() (*StandardAnalyzer, bool)

AsStandardAnalyzer is the BasicAnalyzer implementation for StandardAnalyzer.

func (StandardAnalyzer) AsStopAnalyzer

func (sa StandardAnalyzer) AsStopAnalyzer() (*StopAnalyzer, bool)

AsStopAnalyzer is the BasicAnalyzer implementation for StandardAnalyzer.

func (StandardAnalyzer) MarshalJSON

func (sa StandardAnalyzer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for StandardAnalyzer.

type StandardTokenizer

type StandardTokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// MaxTokenLength - The maximum token length. Default is 255. Tokens longer than the maximum length are split.
	MaxTokenLength *int32 `json:"maxTokenLength,omitempty"`
}

StandardTokenizer breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

func (StandardTokenizer) AsBasicTokenizer

func (st StandardTokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsClassicTokenizer

func (st StandardTokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsEdgeNGramTokenizer

func (st StandardTokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsKeywordTokenizer

func (st StandardTokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsKeywordTokenizerV2

func (st StandardTokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsMicrosoftLanguageStemmingTokenizer

func (st StandardTokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsMicrosoftLanguageTokenizer

func (st StandardTokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsNGramTokenizer

func (st StandardTokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsPathHierarchyTokenizer

func (st StandardTokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsPathHierarchyTokenizerV2

func (st StandardTokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsPatternTokenizer

func (st StandardTokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsStandardTokenizer

func (st StandardTokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsStandardTokenizerV2

func (st StandardTokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsTokenizer

func (st StandardTokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) AsUaxURLEmailTokenizer

func (st StandardTokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for StandardTokenizer.

func (StandardTokenizer) MarshalJSON

func (st StandardTokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for StandardTokenizer.

type StandardTokenizerV2

type StandardTokenizerV2 struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// MaxTokenLength - The maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
	MaxTokenLength *int32 `json:"maxTokenLength,omitempty"`
}

StandardTokenizerV2 breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

func (StandardTokenizerV2) AsBasicTokenizer

func (stv StandardTokenizerV2) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsClassicTokenizer

func (stv StandardTokenizerV2) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsEdgeNGramTokenizer

func (stv StandardTokenizerV2) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsKeywordTokenizer

func (stv StandardTokenizerV2) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsKeywordTokenizerV2

func (stv StandardTokenizerV2) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsMicrosoftLanguageStemmingTokenizer

func (stv StandardTokenizerV2) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsMicrosoftLanguageTokenizer

func (stv StandardTokenizerV2) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsNGramTokenizer

func (stv StandardTokenizerV2) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsPathHierarchyTokenizer

func (stv StandardTokenizerV2) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsPathHierarchyTokenizerV2

func (stv StandardTokenizerV2) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsPatternTokenizer

func (stv StandardTokenizerV2) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsStandardTokenizer

func (stv StandardTokenizerV2) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsStandardTokenizerV2

func (stv StandardTokenizerV2) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsTokenizer

func (stv StandardTokenizerV2) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) AsUaxURLEmailTokenizer

func (stv StandardTokenizerV2) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for StandardTokenizerV2.

func (StandardTokenizerV2) MarshalJSON

func (stv StandardTokenizerV2) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for StandardTokenizerV2.

type StemmerOverrideTokenFilter

type StemmerOverrideTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Rules - A list of stemming rules in the following format: "word => stem", for example: "ran => run"
	Rules *[]string `json:"rules,omitempty"`
}

StemmerOverrideTokenFilter provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene.
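
The Rules field uses the "word => stem" format shown above. As a hedged sketch of how such rules behave (the `parseRules` and `applyOverride` helpers are illustrative, not SDK or Lucene code): a matched token is replaced by its dictionary stem and, per the description, would be marked as a keyword so later stemmers skip it.

```go
package main

import (
	"fmt"
	"strings"
)

// parseRules turns "word => stem" entries into a lookup map.
// Malformed entries without "=>" are skipped.
func parseRules(rules []string) map[string]string {
	m := make(map[string]string)
	for _, r := range rules {
		parts := strings.SplitN(r, "=>", 2)
		if len(parts) != 2 {
			continue
		}
		m[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
	}
	return m
}

// applyOverride returns the dictionary stem for token and whether it
// matched; a match would also flag the token as a keyword so that
// downstream stemmers leave it alone.
func applyOverride(overrides map[string]string, token string) (string, bool) {
	stem, ok := overrides[token]
	return stem, ok
}

func main() {
	rules := parseRules([]string{"ran => run", "mice => mouse"})
	stem, ok := applyOverride(rules, "ran")
	fmt.Println(stem, ok)
}
```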

func (StemmerOverrideTokenFilter) AsASCIIFoldingTokenFilter

func (sotf StemmerOverrideTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsBasicTokenFilter

func (sotf StemmerOverrideTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsCjkBigramTokenFilter

func (sotf StemmerOverrideTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsCommonGramTokenFilter

func (sotf StemmerOverrideTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsDictionaryDecompounderTokenFilter

func (sotf StemmerOverrideTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsEdgeNGramTokenFilter

func (sotf StemmerOverrideTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsEdgeNGramTokenFilterV2

func (sotf StemmerOverrideTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsElisionTokenFilter

func (sotf StemmerOverrideTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsKeepTokenFilter

func (sotf StemmerOverrideTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsKeywordMarkerTokenFilter

func (sotf StemmerOverrideTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsLengthTokenFilter

func (sotf StemmerOverrideTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsLimitTokenFilter

func (sotf StemmerOverrideTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsNGramTokenFilter

func (sotf StemmerOverrideTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsNGramTokenFilterV2

func (sotf StemmerOverrideTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsPatternCaptureTokenFilter

func (sotf StemmerOverrideTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsPatternReplaceTokenFilter

func (sotf StemmerOverrideTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsPhoneticTokenFilter

func (sotf StemmerOverrideTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsShingleTokenFilter

func (sotf StemmerOverrideTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsSnowballTokenFilter

func (sotf StemmerOverrideTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsStemmerOverrideTokenFilter

func (sotf StemmerOverrideTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsStemmerTokenFilter

func (sotf StemmerOverrideTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsStopwordsTokenFilter

func (sotf StemmerOverrideTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsSynonymTokenFilter

func (sotf StemmerOverrideTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsTokenFilter

func (sotf StemmerOverrideTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsTruncateTokenFilter

func (sotf StemmerOverrideTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsUniqueTokenFilter

func (sotf StemmerOverrideTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) AsWordDelimiterTokenFilter

func (sotf StemmerOverrideTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for StemmerOverrideTokenFilter.

func (StemmerOverrideTokenFilter) MarshalJSON

func (sotf StemmerOverrideTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for StemmerOverrideTokenFilter.

type StemmerTokenFilter

type StemmerTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Language - The language to use. Possible values include: 'StemmerTokenFilterLanguageArabic', 'StemmerTokenFilterLanguageArmenian', 'StemmerTokenFilterLanguageBasque', 'StemmerTokenFilterLanguageBrazilian', 'StemmerTokenFilterLanguageBulgarian', 'StemmerTokenFilterLanguageCatalan', 'StemmerTokenFilterLanguageCzech', 'StemmerTokenFilterLanguageDanish', 'StemmerTokenFilterLanguageDutch', 'StemmerTokenFilterLanguageDutchKp', 'StemmerTokenFilterLanguageEnglish', 'StemmerTokenFilterLanguageLightEnglish', 'StemmerTokenFilterLanguageMinimalEnglish', 'StemmerTokenFilterLanguagePossessiveEnglish', 'StemmerTokenFilterLanguagePorter2', 'StemmerTokenFilterLanguageLovins', 'StemmerTokenFilterLanguageFinnish', 'StemmerTokenFilterLanguageLightFinnish', 'StemmerTokenFilterLanguageFrench', 'StemmerTokenFilterLanguageLightFrench', 'StemmerTokenFilterLanguageMinimalFrench', 'StemmerTokenFilterLanguageGalician', 'StemmerTokenFilterLanguageMinimalGalician', 'StemmerTokenFilterLanguageGerman', 'StemmerTokenFilterLanguageGerman2', 'StemmerTokenFilterLanguageLightGerman', 'StemmerTokenFilterLanguageMinimalGerman', 'StemmerTokenFilterLanguageGreek', 'StemmerTokenFilterLanguageHindi', 'StemmerTokenFilterLanguageHungarian', 'StemmerTokenFilterLanguageLightHungarian', 'StemmerTokenFilterLanguageIndonesian', 'StemmerTokenFilterLanguageIrish', 'StemmerTokenFilterLanguageItalian', 'StemmerTokenFilterLanguageLightItalian', 'StemmerTokenFilterLanguageSorani', 'StemmerTokenFilterLanguageLatvian', 'StemmerTokenFilterLanguageNorwegian', 'StemmerTokenFilterLanguageLightNorwegian', 'StemmerTokenFilterLanguageMinimalNorwegian', 'StemmerTokenFilterLanguageLightNynorsk', 'StemmerTokenFilterLanguageMinimalNynorsk', 'StemmerTokenFilterLanguagePortuguese', 'StemmerTokenFilterLanguageLightPortuguese', 'StemmerTokenFilterLanguageMinimalPortuguese', 'StemmerTokenFilterLanguagePortugueseRslp', 'StemmerTokenFilterLanguageRomanian', 'StemmerTokenFilterLanguageRussian', 'StemmerTokenFilterLanguageLightRussian', 'StemmerTokenFilterLanguageSpanish', 'StemmerTokenFilterLanguageLightSpanish', 'StemmerTokenFilterLanguageSwedish', 'StemmerTokenFilterLanguageLightSwedish', 'StemmerTokenFilterLanguageTurkish'
	Language StemmerTokenFilterLanguage `json:"language,omitempty"`
}

StemmerTokenFilter is a language-specific stemming filter. This token filter is implemented using Apache Lucene.
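On the wire, this struct serializes to a filter definition in the index's tokenFilters collection. A minimal sketch of that JSON payload, built with plain maps rather than the SDK types (the @odata.type discriminator string is an assumption inferred from the OdataType constant names above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stemmerFilterJSON builds the wire-format map for a stemmer token filter,
// mirroring the json tags on StemmerTokenFilter. The @odata.type value is
// the assumed REST API discriminator for this filter type.
func stemmerFilterJSON(name, language string) map[string]string {
	return map[string]string{
		"@odata.type": "#Microsoft.Azure.Search.StemmerTokenFilter",
		"name":        name,
		"language":    language, // one of the StemmerTokenFilterLanguage values, e.g. "lightEnglish"
	}
}

func main() {
	b, _ := json.Marshal(stemmerFilterJSON("my_stemmer", "lightEnglish"))
	fmt.Println(string(b))
}
```

In the SDK itself, MarshalJSON on StemmerTokenFilter injects the discriminator automatically; the map above only illustrates the shape of the result.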

func (StemmerTokenFilter) AsASCIIFoldingTokenFilter

func (stf StemmerTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsBasicTokenFilter

func (stf StemmerTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsCjkBigramTokenFilter

func (stf StemmerTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsCommonGramTokenFilter

func (stf StemmerTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsDictionaryDecompounderTokenFilter

func (stf StemmerTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsEdgeNGramTokenFilter

func (stf StemmerTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsEdgeNGramTokenFilterV2

func (stf StemmerTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsElisionTokenFilter

func (stf StemmerTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsKeepTokenFilter

func (stf StemmerTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsKeywordMarkerTokenFilter

func (stf StemmerTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsLengthTokenFilter

func (stf StemmerTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsLimitTokenFilter

func (stf StemmerTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsNGramTokenFilter

func (stf StemmerTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsNGramTokenFilterV2

func (stf StemmerTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsPatternCaptureTokenFilter

func (stf StemmerTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsPatternReplaceTokenFilter

func (stf StemmerTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsPhoneticTokenFilter

func (stf StemmerTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsShingleTokenFilter

func (stf StemmerTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsSnowballTokenFilter

func (stf StemmerTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsStemmerOverrideTokenFilter

func (stf StemmerTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsStemmerTokenFilter

func (stf StemmerTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsStopwordsTokenFilter

func (stf StemmerTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsSynonymTokenFilter

func (stf StemmerTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsTokenFilter

func (stf StemmerTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsTruncateTokenFilter

func (stf StemmerTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsUniqueTokenFilter

func (stf StemmerTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) AsWordDelimiterTokenFilter

func (stf StemmerTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for StemmerTokenFilter.

func (StemmerTokenFilter) MarshalJSON

func (stf StemmerTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for StemmerTokenFilter.

type StemmerTokenFilterLanguage

type StemmerTokenFilterLanguage string

StemmerTokenFilterLanguage enumerates the values for stemmer token filter language.

const (
	// StemmerTokenFilterLanguageArabic ...
	StemmerTokenFilterLanguageArabic StemmerTokenFilterLanguage = "arabic"
	// StemmerTokenFilterLanguageArmenian ...
	StemmerTokenFilterLanguageArmenian StemmerTokenFilterLanguage = "armenian"
	// StemmerTokenFilterLanguageBasque ...
	StemmerTokenFilterLanguageBasque StemmerTokenFilterLanguage = "basque"
	// StemmerTokenFilterLanguageBrazilian ...
	StemmerTokenFilterLanguageBrazilian StemmerTokenFilterLanguage = "brazilian"
	// StemmerTokenFilterLanguageBulgarian ...
	StemmerTokenFilterLanguageBulgarian StemmerTokenFilterLanguage = "bulgarian"
	// StemmerTokenFilterLanguageCatalan ...
	StemmerTokenFilterLanguageCatalan StemmerTokenFilterLanguage = "catalan"
	// StemmerTokenFilterLanguageCzech ...
	StemmerTokenFilterLanguageCzech StemmerTokenFilterLanguage = "czech"
	// StemmerTokenFilterLanguageDanish ...
	StemmerTokenFilterLanguageDanish StemmerTokenFilterLanguage = "danish"
	// StemmerTokenFilterLanguageDutch ...
	StemmerTokenFilterLanguageDutch StemmerTokenFilterLanguage = "dutch"
	// StemmerTokenFilterLanguageDutchKp ...
	StemmerTokenFilterLanguageDutchKp StemmerTokenFilterLanguage = "dutchKp"
	// StemmerTokenFilterLanguageEnglish ...
	StemmerTokenFilterLanguageEnglish StemmerTokenFilterLanguage = "english"
	// StemmerTokenFilterLanguageFinnish ...
	StemmerTokenFilterLanguageFinnish StemmerTokenFilterLanguage = "finnish"
	// StemmerTokenFilterLanguageFrench ...
	StemmerTokenFilterLanguageFrench StemmerTokenFilterLanguage = "french"
	// StemmerTokenFilterLanguageGalician ...
	StemmerTokenFilterLanguageGalician StemmerTokenFilterLanguage = "galician"
	// StemmerTokenFilterLanguageGerman ...
	StemmerTokenFilterLanguageGerman StemmerTokenFilterLanguage = "german"
	// StemmerTokenFilterLanguageGerman2 ...
	StemmerTokenFilterLanguageGerman2 StemmerTokenFilterLanguage = "german2"
	// StemmerTokenFilterLanguageGreek ...
	StemmerTokenFilterLanguageGreek StemmerTokenFilterLanguage = "greek"
	// StemmerTokenFilterLanguageHindi ...
	StemmerTokenFilterLanguageHindi StemmerTokenFilterLanguage = "hindi"
	// StemmerTokenFilterLanguageHungarian ...
	StemmerTokenFilterLanguageHungarian StemmerTokenFilterLanguage = "hungarian"
	// StemmerTokenFilterLanguageIndonesian ...
	StemmerTokenFilterLanguageIndonesian StemmerTokenFilterLanguage = "indonesian"
	// StemmerTokenFilterLanguageIrish ...
	StemmerTokenFilterLanguageIrish StemmerTokenFilterLanguage = "irish"
	// StemmerTokenFilterLanguageItalian ...
	StemmerTokenFilterLanguageItalian StemmerTokenFilterLanguage = "italian"
	// StemmerTokenFilterLanguageLatvian ...
	StemmerTokenFilterLanguageLatvian StemmerTokenFilterLanguage = "latvian"
	// StemmerTokenFilterLanguageLightEnglish ...
	StemmerTokenFilterLanguageLightEnglish StemmerTokenFilterLanguage = "lightEnglish"
	// StemmerTokenFilterLanguageLightFinnish ...
	StemmerTokenFilterLanguageLightFinnish StemmerTokenFilterLanguage = "lightFinnish"
	// StemmerTokenFilterLanguageLightFrench ...
	StemmerTokenFilterLanguageLightFrench StemmerTokenFilterLanguage = "lightFrench"
	// StemmerTokenFilterLanguageLightGerman ...
	StemmerTokenFilterLanguageLightGerman StemmerTokenFilterLanguage = "lightGerman"
	// StemmerTokenFilterLanguageLightHungarian ...
	StemmerTokenFilterLanguageLightHungarian StemmerTokenFilterLanguage = "lightHungarian"
	// StemmerTokenFilterLanguageLightItalian ...
	StemmerTokenFilterLanguageLightItalian StemmerTokenFilterLanguage = "lightItalian"
	// StemmerTokenFilterLanguageLightNorwegian ...
	StemmerTokenFilterLanguageLightNorwegian StemmerTokenFilterLanguage = "lightNorwegian"
	// StemmerTokenFilterLanguageLightNynorsk ...
	StemmerTokenFilterLanguageLightNynorsk StemmerTokenFilterLanguage = "lightNynorsk"
	// StemmerTokenFilterLanguageLightPortuguese ...
	StemmerTokenFilterLanguageLightPortuguese StemmerTokenFilterLanguage = "lightPortuguese"
	// StemmerTokenFilterLanguageLightRussian ...
	StemmerTokenFilterLanguageLightRussian StemmerTokenFilterLanguage = "lightRussian"
	// StemmerTokenFilterLanguageLightSpanish ...
	StemmerTokenFilterLanguageLightSpanish StemmerTokenFilterLanguage = "lightSpanish"
	// StemmerTokenFilterLanguageLightSwedish ...
	StemmerTokenFilterLanguageLightSwedish StemmerTokenFilterLanguage = "lightSwedish"
	// StemmerTokenFilterLanguageLovins ...
	StemmerTokenFilterLanguageLovins StemmerTokenFilterLanguage = "lovins"
	// StemmerTokenFilterLanguageMinimalEnglish ...
	StemmerTokenFilterLanguageMinimalEnglish StemmerTokenFilterLanguage = "minimalEnglish"
	// StemmerTokenFilterLanguageMinimalFrench ...
	StemmerTokenFilterLanguageMinimalFrench StemmerTokenFilterLanguage = "minimalFrench"
	// StemmerTokenFilterLanguageMinimalGalician ...
	StemmerTokenFilterLanguageMinimalGalician StemmerTokenFilterLanguage = "minimalGalician"
	// StemmerTokenFilterLanguageMinimalGerman ...
	StemmerTokenFilterLanguageMinimalGerman StemmerTokenFilterLanguage = "minimalGerman"
	// StemmerTokenFilterLanguageMinimalNorwegian ...
	StemmerTokenFilterLanguageMinimalNorwegian StemmerTokenFilterLanguage = "minimalNorwegian"
	// StemmerTokenFilterLanguageMinimalNynorsk ...
	StemmerTokenFilterLanguageMinimalNynorsk StemmerTokenFilterLanguage = "minimalNynorsk"
	// StemmerTokenFilterLanguageMinimalPortuguese ...
	StemmerTokenFilterLanguageMinimalPortuguese StemmerTokenFilterLanguage = "minimalPortuguese"
	// StemmerTokenFilterLanguageNorwegian ...
	StemmerTokenFilterLanguageNorwegian StemmerTokenFilterLanguage = "norwegian"
	// StemmerTokenFilterLanguagePorter2 ...
	StemmerTokenFilterLanguagePorter2 StemmerTokenFilterLanguage = "porter2"
	// StemmerTokenFilterLanguagePortuguese ...
	StemmerTokenFilterLanguagePortuguese StemmerTokenFilterLanguage = "portuguese"
	// StemmerTokenFilterLanguagePortugueseRslp ...
	StemmerTokenFilterLanguagePortugueseRslp StemmerTokenFilterLanguage = "portugueseRslp"
	// StemmerTokenFilterLanguagePossessiveEnglish ...
	StemmerTokenFilterLanguagePossessiveEnglish StemmerTokenFilterLanguage = "possessiveEnglish"
	// StemmerTokenFilterLanguageRomanian ...
	StemmerTokenFilterLanguageRomanian StemmerTokenFilterLanguage = "romanian"
	// StemmerTokenFilterLanguageRussian ...
	StemmerTokenFilterLanguageRussian StemmerTokenFilterLanguage = "russian"
	// StemmerTokenFilterLanguageSorani ...
	StemmerTokenFilterLanguageSorani StemmerTokenFilterLanguage = "sorani"
	// StemmerTokenFilterLanguageSpanish ...
	StemmerTokenFilterLanguageSpanish StemmerTokenFilterLanguage = "spanish"
	// StemmerTokenFilterLanguageSwedish ...
	StemmerTokenFilterLanguageSwedish StemmerTokenFilterLanguage = "swedish"
	// StemmerTokenFilterLanguageTurkish ...
	StemmerTokenFilterLanguageTurkish StemmerTokenFilterLanguage = "turkish"
)

type StopAnalyzer

type StopAnalyzer struct {
	// Name - The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeAnalyzer', 'OdataTypeMicrosoftAzureSearchCustomAnalyzer', 'OdataTypeMicrosoftAzureSearchPatternAnalyzer', 'OdataTypeMicrosoftAzureSearchStandardAnalyzer', 'OdataTypeMicrosoftAzureSearchStopAnalyzer'
	OdataType OdataType `json:"@odata.type,omitempty"`
	// Stopwords - A list of stopwords.
	Stopwords *[]string `json:"stopwords,omitempty"`
}

StopAnalyzer divides text at non-letters and applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene.
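A sketch of the analyzer definition this struct produces, again using a plain map for illustration (the @odata.type string is an assumption inferred from the OdataType enum value 'OdataTypeMicrosoftAzureSearchStopAnalyzer'):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stopAnalyzerJSON sketches the wire format for a stop analyzer definition,
// following the struct tags on StopAnalyzer. An empty stopwords slice means
// the service falls back to its default stopword handling.
func stopAnalyzerJSON(name string, stopwords []string) map[string]interface{} {
	return map[string]interface{}{
		"@odata.type": "#Microsoft.Azure.Search.StopAnalyzer", // assumed discriminator
		"name":        name,
		"stopwords":   stopwords,
	}
}

func main() {
	b, _ := json.Marshal(stopAnalyzerJSON("my_stop", []string{"a", "an", "the"}))
	fmt.Println(string(b))
}
```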

func (StopAnalyzer) AsAnalyzer

func (sa StopAnalyzer) AsAnalyzer() (*Analyzer, bool)

AsAnalyzer is the BasicAnalyzer implementation for StopAnalyzer.

func (StopAnalyzer) AsBasicAnalyzer

func (sa StopAnalyzer) AsBasicAnalyzer() (BasicAnalyzer, bool)

AsBasicAnalyzer is the BasicAnalyzer implementation for StopAnalyzer.

func (StopAnalyzer) AsCustomAnalyzer

func (sa StopAnalyzer) AsCustomAnalyzer() (*CustomAnalyzer, bool)

AsCustomAnalyzer is the BasicAnalyzer implementation for StopAnalyzer.

func (StopAnalyzer) AsPatternAnalyzer

func (sa StopAnalyzer) AsPatternAnalyzer() (*PatternAnalyzer, bool)

AsPatternAnalyzer is the BasicAnalyzer implementation for StopAnalyzer.

func (StopAnalyzer) AsStandardAnalyzer

func (sa StopAnalyzer) AsStandardAnalyzer() (*StandardAnalyzer, bool)

AsStandardAnalyzer is the BasicAnalyzer implementation for StopAnalyzer.

func (StopAnalyzer) AsStopAnalyzer

func (sa StopAnalyzer) AsStopAnalyzer() (*StopAnalyzer, bool)

AsStopAnalyzer is the BasicAnalyzer implementation for StopAnalyzer.

func (StopAnalyzer) MarshalJSON

func (sa StopAnalyzer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for StopAnalyzer.

type StopwordsList

type StopwordsList string

StopwordsList enumerates the values for stopwords list.

const (
	// StopwordsListArabic ...
	StopwordsListArabic StopwordsList = "arabic"
	// StopwordsListArmenian ...
	StopwordsListArmenian StopwordsList = "armenian"
	// StopwordsListBasque ...
	StopwordsListBasque StopwordsList = "basque"
	// StopwordsListBrazilian ...
	StopwordsListBrazilian StopwordsList = "brazilian"
	// StopwordsListBulgarian ...
	StopwordsListBulgarian StopwordsList = "bulgarian"
	// StopwordsListCatalan ...
	StopwordsListCatalan StopwordsList = "catalan"
	// StopwordsListCzech ...
	StopwordsListCzech StopwordsList = "czech"
	// StopwordsListDanish ...
	StopwordsListDanish StopwordsList = "danish"
	// StopwordsListDutch ...
	StopwordsListDutch StopwordsList = "dutch"
	// StopwordsListEnglish ...
	StopwordsListEnglish StopwordsList = "english"
	// StopwordsListFinnish ...
	StopwordsListFinnish StopwordsList = "finnish"
	// StopwordsListFrench ...
	StopwordsListFrench StopwordsList = "french"
	// StopwordsListGalician ...
	StopwordsListGalician StopwordsList = "galician"
	// StopwordsListGerman ...
	StopwordsListGerman StopwordsList = "german"
	// StopwordsListGreek ...
	StopwordsListGreek StopwordsList = "greek"
	// StopwordsListHindi ...
	StopwordsListHindi StopwordsList = "hindi"
	// StopwordsListHungarian ...
	StopwordsListHungarian StopwordsList = "hungarian"
	// StopwordsListIndonesian ...
	StopwordsListIndonesian StopwordsList = "indonesian"
	// StopwordsListIrish ...
	StopwordsListIrish StopwordsList = "irish"
	// StopwordsListItalian ...
	StopwordsListItalian StopwordsList = "italian"
	// StopwordsListLatvian ...
	StopwordsListLatvian StopwordsList = "latvian"
	// StopwordsListNorwegian ...
	StopwordsListNorwegian StopwordsList = "norwegian"
	// StopwordsListPersian ...
	StopwordsListPersian StopwordsList = "persian"
	// StopwordsListPortuguese ...
	StopwordsListPortuguese StopwordsList = "portuguese"
	// StopwordsListRomanian ...
	StopwordsListRomanian StopwordsList = "romanian"
	// StopwordsListRussian ...
	StopwordsListRussian StopwordsList = "russian"
	// StopwordsListSorani ...
	StopwordsListSorani StopwordsList = "sorani"
	// StopwordsListSpanish ...
	StopwordsListSpanish StopwordsList = "spanish"
	// StopwordsListSwedish ...
	StopwordsListSwedish StopwordsList = "swedish"
	// StopwordsListThai ...
	StopwordsListThai StopwordsList = "thai"
	// StopwordsListTurkish ...
	StopwordsListTurkish StopwordsList = "turkish"
)

type StopwordsTokenFilter

type StopwordsTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Stopwords - The list of stopwords. This property and the StopwordsList property cannot both be set.
	Stopwords *[]string `json:"stopwords,omitempty"`
	// StopwordsList - A predefined list of stopwords to use. This property and the stopwords property cannot both be set. Default is English. Possible values include: 'StopwordsListArabic', 'StopwordsListArmenian', 'StopwordsListBasque', 'StopwordsListBrazilian', 'StopwordsListBulgarian', 'StopwordsListCatalan', 'StopwordsListCzech', 'StopwordsListDanish', 'StopwordsListDutch', 'StopwordsListEnglish', 'StopwordsListFinnish', 'StopwordsListFrench', 'StopwordsListGalician', 'StopwordsListGerman', 'StopwordsListGreek', 'StopwordsListHindi', 'StopwordsListHungarian', 'StopwordsListIndonesian', 'StopwordsListIrish', 'StopwordsListItalian', 'StopwordsListLatvian', 'StopwordsListNorwegian', 'StopwordsListPersian', 'StopwordsListPortuguese', 'StopwordsListRomanian', 'StopwordsListRussian', 'StopwordsListSorani', 'StopwordsListSpanish', 'StopwordsListSwedish', 'StopwordsListThai', 'StopwordsListTurkish'
	StopwordsList StopwordsList `json:"stopwordsList,omitempty"`
	// IgnoreCase - A value indicating whether to ignore case. If true, all words are converted to lower case first. Default is false.
	IgnoreCase *bool `json:"ignoreCase,omitempty"`
	// RemoveTrailingStopWords - A value indicating whether to ignore the last search term if it's a stop word. Default is true.
	RemoveTrailingStopWords *bool `json:"removeTrailing,omitempty"`
}

StopwordsTokenFilter removes stop words from a token stream. This token filter is implemented using Apache Lucene.
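The Stopwords and StopwordsList properties are mutually exclusive, which a builder can enforce before sending the definition. A hedged sketch (plain maps, with the @odata.type discriminator assumed from the OdataType constant names):

```go
package main

import (
	"errors"
	"fmt"
)

// stopwordsFilterJSON builds the wire-format map for a stopwords token
// filter. Per the documented constraint, a custom stopword list and a
// predefined stopwordsList cannot both be set, so that case is rejected.
func stopwordsFilterJSON(name string, stopwords []string, stopwordsList string) (map[string]interface{}, error) {
	if len(stopwords) > 0 && stopwordsList != "" {
		return nil, errors.New("stopwords and stopwordsList cannot both be set")
	}
	m := map[string]interface{}{
		"@odata.type": "#Microsoft.Azure.Search.StopwordsTokenFilter", // assumed discriminator
		"name":        name,
	}
	if len(stopwords) > 0 {
		m["stopwords"] = stopwords
	}
	if stopwordsList != "" {
		m["stopwordsList"] = stopwordsList // e.g. "english" (the documented default)
	}
	return m, nil
}

func main() {
	m, err := stopwordsFilterJSON("my_stop_filter", nil, "english")
	fmt.Println(m, err)
}
```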

func (StopwordsTokenFilter) AsASCIIFoldingTokenFilter

func (stf StopwordsTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsBasicTokenFilter

func (stf StopwordsTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsCjkBigramTokenFilter

func (stf StopwordsTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsCommonGramTokenFilter

func (stf StopwordsTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsDictionaryDecompounderTokenFilter

func (stf StopwordsTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsEdgeNGramTokenFilter

func (stf StopwordsTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsEdgeNGramTokenFilterV2

func (stf StopwordsTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsElisionTokenFilter

func (stf StopwordsTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsKeepTokenFilter

func (stf StopwordsTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsKeywordMarkerTokenFilter

func (stf StopwordsTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsLengthTokenFilter

func (stf StopwordsTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsLimitTokenFilter

func (stf StopwordsTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsNGramTokenFilter

func (stf StopwordsTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsNGramTokenFilterV2

func (stf StopwordsTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsPatternCaptureTokenFilter

func (stf StopwordsTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsPatternReplaceTokenFilter

func (stf StopwordsTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsPhoneticTokenFilter

func (stf StopwordsTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsShingleTokenFilter

func (stf StopwordsTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsSnowballTokenFilter

func (stf StopwordsTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsStemmerOverrideTokenFilter

func (stf StopwordsTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsStemmerTokenFilter

func (stf StopwordsTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsStopwordsTokenFilter

func (stf StopwordsTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsSynonymTokenFilter

func (stf StopwordsTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsTokenFilter

func (stf StopwordsTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsTruncateTokenFilter

func (stf StopwordsTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsUniqueTokenFilter

func (stf StopwordsTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) AsWordDelimiterTokenFilter

func (stf StopwordsTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for StopwordsTokenFilter.

func (StopwordsTokenFilter) MarshalJSON

func (stf StopwordsTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for StopwordsTokenFilter.

type SuggestParametersPayload

type SuggestParametersPayload struct {
	// Filter - The OData $filter expression to apply to the suggestions query.
	Filter *string `json:"filter,omitempty"`
	// Fuzzy - A value indicating whether to use fuzzy matching for the suggestion query. Default is false. When set to true, the query will find suggestions even if there's a substituted or missing character in the search text. While this provides a better experience in some scenarios, it comes at a performance cost, as fuzzy suggestion searches are slower and consume more resources.
	Fuzzy *bool `json:"fuzzy,omitempty"`
	// HighlightPostTag - A string tag that is appended to hit highlights. Must be set with HighlightPreTag. If omitted, hit highlighting of suggestions is disabled.
	HighlightPostTag *string `json:"highlightPostTag,omitempty"`
	// HighlightPreTag - A string tag that is prepended to hit highlights. Must be set with HighlightPostTag. If omitted, hit highlighting of suggestions is disabled.
	HighlightPreTag *string `json:"highlightPreTag,omitempty"`
	// MinimumCoverage - A number between 0 and 100 indicating the percentage of the index that must be covered by a suggestion query in order for the query to be reported as a success. This parameter can be useful for ensuring search availability even for services with only one replica. The default is 80.
	MinimumCoverage *float64 `json:"minimumCoverage,omitempty"`
	// OrderBy - The comma-separated list of OData $orderby expressions by which to sort the results. Each expression can be either a field name or a call to the geo.distance() function, optionally followed by asc (ascending, the default) or desc (descending). Ties are broken by document match scores; if no $orderby is specified, results are sorted descending by match score. There can be at most 32 $orderby clauses.
	OrderBy *string `json:"orderby,omitempty"`
	// SearchProperty - The search text on which to base suggestions.
	SearchProperty *string `json:"search,omitempty"`
	// SearchFields - The comma-separated list of field names to consider when querying for suggestions.
	SearchFields *string `json:"searchFields,omitempty"`
	// Select - The comma-separated list of fields to retrieve. If unspecified, all fields marked as retrievable in the schema are included.
	Select *string `json:"select,omitempty"`
	// SuggesterName - The name of the suggester as specified in the suggesters collection that's part of the index definition.
	SuggesterName *string `json:"suggesterName,omitempty"`
	// Top - The number of suggestions to retrieve. This must be a value between 1 and 100. The default is 5.
	Top *int32 `json:"top,omitempty"`
}

SuggestParametersPayload parameters for filtering, sorting, fuzzy matching, and other suggestions query behaviors.
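A sketch of assembling a minimal suggestions request body with the documented field names, clamping Top to its documented 1-100 range (plain maps for illustration; not the SDK's own request plumbing):

```go
package main

import "fmt"

// suggestPayload assembles a minimal suggestions request body. Only search
// and suggesterName are required here; top is clamped to the documented
// 1-100 range, falling back to the documented default of 5.
func suggestPayload(search, suggesterName string, top int) map[string]interface{} {
	if top < 1 {
		top = 5 // documented default
	}
	if top > 100 {
		top = 100
	}
	return map[string]interface{}{
		"search":        search,
		"suggesterName": suggesterName,
		"top":           top,
		"fuzzy":         true, // tolerate one substituted or missing character
	}
}

func main() {
	fmt.Println(suggestPayload("seatle", "sg", 250))
}
```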

type Suggester

type Suggester struct {
	// Name - The name of the suggester.
	Name *string `json:"name,omitempty"`
	// SearchMode - A value indicating the capabilities of the suggester.
	SearchMode *string `json:"searchMode,omitempty"`
	// SourceFields - The list of field names to which the suggester applies. Each field must be searchable.
	SourceFields *[]string `json:"sourceFields,omitempty"`
}

Suggester defines how the Suggest API should apply to a group of fields in the index.
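As the SuggesterSearchMode type below indicates, analyzingInfixMatching is the only enumerated search mode. A sketch of a suggester entry for an index definition (field names follow the struct tags above; the sourceFields values are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// suggesterJSON sketches a suggester definition for an index. Every field
// named in sourceFields must be marked searchable in the index schema.
func suggesterJSON(name string, sourceFields []string) map[string]interface{} {
	return map[string]interface{}{
		"name":         name,
		"searchMode":   "analyzingInfixMatching", // the only enumerated mode
		"sourceFields": sourceFields,
	}
}

func main() {
	b, _ := json.Marshal(suggesterJSON("sg", []string{"hotelName", "city"}))
	fmt.Println(string(b))
}
```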

type SuggesterSearchMode

type SuggesterSearchMode string

SuggesterSearchMode enumerates the values for suggester search mode.

const (
	// AnalyzingInfixMatching ...
	AnalyzingInfixMatching SuggesterSearchMode = "analyzingInfixMatching"
)

type SynonymTokenFilter

type SynonymTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Synonyms - A list of synonyms, specified in one of two formats: 1. incredible, unbelievable, fabulous => amazing - all terms on the left side of the => symbol will be replaced with all terms on its right side; 2. incredible, unbelievable, fabulous, amazing - a comma-separated list of equivalent words. Set the expand option to change how this list is interpreted.
	Synonyms *[]string `json:"synonyms,omitempty"`
	// IgnoreCase - A value indicating whether to case-fold input for matching. Default is false.
	IgnoreCase *bool `json:"ignoreCase,omitempty"`
	// Expand - A value indicating whether all words in the list of synonyms (if => notation is not used) will map to one another. If true, the list: incredible, unbelievable, fabulous, amazing is equivalent to: incredible, unbelievable, fabulous, amazing => incredible, unbelievable, fabulous, amazing. If false, the same list is equivalent to: incredible, unbelievable, fabulous, amazing => incredible. Default is true.
	Expand *bool `json:"expand,omitempty"`
}

SynonymTokenFilter matches single- or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene.
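As a sketch of the two synonym rule formats described above, the following self-contained program builds a filter definition with a local struct that mirrors SynonymTokenFilter's JSON shape and prints it. The struct and the @odata.type string are illustrative assumptions, not the SDK's own types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// synonymTokenFilter is a hypothetical local mirror of the SDK's
// SynonymTokenFilter JSON shape, used only for illustration.
type synonymTokenFilter struct {
	OdataType  string   `json:"@odata.type"`
	Name       string   `json:"name"`
	Synonyms   []string `json:"synonyms"`
	IgnoreCase bool     `json:"ignoreCase"`
	Expand     bool     `json:"expand"`
}

// exampleSynonymFilter returns a filter using both rule formats:
// an explicit mapping (=>) and an equivalence set.
func exampleSynonymFilter() synonymTokenFilter {
	return synonymTokenFilter{
		OdataType: "#Microsoft.Azure.Search.SynonymTokenFilter",
		Name:      "my-synonyms",
		Synonyms: []string{
			// Format 1: terms on the left are replaced by the terms on the right.
			"incredible, unbelievable, fabulous => amazing",
			// Format 2: with expand=true, every term maps to every other.
			"lift, elevator",
		},
		IgnoreCase: true,
		Expand:     true,
	}
}

func main() {
	b, _ := json.MarshalIndent(exampleSynonymFilter(), "", "  ")
	fmt.Println(string(b))
}
```

With Expand left true, the two-term rule "lift, elevator" behaves like "lift, elevator => lift, elevator"; setting Expand to false would map both terms only to "lift".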

func (SynonymTokenFilter) AsASCIIFoldingTokenFilter

func (stf SynonymTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsBasicTokenFilter

func (stf SynonymTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsCjkBigramTokenFilter

func (stf SynonymTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsCommonGramTokenFilter

func (stf SynonymTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsDictionaryDecompounderTokenFilter

func (stf SynonymTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsEdgeNGramTokenFilter

func (stf SynonymTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsEdgeNGramTokenFilterV2

func (stf SynonymTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsElisionTokenFilter

func (stf SynonymTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsKeepTokenFilter

func (stf SynonymTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsKeywordMarkerTokenFilter

func (stf SynonymTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsLengthTokenFilter

func (stf SynonymTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsLimitTokenFilter

func (stf SynonymTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsNGramTokenFilter

func (stf SynonymTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsNGramTokenFilterV2

func (stf SynonymTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsPatternCaptureTokenFilter

func (stf SynonymTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsPatternReplaceTokenFilter

func (stf SynonymTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsPhoneticTokenFilter

func (stf SynonymTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsShingleTokenFilter

func (stf SynonymTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsSnowballTokenFilter

func (stf SynonymTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsStemmerOverrideTokenFilter

func (stf SynonymTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsStemmerTokenFilter

func (stf SynonymTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsStopwordsTokenFilter

func (stf SynonymTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsSynonymTokenFilter

func (stf SynonymTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsTokenFilter

func (stf SynonymTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsTruncateTokenFilter

func (stf SynonymTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsUniqueTokenFilter

func (stf SynonymTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) AsWordDelimiterTokenFilter

func (stf SynonymTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for SynonymTokenFilter.

func (SynonymTokenFilter) MarshalJSON

func (stf SynonymTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for SynonymTokenFilter.

type TagScoringFunction

type TagScoringFunction struct {
	// FieldName - The name of the field used as input to the scoring function.
	FieldName *string `json:"fieldName,omitempty"`
	// Boost - A multiplier for the raw score. Must be a positive number not equal to 1.0.
	Boost *float64 `json:"boost,omitempty"`
	// Interpolation - A value indicating how boosting will be interpolated across document scores; defaults to "Linear". Possible values include: 'Linear', 'Constant', 'Quadratic', 'Logarithmic'
	Interpolation ScoringFunctionInterpolation `json:"interpolation,omitempty"`
	// Type - Possible values include: 'TypeScoringFunction', 'TypeDistance', 'TypeFreshness', 'TypeMagnitude', 'TypeTag'
	Type Type `json:"type,omitempty"`
	// Parameters - Parameter values for the tag scoring function.
	Parameters *TagScoringParameters `json:"tag,omitempty"`
}

TagScoringFunction defines a function that boosts scores of documents with string values matching a given list of tags.
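To make the field constraints concrete, here is a minimal sketch that assembles a tag scoring function as JSON, using a hypothetical local struct rather than the SDK type; the field names follow the JSON tags shown above, and the "linear" interpolation string is an assumption based on the enum names:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// tagScoringFunction is a hypothetical local mirror of the SDK's
// TagScoringFunction JSON shape, for illustration only.
type tagScoringFunction struct {
	Type          string  `json:"type"`
	FieldName     string  `json:"fieldName"`
	Boost         float64 `json:"boost"`
	Interpolation string  `json:"interpolation"`
	Tag           struct {
		TagsParameter string `json:"tagsParameter"`
	} `json:"tag"`
}

func exampleTagFunction() tagScoringFunction {
	f := tagScoringFunction{
		Type:          "tag",    // matches the TypeTag enum value
		FieldName:     "genres", // hypothetical index field holding tags
		Boost:         2.0,      // must be positive and not equal to 1.0
		Interpolation: "linear",
	}
	// Queries would pass the tag list via this parameter name.
	f.Tag.TagsParameter = "mytags"
	return f
}

func main() {
	b, _ := json.Marshal(exampleTagFunction())
	fmt.Println(string(b))
}
```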

func (TagScoringFunction) AsBasicScoringFunction

func (tsf TagScoringFunction) AsBasicScoringFunction() (BasicScoringFunction, bool)

AsBasicScoringFunction is the BasicScoringFunction implementation for TagScoringFunction.

func (TagScoringFunction) AsDistanceScoringFunction

func (tsf TagScoringFunction) AsDistanceScoringFunction() (*DistanceScoringFunction, bool)

AsDistanceScoringFunction is the BasicScoringFunction implementation for TagScoringFunction.

func (TagScoringFunction) AsFreshnessScoringFunction

func (tsf TagScoringFunction) AsFreshnessScoringFunction() (*FreshnessScoringFunction, bool)

AsFreshnessScoringFunction is the BasicScoringFunction implementation for TagScoringFunction.

func (TagScoringFunction) AsMagnitudeScoringFunction

func (tsf TagScoringFunction) AsMagnitudeScoringFunction() (*MagnitudeScoringFunction, bool)

AsMagnitudeScoringFunction is the BasicScoringFunction implementation for TagScoringFunction.

func (TagScoringFunction) AsScoringFunction

func (tsf TagScoringFunction) AsScoringFunction() (*ScoringFunction, bool)

AsScoringFunction is the BasicScoringFunction implementation for TagScoringFunction.

func (TagScoringFunction) AsTagScoringFunction

func (tsf TagScoringFunction) AsTagScoringFunction() (*TagScoringFunction, bool)

AsTagScoringFunction is the BasicScoringFunction implementation for TagScoringFunction.

func (TagScoringFunction) MarshalJSON

func (tsf TagScoringFunction) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for TagScoringFunction.

type TagScoringParameters

type TagScoringParameters struct {
	// TagsParameter - The name of the parameter passed in search queries to specify the list of tags to compare against the target field.
	TagsParameter *string `json:"tagsParameter,omitempty"`
}

TagScoringParameters provides parameter values to a tag scoring function.

type TextWeights

type TextWeights struct {
	// Weights - The dictionary of per-field weights to boost document scoring. The keys are field names and the values are the weights for each field.
	Weights *map[string]*float64 `json:"weights,omitempty"`
}

TextWeights defines weights on index fields for which matches should boost scoring in search queries.
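The Weights field is an unusual pointer-to-map of pointer values, so constructing it takes a little ceremony. A minimal sketch, using a local mirror of the TextWeights shape (not the SDK type itself):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// textWeights mirrors the SDK's TextWeights shape, including its
// pointer-to-map field, for illustration only.
type textWeights struct {
	Weights *map[string]*float64 `json:"weights,omitempty"`
}

// f64 returns a pointer to v, needed for the map's pointer values.
func f64(v float64) *float64 { return &v }

// exampleWeights boosts matches in "title" twice as much as in "body"
// (both field names are hypothetical).
func exampleWeights() textWeights {
	w := map[string]*float64{
		"title": f64(2.0),
		"body":  f64(1.0),
	}
	return textWeights{Weights: &w}
}

func main() {
	b, _ := json.Marshal(exampleWeights())
	fmt.Println(string(b))
}
```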

type TokenCharacterKind

type TokenCharacterKind string

TokenCharacterKind enumerates the values for token character kind.

const (
	// Digit ...
	Digit TokenCharacterKind = "digit"
	// Letter ...
	Letter TokenCharacterKind = "letter"
	// Punctuation ...
	Punctuation TokenCharacterKind = "punctuation"
	// Symbol ...
	Symbol TokenCharacterKind = "symbol"
	// Whitespace ...
	Whitespace TokenCharacterKind = "whitespace"
)

type TokenFilter

type TokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
}

TokenFilter is the abstract base class for token filters.

func (TokenFilter) AsASCIIFoldingTokenFilter

func (tf TokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsBasicTokenFilter

func (tf TokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsCjkBigramTokenFilter

func (tf TokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsCommonGramTokenFilter

func (tf TokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsDictionaryDecompounderTokenFilter

func (tf TokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsEdgeNGramTokenFilter

func (tf TokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsEdgeNGramTokenFilterV2

func (tf TokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsElisionTokenFilter

func (tf TokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsKeepTokenFilter

func (tf TokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsKeywordMarkerTokenFilter

func (tf TokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsLengthTokenFilter

func (tf TokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsLimitTokenFilter

func (tf TokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsNGramTokenFilter

func (tf TokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsNGramTokenFilterV2

func (tf TokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsPatternCaptureTokenFilter

func (tf TokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsPatternReplaceTokenFilter

func (tf TokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsPhoneticTokenFilter

func (tf TokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsShingleTokenFilter

func (tf TokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsSnowballTokenFilter

func (tf TokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsStemmerOverrideTokenFilter

func (tf TokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsStemmerTokenFilter

func (tf TokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsStopwordsTokenFilter

func (tf TokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsSynonymTokenFilter

func (tf TokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsTokenFilter

func (tf TokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsTruncateTokenFilter

func (tf TokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsUniqueTokenFilter

func (tf TokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) AsWordDelimiterTokenFilter

func (tf TokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for TokenFilter.

func (TokenFilter) MarshalJSON

func (tf TokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for TokenFilter.

type TokenFilterName

type TokenFilterName struct {
	Name *string `json:"name,omitempty"`
}

TokenFilterName defines the names of all token filters supported by Azure Search.

type TokenInfo

type TokenInfo struct {
	// Token - The token returned by the analyzer.
	Token *string `json:"token,omitempty"`
	// StartOffset - The index of the first character of the token in the input text.
	StartOffset *int32 `json:"startOffset,omitempty"`
	// EndOffset - The index of the last character of the token in the input text.
	EndOffset *int32 `json:"endOffset,omitempty"`
	// Position - The position of the token in the input text relative to other tokens. The first token in the input text has position 0, the next has position 1, and so on. Depending on the analyzer used, some tokens might have the same position, for example if they are synonyms of each other.
	Position *int32 `json:"position,omitempty"`
}

TokenInfo contains information about a token returned by an analyzer.
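To illustrate the offset and position semantics, here is a toy whitespace analyzer that emits TokenInfo-like records; it is not the Azure Search Analyze API, just a sketch of how StartOffset (first character index), EndOffset (last character index) and Position (0-based token position) relate to the input text:

```go
package main

import "fmt"

// tokenInfo mirrors the fields of the SDK's TokenInfo, for illustration.
type tokenInfo struct {
	Token       string
	StartOffset int32 // index of the token's first character
	EndOffset   int32 // index of the token's last character
	Position    int32 // 0-based position relative to other tokens
}

// analyzeWhitespace splits on single spaces and records offsets and
// positions the way the TokenInfo fields describe them.
func analyzeWhitespace(text string) []tokenInfo {
	var out []tokenInfo
	pos := int32(0)
	i := 0
	for i < len(text) {
		if text[i] == ' ' {
			i++
			continue
		}
		start := i
		for i < len(text) && text[i] != ' ' {
			i++
		}
		out = append(out, tokenInfo{
			Token:       text[start:i],
			StartOffset: int32(start),
			EndOffset:   int32(i - 1),
			Position:    pos,
		})
		pos++
	}
	return out
}

func main() {
	for _, t := range analyzeWhitespace("hello azure search") {
		fmt.Printf("%q start=%d end=%d pos=%d\n", t.Token, t.StartOffset, t.EndOffset, t.Position)
	}
}
```

A real analyzer with a synonym filter could emit several tokens sharing the same Position, which this toy splitter never does.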

type Tokenizer

type Tokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
}

Tokenizer is the abstract base class for tokenizers.

func (Tokenizer) AsBasicTokenizer

func (t Tokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsClassicTokenizer

func (t Tokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsEdgeNGramTokenizer

func (t Tokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsKeywordTokenizer

func (t Tokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsKeywordTokenizerV2

func (t Tokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsMicrosoftLanguageStemmingTokenizer

func (t Tokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsMicrosoftLanguageTokenizer

func (t Tokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsNGramTokenizer

func (t Tokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsPathHierarchyTokenizer

func (t Tokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsPathHierarchyTokenizerV2

func (t Tokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsPatternTokenizer

func (t Tokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsStandardTokenizer

func (t Tokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsStandardTokenizerV2

func (t Tokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsTokenizer

func (t Tokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) AsUaxURLEmailTokenizer

func (t Tokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for Tokenizer.

func (Tokenizer) MarshalJSON

func (t Tokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for Tokenizer.

type TokenizerName

type TokenizerName struct {
	Name *string `json:"name,omitempty"`
}

TokenizerName defines the names of all tokenizers supported by Azure Search.

type TruncateTokenFilter

type TruncateTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// Length - The length at which terms will be truncated. Default and maximum is 300.
	Length *int32 `json:"length,omitempty"`
}

TruncateTokenFilter truncates the terms to a specific length. This token filter is implemented using Apache Lucene.
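The filter's behavior is easy to sketch locally. This hypothetical helper (not the Lucene implementation) cuts each term to the given length, which the real filter defaults and caps at 300; for simplicity it truncates by byte rather than by rune:

```go
package main

import "fmt"

// truncateTokens sketches the TruncateTokenFilter behavior: terms longer
// than length are cut to length. Byte-based truncation is a simplifying
// assumption; multi-byte runes would need rune-aware slicing.
func truncateTokens(tokens []string, length int) []string {
	out := make([]string, len(tokens))
	for i, t := range tokens {
		if len(t) > length {
			t = t[:length]
		}
		out[i] = t
	}
	return out
}

func main() {
	fmt.Println(truncateTokens([]string{"internationalization", "go"}, 5))
	// prints: [inter go]
}
```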

func (TruncateTokenFilter) AsASCIIFoldingTokenFilter

func (ttf TruncateTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsBasicTokenFilter

func (ttf TruncateTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsCjkBigramTokenFilter

func (ttf TruncateTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsCommonGramTokenFilter

func (ttf TruncateTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsDictionaryDecompounderTokenFilter

func (ttf TruncateTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsEdgeNGramTokenFilter

func (ttf TruncateTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsEdgeNGramTokenFilterV2

func (ttf TruncateTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsElisionTokenFilter

func (ttf TruncateTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsKeepTokenFilter

func (ttf TruncateTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsKeywordMarkerTokenFilter

func (ttf TruncateTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsLengthTokenFilter

func (ttf TruncateTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsLimitTokenFilter

func (ttf TruncateTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsNGramTokenFilter

func (ttf TruncateTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsNGramTokenFilterV2

func (ttf TruncateTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsPatternCaptureTokenFilter

func (ttf TruncateTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsPatternReplaceTokenFilter

func (ttf TruncateTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsPhoneticTokenFilter

func (ttf TruncateTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsShingleTokenFilter

func (ttf TruncateTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsSnowballTokenFilter

func (ttf TruncateTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsStemmerOverrideTokenFilter

func (ttf TruncateTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsStemmerTokenFilter

func (ttf TruncateTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsStopwordsTokenFilter

func (ttf TruncateTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsSynonymTokenFilter

func (ttf TruncateTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsTokenFilter

func (ttf TruncateTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsTruncateTokenFilter

func (ttf TruncateTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsUniqueTokenFilter

func (ttf TruncateTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) AsWordDelimiterTokenFilter

func (ttf TruncateTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for TruncateTokenFilter.

func (TruncateTokenFilter) MarshalJSON

func (ttf TruncateTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for TruncateTokenFilter.

type Type

type Type string

Type enumerates the values for type.

const (
	// TypeDistance ...
	TypeDistance Type = "distance"
	// TypeFreshness ...
	TypeFreshness Type = "freshness"
	// TypeMagnitude ...
	TypeMagnitude Type = "magnitude"
	// TypeScoringFunction ...
	TypeScoringFunction Type = "ScoringFunction"
	// TypeTag ...
	TypeTag Type = "tag"
)
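The Type values above act as plain string discriminators for scoring functions. A minimal sketch of switching on them (the `Type` declaration here is a local copy for illustration; the real one lives in this package, and the `describe` helper is hypothetical):

```go
package main

import "fmt"

// Type is a local copy of the search.Type string enum, for illustration only.
type Type string

const (
	TypeDistance  Type = "distance"
	TypeFreshness Type = "freshness"
	TypeMagnitude Type = "magnitude"
	TypeTag       Type = "tag"
)

// describe returns a short label for a scoring function type.
func describe(t Type) string {
	switch t {
	case TypeDistance:
		return "boost by geographic proximity"
	case TypeFreshness:
		return "boost by recency of a date field"
	case TypeMagnitude:
		return "boost by the magnitude of a numeric field"
	case TypeTag:
		return "boost by overlap with a tag list"
	default:
		return "unknown scoring function type"
	}
}

func main() {
	fmt.Println(describe(TypeFreshness))
}
```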

type UaxURLEmailTokenizer

type UaxURLEmailTokenizer struct {
	// Name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenizer', 'OdataTypeMicrosoftAzureSearchClassicTokenizer', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizer', 'OdataTypeMicrosoftAzureSearchKeywordTokenizerV2', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageTokenizer', 'OdataTypeMicrosoftAzureSearchMicrosoftLanguageStemmingTokenizer', 'OdataTypeMicrosoftAzureSearchNGramTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizer', 'OdataTypeMicrosoftAzureSearchPathHierarchyTokenizerV2', 'OdataTypeMicrosoftAzureSearchPatternTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizer', 'OdataTypeMicrosoftAzureSearchStandardTokenizerV2', 'OdataTypeMicrosoftAzureSearchUaxURLEmailTokenizer'
	OdataType OdataTypeBasicTokenizer `json:"@odata.type,omitempty"`
	// MaxTokenLength - The maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
	MaxTokenLength *int32 `json:"maxTokenLength,omitempty"`
}

UaxURLEmailTokenizer tokenizes URLs and emails as one token. This tokenizer is implemented using Apache Lucene.
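The struct above marshals to an index-definition JSON fragment via its `@odata.type` and field tags. A self-contained sketch of the wire format, using a local stand-in struct so it runs without the SDK (the `@odata.type` discriminator string is an assumption inferred from the enum names, not confirmed by this page):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// uaxTokenizer is a local stand-in for search.UaxURLEmailTokenizer; the JSON
// tags copy the documented struct.
type uaxTokenizer struct {
	Name           *string `json:"name,omitempty"`
	OdataType      string  `json:"@odata.type,omitempty"`
	MaxTokenLength *int32  `json:"maxTokenLength,omitempty"`
}

// marshalUax builds a tokenizer definition and returns its JSON wire form.
func marshalUax(name string, maxLen int32) (string, error) {
	t := uaxTokenizer{
		Name:           &name,
		OdataType:      "#Microsoft.Azure.Search.UaxUrlEmailTokenizer", // assumed discriminator value
		MaxTokenLength: &maxLen,
	}
	b, err := json.Marshal(t)
	return string(b), err
}

func main() {
	s, err := marshalUax("my_url_tokenizer", 300) // 300 is the documented maximum
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```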

func (UaxURLEmailTokenizer) AsBasicTokenizer

func (uuet UaxURLEmailTokenizer) AsBasicTokenizer() (BasicTokenizer, bool)

AsBasicTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsClassicTokenizer

func (uuet UaxURLEmailTokenizer) AsClassicTokenizer() (*ClassicTokenizer, bool)

AsClassicTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsEdgeNGramTokenizer

func (uuet UaxURLEmailTokenizer) AsEdgeNGramTokenizer() (*EdgeNGramTokenizer, bool)

AsEdgeNGramTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsKeywordTokenizer

func (uuet UaxURLEmailTokenizer) AsKeywordTokenizer() (*KeywordTokenizer, bool)

AsKeywordTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsKeywordTokenizerV2

func (uuet UaxURLEmailTokenizer) AsKeywordTokenizerV2() (*KeywordTokenizerV2, bool)

AsKeywordTokenizerV2 is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsMicrosoftLanguageStemmingTokenizer

func (uuet UaxURLEmailTokenizer) AsMicrosoftLanguageStemmingTokenizer() (*MicrosoftLanguageStemmingTokenizer, bool)

AsMicrosoftLanguageStemmingTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsMicrosoftLanguageTokenizer

func (uuet UaxURLEmailTokenizer) AsMicrosoftLanguageTokenizer() (*MicrosoftLanguageTokenizer, bool)

AsMicrosoftLanguageTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsNGramTokenizer

func (uuet UaxURLEmailTokenizer) AsNGramTokenizer() (*NGramTokenizer, bool)

AsNGramTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsPathHierarchyTokenizer

func (uuet UaxURLEmailTokenizer) AsPathHierarchyTokenizer() (*PathHierarchyTokenizer, bool)

AsPathHierarchyTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsPathHierarchyTokenizerV2

func (uuet UaxURLEmailTokenizer) AsPathHierarchyTokenizerV2() (*PathHierarchyTokenizerV2, bool)

AsPathHierarchyTokenizerV2 is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsPatternTokenizer

func (uuet UaxURLEmailTokenizer) AsPatternTokenizer() (*PatternTokenizer, bool)

AsPatternTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsStandardTokenizer

func (uuet UaxURLEmailTokenizer) AsStandardTokenizer() (*StandardTokenizer, bool)

AsStandardTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsStandardTokenizerV2

func (uuet UaxURLEmailTokenizer) AsStandardTokenizerV2() (*StandardTokenizerV2, bool)

AsStandardTokenizerV2 is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsTokenizer

func (uuet UaxURLEmailTokenizer) AsTokenizer() (*Tokenizer, bool)

AsTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) AsUaxURLEmailTokenizer

func (uuet UaxURLEmailTokenizer) AsUaxURLEmailTokenizer() (*UaxURLEmailTokenizer, bool)

AsUaxURLEmailTokenizer is the BasicTokenizer implementation for UaxURLEmailTokenizer.

func (UaxURLEmailTokenizer) MarshalJSON

func (uuet UaxURLEmailTokenizer) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for UaxURLEmailTokenizer.

type UniqueTokenFilter

type UniqueTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// OnlyOnSamePosition - A value indicating whether to remove duplicates only at the same position. Default is false.
	OnlyOnSamePosition *bool `json:"onlyOnSamePosition,omitempty"`
}

UniqueTokenFilter filters out tokens with the same text as the previous token. This token filter is implemented using Apache Lucene.
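Conceptually, with OnlyOnSamePosition left at its default, the filter drops a token whose text matches the immediately preceding token. A sketch of that behavior (an illustration only, not the SDK or Lucene implementation):

```go
package main

import "fmt"

// dedupeAdjacent drops any token identical to the token immediately before it,
// mirroring what UniqueTokenFilter does to a token stream.
func dedupeAdjacent(tokens []string) []string {
	var out []string
	for i, t := range tokens {
		if i > 0 && t == tokens[i-1] {
			continue // duplicate of the previous token: skip it
		}
		out = append(out, t)
	}
	return out
}

func main() {
	fmt.Println(dedupeAdjacent([]string{"azure", "azure", "search", "search", "azure"}))
}
```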

func (UniqueTokenFilter) AsASCIIFoldingTokenFilter

func (utf UniqueTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsBasicTokenFilter

func (utf UniqueTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsCjkBigramTokenFilter

func (utf UniqueTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsCommonGramTokenFilter

func (utf UniqueTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsDictionaryDecompounderTokenFilter

func (utf UniqueTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsEdgeNGramTokenFilter

func (utf UniqueTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsEdgeNGramTokenFilterV2

func (utf UniqueTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsElisionTokenFilter

func (utf UniqueTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsKeepTokenFilter

func (utf UniqueTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsKeywordMarkerTokenFilter

func (utf UniqueTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsLengthTokenFilter

func (utf UniqueTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsLimitTokenFilter

func (utf UniqueTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsNGramTokenFilter

func (utf UniqueTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsNGramTokenFilterV2

func (utf UniqueTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsPatternCaptureTokenFilter

func (utf UniqueTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsPatternReplaceTokenFilter

func (utf UniqueTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsPhoneticTokenFilter

func (utf UniqueTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsShingleTokenFilter

func (utf UniqueTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsSnowballTokenFilter

func (utf UniqueTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsStemmerOverrideTokenFilter

func (utf UniqueTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsStemmerTokenFilter

func (utf UniqueTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsStopwordsTokenFilter

func (utf UniqueTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsSynonymTokenFilter

func (utf UniqueTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsTokenFilter

func (utf UniqueTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsTruncateTokenFilter

func (utf UniqueTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsUniqueTokenFilter

func (utf UniqueTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) AsWordDelimiterTokenFilter

func (utf UniqueTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for UniqueTokenFilter.

func (UniqueTokenFilter) MarshalJSON

func (utf UniqueTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for UniqueTokenFilter.
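The As* methods above implement a tagged-union pattern: each concrete filter can be asked "are you an X?" and answers with (*X, true) or (nil, false), so callers can recover the concrete type from a BasicTokenFilter without a type switch. A minimal model of the pattern with two hypothetical local types (not the SDK's real ones):

```go
package main

import "fmt"

// basicTokenFilter models the BasicTokenFilter interface's As* conversions.
type basicTokenFilter interface {
	AsUniqueFilter() (*uniqueFilter, bool)
	AsTruncateFilter() (*truncateFilter, bool)
}

type uniqueFilter struct{ Name string }

// uniqueFilter answers true only for its own conversion.
func (u uniqueFilter) AsUniqueFilter() (*uniqueFilter, bool)     { return &u, true }
func (u uniqueFilter) AsTruncateFilter() (*truncateFilter, bool) { return nil, false }

type truncateFilter struct{ Name string }

func (t truncateFilter) AsUniqueFilter() (*uniqueFilter, bool)     { return nil, false }
func (t truncateFilter) AsTruncateFilter() (*truncateFilter, bool) { return &t, true }

func main() {
	var f basicTokenFilter = uniqueFilter{Name: "dedupe"}
	if u, ok := f.AsUniqueFilter(); ok {
		fmt.Println("unique filter:", u.Name)
	}
	if _, ok := f.AsTruncateFilter(); !ok {
		fmt.Println("not a truncate filter")
	}
}
```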

type WordDelimiterTokenFilter

type WordDelimiterTokenFilter struct {
	// Name - The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
	Name *string `json:"name,omitempty"`
	// OdataType - Possible values include: 'OdataTypeTokenFilter', 'OdataTypeMicrosoftAzureSearchASCIIFoldingTokenFilter', 'OdataTypeMicrosoftAzureSearchCjkBigramTokenFilter', 'OdataTypeMicrosoftAzureSearchCommonGramTokenFilter', 'OdataTypeMicrosoftAzureSearchDictionaryDecompounderTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchEdgeNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchElisionTokenFilter', 'OdataTypeMicrosoftAzureSearchKeepTokenFilter', 'OdataTypeMicrosoftAzureSearchKeywordMarkerTokenFilter', 'OdataTypeMicrosoftAzureSearchLengthTokenFilter', 'OdataTypeMicrosoftAzureSearchLimitTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilter', 'OdataTypeMicrosoftAzureSearchNGramTokenFilterV2', 'OdataTypeMicrosoftAzureSearchPatternCaptureTokenFilter', 'OdataTypeMicrosoftAzureSearchPatternReplaceTokenFilter', 'OdataTypeMicrosoftAzureSearchPhoneticTokenFilter', 'OdataTypeMicrosoftAzureSearchShingleTokenFilter', 'OdataTypeMicrosoftAzureSearchSnowballTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerTokenFilter', 'OdataTypeMicrosoftAzureSearchStemmerOverrideTokenFilter', 'OdataTypeMicrosoftAzureSearchStopwordsTokenFilter', 'OdataTypeMicrosoftAzureSearchSynonymTokenFilter', 'OdataTypeMicrosoftAzureSearchTruncateTokenFilter', 'OdataTypeMicrosoftAzureSearchUniqueTokenFilter', 'OdataTypeMicrosoftAzureSearchWordDelimiterTokenFilter'
	OdataType OdataTypeBasicTokenFilter `json:"@odata.type,omitempty"`
	// GenerateWordParts - A value indicating whether to generate part words. If set, causes parts of words to be generated; for example "AzureSearch" becomes "Azure" "Search". Default is true.
	GenerateWordParts *bool `json:"generateWordParts,omitempty"`
	// GenerateNumberParts - A value indicating whether to generate number subwords. Default is true.
	GenerateNumberParts *bool `json:"generateNumberParts,omitempty"`
	// CatenateWords - A value indicating whether maximum runs of word parts will be catenated. For example, if this is set to true, "Azure-Search" becomes "AzureSearch". Default is false.
	CatenateWords *bool `json:"catenateWords,omitempty"`
	// CatenateNumbers - A value indicating whether maximum runs of number parts will be catenated. For example, if this is set to true, "1-2" becomes "12". Default is false.
	CatenateNumbers *bool `json:"catenateNumbers,omitempty"`
	// CatenateAll - A value indicating whether all subword parts will be catenated. For example, if this is set to true, "Azure-Search-1" becomes "AzureSearch1". Default is false.
	CatenateAll *bool `json:"catenateAll,omitempty"`
	// SplitOnCaseChange - A value indicating whether to split words on caseChange. For example, if this is set to true, "AzureSearch" becomes "Azure" "Search". Default is true.
	SplitOnCaseChange *bool `json:"splitOnCaseChange,omitempty"`
	// PreserveOriginal - A value indicating whether original words will be preserved and added to the subword list. Default is false.
	PreserveOriginal *bool `json:"preserveOriginal,omitempty"`
	// SplitOnNumerics - A value indicating whether to split on numbers. For example, if this is set to true, "Azure1Search" becomes "Azure" "1" "Search". Default is true.
	SplitOnNumerics *bool `json:"splitOnNumerics,omitempty"`
	// StemEnglishPossessive - A value indicating whether to remove trailing "'s" for each subword. Default is true.
	StemEnglishPossessive *bool `json:"stemEnglishPossessive,omitempty"`
	// ProtectedWords - A list of tokens to protect from being delimited.
	ProtectedWords *[]string `json:"protectedWords,omitempty"`
}

WordDelimiterTokenFilter splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene.
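One of the behaviors documented above, SplitOnCaseChange, turns "AzureSearch" into "Azure" and "Search". A sketch of just that rule (an illustration of the concept, not the Lucene implementation, which combines all the flags above):

```go
package main

import (
	"fmt"
	"unicode"
)

// splitOnCaseChange splits a token at each lower-to-upper case transition,
// mirroring the SplitOnCaseChange option: "AzureSearch" -> ["Azure","Search"].
func splitOnCaseChange(s string) []string {
	var parts []string
	runes := []rune(s)
	start := 0
	for i := 1; i < len(runes); i++ {
		if unicode.IsUpper(runes[i]) && unicode.IsLower(runes[i-1]) {
			parts = append(parts, string(runes[start:i]))
			start = i
		}
	}
	parts = append(parts, string(runes[start:]))
	return parts
}

func main() {
	fmt.Println(splitOnCaseChange("AzureSearch"))
}
```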

func (WordDelimiterTokenFilter) AsASCIIFoldingTokenFilter

func (wdtf WordDelimiterTokenFilter) AsASCIIFoldingTokenFilter() (*ASCIIFoldingTokenFilter, bool)

AsASCIIFoldingTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsBasicTokenFilter

func (wdtf WordDelimiterTokenFilter) AsBasicTokenFilter() (BasicTokenFilter, bool)

AsBasicTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsCjkBigramTokenFilter

func (wdtf WordDelimiterTokenFilter) AsCjkBigramTokenFilter() (*CjkBigramTokenFilter, bool)

AsCjkBigramTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsCommonGramTokenFilter

func (wdtf WordDelimiterTokenFilter) AsCommonGramTokenFilter() (*CommonGramTokenFilter, bool)

AsCommonGramTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsDictionaryDecompounderTokenFilter

func (wdtf WordDelimiterTokenFilter) AsDictionaryDecompounderTokenFilter() (*DictionaryDecompounderTokenFilter, bool)

AsDictionaryDecompounderTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsEdgeNGramTokenFilter

func (wdtf WordDelimiterTokenFilter) AsEdgeNGramTokenFilter() (*EdgeNGramTokenFilter, bool)

AsEdgeNGramTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsEdgeNGramTokenFilterV2

func (wdtf WordDelimiterTokenFilter) AsEdgeNGramTokenFilterV2() (*EdgeNGramTokenFilterV2, bool)

AsEdgeNGramTokenFilterV2 is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsElisionTokenFilter

func (wdtf WordDelimiterTokenFilter) AsElisionTokenFilter() (*ElisionTokenFilter, bool)

AsElisionTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsKeepTokenFilter

func (wdtf WordDelimiterTokenFilter) AsKeepTokenFilter() (*KeepTokenFilter, bool)

AsKeepTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsKeywordMarkerTokenFilter

func (wdtf WordDelimiterTokenFilter) AsKeywordMarkerTokenFilter() (*KeywordMarkerTokenFilter, bool)

AsKeywordMarkerTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsLengthTokenFilter

func (wdtf WordDelimiterTokenFilter) AsLengthTokenFilter() (*LengthTokenFilter, bool)

AsLengthTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsLimitTokenFilter

func (wdtf WordDelimiterTokenFilter) AsLimitTokenFilter() (*LimitTokenFilter, bool)

AsLimitTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsNGramTokenFilter

func (wdtf WordDelimiterTokenFilter) AsNGramTokenFilter() (*NGramTokenFilter, bool)

AsNGramTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsNGramTokenFilterV2

func (wdtf WordDelimiterTokenFilter) AsNGramTokenFilterV2() (*NGramTokenFilterV2, bool)

AsNGramTokenFilterV2 is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsPatternCaptureTokenFilter

func (wdtf WordDelimiterTokenFilter) AsPatternCaptureTokenFilter() (*PatternCaptureTokenFilter, bool)

AsPatternCaptureTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsPatternReplaceTokenFilter

func (wdtf WordDelimiterTokenFilter) AsPatternReplaceTokenFilter() (*PatternReplaceTokenFilter, bool)

AsPatternReplaceTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsPhoneticTokenFilter

func (wdtf WordDelimiterTokenFilter) AsPhoneticTokenFilter() (*PhoneticTokenFilter, bool)

AsPhoneticTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsShingleTokenFilter

func (wdtf WordDelimiterTokenFilter) AsShingleTokenFilter() (*ShingleTokenFilter, bool)

AsShingleTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsSnowballTokenFilter

func (wdtf WordDelimiterTokenFilter) AsSnowballTokenFilter() (*SnowballTokenFilter, bool)

AsSnowballTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsStemmerOverrideTokenFilter

func (wdtf WordDelimiterTokenFilter) AsStemmerOverrideTokenFilter() (*StemmerOverrideTokenFilter, bool)

AsStemmerOverrideTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsStemmerTokenFilter

func (wdtf WordDelimiterTokenFilter) AsStemmerTokenFilter() (*StemmerTokenFilter, bool)

AsStemmerTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsStopwordsTokenFilter

func (wdtf WordDelimiterTokenFilter) AsStopwordsTokenFilter() (*StopwordsTokenFilter, bool)

AsStopwordsTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsSynonymTokenFilter

func (wdtf WordDelimiterTokenFilter) AsSynonymTokenFilter() (*SynonymTokenFilter, bool)

AsSynonymTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsTokenFilter

func (wdtf WordDelimiterTokenFilter) AsTokenFilter() (*TokenFilter, bool)

AsTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsTruncateTokenFilter

func (wdtf WordDelimiterTokenFilter) AsTruncateTokenFilter() (*TruncateTokenFilter, bool)

AsTruncateTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsUniqueTokenFilter

func (wdtf WordDelimiterTokenFilter) AsUniqueTokenFilter() (*UniqueTokenFilter, bool)

AsUniqueTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) AsWordDelimiterTokenFilter

func (wdtf WordDelimiterTokenFilter) AsWordDelimiterTokenFilter() (*WordDelimiterTokenFilter, bool)

AsWordDelimiterTokenFilter is the BasicTokenFilter implementation for WordDelimiterTokenFilter.

func (WordDelimiterTokenFilter) MarshalJSON

func (wdtf WordDelimiterTokenFilter) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for WordDelimiterTokenFilter.
