Documentation ¶
Overview ¶
Evaluate a trained model.
Index ¶
- Variables
- type InferTrainedModel
- func (r InferTrainedModel) Do(providedCtx context.Context) (*Response, error)
- func (r *InferTrainedModel) Docs(docs ...map[string]json.RawMessage) *InferTrainedModel
- func (r *InferTrainedModel) Header(key, value string) *InferTrainedModel
- func (r *InferTrainedModel) HttpRequest(ctx context.Context) (*http.Request, error)
- func (r *InferTrainedModel) InferenceConfig(inferenceconfig *types.InferenceConfigUpdateContainer) *InferTrainedModel
- func (r InferTrainedModel) Perform(providedCtx context.Context) (*http.Response, error)
- func (r *InferTrainedModel) Raw(raw io.Reader) *InferTrainedModel
- func (r *InferTrainedModel) Request(req *Request) *InferTrainedModel
- func (r *InferTrainedModel) Timeout(duration string) *InferTrainedModel
- type NewInferTrainedModel
- type Request
- type Response
Constants ¶
This section is empty.
Variables ¶
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")
ErrBuildPath is returned when required path parameters are missing while building the request.
Functions ¶
This section is empty.
Types ¶
type InferTrainedModel ¶
type InferTrainedModel struct {
// contains filtered or unexported fields
}
func New ¶
func New(tp elastictransport.Interface) *InferTrainedModel
Evaluate a trained model.
https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
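A minimal sketch of wiring a transport into the builder; the import paths and the helper name are assumptions based on the typedapi package layout, not part of this doc.

import (
	"github.com/elastic/elastic-transport-go/v8/elastictransport"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
)

// newBuilder is a hypothetical helper; tp is any transport that satisfies
// elastictransport.Interface, for example one taken from an existing client.
func newBuilder(tp elastictransport.Interface) *infertrainedmodel.InferTrainedModel {
	// New only wires the transport; the target model ID is normally
	// supplied through NewInferTrainedModelFunc (see below).
	return infertrainedmodel.New(tp)
}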
func (InferTrainedModel) Do ¶
func (r InferTrainedModel) Do(providedCtx context.Context) (*Response, error)
Do runs the request through the transport, handles the response and returns an infertrainedmodel.Response.
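For example, a sketch of executing an already configured builder and reading the typed response (the helper name is illustrative):

import (
	"context"
	"fmt"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
)

// run is a hypothetical helper operating on an already configured builder.
func run(ctx context.Context, r *infertrainedmodel.InferTrainedModel) error {
	// Do sends the request and decodes the body into a typed Response.
	res, err := r.Do(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("received %d inference results\n", len(res.InferenceResults))
	return nil
}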
func (*InferTrainedModel) Docs ¶ added in v8.9.0
func (r *InferTrainedModel) Docs(docs ...map[string]json.RawMessage) *InferTrainedModel
Docs An array of objects to pass to the model for inference. The objects should contain the fields matching your configured trained model input. Typically, for NLP models, the field name is `text_field`. Currently, for NLP models, only a single value is allowed. API name: docs
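A sketch of passing a single NLP document; the field name and text below are illustrative:

import (
	"encoding/json"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
)

// withDocs is a hypothetical helper adding one document to the request body.
func withDocs(r *infertrainedmodel.InferTrainedModel) *infertrainedmodel.InferTrainedModel {
	// For NLP models the input is typically a single `text_field` value.
	doc := map[string]json.RawMessage{
		"text_field": json.RawMessage(`"The quick brown fox jumps over the lazy dog"`),
	}
	return r.Docs(doc)
}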
func (*InferTrainedModel) Header ¶
func (r *InferTrainedModel) Header(key, value string) *InferTrainedModel
Header sets a key, value pair in the InferTrainedModel headers map.
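For instance, a tracing header could be attached to the request; the value is an arbitrary example:

// r is an *InferTrainedModel built elsewhere; "my-app-inference" is illustrative.
r.Header("X-Opaque-Id", "my-app-inference")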
func (*InferTrainedModel) HttpRequest ¶
func (r *InferTrainedModel) HttpRequest(ctx context.Context) (*http.Request, error)
HttpRequest returns the http.Request object built from the given parameters.
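A sketch of building the request for inspection without sending it (the helper name is illustrative):

import (
	"context"
	"fmt"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
)

// dumpRequest is a hypothetical helper that prints the method and path
// of the request the builder would send.
func dumpRequest(ctx context.Context, r *infertrainedmodel.InferTrainedModel) error {
	req, err := r.HttpRequest(ctx)
	if err != nil {
		return err
	}
	fmt.Println(req.Method, req.URL.Path)
	return nil
}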
func (*InferTrainedModel) InferenceConfig ¶ added in v8.9.0
func (r *InferTrainedModel) InferenceConfig(inferenceconfig *types.InferenceConfigUpdateContainer) *InferTrainedModel
InferenceConfig The inference configuration updates to apply on the API call. API name: inference_config
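A sketch of attaching a configuration update; the container is left empty here because the task-specific fields depend on your model type:

import (
	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
	"github.com/elastic/go-elasticsearch/v8/typedapi/types"
)

// withConfig is a hypothetical helper; populate the field of the container
// that matches your model's task (classification, text embedding, ...).
func withConfig(r *infertrainedmodel.InferTrainedModel) *infertrainedmodel.InferTrainedModel {
	cfg := &types.InferenceConfigUpdateContainer{}
	return r.InferenceConfig(cfg)
}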
func (InferTrainedModel) Perform ¶ added in v8.7.0
func (r InferTrainedModel) Perform(providedCtx context.Context) (*http.Response, error)
Perform runs the http.Request through the provided transport and returns an http.Response.
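A sketch of using Perform when the raw HTTP response is wanted instead of the decoded one (the helper name is illustrative):

import (
	"context"
	"fmt"
	"io"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
)

// rawResponse is a hypothetical helper; with Perform the caller owns the body.
func rawResponse(ctx context.Context, r *infertrainedmodel.InferTrainedModel) error {
	res, err := r.Perform(ctx)
	if err != nil {
		return err
	}
	defer res.Body.Close()
	body, err := io.ReadAll(res.Body)
	if err != nil {
		return err
	}
	fmt.Println(res.StatusCode, string(body))
	return nil
}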
func (*InferTrainedModel) Raw ¶
func (r *InferTrainedModel) Raw(raw io.Reader) *InferTrainedModel
Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.
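A sketch of supplying a pre-serialized body; the payload shown is illustrative:

import (
	"strings"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
)

// withRawBody is a hypothetical helper; a body set via Raw overrides
// anything configured through Request, Docs or InferenceConfig.
func withRawBody(r *infertrainedmodel.InferTrainedModel) *infertrainedmodel.InferTrainedModel {
	payload := `{"docs":[{"text_field":"The quick brown fox"}]}`
	return r.Raw(strings.NewReader(payload))
}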
func (*InferTrainedModel) Request ¶
func (r *InferTrainedModel) Request(req *Request) *InferTrainedModel
Request allows setting the request property with the appropriate payload.
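A sketch of setting the whole body through a Request value (the helper name and document are illustrative):

import (
	"encoding/json"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
)

// withRequest is a hypothetical helper building the body explicitly.
func withRequest(r *infertrainedmodel.InferTrainedModel) *infertrainedmodel.InferTrainedModel {
	req := &infertrainedmodel.Request{
		Docs: []map[string]json.RawMessage{
			{"text_field": json.RawMessage(`"The quick brown fox"`)},
		},
	}
	return r.Request(req)
}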
func (*InferTrainedModel) Timeout ¶
func (r *InferTrainedModel) Timeout(duration string) *InferTrainedModel
Timeout Controls the amount of time to wait for inference results. API name: timeout
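For example, waiting up to 30 seconds for inference results (the duration is illustrative):

// r is an *InferTrainedModel built elsewhere.
r.Timeout("30s")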
type NewInferTrainedModel ¶
type NewInferTrainedModel func(modelid string) *InferTrainedModel
NewInferTrainedModel is the function type used by the library index to build a request with its required model ID.
func NewInferTrainedModelFunc ¶
func NewInferTrainedModelFunc(tp elastictransport.Interface) NewInferTrainedModel
NewInferTrainedModelFunc returns a new instance of InferTrainedModel with the provided transport. Used in the index of the library, this allows retrieving every API in one place.
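A sketch of binding the transport once and creating per-model builders from it (the helper and variable names are illustrative):

import (
	"github.com/elastic/elastic-transport-go/v8/elastictransport"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
)

// builderFor is a hypothetical helper; the returned constructor takes the
// model ID per call, which is how the library index exposes this API.
func builderFor(tp elastictransport.Interface, modelID string) *infertrainedmodel.InferTrainedModel {
	newInfer := infertrainedmodel.NewInferTrainedModelFunc(tp)
	return newInfer(modelID)
}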
type Request ¶
type Request struct {

	// Docs An array of objects to pass to the model for inference. The objects
	// should contain the fields matching your configured trained model input.
	// Typically, for NLP models, the field name is `text_field`.
	// Currently, for NLP models, only a single value is allowed.
	Docs []map[string]json.RawMessage `json:"docs"`

	// InferenceConfig The inference configuration updates to apply on the API call
	InferenceConfig *types.InferenceConfigUpdateContainer `json:"inference_config,omitempty"`
}
Request holds the request body struct for the package infertrainedmodel
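For reference, a sketch of the JSON this struct serializes to; the document text is illustrative:

import (
	"encoding/json"
	"fmt"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
)

func showBody() {
	req := infertrainedmodel.Request{
		Docs: []map[string]json.RawMessage{
			{"text_field": json.RawMessage(`"Elasticsearch is great"`)},
		},
	}
	body, _ := json.Marshal(req)
	// Prints something like: {"docs":[{"text_field":"Elasticsearch is great"}]}
	fmt.Println(string(body))
}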
type Response ¶ added in v8.7.0
type Response struct {
InferenceResults []types.InferenceResponseResult `json:"inference_results"`
}
Response holds the response body struct for the package infertrainedmodel
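Putting it together, a sketch of a full call; the model ID, document text and timeout are illustrative, and the transport is assumed to come from an existing client:

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/elastic/elastic-transport-go/v8/elastictransport"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/infertrainedmodel"
)

// inferExample is a hypothetical end-to-end usage of this package.
func inferExample(tp elastictransport.Interface) {
	infer := infertrainedmodel.NewInferTrainedModelFunc(tp)

	res, err := infer("my-nlp-model").
		Docs(map[string]json.RawMessage{
			"text_field": json.RawMessage(`"The quick brown fox"`),
		}).
		Timeout("30s").
		Do(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, result := range res.InferenceResults {
		fmt.Printf("%+v\n", result)
	}
}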