Documentation ¶
Overview ¶
Creates an inference trained model.
Index ¶
- Variables
- type NewPutTrainedModel
- type PutTrainedModel
- func (r *PutTrainedModel) DeferDefinitionDecompression(b bool) *PutTrainedModel
- func (r PutTrainedModel) Do(ctx context.Context) (*Response, error)
- func (r *PutTrainedModel) Header(key, value string) *PutTrainedModel
- func (r *PutTrainedModel) HttpRequest(ctx context.Context) (*http.Request, error)
- func (r *PutTrainedModel) ModelId(v string) *PutTrainedModel
- func (r PutTrainedModel) Perform(ctx context.Context) (*http.Response, error)
- func (r *PutTrainedModel) Raw(raw io.Reader) *PutTrainedModel
- func (r *PutTrainedModel) Request(req *Request) *PutTrainedModel
- type Request
- type Response
Constants ¶
This section is empty.
Variables ¶
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")
ErrBuildPath is returned in case of missing parameters within the build of the request.
Functions ¶
This section is empty.
Types ¶
type NewPutTrainedModel ¶
type NewPutTrainedModel func(modelid string) *PutTrainedModel
NewPutTrainedModel is the function type used by the library's API index to create PutTrainedModel requests.
func NewPutTrainedModelFunc ¶
func NewPutTrainedModelFunc(tp elastictransport.Interface) NewPutTrainedModel
NewPutTrainedModelFunc returns a new instance of PutTrainedModel with the provided transport. Used in the index of the library, this allows every API to be retrieved from one place.
type PutTrainedModel ¶
type PutTrainedModel struct {
// contains filtered or unexported fields
}
func New ¶
func New(tp elastictransport.Interface) *PutTrainedModel
Creates an inference trained model.
https://www.elastic.co/guide/en/elasticsearch/reference/current/put-trained-models.html
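The following is a minimal, illustrative sketch of the builder returned by New, not part of the generated documentation. The import paths, the model identifier, and the tp transport value are assumptions; in practice the transport usually comes from an already configured client.

package example

import (
	"context"
	"fmt"
	"log"

	"github.com/elastic/elastic-transport-go/v8/elastictransport"        // assumed transport module
	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/puttrainedmodel" // assumed import path
)

// buildRequest shows the builder returned by New: ModelId sets the path
// parameter, Header adds an arbitrary header, and HttpRequest builds the
// underlying *http.Request without sending it.
func buildRequest(ctx context.Context, tp elastictransport.Interface) {
	req, err := puttrainedmodel.New(tp).
		ModelId("my-model").           // placeholder model identifier
		Header("X-Opaque-Id", "demo"). // optional request header
		HttpRequest(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(req.Method, req.URL.Path)
}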
func (*PutTrainedModel) DeferDefinitionDecompression ¶
func (r *PutTrainedModel) DeferDefinitionDecompression(b bool) *PutTrainedModel
DeferDefinitionDecompression If set to `true` and a `compressed_definition` is provided, the request defers definition decompression and skips relevant validations. API name: defer_definition_decompression
func (PutTrainedModel) Do ¶
func (r PutTrainedModel) Do(ctx context.Context) (*Response, error)
Do runs the request through the transport, handles the response, and returns a puttrainedmodel.Response.
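A hedged end-to-end sketch of Do: the request body values, model identifier, and transport below are placeholders, and the import paths are assumed.

package example

import (
	"context"
	"fmt"

	"github.com/elastic/elastic-transport-go/v8/elastictransport"        // assumed transport module
	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/puttrainedmodel" // assumed import path
)

// createModel sends the request through the transport and decodes the body
// into a typed puttrainedmodel.Response.
func createModel(ctx context.Context, tp elastictransport.Interface) error {
	desc := "example regression model" // placeholder description

	res, err := puttrainedmodel.New(tp).
		ModelId("my-model"). // placeholder model_id
		Request(&puttrainedmodel.Request{
			Description: &desc,
			Tags:        []string{"demo"},
			// A real request must also populate InferenceConfig and either
			// Definition or CompressedDefinition.
		}).
		Do(ctx)
	if err != nil {
		return err
	}
	fmt.Println("created model:", res.ModelId)
	return nil
}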
func (*PutTrainedModel) Header ¶
func (r *PutTrainedModel) Header(key, value string) *PutTrainedModel
Header sets a key/value pair in the PutTrainedModel headers map.
func (*PutTrainedModel) HttpRequest ¶
func (r *PutTrainedModel) HttpRequest(ctx context.Context) (*http.Request, error)
HttpRequest returns the http.Request object built from the given parameters.
func (*PutTrainedModel) ModelId ¶
func (r *PutTrainedModel) ModelId(v string) *PutTrainedModel
ModelId The unique identifier of the trained model. API name: model_id
func (PutTrainedModel) Perform ¶
func (r PutTrainedModel) Perform(ctx context.Context) (*http.Response, error)
Perform runs the http.Request through the provided transport and returns an http.Response.
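A sketch of Perform for cases where the undecoded *http.Response is needed, for example to inspect the status code or read the body yourself. The names below are placeholders and the import paths are assumed.

package example

import (
	"context"
	"fmt"
	"io"

	"github.com/elastic/elastic-transport-go/v8/elastictransport"        // assumed transport module
	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/puttrainedmodel" // assumed import path
)

// performRaw executes the request but leaves response decoding to the caller.
func performRaw(ctx context.Context, tp elastictransport.Interface) error {
	res, err := puttrainedmodel.New(tp).
		ModelId("my-model"). // placeholder model_id
		Perform(ctx)
	if err != nil {
		return err
	}
	defer res.Body.Close()

	body, err := io.ReadAll(res.Body)
	if err != nil {
		return err
	}
	fmt.Println(res.StatusCode, string(body))
	return nil
}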
func (*PutTrainedModel) Raw ¶
func (r *PutTrainedModel) Raw(raw io.Reader) *PutTrainedModel
Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.
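A sketch of Raw with a pre-serialized JSON body; because this payload uses compressed_definition, it also chains DeferDefinitionDecompression. The JSON content and model name are placeholders and the import paths are assumed.

package example

import (
	"context"
	"strings"

	"github.com/elastic/elastic-transport-go/v8/elastictransport"        // assumed transport module
	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/puttrainedmodel" // assumed import path
)

// createFromRawJSON bypasses the typed Request struct and sends the body verbatim.
func createFromRawJSON(ctx context.Context, tp elastictransport.Interface) error {
	payload := `{
	  "compressed_definition": "<base64-gzip-definition>",
	  "inference_config": { "regression": {} },
	  "input": { "field_names": ["feature_1", "feature_2"] }
	}` // placeholder body; Raw takes precedence over any Request() value

	_, err := puttrainedmodel.New(tp).
		ModelId("my-raw-model").
		DeferDefinitionDecompression(true). // skip decompression-time validations
		Raw(strings.NewReader(payload)).
		Do(ctx)
	return err
}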
func (*PutTrainedModel) Request ¶
func (r *PutTrainedModel) Request(req *Request) *PutTrainedModel
Request allows setting the request property with the appropriate payload.
type Request ¶
type Request struct {

	// CompressedDefinition The compressed (GZipped and Base64 encoded) inference definition of the
	// model. If compressed_definition is specified, then definition cannot be
	// specified.
	CompressedDefinition *string `json:"compressed_definition,omitempty"`

	// Definition The inference definition for the model. If definition is specified, then
	// compressed_definition cannot be specified.
	Definition *types.Definition `json:"definition,omitempty"`

	// Description A human-readable description of the inference trained model.
	Description *string `json:"description,omitempty"`

	// InferenceConfig The default configuration for inference. This can be either a regression
	// or classification configuration. It must match the underlying
	// definition.trained_model's target_type.
	InferenceConfig types.InferenceConfigCreateContainer `json:"inference_config"`

	// Input The input field names for the model definition.
	Input *types.Input `json:"input,omitempty"`

	// Metadata An object map that contains metadata about the model.
	Metadata json.RawMessage `json:"metadata,omitempty"`

	// ModelSizeBytes The estimated memory usage in bytes to keep the trained model in memory.
	// This property is supported only if defer_definition_decompression is true
	// or the model definition is not supplied.
	ModelSizeBytes *int64 `json:"model_size_bytes,omitempty"`

	// ModelType The model type.
	ModelType *trainedmodeltype.TrainedModelType `json:"model_type,omitempty"`

	// Tags An array of tags to organize the model.
	Tags []string `json:"tags,omitempty"`
}
Request holds the request body struct for the package puttrainedmodel
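A hedged sketch of building the typed Request body directly. Only the plain fields from the struct above are populated; InferenceConfig is left for the caller to fill with a regression or classification configuration matching the model definition. The ptr helper and the import path are local assumptions of this example.

package example

import (
	"encoding/json"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/puttrainedmodel" // assumed import path
)

// ptr is a small local helper for the pointer-typed optional fields.
func ptr[T any](v T) *T { return &v }

// newRequest assembles a request body; supplying compressed_definition together
// with defer_definition_decompression=true on the builder allows
// model_size_bytes to be provided up front.
func newRequest(compressedDefinition string) *puttrainedmodel.Request {
	return &puttrainedmodel.Request{
		CompressedDefinition: ptr(compressedDefinition),
		Description:          ptr("example model"),
		ModelSizeBytes:       ptr(int64(1024 * 1024)),
		Tags:                 []string{"demo", "regression"},
		Metadata:             json.RawMessage(`{"owner":"ml-team"}`), // arbitrary metadata object
		// InferenceConfig (types.InferenceConfigCreateContainer) must also be set
		// to match the underlying definition's target_type.
	}
}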
type Response ¶
type Response struct {
	CompressedDefinition *string `json:"compressed_definition,omitempty"`

	// CreateTime The time when the trained model was created.
	CreateTime types.DateTime `json:"create_time,omitempty"`

	// CreatedBy Information on the creator of the trained model.
	CreatedBy *string `json:"created_by,omitempty"`

	// DefaultFieldMap Any field map described in the inference configuration takes precedence.
	DefaultFieldMap map[string]string `json:"default_field_map,omitempty"`

	// Description The free-text description of the trained model.
	Description *string `json:"description,omitempty"`

	// EstimatedHeapMemoryUsageBytes The estimated heap usage in bytes to keep the trained model in memory.
	EstimatedHeapMemoryUsageBytes *int `json:"estimated_heap_memory_usage_bytes,omitempty"`

	// EstimatedOperations The estimated number of operations to use the trained model.
	EstimatedOperations *int `json:"estimated_operations,omitempty"`

	// InferenceConfig The default configuration for inference. This can be either a regression,
	// classification, or one of the many NLP focused configurations. It must match
	// the underlying definition.trained_model's target_type.
	InferenceConfig types.InferenceConfigCreateContainer `json:"inference_config"`

	// Input The input field names for the model definition.
	Input types.TrainedModelConfigInput `json:"input"`

	// LicenseLevel The license level of the trained model.
	LicenseLevel *string `json:"license_level,omitempty"`

	Location *types.TrainedModelLocation `json:"location,omitempty"`

	// Metadata An object containing metadata about the trained model. For example, models
	// created by data frame analytics contain analysis_config and input objects.
	Metadata *types.TrainedModelConfigMetadata `json:"metadata,omitempty"`

	// ModelId Identifier for the trained model.
	ModelId string `json:"model_id"`

	ModelSizeBytes types.ByteSize `json:"model_size_bytes,omitempty"`

	// ModelType The model type.
	ModelType *trainedmodeltype.TrainedModelType `json:"model_type,omitempty"`

	// Tags A comma delimited string of tags. A trained model can have many tags, or
	// none.
	Tags []string `json:"tags"`

	// Version The Elasticsearch version number in which the trained model was created.
	Version *string `json:"version,omitempty"`
}
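A short sketch of reading a few Response fields after a successful call; which pointer fields are non-nil depends on the request, so they are guarded. The import path is assumed.

package example

import (
	"fmt"

	"github.com/elastic/go-elasticsearch/v8/typedapi/ml/puttrainedmodel" // assumed import path
)

// summarize prints a handful of commonly useful response fields.
func summarize(res *puttrainedmodel.Response) {
	fmt.Println("model_id:", res.ModelId)
	fmt.Println("tags:", res.Tags)
	if res.Version != nil {
		fmt.Println("version:", *res.Version)
	}
	if res.Description != nil {
		fmt.Println("description:", *res.Description)
	}
	if res.LicenseLevel != nil {
		fmt.Println("license_level:", *res.LicenseLevel)
	}
}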