Documentation ¶
Overview ¶
Package models contains structs for model requests and responses.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type ClaudeContent ¶
type ClaudeContent struct {
	Type string `json:"type"`           // The type of the content. Valid values are image and text.
	Text string `json:"text,omitempty"` // The text content.
}
ClaudeContent contains a single block of message content.
type ClaudeMessage ¶
type ClaudeMessage struct {
	Role    string          `json:"role"`    // The role of the conversation turn. Valid values are user and assistant.
	Content []ClaudeContent `json:"content"` // The content of the conversation turn.
}
ClaudeMessage contains a single conversation turn.
type ClaudeMessagesInput ¶
type ClaudeMessagesInput struct {
	AnthropicVersion string          `json:"anthropic_version"`        // The anthropic version. The value must be bedrock-2023-05-31.
	Messages         []ClaudeMessage `json:"messages"`                 // The input messages.
	MaxTokens        int             `json:"max_tokens"`               // The maximum number of tokens to generate before stopping.
	Temperature      float64         `json:"temperature"`              // The amount of randomness injected into the response.
	TopK             int             `json:"top_k"`                    // Only sample from the top K options for each subsequent token.
	TopP             float64         `json:"top_p"`                    // Use nucleus sampling.
	System           string          `json:"system,omitempty"`         // The system prompt for the request, which provides context, instructions, and guidelines to Claude before presenting it with a question or task.
	StopSequences    []string        `json:"stop_sequences,omitempty"` // The stop sequences that signal the model to stop generating text.
}
ClaudeMessagesInput is needed to marshal the new request type.
Supported Models: claude-instant-v1.2, claude-v2, claude-v2.1, claude-v3
type ClaudeMessagesOutput ¶
type ClaudeMessagesOutput struct {
	Type  string    `json:"type"`  // The type of the response. Valid values are image and text.
	Index int       `json:"index"` // The index of the response.
	Delta TextDelta `json:"delta"` // The delta of the response.
}
ClaudeMessagesOutput is needed to unmarshal the new response type.
Supported Models: claude-instant-v1.2, claude-v2, claude-v2.1, claude-v3
type ClaudeModelInputs ¶
type ClaudeModelInputs struct {
	Prompt            string  `json:"prompt"`                // The prompt that you want Claude to complete. For proper response generation you need to format your prompt using alternating \n\nHuman: and \n\nAssistant: conversational turns.
	MaxTokensToSample int     `json:"max_tokens_to_sample"`  // The maximum number of tokens to generate before stopping.
	Temperature       float64 `json:"temperature,omitempty"` // The amount of randomness injected into the response.
	TopP              float64 `json:"top_p,omitempty"`       // Use nucleus sampling.
	TopK              int     `json:"top_k,omitempty"`       // Only sample from the top K options for each subsequent token.
}
ClaudeModelInputs is needed to marshal the legacy request type.
Supported Models: claude-instant-v1, claude-v1, claude-v2, claude-v2:1
type ClaudeModelOutputs ¶
type ClaudeModelOutputs struct {
	Completion string `json:"completion,omitempty"`  // The resulting completion up to and excluding the stop sequences.
	StopReason string `json:"stop_reason,omitempty"` // The reason why the model stopped generating the response.
	Stop       string `json:"stop,omitempty"`        // The stop sequence that signalled the model to stop generating text.
}
ClaudeModelOutputs is needed to unmarshal the legacy response.
Supported Models: claude-instant-v1, claude-v1, claude-v2, claude-v2:1
type CommandGeneration ¶
type CommandGeneration struct {
	ID   string `json:"id"`   // An identifier for the generation.
	Text string `json:"text"` // The generated text.
}
CommandGeneration contains a generated chunk of text.
type CommandModelInput ¶
type CommandModelInput struct {
	Prompt            string   `json:"prompt"`             // The input text that serves as the starting point for generating the response.
	Temperature       float64  `json:"temperature"`        // Use a lower value to decrease randomness in the response.
	TopP              float64  `json:"p"`                  // Use a lower value to ignore less probable options. Set to 0 or 1.0 to disable.
	TopK              int      `json:"k"`                  // Specify the number of token choices the model uses to generate the next token.
	MaxTokensToSample int      `json:"max_tokens"`         // Specify the maximum number of tokens to use in the generated response.
	StopSequences     []string `json:"stop_sequences"`     // Configure up to four sequences that the model recognizes.
	ReturnLiklihoods  string   `json:"return_likelihoods"` // Specify how and if the token likelihoods are returned with the response.
	Stream            bool     `json:"stream"`             // Specify true to return the response piece-by-piece in real time and false to return the complete response after the process finishes.
	NumGenerations    int      `json:"num_generations"`    // The maximum number of generations that the model should return.
}
CommandModelInput contains the request body for Command models.
type CommandModelOutput ¶
type CommandModelOutput struct {
Generations []CommandGeneration `json:"generations"` // A list of generated results along with the likelihoods for tokens requested.
}
CommandModelOutput contains the response from Command models.
type MistralOutput ¶
MistralOutput contains the response text and the stop reason.
type MistralRequest ¶
type MistralRequest struct {
	Prompt      string  `json:"prompt"`
	MaxTokens   int     `json:"max_tokens"`
	Temperature float64 `json:"temperature,omitempty"`
	TopP        float64 `json:"top_p,omitempty"`
	TopK        int     `json:"top_k,omitempty"`
}
MistralRequest contains the request needed for Mistral models.
type MistralResponse ¶
type MistralResponse struct {
Outputs []MistralOutput `json:"outputs"`
}
MistralResponse contains the response obtained from Mistral models.