Documentation
Index

Constants
const (
	DefaultAPIBase          = "https://api.openai.com/v1"
	DefaultModel            = "gpt-4o"
	DefaultRetries          = 3
	DefaultMaxTokens        = 1024
	DefaultTemperature      = 0.7
	DefaultTopP             = 1.0
	DefaultFrequencyPenalty = 0.0
)
Variables

This section is empty.

Functions

This section is empty.
Types

type Choice

type Choice struct {
	Message Message `json:"message"`
}

Choice represents a single completion choice.
type ClientConfig

type ClientConfig struct {
	APIBase          string            `json:"api_base"`
	APIKey           string            `json:"api_key"`
	Model            string            `json:"model"`
	Provider         string            `json:"provider"`
	Retries          int               `json:"retries"`
	Timeout          int64             `json:"timeout"`
	Proxy            string            `json:"proxy,omitempty"`
	ExtraHeaders     map[string]string `json:"extra_headers,omitempty"`
	CompletionPath   string            `json:"completion_path"`
	AnswerPath       string            `json:"answer_path"`
	MaxTokens        int               `json:"max_tokens"`
	TopP             float64           `json:"top_p"`
	Temperature      float64           `json:"temperature"`
	FrequencyPenalty float64           `json:"frequency_penalty"`
	Debug            bool              `json:"debug,omitempty"`
}

ClientConfig represents the configuration for an LLM client.
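As a sketch of how these pieces fit together, the snippet below fills a ClientConfig with the package's default constants. The package is not shown here to export a constructor that applies these defaults, so the manual wiring (and the trimmed field set) is an assumption for illustration; the struct and constants are mirrored from the definitions above so the example is self-contained.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Default values mirrored from the constants section above.
const (
	DefaultAPIBase     = "https://api.openai.com/v1"
	DefaultModel       = "gpt-4o"
	DefaultRetries     = 3
	DefaultMaxTokens   = 1024
	DefaultTemperature = 0.7
	DefaultTopP        = 1.0
)

// ClientConfig mirrors the struct defined above, trimmed to the
// fields this example populates.
type ClientConfig struct {
	APIBase     string  `json:"api_base"`
	APIKey      string  `json:"api_key"`
	Model       string  `json:"model"`
	Retries     int     `json:"retries"`
	MaxTokens   int     `json:"max_tokens"`
	TopP        float64 `json:"top_p"`
	Temperature float64 `json:"temperature"`
}

// defaultConfig builds a configuration from the package defaults.
// Only APIKey has no default and must be supplied by the caller.
func defaultConfig(apiKey string) ClientConfig {
	return ClientConfig{
		APIBase:     DefaultAPIBase,
		APIKey:      apiKey,
		Model:       DefaultModel,
		Retries:     DefaultRetries,
		MaxTokens:   DefaultMaxTokens,
		TopP:        DefaultTopP,
		Temperature: DefaultTemperature,
	}
}

func main() {
	cfg := defaultConfig("YOUR_API_KEY") // placeholder key
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

The JSON tags on the struct match the snake_case keys shown in the declaration above, so marshaling a ClientConfig produces a config file in that shape.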
type CompletionRequest ¶
type CompletionRequest struct { Model string `json:"model"` Messages []Message `json:"messages"` MaxTokens *int `json:"max_tokens,omitempty"` Temperature *float64 `json:"temperature,omitempty"` TopP *float64 `json:"top_p,omitempty"` FrequencyPenalty *float64 `json:"frequency_penalty,omitempty"` }
CompletionRequest represents a chat completion request
type CompletionResponse
type CompletionResponse struct {
Result map[string]interface{} `json:"result"`
}
CompletionResponse represents a chat completion response.