Documentation ¶
Index ¶
- Variables
- func IsAzure(apiType APIType) bool
- type APIType
- type ChatCompletionChoice
- type ChatCompletionResponse
- type ChatMessage
- type ChatRequest
- type ChatUsage
- type Client
- type Completion
- type CompletionRequest
- type CompletionResponse
- type Doer
- type EmbeddingRequest
- type FinishReason
- type FunctionCall
- type FunctionCallBehavior
- type FunctionDefinition
- type LogProb
- type LogProbs
- type Option
- type ResponseFormat
- type StreamOptions
- type StreamedChatResponsePayload
- type Tool
- type ToolCall
- type ToolChoice
- type ToolFunction
- type ToolType
- type TopLogProbs
- type Usage
Constants ¶
This section is empty.
Variables ¶
var ErrContentExclusive = errors.New("only one of Content / MultiContent allowed in message")
ErrContentExclusive is returned when both Content and MultiContent are set on a message.
var ErrEmptyResponse = errors.New("empty response")
ErrEmptyResponse is returned when the OpenAI API returns an empty response.
Functions ¶
func IsAzure ¶
func IsAzure(apiType APIType) bool
IsAzure reports whether apiType is an Azure API type.
Types ¶
type ChatCompletionChoice ¶
type ChatCompletionChoice struct {
	Index        int          `json:"index"`
	Message      ChatMessage  `json:"message"`
	FinishReason FinishReason `json:"finish_reason"`
	LogProbs     *LogProbs    `json:"logprobs,omitempty"`
}
ChatCompletionChoice is a choice in a chat response.
type ChatCompletionResponse ¶
type ChatCompletionResponse struct {
	ID                string                  `json:"id,omitempty"`
	Created           int64                   `json:"created,omitempty"`
	Choices           []*ChatCompletionChoice `json:"choices,omitempty"`
	Model             string                  `json:"model,omitempty"`
	Object            string                  `json:"object,omitempty"`
	Usage             ChatUsage               `json:"usage,omitempty"`
	SystemFingerprint string                  `json:"system_fingerprint"`
}
ChatCompletionResponse is a response to a chat request.
type ChatMessage ¶
type ChatMessage struct {
	// The role of the author of this message. One of system, user, assistant, function, or tool.
	Role string

	// The content of the message.
	// This field is mutually exclusive with MultiContent.
	Content string

	// MultiContent is a list of content parts to use in the message.
	MultiContent []llms.ContentPart

	// The name of the author of this message. May contain a-z, A-Z, 0-9, and underscores,
	// with a maximum length of 64 characters.
	Name string

	// ToolCalls is a list of tools that were called in the message.
	ToolCalls []ToolCall `json:"tool_calls,omitempty"`

	// FunctionCall represents a function call that was made in the message.
	// Deprecated: use ToolCalls instead.
	FunctionCall *FunctionCall

	// ToolCallID is the ID of the tool call this message is for.
	// Only present in tool messages.
	ToolCallID string `json:"tool_call_id,omitempty"`
}
ChatMessage is a message in a chat request.
func (ChatMessage) MarshalJSON ¶
func (m ChatMessage) MarshalJSON() ([]byte, error)
func (*ChatMessage) UnmarshalJSON ¶
func (m *ChatMessage) UnmarshalJSON(data []byte) error
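The Content / MultiContent exclusivity that ErrContentExclusive guards can be sketched with a trimmed, local stand-in for ChatMessage (the real check presumably lives in the custom MarshalJSON; the types and the validate helper below are hypothetical illustrations, not the package's code):

```go
package main

import (
	"errors"
	"fmt"
)

// chatMessage is a trimmed stand-in for ChatMessage: only the fields
// relevant to the Content / MultiContent exclusivity rule.
type chatMessage struct {
	Role         string
	Content      string
	MultiContent []any // stands in for []llms.ContentPart
}

var errContentExclusive = errors.New("only one of Content / MultiContent allowed in message")

// validate sketches the rule: a message may carry plain Content or
// MultiContent parts, never both.
func validate(m chatMessage) error {
	if m.Content != "" && len(m.MultiContent) > 0 {
		return errContentExclusive
	}
	return nil
}

func main() {
	ok := chatMessage{Role: "user", Content: "hello"}
	bad := chatMessage{Role: "user", Content: "hello", MultiContent: []any{"part"}}
	fmt.Println(validate(ok))  // <nil>
	fmt.Println(validate(bad)) // only one of Content / MultiContent allowed in message
}
```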
type ChatRequest ¶
type ChatRequest struct {
	Model            string         `json:"model"`
	Messages         []*ChatMessage `json:"messages"`
	Temperature      float64        `json:"temperature"`
	TopP             float64        `json:"top_p,omitempty"`
	MaxTokens        int            `json:"max_tokens,omitempty"`
	N                int            `json:"n,omitempty"`
	StopWords        []string       `json:"stop,omitempty"`
	Stream           bool           `json:"stream,omitempty"`
	FrequencyPenalty float64        `json:"frequency_penalty,omitempty"`
	PresencePenalty  float64        `json:"presence_penalty,omitempty"`
	Seed             int            `json:"seed,omitempty"`

	// ResponseFormat is the format of the response.
	ResponseFormat *ResponseFormat `json:"response_format,omitempty"`

	// LogProbs indicates whether to return log probabilities of the output tokens or not.
	// If true, returns the log probabilities of each output token returned in the content of message.
	// This option is currently not available on the gpt-4-vision-preview model.
	LogProbs bool `json:"logprobs,omitempty"`

	// TopLogProbs is an integer between 0 and 5 specifying the number of most likely tokens to return at each
	// token position, each with an associated log probability.
	// logprobs must be set to true if this parameter is used.
	TopLogProbs int `json:"top_logprobs,omitempty"`

	Tools []Tool `json:"tools,omitempty"`

	// This can be either a string or a ToolChoice object.
	// If it is a string, it should be one of 'none' or 'auto'; otherwise it should be a
	// ToolChoice object specifying a specific tool to use.
	ToolChoice any `json:"tool_choice,omitempty"`

	// Options for streaming response. Only set this when you set stream: true.
	StreamOptions *StreamOptions `json:"stream_options,omitempty"`

	// StreamingFunc is a function to be called for each chunk of a streaming response.
	// Return an error to stop streaming early.
	StreamingFunc func(ctx context.Context, chunk []byte) error `json:"-"`

	// Deprecated: use Tools instead.
	Functions []FunctionDefinition `json:"functions,omitempty"`

	// Deprecated: use ToolChoice instead.
	FunctionCallBehavior FunctionCallBehavior `json:"function_call,omitempty"`

	// Metadata allows you to specify additional information that will be passed to the model.
	Metadata map[string]any `json:"metadata,omitempty"`
}
ChatRequest is a request for a chat completion.
type ChatUsage ¶
type ChatUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
ChatUsage is the usage of a chat completion request.
type Client ¶
Client is a client for the OpenAI API.
func New ¶
func New(token string, model string, baseURL string, organization string, apiType APIType, apiVersion string, httpClient Doer, embeddingModel string, opts ...Option) (*Client, error)
New returns a new OpenAI client.
func (*Client) CreateChat ¶
func (c *Client) CreateChat(ctx context.Context, r *ChatRequest) (*ChatCompletionResponse, error)
CreateChat sends a chat request and returns the completion response.
func (*Client) CreateCompletion ¶
func (c *Client) CreateCompletion(ctx context.Context, r *CompletionRequest) (*Completion, error)
CreateCompletion creates a completion.
func (*Client) CreateEmbedding ¶
CreateEmbedding creates embeddings.
type CompletionRequest ¶
type CompletionRequest struct {
	Model            string   `json:"model"`
	Prompt           string   `json:"prompt"`
	Temperature      float64  `json:"temperature"`
	MaxTokens        int      `json:"max_tokens,omitempty"`
	N                int      `json:"n,omitempty"`
	FrequencyPenalty float64  `json:"frequency_penalty,omitempty"`
	PresencePenalty  float64  `json:"presence_penalty,omitempty"`
	TopP             float64  `json:"top_p,omitempty"`
	StopWords        []string `json:"stop,omitempty"`
	Seed             int      `json:"seed,omitempty"`

	// StreamingFunc is a function to be called for each chunk of a streaming response.
	// Return an error to stop streaming early.
	StreamingFunc func(ctx context.Context, chunk []byte) error `json:"-"`
}
CompletionRequest is a request for a text completion.
type CompletionResponse ¶
type CompletionResponse struct {
	ID      string  `json:"id,omitempty"`
	Created float64 `json:"created,omitempty"`
	Choices []struct {
		FinishReason string      `json:"finish_reason,omitempty"`
		Index        float64     `json:"index,omitempty"`
		Logprobs     interface{} `json:"logprobs,omitempty"`
		Text         string      `json:"text,omitempty"`
	} `json:"choices,omitempty"`
	Model  string `json:"model,omitempty"`
	Object string `json:"object,omitempty"`
	Usage  struct {
		CompletionTokens float64 `json:"completion_tokens,omitempty"`
		PromptTokens     float64 `json:"prompt_tokens,omitempty"`
		TotalTokens      float64 `json:"total_tokens,omitempty"`
	} `json:"usage,omitempty"`
}
type EmbeddingRequest ¶
EmbeddingRequest is a request to create an embedding.
type FinishReason ¶
type FinishReason string
const (
	FinishReasonStop          FinishReason = "stop"
	FinishReasonLength        FinishReason = "length"
	FinishReasonFunctionCall  FinishReason = "function_call"
	FinishReasonToolCalls     FinishReason = "tool_calls"
	FinishReasonContentFilter FinishReason = "content_filter"
	FinishReasonNull          FinishReason = "null"
)
</const>
func (FinishReason) MarshalJSON ¶
func (r FinishReason) MarshalJSON() ([]byte, error)
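The custom marshaller presumably exists so the "null" sentinel serializes as a JSON null rather than the string "null". A local sketch under that assumption (the lowercase type and its method below are illustrative, not the package's implementation):

```go
package main

import "fmt"

type finishReason string

const (
	finishReasonStop finishReason = "stop"
	finishReasonNull finishReason = "null"
)

// MarshalJSON sketches the assumed behavior: the "null" sentinel (and
// the empty string) become a bare JSON null; everything else is a
// quoted string.
func (r finishReason) MarshalJSON() ([]byte, error) {
	if r == finishReasonNull || r == "" {
		return []byte("null"), nil
	}
	return []byte(`"` + string(r) + `"`), nil
}

func main() {
	a, _ := finishReasonStop.MarshalJSON()
	b, _ := finishReasonNull.MarshalJSON()
	fmt.Println(string(a), string(b)) // "stop" null
}
```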
type FunctionCall ¶
type FunctionCall struct {
	// Name is the name of the function to call.
	Name string `json:"name"`
	// Arguments is the set of arguments to pass to the function.
	Arguments string `json:"arguments"`
}
FunctionCall is a call to a function.
type FunctionCallBehavior ¶
type FunctionCallBehavior string
FunctionCallBehavior is the behavior to use when calling functions.
const (
	// FunctionCallBehaviorUnspecified is the empty string.
	FunctionCallBehaviorUnspecified FunctionCallBehavior = ""
	// FunctionCallBehaviorNone will not call any functions.
	FunctionCallBehaviorNone FunctionCallBehavior = "none"
	// FunctionCallBehaviorAuto will call functions automatically.
	FunctionCallBehaviorAuto FunctionCallBehavior = "auto"
)
type FunctionDefinition ¶
type FunctionDefinition struct {
	// Name is the name of the function.
	Name string `json:"name"`
	// Description is a description of the function.
	Description string `json:"description,omitempty"`
	// Parameters is a list of parameters for the function.
	Parameters any `json:"parameters"`
}
FunctionDefinition is a definition of a function that can be called by the model.
type LogProb ¶
type LogProb struct {
	Token   string  `json:"token"`
	LogProb float64 `json:"logprob"`
	Bytes   []byte  `json:"bytes,omitempty"` // Omitting the field if it is null

	// TopLogProbs is a list of the most likely tokens and their log probability, at this token position.
	// In rare cases, there may be fewer than the number of requested top_logprobs returned.
	TopLogProbs []TopLogProbs `json:"top_logprobs"`
}
LogProb represents the probability information for a token.
type LogProbs ¶
type LogProbs struct {
	// Content is a list of message content tokens with log probability information.
	Content []LogProb `json:"content"`
}
LogProbs is the top-level structure containing the log probability information.
type ResponseFormat ¶
type ResponseFormat struct {
Type string `json:"type"`
}
ResponseFormat is the format of the response.
type StreamOptions ¶
type StreamOptions struct {
	// If set, an additional chunk will be streamed before the data: [DONE] message.
	// The usage field on this chunk shows the token usage statistics for the entire request,
	// and the choices field will always be an empty array.
	// All other chunks will also include a usage field, but with a null value.
	IncludeUsage bool `json:"include_usage,omitempty"`
}
type StreamedChatResponsePayload ¶
type StreamedChatResponsePayload struct {
	ID      string  `json:"id,omitempty"`
	Created float64 `json:"created,omitempty"`
	Model   string  `json:"model,omitempty"`
	Stop    bool    `json:"stop,omitempty"`
	Object  string  `json:"object,omitempty"`
	Choices []struct {
		Index float64 `json:"index,omitempty"`
		Delta struct {
			Role         string        `json:"role,omitempty"`
			Content      string        `json:"content,omitempty"`
			FunctionCall *FunctionCall `json:"function_call,omitempty"`
			// ToolCalls is a list of tools that were called in the message.
			ToolCalls []*ToolCall `json:"tool_calls,omitempty"`
		} `json:"delta,omitempty"`
		FinishReason FinishReason `json:"finish_reason,omitempty"`
	} `json:"choices,omitempty"`
	SystemFingerprint string `json:"system_fingerprint"`

	// An optional field that will only be present when you set stream_options: {"include_usage": true} in your request.
	// When present, it contains a null value except for the last chunk, which contains the token usage statistics
	// for the entire request.
	Usage *Usage `json:"usage,omitempty"`

	Error error `json:"-"` // use for error handling only
}
StreamedChatResponsePayload is a chunk from the stream.
type Tool ¶
type Tool struct {
	Type     ToolType           `json:"type"`
	Function FunctionDefinition `json:"function,omitempty"`
}
Tool is a tool to use in a chat request.
type ToolCall ¶
type ToolCall struct {
	ID       string       `json:"id,omitempty"`
	Type     ToolType     `json:"type"`
	Function ToolFunction `json:"function,omitempty"`
}
ToolCall is a call to a tool.
type ToolChoice ¶
type ToolChoice struct {
	Type     ToolType     `json:"type"`
	Function ToolFunction `json:"function,omitempty"`
}
ToolChoice is a choice of a tool to use.
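This is why ChatRequest.ToolChoice is typed `any`: it can hold either the string "none"/"auto" or a ToolChoice object naming one specific tool. A sketch of both wire shapes using local stand-ins (the request wrapper and tool name are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local stand-ins mirroring ToolChoice and ToolFunction.
type toolFunction struct {
	Name string `json:"name"`
}

type toolChoice struct {
	Type     string       `json:"type"`
	Function toolFunction `json:"function,omitempty"`
}

// request mimics the `any`-typed tool_choice field on ChatRequest.
type request struct {
	ToolChoice any `json:"tool_choice,omitempty"`
}

func encode(tc any) string {
	b, _ := json.Marshal(request{ToolChoice: tc})
	return string(b)
}

func main() {
	// String form: let the model decide.
	fmt.Println(encode("auto"))
	// Object form: force one specific tool.
	fmt.Println(encode(toolChoice{Type: "function", Function: toolFunction{Name: "get_weather"}}))
}
```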
type ToolFunction ¶
ToolFunction is a function to be called in a tool choice.
type ToolType ¶
type ToolType string
ToolType is the type of a tool.
const (
ToolTypeFunction ToolType = "function"
)