Documentation ¶
Index ¶
- func ToPtr[T any](i T) *T
- type APIError
- type APIErrorResponse
- type Client
- type ClientOption
- type CompletionRequest
- type CompletionResponse
- type CompletionResponseChoice
- type CompletionResponseUsage
- type EditsRequest
- type EditsResponse
- type EditsResponseChoice
- type EditsResponseUsage
- type EmbeddingsRequest
- type EmbeddingsResponse
- type EmbeddingsResult
- type EmbeddingsUsage
- type EngineObject
- type EnginesResponse
- type LogprobResult
- type SearchData
- type SearchRequest
- type SearchResponse
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
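func ToPtr ¶
func ToPtr[T any](i T) *T
ToPtr returns a pointer to i. It is convenient for populating the optional pointer fields (for example MaxTokens, Temperature, TopP, N, and LogProbs) on the request types below.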
Types ¶
type APIError ¶
type APIError struct {
    StatusCode int    `json:"status_code"`
    Message    string `json:"message"`
    Type       string `json:"type"`
}
APIError represents an error that occurred on an API request.
type APIErrorResponse ¶
type APIErrorResponse struct {
Error APIError `json:"error"`
}
APIErrorResponse is the full error response returned by the API.
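As an illustration of how these error types might be consumed, here is a minimal sketch. It assumes the package is imported as gpt3 (with context, errors, fmt, and log also imported), that APIError satisfies the error interface (its Error method is not shown in this excerpt), and that client is a Client value as documented below; handleCompletion is a hypothetical helper name.

func handleCompletion(ctx context.Context, client gpt3.Client, req gpt3.CompletionRequest) {
    resp, err := client.Completion(ctx, req)
    if err != nil {
        // Try to recover the structured API error for better diagnostics.
        var apiErr gpt3.APIError
        if errors.As(err, &apiErr) {
            log.Printf("API error %d (%s): %s", apiErr.StatusCode, apiErr.Type, apiErr.Message)
            return
        }
        log.Printf("request failed: %v", err)
        return
    }
    fmt.Printf("got %d choices\n", len(resp.Choices))
}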
type Client ¶
type Client interface {
    // Completion creates a completion with the default engine. This is the main endpoint of the API
    // which auto-completes based on the given prompt.
    Completion(ctx context.Context, request CompletionRequest) (*CompletionResponse, error)

    // CompletionStream creates a completion with the default engine and streams the results through
    // multiple calls to onData.
    CompletionStream(ctx context.Context, request CompletionRequest, onData func(*CompletionResponse)) error

    // Given a prompt and an instruction, the model will return an edited version of the prompt.
    Edits(ctx context.Context, request EditsRequest) (*EditsResponse, error)

    // Search performs a semantic search over a list of documents with the default engine.
    Search(ctx context.Context, request SearchRequest) (*SearchResponse, error)

    // Returns an embedding using the provided request.
    Embeddings(ctx context.Context, request EmbeddingsRequest) (*EmbeddingsResponse, error)
}
A Client is an API client used to communicate with the OpenAI GPT-3 APIs.
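A minimal end-to-end sketch of using the interface. The constructor and import path below are assumptions (neither appears in this excerpt); substitute whatever constructor and module path this package actually provides.

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    // Import path is an assumption; point it at the module that provides
    // the Client interface documented here.
    gpt3 "github.com/PullRequestInc/go-gpt3"
)

func main() {
    ctx := context.Background()

    // NewClient is assumed; it is not part of this excerpt.
    client := gpt3.NewClient(os.Getenv("OPENAI_API_KEY"))

    resp, err := client.Completion(ctx, gpt3.CompletionRequest{
        Prompt:    []string{"Write a haiku about Go:"},
        MaxTokens: gpt3.ToPtr(64),
    })
    if err != nil {
        log.Fatal(err)
    }
    if len(resp.Choices) > 0 {
        fmt.Println(resp.Choices[0].Text)
    }
}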
type ClientOption ¶
type ClientOption func(*client) error
ClientOption is an option that can be passed when creating a new client.
func WithApiVersion ¶
func WithApiVersion(apiVersion string) ClientOption
WithApiVersion is a client option that allows you to override the default api version of the client
func WithHTTPClient ¶
func WithHTTPClient(httpClient *http.Client) ClientOption
WithHTTPClient allows you to override the internal http.Client used
func WithTimeout ¶
func WithTimeout(timeout time.Duration) ClientOption
WithTimeout is a client option that allows you to override the default timeout duration of requests for the client. The default is 30 seconds. If you are overriding the http client as well, just include the timeout there.
func WithUserAgent ¶
func WithUserAgent(userAgent string) ClientOption
WithUserAgent is a client option that allows you to override the default user agent of the client
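The options compose at construction time. A sketch, again assuming a NewClient constructor that accepts variadic ClientOption values and that time is imported; newConfiguredClient is a hypothetical helper name.

func newConfiguredClient(apiKey string) gpt3.Client {
    // The ClientOption values below are the documented helpers; they are
    // applied when the client is constructed.
    return gpt3.NewClient(
        apiKey,
        gpt3.WithTimeout(10*time.Second), // overrides the 30-second default
        gpt3.WithUserAgent("my-app/1.0"),
        // gpt3.WithHTTPClient(custom),   // alternatively, supply your own *http.Client
    )
}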
type CompletionRequest ¶
type CompletionRequest struct {
    // A list of string prompts to use.
    // TODO there are other prompt types here for using token integers that we could add support for.
    Prompt []string `json:"prompt"`
    // How many tokens to complete up to. Max of 512.
    MaxTokens *int `json:"max_tokens,omitempty"`
    // Sampling temperature to use
    Temperature *float32 `json:"temperature,omitempty"`
    // Alternative to temperature for nucleus sampling
    TopP *float32 `json:"top_p,omitempty"`
    // How many choices to create for each prompt
    N *int `json:"n"`
    // Include the probabilities of most likely tokens
    LogProbs *int `json:"logprobs"`
    // Echo back the prompt in addition to the completion
    Echo bool `json:"echo"`
    // Up to 4 sequences where the API will stop generating tokens. Response will not contain the stop sequence.
    Stop []string `json:"stop,omitempty"`
    // PresencePenalty number between 0 and 1 that penalizes tokens that have already appeared in the text so far.
    PresencePenalty float32 `json:"presence_penalty"`
    // FrequencyPenalty number between 0 and 1 that penalizes tokens on existing frequency in the text so far.
    FrequencyPenalty float32 `json:"frequency_penalty"`
    // Whether to stream back results or not. Don't set this value in the request yourself,
    // as it will be overridden depending on whether you use the CompletionStream or Completion method.
    Stream bool `json:"stream,omitempty"`
}
CompletionRequest is a request for the completions API
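Because the optional fields are pointers, the ToPtr helper listed in the index is a convenient way to set them. Below is a sketch of a streaming call; streamCompletion is a hypothetical name and the imports match the earlier sketches.

// streamCompletion prints each chunk of generated text as it arrives.
// Stream is deliberately left unset: CompletionStream overrides it.
func streamCompletion(ctx context.Context, client gpt3.Client, prompt string) error {
    req := gpt3.CompletionRequest{
        Prompt:      []string{prompt},
        MaxTokens:   gpt3.ToPtr(128),
        Temperature: gpt3.ToPtr(float32(0.7)),
        Stop:        []string{"\n\n"},
    }
    return client.CompletionStream(ctx, req, func(resp *gpt3.CompletionResponse) {
        if len(resp.Choices) > 0 {
            fmt.Print(resp.Choices[0].Text)
        }
    })
}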
type CompletionResponse ¶
type CompletionResponse struct {
    ID      string                     `json:"id"`
    Object  string                     `json:"object"`
    Created int                        `json:"created"`
    Model   string                     `json:"model"`
    Choices []CompletionResponseChoice `json:"choices"`
    Usage   CompletionResponseUsage    `json:"usage"`
}
CompletionResponse is the full response from a request to the completions API
type CompletionResponseChoice ¶
type CompletionResponseChoice struct {
    Text         string        `json:"text"`
    Index        int           `json:"index"`
    LogProbs     LogprobResult `json:"logprobs"`
    FinishReason string        `json:"finish_reason"`
}
CompletionResponseChoice is one of the choices returned in the response to the Completions API
type CompletionResponseUsage ¶
type CompletionResponseUsage struct {
    PromptTokens     int `json:"prompt_tokens"`
    CompletionTokens int `json:"completion_tokens"`
    TotalTokens      int `json:"total_tokens"`
}
CompletionResponseUsage reports how many tokens the completion request used.
type EditsRequest ¶
type EditsRequest struct {
    // ID of the model to use. You can use the List models API to see all of your available models,
    // or see our Model overview for descriptions of them.
    Model string `json:"model"`
    // The input text to use as a starting point for the edit.
    Input string `json:"input"`
    // The instruction that tells the model how to edit the prompt.
    Instruction string `json:"instruction"`
    // Sampling temperature to use
    Temperature *float32 `json:"temperature,omitempty"`
    // Alternative to temperature for nucleus sampling
    TopP *float32 `json:"top_p,omitempty"`
    // How many edits to generate for the input and instruction. Defaults to 1
    N *int `json:"n"`
}
EditsRequest is a request for the edits API
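A sketch of calling Edits; fixTypos is a hypothetical name and the model ID is only an example, not a value this package prescribes.

// fixTypos asks the edits endpoint to correct spelling in the input text
// and returns the raw response for the caller to inspect.
func fixTypos(ctx context.Context, client gpt3.Client, text string) (*gpt3.EditsResponse, error) {
    return client.Edits(ctx, gpt3.EditsRequest{
        Model:       "text-davinci-edit-001", // example model ID only
        Input:       text,
        Instruction: "Fix the spelling mistakes.",
        Temperature: gpt3.ToPtr(float32(0)), // deterministic edits
        N:           gpt3.ToPtr(1),
    })
}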
type EditsResponse ¶
type EditsResponse struct {
    Object  string                `json:"object"`
    Created int                   `json:"created"`
    Choices []EditsResponseChoice `json:"choices"`
    Usage   EditsResponseUsage    `json:"usage"`
}
EditsResponse is the full response from a request to the edits API
type EditsResponseChoice ¶
EditsResponseChoice is one of the choices returned in the response to the Edits API
type EditsResponseUsage ¶
type EditsResponseUsage struct {
    PromptTokens     int `json:"prompt_tokens"`
    CompletionTokens int `json:"completion_tokens"`
    TotalTokens      int `json:"total_tokens"`
}
EditsResponseUsage is a structure used in the response from a request to the edits API
type EmbeddingsRequest ¶
type EmbeddingsRequest struct {
    // Input text to get embeddings for, encoded as a string or array of tokens. To get embeddings
    // for multiple inputs in a single request, pass an array of strings or array of token arrays.
    // Each input must not exceed 2048 tokens in length.
    Input []string `json:"input"`
    // ID of the model to use
    Model string `json:"model"`
    // The request user is an optional parameter meant to be used to trace abusive requests
    // back to the originating user. OpenAI states:
    // "The [user] IDs should be a string that uniquely identifies each user. We recommend hashing
    // their username or email address, in order to avoid sending us any identifying information.
    // If you offer a preview of your product to non-logged in users, you can send a session ID
    // instead."
    User string `json:"user,omitempty"`
}
EmbeddingsRequest is a request for the Embeddings API
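A sketch of fetching embeddings for several inputs in one request; embedTexts is a hypothetical name and the model ID is only an example.

// embedTexts returns one embedding vector per input, in response order.
// Each EmbeddingsResult also carries an Index identifying its input.
func embedTexts(ctx context.Context, client gpt3.Client, texts []string) ([][]float64, error) {
    resp, err := client.Embeddings(ctx, gpt3.EmbeddingsRequest{
        Input: texts,
        Model: "text-embedding-ada-002", // example model ID only
    })
    if err != nil {
        return nil, err
    }
    vectors := make([][]float64, 0, len(resp.Data))
    for _, d := range resp.Data {
        vectors = append(vectors, d.Embedding)
    }
    return vectors, nil
}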
type EmbeddingsResponse ¶
type EmbeddingsResponse struct {
    Object string             `json:"object"`
    Data   []EmbeddingsResult `json:"data"`
    Usage  EmbeddingsUsage    `json:"usage"`
}
EmbeddingsResponse is the response from a create embeddings request.
See: https://beta.openai.com/docs/api-reference/embeddings/create
type EmbeddingsResult ¶
type EmbeddingsResult struct {
    // The type of object returned (e.g., "list", "object")
    Object string `json:"object"`
    // The embedding data for the input
    Embedding []float64 `json:"embedding"`
    Index     int       `json:"index"`
}
The inner result of a create embeddings request, containing the embeddings for a single input.
type EmbeddingsUsage ¶
type EmbeddingsUsage struct {
    // The number of tokens used by the prompt
    PromptTokens int `json:"prompt_tokens"`
    // The total tokens used
    TotalTokens int `json:"total_tokens"`
}
The usage stats for an embeddings response
type EngineObject ¶
type EngineObject struct {
    ID     string `json:"id"`
    Object string `json:"object"`
    Owner  string `json:"owner"`
    Ready  bool   `json:"ready"`
}
EngineObject is contained in an engine response.
type EnginesResponse ¶
type EnginesResponse struct {
    Data   []EngineObject `json:"data"`
    Object string         `json:"object"`
}
EnginesResponse is returned from the Engines API
type LogprobResult ¶
type LogprobResult struct {
    Tokens        []string             `json:"tokens"`
    TokenLogprobs []float32            `json:"token_logprobs"`
    TopLogprobs   []map[string]float32 `json:"top_logprobs"`
    TextOffset    []int                `json:"text_offset"`
}
LogprobResult represents the logprob result of a Choice.
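Logprobs are only populated when the request asks for them via the LogProbs field. A sketch; topTokens is a hypothetical name and the imports match the earlier sketches.

// topTokens requests the top 5 logprobs per generated token and prints
// the alternatives the model considered at each position.
func topTokens(ctx context.Context, client gpt3.Client, prompt string) error {
    resp, err := client.Completion(ctx, gpt3.CompletionRequest{
        Prompt:    []string{prompt},
        MaxTokens: gpt3.ToPtr(16),
        LogProbs:  gpt3.ToPtr(5),
    })
    if err != nil {
        return err
    }
    if len(resp.Choices) == 0 {
        return nil
    }
    lp := resp.Choices[0].LogProbs
    for i, tok := range lp.Tokens {
        fmt.Printf("%q logprob=%.3f alternatives=%v\n", tok, lp.TokenLogprobs[i], lp.TopLogprobs[i])
    }
    return nil
}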
type SearchData ¶
type SearchData struct {
    Document int     `json:"document"`
    Object   string  `json:"object"`
    Score    float64 `json:"score"`
}
SearchData is a single search result from the document search API
type SearchRequest ¶
SearchRequest is a request for the document search API
type SearchResponse ¶
type SearchResponse struct {
    Data   []SearchData `json:"data"`
    Object string       `json:"object"`
}
SearchResponse is the full response from a request to the document search API
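Because the SearchRequest fields are not shown in this excerpt, the sketch below takes an already-built request and only demonstrates consuming the response; bestMatch is a hypothetical name.

// bestMatch runs a semantic search and returns the index of the
// highest-scoring document along with its score.
func bestMatch(ctx context.Context, client gpt3.Client, req gpt3.SearchRequest) (int, float64, error) {
    resp, err := client.Search(ctx, req)
    if err != nil {
        return 0, 0, err
    }
    if len(resp.Data) == 0 {
        return -1, 0, nil
    }
    best := resp.Data[0]
    for _, d := range resp.Data[1:] {
        if d.Score > best.Score {
            best = d
        }
    }
    return best.Document, best.Score, nil
}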