Documentation ¶
Index ¶
- Constants
- func Float32Ptr(f float32) *float32
- func IntPtr(i int) *int
- type APIError
- type APIErrorResponse
- type ChatCompletionFunctionParameters
- type ChatCompletionFunctions
- type ChatCompletionRequest
- type ChatCompletionRequestMessage
- type ChatCompletionResponse
- type ChatCompletionResponseChoice
- type ChatCompletionResponseMessage
- type ChatCompletionStreamResponse
- type ChatCompletionStreamResponseChoice
- type ChatCompletionsResponseUsage
- type Client
- type ClientOption
- type CompletionRequest
- type CompletionResponse
- type CompletionResponseChoice
- type CompletionResponseUsage
- type EditsRequest
- type EditsResponse
- type EditsResponseChoice
- type EditsResponseUsage
- type EmbeddingEngine
- type EmbeddingsRequest
- type EmbeddingsResponse
- type EmbeddingsResult
- type EmbeddingsUsage
- type EngineObject
- type EnginesResponse
- type Function
- type FunctionParameterPropertyMetadata
- type LogprobResult
- type ModerationCategoryResult
- type ModerationCategoryScores
- type ModerationRequest
- type ModerationResponse
- type ModerationResult
- type RateLimitHeaders
- type SearchData
- type SearchRequest
- type SearchResponse
Constants ¶
const (
	TextAda001Engine     = "text-ada-001"
	TextBabbage001Engine = "text-babbage-001"
	TextCurie001Engine   = "text-curie-001"
	TextDavinci001Engine = "text-davinci-001"
	TextDavinci002Engine = "text-davinci-002"
	TextDavinci003Engine = "text-davinci-003"
	AdaEngine            = "ada"
	BabbageEngine        = "babbage"
	CurieEngine          = "curie"
	DavinciEngine        = "davinci"
	DefaultEngine        = DavinciEngine
)
Engine Types
const (
	GPT3Dot5Turbo     = "gpt-3.5-turbo"
	GPT3Dot5Turbo0301 = "gpt-3.5-turbo-0301"
	GPT3Dot5Turbo0613 = "gpt-3.5-turbo-0613"

	TextSimilarityAda001     = "text-similarity-ada-001"
	TextSimilarityBabbage001 = "text-similarity-babbage-001"
	TextSimilarityCurie001   = "text-similarity-curie-001"
	TextSimilarityDavinci001 = "text-similarity-davinci-001"

	TextSearchAdaDoc001       = "text-search-ada-doc-001"
	TextSearchAdaQuery001     = "text-search-ada-query-001"
	TextSearchBabbageDoc001   = "text-search-babbage-doc-001"
	TextSearchBabbageQuery001 = "text-search-babbage-query-001"
	TextSearchCurieDoc001     = "text-search-curie-doc-001"
	TextSearchCurieQuery001   = "text-search-curie-query-001"
	TextSearchDavinciDoc001   = "text-search-davinci-doc-001"
	TextSearchDavinciQuery001 = "text-search-davinci-query-001"

	CodeSearchAdaCode001     = "code-search-ada-code-001"
	CodeSearchAdaText001     = "code-search-ada-text-001"
	CodeSearchBabbageCode001 = "code-search-babbage-code-001"
	CodeSearchBabbageText001 = "code-search-babbage-text-001"

	TextEmbeddingAda002 = "text-embedding-ada-002"
)
const (
	TextModerationLatest = "text-moderation-latest"
	TextModerationStable = "text-moderation-stable"
)
Variables ¶
This section is empty.
Functions ¶
func Float32Ptr ¶ added in v1.1.5
func Float32Ptr(f float32) *float32
Float32Ptr converts a float32 to a *float32 as a convenience.
func IntPtr ¶
func IntPtr(i int) *int
IntPtr converts an int to an *int as a convenience.
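These helpers exist because optional request fields (for example Temperature on ChatCompletionRequest) are pointers, so an unset field can be distinguished from a zero value. A minimal standalone sketch of how such pointer helpers behave (the lowercase names here are illustrative stand-ins, not the library's exported functions):

```go
package main

import "fmt"

// float32Ptr mirrors the behavior of the library's Float32Ptr helper:
// it returns a pointer to the given float32 so optional fields can be set.
func float32Ptr(f float32) *float32 { return &f }

// intPtr mirrors the behavior of the library's IntPtr helper.
func intPtr(i int) *int { return &i }

func main() {
	temp := float32Ptr(0.2) // an explicit temperature of 0.2
	n := intPtr(3)          // ask for 3 choices
	fmt.Println(*temp, *n)
}
```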
Types ¶
type APIError ¶ added in v1.1.2
type APIError struct {
	RateLimitHeaders RateLimitHeaders
	StatusCode       int    `json:"status_code"`
	Message          string `json:"message"`
	Type             string `json:"type"`
}
APIError represents an error that occurred on an API.
type APIErrorResponse ¶ added in v1.1.2
type APIErrorResponse struct {
Error APIError `json:"error"`
}
APIErrorResponse is the full error response that has been returned by an API.
type ChatCompletionFunctionParameters ¶ added in v1.1.16
type ChatCompletionFunctionParameters struct {
	Type        string                                       `json:"type"`
	Description string                                       `json:"description,omitempty"`
	Properties  map[string]FunctionParameterPropertyMetadata `json:"properties"`
	Required    []string                                     `json:"required"`
}
ChatCompletionFunctionParameters captures the metadata of the function parameter.
type ChatCompletionFunctions ¶ added in v1.1.16
type ChatCompletionFunctions struct {
	Name        string                           `json:"name"`
	Description string                           `json:"description,omitempty"`
	Parameters  ChatCompletionFunctionParameters `json:"parameters"`
}
ChatCompletionFunctions represents the functions the model may generate JSON inputs for.
type ChatCompletionRequest ¶ added in v1.1.12
type ChatCompletionRequest struct {
	// Model is the name of the model to use. If not specified, will default to gpt-3.5-turbo.
	Model string `json:"model"`

	// Messages is a list of messages to use as the context for the chat completion.
	Messages []ChatCompletionRequestMessage `json:"messages"`

	// Functions is a list of functions the model may generate JSON inputs for.
	Functions []ChatCompletionFunctions `json:"functions,omitempty"`

	// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the
	// output more random, while lower values like 0.2 will make it more focused and deterministic.
	Temperature *float32 `json:"temperature,omitempty"`

	// An alternative to sampling with temperature, called nucleus sampling, where the model
	// considers the results of the tokens with top_p probability mass. So 0.1 means only the
	// tokens comprising the top 10% probability mass are considered.
	TopP float32 `json:"top_p,omitempty"`

	// Number of responses to generate
	N int `json:"n,omitempty"`

	// Whether or not to stream responses back as they are generated
	Stream bool `json:"stream,omitempty"`

	// Up to 4 sequences where the API will stop generating further tokens.
	Stop []string `json:"stop,omitempty"`

	// MaxTokens is the maximum number of tokens to return.
	MaxTokens int `json:"max_tokens,omitempty"`

	// (-2, 2) Penalize tokens that haven't appeared yet in the history.
	PresencePenalty float32 `json:"presence_penalty,omitempty"`

	// (-2, 2) Penalize tokens that appear too frequently in the history.
	FrequencyPenalty float32 `json:"frequency_penalty,omitempty"`

	// Modify the probability of specific tokens appearing in the completion.
	LogitBias map[string]float32 `json:"logit_bias,omitempty"`

	// Can be used to identify an end-user
	User string `json:"user,omitempty"`
}
ChatCompletionRequest is a request for the chat completion API
type ChatCompletionRequestMessage ¶ added in v1.1.12
type ChatCompletionRequestMessage struct {
	// Role is the role of the message. Can be "system", "user", or "assistant"
	Role string `json:"role"`

	// Content is the content of the message
	Content string `json:"content"`

	// FunctionCall is the name and arguments of a function that should be called, as generated by the model.
	FunctionCall *Function `json:"function_call,omitempty"`

	// Name is the name of the author of this message. `name` is required if role is `function`,
	// and it should be the name of the function whose response is in the `content`.
	Name string `json:"name,omitempty"`
}
ChatCompletionRequestMessage is a message to use as the context for the chat completion API
type ChatCompletionResponse ¶ added in v1.1.12
type ChatCompletionResponse struct {
	RateLimitHeaders RateLimitHeaders
	ID               string                         `json:"id"`
	Object           string                         `json:"object"`
	Created          int                            `json:"created"`
	Model            string                         `json:"model"`
	Choices          []ChatCompletionResponseChoice `json:"choices"`
	Usage            ChatCompletionsResponseUsage   `json:"usage"`
}
ChatCompletionResponse is the full response from a request to the Chat Completions API
type ChatCompletionResponseChoice ¶ added in v1.1.12
type ChatCompletionResponseChoice struct {
	Index        int                           `json:"index"`
	FinishReason string                        `json:"finish_reason"`
	Message      ChatCompletionResponseMessage `json:"message"`
}
ChatCompletionResponseChoice is one of the choices returned in the response to the Chat Completions API
type ChatCompletionResponseMessage ¶ added in v1.1.12
type ChatCompletionResponseMessage struct {
	Role         string    `json:"role"`
	Content      string    `json:"content"`
	FunctionCall *Function `json:"function_call,omitempty"`
}
ChatCompletionResponseMessage is a message returned in the response to the Chat Completions API
type ChatCompletionStreamResponse ¶ added in v1.1.13
type ChatCompletionStreamResponse struct {
	ID      string                               `json:"id"`
	Object  string                               `json:"object"`
	Created int                                  `json:"created"`
	Model   string                               `json:"model"`
	Choices []ChatCompletionStreamResponseChoice `json:"choices"`
	Usage   ChatCompletionsResponseUsage         `json:"usage"`
}
type ChatCompletionStreamResponseChoice ¶ added in v1.1.13
type ChatCompletionStreamResponseChoice struct {
	Index        int                           `json:"index"`
	FinishReason string                        `json:"finish_reason"`
	Delta        ChatCompletionResponseMessage `json:"delta"`
}
ChatCompletionStreamResponseChoice is one of the choices returned in a streaming response from the Chat Completions API.
type ChatCompletionsResponseUsage ¶ added in v1.1.12
type ChatCompletionsResponseUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
ChatCompletionsResponseUsage is the object that reports how many tokens the chat completion request used.
type Client ¶
type Client interface {
	// Engines lists the currently available engines, and provides basic information about each
	// option such as the owner and availability.
	Engines(ctx context.Context) (*EnginesResponse, error)

	// Engine retrieves an engine instance, providing basic information about the engine such
	// as the owner and availability.
	Engine(ctx context.Context, engine string) (*EngineObject, error)

	// ChatCompletion creates a completion with the Chat completion endpoint which
	// is what powers the ChatGPT experience.
	ChatCompletion(ctx context.Context, request ChatCompletionRequest) (*ChatCompletionResponse, error)

	// ChatCompletionStream creates a completion with the Chat completion endpoint and streams
	// the results through multiple calls to onData.
	ChatCompletionStream(ctx context.Context, request ChatCompletionRequest, onData func(*ChatCompletionStreamResponse) error) error

	// Completion creates a completion with the default engine. This is the main endpoint of the API
	// which auto-completes based on the given prompt.
	Completion(ctx context.Context, request CompletionRequest) (*CompletionResponse, error)

	// CompletionStream creates a completion with the default engine and streams the results through
	// multiple calls to onData.
	CompletionStream(ctx context.Context, request CompletionRequest, onData func(*CompletionResponse)) error

	// CompletionWithEngine is the same as Completion except it allows overriding the client's default engine.
	CompletionWithEngine(ctx context.Context, engine string, request CompletionRequest) (*CompletionResponse, error)

	// CompletionStreamWithEngine is the same as CompletionStream except it allows overriding the client's default engine.
	CompletionStreamWithEngine(ctx context.Context, engine string, request CompletionRequest, onData func(*CompletionResponse)) error

	// Edits: given a prompt and an instruction, the model will return an edited version of the prompt.
	Edits(ctx context.Context, request EditsRequest) (*EditsResponse, error)

	// Search performs a semantic search over a list of documents with the default engine.
	Search(ctx context.Context, request SearchRequest) (*SearchResponse, error)

	// SearchWithEngine performs a semantic search over a list of documents with the specified engine.
	SearchWithEngine(ctx context.Context, engine string, request SearchRequest) (*SearchResponse, error)

	// Embeddings returns an embedding using the provided request.
	Embeddings(ctx context.Context, request EmbeddingsRequest) (*EmbeddingsResponse, error)

	// Moderation performs a moderation check on the given text against an OpenAI classifier to determine
	// whether the provided content complies with OpenAI's usage policies.
	Moderation(ctx context.Context, request ModerationRequest) (*ModerationResponse, error)
}
A Client is an API client to communicate with the OpenAI GPT-3 APIs.
func NewClient ¶
func NewClient(apiKey string, options ...ClientOption) Client
NewClient returns a new OpenAI GPT-3 API client. An apiKey is required to use the client
type ClientOption ¶
type ClientOption func(*client) error
ClientOption are options that can be passed when creating a new client
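ClientOption is Go's functional-options pattern: each option is a function that mutates the unexported client and may return an error. A standalone sketch of how such options typically compose (the client struct, field names, and defaults here are illustrative, not the library's internals):

```go
package main

import "fmt"

// client is an illustrative stand-in; the library's internal client struct is unexported.
type client struct {
	baseURL       string
	defaultEngine string
}

type clientOption func(*client) error

// withBaseURL sketches an option like WithBaseURL.
func withBaseURL(u string) clientOption {
	return func(c *client) error { c.baseURL = u; return nil }
}

// withDefaultEngine sketches an option like WithDefaultEngine.
func withDefaultEngine(e string) clientOption {
	return func(c *client) error { c.defaultEngine = e; return nil }
}

// newClient applies options over defaults, the way NewClient consumes ClientOption values.
func newClient(opts ...clientOption) (*client, error) {
	c := &client{baseURL: "https://api.openai.com/v1", defaultEngine: "davinci"}
	for _, opt := range opts {
		if err := opt(c); err != nil {
			return nil, err
		}
	}
	return c, nil
}

func main() {
	c, _ := newClient(withDefaultEngine("text-davinci-003"))
	fmt.Println(c.baseURL, c.defaultEngine)
}
```

The error return lets an option reject invalid input (for example, a malformed base URL) at construction time rather than at request time.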
func WithBaseURL ¶
func WithBaseURL(baseURL string) ClientOption
WithBaseURL is a client option that allows you to override the default base url of the client. The default base url is "https://api.openai.com/v1"
func WithDefaultEngine ¶
func WithDefaultEngine(engine string) ClientOption
WithDefaultEngine is a client option that allows you to override the default engine of the client
func WithHTTPClient ¶ added in v1.1.2
func WithHTTPClient(httpClient *http.Client) ClientOption
WithHTTPClient allows you to override the internal http.Client used
func WithOrg ¶ added in v1.1.4
func WithOrg(id string) ClientOption
WithOrg is a client option that allows you to override the organization ID
func WithTimeout ¶
func WithTimeout(timeout time.Duration) ClientOption
WithTimeout is a client option that allows you to override the default timeout duration of requests for the client. The default is 30 seconds. If you are overriding the http client as well, just include the timeout there.
func WithUserAgent ¶
func WithUserAgent(userAgent string) ClientOption
WithUserAgent is a client option that allows you to override the default user agent of the client
type CompletionRequest ¶
type CompletionRequest struct {
	// A list of string prompts to use.
	// TODO there are other prompt types here for using token integers that we could add support for.
	Prompt []string `json:"prompt"`

	// How many tokens to complete up to. Max of 512
	MaxTokens *int `json:"max_tokens,omitempty"`

	// Sampling temperature to use
	Temperature *float32 `json:"temperature,omitempty"`

	// Alternative to temperature for nucleus sampling
	TopP *float32 `json:"top_p,omitempty"`

	// How many choices to create for each prompt
	N *int `json:"n"`

	// Include the probabilities of most likely tokens
	LogProbs *int `json:"logprobs"`

	// Echo back the prompt in addition to the completion
	Echo bool `json:"echo"`

	// Up to 4 sequences where the API will stop generating tokens. Response will not contain the stop sequence.
	Stop []string `json:"stop,omitempty"`

	// PresencePenalty number between 0 and 1 that penalizes tokens that have already appeared in the text so far.
	PresencePenalty float32 `json:"presence_penalty"`

	// FrequencyPenalty number between 0 and 1 that penalizes tokens on existing frequency in the text so far.
	FrequencyPenalty float32 `json:"frequency_penalty"`

	// Whether to stream back results or not. Don't set this value in the request yourself
	// as it will be overridden depending on whether you use the CompletionStream or Completion methods.
	Stream bool `json:"stream,omitempty"`
}
CompletionRequest is a request for the completions API
type CompletionResponse ¶
type CompletionResponse struct {
	RateLimitHeaders RateLimitHeaders
	ID               string                     `json:"id"`
	Object           string                     `json:"object"`
	Created          int                        `json:"created"`
	Model            string                     `json:"model"`
	Choices          []CompletionResponseChoice `json:"choices"`
	Usage            CompletionResponseUsage    `json:"usage"`
}
CompletionResponse is the full response from a request to the completions API
type CompletionResponseChoice ¶
type CompletionResponseChoice struct {
	Text         string        `json:"text"`
	Index        int           `json:"index"`
	LogProbs     LogprobResult `json:"logprobs"`
	FinishReason string        `json:"finish_reason"`
}
CompletionResponseChoice is one of the choices returned in the response to the Completions API
type CompletionResponseUsage ¶ added in v1.1.10
type CompletionResponseUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
CompletionResponseUsage is the object that reports how many tokens the completion request used.
type EditsRequest ¶ added in v1.1.7
type EditsRequest struct {
	// ID of the model to use. You can use the List models API to see all of your available models,
	// or see our Model overview for descriptions of them.
	Model string `json:"model"`

	// The input text to use as a starting point for the edit.
	Input string `json:"input"`

	// The instruction that tells the model how to edit the prompt.
	Instruction string `json:"instruction"`

	// Sampling temperature to use
	Temperature *float32 `json:"temperature,omitempty"`

	// Alternative to temperature for nucleus sampling
	TopP *float32 `json:"top_p,omitempty"`

	// How many edits to generate for the input and instruction. Defaults to 1
	N *int `json:"n"`
}
EditsRequest is a request for the edits API
type EditsResponse ¶ added in v1.1.7
type EditsResponse struct {
	Object  string                `json:"object"`
	Created int                   `json:"created"`
	Choices []EditsResponseChoice `json:"choices"`
	Usage   EditsResponseUsage    `json:"usage"`
}
EditsResponse is the full response from a request to the edits API
type EditsResponseChoice ¶ added in v1.1.7
EditsResponseChoice is one of the choices returned in the response to the Edits API
type EditsResponseUsage ¶ added in v1.1.7
type EditsResponseUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
EditsResponseUsage is a structure used in the response from a request to the edits API
type EmbeddingEngine ¶ added in v1.1.8
type EmbeddingEngine string
type EmbeddingsRequest ¶ added in v1.1.8
type EmbeddingsRequest struct {
	// Input text to get embeddings for, encoded as a string or array of tokens. To get embeddings
	// for multiple inputs in a single request, pass an array of strings or array of token arrays.
	// Each input must not exceed 2048 tokens in length.
	Input []string `json:"input"`

	// ID of the model to use
	Model string `json:"model"`

	// The request user is an optional parameter meant to be used to trace abusive requests
	// back to the originating user. OpenAI states:
	// "The [user] IDs should be a string that uniquely identifies each user. We recommend hashing
	// their username or email address, in order to avoid sending us any identifying information.
	// If you offer a preview of your product to non-logged in users, you can send a session ID
	// instead."
	User string `json:"user,omitempty"`
}
EmbeddingsRequest is a request for the Embeddings API
type EmbeddingsResponse ¶ added in v1.1.8
type EmbeddingsResponse struct {
	Object string             `json:"object"`
	Data   []EmbeddingsResult `json:"data"`
	Usage  EmbeddingsUsage    `json:"usage"`
}
EmbeddingsResponse is the response from a create embeddings request.
See: https://beta.openai.com/docs/api-reference/embeddings/create
type EmbeddingsResult ¶ added in v1.1.8
type EmbeddingsResult struct {
	// The type of object returned (e.g., "list", "object")
	Object string `json:"object"`

	// The embedding data for the input
	Embedding []float64 `json:"embedding"`

	Index int `json:"index"`
}
The inner result of a create embeddings request, containing the embeddings for a single input.
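Each EmbeddingsResult carries its vector as a []float64. A common downstream step is comparing two such vectors by cosine similarity; the helper below is not part of this package, just a standalone sketch of typical use:

```go
package main

import (
	"fmt"
	"math"
)

// cosineSimilarity compares two embedding vectors of equal, non-zero length.
// Values near 1 mean the inputs were semantically similar; near 0, unrelated.
func cosineSimilarity(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	// Tiny illustrative vectors; real embeddings have hundreds of dimensions.
	fmt.Println(cosineSimilarity([]float64{1, 0}, []float64{1, 0}))
	fmt.Println(cosineSimilarity([]float64{1, 0}, []float64{0, 1}))
}
```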
type EmbeddingsUsage ¶ added in v1.1.8
type EmbeddingsUsage struct {
	// The number of tokens used by the prompt
	PromptTokens int `json:"prompt_tokens"`

	// The total tokens used
	TotalTokens int `json:"total_tokens"`
}
The usage stats for an embeddings response
type EngineObject ¶
type EngineObject struct {
	ID     string `json:"id"`
	Object string `json:"object"`
	Owner  string `json:"owner"`
	Ready  bool   `json:"ready"`
}
EngineObject is contained in an engine response.
type EnginesResponse ¶
type EnginesResponse struct {
	Data   []EngineObject `json:"data"`
	Object string         `json:"object"`
}
EnginesResponse is returned from the Engines API
type FunctionParameterPropertyMetadata ¶ added in v1.1.16
type FunctionParameterPropertyMetadata struct {
	Type        string   `json:"type"`
	Description string   `json:"description,omitempty"`
	Enum        []string `json:"enum,omitempty"`
}
FunctionParameterPropertyMetadata represents the metadata of the function parameter property.
type LogprobResult ¶ added in v1.1.6
type LogprobResult struct {
	Tokens        []string             `json:"tokens"`
	TokenLogprobs []float32            `json:"token_logprobs"`
	TopLogprobs   []map[string]float32 `json:"top_logprobs"`
	TextOffset    []int                `json:"text_offset"`
}
LogprobResult represents logprob result of Choice
type ModerationCategoryResult ¶ added in v1.1.15
type ModerationCategoryResult struct {
	Hate            bool `json:"hate"`
	HateThreatening bool `json:"hate/threatening"`
	SelfHarm        bool `json:"self-harm"`
	Sexual          bool `json:"sexual"`
	SexualMinors    bool `json:"sexual/minors"`
	Violence        bool `json:"violence"`
	ViolenceGraphic bool `json:"violence/graphic"`
}
ModerationCategoryResult shows the categories that the moderation classifier flagged the input text for.
type ModerationCategoryScores ¶ added in v1.1.15
type ModerationCategoryScores struct {
	Hate            float32 `json:"hate"`
	HateThreatening float32 `json:"hate/threatening"`
	SelfHarm        float32 `json:"self-harm"`
	Sexual          float32 `json:"sexual"`
	SexualMinors    float32 `json:"sexual/minors"`
	Violence        float32 `json:"violence"`
	ViolenceGraphic float32 `json:"violence/graphic"`
}
ModerationCategoryScores shows the classifier scores for each moderation category.
type ModerationRequest ¶ added in v1.1.15
type ModerationRequest struct {
	// Input is the input text that should be classified. Required.
	Input string `json:"input"`

	// Model is the content moderation model to use. If not specified, will default to OpenAI API
	// defaults, which is currently "text-moderation-latest".
	Model string `json:"model,omitempty"`
}
ModerationRequest is a request for the moderation API.
type ModerationResponse ¶ added in v1.1.15
type ModerationResponse struct {
	ID      string             `json:"id"`
	Model   string             `json:"model"`
	Results []ModerationResult `json:"results"`
}
ModerationResponse is the full response from a request to the moderation API.
type ModerationResult ¶ added in v1.1.15
type ModerationResult struct {
	Flagged        bool                     `json:"flagged"`
	Categories     ModerationCategoryResult `json:"categories"`
	CategoryScores ModerationCategoryScores `json:"category_scores"`
}
ModerationResult represents a single moderation classification result returned by the moderation API.
type RateLimitHeaders ¶ added in v1.1.18
type RateLimitHeaders struct {
	// x-ratelimit-limit-requests: The maximum number of requests that are permitted before exhausting the rate limit.
	LimitRequests int

	// x-ratelimit-limit-tokens: The maximum number of tokens that are permitted before exhausting the rate limit.
	LimitTokens int

	// x-ratelimit-remaining-requests: The remaining number of requests that are permitted before exhausting the rate limit.
	RemainingRequests int

	// x-ratelimit-remaining-tokens: The remaining number of tokens that are permitted before exhausting the rate limit.
	RemainingTokens int

	// x-ratelimit-reset-requests: The time until the rate limit (based on requests) resets to its initial state.
	ResetRequests time.Duration

	// x-ratelimit-reset-tokens: The time until the rate limit (based on tokens) resets to its initial state.
	ResetTokens time.Duration
}
RateLimitHeaders contain the HTTP response headers indicating rate limiting status
func NewRateLimitHeadersFromResponse ¶ added in v1.1.18
func NewRateLimitHeadersFromResponse(resp *http.Response) RateLimitHeaders
NewRateLimitHeadersFromResponse does a best effort to parse the rate limit information included in response headers
type SearchData ¶
type SearchData struct {
	Document int     `json:"document"`
	Object   string  `json:"object"`
	Score    float64 `json:"score"`
}
SearchData is a single search result from the document search API
type SearchRequest ¶
SearchRequest is a request for the document search API
type SearchResponse ¶
type SearchResponse struct {
	Data   []SearchData `json:"data"`
	Object string       `json:"object"`
}
SearchResponse is the full response from a request to the document search API