Documentation ¶
Index ¶
- Constants
- type AudioTranscriptionResponse
- type ChatCompletionRequestParams
- type Client
- func (oc *Client) AudioTranscription(requestParams *WhisperParams) (*AudioTranscriptionResponse, error)
- func (oc *Client) ChatGPT(requestParams *ChatCompletionRequestParams) (*chat.ChatCompletion, *OpenAIError)
- func (oc *Client) Context() context.Context
- func (oc *Client) TextToSpeech(requestParams *TextToSpeechParams) (*TTSResult, error)
- type ClientWithContext
- type OpenAIClient
- type OpenAIError
- type TTSResult
- type TextToSpeechParams
- type WhisperParams
Constants ¶
const BASE_URL = "https://api.openai.com/v1"
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AudioTranscriptionResponse ¶
type AudioTranscriptionResponse struct {
Text string `json:"text"`
}
type ChatCompletionRequestParams ¶
func (*ChatCompletionRequestParams) Response ¶
func (param *ChatCompletionRequestParams) Response(oc OpenAIClient) (*chat.ChatCompletion, *OpenAIError)
type Client ¶
type Client struct {
// contains filtered or unexported fields
}
func New ¶
New creates a new OpenAI client with the provided API key.
Example of use:
	apiKey := "your-api-key-here"
	client := openai.New(apiKey)
func (*Client) AudioTranscription ¶
func (oc *Client) AudioTranscription(requestParams *WhisperParams) (*AudioTranscriptionResponse, error)
AudioTranscription performs audio transcription using the specified model.
Example of use:
	client := openai.New("your-api-key")
	filename := "audio.mp3"
	audioFilePath := "./path/to/audio.mp3"
	resp, err := client.AudioTranscription(&openai.WhisperParams{
		Model:         "whisper-1",
		Filename:      filename,
		AudioFilePath: audioFilePath,
	})
	if err != nil {
		log.Fatalf("Error transcribing audio: %v", err)
	}
	fmt.Printf("Audio transcription: %s\n", resp.Text)
WhisperParams:
- Model: Name of the model to be used for audio transcription. See documentation for supported models: https://platform.openai.com/docs/models/model-endpoint-compatibility
- Filename: Name of the audio file.
- AudioFilePath: Path of the audio file on the local file system.
Returns:
A pointer to an AudioTranscriptionResponse containing the transcribed text, and an error if one occurred.
Possible errors:
- If an error occurs when opening the audio file.
- If an error occurs while creating the multipart form.
- If an error occurs while copying the file contents to the request body.
- If an error occurs when creating the HTTP request.
- If an error occurs when making the HTTP request to the OpenAI service.
- If an error occurs while decoding the JSON response from the OpenAI service.
- If the HTTP response status code is not 200 (OK).
func (*Client) ChatGPT ¶
func (oc *Client) ChatGPT(requestParams *ChatCompletionRequestParams) (*chat.ChatCompletion, *OpenAIError)
ChatGPT sends a chat request to GPT.
Example:
	// Create a new OpenAI client
	client := openai.New("your-api-key")

	// Define messages for the conversation
	messages := []chat.Message{
		{Role: "system", Content: "You are a helpful assistant"},
		{Role: "user", Content: "Hello"},
	}

	// Send the chat request to GPT
	res, err := client.ChatGPT(&openai.ChatCompletionRequestParams{
		Model:    "gpt-3.5-turbo",
		Messages: messages,
	})
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println("Response:", res)
func (*Client) TextToSpeech ¶
func (oc *Client) TextToSpeech(requestParams *TextToSpeechParams) (*TTSResult, error)
TextToSpeech converts the given text input into speech using the specified model and voice.
TextToSpeechParams:
- Model: A string representing the model to use for text-to-speech conversion.
- Input: The input text to be converted into speech.
- Voice: A string representing the voice to be used for speech synthesis.
Returns:
- *TTSResult: A pointer to a TTSResult struct containing the synthesized audio as an io.ReadCloser.
- error: An error if any occurred during the text-to-speech conversion process.
Example:
	ttsResult, err := openai.New("your-api-key").TextToSpeech(&openai.TextToSpeechParams{
		Model: "tts-1",
		Input: "hi",
		Voice: "onyx",
	})
	if err != nil {
		log.Fatal("Text-to-speech conversion failed:", err)
	}
	defer ttsResult.Audio.Close()

	// Use ttsResult.Audio for further processing, e.g., saving to a file
	// or streaming to a client.
type ClientWithContext ¶
func WithContext ¶
func WithContext(ctx context.Context, apiKey string) *ClientWithContext
func (*ClientWithContext) Context ¶
func (oc *ClientWithContext) Context() context.Context
type OpenAIClient ¶
type OpenAIClient interface {
	ChatGPT(*ChatCompletionRequestParams) (*chat.ChatCompletion, *OpenAIError)
	AudioTranscription(*WhisperParams) (*AudioTranscriptionResponse, error)
	TextToSpeech(*TextToSpeechParams) (*TTSResult, error)
	Context() context.Context
	// contains filtered or unexported methods
}
type OpenAIError ¶
type OpenAIError struct {
// contains filtered or unexported fields
}
func CreateBodyError ¶
func CreateBodyError(err error) *OpenAIError
func CreateRequestError ¶
func CreateRequestError(err error) *OpenAIError
func DecodeJSONError ¶
func DecodeJSONError(err error) *OpenAIError
func InvalidAPIKey ¶
func InvalidAPIKey() *OpenAIError
func RequestError ¶
func RequestError(statusCode int) *OpenAIError
func SendRequestError ¶
func SendRequestError(err error) *OpenAIError
func (*OpenAIError) Error ¶
func (oerr *OpenAIError) Error() string
func (*OpenAIError) StatusCode ¶
func (oerr *OpenAIError) StatusCode() int
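The constructors above suggest each OpenAIError pairs a message with an HTTP status. Since the real type's fields are unexported, the stand-in below is hypothetical; it only illustrates how a caller might branch on StatusCode():

```go
package main

import (
	"fmt"
	"net/http"
)

// apiError mimics the Error()/StatusCode() shape of OpenAIError.
type apiError struct {
	msg        string
	statusCode int
}

func (e *apiError) Error() string   { return e.msg }
func (e *apiError) StatusCode() int { return e.statusCode }

// handle picks a recovery strategy from the HTTP status carried by the error.
func handle(err *apiError) string {
	switch err.StatusCode() {
	case http.StatusUnauthorized:
		return "check the API key"
	case http.StatusTooManyRequests:
		return "back off and retry"
	default:
		return "unrecoverable: " + err.Error()
	}
}

func main() {
	fmt.Println(handle(&apiError{msg: "invalid api key", statusCode: 401}))
}
```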
type TTSResult ¶
type TTSResult struct {
Audio io.ReadCloser
}
type TextToSpeechParams ¶
type TextToSpeechParams struct {
Model, Input, Voice string
}
func (*TextToSpeechParams) Response ¶
func (params *TextToSpeechParams) Response(oc OpenAIClient) (*TTSResult, error)
type WhisperParams ¶
type WhisperParams struct {
Model, Filename, AudioFilePath string
}
func (*WhisperParams) Response ¶
func (params *WhisperParams) Response(oc OpenAIClient) (*AudioTranscriptionResponse, error)
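The recurring Response(oc OpenAIClient) methods on the params types suggest a parameter-object pattern: each params value dispatches itself to the matching client call. A minimal stand-in of that shape (all names below are hypothetical, not this package's API):

```go
package main

import "fmt"

// client is a narrowed stand-in for the OpenAIClient interface.
type client interface {
	transcribe(model string) string
}

type fakeClient struct{}

func (fakeClient) transcribe(model string) string { return "transcribed with " + model }

// whisperParams dispatches itself against whatever client it is handed,
// mirroring how WhisperParams.Response takes an OpenAIClient.
type whisperParams struct{ Model string }

func (p *whisperParams) response(c client) string { return c.transcribe(p.Model) }

func main() {
	p := &whisperParams{Model: "whisper-1"}
	fmt.Println(p.response(fakeClient{}))
}
```

Accepting the interface rather than the concrete Client is what makes these methods easy to exercise with a fake in tests.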
Source Files ¶