Documentation ¶
Overview ¶
Package groq is an unofficial API wrapper for GroqCloud https://groq.com/
groq requires Go 1.14 or newer
Index ¶
- Constants
- type ChatCompletionConfig
- type ChatCompletionRequest
- type ChatCompletionResponse
- type Conversation
- func NewConversation(systemPrompt string) *Conversation
- func (c *Conversation) AddMessages(messages ...Message)
- func (c *Conversation) ClearHistory()
- func (c *Conversation) Complete(g *GroqClient, model string, config *ChatCompletionConfig) (ChatCompletionResponse, error)
- type GroqClient
- func NewGroqClient(apiKey string) (*GroqClient, error)
- func (g *GroqClient) GetModel(modelId string) (Model, error)
- func (g *GroqClient) GetModels() ([]Model, error)
- func (g *GroqClient) TranscribeAudio(filename string, model string, config *TranscriptionConfig) (Transcription, error)
- func (g *GroqClient) TranslateAudio(filename string, model string, config *TranslationConfig) (Transcription, error)
- type Message
- type Model
- type Transcription
- type TranscriptionConfig
- type TranslationConfig
Constants ¶
const (
	MessageRoleUser      = "user"
	MessageRoleSystem    = "system"
	MessageRoleAssistant = "assistant"
)
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type ChatCompletionConfig ¶
type ChatCompletionConfig struct {
	// Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far,
	// decreasing the model's likelihood to repeat the same line verbatim.
	FrequencyPenalty float64
	// Maximum number of tokens that can be generated in the completion
	MaxTokens int
	// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far,
	// increasing the model's likelihood to talk about new topics.
	PresencePenalty float64
	ResponseFormat  struct {
		Type string
	}
	// Random seed for the model
	Seed int
	// Up to 4 sequences where the API will stop generating tokens
	Stop []string
	// Whether or not the API should stream responses. Currently UNSUPPORTED
	Stream bool
	// The sampling temperature, between 0 and 1
	Temperature float64
	// Unique identifier for your end user
	User string
	// An alternative to sampling with temperature, called nucleus sampling,
	// where the model considers the results of the tokens with top_p probability mass.
	// So 0.1 means only the tokens comprising the top 10% probability mass are considered.
	// It is not recommended to alter this if you have altered Temperature, and vice versa.
	TopP float64
}
type ChatCompletionRequest ¶
type ChatCompletionRequest struct {
	FrequencyPenalty float64   `json:"frequency_penalty,omitempty"`
	MaxTokens        int       `json:"max_tokens,omitempty"`
	Messages         []Message `json:"messages"`
	Model            string    `json:"model"`
	PresencePenalty  float64   `json:"presence_penalty,omitempty"`
	ResponseFormat   struct {
		Type string `json:"type,omitempty"`
	} `json:"response_format,omitempty"`
	Seed        int      `json:"seed,omitempty"`
	Stop        []string `json:"stop,omitempty"`
	Stream      bool     `json:"stream,omitempty"`
	Temperature float64  `json:"temperature,omitempty"`
	User        string   `json:"user,omitempty"`
	TopP        float64  `json:"top_p,omitempty"`
}
type ChatCompletionResponse ¶
type ChatCompletionResponse struct {
	ID      string `json:"id"`
	Object  string `json:"object"`
	Created int    `json:"created"`
	Model   string `json:"model"`
	Choices []struct {
		Index        int     `json:"index"`
		Message      Message `json:"message"`
		Logprobs     any     `json:"logprobs"`
		FinishReason string  `json:"finish_reason"`
	} `json:"choices"`
	Usage struct {
		QueueTime        float64 `json:"queue_time"`
		PromptTokens     int     `json:"prompt_tokens"`
		PromptTime       float64 `json:"prompt_time"`
		CompletionTokens int     `json:"completion_tokens"`
		CompletionTime   float64 `json:"completion_time"`
		TotalTokens      int     `json:"total_tokens"`
		TotalTime        float64 `json:"total_time"`
	} `json:"usage"`
	SystemFingerprint string `json:"system_fingerprint"`
	XGroq             struct {
		ID string `json:"id"`
	} `json:"x_groq"`
}
type Conversation ¶
type Conversation struct {
// contains filtered or unexported fields
}
Conversation is a helper for constructing chat completion requests across multiple turns, keeping track of the message history for you.
func NewConversation ¶
func NewConversation(systemPrompt string) *Conversation
NewConversation creates a new Conversation with the given system prompt.
func (*Conversation) AddMessages ¶
func (c *Conversation) AddMessages(messages ...Message)
AddMessages adds one or more messages to the conversation's history.
func (*Conversation) ClearHistory ¶
func (c *Conversation) ClearHistory()
ClearHistory clears the conversation's message history.
func (*Conversation) Complete ¶
func (c *Conversation) Complete(g *GroqClient, model string, config *ChatCompletionConfig) (ChatCompletionResponse, error)
Complete sends the conversation to the given model, returning the API's response and appending the resulting message to the conversation's history.
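The real Complete performs the HTTP call to GroqCloud, which cannot be shown self-contained here. The sketch below instead uses minimal local stand-ins for Message and Conversation (their internals are assumptions; the package's actual fields are unexported) purely to illustrate the history bookkeeping these methods describe:

```go
package main

import "fmt"

// Message is an assumed local stand-in with a role and content.
type Message struct {
	Role    string
	Content string
}

// Conversation is a minimal stand-in: a system prompt plus an
// append-only history, which is what the methods above manage.
// The real type's fields are unexported.
type Conversation struct {
	systemPrompt string
	history      []Message
}

// NewConversation mirrors the constructor described above.
func NewConversation(systemPrompt string) *Conversation {
	return &Conversation{systemPrompt: systemPrompt}
}

// AddMessages appends one or more messages to the history.
func (c *Conversation) AddMessages(messages ...Message) {
	c.history = append(c.history, messages...)
}

// ClearHistory empties the history; the system prompt is kept
// separately in this sketch.
func (c *Conversation) ClearHistory() {
	c.history = nil
}

// Len reports how many messages are in the history.
func (c *Conversation) Len() int { return len(c.history) }

func main() {
	conv := NewConversation("You are a helpful assistant")
	conv.AddMessages(Message{Role: "user", Content: "Hello"})
	fmt.Println(conv.Len()) // → 1
	conv.ClearHistory()
	fmt.Println(conv.Len()) // → 0
}
```

With the real package, you would then pass a *GroqClient, a model ID, and an optional *ChatCompletionConfig to Complete, and the assistant's reply would be appended to this history automatically.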
type GroqClient ¶
type GroqClient struct {
// contains filtered or unexported fields
}
GroqClient is the main client that interacts with the GroqCloud API
func NewGroqClient ¶
func NewGroqClient(apiKey string) (*GroqClient, error)
NewGroqClient creates a new Groq client. It returns an error if the given API key is invalid.
func (*GroqClient) GetModel ¶
func (g *GroqClient) GetModel(modelId string) (Model, error)
GetModel returns the information for a specific model.
func (*GroqClient) GetModels ¶
func (g *GroqClient) GetModels() ([]Model, error)
GetModels returns all models available on GroqCloud.
func (*GroqClient) TranscribeAudio ¶
func (g *GroqClient) TranscribeAudio(filename string, model string, config *TranscriptionConfig) (Transcription, error)
TranscribeAudio transcribes a given audio file using one of Groq's hosted Whisper models.
func (*GroqClient) TranslateAudio ¶
func (g *GroqClient) TranslateAudio(filename string, model string, config *TranslationConfig) (Transcription, error)
TranslateAudio translates a given audio file into English.
type Model ¶
type Model struct {
	// The model's ID
	Id string `json:"id"`
	Object string `json:"object"`
	// Unix timestamp when the model was created
	Created int `json:"created"`
	// Who owns this model
	OwnedBy string `json:"owned_by"`
	// Is the model currently active?
	Active bool `json:"active"`
	// How many context window tokens the model supports
	ContextWindow int `json:"context_window"`
}
Model represents the metadata for an LLM hosted on GroqCloud.
type Transcription ¶
type Transcription struct {
	Task     string  `json:"task"`
	Language string  `json:"language"`
	Duration float64 `json:"duration"`
	Text     string  `json:"text"`
	Segments []transcriptionSegment `json:"segments"`
	XGroq    struct {
		ID string `json:"id"`
	} `json:"x_groq"`
}
Transcription represents an audio transcription/translation result from one of Groq's models
type TranscriptionConfig ¶
type TranscriptionConfig struct {
	// What language the audio is in. If blank, the model will guess it
	Language string
	// An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.
	Prompt string
	// The format of the transcript output: json, text, or verbose_json
	ResponseFormat string
	// The sampling temperature, between 0 and 1
	Temperature float64
}
TranscriptionConfig houses configuration options for transcription requests
type TranslationConfig ¶
type TranslationConfig struct {
	// An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.
	Prompt string
	// The format of the transcript output: json, text, or verbose_json
	ResponseFormat string
	// The sampling temperature, between 0 and 1
	Temperature float64
}
TranslationConfig houses configuration options for translation requests