Documentation ¶
Overview ¶
Package chatmodel provides functionality for working with chat-based Large Language Models (LLMs).
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Anthropic ¶
Anthropic is a chat model based on the Anthropic API.
func NewAnthropic ¶
func NewAnthropic(apiKey string, optFns ...func(o *AnthropicOptions)) (*Anthropic, error)
NewAnthropic creates a new instance of the Anthropic chat model with the provided options.
func (*Anthropic) Generate ¶
func (cm *Anthropic) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Anthropic) InvocationParams ¶ added in v0.0.27
InvocationParams returns the parameters used in the model invocation.
type AnthropicOptions ¶
type AnthropicOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	// Model name to use.
	ModelName string `map:"model_name,omitempty"`

	// Temperature parameter controls the randomness of the generation output.
	Temperature float64 `map:"temperature,omitempty"`

	// Denotes the number of tokens to predict per generation.
	MaxTokens int `map:"max_tokens,omitempty"`

	// TopK parameter specifies the number of highest probability tokens to consider for generation.
	TopK int `map:"top_k,omitempty"`

	// TopP parameter specifies the cumulative probability threshold for generating tokens.
	TopP float64 `map:"top_p,omitempty"`
}
AnthropicOptions contains options for configuring the Anthropic chat model.
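A minimal usage sketch (not taken from the package itself) showing how the functional options and Generate fit together. The import paths, the API key, and the option values are assumptions; messages is left as a zero value because building chat messages belongs to the schema package.

package main

import (
	"context"
	"fmt"
	"log"

	// Import paths are assumed; replace them with this package's actual module path.
	"github.com/hupe1980/golc/chatmodel"
	"github.com/hupe1980/golc/schema"
)

func main() {
	// Override a few AnthropicOptions through the functional option.
	cm, err := chatmodel.NewAnthropic("YOUR_API_KEY", func(o *chatmodel.AnthropicOptions) {
		o.Temperature = 0.2 // illustrative value
		o.MaxTokens = 256   // illustrative value
	})
	if err != nil {
		log.Fatal(err)
	}

	// Building chat messages is the schema package's job; the zero value is used
	// here only so the sketch stays self-contained.
	var messages schema.ChatMessages

	result, err := cm.Generate(context.Background(), messages)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}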
type AzureOpenAI ¶ added in v0.0.36
type AzureOpenAI struct {
*OpenAI
}
func NewAzureOpenAI ¶ added in v0.0.26
func NewAzureOpenAI(apiKey, baseURL string, optFns ...func(o *AzureOpenAIOptions)) (*AzureOpenAI, error)
func (*AzureOpenAI) Type ¶ added in v0.0.36
func (cm *AzureOpenAI) Type() string
Type returns the type of the model.
type AzureOpenAIOptions ¶ added in v0.0.26
type AzureOpenAIOptions struct {
	OpenAIOptions

	// Deployment is the name of the Azure OpenAI deployment to use.
	Deployment string
}
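A short sketch of constructing the Azure variant, reusing the imports from the Anthropic sketch above; the key, endpoint, and deployment name are placeholders.

cm, err := chatmodel.NewAzureOpenAI(
	"YOUR_API_KEY",
	"https://your-resource.openai.azure.com", // baseURL of the Azure OpenAI resource (placeholder)
	func(o *chatmodel.AzureOpenAIOptions) {
		o.Deployment = "your-deployment" // Azure OpenAI deployment name (placeholder)
	},
)
if err != nil {
	log.Fatal(err)
}
_ = cm // use cm.Generate as with the other chat models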
type Fake ¶ added in v0.0.14
func (*Fake) Generate ¶ added in v0.0.14
func (cm *Fake) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Fake) InvocationParams ¶ added in v0.0.27
InvocationParams returns the parameters used in the model invocation.
type OpenAI ¶
OpenAI represents the OpenAI chat model.
func NewOpenAI ¶
func NewOpenAI(apiKey string, optFns ...func(o *OpenAIOptions)) (*OpenAI, error)
NewOpenAI creates a new instance of the OpenAI chat model.
func NewOpenAIFromClient ¶ added in v0.0.36
func NewOpenAIFromClient(client OpenAIClient, optFns ...func(o *OpenAIOptions)) (*OpenAI, error)
NewOpenAIFromClient creates a new instance of the OpenAI chat model with the provided client and options.
func (*OpenAI) Generate ¶
func (cm *OpenAI) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*OpenAI) InvocationParams ¶ added in v0.0.27
InvocationParams returns the parameters used in the model invocation.
type OpenAIClient ¶ added in v0.0.36
type OpenAIClient interface {
CreateChatCompletion(ctx context.Context, request openai.ChatCompletionRequest) (response openai.ChatCompletionResponse, err error)
}
OpenAIClient is an interface for the OpenAI chat model client.
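Because NewOpenAIFromClient accepts anything satisfying OpenAIClient, a test double can be swapped in for the real API client. The sketch below assumes the openai types come from the github.com/sashabaranov/go-openai package (inferred from the signature above) and uses a hypothetical stubClient; neither name is part of this package.

package chatmodel_test

import (
	"context"
	"log"

	openai "github.com/sashabaranov/go-openai" // assumed source of the openai types in OpenAIClient
	"github.com/hupe1980/golc/chatmodel"       // assumed import path of this package
)

// stubClient is a hypothetical OpenAIClient implementation returning a canned
// completion, useful in tests that must not call the real API.
type stubClient struct{}

func (stubClient) CreateChatCompletion(ctx context.Context, request openai.ChatCompletionRequest) (openai.ChatCompletionResponse, error) {
	return openai.ChatCompletionResponse{
		Choices: []openai.ChatCompletionChoice{
			{Message: openai.ChatCompletionMessage{Role: "assistant", Content: "stubbed answer"}},
		},
	}, nil
}

func Example_stubClient() {
	cm, err := chatmodel.NewOpenAIFromClient(stubClient{}, func(o *chatmodel.OpenAIOptions) {
		o.ModelName = "gpt-3.5-turbo" // illustrative value
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = cm // cm.Generate now goes through stubClient instead of the real service
}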
type OpenAIOptions ¶
type OpenAIOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	// Model name to use.
	ModelName string

	// Sampling temperature to use.
	Temperatur float32

	// The maximum number of tokens to generate in the completion.
	// -1 returns as many tokens as possible given the prompt and
	// the model's maximal context size.
	MaxTokens int

	// Total probability mass of tokens to consider at each step.
	TopP float32

	// Penalizes repeated tokens.
	PresencePenalty float32

	// Penalizes repeated tokens according to frequency.
	FrequencyPenalty float32

	// How many completions to generate for each prompt.
	N int

	// BaseURL is the base URL of the OpenAI service.
	BaseURL string

	// OrgID is the organization ID for accessing the OpenAI service.
	OrgID string
}
OpenAIOptions contains the options for the OpenAI chat model.
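Reusing the imports from the Anthropic sketch above, a hypothetical configuration of the API-backed OpenAI model could look like the following; the model name and limits are illustrative values only.

cm, err := chatmodel.NewOpenAI("YOUR_API_KEY", func(o *chatmodel.OpenAIOptions) {
	o.ModelName = "gpt-4" // illustrative value
	o.Temperatur = 0.7    // note: the field is spelled Temperatur in this struct
	o.MaxTokens = 512     // illustrative value
})
if err != nil {
	log.Fatal(err)
}
_ = cm // ready for cm.Generate(...)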
type Palm ¶ added in v0.0.36
type Palm struct {
// contains filtered or unexported fields
}
Palm represents the PaLM language model.
func NewPalm ¶ added in v0.0.36
func NewPalm(client PalmClient, optFns ...func(o *PalmOptions)) (*Palm, error)
NewPalm creates a new instance of the PaLM language model.
func (*Palm) Generate ¶ added in v0.0.36
func (l *Palm) Generate(ctx context.Context, prompt string, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided prompt and options.
func (*Palm) InvocationParams ¶ added in v0.0.36
InvocationParams returns the parameters used in the model invocation.
type PalmClient ¶ added in v0.0.36
type PalmClient interface {
GenerateMessage(ctx context.Context, req *generativelanguagepb.GenerateMessageRequest, opts ...gax.CallOption) (*generativelanguagepb.GenerateMessageResponse, error)
}
PalmClient is the interface for the PaLM client.
type PalmOptions ¶ added in v0.0.36
type PalmOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	// ModelName is the name of the Palm chat model to use.
	ModelName string `map:"model_name,omitempty"`

	// Temperature is the sampling temperature to use during text generation.
	Temperatur float32 `map:"temperatur,omitempty"`

	// TopP is the total probability mass of tokens to consider at each step.
	TopP float32 `map:"top_p,omitempty"`

	// TopK determines how the model selects tokens for output.
	TopK int32 `map:"top_k"`

	// CandidateCount specifies the number of candidates to generate during text completion.
	CandidateCount int32 `map:"candidate_count"`
}
PalmOptions is the options struct for the PaLM chat model.
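NewPalm accepts any PalmClient, so production code would pass a Google generative language client while tests can pass a stub. The sketch below shows the stub route; the generativelanguagepb and gax import paths are assumptions inferred from the interface signature, and fakePalmClient is hypothetical.

package chatmodel_test

import (
	"context"
	"log"

	"cloud.google.com/go/ai/generativelanguage/apiv1beta2/generativelanguagepb" // assumed source of the request/response types
	gax "github.com/googleapis/gax-go/v2"                                       // assumed source of gax.CallOption
	"github.com/hupe1980/golc/chatmodel"                                        // assumed import path of this package
)

// fakePalmClient is a hypothetical PalmClient implementation that never calls
// the PaLM service; a real implementation would populate the response candidates.
type fakePalmClient struct{}

func (fakePalmClient) GenerateMessage(ctx context.Context, req *generativelanguagepb.GenerateMessageRequest, opts ...gax.CallOption) (*generativelanguagepb.GenerateMessageResponse, error) {
	return &generativelanguagepb.GenerateMessageResponse{}, nil
}

func Example_palm() {
	cm, err := chatmodel.NewPalm(fakePalmClient{}, func(o *chatmodel.PalmOptions) {
		o.TopK = 40          // illustrative value
		o.CandidateCount = 1 // illustrative value
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = cm // cm.Generate sends GenerateMessage requests through fakePalmClient
}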