Documentation ¶
Overview ¶
Package chatmodel provides functionality for working with chat-based Large Language Models (LLMs).
Index ¶
- Variables
- type Anthropic
- func NewAnthropic(apiKey string, optFns ...func(o *AnthropicOptions)) (*Anthropic, error)
- func NewAnthropicFromClient(client AnthropicClient, optFns ...func(o *AnthropicOptions)) (*Anthropic, error)
- type AnthropicClient
- type AnthropicOptions
- type AzureOpenAI
- func NewAzureOpenAI(apiKey, baseURL string, optFns ...func(o *AzureOpenAIOptions)) (*AzureOpenAI, error)
- type AzureOpenAIOptions
- type Bedrock
- func NewBedrock(client BedrockRuntimeClient, optFns ...func(o *BedrockOptions)) (*Bedrock, error)
- func NewBedrockAntrophic(client BedrockRuntimeClient, optFns ...func(o *BedrockAnthropicOptions)) (*Bedrock, error)
- func NewBedrockMeta(client BedrockRuntimeClient, optFns ...func(o *BedrockMetaOptions)) (*Bedrock, error)
- type BedrockAnthropicOptions
- type BedrockInputOutputAdapter
- func NewBedrockInputOutputAdapter(provider string) *BedrockInputOutputAdapter
- type BedrockMetaOptions
- type BedrockOptions
- type BedrockRuntimeClient
- type Ernie
- func NewErnie(clientID, clientSecret string, optFns ...func(o *ErnieOptions)) (*Ernie, error)
- func NewErnieFromClient(client ErnieClient, optFns ...func(o *ErnieOptions)) (*Ernie, error)
- type ErnieClient
- type ErnieOptions
- type Fake
- func NewFake(fakeResultFunc FakeResultFunc, optFns ...func(o *FakeOptions)) *Fake
- func NewSimpleFake(response string, optFns ...func(o *FakeOptions)) *Fake
- type FakeOptions
- type FakeResultFunc
- type Ollama
- func NewOllama(client OllamaClient, optFns ...func(o *OllamaOptions)) (*Ollama, error)
- type OllamaClient
- type OllamaOptions
- type OpenAI
- func NewOpenAI(apiKey string, optFns ...func(o *OpenAIOptions)) (*OpenAI, error)
- func NewOpenAIFromClient(client OpenAIClient, optFns ...func(o *OpenAIOptions)) (*OpenAI, error)
- type OpenAIClient
- type OpenAIOptions
- type Palm
- func NewPalm(client PalmClient, optFns ...func(o *PalmOptions)) (*Palm, error)
- type PalmClient
- type PalmOptions
Constants ¶
This section is empty.
Variables ¶
var DefaultOpenAIOptions = OpenAIOptions{
	CallbackOptions: &schema.CallbackOptions{
		Verbose: golc.Verbose,
	},
	ModelName:        openai.GPT3Dot5Turbo,
	Temperature:      1,
	TopP:             1,
	PresencePenalty:  0,
	FrequencyPenalty: 0,
	MaxRetries:       3,
}
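These defaults are applied when an OpenAI chat model is constructed; individual fields can be overridden through an option function. A minimal sketch (apiKey is a placeholder):

cm, err := chatmodel.NewOpenAI(apiKey, func(o *chatmodel.OpenAIOptions) {
	o.Temperature = 0 // override the default temperature of 1
})
if err != nil {
	log.Fatal(err)
}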
Functions ¶
This section is empty.
Types ¶
type Anthropic ¶
type Anthropic struct {
	// contains filtered or unexported fields
}
Anthropic is a chat model based on the Anthropic API.
func NewAnthropic ¶
func NewAnthropic(apiKey string, optFns ...func(o *AnthropicOptions)) (*Anthropic, error)
NewAnthropic creates a new instance of the Anthropic chat model with the provided options.
func NewAnthropicFromClient ¶ added in v0.0.67
func NewAnthropicFromClient(client AnthropicClient, optFns ...func(o *AnthropicOptions)) (*Anthropic, error)
NewAnthropicFromClient creates a new instance of the Anthropic chat model from a custom AnthropicClient.
func (*Anthropic) Generate ¶
func (cm *Anthropic) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
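A minimal end-to-end sketch, assuming a valid API key in the ANTHROPIC_API_KEY environment variable; the schema.NewHumanChatMessage constructor and the result.Generations access pattern come from the golc schema package and are illustrative here:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/hupe1980/golc/chatmodel"
	"github.com/hupe1980/golc/schema"
)

func main() {
	// Create the chat model with default options.
	cm, err := chatmodel.NewAnthropic(os.Getenv("ANTHROPIC_API_KEY"))
	if err != nil {
		log.Fatal(err)
	}

	// Generate a completion for a single human message.
	result, err := cm.Generate(context.Background(), schema.ChatMessages{
		schema.NewHumanChatMessage("Hello, world!"),
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(result.Generations[0].Text)
}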
func (*Anthropic) InvocationParams ¶ added in v0.0.27
func (cm *Anthropic) InvocationParams() map[string]any
InvocationParams returns the parameters used in the model invocation.
type AnthropicClient ¶ added in v0.0.67
type AnthropicClient interface {
CreateCompletion(ctx context.Context, request *anthropic.CompletionRequest) (*anthropic.CompletionResponse, error)
}
AnthropicClient is the interface for the Anthropic client.
type AnthropicOptions ¶
type AnthropicOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	// Model name to use.
	ModelName string `map:"model_name,omitempty"`

	// Temperature parameter controls the randomness of the generation output.
	Temperature float32 `map:"temperature,omitempty"`

	// Denotes the number of tokens to predict per generation.
	MaxTokens int `map:"max_tokens,omitempty"`

	// TopK parameter specifies the number of highest probability tokens to consider for generation.
	TopK int `map:"top_k,omitempty"`

	// TopP parameter specifies the cumulative probability threshold for generating tokens.
	TopP float32 `map:"top_p,omitempty"`
}
AnthropicOptions contains options for configuring the Anthropic chat model.
type AzureOpenAI ¶ added in v0.0.36
type AzureOpenAI struct {
	*OpenAI
	// contains filtered or unexported fields
}
AzureOpenAI represents the Azure OpenAI chat model.
func NewAzureOpenAI ¶ added in v0.0.26
func NewAzureOpenAI(apiKey, baseURL string, optFns ...func(o *AzureOpenAIOptions)) (*AzureOpenAI, error)
NewAzureOpenAI creates a new instance of the Azure OpenAI chat model.
func (*AzureOpenAI) InvocationParams ¶ added in v0.0.55
func (cm *AzureOpenAI) InvocationParams() map[string]any
InvocationParams returns the parameters used in the model invocation.
func (*AzureOpenAI) Type ¶ added in v0.0.36
func (cm *AzureOpenAI) Type() string
Type returns the type of the model.
type AzureOpenAIOptions ¶ added in v0.0.26
type AzureOpenAIOptions struct {
	OpenAIOptions

	// Deployment is the name of the Azure OpenAI deployment to use.
	Deployment string `map:"deployment,omitempty"`
}
AzureOpenAIOptions contains options for configuring the Azure OpenAI chat model.
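A construction sketch; the endpoint and deployment name are hypothetical, and apiKey is a placeholder:

cm, err := chatmodel.NewAzureOpenAI(apiKey, "https://your-resource.openai.azure.com", func(o *chatmodel.AzureOpenAIOptions) {
	o.Deployment = "gpt-35-turbo" // hypothetical deployment name
})
if err != nil {
	log.Fatal(err)
}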
type Bedrock ¶ added in v0.0.71
type Bedrock struct {
	// contains filtered or unexported fields
}
Bedrock is an implementation of the schema.ChatModel interface for Amazon Bedrock models.
func NewBedrock ¶ added in v0.0.71
func NewBedrock(client BedrockRuntimeClient, optFns ...func(o *BedrockOptions)) (*Bedrock, error)
NewBedrock creates an instance of the Bedrock model.
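A construction sketch that obtains the runtime client from the AWS SDK for Go v2; the model id is illustrative:

package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/bedrockruntime"
	"github.com/hupe1980/golc/chatmodel"
)

func main() {
	// Load AWS credentials and region from the default sources.
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	cm, err := chatmodel.NewBedrock(bedrockruntime.NewFromConfig(cfg), func(o *chatmodel.BedrockOptions) {
		o.ModelID = "anthropic.claude-v2" // illustrative model id
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = cm
}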
func NewBedrockAntrophic ¶ added in v0.0.78
func NewBedrockAntrophic(client BedrockRuntimeClient, optFns ...func(o *BedrockAnthropicOptions)) (*Bedrock, error)
NewBedrockAntrophic creates a new instance of Bedrock for the "anthropic" provider.
func NewBedrockMeta ¶ added in v0.0.87
func NewBedrockMeta(client BedrockRuntimeClient, optFns ...func(o *BedrockMetaOptions)) (*Bedrock, error)
NewBedrockMeta creates a new instance of Bedrock for the "meta" provider.
func (*Bedrock) Callbacks ¶ added in v0.0.71
func (cm *Bedrock) Callbacks() []schema.Callback
Callbacks returns the registered callbacks of the model.
func (*Bedrock) Generate ¶ added in v0.0.71
func (cm *Bedrock) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Bedrock) InvocationParams ¶ added in v0.0.71
func (cm *Bedrock) InvocationParams() map[string]any
InvocationParams returns the parameters used in the model invocation.
type BedrockAnthropicOptions ¶ added in v0.0.78
type BedrockAnthropicOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	// Model id to use.
	ModelID string `map:"model_id,omitempty"`

	// MaxTokensToSample sets the maximum number of tokens in the generated text.
	MaxTokensToSample int `map:"max_tokens_to_sample"`

	// Temperature controls the randomness of text generation. Higher values make it more random.
	Temperature float32 `map:"temperature"`

	// TopP is the total probability mass of tokens to consider at each step.
	TopP float32 `map:"top_p,omitempty"`

	// TopK determines how the model selects tokens for output.
	TopK int `map:"top_k"`

	// Stream indicates whether to stream the results or not.
	Stream bool `map:"stream,omitempty"`
}
BedrockAnthropicOptions contains options for configuring the Bedrock model with the "anthropic" provider.
type BedrockInputOutputAdapter ¶ added in v0.0.71
type BedrockInputOutputAdapter struct {
// contains filtered or unexported fields
}
BedrockInputOutputAdapter is a helper struct for preparing input and handling output for the Bedrock model.
func NewBedrockInputOutputAdapter ¶ added in v0.0.71
func NewBedrockInputOutputAdapter(provider string) *BedrockInputOutputAdapter
NewBedrockInputOutputAdapter creates a new instance of BedrockInputOutputAdapter.
func (*BedrockInputOutputAdapter) PrepareInput ¶ added in v0.0.71
func (bioa *BedrockInputOutputAdapter) PrepareInput(messages schema.ChatMessages, modelParams map[string]any, stop []string) ([]byte, error)
PrepareInput prepares the input for the Bedrock model based on the specified provider.
func (*BedrockInputOutputAdapter) PrepareOutput ¶ added in v0.0.71
func (bioa *BedrockInputOutputAdapter) PrepareOutput(response []byte) (string, error)
PrepareOutput prepares the output for the Bedrock model based on the specified provider.
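A sketch of the adapter round trip around a raw InvokeModel call; the provider, model id, and model params are illustrative, and the aws and bedrockruntime packages come from the AWS SDK for Go v2:

func invokeRaw(ctx context.Context, client chatmodel.BedrockRuntimeClient, messages schema.ChatMessages) (string, error) {
	bioa := chatmodel.NewBedrockInputOutputAdapter("anthropic")

	// Serialize the chat messages into the provider-specific request body.
	body, err := bioa.PrepareInput(messages, map[string]any{
		"max_tokens_to_sample": 256, // illustrative model param
	}, nil)
	if err != nil {
		return "", err
	}

	out, err := client.InvokeModel(ctx, &bedrockruntime.InvokeModelInput{
		ModelId:     aws.String("anthropic.claude-v2"), // illustrative model id
		Body:        body,
		ContentType: aws.String("application/json"),
	})
	if err != nil {
		return "", err
	}

	// Extract the generated text from the provider-specific response body.
	return bioa.PrepareOutput(out.Body)
}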
type BedrockMetaOptions ¶ added in v0.0.87
type BedrockMetaOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	// Model id to use.
	ModelID string `map:"model_id,omitempty"`

	// Temperature controls the randomness of text generation. Higher values make it more random.
	Temperature float32 `map:"temperature"`

	// TopP is the total probability mass of tokens to consider at each step.
	TopP float32 `map:"top_p,omitempty"`

	// MaxGenLen specifies the maximum number of tokens to use in the generated response.
	MaxGenLen int `map:"max_gen_len"`

	// Stream indicates whether to stream the results or not.
	Stream bool `map:"stream,omitempty"`
}
BedrockMetaOptions contains options for configuring the Bedrock model with the "meta" provider.
type BedrockOptions ¶ added in v0.0.71
type BedrockOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	// Model id to use.
	ModelID string `map:"model_id,omitempty"`

	// Model params to use.
	ModelParams map[string]any `map:"model_params,omitempty"`

	// Stream indicates whether to stream the results or not.
	Stream bool `map:"stream,omitempty"`
}
BedrockOptions contains options for configuring the Bedrock model.
type BedrockRuntimeClient ¶ added in v0.0.71
type BedrockRuntimeClient interface {
	InvokeModel(ctx context.Context, params *bedrockruntime.InvokeModelInput, optFns ...func(*bedrockruntime.Options)) (*bedrockruntime.InvokeModelOutput, error)
	InvokeModelWithResponseStream(ctx context.Context, params *bedrockruntime.InvokeModelWithResponseStreamInput, optFns ...func(*bedrockruntime.Options)) (*bedrockruntime.InvokeModelWithResponseStreamOutput, error)
}
BedrockRuntimeClient is an interface for the Bedrock model runtime client.
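Because the constructors accept this interface rather than a concrete AWS client, a stub can stand in for Bedrock in tests. A minimal sketch; the canned completion payload assumes the "anthropic" provider's response shape:

type stubBedrockClient struct{}

func (s *stubBedrockClient) InvokeModel(ctx context.Context, params *bedrockruntime.InvokeModelInput, optFns ...func(*bedrockruntime.Options)) (*bedrockruntime.InvokeModelOutput, error) {
	// Return a canned Anthropic-style completion payload.
	return &bedrockruntime.InvokeModelOutput{
		Body: []byte(`{"completion":"Hello!"}`),
	}, nil
}

func (s *stubBedrockClient) InvokeModelWithResponseStream(ctx context.Context, params *bedrockruntime.InvokeModelWithResponseStreamInput, optFns ...func(*bedrockruntime.Options)) (*bedrockruntime.InvokeModelWithResponseStreamOutput, error) {
	return nil, errors.New("streaming not supported by this stub")
}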
type Ernie ¶ added in v0.0.67
type Ernie struct {
	// contains filtered or unexported fields
}
Ernie is a struct representing the Ernie language model.
func NewErnie ¶ added in v0.0.67
func NewErnie(clientID, clientSecret string, optFns ...func(o *ErnieOptions)) (*Ernie, error)
NewErnie creates a new instance of the Ernie chat model.
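A construction sketch; clientID and clientSecret are placeholders for Baidu API credentials:

cm, err := chatmodel.NewErnie(clientID, clientSecret, func(o *chatmodel.ErnieOptions) {
	o.Temperature = 0.5
})
if err != nil {
	log.Fatal(err)
}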
func NewErnieFromClient ¶ added in v0.0.67
func NewErnieFromClient(client ErnieClient, optFns ...func(o *ErnieOptions)) (*Ernie, error)
NewErnieFromClient creates a new instance of the Ernie chat model from a custom ErnieClient.
func (*Ernie) Generate ¶ added in v0.0.67
func (cm *Ernie) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Ernie) InvocationParams ¶ added in v0.0.67
func (cm *Ernie) InvocationParams() map[string]any
InvocationParams returns the parameters used in the model invocation.
type ErnieClient ¶ added in v0.0.67
type ErnieClient interface {
CreateChatCompletion(ctx context.Context, model string, request *ernie.ChatCompletionRequest) (*ernie.ChatCompletionResponse, error)
}
ErnieClient is the interface for the Ernie client.
type ErnieOptions ¶ added in v0.0.67
type ErnieOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	// ModelName is the name of the Ernie chat model to use.
	ModelName string `map:"model_name,omitempty"`

	// Temperature is the sampling temperature to use during text generation.
	Temperature float64 `map:"temperature,omitempty"`

	// TopP is the total probability mass of tokens to consider at each step.
	TopP float64 `map:"top_p,omitempty"`

	// PenaltyScore is a parameter used during text generation to apply a penalty for generating longer responses.
	PenaltyScore float64 `map:"penalty_score"`
}
ErnieOptions is the options struct for the Ernie chat model.
type Fake ¶ added in v0.0.14
type Fake struct {
	// contains filtered or unexported fields
}
Fake is a mock implementation of the schema.ChatModel interface for testing purposes.
func NewFake ¶ added in v0.0.14
func NewFake(fakeResultFunc FakeResultFunc, optFns ...func(o *FakeOptions)) *Fake
NewFake creates an instance of the Fake model with the provided custom result function.
func NewSimpleFake ¶ added in v0.0.58
func NewSimpleFake(response string, optFns ...func(o *FakeOptions)) *Fake
NewSimpleFake creates a simple instance of the Fake model with a fixed response for all inputs.
func (*Fake) Generate ¶ added in v0.0.14
func (cm *Fake) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Fake) InvocationParams ¶ added in v0.0.27
func (cm *Fake) InvocationParams() map[string]any
InvocationParams returns the parameters used in the model invocation.
type FakeOptions ¶ added in v0.0.58
type FakeOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	ChatModelType string `map:"-"`
}
FakeOptions contains options for configuring the Fake model.
type FakeResultFunc ¶ added in v0.0.58
type FakeResultFunc func(ctx context.Context, messages schema.ChatMessages) (*schema.ModelResult, error)
FakeResultFunc is a function type used for providing custom model results in the Fake model.
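A testing sketch that returns a canned result regardless of input; the schema.ModelResult and schema.Generation field names come from the golc schema package and are illustrative here:

fake := chatmodel.NewFake(func(ctx context.Context, messages schema.ChatMessages) (*schema.ModelResult, error) {
	// Ignore the input and always answer "pong".
	return &schema.ModelResult{
		Generations: []schema.Generation{{Text: "pong"}},
	}, nil
})

result, err := fake.Generate(context.Background(), schema.ChatMessages{
	schema.NewHumanChatMessage("ping"),
})

For a fixed string response, NewSimpleFake("pong") achieves the same without a custom result function.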
type Ollama ¶ added in v0.0.91
type Ollama struct {
	// contains filtered or unexported fields
}
Ollama is a struct representing the Ollama generative model.
func NewOllama ¶ added in v0.0.91
func NewOllama(client OllamaClient, optFns ...func(o *OllamaOptions)) (*Ollama, error)
NewOllama creates a new instance of the Ollama model with the provided client and options.
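A construction sketch; client is any OllamaClient implementation, for example one backed by a local Ollama server (construction elided), and the model name is illustrative:

cm, err := chatmodel.NewOllama(client, func(o *chatmodel.OllamaOptions) {
	o.ModelName = "llama2" // illustrative model name
	o.Temperature = 0.8
})
if err != nil {
	log.Fatal(err)
}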
func (*Ollama) Callbacks ¶ added in v0.0.91
func (cm *Ollama) Callbacks() []schema.Callback
Callbacks returns the registered callbacks of the model.
func (*Ollama) Generate ¶ added in v0.0.91
func (cm *Ollama) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Ollama) InvocationParams ¶ added in v0.0.91
func (cm *Ollama) InvocationParams() map[string]any
InvocationParams returns the parameters used in the model invocation.
type OllamaClient ¶ added in v0.0.91
type OllamaClient interface {
	// GenerateChat produces a single request and response for the Ollama generative model.
	GenerateChat(ctx context.Context, req *ollama.ChatRequest) (*ollama.ChatResponse, error)
}
OllamaClient is an interface for the Ollama generative model client.
type OllamaOptions ¶ added in v0.0.91
type OllamaOptions struct {
	// CallbackOptions specify options for handling callbacks during text generation.
	*schema.CallbackOptions `map:"-"`

	// Tokenizer represents the tokenizer to be used with the LLM model.
	schema.Tokenizer `map:"-"`

	// ModelName is the name of the Ollama model to use.
	ModelName string `map:"model_name,omitempty"`

	// Temperature controls the randomness of the generation. Higher values make the output more random.
	Temperature float32 `map:"temperature,omitempty"`

	// MaxTokens is the maximum number of tokens to generate in the completion.
	MaxTokens int `map:"max_tokens,omitempty"`

	// TopP is the nucleus sampling parameter. It controls the cumulative probability of the most likely tokens to sample from.
	TopP float32 `map:"top_p,omitempty"`

	// TopK is the number of top tokens to consider for sampling.
	TopK int `map:"top_k,omitempty"`

	// PresencePenalty penalizes repeated tokens.
	PresencePenalty float32 `map:"presence_penalty,omitempty"`

	// FrequencyPenalty penalizes repeated tokens according to frequency.
	FrequencyPenalty float32 `map:"frequency_penalty,omitempty"`
}
OllamaOptions contains options for the Ollama model.
type OpenAI ¶
type OpenAI struct {
	// contains filtered or unexported fields
}
OpenAI represents the OpenAI chat model.
func NewOpenAI ¶
func NewOpenAI(apiKey string, optFns ...func(o *OpenAIOptions)) (*OpenAI, error)
NewOpenAI creates a new instance of the OpenAI chat model.
func NewOpenAIFromClient ¶ added in v0.0.36
func NewOpenAIFromClient(client OpenAIClient, optFns ...func(o *OpenAIOptions)) (*OpenAI, error)
NewOpenAIFromClient creates a new instance of the OpenAI chat model with the provided client and options.
func (*OpenAI) Generate ¶
func (cm *OpenAI) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
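A usage sketch that pairs a system instruction with a user message; schema.NewSystemChatMessage and schema.NewHumanChatMessage come from the golc schema package, and the API key is assumed to be set in the environment:

cm, err := chatmodel.NewOpenAI(os.Getenv("OPENAI_API_KEY"))
if err != nil {
	log.Fatal(err)
}

result, err := cm.Generate(context.Background(), schema.ChatMessages{
	schema.NewSystemChatMessage("You are a helpful assistant."),
	schema.NewHumanChatMessage("What is the capital of France?"),
})
if err != nil {
	log.Fatal(err)
}

fmt.Println(result.Generations[0].Text)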
func (*OpenAI) InvocationParams ¶ added in v0.0.27
func (cm *OpenAI) InvocationParams() map[string]any
InvocationParams returns the parameters used in the model invocation.
type OpenAIClient ¶ added in v0.0.36
type OpenAIClient interface {
	CreateChatCompletion(ctx context.Context, request openai.ChatCompletionRequest) (response openai.ChatCompletionResponse, err error)
	CreateChatCompletionStream(ctx context.Context, request openai.ChatCompletionRequest) (stream *openai.ChatCompletionStream, err error)
}
OpenAIClient is an interface for the OpenAI chat model client.
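The request and response types correspond to github.com/sashabaranov/go-openai, so a custom-configured client from that package can be injected via NewOpenAIFromClient. A sketch; the base URL is a hypothetical proxy endpoint and apiKey is a placeholder:

config := openai.DefaultConfig(apiKey)
config.BaseURL = "https://my-proxy.example.com/v1" // hypothetical proxy endpoint
client := openai.NewClientWithConfig(config)

cm, err := chatmodel.NewOpenAIFromClient(client)
if err != nil {
	log.Fatal(err)
}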
type OpenAIOptions ¶
type OpenAIOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	// Model name to use.
	ModelName string `map:"model_name,omitempty"`

	// Sampling temperature to use.
	Temperature float32 `map:"temperature,omitempty"`

	// The maximum number of tokens to generate in the completion.
	// -1 returns as many tokens as possible given the prompt and
	// the model's maximal context size.
	MaxTokens int `map:"max_tokens,omitempty"`

	// Total probability mass of tokens to consider at each step.
	TopP float32 `map:"top_p,omitempty"`

	// Penalizes repeated tokens.
	PresencePenalty float32 `map:"presence_penalty,omitempty"`

	// Penalizes repeated tokens according to frequency.
	FrequencyPenalty float32 `map:"frequency_penalty,omitempty"`

	// How many completions to generate for each prompt.
	N int `map:"n,omitempty"`

	// BaseURL is the base URL of the OpenAI service.
	BaseURL string `map:"base_url,omitempty"`

	// OrgID is the organization ID for accessing the OpenAI service.
	OrgID string `map:"org_id,omitempty"`

	// Stream indicates whether to stream the results or not.
	Stream bool `map:"stream,omitempty"`

	// MaxRetries represents the maximum number of retries to make when generating.
	MaxRetries uint `map:"max_retries,omitempty"`
}
OpenAIOptions contains the options for the OpenAI chat model.
type Palm ¶ added in v0.0.36
type Palm struct {
	// contains filtered or unexported fields
}
Palm is a struct representing the PALM language model.
func NewPalm ¶ added in v0.0.36
func NewPalm(client PalmClient, optFns ...func(o *PalmOptions)) (*Palm, error)
NewPalm creates a new instance of the PALM language model.
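A construction sketch; it assumes the PalmClient interface is satisfied by the Discuss client from cloud.google.com/go/ai/generativelanguage/apiv1beta2, an assumption based on the GenerateMessage signature below, with option.WithAPIKey from google.golang.org/api/option:

client, err := generativelanguage.NewDiscussClient(ctx, option.WithAPIKey(apiKey))
if err != nil {
	log.Fatal(err)
}

cm, err := chatmodel.NewPalm(client)
if err != nil {
	log.Fatal(err)
}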
func (*Palm) Generate ¶ added in v0.0.36
func (cm *Palm) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Palm) InvocationParams ¶ added in v0.0.36
func (cm *Palm) InvocationParams() map[string]any
InvocationParams returns the parameters used in the model invocation.
type PalmClient ¶ added in v0.0.36
type PalmClient interface {
GenerateMessage(ctx context.Context, req *generativelanguagepb.GenerateMessageRequest, opts ...gax.CallOption) (*generativelanguagepb.GenerateMessageResponse, error)
}
PalmClient is the interface for the PALM client.
type PalmOptions ¶ added in v0.0.36
type PalmOptions struct {
	*schema.CallbackOptions `map:"-"`
	schema.Tokenizer        `map:"-"`

	// ModelName is the name of the Palm chat model to use.
	ModelName string `map:"model_name,omitempty"`

	// Temperature is the sampling temperature to use during text generation.
	Temperature float32 `map:"temperature,omitempty"`

	// TopP is the total probability mass of tokens to consider at each step.
	TopP float32 `map:"top_p,omitempty"`

	// TopK determines how the model selects tokens for output.
	TopK int32 `map:"top_k"`

	// CandidateCount specifies the number of candidates to generate during text completion.
	CandidateCount int32 `map:"candidate_count"`
}
PalmOptions is the options struct for the PALM chat model.