Documentation ¶
Overview ¶
Package chatmodel provides a framework for working with chat-based large language models (LLMs).
Index ¶
- Variables
- type Anthropic
- type AnthropicClient
- type AnthropicOptions
- type AzureOpenAI
- type AzureOpenAIOptions
- type Bedrock
- func NewBedrock(client BedrockRuntimeClient, modelID string, optFns ...func(o *BedrockOptions)) (*Bedrock, error)
- func NewBedrockAntrophic(client BedrockRuntimeClient, optFns ...func(o *BedrockAnthropicOptions)) (*Bedrock, error)
- func NewBedrockMeta(client BedrockRuntimeClient, optFns ...func(o *BedrockMetaOptions)) (*Bedrock, error)
- type BedrockAnthropicOptions
- type BedrockInputOutputAdapter
- func (bioa *BedrockInputOutputAdapter) PrepareInput(messages schema.ChatMessages, modelParams map[string]any, stop []string) ([]byte, error)
- func (bioa *BedrockInputOutputAdapter) PrepareOutput(response []byte) (string, error)
- func (bioa *BedrockInputOutputAdapter) PrepareStreamOutput(response []byte) (string, error)
- type BedrockMetaOptions
- type BedrockOptions
- type BedrockRuntimeClient
- type Cohere
- type CohereClient
- type CohereOptions
- type Ernie
- type ErnieClient
- type ErnieOptions
- type Fake
- type FakeOptions
- type FakeResultFunc
- type GoogleGenAI
- func (cm *GoogleGenAI) Callbacks() []schema.Callback
- func (cm *GoogleGenAI) Generate(ctx context.Context, messages schema.ChatMessages, ...) (*schema.ModelResult, error)
- func (cm *GoogleGenAI) InvocationParams() map[string]any
- func (cm *GoogleGenAI) Type() string
- func (cm *GoogleGenAI) Verbose() bool
- type GoogleGenAIClient
- type GoogleGenAIOptions
- type Ollama
- type OllamaClient
- type OllamaOptions
- type OpenAI
- type OpenAIClient
- type OpenAIOptions
Constants ¶
This section is empty.
Variables ¶
var DefaultOpenAIOptions = OpenAIOptions{
    CallbackOptions: &schema.CallbackOptions{
        Verbose: golc.Verbose,
    },
    ModelName:        openai.GPT3Dot5Turbo,
    Temperature:      1,
    TopP:             1,
    PresencePenalty:  0,
    FrequencyPenalty: 0,
    MaxRetries:       3,
}
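These defaults can be overridden per instance through the functional options pattern used throughout the package. A minimal sketch (the option values are illustrative only; openai.GPT4 is a constant from the go-openai package):

cm, err := chatmodel.NewOpenAI(os.Getenv("OPENAI_API_KEY"), func(o *chatmodel.OpenAIOptions) {
    o.ModelName = openai.GPT4 // constant from the go-openai package
    o.Temperature = 0.2       // lower temperature for more deterministic output
    o.MaxRetries = 5          // retry transient API errors a few more times
})
if err != nil {
    log.Fatal(err)
}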
Functions ¶
This section is empty.
Types ¶
type Anthropic ¶
Anthropic is a chat model based on the Anthropic API.
func NewAnthropic ¶
func NewAnthropic(apiKey string, optFns ...func(o *AnthropicOptions)) (*Anthropic, error)
NewAnthropic creates a new instance of the Anthropic chat model with the provided options.
func NewAnthropicFromClient ¶ added in v0.0.67
func NewAnthropicFromClient(client AnthropicClient, optFns ...func(o *AnthropicOptions)) (*Anthropic, error)
NewAnthropicFromClient creates a new instance of the Anthropic chat model with the provided client and options.
func (*Anthropic) Generate ¶
func (cm *Anthropic) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
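For illustration, a minimal end-to-end sketch; schema.NewHumanChatMessage and the Generations/Text result fields are assumptions about the schema package, which is not documented on this page:

cm, err := chatmodel.NewAnthropic(os.Getenv("ANTHROPIC_API_KEY"))
if err != nil {
    log.Fatal(err)
}
res, err := cm.Generate(context.Background(), schema.ChatMessages{
    schema.NewHumanChatMessage("Why is the sky blue?"), // assumed message constructor
})
if err != nil {
    log.Fatal(err)
}
fmt.Println(res.Generations[0].Text) // assumed result fields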
func (*Anthropic) InvocationParams ¶ added in v0.0.27
InvocationParams returns the parameters used in the model invocation.
type AnthropicClient ¶ added in v0.0.67
type AnthropicClient interface {
CreateCompletion(ctx context.Context, request *anthropic.CompletionRequest) (*anthropic.CompletionResponse, error)
}
AnthropicClient is the interface for the Anthropic client.
type AnthropicOptions ¶
type AnthropicOptions struct {
    *schema.CallbackOptions `map:"-"`
    schema.Tokenizer        `map:"-"`

    // Model name to use.
    ModelName string `map:"model_name,omitempty"`

    // Temperature parameter controls the randomness of the generation output.
    Temperature float32 `map:"temperature,omitempty"`

    // Denotes the number of tokens to predict per generation.
    MaxTokens int `map:"max_tokens,omitempty"`

    // TopK parameter specifies the number of highest probability tokens to consider for generation.
    TopK int `map:"top_k,omitempty"`

    // TopP parameter specifies the cumulative probability threshold for generating tokens.
    TopP float32 `map:"top_p,omitempty"`
}
AnthropicOptions contains options for configuring the Anthropic chat model.
type AzureOpenAI ¶ added in v0.0.36
type AzureOpenAI struct {
    *OpenAI
    // contains filtered or unexported fields
}
func NewAzureOpenAI ¶ added in v0.0.26
func NewAzureOpenAI(apiKey, baseURL string, optFns ...func(o *AzureOpenAIOptions)) (*AzureOpenAI, error)
func (*AzureOpenAI) InvocationParams ¶ added in v0.0.55
func (cm *AzureOpenAI) InvocationParams() map[string]any
InvocationParams returns the parameters used in the model invocation.
func (*AzureOpenAI) Type ¶ added in v0.0.36
func (cm *AzureOpenAI) Type() string
Type returns the type of the model.
type AzureOpenAIOptions ¶ added in v0.0.26
type AzureOpenAIOptions struct {
    OpenAIOptions

    Deployment string `map:"deployment,omitempty"`
}
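Because AzureOpenAIOptions embeds OpenAIOptions, every OpenAI option can be set alongside the Azure-specific Deployment. A sketch with placeholder resource and deployment names:

cm, err := chatmodel.NewAzureOpenAI(
    os.Getenv("AZURE_OPENAI_API_KEY"),
    "https://my-resource.openai.azure.com", // placeholder base URL
    func(o *chatmodel.AzureOpenAIOptions) {
        o.Deployment = "my-gpt4-deployment" // placeholder deployment name
        o.Temperature = 0
    },
)
if err != nil {
    log.Fatal(err)
}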
type Bedrock ¶ added in v0.0.71
Bedrock is an implementation of the schema.ChatModel interface for the Amazon Bedrock model.
func NewBedrock ¶ added in v0.0.71
func NewBedrock(client BedrockRuntimeClient, modelID string, optFns ...func(o *BedrockOptions)) (*Bedrock, error)
NewBedrock creates an instance of the Bedrock model.
func NewBedrockAntrophic ¶ added in v0.0.78
func NewBedrockAntrophic(client BedrockRuntimeClient, optFns ...func(o *BedrockAnthropicOptions)) (*Bedrock, error)
NewBedrockAntrophic creates a new instance of Bedrock for the "anthropic" provider.
func NewBedrockMeta ¶ added in v0.0.87
func NewBedrockMeta(client BedrockRuntimeClient, optFns ...func(o *BedrockMetaOptions)) (*Bedrock, error)
NewBedrockMeta creates a new instance of Bedrock for the "meta" provider.
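The constructors accept any BedrockRuntimeClient; the usual choice is the client from the AWS SDK for Go v2. A sketch (region and option values are illustrative):

cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion("us-east-1"))
if err != nil {
    log.Fatal(err)
}
client := bedrockruntime.NewFromConfig(cfg)
cm, err := chatmodel.NewBedrockAntrophic(client, func(o *chatmodel.BedrockAnthropicOptions) {
    o.Temperature = 0.5
})
if err != nil {
    log.Fatal(err)
}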
func (*Bedrock) Callbacks ¶ added in v0.0.71
Callbacks returns the registered callbacks of the model.
func (*Bedrock) Generate ¶ added in v0.0.71
func (cm *Bedrock) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Bedrock) InvocationParams ¶ added in v0.0.71
InvocationParams returns the parameters used in the model invocation.
type BedrockAnthropicOptions ¶ added in v0.0.78
type BedrockAnthropicOptions struct {
    *schema.CallbackOptions `map:"-"`
    schema.Tokenizer        `map:"-"`

    // Model id to use.
    ModelID string `map:"model_id,omitempty"`

    // MaxTokensToSample sets the maximum number of tokens in the generated text.
    MaxTokensToSample int `map:"max_tokens_to_sample"`

    // Temperature controls the randomness of text generation. Higher values make it more random.
    Temperature float32 `map:"temperature"`

    // TopP is the total probability mass of tokens to consider at each step.
    TopP float32 `map:"top_p,omitempty"`

    // TopK determines how the model selects tokens for output.
    TopK int `map:"top_k"`

    // Stream indicates whether to stream the results or not.
    Stream bool `map:"stream,omitempty"`
}
BedrockAnthropicOptions contains options for configuring the Bedrock model with the "anthropic" provider.
type BedrockInputOutputAdapter ¶ added in v0.0.71
type BedrockInputOutputAdapter struct {
// contains filtered or unexported fields
}
BedrockInputOutputAdapter is a helper struct for preparing input and handling output for the Bedrock model.
func NewBedrockInputOutputAdapter ¶ added in v0.0.71
func NewBedrockInputOutputAdapter(provider string) *BedrockInputOutputAdapter
NewBedrockInputOutputAdapter creates a new instance of BedrockInputOutputAdapter.
func (*BedrockInputOutputAdapter) PrepareInput ¶ added in v0.0.71
func (bioa *BedrockInputOutputAdapter) PrepareInput(messages schema.ChatMessages, modelParams map[string]any, stop []string) ([]byte, error)
PrepareInput prepares the input for the Bedrock model based on the specified provider.
func (*BedrockInputOutputAdapter) PrepareOutput ¶ added in v0.0.71
func (bioa *BedrockInputOutputAdapter) PrepareOutput(response []byte) (string, error)
PrepareOutput prepares the output for the Bedrock model based on the specified provider.
func (*BedrockInputOutputAdapter) PrepareStreamOutput ¶ added in v0.0.96
func (bioa *BedrockInputOutputAdapter) PrepareStreamOutput(response []byte) (string, error)
PrepareStreamOutput prepares the streaming output for the Bedrock model based on the specified provider.
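The adapter can also be used directly to inspect the provider-specific request body. A sketch, with schema.NewHumanChatMessage as an assumed message constructor:

bioa := chatmodel.NewBedrockInputOutputAdapter("anthropic")
messages := schema.ChatMessages{
    schema.NewHumanChatMessage("Hello!"), // assumed message constructor
}
body, err := bioa.PrepareInput(messages, map[string]any{"temperature": 0.5}, nil)
if err != nil {
    log.Fatal(err)
}
fmt.Println(string(body)) // provider-specific request body (typically JSON)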
type BedrockMetaOptions ¶ added in v0.0.87
type BedrockMetaOptions struct {
    *schema.CallbackOptions `map:"-"`
    schema.Tokenizer        `map:"-"`

    // Model id to use.
    ModelID string `map:"model_id,omitempty"`

    // Temperature controls the randomness of text generation. Higher values make it more random.
    Temperature float32 `map:"temperature"`

    // TopP is the total probability mass of tokens to consider at each step.
    TopP float32 `map:"top_p,omitempty"`

    // MaxGenLen specifies the maximum number of tokens to use in the generated response.
    MaxGenLen int `map:"max_gen_len"`

    // Stream indicates whether to stream the results or not.
    Stream bool `map:"stream,omitempty"`
}
BedrockMetaOptions contains options for configuring the Bedrock model with the "meta" provider.
type BedrockOptions ¶ added in v0.0.71
type BedrockOptions struct {
    *schema.CallbackOptions `map:"-"`
    schema.Tokenizer        `map:"-"`

    // Model params to use.
    ModelParams map[string]any `map:"model_params,omitempty"`

    // Stream indicates whether to stream the results or not.
    Stream bool `map:"stream,omitempty"`
}
BedrockOptions contains options for configuring the Bedrock model.
type BedrockRuntimeClient ¶ added in v0.0.71
type BedrockRuntimeClient interface {
    InvokeModel(ctx context.Context, params *bedrockruntime.InvokeModelInput, optFns ...func(*bedrockruntime.Options)) (*bedrockruntime.InvokeModelOutput, error)
    InvokeModelWithResponseStream(ctx context.Context, params *bedrockruntime.InvokeModelWithResponseStreamInput, optFns ...func(*bedrockruntime.Options)) (*bedrockruntime.InvokeModelWithResponseStreamOutput, error)
}
BedrockRuntimeClient is an interface for the Bedrock model runtime client.
type Cohere ¶ added in v0.0.95
Cohere represents an instance of the Cohere language model.
func NewCohere ¶ added in v0.0.95
func NewCohere(apiKey string, optFns ...func(o *CohereOptions)) (*Cohere, error)
NewCohere creates a new Cohere instance using the provided API key and optional configuration options. It internally creates a Cohere client from the API key and initializes the Cohere struct.
func NewCohereFromClient ¶ added in v0.0.95
func NewCohereFromClient(client CohereClient, optFns ...func(o *CohereOptions)) (*Cohere, error)
NewCohereFromClient creates a new Cohere instance using the provided Cohere client and optional configuration options.
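A minimal usage sketch; the message constructor and result fields are assumptions about the schema package:

cm, err := chatmodel.NewCohere(os.Getenv("COHERE_API_KEY"), func(o *chatmodel.CohereOptions) {
    o.Temperature = 0.3
})
if err != nil {
    log.Fatal(err)
}
res, err := cm.Generate(context.Background(), schema.ChatMessages{
    schema.NewHumanChatMessage("Summarize the plot of Hamlet."), // assumed message constructor
})
if err != nil {
    log.Fatal(err)
}
fmt.Println(res.Generations[0].Text) // assumed result fields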
func (*Cohere) Callbacks ¶ added in v0.0.95
Callbacks returns the registered callbacks of the model.
func (*Cohere) Generate ¶ added in v0.0.95
func (cm *Cohere) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Cohere) InvocationParams ¶ added in v0.0.95
InvocationParams returns the parameters used in the model invocation.
type CohereClient ¶ added in v0.0.95
type CohereClient interface {
    // Chat performs a non-streaming chat with the Cohere API, generating a response based on the provided request.
    // It returns a non-streamed response or an error if the operation fails.
    Chat(ctx context.Context, request *cohere.ChatRequest, opts ...core.RequestOption) (*cohere.NonStreamedChatResponse, error)

    // ChatStream performs a streaming chat with the Cohere API, generating responses in a stream based on the provided request.
    // It returns a stream of responses or an error if the operation fails.
    ChatStream(ctx context.Context, request *cohere.ChatStreamRequest, opts ...core.RequestOption) (*core.Stream[cohere.StreamedChatResponse], error)
}
CohereClient defines the interface for interacting with the Cohere API.
type CohereOptions ¶ added in v0.0.95
type CohereOptions struct {
    // CallbackOptions specify options for handling callbacks during text generation.
    *schema.CallbackOptions `map:"-"`

    // Tokenizer represents the tokenizer to be used with the LLM model.
    schema.Tokenizer `map:"-"`

    // Model represents the name or identifier of the Cohere language model to use.
    Model string `map:"model,omitempty"`

    // Temperature is a non-negative float that tunes the degree of randomness in generation.
    Temperature float64 `map:"temperature"`

    // MaxRetries represents the maximum number of retries to make when generating.
    MaxRetries uint `map:"max_retries,omitempty"`

    // Stream indicates whether to stream the results or not.
    Stream bool `map:"stream,omitempty"`
}
CohereOptions contains options for configuring the Cohere model.
type Ernie ¶ added in v0.0.67
Ernie is a struct representing the Ernie language model.
func NewErnie ¶ added in v0.0.67
func NewErnie(clientID, clientSecret string, optFns ...func(o *ErnieOptions)) (*Ernie, error)
NewErnie creates a new instance of the Ernie chat model.
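Unlike most models in this package, Ernie authenticates with a client ID and secret rather than a single API key. A sketch:

cm, err := chatmodel.NewErnie(
    os.Getenv("ERNIE_CLIENT_ID"),
    os.Getenv("ERNIE_CLIENT_SECRET"),
    func(o *chatmodel.ErnieOptions) {
        o.Temperature = 0.7
    },
)
if err != nil {
    log.Fatal(err)
}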
func NewErnieFromClient ¶ added in v0.0.67
func NewErnieFromClient(client ErnieClient, optFns ...func(o *ErnieOptions)) (*Ernie, error)
NewErnieFromClient creates a new instance of the Ernie chat model from a custom ErnieClient.
func (*Ernie) Generate ¶ added in v0.0.67
func (cm *Ernie) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Ernie) InvocationParams ¶ added in v0.0.67
InvocationParams returns the parameters used in the model invocation.
type ErnieClient ¶ added in v0.0.67
type ErnieClient interface {
CreateChatCompletion(ctx context.Context, model string, request *ernie.ChatCompletionRequest) (*ernie.ChatCompletionResponse, error)
}
ErnieClient is the interface for the Ernie client.
type ErnieOptions ¶ added in v0.0.67
type ErnieOptions struct {
    *schema.CallbackOptions `map:"-"`
    schema.Tokenizer        `map:"-"`

    // ModelName is the name of the Ernie chat model to use.
    ModelName string `map:"model_name,omitempty"`

    // Temperature is the sampling temperature to use during text generation.
    Temperature float64 `map:"temperature,omitempty"`

    // TopP is the total probability mass of tokens to consider at each step.
    TopP float64 `map:"top_p,omitempty"`

    // PenaltyScore is a parameter used during text generation to apply a penalty for generating longer responses.
    PenaltyScore float64 `map:"penalty_score"`
}
ErnieOptions is the options struct for the Ernie chat model.
type Fake ¶ added in v0.0.14
Fake is a mock implementation of the schema.ChatModel interface for testing purposes.
func NewFake ¶ added in v0.0.14
func NewFake(fakeResultFunc FakeResultFunc, optFns ...func(o *FakeOptions)) *Fake
NewFake creates an instance of the Fake model with the provided custom result function.
func NewSimpleFake ¶ added in v0.0.58
func NewSimpleFake(response string, optFns ...func(o *FakeOptions)) *Fake
NewSimpleFake creates a simple instance of the Fake model with a fixed response for all inputs.
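The Fake model is useful for unit-testing code that depends on schema.ChatModel without calling a real API. A sketch inside a test function; the message constructor and result fields are assumptions about the schema package:

fake := chatmodel.NewSimpleFake("always this answer")
res, err := fake.Generate(context.Background(), schema.ChatMessages{
    schema.NewHumanChatMessage("anything"), // assumed message constructor
})
if err != nil {
    t.Fatal(err)
}
if res.Generations[0].Text != "always this answer" { // assumed result fields
    t.Errorf("unexpected result: %v", res.Generations[0].Text)
}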
func (*Fake) Generate ¶ added in v0.0.14
func (cm *Fake) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Fake) InvocationParams ¶ added in v0.0.27
InvocationParams returns the parameters used in the model invocation.
type FakeOptions ¶ added in v0.0.58
type FakeOptions struct {
    *schema.CallbackOptions `map:"-"`
    schema.Tokenizer        `map:"-"`

    ChatModelType string `map:"-"`
}
FakeOptions contains options for configuring the Fake model.
type FakeResultFunc ¶ added in v0.0.58
type FakeResultFunc func(ctx context.Context, messages schema.ChatMessages) (*schema.ModelResult, error)
FakeResultFunc is a function type used for providing custom model results in the Fake model.
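A custom FakeResultFunc lets a test vary the result per input or inject errors. A sketch in which the schema.ModelResult and schema.Generation field names are assumptions:

fake := chatmodel.NewFake(func(ctx context.Context, messages schema.ChatMessages) (*schema.ModelResult, error) {
    if len(messages) == 0 {
        return nil, errors.New("no messages provided")
    }
    return &schema.ModelResult{ // field names are assumptions
        Generations: []schema.Generation{{Text: "stubbed answer"}},
    }, nil
})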
type GoogleGenAI ¶ added in v0.0.92
func NewGoogleGenAI ¶ added in v0.0.92
func NewGoogleGenAI(client GoogleGenAIClient, optFns ...func(o *GoogleGenAIOptions)) (*GoogleGenAI, error)
func (*GoogleGenAI) Callbacks ¶ added in v0.0.92
func (cm *GoogleGenAI) Callbacks() []schema.Callback
Callbacks returns the registered callbacks of the model.
func (*GoogleGenAI) Generate ¶ added in v0.0.92
func (cm *GoogleGenAI) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*GoogleGenAI) InvocationParams ¶ added in v0.0.92
func (cm *GoogleGenAI) InvocationParams() map[string]any
InvocationParams returns the parameters used in the model invocation.
func (*GoogleGenAI) Type ¶ added in v0.0.92
func (cm *GoogleGenAI) Type() string
Type returns the type of the model.
func (*GoogleGenAI) Verbose ¶ added in v0.0.92
func (cm *GoogleGenAI) Verbose() bool
Verbose returns the verbosity setting of the model.
type GoogleGenAIClient ¶ added in v0.0.92
type GoogleGenAIClient interface {
    GenerateContent(context.Context, *generativelanguagepb.GenerateContentRequest, ...gax.CallOption) (*generativelanguagepb.GenerateContentResponse, error)
    StreamGenerateContent(ctx context.Context, req *generativelanguagepb.GenerateContentRequest, opts ...gax.CallOption) (generativelanguagepb.GenerativeService_StreamGenerateContentClient, error)
    CountTokens(context.Context, *generativelanguagepb.CountTokensRequest, ...gax.CallOption) (*generativelanguagepb.CountTokensResponse, error)
}
GoogleGenAIClient is an interface for the GoogleGenAI model client.
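The interface is satisfied by the GenerativeClient from cloud.google.com/go/ai/generativelanguage/apiv1, and a mock can be supplied for tests. A sketch of the real wiring; treat the exact constructor and option names as assumptions about the Google Cloud Go SDK:

client, err := generativelanguage.NewGenerativeClient(ctx, option.WithAPIKey(os.Getenv("GOOGLE_API_KEY")))
if err != nil {
    log.Fatal(err)
}
defer client.Close()
cm, err := chatmodel.NewGoogleGenAI(client, func(o *chatmodel.GoogleGenAIOptions) {
    o.Temperature = 0.4
})
if err != nil {
    log.Fatal(err)
}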
type GoogleGenAIOptions ¶ added in v0.0.92
type GoogleGenAIOptions struct {
    // CallbackOptions specify options for handling callbacks during text generation.
    *schema.CallbackOptions `map:"-"`

    // Tokenizer represents the tokenizer to be used with the LLM model.
    schema.Tokenizer `map:"-"`

    // ModelName is the name of the GoogleGenAI model to use.
    ModelName string `map:"model_name,omitempty"`

    // CandidateCount is the number of candidate generations to consider.
    CandidateCount int32 `map:"candidate_count,omitempty"`

    // MaxOutputTokens is the maximum number of tokens to generate in the output.
    MaxOutputTokens int32 `map:"max_output_tokens,omitempty"`

    // Temperature controls the randomness of the generation. Higher values make the output more random.
    Temperature float32 `map:"temperature,omitempty"`

    // TopP is the nucleus sampling parameter. It controls the cumulative probability of the most likely tokens to sample from.
    TopP float32 `map:"top_p,omitempty"`

    // TopK is the number of top tokens to consider for sampling.
    TopK int32 `map:"top_k,omitempty"`

    // Stream indicates whether to stream the results or not.
    Stream bool `map:"stream,omitempty"`
}
type Ollama ¶ added in v0.0.91
Ollama is a struct representing the Ollama generative model.
func NewOllama ¶ added in v0.0.91
func NewOllama(client OllamaClient, optFns ...func(o *OllamaOptions)) (*Ollama, error)
NewOllama creates a new instance of the Ollama model with the provided client and options.
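NewOllama takes a client rather than a URL. A sketch, assuming a constructor ollama.New(baseURL) in the library's Ollama integration package (the constructor name is an assumption):

client := ollama.New("http://localhost:11434") // assumed constructor
cm, err := chatmodel.NewOllama(client, func(o *chatmodel.OllamaOptions) {
    o.ModelName = "llama2"
    o.Temperature = 0.8
})
if err != nil {
    log.Fatal(err)
}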
func (*Ollama) Callbacks ¶ added in v0.0.91
Callbacks returns the registered callbacks of the model.
func (*Ollama) Generate ¶ added in v0.0.91
func (cm *Ollama) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*Ollama) InvocationParams ¶ added in v0.0.91
InvocationParams returns the parameters used in the model invocation.
type OllamaClient ¶ added in v0.0.91
type OllamaClient interface {
    // CreateChat produces a single request and response for the Ollama generative model.
    CreateChat(ctx context.Context, req *ollama.ChatRequest) (*ollama.ChatResponse, error)

    // CreateChatStream initiates a streaming request and returns a stream for the Ollama generative model.
    CreateChatStream(ctx context.Context, req *ollama.ChatRequest) (*ollama.ChatStream, error)
}
OllamaClient is an interface for the Ollama generative model client.
type OllamaOptions ¶ added in v0.0.91
type OllamaOptions struct {
    // CallbackOptions specify options for handling callbacks during text generation.
    *schema.CallbackOptions `map:"-"`

    // Tokenizer represents the tokenizer to be used with the LLM model.
    schema.Tokenizer `map:"-"`

    // ModelName is the name of the Ollama model to use.
    ModelName string `map:"model_name,omitempty"`

    // Temperature controls the randomness of the generation. Higher values make the output more random.
    Temperature float32 `map:"temperature,omitempty"`

    // MaxTokens is the maximum number of tokens to generate in the completion.
    MaxTokens int `map:"max_tokens,omitempty"`

    // TopP is the nucleus sampling parameter. It controls the cumulative probability of the most likely tokens to sample from.
    TopP float32 `map:"top_p,omitempty"`

    // TopK is the number of top tokens to consider for sampling.
    TopK int `map:"top_k,omitempty"`

    // PresencePenalty penalizes repeated tokens.
    PresencePenalty float32 `map:"presence_penalty,omitempty"`

    // FrequencyPenalty penalizes repeated tokens according to frequency.
    FrequencyPenalty float32 `map:"frequency_penalty,omitempty"`

    // Stream indicates whether to stream the results or not.
    Stream bool `map:"stream,omitempty"`
}
OllamaOptions contains options for the Ollama model.
type OpenAI ¶
OpenAI represents the OpenAI chat model.
func NewOpenAI ¶
func NewOpenAI(apiKey string, optFns ...func(o *OpenAIOptions)) (*OpenAI, error)
NewOpenAI creates a new instance of the OpenAI chat model.
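A minimal end-to-end sketch as a complete program; the golc import paths, schema.NewHumanChatMessage, and the Generations/Text result fields are assumptions based on the identifiers on this page:

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/hupe1980/golc/model/chatmodel" // assumed import path
    "github.com/hupe1980/golc/schema"          // assumed import path
)

func main() {
    cm, err := chatmodel.NewOpenAI(os.Getenv("OPENAI_API_KEY"))
    if err != nil {
        log.Fatal(err)
    }
    res, err := cm.Generate(context.Background(), schema.ChatMessages{
        schema.NewHumanChatMessage("Write a haiku about Go."), // assumed message constructor
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(res.Generations[0].Text) // assumed result fields
}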
func NewOpenAIFromClient ¶ added in v0.0.36
func NewOpenAIFromClient(client OpenAIClient, optFns ...func(o *OpenAIOptions)) (*OpenAI, error)
NewOpenAIFromClient creates a new instance of the OpenAI chat model with the provided client and options.
func (*OpenAI) Generate ¶
func (cm *OpenAI) Generate(ctx context.Context, messages schema.ChatMessages, optFns ...func(o *schema.GenerateOptions)) (*schema.ModelResult, error)
Generate generates text based on the provided chat messages and options.
func (*OpenAI) InvocationParams ¶ added in v0.0.27
InvocationParams returns the parameters used in the model invocation.
type OpenAIClient ¶ added in v0.0.36
type OpenAIClient interface {
    CreateChatCompletion(ctx context.Context, request openai.ChatCompletionRequest) (response openai.ChatCompletionResponse, err error)
    CreateChatCompletionStream(ctx context.Context, request openai.ChatCompletionRequest) (stream *openai.ChatCompletionStream, err error)
}
OpenAIClient is an interface for the OpenAI chat model client.
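Because NewOpenAIFromClient accepts this interface, a test double can stand in for the real client. A sketch using types from the go-openai package:

type stubOpenAIClient struct{}

func (s *stubOpenAIClient) CreateChatCompletion(ctx context.Context, request openai.ChatCompletionRequest) (openai.ChatCompletionResponse, error) {
    return openai.ChatCompletionResponse{
        Choices: []openai.ChatCompletionChoice{
            {Message: openai.ChatCompletionMessage{Role: openai.ChatMessageRoleAssistant, Content: "stubbed"}},
        },
    }, nil
}

func (s *stubOpenAIClient) CreateChatCompletionStream(ctx context.Context, request openai.ChatCompletionRequest) (*openai.ChatCompletionStream, error) {
    return nil, errors.New("streaming not supported by this stub")
}

// Wire the stub into the chat model:
// cm, err := chatmodel.NewOpenAIFromClient(&stubOpenAIClient{})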
type OpenAIOptions ¶
type OpenAIOptions struct {
    *schema.CallbackOptions `map:"-"`
    schema.Tokenizer        `map:"-"`

    // Model name to use.
    ModelName string `map:"model_name,omitempty"`

    // Sampling temperature to use.
    Temperature float32 `map:"temperature,omitempty"`

    // The maximum number of tokens to generate in the completion.
    // -1 returns as many tokens as possible given the prompt and
    // the model's maximal context size.
    MaxTokens int `map:"max_tokens,omitempty"`

    // Total probability mass of tokens to consider at each step.
    TopP float32 `map:"top_p,omitempty"`

    // Penalizes repeated tokens.
    PresencePenalty float32 `map:"presence_penalty,omitempty"`

    // Penalizes repeated tokens according to frequency.
    FrequencyPenalty float32 `map:"frequency_penalty,omitempty"`

    // How many completions to generate for each prompt.
    N int `map:"n,omitempty"`

    // BaseURL is the base URL of the OpenAI service.
    BaseURL string `map:"base_url,omitempty"`

    // OrgID is the organization ID for accessing the OpenAI service.
    OrgID string `map:"org_id,omitempty"`

    // Stream indicates whether to stream the results or not.
    Stream bool `map:"stream,omitempty"`

    // MaxRetries represents the maximum number of retries to make when generating.
    MaxRetries uint `map:"max_retries,omitempty"`
}
OpenAIOptions contains the options for the OpenAI chat model.