Documentation
Constants
This section is empty.
Variables
This section is empty.
Functions
func GetImplSpecificOptions
func GetImplSpecificOptions[T any](base *T, opts ...Option) *T
GetImplSpecificOptions extracts the implementation-specific options from an Option list, optionally providing base options with default values. For example:
myOption := &MyOption{
	Field1: "default_value",
}
myOption = model.GetImplSpecificOptions(myOption, opts...)
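As a fuller sketch of the pattern, an implementation might define its own option struct plus a wrapper built on WrapImplSpecificOptFn. MyOption, WithField1, and resolveMyOptions are hypothetical names used only for illustration:

package myimpl

import "github.com/cloudwego/eino/components/model"

// MyOption is a hypothetical implementation-specific option struct.
type MyOption struct {
	Field1 string
}

// WithField1 is a hypothetical implementation-specific option. It is
// carried through the common model.Option list via WrapImplSpecificOptFn.
func WithField1(v string) model.Option {
	return model.WrapImplSpecificOptFn(func(o *MyOption) {
		o.Field1 = v
	})
}

// resolveMyOptions applies defaults first, then overrides them with
// whatever implementation-specific options the caller passed.
func resolveMyOptions(opts ...model.Option) *MyOption {
	myOption := &MyOption{Field1: "default_value"}
	return model.GetImplSpecificOptions(myOption, opts...)
}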
Types
type CallbackInput
type CallbackInput struct {
	// Messages is the list of messages to be sent to the model.
	Messages []*schema.Message
	// Tools is the list of tools available to the model.
	Tools []*schema.ToolInfo
	// ToolChoice controls which tool the model should use.
	ToolChoice any // string / *schema.ToolInfo
	// Config is the config for the model.
	Config *Config
	// Extra is the extra information for the callback.
	Extra map[string]any
}
CallbackInput is the input for the model callback.
func ConvCallbackInput
func ConvCallbackInput(src callbacks.CallbackInput) *CallbackInput
ConvCallbackInput converts the generic callback input into the typed model CallbackInput.
type CallbackOutput
type CallbackOutput struct {
	// Message is the message generated by the model.
	Message *schema.Message
	// Config is the config for the model.
	Config *Config
	// TokenUsage is the token usage of this request.
	TokenUsage *TokenUsage
	// Extra is the extra information for the callback.
	Extra map[string]any
}
CallbackOutput is the output for the model callback.
func ConvCallbackOutput
func ConvCallbackOutput(src callbacks.CallbackOutput) *CallbackOutput
ConvCallbackOutput converts the generic callback output into the typed model CallbackOutput.
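As a sketch of where these converters typically run, a handler built with the eino callbacks package can turn the generic payloads back into the typed structs above. The HandlerBuilder wiring below is assumed; adapt it to your version of the library:

package main

import (
	"context"
	"fmt"

	"github.com/cloudwego/eino/callbacks"
	"github.com/cloudwego/eino/components/model"
)

// newLoggingHandler logs model inputs and token usage.
func newLoggingHandler() callbacks.Handler {
	return callbacks.NewHandlerBuilder().
		OnStartFn(func(ctx context.Context, info *callbacks.RunInfo, input callbacks.CallbackInput) context.Context {
			// Convert the generic input into the typed model CallbackInput.
			if in := model.ConvCallbackInput(input); in != nil {
				fmt.Printf("model called with %d messages\n", len(in.Messages))
			}
			return ctx
		}).
		OnEndFn(func(ctx context.Context, info *callbacks.RunInfo, output callbacks.CallbackOutput) context.Context {
			// Convert the generic output and read the token usage, if any.
			if out := model.ConvCallbackOutput(output); out != nil && out.TokenUsage != nil {
				fmt.Printf("tokens used: %d\n", out.TokenUsage.TotalTokens)
			}
			return ctx
		}).
		Build()
}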
type ChatModel
type ChatModel interface {
	Generate(ctx context.Context, input []*schema.Message, opts ...Option) (*schema.Message, error)
	Stream(ctx context.Context, input []*schema.Message, opts ...Option) (
		*schema.StreamReader[*schema.Message], error)

	// BindTools binds tools to the model.
	// BindTools should generally be called before requesting the ChatModel.
	// Note that BindTools and Generate are not atomic with respect to each other.
	BindTools(tools []*schema.ToolInfo) error
}
ChatModel is implemented by chat model providers such as OpenAI and MaaS offerings. Use Generate for a complete output, and Stream for streaming output.
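A minimal usage sketch of both call styles; cm can be any ChatModel implementation, and the option values are illustrative:

package main

import (
	"context"
	"fmt"
	"io"

	"github.com/cloudwego/eino/components/model"
	"github.com/cloudwego/eino/schema"
)

// run demonstrates Generate (complete output) and Stream (incremental output).
func run(ctx context.Context, cm model.ChatModel) error {
	msgs := []*schema.Message{
		schema.SystemMessage("You are a helpful assistant."),
		schema.UserMessage("Say hello."),
	}

	// One-shot: the full message comes back in a single response.
	out, err := cm.Generate(ctx, msgs, model.WithTemperature(0.7), model.WithMaxTokens(256))
	if err != nil {
		return err
	}
	fmt.Println(out.Content)

	// Streaming: chunks arrive on a StreamReader until io.EOF.
	sr, err := cm.Stream(ctx, msgs)
	if err != nil {
		return err
	}
	defer sr.Close()
	for {
		chunk, err := sr.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		fmt.Print(chunk.Content)
	}
	return nil
}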
type Config
type Config struct {
	// Model is the model name.
	Model string
	// MaxTokens is the max number of tokens. If it is reached, the model
	// stops generating and usually returns a finish reason of "length".
	MaxTokens int
	// Temperature controls the randomness of the model.
	Temperature float32
	// TopP is the top p value, which controls the diversity of the model.
	TopP float32
	// Stop is the list of stop words, which controls when the model stops generating.
	Stop []string
}
Config is the config for the model.
type Option
type Option struct {
// contains filtered or unexported fields
}
Option is the call option for the ChatModel component.
func WithMaxTokens
func WithMaxTokens(maxTokens int) Option
WithMaxTokens is the option to set the max tokens for the model.
func WithTemperature
func WithTemperature(temperature float32) Option
WithTemperature is the option to set the temperature for the model.
func WrapImplSpecificOptFn
func WrapImplSpecificOptFn[T any](optFn func(*T)) Option
WrapImplSpecificOptFn is the option that wraps an implementation-specific option function.
type Options
type Options struct {
	// Temperature is the temperature for the model, which controls the randomness of the model.
	Temperature *float32
	// MaxTokens is the max number of tokens. If it is reached, the model
	// stops generating and usually returns a finish reason of "length".
	MaxTokens *int
	// Model is the model name.
	Model *string
	// TopP is the top p for the model, which controls the diversity of the model.
	TopP *float32
	// Stop is the stop words for the model, which controls the stopping condition of the model.
	Stop []string
}
Options is the common options for the model.
func GetCommonOptions
func GetCommonOptions(base *Options, opts ...Option) *Options
GetCommonOptions extracts the common model Options from an Option list, optionally providing a base Options with default values.
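A short sketch of how an implementation might resolve the common options inside Generate; the default values are illustrative:

package myimpl

import "github.com/cloudwego/eino/components/model"

// resolveCommon merges caller-supplied options over default values.
func resolveCommon(opts ...model.Option) *model.Options {
	temperature := float32(1.0) // illustrative defaults
	maxTokens := 1024
	base := &model.Options{
		Temperature: &temperature,
		MaxTokens:   &maxTokens,
	}
	return model.GetCommonOptions(base, opts...)
}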
type TokenUsage
type TokenUsage struct {
	// PromptTokens is the number of prompt tokens.
	PromptTokens int
	// CompletionTokens is the number of completion tokens.
	CompletionTokens int
	// TotalTokens is the total number of tokens.
	TotalTokens int
}
TokenUsage is the token usage for the model.