Documentation
Overview
Package cache provides a generic wrapper that adds caching to an `llms.Model`. Responses are cached under a key derived from the provided messages and call options, and the cache backend is chosen when the wrapper is created.
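As a rough sketch of the intended usage, the wrapper is created around an existing model and then used in its place. The import path, the `New` constructor, and the `newMemoryBackend` helper are assumptions for this example (the backend sketch appears under the Backend type further down) and may differ from the actual package.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/cache"
	"github.com/tmc/langchaingo/llms/openai"
)

func main() {
	ctx := context.Background()

	// Any llms.Model works; OpenAI is used here purely as an example provider.
	llm, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}

	// backend is any Backend implementation; see the in-memory sketch under
	// the Backend type below. Hypothetical helper, invented for this example.
	backend := newMemoryBackend()

	// Assumed constructor: the package is expected to expose something along
	// these lines for building the wrapper (not shown in this excerpt).
	model := cache.New(llm, backend)

	// The first call reaches the underlying model; an identical repeat call
	// is answered from the cache.
	resp, err := model.GenerateContent(ctx, []llms.MessageContent{
		llms.TextParts(llms.ChatMessageTypeHuman, "What is a mutex?"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Choices[0].Content)
}
```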
Constants
This section is empty.
Variables
This section is empty.
Functions
This section is empty.
Types
type Backend
type Backend interface {
	// Get a value from the cache. If the key is not found, return nil.
	Get(ctx context.Context, key string) *llms.ContentResponse

	// Put a value into the cache.
	Put(ctx context.Context, key string, response *llms.ContentResponse)
}
Backend is the interface that needs to be implemented by cache backends.
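For illustration, a minimal in-memory Backend might look like the following. This is a sketch, not part of the package; the `memoryBackend` type and `newMemoryBackend` helper are invented for the example.

```go
import (
	"context"
	"sync"

	"github.com/tmc/langchaingo/llms"
)

// memoryBackend is a hypothetical Backend that keeps responses in a map.
type memoryBackend struct {
	mu      sync.Mutex
	entries map[string]*llms.ContentResponse
}

func newMemoryBackend() *memoryBackend {
	return &memoryBackend{entries: make(map[string]*llms.ContentResponse)}
}

// Get returns the cached response for key, or nil when there is no entry.
func (m *memoryBackend) Get(_ context.Context, key string) *llms.ContentResponse {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.entries[key]
}

// Put stores response under key, overwriting any previous entry.
func (m *memoryBackend) Put(_ context.Context, key string, response *llms.ContentResponse) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.entries[key] = response
}
```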
type Cacher
type Cacher struct {
// contains filtered or unexported fields
}
Cacher is an LLM wrapper that caches the responses from the LLM.
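Since Cacher exposes the same GenerateContent and Call methods as the model it wraps, it should be usable anywhere an `llms.Model` is expected. A compile-time assertion (assuming the import path used in the overview sketch) expresses that expectation:

```go
import (
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/cache"
)

// Compile-time check that *Cacher can stand in for an llms.Model.
var _ llms.Model = (*cache.Cacher)(nil)
```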
func (*Cacher) Call
func (c *Cacher) Call(ctx context.Context, prompt string, options ...llms.CallOption) (string, error)
Call is a simplified interface for a text-only Model, generating a single string response from a single string prompt.
Deprecated: this method is retained for backwards compatibility. Use the more general [GenerateContent] instead. You can also use the [GenerateFromSinglePrompt] function which provides a similar capability to Call and is built on top of the new interface.
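With the cached wrapper, the recommended path for a single text prompt is llms.GenerateFromSinglePrompt, which builds on GenerateContent and should therefore still be served from the cache on repeated prompts. A sketch, reusing the `model` wrapper from the overview example:

```go
answer, err := llms.GenerateFromSinglePrompt(ctx, model,
	"Summarize what a cache backend does in one sentence.")
if err != nil {
	log.Fatal(err)
}
fmt.Println(answer)
```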
func (*Cacher) GenerateContent
func (c *Cacher) GenerateContent(ctx context.Context, messages []llms.MessageContent, options ...llms.CallOption) (*llms.ContentResponse, error)
GenerateContent asks the model to generate content from a sequence of messages. It's the most general interface for multi-modal LLMs that support chat-like interactions.
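Because the cache key is derived from both the messages and the options, repeating a call with identical inputs should hit the cache, while changing an option creates a separate entry. A sketch, continuing with the `model` wrapper from the overview example:

```go
msgs := []llms.MessageContent{
	llms.TextParts(llms.ChatMessageTypeHuman, "Name three Go concurrency primitives."),
}

// Same messages, same options: the second call should come back from the cache.
first, err := model.GenerateContent(ctx, msgs)
if err != nil {
	log.Fatal(err)
}
cached, err := model.GenerateContent(ctx, msgs)
if err != nil {
	log.Fatal(err)
}
fmt.Println(first.Choices[0].Content == cached.Choices[0].Content) // expected: true

// A different option changes the key, so the underlying model is called again.
if _, err := model.GenerateContent(ctx, msgs, llms.WithTemperature(0.9)); err != nil {
	log.Fatal(err)
}
```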