aillm

package
v0.0.0-...-0f623d9
Published: Feb 7, 2025 License: Apache-2.0 Imports: 31 Imported by: 0

Documentation

Overview

Copyright (c) 2025 John Doe

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


Index

Constants

View Source
const (
	SimilaritySearch  = 1 // Cosine-similarity search over stored embeddings
	KNearestNeighbors = 2 // K-nearest-neighbors search over stored embeddings
)

Variables

This section is empty.

Functions

This section is empty.

Types

type EmbeddingClient

type EmbeddingClient interface {
	// NewEmbedder initializes and returns an embedding model instance.
	NewEmbedder() (embeddings.Embedder, error)
	// contains filtered or unexported methods
}

EmbeddingClient defines an interface to abstract embedding model clients.

This interface provides methods to initialize and retrieve an embedding model, ensuring a standard contract for different embedding providers such as Ollama and OpenAI.

Methods:

  • NewEmbedder(): Initializes and returns an embedding model instance, or an error if the operation fails.
  • initialized(): Checks if the embedding model has been initialized and is ready for use.

type EmbeddingConfig

type EmbeddingConfig struct {
	ChunkSize    int // Size of each text chunk for embedding
	ChunkOverlap int // Number of overlapping characters between chunks
}

EmbeddingConfig holds the configuration settings for text chunking during embedding operations.

Fields:

  • ChunkSize: The size of each chunk to be created when splitting text for embedding purposes.
  • ChunkOverlap: The number of overlapping characters between consecutive chunks to maintain context.
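
For illustration, a chunking configuration might be built like the sketch below; the values are illustrative and the import path is a placeholder, not taken from this package's documentation.

package main

import "github.com/example/aillm" // placeholder import path

func main() {
	cfg := aillm.EmbeddingConfig{
		ChunkSize:    1000, // illustrative: characters per chunk
		ChunkOverlap: 100,  // illustrative: overlap to preserve context across chunks
	}
	_ = cfg
}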

type LLMAction

type LLMAction struct {
	Action    interface{} `json:"action"`
	TimeStamp time.Time   `json:"timestamp"`
}

LLMAction represents a single logged action with a timestamp, used for benchmarking or output management.

type LLMCallOption

type LLMCallOption func(*LLMCallOptions)

LLMCallOption is a function that configures an LLMCallOptions.

type LLMCallOptions

type LLMCallOptions struct {
	// StreamingFunc is a function to be called for each chunk of a streaming response.
	// Return an error to stop streaming early.
	StreamingFunc  func(ctx context.Context, chunk []byte) error `json:"-"`
	ActionCallFunc func(action LLMAction)                        `json:"-"`
	Prefix         string
	Index          string
	Language       string
	SessionID      string
	ExtraContext   string
	ExactPrompt    string

	LimitGeneralEmbedding bool
	CotextCleanup         bool

	PersistentMemory bool
	// contains filtered or unexported fields
}

type LLMClient

type LLMClient interface {
	// NewLLMClient initializes and returns an LLM model instance.
	NewLLMClient() (llms.Model, error)
}

LLMClient defines an interface for creating a new LLM (Large Language Model) client instance.

Methods:

  • NewLLMClient(): Creates and returns an instance of an LLM model, or returns an error if the initialization fails.

type LLMConfig

type LLMConfig struct {
	Apiurl   string // API endpoint for the LLM service
	AiModel  string // Name of the AI model to be used
	APIToken string // API key required for authorization (e.g., for OpenAI or OVHCloud)
}

LLMConfig struct holds configuration details for the embedding and AI model service.

This struct is used to store the necessary configuration parameters needed to interact with LLM (Large Language Model) services, including API endpoint URLs, model names, and authentication credentials.

Fields:

  • Apiurl: The API endpoint URL of the LLM service for sending requests.
  • AiModel: The specific AI model to be used for embedding or inference operations.
  • APIToken: Authentication token required to access the API (e.g., for OpenAI services).

type LLMContainer

type LLMContainer struct {
	Embedder                            EmbeddingClient // Embedding client to handle text processing
	EmbeddingConfig                     EmbeddingConfig // Configuration for text chunking
	LLMClient                           LLMClient       // AI model client for generating responses
	MemoryManager                       *MemoryManager  // Session-based memory management
	LLMModelLanguageDetectionCapability bool            // Language detection capability flag

	AnswerLanguage          string           // Default answer language - ignored if LLMModelLanguageDetectionCapability = true
	RedisClient             RedisClient      // Redis client for caching and retrieval
	SearchAlgorithm         int              // Semantic search algorithm: SimilaritySearch (cosine similarity) or KNearestNeighbors
	Temperature             float64          // Controls randomness of model output
	TopP                    float64          // Probability threshold for response diversity
	ScoreThreshold          float32          // Threshold for RAG-based responses
	RagRowCount             int              // Number of RAG rows to retrieve for context
	AllowHallucinate        bool             // Enables/disables AI-generated responses when no relevant data is found
	FallbackLanguage        string           // Default language fallback
	NoRagErrorMessage       string           // Message shown when RAG results are empty
	NotRelatedAnswer        string           // Predefined response for unrelated queries
	Character               string           // AI assistant's character/personality settings
	Transcriber             Transcriber      // Responsible for processing and transcribing content
	PersistentMemoryManager PersistentMemory // Advanced memory manager controller
	// contains filtered or unexported fields
}

LLMContainer serves as the main struct that manages LLM operations, embedding configurations, and data storage.

It acts as a container for managing various components required for interacting with an AI model, embedding data, and handling queries and responses.

Fields:

  • Embedder: The embedding client responsible for processing and storing text embeddings.
  • EmbeddingConfig: Configuration settings for text chunking operations.
  • LLMClient: The LLM client that provides access to the AI model for generating responses.
  • MemoryManager: A memory management component that stores session-related data.
  • LLMModelLanguageDetectionCapability: A boolean indicating if the model supports automatic language detection.
  • AnswerLanguage: The preferred language for responses from the model.
  • RedisClient: Redis client for caching embeddings and retrieval operations.
  • SearchAlgorithm: The semantic search algorithm to use (SimilaritySearch or KNearestNeighbors).
  • Temperature: Controls the randomness of the AI's responses (lower values = more deterministic).
  • TopP: Probability threshold for response generation (higher values = more diverse responses).
  • ScoreThreshold: The similarity threshold for retrieval-augmented generation (RAG).
  • RagRowCount: The number of RAG rows to retrieve and analyze for context.
  • AllowHallucinate: Determines if the model can generate responses without relevant data (true/false).
  • FallbackLanguage: The default language to use if the primary language is unavailable.
  • NoRagErrorMessage: The error message to display if no relevant data is found during retrieval.
  • NotRelatedAnswer: A predefined response when the model cannot find relevant information.
  • Character: A personality trait or characteristic assigned to the AI assistant (e.g., formal, friendly).
  • Transcriber: Component responsible for converting speech or text inputs into usable data.
  • PersistentMemoryManager: Advanced memory manager controller for persistent, vector-searchable conversation memory.
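
As a rough sketch of how these pieces fit together (the import path, endpoint, model name, and Redis address below are placeholders, not values documented by this package):

package main

import (
	"log"

	"github.com/example/aillm" // placeholder import path
)

func main() {
	ollama := &aillm.OllamaController{
		Config: aillm.LLMConfig{
			Apiurl:  "http://localhost:11434", // placeholder Ollama endpoint
			AiModel: "llama3",                 // placeholder model name
		},
	}
	llm := &aillm.LLMContainer{
		Embedder:        ollama,                 // OllamaController implements EmbeddingClient
		LLMClient:       ollama,                 // ...and LLMClient
		SearchAlgorithm: aillm.SimilaritySearch, // or aillm.KNearestNeighbors
		RedisClient: aillm.RedisClient{
			Host: "localhost:6379", // placeholder Redis address
		},
	}
	if err := llm.Init(); err != nil { // Init sets defaults and connects to Redis
		log.Fatal(err)
	}
}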

func (*LLMContainer) AskLLM

func (llm *LLMContainer) AskLLM(Query string, options ...LLMCallOption) (LLMResult, error)

AskLLM processes a user query and retrieves an AI-generated response using Retrieval-Augmented Generation (RAG).

This function supports multi-step processes:

  • Retrieves session memory to provide context for the query.
  • Uses a semantic search algorithm (Cosine Similarity or K-Nearest Neighbors) to fetch relevant documents.
  • Constructs the query for the LLM based on user input and past interactions.
  • Calls the LLM to generate a response, optionally streaming results to a callback function.
  • Updates session memory with the query if relevant documents are found.

Parameters:

  • Query: The user's input query.
  • options: A variadic parameter to specify additional options like session ID, language, and streaming functions.

Returns:

  • LLMResult: Struct containing the AI-generated response, retrieved documents, session memory, and logged actions.
  • error: An error if the query fails or if essential components are missing.
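
A hedged usage sketch, assuming an initialized container; the session ID and query are illustrative, and the import path is a placeholder:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/example/aillm" // placeholder import path
)

func ask(llm *aillm.LLMContainer) {
	result, err := llm.AskLLM("What does the manual say about setup?",
		llm.WithSessionID("user-42"), // illustrative session ID for chat history
		llm.WithLanguage("en"),       // requested answer language
		llm.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
			fmt.Print(string(chunk)) // print each streamed chunk as it arrives
			return nil               // returning an error stops streaming early
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = result // carries the prompt, RAG docs, response, memory, and actions
}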

func (*LLMContainer) CosineSimilarity

func (llm *LLMContainer) CosineSimilarity(prefix, Query string, rowCount int, ScoreThreshold float32) (interface{}, error)

CosineSimilarity searches for similar documents in the vector store using cosine similarity.

This function initializes the embedding model, sets up a Redis-based vector store, and performs a similarity search based on the provided query.

Parameters:

  • prefix: A string prefix used to organize and identify related vector entries.
  • Query: The query string to search for similar documents.
  • rowCount: The number of results to retrieve from the vector store.
  • ScoreThreshold: The minimum similarity score threshold for the results.

Returns:

  • interface{}: The search results containing the most similar documents.
  • error: An error if the search fails or the embedding model is missing.
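
A minimal sketch of a direct similarity search, assuming an initialized container; the prefix, query, and threshold are illustrative:

package main

import (
	"fmt"
	"log"

	"github.com/example/aillm" // placeholder import path
)

func search(llm *aillm.LLMContainer) {
	// Retrieve up to 4 documents under the "faq" prefix scoring at least 0.35.
	results, err := llm.CosineSimilarity("faq", "opening hours", 4, 0.35)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(results)
}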

func (LLMContainer) EmbeddFile

func (llm LLMContainer) EmbeddFile(Index, Title, fileName string, tc TranscribeConfig, options ...LLMCallOption) (LLMEmbeddingObject, error)

EmbeddFile processes and embeds the content of a given file into the LLM system.

Parameters:

  • Index: The Index of the document being embedded; also used as the Redis key for the raw file.
  • Title: The Title of the document being embedded; also added to the raw data to provide better context.
  • fileName: The path to the file to be embedded.
  • tc: Configuration for transcription, such as language settings.

Returns:

  • LLMEmbeddingObject: The embedded object containing the processed content.
  • error: An error if any issues occur during processing.
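
A usage sketch under the same placeholder assumptions; the index, title, and file path are illustrative:

package main

import (
	"log"

	"github.com/example/aillm" // placeholder import path
)

func embedFile(llm *aillm.LLMContainer) {
	tc := aillm.TranscribeConfig{
		Language:     "en",  // target transcription language
		TikaLanguage: "eng", // PDF-only OCR language (Tesseract code)
	}
	obj, err := llm.EmbeddFile("manual", "User Manual", "./docs/manual.pdf", tc)
	if err != nil {
		log.Fatal(err)
	}
	_ = obj // LLMEmbeddingObject containing the processed content
}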

func (*LLMContainer) EmbeddText

func (llm *LLMContainer) EmbeddText(Index string, Contents LLMEmbeddingContent, options ...LLMCallOption) (LLMEmbeddingObject, error)

EmbeddText embeds provided text content into the LLM system and stores the data in Redis.

Parameters:

  • Index: Index associated with the content being embedded.
  • Contents: The language-specific content (LLMEmbeddingContent) to be embedded.

Returns:

  • LLMEmbeddingObject: The resulting embedding object after processing and storage.
  • error: An error if any issues occur during embedding or Redis operations.
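
A minimal sketch, assuming an initialized container; the index and content are illustrative:

package main

import (
	"log"

	"github.com/example/aillm" // placeholder import path
)

func embedText(llm *aillm.LLMContainer) {
	content := aillm.LLMEmbeddingContent{
		Text:     "Our office is open Monday through Friday.", // raw text to embed
		Title:    "Opening hours",
		Language: "en",
	}
	obj, err := llm.EmbeddText("faq", content)
	if err != nil {
		log.Fatal(err)
	}
	_ = obj
}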

func (LLMContainer) EmbeddURL

func (llm LLMContainer) EmbeddURL(Index, url string, tc TranscribeConfig, options ...LLMCallOption) (LLMEmbeddingObject, error)

EmbeddURL processes and embeds content from a given URL into the LLM system.

Parameters:

  • Index: The Index associated with the content being embedded.
  • url: The web URL from which content will be transcribed and embedded.
  • tc: Configuration for transcription, including language and extraction options.

Returns:

  • LLMEmbeddingObject: The embedded object containing the processed content.
  • error: An error if any issues occur during the transcription or embedding process.

func (*LLMContainer) FindKNN

func (llm *LLMContainer) FindKNN(prefix, searchQuery string, rowCount int, ScoreThreshold float32) (interface{}, error)

FindKNN performs a K-Nearest Neighbors (KNN) search on the stored vector embeddings.

This function retrieves the most relevant documents based on the provided query, using the KNN algorithm to rank them according to their proximity in the vector space.

Parameters:

  • prefix: A string prefix used to identify relevant vector entries.
  • searchQuery: The query string to find the nearest neighbors for.
  • rowCount: The number of closest neighbors to retrieve.
  • ScoreThreshold: The minimum similarity score for considering results.

Returns:

  • interface{}: The retrieved relevant documents.
  • error: An error if the search fails or the embedding model is missing.

func (*LLMContainer) Init

func (llm *LLMContainer) Init() error

Init initializes the LLMContainer by configuring memory management, embedding settings, transcriber configurations, and connecting to the Redis database.

This function sets default parameters for temperature, token threshold, RAG configurations, fallback responses, and initializes essential components like the memory manager and transcriber.

Returns:

  • error: An error if Redis configuration is missing or if the Redis connection fails.

func (*LLMContainer) InitEmbedding

func (llm *LLMContainer) InitEmbedding() error

InitEmbedding initializes the embedding model based on the type of embedding provider.

This function checks the type of the configured embedding provider (Ollama or OpenAI) and initializes it with the appropriate configuration settings.

Returns:

  • error: Returns an error if initialization fails or if the provider is unsupported.

func (*LLMContainer) ListEmbeddings

func (llm *LLMContainer) ListEmbeddings(KeyID string, offset, limit int) (map[string]interface{}, error)

ListEmbeddings retrieves multiple embedding objects from Redis with pagination support.

Parameters:

  • KeyID: The prefix used to filter stored keys in Redis.
  • offset: The starting position for retrieval.
  • limit: The number of items to retrieve.

Returns:

  • map[string]interface{}: A map containing retrieved objects and total count.
  • error: An error if the operation fails.
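
A pagination sketch, assuming an initialized container; the key prefix and page size are illustrative:

package main

import (
	"fmt"
	"log"

	"github.com/example/aillm" // placeholder import path
)

func listPage(llm *aillm.LLMContainer) {
	page, err := llm.ListEmbeddings("faq", 0, 10) // first 10 objects under "faq"
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(page) // map containing retrieved objects and the total count
}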

func (*LLMContainer) RemoveEmbedding

func (llm *LLMContainer) RemoveEmbedding(Index string, options ...LLMCallOption) error

RemoveEmbedding deletes an embedding object and its associated keys from Redis.

Parameters:

  • Index: The Index of the embedding object to be removed.

Returns:

  • error: An error if deletion fails.

func (*LLMContainer) RemoveEmbeddingSubKey

func (llm *LLMContainer) RemoveEmbeddingSubKey(Index, rawDocID string, options ...LLMCallOption) error

RemoveEmbeddingSubKey deletes a single raw document (identified by rawDocID) from an embedding object in Redis, leaving the rest of the object intact.

func (*LLMContainer) SearchAll

func (llm *LLMContainer) SearchAll(language string) LLMCallOption

SearchAll widens the query to the general search scope in a specific language.

Parameters:

  • language: The language in which the general search should be performed.

Returns:

  • LLMCallOption: An option that sets the general search scope.

func (*LLMContainer) WithActionCallFunc

func (llm *LLMContainer) WithActionCallFunc(ActionCallFunc func(action LLMAction)) LLMCallOption

WithActionCallFunc specifies a callback function to log custom actions during LLM query processing.

Parameters:

  • ActionCallFunc: A function defining custom actions to be logged during query processing.

Returns:

  • LLMCallOption: An option to set the custom action callback function.

func (*LLMContainer) WithCharacter

func (llm *LLMContainer) WithCharacter(character string) LLMCallOption

WithCharacter specifies the character/personality the AI assistant should adopt when answering.

Parameters:

  • character: A string describing the assistant's character or personality (e.g., formal, friendly).

Returns:

  • LLMCallOption: An option that sets the assistant's character.

func (*LLMContainer) WithCotextCleanup

func (llm *LLMContainer) WithCotextCleanup(cleanup bool) LLMCallOption

WithCotextCleanup enables cleanup of the retrieved context, especially HTML markup, to produce a cleaner context for the LLM.

Parameters:

  • cleanup: A boolean value that enables or disables context cleanup.

Returns:

  • LLMCallOption: An option that enables or disables context cleanup.

func (*LLMContainer) WithEmbeddingIndex

func (llm *LLMContainer) WithEmbeddingIndex(Index string) LLMCallOption

WithEmbeddingIndex specifies the embedding index that the query or embedding operation should target.

func (*LLMContainer) WithEmbeddingPrefix

func (llm *LLMContainer) WithEmbeddingPrefix(Prefix string) LLMCallOption

WithEmbeddingPrefix specifies a prefix for identifying related embeddings.

Parameters:

  • Prefix: A string prefix used to group or identify embeddings in the store.

Returns:

  • LLMCallOption: An option that sets the embedding prefix.

func (*LLMContainer) WithExactPrompt

func (llm *LLMContainer) WithExactPrompt(ExactPrompt string) LLMCallOption

WithExactPrompt queries the LLM with the exact prompt provided.

Parameters:

  • ExactPrompt: The prompt string to send verbatim.

Returns:

  • LLMCallOption: An option that sets the exact prompt.

func (*LLMContainer) WithExtraContext

func (llm *LLMContainer) WithExtraContext(ExtraContext string) LLMCallOption

WithExtraContext specifies extra context to provide to the LLM alongside the retrieved documents.

Parameters:

  • ExtraContext: Additional text supplied to the LLM as context.

Returns:

  • LLMCallOption: An option that sets the extra context.

func (*LLMContainer) WithLanguage

func (llm *LLMContainer) WithLanguage(Language string) LLMCallOption

WithLanguage specifies the language to use for the query response.

Parameters:

  • Language: The language in which the LLM should generate responses.

Returns:

  • LLMCallOption: An option that sets the query language.

func (*LLMContainer) WithLimitGeneralEmbedding

func (llm *LLMContainer) WithLimitGeneralEmbedding(denied bool) LLMCallOption

WithLimitGeneralEmbedding restricts the query from falling back to the general embedding scope.

Parameters:

  • denied: A boolean value; when true, general embeddings are excluded from the search.

Returns:

  • LLMCallOption: An option that limits general embedding usage.

func (*LLMContainer) WithPersistentMemory

func (llm *LLMContainer) WithPersistentMemory(usePersistentMemory bool) LLMCallOption

WithPersistentMemory enhances memory handling by using vector search to build more efficient prompts from conversation history.

Parameters:

  • usePersistentMemory: A boolean value that enables or disables persistent memory.

Returns:

  • LLMCallOption: An option that enables or disables persistent memory.

func (*LLMContainer) WithSessionID

func (llm *LLMContainer) WithSessionID(SessionID string) LLMCallOption

WithSessionID specifies the session ID for tracking user chat history and interactions.

Parameters:

  • SessionID: A unique identifier for the user's session.

Returns:

  • LLMCallOption: An option that sets the session ID.

func (*LLMContainer) WithStreamingFunc

func (llm *LLMContainer) WithStreamingFunc(streamingFunc func(ctx context.Context, chunk []byte) error) LLMCallOption

WithStreamingFunc specifies a callback function for handling streaming output during query processing.

Parameters:

  • streamingFunc: A function to process streaming chunks of output from the LLM. Returning an error stops the streaming process.

Returns:

  • LLMCallOption: An option to set the streaming function.
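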

type LLMEmbeddingContent

type LLMEmbeddingContent struct {
	Text        string `json:"Text" redis:"Text"`
	Title       string `json:"Title" redis:"Title"`
	Language    string `json:"Language" redis:"Language"`
	Id          string `json:"Id" redis:"Id"`
	Source      string `json:"Source" redis:"Source"`
	Keys        []string
	GeneralKeys []string
}

LLMEmbeddingContent represents a single piece of text content that is embedded and stored in the system.

This struct holds the necessary information for managing and identifying embedded text, including the raw text, a title, its source, and associated keys.

Fields:

  • Text: The raw text content that is embedded.
  • Title: A descriptive title for the embedded content.
  • Language: The language code of the content (e.g., "en", "pt").
  • Id: A unique identifier for the content.
  • Source: The origin of the content, such as a file name, URL, or other identifier.
  • Keys: A slice of strings representing the Redis keys associated with this content.
  • GeneralKeys: A slice of strings representing the Redis keys used for the general search scope.

type LLMEmbeddingObject

type LLMEmbeddingObject struct {
	EmbeddingPrefix string                         `json:"EmbeddingPrefix" redis:"EmbeddingPrefix"`
	Index           string                         `json:"Index" redis:"Index"`
	Contents        map[string]LLMEmbeddingContent `json:"Contents" redis:"Contents"`
}

LLMEmbeddingObject represents a collection of embedded text contents grouped under a specific object ID.

This struct serves as a container for multiple pieces of embedded text content, organized by language or context. It provides a way to store and manage embeddings for a specific use case or document.

Fields:

  • EmbeddingPrefix: A unique prefix or identifier for the embedding object (e.g., "ObjectId").
  • Index: An Index for the embedding object, providing future access to it.
  • Contents: A map of language-specific content, where the key is the language code (e.g., "en", "pt") and the value is an LLMEmbeddingContent struct containing the associated content details.

type LLMResult

type LLMResult struct {
	Prompt   []llms.MessageContent
	RagDocs  interface{}
	Response *llms.ContentResponse
	Memory   []MemoryData
	Actions  []LLMAction
}

LLMResult represents the result of an LLM query, including the generated response, retrieved documents, and logged actions.

This struct captures the outcome of a query processed by the LLMContainer, storing essential components such as the query prompt, RAG (Retrieval-Augmented Generation) documents, memory updates, and actions logged during the query lifecycle.

Fields:

  • Prompt: A slice of MessageContent representing the constructed query prompt sent to the LLM.
  • RagDocs: An interface containing the retrieved documents (e.g., schema.Document) used in the RAG process.
  • Response: A pointer to the ContentResponse generated by the LLM, containing the AI's output and metadata.
  • Memory: A slice of MemoryData entries representing session-based memory for context-aware interactions.
  • Actions: A slice of LLMAction structs, each representing a logged action or milestone during the query lifecycle.

type LLMTextEmbedding

type LLMTextEmbedding struct {
	ChunkSize         int
	ChunkOverlap      int
	Text              string
	EmbeddedDocuments []schema.Document
}

LLMTextEmbedding is a struct designed to handle text processing and splitting operations.

Fields:

  • ChunkSize: The maximum size of each text chunk after splitting.
  • ChunkOverlap: The number of overlapping characters between consecutive chunks to ensure context retention.
  • Text: The original text content to be processed and split into chunks.
  • EmbeddedDocuments: A slice of schema.Document representing the resulting chunks after processing.

func (*LLMTextEmbedding) SplitText

func (emb *LLMTextEmbedding) SplitText() ([]schema.Document, error)

SplitText splits the stored text into smaller chunks.

This function takes the text stored in the LLMTextEmbedding struct and splits it into smaller chunks based on the specified chunk size and overlap settings. The resulting chunks are stored as schema.Document objects.

Returns:

  • []schema.Document: A slice containing the split document chunks.
  • error: An error if the text splitting process encounters any issues.
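
A self-contained sketch of the splitter; the values are illustrative and the import path is a placeholder:

package main

import (
	"fmt"
	"log"

	"github.com/example/aillm" // placeholder import path
)

func main() {
	emb := aillm.LLMTextEmbedding{
		ChunkSize:    200, // illustrative maximum chunk size
		ChunkOverlap: 20,  // illustrative overlap between chunks
		Text:         "A long document body to be split into smaller chunks...",
	}
	docs, err := emb.SplitText()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(docs)) // number of resulting schema.Document chunks
}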

type Memory

type Memory struct {
	Questions       []MemoryData // Stores user queries during the session
	MemoryStartTime time.Time    // Timestamp when the session started
}

Memory stores user memory session data.

This struct keeps track of a user's questions and the session start time.

Fields:

  • Questions: A slice of MemoryData entries representing the user queries in the session.
  • MemoryStartTime: A timestamp indicating when the session started.

type MemoryData

type MemoryData struct {
	Question string
	Answer   string
	Keys     []string
}

MemoryData stores a single user question and its answer.

This struct keeps track of a user's question and the related session data in Redis.

Fields:

  • Question: A string representing the user query.
  • Answer: A string representing the LLM response to the query.
  • Keys: A slice of strings holding the keys of Redis vector data related to this question.

type MemoryManager

type MemoryManager struct {
	// contains filtered or unexported fields
}

MemoryManager manages session memories with a time-to-live (TTL) mechanism.

This struct is responsible for storing user sessions, providing thread-safe access to session data, and cleaning up expired sessions periodically.

Fields:

  • memoryMap: A map storing session ID as the key and Memory struct as the value.
  • mu: A mutex to ensure thread-safe operations on the memory map.
  • ttl: The time-to-live (TTL) duration after which sessions will be removed automatically.

func NewMemoryManager

func NewMemoryManager(ttlMinutes int) *MemoryManager

NewMemoryManager creates and initializes a new MemoryManager with a specified TTL (Time-To-Live).

This function initializes the session memory map and sets up a background process to periodically remove expired sessions.

Parameters:

  • ttlMinutes: The time-to-live duration in minutes before session data expires.

Returns:

  • *MemoryManager: A pointer to the newly created MemoryManager instance.
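
A minimal sketch of the session memory lifecycle; the TTL, session ID, and content are illustrative, and the import path is a placeholder:

package main

import (
	"fmt"

	"github.com/example/aillm" // placeholder import path
)

func main() {
	mm := aillm.NewMemoryManager(30) // sessions expire after 30 minutes
	mm.AddMemory("session-1", []aillm.MemoryData{
		{Question: "What is RAG?", Answer: "Retrieval-Augmented Generation."},
	})
	if mem, ok := mm.GetMemory("session-1"); ok {
		fmt.Println(len(mem.Questions)) // stored queries for this session
	}
	mm.DeleteMemory("session-1") // remove the session explicitly
}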

func (*MemoryManager) AddMemory

func (m *MemoryManager) AddMemory(sessionID string, questions []MemoryData)

AddMemory adds or updates a session's memory in the memory map.

This function stores user queries within a session and ensures thread-safe access using a mutex lock to avoid concurrent read/write issues.

Parameters:

  • sessionID: The unique identifier for the user's session.
  • questions: A slice of MemoryData entries containing user queries and answers.

func (*MemoryManager) DeleteMemory

func (m *MemoryManager) DeleteMemory(sessionID string)

DeleteMemory removes a user's session memory from the memory map.

This function ensures safe deletion using a mutex lock to prevent data races.

Parameters:

  • sessionID: The unique identifier for the session to be deleted.

func (*MemoryManager) GetMemory

func (m *MemoryManager) GetMemory(sessionID string) (Memory, bool)

GetMemory retrieves stored session memory for a given session ID.

The function safely reads from the session map and returns the stored memory if it exists.

Parameters:

  • sessionID: The unique identifier for the user's session.

Returns:

  • Memory: The stored session data containing questions and timestamp.
  • bool: A boolean indicating whether the session ID exists in the memory map.

type OllamaController

type OllamaController struct {
	Config        LLMConfig   // Configuration for the Ollama LLM service
	LLMController *ollama.LLM // Instance of the Ollama LLM client
}

OllamaController struct to manage the Ollama embedding and language model service.

This struct implements the EmbeddingClient interface and acts as a wrapper around the Ollama LLM (Large Language Model), handling initialization and interactions.

Fields:

  • Config: Configuration details such as API URL and model name.
  • LLMController: Instance of the Ollama LLM client for handling AI operations.
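
A construction sketch; the endpoint, model name, and import path are placeholders:

package main

import (
	"log"

	"github.com/example/aillm" // placeholder import path
)

func main() {
	oc := &aillm.OllamaController{
		Config: aillm.LLMConfig{
			Apiurl:  "http://localhost:11434", // placeholder Ollama endpoint
			AiModel: "llama3",                 // placeholder model name
		},
	}
	embedder, err := oc.NewEmbedder() // EmbeddingClient implementation
	if err != nil {
		log.Fatal(err)
	}
	model, err := oc.NewLLMClient() // LLMClient implementation
	if err != nil {
		log.Fatal(err)
	}
	_, _ = embedder, model
}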

func (*OllamaController) NewEmbedder

func (oc *OllamaController) NewEmbedder() (embeddings.Embedder, error)

NewEmbedder initializes and returns an Ollama embedding model instance.

This function implements the EmbeddingClient interface to create and return an embedding model using the current LLMController instance.

Returns:

  • embeddings.Embedder: The initialized embedding model instance.
  • error: An error if the initialization fails.

func (*OllamaController) NewLLMClient

func (oc *OllamaController) NewLLMClient() (llms.Model, error)

NewLLMClient initializes and returns a new instance of the Ollama LLM client.

This function sets up the Ollama model based on the provided API URL and model name in the configuration.

Returns:

  • llms.Model: The initialized LLM model instance.
  • error: An error if the initialization fails.

type OpenAIController

type OpenAIController struct {
	Config        LLMConfig
	LLMController *openai.LLM
}

OpenAIController struct to manage OpenAI embedding and language model services.

This struct implements the EmbeddingClient interface and acts as a wrapper around the OpenAI LLM (Large Language Model), handling initialization and interactions.

Fields:

  • Config: Configuration details such as API URL, model name, and API token.
  • LLMController: Instance of the OpenAI LLM client for handling AI operations.
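
The OpenAI variant differs mainly in requiring an API token; a hedged sketch (base URL, model name, and import path are placeholders):

package main

import (
	"os"

	"github.com/example/aillm" // placeholder import path
)

func main() {
	oc := &aillm.OpenAIController{
		Config: aillm.LLMConfig{
			Apiurl:   "https://api.openai.com/v1", // placeholder base URL
			AiModel:  "gpt-4o-mini",               // placeholder model name
			APIToken: os.Getenv("OPENAI_API_KEY"), // token read from the environment
		},
	}
	_ = oc
}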

func (*OpenAIController) NewEmbedder

func (oc *OpenAIController) NewEmbedder() (embeddings.Embedder, error)

NewEmbedder initializes and returns an OpenAI embedding model instance.

This function implements the EmbeddingClient interface to create and return an embedding model using the current LLMController instance.

Returns:

  • embeddings.Embedder: The initialized embedding model instance.
  • error: An error if the initialization fails.

func (*OpenAIController) NewLLMClient

func (oc *OpenAIController) NewLLMClient() (llms.Model, error)

NewLLMClient initializes and returns a new instance of the OpenAI LLM client.

This function sets up the OpenAI model based on the provided API token, API base URL, and the selected AI model from the configuration.

Returns:

  • llms.Model: The initialized LLM model instance.
  • error: An error if the initialization fails.

type PersistentMemory

type PersistentMemory struct {
	MemoryPrefix          string        // Prefix for Redis storage
	MemoryTTL             time.Duration // TTL after which memory questions are auto-deleted
	MemorySearchThreshold float32       // Memory vector search threshold
	HistoryItemCount      int           // More queries = more tokens; adjust it carefully.
	// contains filtered or unexported fields
}

PersistentMemory stores user memory session data in persistent storage (Redis) for future retrieval or vector search.

This struct holds the parameters that control how user questions are stored and searched.

Fields:

  • MemoryPrefix: The prefix used for keys in Redis storage.
  • MemoryTTL: The time-to-live after which stored memory questions are automatically deleted.
  • MemorySearchThreshold: The minimum similarity score for memory vector search results.
  • HistoryItemCount: The number of history items to include; more queries mean more tokens, so adjust it carefully.

func (*PersistentMemory) AddMemory

func (pm *PersistentMemory) AddMemory(sessionID string, query MemoryData) error

AddMemory embeds a session's question/answer pair and stores it in persistent memory.

Parameters:

  • sessionID: The unique identifier for the user's session.
  • query: The MemoryData entry to store.

Returns:

  • error: An error if the embedding process fails.

func (*PersistentMemory) DeleteMemory

func (pm *PersistentMemory) DeleteMemory(sessionID string) error

DeleteMemory removes a user's session memory from the memory map.

Parameters:

  • sessionID: The unique identifier for the session to be deleted.

func (*PersistentMemory) GetMemory

func (pm *PersistentMemory) GetMemory(sessionID string, query string) (MemoryData, string, error)

GetMemory retrieves stored session memory for a given session ID.

The function safely reads from the session map and returns the stored memory if it exists.

Parameters:

  • sessionID: The unique identifier for the user's session.
  • query: Will be used for Vector search in user query history to find previous related questions

Returns:

  • MemoryData: The last asked question.
  • string: The generated prompt for memory context.
  • error: An error if the memory retrieval process fails.

type RedisClient

type RedisClient struct {
	Host     string // Redis server address
	Password string // Redis authentication password (if applicable)
	// contains filtered or unexported fields
}

RedisClient manages the connection details for a Redis database instance used for storing embeddings.

Fields:

  • Host: The address of the Redis server (e.g., "localhost:6379").
  • Password: The password for connecting to the Redis server (if authentication is required).
  • redisClient: The Redis client instance used for executing operations.

type TranscribeConfig

type TranscribeConfig struct {
	TikaLanguage        string        // PDF only: OCR language code (Tesseract OCR languages; see https://github.com/tesseract-ocr/tessdata/)
	Language            string        // Target language for transcription
	OCROnly             bool          // Perform OCR only, ignoring non-image text
	ExtractInlineImages bool          // Enable extraction of text from inline images
	MaxTimeout          time.Duration // Maximum processing time before timeout
}

type Transcriber

type Transcriber struct {
	MaxPageLimit uint   // Maximum number of pages allowed for processing
	TikaURL      string // URL of the Apache Tika service for text extraction

	TempFolder string // Path to the temporary folder for storing transcribed files
	// contains filtered or unexported fields
}

Transcriber handles document transcription by extracting text from various file formats.

This struct provides configurations and options for processing documents through the Tika service, managing temporary files, and setting processing limits.

Fields:

  • MaxPageLimit: The maximum number of pages to process in a document (PDF documents only).
  • TikaURL: The URL of the Apache Tika server used for text extraction.
  • initialized: A boolean indicating if the transcriber has been initialized successfully.
  • TempFolder: The folder where temporary files will be stored during processing (Downloading / Transcribing).
  • folderSep: The file path separator used for compatibility across operating systems.
