gokamy

package module
v1.1.0
Published: Feb 21, 2025 License: Apache-2.0 Imports: 12 Imported by: 0

README

Gokamy AI


Gokamy SDK is a lightweight, flexible, and extensible toolkit for building intelligent conversational agents in Golang. It enables you to create agents, engineer prompts, generate tool schemas, manage memory, and orchestrate complex workflows—making it easy to integrate advanced AI capabilities into your projects. 🚀

Features

  • Agent Management 🤖: Easily build and configure agents with custom system prompts, tools, and memory.
  • Prompt Engineering 📝: Create structured prompts with nested sections for improved clarity.
  • Tool Schemas 🔧: Generate JSON schemas from Go structures to define tools and validate user inputs.
  • Memory Implementations 🧠: Use built-in SimpleMemory or implement your own memory storage that adheres to the Memory interface.
  • Orchestrator 🎛️: Manage multiple agents and execute them in a predefined sequence to achieve complex workflows.
  • Extendable 🔌: The SDK is designed to be unopinionated, allowing seamless integration into your projects.
  • CLI (Planned) 🚀: Future CLI support to initialize projects and scaffold agents with simple commands.

Installation

To install Gokamy SDK, use Go modules:

go get github.com/Dieg0Code/gokamy-ai

Ensure that you have Go installed and configured in your development environment.

Quick Start

Below is a simple example demonstrating how to create a haiku agent using the SDK:

package main

import (
	"context"
	"fmt"
	"log"

	gokamy "github.com/Dieg0Code/gokamy-ai"
	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Initialize an OpenAI-backed LLM client with your API key.
	client := gokamy.NewOpenAIClient("YOUR_OPENAI_API_KEY")

	// Create a simple memory instance.
	memory := gokamy.NewSimpleMemory()

	// Build a structured prompt using PromptBuilder to instruct the agent to respond in haiku.
	systemPrompt := gokamy.NewPromptBuilder().
		CreateSection("Introduction").
		AddText("Introduction", "You are an agent that always responds in haiku format.").
		CreateSection("Guidelines").
		AddListItem("Guidelines", "Keep responses in a three-line haiku format (5-7-5 syllables).").
		AddListItem("Guidelines", "Be creative and concise.").
		Build()

	fmt.Println("System Prompt:")
	fmt.Println(systemPrompt)

	// Build the agent using AgentBuilder.
	agent, err := gokamy.NewAgentBuilder().
		SetClient(client).
		SetName("HaikuAgent").
		SetConfigPrompt(systemPrompt).
		SetMemory(memory).
		SetModel(openai.GPT4).
		Build()
	if err != nil {
		log.Fatalf("Error building agent: %v", err)
	}

	// Process a sample input with the agent.
	response, err := agent.Process(context.Background(), "User", "What is the weather like today?")
	if err != nil {
		log.Fatalf("Error processing request: %v", err)
	}

	fmt.Println("\nAgent Response:")
	fmt.Println(response)
}
Orchestration Example

Gokamy SDK also provides an orchestrator to manage a sequence of agents. In the following example, two agents are created and executed in sequence:

package main

import (
	"context"
	"fmt"
	"log"

	gokamy "github.com/Dieg0Code/gokamy-ai"
	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Initialize an OpenAI-backed LLM client using your API key.
	client := gokamy.NewOpenAIClient("YOUR_API_KEY")

	// Create simple memory instances for each agent.
	memoryAgentOne := gokamy.NewSimpleMemory()
	memoryAgentTwo := gokamy.NewSimpleMemory()

	// Build the first agent (HelloAgent).
	agentOne, err := gokamy.NewAgentBuilder().
		SetClient(client).
		SetName("HelloAgent").
		SetConfigPrompt("You are an agent that warmly greets users and encourages further interaction.").
		SetMemory(memoryAgentOne).
		SetModel(openai.GPT4).
		Build()
	if err != nil {
		log.Fatalf("Error building HelloAgent: %v", err)
	}

	// Build the second agent (FinalAgent).
	agentTwo, err := gokamy.NewAgentBuilder().
		SetClient(client).
		SetName("FinalAgent").
		SetConfigPrompt("You are an agent that provides a final summary based on the conversation.").
		SetMemory(memoryAgentTwo).
		SetModel(openai.GPT4).
		Build()
	if err != nil {
		log.Fatalf("Error building FinalAgent: %v", err)
	}

	// Create an orchestrator, register both agents, and define the execution sequence.
	orchestrator := gokamy.NewOrchestratorBuilder().
		AddAgent(agentOne).
		AddAgent(agentTwo).
		// Define the processing sequence: first HelloAgent, then FinalAgent.
		SetSequence([]string{"HelloAgent", "FinalAgent"}).
		Build()

	// Provide an input and process the sequence.
	input := "Please greet the user and provide a summary."
	response, err := orchestrator.ProcessSequence(context.Background(), "User", input)
	if err != nil {
		log.Fatalf("Error processing sequence: %v", err)
	}

	fmt.Println("Final Orchestrator Response:")
	fmt.Println(response)
}
Tools

Gokamy SDK includes functionality to automatically generate JSON schemas from Go structures. These generated schemas can be used to define and validate tools for your agents.

For example, consider the following tool definition that generates a JSON schema for a Product structure:

package main

import (
	"encoding/json"
	"fmt"
	"log"

	gokamy "github.com/Dieg0Code/gokamy-ai"
)

// Product represents a product with various attributes.
type Product struct {
	ID        int     `json:"id" description:"Unique product identifier" required:"true"`
	Name      string  `json:"name" description:"Product name" required:"true"`
	Category  string  `json:"category" description:"Category of the product" enum:"Electronic,Furniture,Clothing"`
	Price     float64 `json:"price" description:"Price of the product"`
	Available bool    `json:"available" description:"Product availability" required:"true"`
}

func main() {
	// Generate the JSON schema for the Product struct.
	schema, err := gokamy.GenerateSchema(Product{})
	if err != nil {
		log.Fatal(err)
	}
	output, err := json.MarshalIndent(schema, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(output))
}

The schema generation leverages reflection along with custom struct tags (e.g., description, required, enum) to produce a JSON Schema that describes the tool's expected input. This schema can then be used to interface with language models or validate user-provided data.

Custom Memory Implementation

In addition to the built-in SimpleMemory (an in-memory slice), Gokamy SDK allows you to create your own memory implementations. Simply ensure your implementation satisfies the Memory interface.

Example custom memory implementation:

package main

import (
    "context"
    "fmt"
    "log"

    gokamy "github.com/Dieg0Code/gokamy-ai"
    openai "github.com/sashabaranov/go-openai"

    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

// Message represents the schema for storing chat messages.
type Message struct {
    gorm.Model
    Role    string
    Content string
}

// ORMMemory is a custom Memory implementation that persists messages with GORM.
type ORMMemory struct {
    db *gorm.DB
}

// Add stores a new message in the database.
func (m *ORMMemory) Add(message gokamy.Message) {
    msg := Message{
        Role:    message.Role,
        Content: message.Content,
    }
    if err := m.db.Create(&msg).Error; err != nil {
        log.Printf("failed to add message: %v", err)
    }
}

// Get retrieves all stored messages ordered by creation time.
func (m *ORMMemory) Get() []gokamy.Message {
    var messages []Message
    if err := m.db.Order("created_at").Find(&messages).Error; err != nil {
        log.Printf("failed to get messages: %v", err)
        return nil
    }

    var chatMessages []gokamy.Message
    for _, msg := range messages {
        chatMessages = append(chatMessages, gokamy.Message{
            Role:    msg.Role,
            Content: msg.Content,
        })
    }
    return chatMessages
}

// Clear removes all messages from the persistent memory.
// GORM requires a condition for batch deletes, hence the "1 = 1" clause.
func (m *ORMMemory) Clear() {
    if err := m.db.Where("1 = 1").Delete(&Message{}).Error; err != nil {
        log.Printf("failed to clear messages: %v", err)
    }
}

// NewORMMemory returns a Memory interface backed by ORMMemory.
// It auto-migrates the Message table using GORM.
func NewORMMemory(db *gorm.DB) gokamy.Memory {
    if err := db.AutoMigrate(&Message{}); err != nil {
        log.Fatalf("AutoMigrate failed: %v", err)
    }
    return &ORMMemory{db: db}
}

func main() {
    // Set up the PostgreSQL DSN. Replace with your PostgreSQL credentials.
    dsn := "host=localhost user=postgres password=YOUR_PASSWORD dbname=your_db port=5432 sslmode=disable TimeZone=UTC"
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        log.Fatalf("failed to connect to database: %v", err)
    }

    // Create ORM-based memory instances for each agent.
    memoryAgentOne := NewORMMemory(db)
    memoryAgentTwo := NewORMMemory(db)

    // Create an ORM-based memory instance for the orchestrator's global history.
    globalHistory := NewORMMemory(db)

    // Initialize an OpenAI-backed LLM client using your API key.
    client := gokamy.NewOpenAIClient("YOUR_API_KEY")

    // Build the first agent (HelloAgent).
    agentOne, err := gokamy.NewAgentBuilder().
        SetClient(client).
        SetName("HelloAgent").
        SetConfigPrompt("You are an agent that warmly greets users and encourages further interaction.").
        SetMemory(memoryAgentOne).
        SetModel(openai.GPT4).
        Build()
    if err != nil {
        log.Fatalf("Error building HelloAgent: %v", err)
    }

    // Build the second agent (FinalAgent).
    agentTwo, err := gokamy.NewAgentBuilder().
        SetClient(client).
        SetName("FinalAgent").
        SetConfigPrompt("You are an agent that provides a final summary based on the conversation.").
        SetMemory(memoryAgentTwo).
        SetModel(openai.GPT4).
        Build()
    if err != nil {
        log.Fatalf("Error building FinalAgent: %v", err)
    }

    // Create an orchestrator, register both agents, and define the execution sequence.
    orchestrator := gokamy.NewOrchestratorBuilder().
        SetGlobalHistory(globalHistory).
        AddAgent(agentOne).
        AddAgent(agentTwo).
        // Define the processing sequence: first HelloAgent, then FinalAgent.
        SetSequence([]string{"HelloAgent", "FinalAgent"}).
        Build()

    // Provide an input and process the sequence.
    input := "Please greet the user and provide a summary."
    response, err := orchestrator.ProcessSequence(context.Background(), "User", input)
    if err != nil {
        log.Fatalf("Error processing sequence: %v", err)
    }

    fmt.Println("Final Orchestrator Response:")
    fmt.Println(response)
}
Advanced Configuration: Temperature and JSON Response Format

You can configure the behavior of your agent by setting parameters such as temperature and JSON response format. The following example demonstrates how to set these options using the AgentBuilder:

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	gokamy "github.com/Dieg0Code/gokamy-ai"
	openai "github.com/sashabaranov/go-openai"
)

// MyResponse defines the expected JSON structure of the response.
type MyResponse struct {
	Message string   `json:"message" description:"The response message from the agent" required:"true"`
	Code    int      `json:"code" description:"The status code of the response" required:"true"`
	Status  string   `json:"status" description:"The status of the operation" enum:"success,failure" required:"true"`
	Details []string `json:"details" description:"Optional additional details about the response" required:"false"`
}

func main() {
	// Initialize an OpenAI-backed LLM client with your API key.
	client := gokamy.NewOpenAIClient("YOUR_OPENAI_API_KEY")

	// Create a simple memory instance.
	memory := gokamy.NewSimpleMemory()

	// Set a basic system prompt.
	systemPrompt := "You are an advanced agent configured with custom settings. Please provide a JSON response following the expected format."

	// Build the agent with custom temperature and JSON response format.
	agent, err := gokamy.NewAgentBuilder().
		SetClient(client).
		SetName("AdvancedAgent").
		SetConfigPrompt(systemPrompt).
		SetMemory(memory).
		SetModel(openai.GPT4).
		SetTemperature(0.7).                                // Set the temperature to influence randomness.
		SetJSONResponseFormat("my_response", MyResponse{}). // Set the expected JSON response format using tags.
		Build()
	if err != nil {
		log.Fatalf("Error building agent: %v", err)
	}

	// Process a sample input with the agent.
	response, err := agent.Process(context.Background(), "User", "Provide a response in JSON format.")
	if err != nil {
		log.Fatalf("Error processing request: %v", err)
	}

	// Parse the JSON response into MyResponse struct.
	var parsedResponse MyResponse
	if err := json.Unmarshal([]byte(response), &parsedResponse); err != nil {
		log.Fatalf("Error parsing JSON response: %v", err)
	}

	// Print the parsed response.
	fmt.Println("Agent Response:")
	fmt.Printf("Message: %s\n", parsedResponse.Message)
	fmt.Printf("Code: %d\n", parsedResponse.Code)
}
Future CLI Support

The project is also planning a CLI tool to streamline project setup. The planned commands include:

  • gokamy init: Initializes the project structure by creating an agents directory.
  • gokamy new AgentName: Creates a new agent scaffold with placeholder files (e.g., AgentName.go, AgentNameTool.go, prompt.go).

Stay tuned for further updates!

Dependencies and Their Licenses

This project uses the following third-party libraries:

Please refer to their respective repositories for the full license texts.

Contributing

Contributions are welcome! Feel free to open issues or submit pull requests on GitHub.

License

This project is licensed under the Apache License 2.0.
See LICENSE for more details.

Documentation

Overview

Package gokamy provides an SDK for interfacing with OpenAI's API, offering agents that process inputs, manage tool execution, and maintain memory.

Index

Constants

const (
	RoleSystem    = "system"
	RoleDeveloper = "developer"
	RoleUser      = "user"
	RoleAssistant = "assistant"
	RoleTool      = "tool"
)

Role constants define standard message roles across different providers.

const (
	FinishReasonStop      = "stop"
	FinishReasonLength    = "length"
	FinishReasonToolCalls = "tool_calls"
)

FinishReason constants define standard reasons for completion.

Variables

This section is empty.

Functions

func GenerateRawSchema added in v1.1.0

func GenerateRawSchema(v any) (json.RawMessage, error)

GenerateRawSchema wraps GenerateSchema and returns the JSON marshalled schema.

Types

type Agent

type Agent interface {
	Process(ctx context.Context, userName string, input string, additionalMessages ...[]Message) (string, error)
	AddTool(tool Tool)
	SetConfigPrompt(prompt string)
	GetName() string
}

Agent defines the interface for processing inputs and managing tools. Implementations of Agent should support processing messages, adding tools, configuring prompts, and providing a name identifier.

type AgentBuilder

type AgentBuilder struct {
	// contains filtered or unexported fields
}

AgentBuilder provides a fluent, modular way to configure and construct an Agent.

func NewAgentBuilder

func NewAgentBuilder() *AgentBuilder

NewAgentBuilder initializes and returns a new instance of AgentBuilder.

func (*AgentBuilder) AddTool

func (b *AgentBuilder) AddTool(tool Tool) *AgentBuilder

AddTool adds a tool to the agent's configuration, making it available during processing.

func (*AgentBuilder) Build

func (b *AgentBuilder) Build() (*BaseAgent, error)

Build constructs and returns an Agent based on the current configuration. It returns an error if any issues occurred during the builder setup.

func (*AgentBuilder) SetClient

func (b *AgentBuilder) SetClient(client LLMClient) *AgentBuilder

SetClient sets the LLMClient to be used by the agent.

func (*AgentBuilder) SetConfigPrompt added in v0.3.0

func (b *AgentBuilder) SetConfigPrompt(prompt string) *AgentBuilder

SetConfigPrompt sets the system prompt that configures the agent's behavior.

func (*AgentBuilder) SetJSONResponseFormat

func (b *AgentBuilder) SetJSONResponseFormat(schemaName string, structSchema any) *AgentBuilder

SetJSONResponseFormat configures the agent to use a JSON schema for response formatting, generating the schema from a provided sample type.

func (*AgentBuilder) SetMemory

func (b *AgentBuilder) SetMemory(memory Memory) *AgentBuilder

SetMemory sets the memory implementation for the agent.

func (*AgentBuilder) SetModel

func (b *AgentBuilder) SetModel(model string) *AgentBuilder

SetModel configures the model to be used by the agent.

func (*AgentBuilder) SetName

func (b *AgentBuilder) SetName(name string) *AgentBuilder

SetName sets the name identifier for the agent.

func (*AgentBuilder) SetTemperature

func (b *AgentBuilder) SetTemperature(temperature float32) *AgentBuilder

SetTemperature sets the temperature parameter for the agent's responses.

type BaseAgent

type BaseAgent struct {
	// contains filtered or unexported fields
}

BaseAgent holds the common implementation of the Agent interface, including the OpenAI client, system prompt, tools, memory, model configuration, and concurrency control.

func (*BaseAgent) AddTool

func (b *BaseAgent) AddTool(tool Tool)

AddTool adds a tool to the agent.

func (*BaseAgent) GetName

func (b *BaseAgent) GetName() string

GetName returns the name identifier of the agent.

func (*BaseAgent) Process

func (b *BaseAgent) Process(ctx context.Context, userName, input string, additionalMessages ...[]Message) (string, error)

Process takes the user input along with optional additional messages, adds the initial user message to memory, and initiates processing with available tools.

func (*BaseAgent) SetConfigPrompt added in v0.3.0

func (b *BaseAgent) SetConfigPrompt(prompt string)

SetConfigPrompt sets the system prompt for the agent, which can be used to configure behavior.

type ChatCompletionRequest added in v1.0.0

type ChatCompletionRequest struct {
	Model          string           // The model identifier to use (e.g. "gpt-4").
	Messages       []Message        // Conversational messages.
	Tools          []ToolDefinition // Optional tools available to the agent.
	Temperature    float32          // Sampling temperature.
	ResponseFormat *ResponseFormat  // Optional format for the response.
}

ChatCompletionRequest represents a unified chat completion request.

type ChatCompletionResponse added in v1.0.0

type ChatCompletionResponse struct {
	Choices []Choice // One or more response choices.
	Usage   Usage    // Token usage statistics.
}

ChatCompletionResponse represents a unified response structure from the LLM.

type Choice added in v1.0.0

type Choice struct {
	Message      Message // The message produced by the LLM.
	FinishReason string  // Reason why generation stopped (e.g. "stop", "length").
}

Choice represents a single completion option.

type DataType

type DataType string

DataType represents a JSON data type in the generated schema.

const (
	Object  DataType = "object"
	Number  DataType = "number"
	Integer DataType = "integer"
	String  DataType = "string"
	Array   DataType = "array"
	Null    DataType = "null"
	Boolean DataType = "boolean"
)

Supported JSON data types.

type DeepseekR1Client added in v1.0.0

type DeepseekR1Client struct {
	// contains filtered or unexported fields
}

DeepseekR1Client implements LLMClient using the DeepseekR1 SDK.

func (*DeepseekR1Client) CreateChatCompletion added in v1.0.0

func (c *DeepseekR1Client) CreateChatCompletion(ctx context.Context, req ChatCompletionRequest) (ChatCompletionResponse, error)

CreateChatCompletion sends the chat request to DeepseekR1. The Tools and ResponseFormat fields are ignored, since DeepseekR1 does not support them.

type Definition

type Definition struct {
	Type                 DataType              `json:"type,omitempty"`
	Description          string                `json:"description,omitempty"`
	Enum                 []string              `json:"enum,omitempty"`
	Properties           map[string]Definition `json:"properties,omitempty"`
	Required             []string              `json:"required,omitempty"`
	Items                *Definition           `json:"items,omitempty"`
	AdditionalProperties any                   `json:"additionalProperties,omitempty"`
}

Definition is a struct for describing a JSON Schema. It includes type, description, enumeration values, properties, required fields, and additional items.

func (*Definition) MarshalJSON

func (d *Definition) MarshalJSON() ([]byte, error)

MarshalJSON provides custom JSON marshalling for the Definition type. It ensures that the Properties map is initialized before marshalling.

type Embedder

type Embedder struct {
	// contains filtered or unexported fields
}

Embedder is responsible for generating embeddings using the OpenAI API.

func (*Embedder) GenerateEmbedding

func (e *Embedder) GenerateEmbedding(ctx context.Context, data string, model ...openai.EmbeddingModel) ([]float32, error)

GenerateEmbedding generates an embedding for the provided data string. It accepts an optional embedding model; if provided, that model overrides the default. Returns a slice of float32 representing the embedding vector or an error if any.
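The optional-model parameter follows Go's common variadic-override idiom. A standalone sketch of the pattern (the names and default value here are hypothetical, not taken from the SDK):

```go
package main

import "fmt"

// defaultModel is an assumed default, purely for illustration.
const defaultModel = "text-embedding-3-small"

// pickModel returns the first override if one was passed, else the default.
func pickModel(model ...string) string {
	if len(model) > 0 {
		return model[0]
	}
	return defaultModel
}

func main() {
	fmt.Println(pickModel())                         // uses the default
	fmt.Println(pickModel("text-embedding-3-large")) // explicit override wins
}
```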

type EmbedderBuilder

type EmbedderBuilder struct {
	// contains filtered or unexported fields
}

EmbedderBuilder provides a fluent API to configure and build an Embedder instance.

func NewEmbedderBuilder

func NewEmbedderBuilder() *EmbedderBuilder

NewEmbedderBuilder initializes a new EmbedderBuilder with default settings.

func (*EmbedderBuilder) Build

func (b *EmbedderBuilder) Build() (*Embedder, error)

Build constructs the Embedder instance based on the current configuration. Returns an error if the required OpenAI client is not configured.

func (*EmbedderBuilder) SetClient

func (b *EmbedderBuilder) SetClient(client *openai.Client) *EmbedderBuilder

SetClient configures the OpenAI client for the Embedder.

func (*EmbedderBuilder) SetModel

SetModel configures the embedding model to be used by the Embedder.

type JSONSchema added in v1.0.0

type JSONSchema struct {
	Name   string
	Schema json.RawMessage // The raw JSON schema.
	Strict bool            // Indicates whether strict validation is enforced.
}

JSONSchema defines the structure for responses in JSON.

type LLMClient added in v1.0.0

type LLMClient interface {
	CreateChatCompletion(ctx context.Context, req ChatCompletionRequest) (ChatCompletionResponse, error)
}

LLMClient defines the interface for interacting with LLM providers.

func NewDeepseekR1Client added in v1.0.0

func NewDeepseekR1Client(apiKey, baseURL string) LLMClient

NewDeepseekR1Client creates a new client for DeepseekR1. It takes the API key and the baseURL (for example, "https://models.inference.ai.azure.com/").

func NewOpenAIAzureClient added in v1.0.0

func NewOpenAIAzureClient(apiKey, endpoint string) LLMClient

NewOpenAIAzureClient creates an LLMClient for Azure using the provided API key and endpoint. It configures the client with Azure-specific settings.

func NewOpenAIClient added in v1.0.0

func NewOpenAIClient(apiKey string) LLMClient

NewOpenAIClient creates a new LLMClient using the provided API key with the standard OpenAI endpoint.

type Memory

type Memory interface {
	// Add appends a complete ChatCompletionMessage to the memory.
	Add(message Message)
	// Get returns a copy of all stored chat messages.
	Get() []Message
	// Clear removes all stored chat messages from memory.
	Clear()
}

Memory defines the interface for managing a history of chat messages. It provides methods for adding messages, retrieving the complete history, and clearing the history.

func NewSimpleMemory

func NewSimpleMemory() Memory

NewSimpleMemory creates and returns a new instance of SimpleMemory. It initializes the internal message slice and ensures the memory is ready for use.

type Message added in v1.0.0

type Message struct {
	Role      string     // One of RoleSystem, RoleUser, RoleAssistant, or RoleTool.
	Content   string     // The textual content of the message.
	Name      string     // Optional identifier for the sender.
	ToolCalls []ToolCall // Optional tool calls made by the assistant.
	ToolID    string     // For tool responses, references the original tool call.
}

Message represents a chat message with standardized fields.

type OpenAIClient added in v1.0.0

type OpenAIClient struct {
	// contains filtered or unexported fields
}

OpenAIClient implements the LLMClient interface using the OpenAI SDK. It wraps the official OpenAI client and provides a consistent interface for making chat completion requests.

func (*OpenAIClient) CreateChatCompletion added in v1.0.0

func (o *OpenAIClient) CreateChatCompletion(ctx context.Context, req ChatCompletionRequest) (ChatCompletionResponse, error)

CreateChatCompletion sends a chat completion request to the OpenAI API using the provided request parameters. It converts internal messages and tool definitions to OpenAI formats, sends the request, and maps the response back into the SDK's unified structure.

type Orchestrator

type Orchestrator struct {
	// contains filtered or unexported fields
}

Orchestrator manages multiple agents, maintains a global conversation history, and optionally defines an execution sequence (pipeline) for agents.

func NewOrchestrator

func NewOrchestrator() *Orchestrator

NewOrchestrator creates and returns a new Orchestrator with default settings. It initializes the agents map and the global history with a simple in-memory implementation.

func (*Orchestrator) GetAgent

func (o *Orchestrator) GetAgent(name string) (Agent, bool)

GetAgent retrieves a registered agent by its name in a thread-safe manner.

func (*Orchestrator) Process

func (o *Orchestrator) Process(ctx context.Context, agentName, userName, input string) (string, error)

Process executes a specific agent by combining the global history with the agent's internal memory. It retrieves the target agent, merges global messages with the agent's own messages (if applicable), processes the input, and then updates the global history with both the user input and the agent's response.

func (*Orchestrator) ProcessSequence

func (o *Orchestrator) ProcessSequence(ctx context.Context, userName, input string) (string, error)

ProcessSequence executes a pipeline of agents as defined in the orchestrator's sequence. The output of each agent is used as the input for the next agent in the sequence. It returns the final output from the last agent or an error if processing fails.

type OrchestratorBuilder

type OrchestratorBuilder struct {
	// contains filtered or unexported fields
}

OrchestratorBuilder provides a fluent interface for constructing an Orchestrator. It allows developers to configure agents, set a custom global history, and define an execution sequence.

func NewOrchestratorBuilder

func NewOrchestratorBuilder() *OrchestratorBuilder

NewOrchestratorBuilder initializes a new OrchestratorBuilder with default values. It creates an empty agents map, an empty sequence, and a default global history.

func (*OrchestratorBuilder) AddAgent

func (b *OrchestratorBuilder) AddAgent(agent Agent) *OrchestratorBuilder

AddAgent registers an agent with the orchestrator using the agent's name as the key.

func (*OrchestratorBuilder) Build

func (b *OrchestratorBuilder) Build() *Orchestrator

Build constructs and returns an Orchestrator instance based on the current configuration.

func (*OrchestratorBuilder) SetGlobalHistory

func (b *OrchestratorBuilder) SetGlobalHistory(history Memory) *OrchestratorBuilder

SetGlobalHistory sets a custom global conversation history for the orchestrator.

func (*OrchestratorBuilder) SetSequence

func (b *OrchestratorBuilder) SetSequence(seq []string) *OrchestratorBuilder

SetSequence defines the execution order (pipeline) of agents. For example: []string{"agent1", "agent2", "agent3"}.

type PromptBuilder

type PromptBuilder struct {
	// contains filtered or unexported fields
}

PromptBuilder facilitates the construction of a prompt by organizing content into sections and subsections.

func NewPromptBuilder

func NewPromptBuilder() *PromptBuilder

NewPromptBuilder creates and initializes a new PromptBuilder instance.

func (*PromptBuilder) AddListItem

func (pb *PromptBuilder) AddListItem(sectionName, item string) *PromptBuilder

AddListItem adds a numbered list item to the specified section or subsection. The item is trimmed for any unnecessary whitespace.

func (*PromptBuilder) AddListItemF

func (pb *PromptBuilder) AddListItemF(sectionName string, value interface{}) *PromptBuilder

AddListItemF is a helper method that converts any value to its string representation (using JSON marshaling if necessary) and adds it as a numbered list item to the specified section.
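The value-to-string conversion described above can be sketched as: pass strings through unchanged and fall back to JSON marshaling for structured values. This `stringify` helper is illustrative only; the SDK's exact rules may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stringify renders strings as-is and falls back to JSON for other values.
func stringify(v interface{}) string {
	switch t := v.(type) {
	case string:
		return t
	case fmt.Stringer:
		return t.String()
	default:
		b, err := json.Marshal(v)
		if err != nil {
			return fmt.Sprintf("%v", v)
		}
		return string(b)
	}
}

func main() {
	fmt.Println(stringify("plain"))                // plain
	fmt.Println(stringify(map[string]int{"a": 1})) // {"a":1}
}
```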

func (*PromptBuilder) AddSubSection

func (pb *PromptBuilder) AddSubSection(childName, parentName string) *PromptBuilder

AddSubSection creates a new subsection (child) under the specified parent section. If the parent section does not exist, it is created as a top-level section.

func (*PromptBuilder) AddText

func (pb *PromptBuilder) AddText(sectionName, text string) *PromptBuilder

AddText adds a line of text to the specified section or subsection. It trims any extra whitespace before appending.

func (*PromptBuilder) AddTextF

func (pb *PromptBuilder) AddTextF(sectionName string, value interface{}) *PromptBuilder

AddTextF is a helper method that converts any value to its string representation (using JSON marshaling if necessary) and adds it as a text line to the specified section.

func (*PromptBuilder) Build

func (pb *PromptBuilder) Build() string

Build generates the final prompt by concatenating all top-level sections and their nested subsections. It returns the fully formatted prompt as a single string.

func (*PromptBuilder) CreateSection

func (pb *PromptBuilder) CreateSection(name string) *PromptBuilder

CreateSection adds a new top-level section with the given name to the prompt. If a section with the same name already exists, it is not created again.

type ResponseFormat added in v1.0.0

type ResponseFormat struct {
	Type       string      // For example, "json_schema".
	JSONSchema *JSONSchema // The JSON schema that defines the expected response format.
}

ResponseFormat specifies how the LLM should format its response.

type Section

type Section struct {
	Name        string     // Name of the section.
	Lines       []string   // Lines of text contained in the section.
	SubSections []*Section // Nested subsections within this section.
	// contains filtered or unexported fields
}

Section represents a block of content that may include text lines and nested subsections. It is used by the PromptBuilder to structure and format the final prompt.

type SimpleMemory

type SimpleMemory struct {
	// contains filtered or unexported fields
}

SimpleMemory implements a basic in-memory storage for chat messages. It uses a slice to store messages and a RWMutex for safe concurrent access.

func (*SimpleMemory) Add

func (s *SimpleMemory) Add(message Message)

Add appends a complete chat message to the SimpleMemory.

func (*SimpleMemory) Clear

func (s *SimpleMemory) Clear()

Clear removes all stored messages from the memory.

func (*SimpleMemory) Get

func (s *SimpleMemory) Get() []Message

Get returns a copy of all stored chat messages to avoid data races. A copy of the messages slice is returned to ensure that external modifications do not affect the internal state.
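The copy-on-read guarantee can be sketched as follows; this is a simplified stand-in for SimpleMemory, not the SDK's source:

```go
package main

import (
	"fmt"
	"sync"
)

type msg struct{ Role, Content string }

type simpleMemory struct {
	mu       sync.RWMutex
	messages []msg
}

func (s *simpleMemory) Add(m msg) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.messages = append(s.messages, m)
}

// Get hands back a fresh slice so callers cannot mutate internal state.
func (s *simpleMemory) Get() []msg {
	s.mu.RLock()
	defer s.mu.RUnlock()
	out := make([]msg, len(s.messages))
	copy(out, s.messages)
	return out
}

func main() {
	mem := &simpleMemory{}
	mem.Add(msg{Role: "user", Content: "hi"})
	got := mem.Get()
	got[0].Content = "mutated" // only affects the returned copy
	fmt.Println(mem.Get()[0].Content) // hi
}
```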

type Tool

type Tool interface {
	GetDefinition() ToolDefinition
	Execute(args json.RawMessage) (interface{}, error)
}

Tool defines the interface for executable tools.

type ToolCall added in v1.0.0

type ToolCall struct {
	ID   string          // Unique identifier for the tool call.
	Name string          // The name of the tool to be invoked.
	Args json.RawMessage // Arguments for the tool, encoded in JSON.
}

ToolCall represents a tool invocation request.

type ToolDefinition added in v1.0.0

type ToolDefinition struct {
	Name        string          // Name of the tool.
	Description string          // A short description of what the tool does.
	Parameters  json.RawMessage // JSON Schema defining the parameters for the tool.
}

ToolDefinition describes a tool's capabilities.

type Usage added in v1.0.0

type Usage struct {
	PromptTokens     int // Number of tokens in the prompt.
	CompletionTokens int // Number of tokens generated by the model.
	TotalTokens      int // Total tokens consumed.
}

Usage provides token usage statistics.

Directories

Path Synopsis
examples
