Documentation ¶
Overview ¶
Package optimizer provides prompt optimization capabilities for Large Language Models (LLMs).
Index ¶
- func OptimizePrompt[I schema.Schema, O schema.Schema](ctx context.Context, prompt *I, config OptimizationConfig, ...) (*I, *O, error)
- type ContextProvider
- type HistoryGetter
- type IterationCallback
- type LetterRating
- type Metric
- type NumericalRating
- type OptimizationConfig
- type OptimizationEntry
- type OptimizationRating
- type OptimizerConfig
- type OptimizerOption
- type Option
- func WithCustomMetrics(metrics ...Metric) Option
- func WithIterationCallback(callback IterationCallback) Option
- func WithIterations(iterations int) Option
- func WithMaxRetries(maxRetries int) Option
- func WithMemorySize(size int) Option
- func WithOptimizationGoal(goal string) Option
- func WithRatingSystem(system string) Option
- func WithRetryDelay(delay time.Duration) Option
- func WithThreshold(threshold float64) Option
- type PromptAssessment
- type PromptImprovement
- type PromptOptimizer
- type Strength
- type Suggestion
- type Weakness
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func OptimizePrompt ¶
func OptimizePrompt[I schema.Schema, O schema.Schema](ctx context.Context, prompt *I, config OptimizationConfig, agentOpts ...agents.Option) (*I, *O, error)
OptimizePrompt performs automated optimization of an LLM prompt and generates a response. The optimization process includes:
- Prompt quality assessment
- Iterative refinement
- Performance measurement
- Response validation
The optimization process follows these steps:
1. Initialize optimization with the given configuration
2. Assess initial prompt quality
3. Apply iterative improvements based on assessment
4. Validate against optimization goals
5. Generate response using the optimized prompt
Parameters:
- ctx: Context for cancellation and timeout control
- prompt: The initial prompt to optimize
- config: Configuration controlling the optimization process
- agentOpts: Optional agent configuration options
Returns:
- optimizedPrompt: The refined and improved prompt
- response: The LLM's response using the optimized prompt
- err: Any error encountered during optimization
Example usage:
    optimizedPrompt, response, err := OptimizePrompt(ctx, &initialPrompt, OptimizationConfig{
        Description: "Task description for optimization",
        Metrics:     []Metric{{Name: "Clarity", Description: "..."}},
        Threshold:   0.8, // minimum acceptable quality score (0.0 to 1.0)
    })
The function uses a PromptOptimizer internally and configures it with:
- Debug logging for prompts and responses
- Custom evaluation metrics
- A configurable rating system
- Retry mechanisms for reliability
- Quality thresholds for acceptance
Types ¶
type ContextProvider ¶
type ContextProvider struct {
// contains filtered or unexported fields
}
func NewContextProvider ¶
func NewContextProvider(historyGetter HistoryGetter) *ContextProvider
func (*ContextProvider) Info ¶
func (c *ContextProvider) Info() string
func (*ContextProvider) Title ¶
func (c *ContextProvider) Title() string
type HistoryGetter ¶
type HistoryGetter interface {
RecentHistory() []OptimizationEntry
}
type IterationCallback ¶
type IterationCallback func(iteration int, entry OptimizationEntry)
IterationCallback is a function type for monitoring optimization progress. It is called after each iteration with the iteration number and the resulting optimization entry.
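As a sketch of how such a callback might be wired up, the closure below collects per-iteration scores. The entry type here is a simplified, hypothetical stand-in for OptimizationEntry, not the package's actual type:

```go
package main

import "fmt"

// entry is a simplified, hypothetical stand-in for OptimizationEntry.
type entry struct {
	Score float64
}

// iterationCallback mirrors the shape of IterationCallback.
type iterationCallback func(iteration int, e entry)

// collectScores returns a callback that records each iteration's score,
// along with the slice it appends to.
func collectScores() (iterationCallback, *[]float64) {
	scores := &[]float64{}
	cb := func(iteration int, e entry) {
		*scores = append(*scores, e.Score)
	}
	return cb, scores
}

func main() {
	cb, scores := collectScores()
	// Simulate three optimization iterations.
	for i, s := range []float64{12.5, 15.0, 17.5} {
		cb(i+1, entry{Score: s})
	}
	fmt.Println(len(*scores)) // number of recorded iterations
}
```

A callback like this is useful for logging, metrics export, or early inspection of the optimization trajectory.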
type LetterRating ¶
type LetterRating struct {
Grade string // Letter grade (A+, A, B, etc.)
}
LetterRating implements OptimizationRating using a letter grade system. It evaluates prompts using traditional academic grades (A+, A, B, etc.).
func (LetterRating) IsGoalMet ¶
func (lr LetterRating) IsGoalMet() bool
IsGoalMet checks if the letter grade meets the optimization goal. Returns true for grades A, A+, or S.
func (LetterRating) String ¶
func (lr LetterRating) String() string
String returns the letter grade as a string.
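The documented goal check (true only for grades A, A+, or S) can be sketched as a standalone helper:

```go
package main

import "fmt"

// isLetterGoalMet reports whether a letter grade satisfies the
// optimization goal, per the documented semantics: only A, A+, or S pass.
func isLetterGoalMet(grade string) bool {
	switch grade {
	case "A", "A+", "S":
		return true
	}
	return false
}

func main() {
	fmt.Println(isLetterGoalMet("A+")) // true
	fmt.Println(isLetterGoalMet("B"))  // false
}
```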
type Metric ¶
type Metric struct {
	// Name identifies the metric (e.g., "Clarity", "Specificity")
	Name string `json:"name" jsonschema:"title=name,description=identifies the metric (e.g., 'Clarity', 'Specificity')"`

	// Description explains what the metric measures and its significance
	Description string `json:"description" jsonschema:"title=description,description=explains what the metric measures and its significance"`

	// Value is the numerical score (0-20 scale) assigned to this metric
	Value float64 `` /* 148-byte string literal not displayed */

	// Reasoning provides the rationale behind the assigned value
	Reasoning string `json:"reasoning" jsonschema:"title=reasoning,description=provides the rationale behind the assigned value"`
}
Metric represents a quantitative or qualitative measure of prompt performance. Each metric provides a specific aspect of evaluation with supporting reasoning.
type NumericalRating ¶
type NumericalRating struct {
	Score float64 // Current score
	Max   float64 // Maximum possible score
}
NumericalRating implements OptimizationRating using a numerical score system. It evaluates prompts on a scale from 0 to Max.
func (NumericalRating) IsGoalMet ¶
func (nr NumericalRating) IsGoalMet() bool
IsGoalMet checks if the numerical score meets the optimization goal. Returns true if the score is 90% or higher of the maximum possible score.
func (NumericalRating) String ¶
func (nr NumericalRating) String() string
String formats the numerical rating as a string in the form "score/max".
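The 90% rule and the "score/max" rendering can be sketched in isolation; the exact formatting precision is an assumption here, not necessarily what the package emits:

```go
package main

import "fmt"

// numericalGoalMet reports whether score reaches 90% of max,
// matching the documented IsGoalMet semantics.
func numericalGoalMet(score, max float64) bool {
	return score >= 0.9*max
}

// formatRating renders a rating in "score/max" form; the precision
// shown is illustrative.
func formatRating(score, max float64) string {
	return fmt.Sprintf("%.1f/%.1f", score, max)
}

func main() {
	fmt.Println(numericalGoalMet(18, 20)) // true: 18 >= 0.9*20
	fmt.Println(numericalGoalMet(17, 20)) // false: 17 < 18
	fmt.Println(formatRating(18, 20))
}
```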
type OptimizationConfig ¶
type OptimizationConfig struct {
	// Description explains the intended use and context of the prompt
	Description string

	// Metrics defines custom evaluation criteria for prompt assessment.
	// Each metric should have a name and description.
	Metrics []Metric

	// RatingSystem specifies the grading approach:
	//   - "numerical": Uses a 0-20 scale
	//   - "letter": Uses letter grades (F to A+)
	RatingSystem string

	// Threshold sets the minimum acceptable quality score (0.0 to 1.0).
	// For numerical ratings: score must be >= threshold * 20.
	// For letter grades: requires a grade equivalent to the threshold.
	// Example: 0.8 requires A- or better.
	Threshold float64

	// MaxRetries is the maximum number of retry attempts for failed operations.
	// Each retry includes a delay specified by RetryDelay.
	MaxRetries int

	// RetryDelay is the duration to wait between retry attempts.
	// This helps prevent rate limiting and allows transient issues to resolve.
	RetryDelay time.Duration
}
OptimizationConfig holds the configuration parameters for prompt optimization. It controls various aspects of the optimization process including evaluation criteria, retry behavior, and quality thresholds.
func DefaultOptimizationConfig ¶
func DefaultOptimizationConfig() OptimizationConfig
DefaultOptimizationConfig returns a default configuration for prompt optimization. The default configuration provides a balanced set of parameters suitable for most optimization scenarios.
Default values:
- RatingSystem: "numerical" (0-20 scale)
- Threshold: 0.8 (requires 16/20 or better)
- MaxRetries: 3 attempts
- RetryDelay: 2 seconds
- Metrics: Relevance, Clarity, and Specificity
Example usage:
    config := DefaultOptimizationConfig()
    config.Description = "Task description..."
The default metrics evaluate:
- Relevance: Alignment with the intended task
- Clarity: Unambiguous and understandable phrasing
- Specificity: Level of detail and precision
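The documented mapping from the 0.0-1.0 threshold to the 0-20 numerical scale can be sketched as:

```go
package main

import "fmt"

// requiredScore converts a 0.0-1.0 threshold into the minimum
// acceptable score on the documented 0-20 numerical scale.
func requiredScore(threshold float64) float64 {
	return threshold * 20
}

func main() {
	// The default threshold of 0.8 requires 16/20 or better.
	fmt.Println(requiredScore(0.8))
}
```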
type OptimizationEntry ¶
type OptimizationEntry struct {
	schema.Base

	// Prompt is the LLM prompt being evaluated
	Prompt schema.Schema `json:"prompt" jsonschema:"title=prompt,description=previous evaluated prompt"`

	// Assessment contains the comprehensive evaluation of the prompt
	Assessment PromptAssessment `json:"assessment" jsonschema:"title=assessment,description=the comprehensive evaluation of the prompt"`
}
OptimizationEntry represents a single step in the optimization process, containing both the prompt and its assessment.
type OptimizationRating ¶
type OptimizationRating interface {
	// IsGoalMet determines if the optimization goal has been achieved
	IsGoalMet() bool

	// String returns a string representation of the rating
	String() string
}
OptimizationRating defines the interface for different rating systems used in prompt optimization. Implementations can provide different ways to evaluate if an optimization goal has been met.
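To illustrate how a custom rating system could satisfy this interface, the sketch below defines a local copy of the interface and a simple pass/fail implementation. The passFailRating type is hypothetical and not part of the package:

```go
package main

import "fmt"

// optimizationRating mirrors the package's OptimizationRating interface.
type optimizationRating interface {
	IsGoalMet() bool
	String() string
}

// passFailRating is a hypothetical rating system with a binary outcome.
type passFailRating struct {
	Passed bool
}

// IsGoalMet reports the binary outcome directly.
func (p passFailRating) IsGoalMet() bool { return p.Passed }

// String renders the outcome as "pass" or "fail".
func (p passFailRating) String() string {
	if p.Passed {
		return "pass"
	}
	return "fail"
}

func main() {
	var r optimizationRating = passFailRating{Passed: true}
	fmt.Println(r.IsGoalMet(), r.String())
}
```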
type OptimizerConfig ¶
type OptimizerConfig struct {
// contains filtered or unexported fields
}
OptimizerConfig holds the internal configuration for a PromptOptimizer.
type OptimizerOption ¶
type OptimizerOption func(*OptimizerConfig)
OptimizerOption is a function type for configuring the PromptOptimizer. It follows the functional options pattern for flexible configuration.
type Option ¶
type Option = func(*OptimizerConfig)
func WithCustomMetrics ¶
func WithCustomMetrics(metrics ...Metric) Option
WithCustomMetrics sets custom evaluation metrics for the optimizer.
func WithIterationCallback ¶
func WithIterationCallback(callback IterationCallback) Option
WithIterationCallback sets a callback function to be called after each iteration.
func WithIterations ¶
func WithIterations(iterations int) Option
WithIterations sets the maximum number of optimization iterations.
func WithMaxRetries ¶
func WithMaxRetries(maxRetries int) Option
WithMaxRetries sets the maximum number of retry attempts per iteration.
func WithMemorySize ¶
func WithMemorySize(size int) Option
WithMemorySize sets the number of recent optimization entries to keep in memory.
func WithOptimizationGoal ¶
func WithOptimizationGoal(goal string) Option
WithOptimizationGoal sets the target goal for optimization.
func WithRatingSystem ¶
func WithRatingSystem(system string) Option
WithRatingSystem sets the rating system to use (numerical or letter grades).
func WithRetryDelay ¶
func WithRetryDelay(delay time.Duration) Option
WithRetryDelay sets the delay duration between retry attempts.
func WithThreshold ¶
func WithThreshold(threshold float64) Option
WithThreshold sets the minimum acceptable score threshold.
type PromptAssessment ¶
type PromptAssessment struct {
	schema.Base

	// Metrics contains specific performance measurements
	Metrics []Metric `json:"metrics" validate:"required,min=1" jsonschema:"title=metrics,description=specific performance measurements"`

	// Strengths lists positive aspects of the prompt
	Strengths []Strength `json:"strengths" validate:"required,min=1" jsonschema:"title=strengths,description=lists positive aspects of the prompt"`

	// Weaknesses identifies areas needing improvement
	Weaknesses []Weakness `json:"weaknesses" validate:"required,min=1" jsonschema:"title=weaknesses,description=identifies areas needing improvement"`

	// Suggestions provides actionable improvements
	Suggestions []Suggestion `json:"suggestions" validate:"required,min=1" jsonschema:"title=suggestions,description=provides actionable improvements"`

	// OverallScore represents the prompt's overall quality (0-20 scale)
	OverallScore float64 `` /* 149-byte string literal not displayed */

	// OverallGrade provides a letter grade assessment (e.g., "A", "B+")
	OverallGrade string `` /* 246-byte string literal not displayed */

	// EfficiencyScore measures token usage and processing efficiency
	EfficiencyScore float64 `` /* 149-byte string literal not displayed */

	// AlignmentWithGoal measures how well the prompt serves its intended purpose
	AlignmentWithGoal float64 `` /* 176-byte string literal not displayed */
}
PromptAssessment provides a comprehensive evaluation of a prompt's quality including metrics, strengths, weaknesses, and suggestions for improvement.
type PromptImprovement ¶
type PromptImprovement[T schema.Schema] struct {
	schema.Base
	IncrementalImprovement *T `json:"incrementalImprovement" jsonschema:"title=incrementalImprovement,description=Refines existing prompt approach"`
	BoldRedesign           *T `json:"boldRedesign" jsonschema:"title=boldRedesign,description=Reimagines prompt structure"`
	ExpectedImpact         struct {
		Incremental float64 `json:"incremental" jsonschema:"title=incremental,description=incrementalImprovement version impact score"`
		Bold        float64 `json:"bold" jsonschema:"title=bold,description=boldRedesign version impact score"`
	} `` /* 131-byte string literal not displayed */
}
PromptImprovement represents proposed improvements to a prompt, offering both an incremental refinement and a bold redesign, each with an expected impact score.
type PromptOptimizer ¶
type PromptOptimizer[T schema.Schema] struct {
	OptimizerConfig
	// contains filtered or unexported fields
}
PromptOptimizer orchestrates the prompt optimization process. It manages the iterative refinement of prompts through assessment, improvement suggestions, and validation.
func NewPromptOptimizer ¶
func NewPromptOptimizer[I schema.Schema](agentOpts []agents.Option, taskDesc string, opts ...OptimizerOption) *PromptOptimizer[I]
NewPromptOptimizer creates a new instance of PromptOptimizer with the given configuration.
Parameters:
- agentOpts: Options configuring the underlying agent used for generating and evaluating prompts
- taskDesc: Description of the optimization task
- opts: Optional configuration options
Returns:
- Configured PromptOptimizer instance
func (*PromptOptimizer[I]) GetOptimizationHistory ¶
func (po *PromptOptimizer[I]) GetOptimizationHistory() []OptimizationEntry
GetOptimizationHistory returns the complete history of optimization attempts.
func (*PromptOptimizer[I]) OptimizePrompt ¶
func (po *PromptOptimizer[I]) OptimizePrompt(ctx context.Context, prompt *I) (*I, error)
OptimizePrompt performs iterative optimization of the initial prompt to meet the specified goal.
The optimization process:
1. Assesses the current prompt
2. Records the assessment in history
3. Checks if the optimization goal is met
4. Generates an improved prompt if the goal is not met
5. Repeats until the goal is met or the maximum number of iterations is reached
Parameters:
- ctx: Context for cancellation and timeout
- prompt: The initial prompt to optimize
Returns:
- Optimized prompt
- Error if optimization fails
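The assess/improve loop described above can be sketched in simplified form. The assess and improve functions below are hypothetical stand-ins for the LLM-backed steps:

```go
package main

import "fmt"

// assess is a hypothetical stand-in for the LLM-backed quality
// assessment; here it simply scores prompts by length.
func assess(prompt string) float64 {
	return float64(len(prompt))
}

// improve is a hypothetical stand-in for the LLM-backed refinement step.
func improve(prompt string) string {
	return prompt + " (refined)"
}

// optimize runs the documented loop: assess, record, check the goal,
// improve, and repeat until the goal is met or maxIterations is reached.
func optimize(prompt string, goal float64, maxIterations int) (string, []float64) {
	history := []float64{}
	for i := 0; i < maxIterations; i++ {
		score := assess(prompt)
		history = append(history, score)
		if score >= goal {
			break
		}
		prompt = improve(prompt)
	}
	return prompt, history
}

func main() {
	final, history := optimize("draft", 20, 5)
	fmt.Println(final, len(history))
}
```

In the real optimizer, each iteration also invokes the configured IterationCallback and retries failed LLM calls per MaxRetries and RetryDelay.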
func (*PromptOptimizer[I]) RecentHistory ¶
func (po *PromptOptimizer[I]) RecentHistory() []OptimizationEntry
RecentHistory returns the most recent optimization entries, limited by the configured memory size.
type Strength ¶
type Strength struct {
	// Point describes the strength (e.g., "Clear task definition")
	Point string `json:"point" jsonschema:"title=point,description=describes the strength (e.g., 'Clear task definition')"`

	// Example provides a specific instance demonstrating this strength
	Example string `json:"example" jsonschema:"title=example,description=provides a specific instance demonstrating this strength"`
}
Strength represents a positive aspect of the prompt with a concrete example.
type Suggestion ¶
type Suggestion struct {
	// Description outlines the suggested change
	Description string `json:"description" jsonschema:"title=description,description=outlines the suggested change"`

	// ExpectedImpact estimates the improvement's effect (0-20 scale)
	ExpectedImpact float64 `` /* 148-byte string literal not displayed */

	// Reasoning explains why this suggestion would improve the prompt
	Reasoning string `json:"reasoning" jsonschema:"title=reasoning,description=explains why this suggestion would improve the prompt"`
}
Suggestion represents a proposed improvement to the prompt with impact estimation.
type Weakness ¶
type Weakness struct {
	// Point describes the weakness (e.g., "Ambiguous constraints")
	Point string `json:"point" jsonschema:"title=point,description=describes the weakness (e.g., 'Ambiguous constraints')"`

	// Example provides a specific instance demonstrating this weakness
	Example string `json:"example" jsonschema:"title=example,description=provides a specific instance demonstrating this weakness"`
}
Weakness represents an area for improvement in the prompt with a concrete example.