Documentation ¶
Index ¶
- func InitializeModel(cfg *Config, opts ...ModelOption) (llms.Model, error)
- type ChatCompletionPayload
- type CompletionService
- func (s *CompletionService) PerformCompletion(ctx context.Context, payload *ChatCompletionPayload, ...) (string, error)
- func (s *CompletionService) PerformCompletionStreaming(ctx context.Context, payload *ChatCompletionPayload, ...) (<-chan string, error)
- func (s *CompletionService) Run(ctx context.Context, runCfg RunOptions) error
- func (s *CompletionService) SetNextCompletionPrefill(content string)
- type CompletionServiceOption
- type Config
- type DummyBackend
- func (d *DummyBackend) Call(ctx context.Context, prompt string, options ...llms.CallOption) (string, error)
- func (d *DummyBackend) CreateEmbedding(ctx context.Context, text string) ([]float64, error)
- func (d *DummyBackend) GenerateContent(ctx context.Context, messages []llms.MessageContent, ...) (*llms.ContentResponse, error)
- type InputHandler
- type InputSource
- type InputSourceType
- type InputSources
- type Message
- type ModelOption
- type PerformCompletionConfig
- type RunOptions
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func InitializeModel ¶ added in v0.4.0
func InitializeModel(cfg *Config, opts ...ModelOption) (llms.Model, error)
InitializeModel initializes the model with the given configuration and options.
Types ¶
type ChatCompletionPayload ¶
type ChatCompletionPayload struct {
	Model    string `json:"model"`
	Messages []llms.MessageContent
	Stream   bool `json:"stream,omitempty"`
}
type CompletionService ¶
type CompletionService struct {
	// Stdout is the writer for standard output. If nil, os.Stdout will be used.
	Stdout io.Writer

	// Stderr is the writer for standard error. If nil, os.Stderr will be used.
	Stderr io.Writer
	// contains filtered or unexported fields
}
func NewCompletionService ¶
func NewCompletionService(cfg *Config, model llms.Model, opts ...CompletionServiceOption) (*CompletionService, error)
NewCompletionService creates a new CompletionService with the given configuration.
func (*CompletionService) PerformCompletion ¶
func (s *CompletionService) PerformCompletion(ctx context.Context, payload *ChatCompletionPayload, cfg PerformCompletionConfig) (string, error)
PerformCompletion provides a non-streaming version of the completion.
func (*CompletionService) PerformCompletionStreaming ¶
func (s *CompletionService) PerformCompletionStreaming(ctx context.Context, payload *ChatCompletionPayload, cfg PerformCompletionConfig) (<-chan string, error)
func (*CompletionService) Run ¶
func (s *CompletionService) Run(ctx context.Context, runCfg RunOptions) error
func (*CompletionService) SetNextCompletionPrefill ¶ added in v0.3.0
func (s *CompletionService) SetNextCompletionPrefill(content string)
SetNextCompletionPrefill sets the next completion prefill message. Note that not all inference engines support prefill messages. Whitespace is trimmed from the end of the message.
type CompletionServiceOption ¶ added in v0.4.0
type CompletionServiceOption func(*CompletionService)
func WithLogger ¶ added in v0.4.0
func WithLogger(l *zap.SugaredLogger) CompletionServiceOption
WithLogger sets the logger for the completion service.
func WithStderr ¶ added in v0.4.0
func WithStderr(w io.Writer) CompletionServiceOption
func WithStdout ¶ added in v0.4.0
func WithStdout(w io.Writer) CompletionServiceOption
type Config ¶
type Config struct {
	Backend           string             `yaml:"backend"`
	Model             string             `yaml:"model"`
	Stream            bool               `yaml:"stream"`
	MaxTokens         int                `yaml:"maxTokens"`
	Temperature       float64            `yaml:"temperature"`
	SystemPrompt      string             `yaml:"systemPrompt"`
	LogitBias         map[string]float64 `yaml:"logitBias"`
	CompletionTimeout time.Duration      `yaml:"completionTimeout"`
	Debug             bool               `yaml:"debug"`
	OpenAIAPIKey      string             `yaml:"openaiAPIKey"`
	AnthropicAPIKey   string             `yaml:"anthropicAPIKey"`
	GoogleAPIKey      string             `yaml:"googleAPIKey"`
}
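Given the `yaml` tags on Config, a config file might look like the fragment below. The field names come from the tags; all values are illustrative, not defaults.

```yaml
# Illustrative config file; values are examples only.
backend: openai
model: gpt-4o
stream: true
maxTokens: 1024
temperature: 0.7
systemPrompt: "You are a helpful assistant."
completionTimeout: 2m
debug: false
```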
func LoadConfig ¶ added in v0.2.0
LoadConfig loads the configuration from various sources in the following order of precedence:

 1. Command-line flags (highest priority)
 2. Environment variables
 3. Configuration file
 4. Default values (lowest priority)

The function performs the following steps:

 - Sets default values
 - Binds command-line flags
 - Loads environment variables
 - Reads the configuration file
 - Unmarshals the configuration into the Config struct

If a config file is not found, it falls back to using defaults and flags. The --verbose flag can be used to print the final configuration.
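The precedence order amounts to "first source that provides a value wins". This standalone sketch models that resolution with a hypothetical `firstSet` helper; LoadConfig's real implementation (flag binding, env lookup, file parsing) is not shown here.

```go
package main

import "fmt"

// firstSet returns the first non-empty value, mirroring the precedence
// order: flag > environment variable > config file > default.
func firstSet(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	flagVal := ""              // not passed on the command line
	envVal := "claude-3-haiku" // e.g. set via an environment variable
	fileVal := "gpt-4o"        // from the config file
	defVal := "default-model"  // built-in default

	// The environment variable wins because no flag was given.
	fmt.Println(firstSet(flagVal, envVal, fileVal, defVal))
}
```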
type DummyBackend ¶ added in v0.4.0
type DummyBackend struct {
GenerateText func() string
}
func NewDummyBackend ¶ added in v0.4.0
func NewDummyBackend() (*DummyBackend, error)
func (*DummyBackend) Call ¶ added in v0.4.0
func (d *DummyBackend) Call(ctx context.Context, prompt string, options ...llms.CallOption) (string, error)
func (*DummyBackend) CreateEmbedding ¶ added in v0.4.0
func (d *DummyBackend) CreateEmbedding(ctx context.Context, text string) ([]float64, error)
func (*DummyBackend) GenerateContent ¶ added in v0.4.0
func (d *DummyBackend) GenerateContent(ctx context.Context, messages []llms.MessageContent, options ...llms.CallOption) (*llms.ContentResponse, error)
type InputHandler ¶ added in v0.4.0
InputHandler manages multiple input sources.
type InputSource ¶ added in v0.4.0
type InputSource struct {
	Type   InputSourceType
	Reader io.Reader
}
InputSource represents a single input source.
type InputSourceType ¶ added in v0.4.0
type InputSourceType string
InputSourceType represents the type of input source.
const (
	InputSourceStdin  InputSourceType = "stdin"
	InputSourceFile   InputSourceType = "file"
	InputSourceString InputSourceType = "string"
	InputSourceArg    InputSourceType = "arg"
)
type InputSources ¶ added in v0.4.0
type InputSources []InputSource
InputSources is a slice of InputSource.
type Message ¶ added in v0.4.5
type Message = []llms.MessageContent
type ModelOption ¶ added in v0.4.0
type ModelOption func(*modelOptions)
ModelOption is a function that modifies the model options.
func WithHTTPClient ¶ added in v0.4.0
func WithHTTPClient(client *http.Client) ModelOption
WithHTTPClient sets a custom HTTP client for the model.
type PerformCompletionConfig ¶ added in v0.3.2
PerformCompletionConfig is the configuration for the PerformCompletion method; it controls the behavior of the completion with regard to user interaction.
type RunOptions ¶ added in v0.4.0
type RunOptions struct {
	// Config options
	*Config `json:"config,omitempty" yaml:"config,omitempty"`

	// Input options
	InputStrings   []string `json:"inputStrings,omitempty" yaml:"inputStrings,omitempty"`
	InputFiles     []string `json:"inputFiles,omitempty" yaml:"inputFiles,omitempty"`
	PositionalArgs []string `json:"positionalArgs,omitempty" yaml:"positionalArgs,omitempty"`
	Prefill        string   `json:"prefill,omitempty" yaml:"prefill,omitempty"`

	// Output options
	Continuous   bool `json:"continuous,omitempty" yaml:"continuous,omitempty"`
	StreamOutput bool `json:"streamOutput,omitempty" yaml:"streamOutput,omitempty"`
	ShowSpinner  bool `json:"showSpinner,omitempty" yaml:"showSpinner,omitempty"`
	EchoPrefill  bool `json:"echoPrefill,omitempty" yaml:"echoPrefill,omitempty"`

	// Verbosity options
	Verbose   bool `json:"verbose,omitempty" yaml:"verbose,omitempty"`
	DebugMode bool `json:"debugMode,omitempty" yaml:"debugMode,omitempty"`

	// History options
	HistoryIn           string `json:"historyIn,omitempty" yaml:"historyIn,omitempty"`
	HistoryOut          string `json:"historyOut,omitempty" yaml:"historyOut,omitempty"`
	ReadlineHistoryFile string `json:"readlineHistoryFile,omitempty" yaml:"readlineHistoryFile,omitempty"`
	NCompletions        int    `json:"nCompletions,omitempty" yaml:"nCompletions,omitempty"`

	// I/O
	Stdout io.Writer `json:"-" yaml:"-"`
	Stderr io.Writer `json:"-" yaml:"-"`
	Stdin  io.Reader `json:"-" yaml:"-"`

	// Timing
	MaximumTimeout time.Duration `json:"maximumTimeout,omitempty" yaml:"maximumTimeout,omitempty"`

	ConfigPath string `json:"configPath,omitempty" yaml:"configPath,omitempty"`
}
RunOptions contains all the options that are relevant to run cgpt.
func (*RunOptions) GetCombinedInputReader ¶ added in v0.4.0
GetCombinedInputReader returns an io.Reader that combines all input sources.