cgpt

package module
v0.4.5
Published: Nov 22, 2024 License: ISC Imports: 25 Imported by: 0

README

cgpt

cgpt is a command-line tool for interacting with Large Language Models (LLMs) using various backends.

Features

  • Supports multiple backends: Anthropic, OpenAI, Ollama, and Google AI
  • Interactive mode for continuous conversations
  • Streaming output
  • History management
  • Configurable via YAML file and environment variables
  • Vim plugin for easy integration

Installation

Using Homebrew

brew install tmc/tap/cgpt

From Source

cgpt is written in Go. To build from source, you need to have Go installed on your system. See the Go installation instructions for more information.

Quickstart: Setting Up Go with Homebrew
  1. Install Homebrew:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    
  2. Install Go:

    brew install go
    
  3. Add Go binary directory to PATH:

    echo 'export PATH=$PATH:$HOME/go/bin' >> ~/.zshrc
    # Or if using bash:
    # echo 'export PATH=$PATH:$HOME/go/bin' >> ~/.bash_profile
    
  4. Reload your shell configuration:

    source ~/.zshrc
    # Or if using bash:
    # source ~/.bash_profile
    

Once Go is set up, you can install cgpt from source:

go install github.com/tmc/cgpt/cmd/cgpt@latest

From GitHub Releases

Download the latest release from the GitHub Releases page.

Usage

cgpt [flags]
Flags
  • -b, --backend string: The backend to use (default "anthropic")
  • -m, --model string: The model to use (default "claude-3-5-sonnet-20241022")
  • -i, --input string: Direct string input (overrides -f)
  • -f, --file string: Input file path. Use '-' for stdin (default "-")
  • -c, --continuous: Run in continuous mode (interactive)
  • -s, --system-prompt string: System prompt to use
  • -p, --prefill string: Prefill the assistant's response
  • -I, --history-load string: File to read completion history from
  • -O, --history-save string: File to store completion history in
  • --config string: Path to the configuration file (default "config.yaml")
  • -v, --verbose: Verbose output
  • --debug: Debug output
  • -n, --completions int: Number of completions (when running non-interactively with history)
  • -t, --max-tokens int: Maximum tokens to generate (default 8000)
  • --completion-timeout duration: Maximum time to wait for a response (default 2m0s)

Configuration

cgpt can be configured using a YAML file. By default, it looks for config.yaml in the current directory. You can specify a different configuration file using the --config flag.

Example config.yaml:

backend: "anthropic"
model: "claude-3-5-sonnet-20241022"
stream: true
maxTokens: 2048
systemPrompt: "You are a helpful assistant."

Environment Variables

  • OPENAI_API_KEY: OpenAI API key
  • OPENAI_BASE_URL: Override for OpenAI API Base URL
  • ANTHROPIC_API_KEY: Anthropic API key
  • GOOGLE_API_KEY: Google AI API key

Vim Plugin

cgpt includes a Vim plugin for easy integration. To use it, copy the vim/plugin/cgpt.vim file to your Vim plugin directory.

Vim Plugin Usage
  1. Visually select the text you want to process with cgpt.
  2. Press cg or use the :CgptRun command to run cgpt on the selected text.
  3. The output will be appended after the visual selection.
Vim Plugin Configuration
  • g:cgpt_backend: Set the backend for cgpt (default: 'anthropic')
  • g:cgpt_model: Set the model for cgpt (default: 'claude-3-5-sonnet-20241022')
  • g:cgpt_system_prompt: Set the system prompt for cgpt
  • g:cgpt_config_file: Set the path to the cgpt configuration file
  • g:cgpt_include_filetype: Include the current filetype in the prompt (default: 0)

Examples

# Simple query
echo "Explain quantum computing" | cgpt

# Interactive mode
cgpt -c

# Use a specific backend and model
cgpt -b openai -m gpt-4 -i "Translate 'Hello, world!' to French"

# Load and save history
cgpt -I input_history.yaml -O output_history.yaml -i "Continue the conversation"

License

This project is licensed under the ISC License. See the LICENSE file for details.



Happy hacking! 🚀

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func InitializeModel added in v0.4.0

func InitializeModel(cfg *Config, opts ...ModelOption) (llms.Model, error)

InitializeModel initializes the model with the given configuration and options.

Types

type ChatCompletionPayload

type ChatCompletionPayload struct {
	Model    string `json:"model"`
	Messages []llms.MessageContent
	Stream   bool `json:"stream,omitempty"`
}

type CompletionService

type CompletionService struct {

	// Stdout is the writer for standard output. If nil, os.Stdout will be used.
	Stdout io.Writer
	// Stderr is the writer for standard error. If nil, os.Stderr will be used.
	Stderr io.Writer
	// contains filtered or unexported fields
}

func NewCompletionService

func NewCompletionService(cfg *Config, model llms.Model, opts ...CompletionServiceOption) (*CompletionService, error)

NewCompletionService creates a new CompletionService with the given configuration.

func (*CompletionService) PerformCompletion

func (s *CompletionService) PerformCompletion(ctx context.Context, payload *ChatCompletionPayload, cfg PerformCompletionConfig) (string, error)

PerformCompletion provides a non-streaming version of the completion.

func (*CompletionService) PerformCompletionStreaming

func (s *CompletionService) PerformCompletionStreaming(ctx context.Context, payload *ChatCompletionPayload, cfg PerformCompletionConfig) (<-chan string, error)

func (*CompletionService) Run

func (s *CompletionService) Run(ctx context.Context, runCfg RunOptions) error

func (*CompletionService) SetNextCompletionPrefill added in v0.3.0

func (s *CompletionService) SetNextCompletionPrefill(content string)

SetNextCompletionPrefill sets the next completion prefill message. Note that not all inference engines support prefill messages. Whitespace is trimmed from the end of the message.
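The trailing-whitespace trimming mentioned above can be sketched as follows; the exact cutset is an assumption, and `trimPrefill` is a hypothetical stand-in, not the cgpt internals.

```go
package main

import (
	"fmt"
	"strings"
)

// trimPrefill mirrors the documented behavior of SetNextCompletionPrefill:
// trailing whitespace is stripped so the model continues directly from the
// prefill text. The whitespace cutset used here is an assumption.
func trimPrefill(content string) string {
	return strings.TrimRight(content, " \t\r\n")
}

func main() {
	fmt.Printf("%q\n", trimPrefill("Sure, here is the list:\n"))
}
```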

type CompletionServiceOption added in v0.4.0

type CompletionServiceOption func(*CompletionService)

func WithLogger added in v0.4.0

WithLogger sets the logger for the completion service.

func WithStderr added in v0.4.0

func WithStderr(w io.Writer) CompletionServiceOption

func WithStdout added in v0.4.0

func WithStdout(w io.Writer) CompletionServiceOption

type Config

type Config struct {
	Backend     string  `yaml:"backend"`
	Model       string  `yaml:"model"`
	Stream      bool    `yaml:"stream"`
	MaxTokens   int     `yaml:"maxTokens"`
	Temperature float64 `yaml:"temperature"`

	SystemPrompt string             `yaml:"systemPrompt"`
	LogitBias    map[string]float64 `yaml:"logitBias"`

	CompletionTimeout time.Duration `yaml:"completionTimeout"`

	Debug bool `yaml:"debug"`

	OpenAIAPIKey    string `yaml:"openaiAPIKey"`
	AnthropicAPIKey string `yaml:"anthropicAPIKey"`
	GoogleAPIKey    string `yaml:"googleAPIKey"`
}

func LoadConfig added in v0.2.0

func LoadConfig(path string, stderr io.Writer, flagSet *pflag.FlagSet) (*Config, error)

LoadConfig loads the configuration from various sources, in the following order of precedence:

  1. Command-line flags (highest priority)
  2. Environment variables
  3. Configuration file
  4. Default values (lowest priority)

The function performs the following steps:

  • Sets default values
  • Binds command-line flags
  • Loads environment variables
  • Reads the configuration file
  • Unmarshals the configuration into the Config struct

If a config file is not found, it falls back to using defaults and flags. The --verbose flag can be used to print the final configuration.
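The precedence merge can be pictured as a first-non-empty pick across the layers. This is a deliberately simplified stand-in (the real LoadConfig binds a pflag.FlagSet and reads a config file), shown only to illustrate the documented ordering:

```go
package main

import "fmt"

// pick returns the first non-empty value, encoding the precedence
// LoadConfig documents: flag > env > config file > default.
func pick(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	flagVal, envVal, fileVal, defVal := "", "openai", "anthropic", "anthropic"
	// No flag was passed, so the environment variable wins.
	fmt.Println(pick(flagVal, envVal, fileVal, defVal))
}
```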

type DummyBackend added in v0.4.0

type DummyBackend struct {
	GenerateText func() string
}

func NewDummyBackend added in v0.4.0

func NewDummyBackend() (*DummyBackend, error)

func (*DummyBackend) Call added in v0.4.0

func (d *DummyBackend) Call(ctx context.Context, prompt string, options ...llms.CallOption) (string, error)

func (*DummyBackend) CreateEmbedding added in v0.4.0

func (d *DummyBackend) CreateEmbedding(ctx context.Context, text string) ([]float64, error)

func (*DummyBackend) GenerateContent added in v0.4.0

func (d *DummyBackend) GenerateContent(ctx context.Context, messages []llms.MessageContent, options ...llms.CallOption) (*llms.ContentResponse, error)

type InputHandler added in v0.4.0

type InputHandler struct {
	Files   []string
	Strings []string
	Args    []string
	Stdin   io.Reader
}

InputHandler manages multiple input sources.

func (*InputHandler) Process added in v0.4.0

func (h *InputHandler) Process(ctx context.Context) (io.Reader, error)

Process reads the set of inputs; this will block on stdin if stdin is included. The order of precedence is:

  1. Files
  2. Strings
  3. Args

type InputSource added in v0.4.0

type InputSource struct {
	Type   InputSourceType
	Reader io.Reader
}

InputSource represents a single input source.

type InputSourceType added in v0.4.0

type InputSourceType string

InputSourceType represents the type of input source.

const (
	InputSourceStdin  InputSourceType = "stdin"
	InputSourceFile   InputSourceType = "file"
	InputSourceString InputSourceType = "string"
	InputSourceArg    InputSourceType = "arg"
)

type InputSources added in v0.4.0

type InputSources []InputSource

InputSources is a slice of InputSource.

type Message added in v0.4.5

type Message = []llms.MessageContent

type ModelOption added in v0.4.0

type ModelOption func(*modelOptions)

ModelOption is a function that modifies the model options.

func WithHTTPClient added in v0.4.0

func WithHTTPClient(client *http.Client) ModelOption

WithHTTPClient sets a custom HTTP client for the model.

type PerformCompletionConfig added in v0.3.2

type PerformCompletionConfig struct {
	Stdout      io.Writer
	EchoPrefill bool
	ShowSpinner bool
}

PerformCompletionConfig is the configuration for the PerformCompletion method; it controls the behavior of the completion with regard to user interaction.

type RunOptions added in v0.4.0

type RunOptions struct {
	// Config options
	*Config `json:"config,omitempty" yaml:"config,omitempty"`
	// Input options
	InputStrings   []string `json:"inputStrings,omitempty" yaml:"inputStrings,omitempty"`
	InputFiles     []string `json:"inputFiles,omitempty" yaml:"inputFiles,omitempty"`
	PositionalArgs []string `json:"positionalArgs,omitempty" yaml:"positionalArgs,omitempty"`
	Prefill        string   `json:"prefill,omitempty" yaml:"prefill,omitempty"`

	// Output options
	Continuous   bool `json:"continuous,omitempty" yaml:"continuous,omitempty"`
	StreamOutput bool `json:"streamOutput,omitempty" yaml:"streamOutput,omitempty"`
	ShowSpinner  bool `json:"showSpinner,omitempty" yaml:"showSpinner,omitempty"`
	EchoPrefill  bool `json:"echoPrefill,omitempty" yaml:"echoPrefill,omitempty"`

	// Verbosity options
	Verbose   bool `json:"verbose,omitempty" yaml:"verbose,omitempty"`
	DebugMode bool `json:"debugMode,omitempty" yaml:"debugMode,omitempty"`

	// History options
	HistoryIn           string `json:"historyIn,omitempty" yaml:"historyIn,omitempty"`
	HistoryOut          string `json:"historyOut,omitempty" yaml:"historyOut,omitempty"`
	ReadlineHistoryFile string `json:"readlineHistoryFile,omitempty" yaml:"readlineHistoryFile,omitempty"`
	NCompletions        int    `json:"nCompletions,omitempty" yaml:"nCompletions,omitempty"`

	// I/O
	Stdout io.Writer `json:"-" yaml:"-"`
	Stderr io.Writer `json:"-" yaml:"-"`
	Stdin  io.Reader `json:"-" yaml:"-"`

	// Timing
	MaximumTimeout time.Duration `json:"maximumTimeout,omitempty" yaml:"maximumTimeout,omitempty"`

	ConfigPath string `json:"configPath,omitempty" yaml:"configPath,omitempty"`
}

RunOptions contains all the options that are relevant to run cgpt.

func (*RunOptions) GetCombinedInputReader added in v0.4.0

func (ro *RunOptions) GetCombinedInputReader(ctx context.Context) (io.Reader, error)

GetCombinedInputReader returns an io.Reader that combines all input sources.

Directories

Path        Synopsis
cmd/cgpt    Command cgpt is a command line tool for interacting with Large Language Models (LLMs).
