fun

package module
v0.7.0
Published: Dec 10, 2024 License: Apache-2.0 Imports: 22 Imported by: 0

README

Define functions with code, data, or natural language description

This Go package provides a high-level abstraction to define functions with code (the usual way), data (providing examples of inputs and expected outputs which are then used with an AI model), or natural language description. It is a simple but powerful way to use large language models (LLMs) in Go.

Features:

  • A common interface to support code-defined, data-defined, and description-defined functions.
  • Functions are strongly typed so inputs and outputs can be Go structs and values.
  • Provides unofficial OpenAI, Groq, Anthropic and Ollama integrations for AI (LLM) models.
  • Support for tool calling which transparently calls into Go functions with Go structs and values as inputs and outputs. Recursion is possible.
  • Uses adaptive rate limiting to maximize throughput of API calls made to integrated AI models.
  • Provides a CLI tool fun which makes it easy to run data-defined and description-defined functions on files.

Installation

This is a Go package. You can add it to your project using go get:

go get gitlab.com/tozd/go/fun

It requires Go 1.23 or newer.

The releases page contains a list of stable versions of the fun tool. Each release includes:

  • Statically compiled binaries.
  • Docker images.

You should generally download and use the latest one.

The tool is implemented in Go. You can also use go install to install the latest stable (released) version:

go install gitlab.com/tozd/go/fun/cmd/fun@latest

To install the latest development version (main branch):

go install gitlab.com/tozd/go/fun/cmd/fun@main

Usage

As a package

See full package documentation with examples on pkg.go.dev.
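
As a quick orientation, here is a minimal sketch of a description-defined function; it assumes the GROQ_API_KEY environment variable and the llama3-8b-8192 model, as used in the package examples below:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"gitlab.com/tozd/go/fun"
)

func main() {
	ctx := context.Background()

	// Define a function with a natural language description only.
	f := fun.Text[int, int]{
		Provider: &fun.GroqTextProvider{
			APIKey: os.Getenv("GROQ_API_KEY"),
			Model:  "llama3-8b-8192",
		},
		Prompt: `Sum numbers together. Output only the number.`,
	}
	errE := f.Init(ctx)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}

	output, errE := f.Call(ctx, 38, 4)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}
	fmt.Println(output) // 42
}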

fun tool

The fun tool calls a function on files. You can provide:

  • Examples of inputs and expected outputs as files (as pairs of files with the same basename but different file extensions).
  • Natural language description of the function, a prompt.
  • Input files on which to run the function.
  • Files with input and output JSON Schemas to validate inputs and outputs, respectively.

You must provide example inputs and outputs, a prompt, or both.

fun has three sub-commands:

  • extract supports extracting parts of one JSON into multiple files using a GJSON query. Because fun calls the function on files, this is useful to preprocess a large JSON file to create files to then call the function on.
    • The query should return an array of objects with ID and data fields (by default named id and data).
  • call then calls the function on files in the input directory and writes results into files in the output directory.
    • Corresponding output files will have the same basename as input files but with the output file extension (configurable) so it is safe to use the same directory both for input and output files.
    • fun calls the function only for files which do not yet exist in the output directory, so it is safe to run fun multiple times if a previous run had issues or was interrupted.
    • fun supports splitting input files into batches so one run of fun can operate only on a particular batch. Useful if you want to distribute execution across multiple machines.
    • If the output fails to validate against the JSON Schema, the output is stored into a file with the additional suffix .invalid. If calling the function fails for some other reason, the error is stored into a file with the additional suffix .error.
  • combine combines multiple input directories into one output directory with only those files which are equal in all input directories.
    • Provided input directories should be outputs from different models or different configurations, but all run on the same input files.
    • This allows decreasing false positives at the expense of having fewer outputs overall.

For details on all CLI arguments possible, run fun --help:

fun --help

If you have Go available, you can run it without installation:

go run gitlab.com/tozd/go/fun/cmd/fun@latest --help

Or with Docker:

docker run -i registry.gitlab.com/tozd/go/fun/branch/main:latest --help

The above command runs the latest development version (main branch). See releases page for a Docker image for the latest stable version.

Example

If you have a large JSON file with the following structure:

{
  "exercises": [
    {
      "serial": 1,
      "text": "Ariel was playing basketball. 1 of her shots went in the hoop. 2 of the shots did not go in the hoop. How many shots were there in total?"
    },
    // ...
  ]
}

To create, for each exercise, a .txt file in the data output directory with the filename based on the serial field (e.g., 1.txt) and the contents based on the text field, you could run:

fun extract --input exercises.json --output data --out=.txt 'exercises.#.{id:serial,data:text}'

To solve all exercises, you can then run:

export ANTHROPIC_API_KEY='...'
echo "You MUST output only final number, nothing more." > prompt.txt
fun call --input data --output results --provider anthropic --model claude-3-haiku-20240307 --in .txt --out .txt --prompt prompt.txt

For the data/1.txt input file you should now get a results/1.txt output file with contents 3.

The issue is that the function might sometimes output more than just the number. We can detect those cases by using a JSON Schema to validate outputs, for example one which validates that the output is an integer. We will see warnings when outputs do not validate, and the corresponding output files will not be created.

echo '{"type": "integer"}' > schema.json
fun call --input data --output results --provider anthropic --model claude-3-haiku-20240307 --in .txt --out .txt --prompt prompt.txt --output-schema schema.json

We can also use a JSON Schema to validate that the output is a string matching a regex:

echo '{"type": "string", "pattern": "^[0-9]+$"}' > schema.json
fun call --input data --output results --provider anthropic --model claude-3-haiku-20240307 --in .txt --out .txt --prompt prompt.txt --output-schema schema.json

GitHub mirror

There is also a read-only GitHub mirror available, if you need to fork the project there.

Acknowledgements

The project gratefully acknowledges the HPC RIVR consortium and EuroHPC JU for funding this project by providing computing resources of the HPC system Vega at the Institute of Information Science.

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Commission. Neither the European Union nor the granting authority can be held responsible for them. Funded within the framework of the NGI Search project under grant agreement No 101069364.

Documentation

Overview

Package fun provides a high-level abstraction to define functions with code (the usual way), data (providing examples of inputs and expected outputs which are then used with an AI model), or natural language description. It is a simple but powerful way to use large language models (LLMs) in Go.

Index

Examples

Constants

const (
	// Prompt to parse input string into the target struct.
	TextParserToJSONPrompt = `` //nolint:lll
	/* 265-byte string literal not displayed */

	// Prompt to request only JSON output, which is then converted into the target struct.
	TextToJSONPrompt = `Output only JSON.`
)

Variables

var (
	ErrAlreadyInitialized           = errors.Base("already initialized")
	ErrMultipleSystemMessages       = errors.Base("multiple system messages")
	ErrGaveUpRetry                  = errors.Base("gave up retrying")
	ErrAPIRequestFailed             = errors.Base("API request failed")
	ErrAPIResponseError             = errors.Base("API response error")
	ErrMissingRequestID             = errors.Base("missing request ID")
	ErrModelNotActive               = errors.Base("model not active")
	ErrUnexpectedRole               = errors.Base("unexpected role")
	ErrUnexpectedNumberOfMessages   = errors.Base("unexpected number of messages")
	ErrUnexpectedMessageType        = errors.Base("unexpected message type")
	ErrUnexpectedStop               = errors.Base("unexpected stop")
	ErrUnexpectedNumberOfTokens     = errors.Base("unexpected number of tokens")
	ErrModelMaxContextLength        = errors.Base("unable to determine model max context length")
	ErrMaxContextLengthOverModel    = errors.Base("max context length over what model supports")
	ErrMaxResponseLengthOverContext = errors.Base("max response length over max context length")
	ErrJSONSchemaValidation         = errors.Base("JSON Schema validation error")
	ErrRefused                      = errors.Base("refused")
	ErrInvalidJSONSchema            = errors.Base("invalid JSON Schema")
	ErrToolNotFound                 = errors.Base("tool not found")
	ErrToolCallsWithoutCalls        = errors.Base("tool calls without calls")
)

Functions

func WithTextRecorder added in v0.4.0

func WithTextRecorder(ctx context.Context) context.Context

WithTextRecorder returns a copy of the context in which an instance of TextRecorder is stored.

Passing such context to Text.Call allows you to record all communication with the AI model and track usage.
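
For example, a minimal sketch (assuming f is an initialized Text instance):

ctx = fun.WithTextRecorder(ctx)

output, errE := f.Call(ctx, 38, 4)
if errE != nil {
	log.Fatalf("% -+#.1v\n", errE)
}
fmt.Println(output)

// Calls returns records of all requests and responses made during the call.
calls := fun.GetTextRecorder(ctx).Calls()
fmt.Println(len(calls))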

Types

type AnthropicTextProvider

type AnthropicTextProvider struct {
	// Client is an HTTP client to be used for API calls. If not provided,
	// a rate-limited retryable HTTP client is initialized instead.
	Client *http.Client `json:"-"`

	// APIKey is the API key to be used for API calls.
	APIKey string `json:"-"`

	// Model is the name of the model to be used.
	Model string `json:"model"`

	// MaxContextLength is the maximum total number of tokens allowed to be used
	// with the underlying AI model (i.e., the maximum context window).
	// If not provided, heuristics are used to determine it automatically.
	MaxContextLength int `json:"maxContextLength"`

	// MaxResponseLength is the maximum number of tokens allowed to be used in
	// a response with the underlying AI model. If not provided, heuristics
	// are used to determine it automatically.
	MaxResponseLength int `json:"maxResponseLength"`

	// PromptCaching set to true enables prompt caching.
	PromptCaching bool `json:"promptCaching"`

	// Temperature is how creative the AI model should be.
	// Default is 0 which means not at all.
	Temperature float64 `json:"temperature"`
	// contains filtered or unexported fields
}

AnthropicTextProvider is a TextProvider which provides integration with text-based Anthropic AI models.

func (*AnthropicTextProvider) Chat

func (a *AnthropicTextProvider) Chat(ctx context.Context, message ChatMessage) (string, errors.E)

Chat implements TextProvider interface.

func (*AnthropicTextProvider) Init

func (a *AnthropicTextProvider) Init(_ context.Context, messages []ChatMessage) errors.E

Init implements TextProvider interface.

func (*AnthropicTextProvider) InitTools added in v0.4.0

func (a *AnthropicTextProvider) InitTools(ctx context.Context, tools map[string]TextTooler) errors.E

InitTools implements WithTools interface.

func (AnthropicTextProvider) MarshalJSON added in v0.4.0

func (a AnthropicTextProvider) MarshalJSON() ([]byte, error)

type Callee

type Callee[Input, Output any] interface {
	// Init initializes the callee.
	Init(ctx context.Context) errors.E

	// Call calls the callee with provided inputs and returns the output.
	Call(ctx context.Context, input ...Input) (Output, errors.E)

	// Variadic returns a Go function which takes variadic inputs
	// and returns the output as defined by the callee.
	Variadic() func(ctx context.Context, input ...Input) (Output, errors.E)

	// Unary returns a Go function which takes one input
	// and returns the output as defined by the callee.
	Unary() func(ctx context.Context, input Input) (Output, errors.E)
}

Callee is a high-level function abstraction to unify functions defined in different ways.
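
For illustration, a sketch (assuming ctx and the usual imports) which treats a code-defined and a description-defined function uniformly through Callee; the provider configuration here is an assumption, any TextProvider works:

var callees = []fun.Callee[int, int]{
	&fun.Go[int, int]{
		Fun: func(_ context.Context, input ...int) (int, errors.E) {
			return input[0] + input[1], nil
		},
	},
	&fun.Text[int, int]{
		Provider: &fun.GroqTextProvider{
			APIKey: os.Getenv("GROQ_API_KEY"),
			Model:  "llama3-8b-8192",
		},
		Prompt: `Sum numbers together. Output only the number.`,
	},
}

for _, c := range callees {
	if errE := c.Init(ctx); errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}
	output, errE := c.Call(ctx, 38, 4)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}
	fmt.Println(output)
}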

type ChatMessage

type ChatMessage struct {
	// Role of the message.
	Role string `json:"role"`

	// Content is textual content of the message.
	Content string `json:"content"`
}

ChatMessage is a message struct for TextProvider.

type Duration added in v0.6.0

type Duration time.Duration

Duration is a time.Duration which formats the duration as seconds with millisecond precision in JSON.
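
For example, a sketch of the resulting encoding (the rendered value assumes the seconds-with-millisecond-precision format described above):

d := fun.Duration(1500 * time.Millisecond)
b, err := json.Marshal(d)
if err != nil {
	log.Fatalf("%v\n", err)
}
fmt.Println(string(b)) // 1.500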

func (Duration) MarshalJSON added in v0.6.0

func (d Duration) MarshalJSON() ([]byte, error)

type Go

type Go[Input, Output any] struct {
	// Fun implements the logic.
	Fun func(ctx context.Context, input ...Input) (Output, errors.E)
}

Go implements Callee interface with its logic defined by Go function.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"gitlab.com/tozd/go/errors"

	"gitlab.com/tozd/go/fun"
)

func main() {
	ctx := context.Background()

	f := fun.Go[int, int]{
		Fun: func(_ context.Context, input ...int) (int, errors.E) {
			return input[0] + input[1], nil
		},
	}
	errE := f.Init(ctx)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}

	output, errE := f.Call(ctx, 38, 4)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}
	fmt.Println(output)

}
Output:

42

func (*Go[Input, Output]) Call

func (f *Go[Input, Output]) Call(ctx context.Context, input ...Input) (Output, errors.E)

Call implements Callee interface.

func (*Go[Input, Output]) Init

func (*Go[Input, Output]) Init(_ context.Context) errors.E

Init implements Callee interface.

func (*Go[Input, Output]) Unary added in v0.4.0

func (f *Go[Input, Output]) Unary() func(ctx context.Context, input Input) (Output, errors.E)

Unary implements Callee interface.

func (*Go[Input, Output]) Variadic added in v0.4.0

func (f *Go[Input, Output]) Variadic() func(ctx context.Context, input ...Input) (Output, errors.E)

Variadic implements Callee interface.

type GroqTextProvider

type GroqTextProvider struct {
	// Client is an HTTP client to be used for API calls. If not provided,
	// a rate-limited retryable HTTP client is initialized instead.
	Client *http.Client `json:"-"`

	// APIKey is the API key to be used for API calls.
	APIKey string `json:"-"`

	// Model is the name of the model to be used.
	Model string `json:"model"`

	// MaxContextLength is the maximum total number of tokens allowed to be used
	// with the underlying AI model (i.e., the maximum context window).
	// If not provided, heuristics are used to determine it automatically.
	MaxContextLength int `json:"maxContextLength"`

	// MaxResponseLength is the maximum number of tokens allowed to be used in
	// a response with the underlying AI model. If not provided, heuristics
	// are used to determine it automatically.
	MaxResponseLength int `json:"maxResponseLength"`

	// Seed is used to control the randomness of the AI model. Default is 0.
	Seed int `json:"seed"`

	// Temperature is how creative the AI model should be.
	// Default is 0 which means not at all.
	Temperature float64 `json:"temperature"`
	// contains filtered or unexported fields
}

GroqTextProvider is a TextProvider which provides integration with text-based Groq AI models.

func (*GroqTextProvider) Chat

func (g *GroqTextProvider) Chat(ctx context.Context, message ChatMessage) (string, errors.E)

Chat implements TextProvider interface.

func (*GroqTextProvider) Init

func (g *GroqTextProvider) Init(ctx context.Context, messages []ChatMessage) errors.E

Init implements TextProvider interface.

func (*GroqTextProvider) InitTools added in v0.4.0

func (g *GroqTextProvider) InitTools(ctx context.Context, tools map[string]TextTooler) errors.E

InitTools implements WithTools interface.

func (GroqTextProvider) MarshalJSON added in v0.4.0

func (g GroqTextProvider) MarshalJSON() ([]byte, error)

type InputOutput

type InputOutput[Input, Output any] struct {
	Input  []Input
	Output Output
}

InputOutput describes one example (variadic) input with the corresponding output.

type OllamaModelAccess added in v0.4.0

type OllamaModelAccess struct {
	Insecure bool
	Username string
	Password string
}

OllamaModelAccess describes access to a model for OllamaTextProvider.

type OllamaTextProvider

type OllamaTextProvider struct {
	// Client is an HTTP client to be used for API calls. If not provided,
	// a rate-limited retryable HTTP client is initialized instead.
	Client *http.Client `json:"-"`

	// Base is the HTTP URL where the Ollama instance is listening.
	Base string `json:"-"`

	// Model is the name of the model to be used.
	Model string `json:"model"`

	// ModelAccess allows Ollama to access private AI models.
	ModelAccess OllamaModelAccess `json:"-"`

	// MaxContextLength is the maximum total number of tokens allowed to be used
	// with the underlying AI model (i.e., the maximum context window).
	// If not provided, it is obtained from Ollama for the model.
	MaxContextLength int `json:"maxContextLength"`

	// MaxResponseLength is the maximum number of tokens allowed to be used in
	// a response with the underlying AI model. If not provided, -2 is used
	// which instructs Ollama to fill the context.
	MaxResponseLength int `json:"maxResponseLength"`

	// Seed is used to control the randomness of the AI model. Default is 0.
	Seed int `json:"seed"`

	// Temperature is how creative the AI model should be.
	// Default is 0 which means not at all.
	Temperature float64 `json:"temperature"`
	// contains filtered or unexported fields
}

OllamaTextProvider is a TextProvider which provides integration with text-based Ollama AI models.
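
A minimal sketch of using OllamaTextProvider with Text; the Base URL and model name are assumptions for a local Ollama instance and should be adjusted for your setup:

package main

import (
	"context"
	"fmt"
	"log"

	"gitlab.com/tozd/go/fun"
)

func main() {
	ctx := context.Background()

	f := fun.Text[int, int]{
		Provider: &fun.OllamaTextProvider{
			// Base URL and model name are assumptions.
			Base:  "http://localhost:11434",
			Model: "llama3:8b",
		},
		Prompt: `Sum numbers together. Output only the number.`,
	}
	errE := f.Init(ctx)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}

	output, errE := f.Call(ctx, 38, 4)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}
	fmt.Println(output)
}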

func (*OllamaTextProvider) Chat

func (o *OllamaTextProvider) Chat(ctx context.Context, message ChatMessage) (string, errors.E)

Chat implements TextProvider interface.

func (*OllamaTextProvider) Init

func (o *OllamaTextProvider) Init(ctx context.Context, messages []ChatMessage) errors.E

Init implements TextProvider interface.

func (*OllamaTextProvider) InitTools added in v0.4.0

func (o *OllamaTextProvider) InitTools(ctx context.Context, tools map[string]TextTooler) errors.E

InitTools implements WithTools interface.

func (OllamaTextProvider) MarshalJSON added in v0.4.0

func (o OllamaTextProvider) MarshalJSON() ([]byte, error)

type OpenAITextProvider added in v0.4.0

type OpenAITextProvider struct {
	// Client is an HTTP client to be used for API calls. If not provided,
	// a rate-limited retryable HTTP client is initialized instead.
	Client *http.Client `json:"-"`

	// APIKey is the API key to be used for API calls.
	APIKey string `json:"-"`

	// Model is the name of the model to be used.
	Model string `json:"model"`

	// MaxContextLength is the maximum total number of tokens allowed to be used
	// with the underlying AI model (i.e., the maximum context window).
	// If not provided, heuristics are used to determine it automatically.
	MaxContextLength int `json:"maxContextLength"`

	// MaxResponseLength is the maximum number of tokens allowed to be used in
	// a response with the underlying AI model. If not provided, heuristics
	// are used to determine it automatically.
	MaxResponseLength int `json:"maxResponseLength"`

	// ForceOutputJSONSchema, when set to true, requests that the AI model
	// enforce the output JSON Schema for its output. When true, consider using
	// meaningful property names and use the "description" JSON Schema field to
	// describe to the AI model what each property is. When true, the JSON
	// Schema must have a "title" field to name the JSON Schema; consider
	// using the "description" field to describe the JSON Schema itself.
	//
	// There are currently limitations on the JSON Schema imposed by OpenAI,
	// so a JSON Schema automatically determined from the Output type fails,
	// e.g., only the "object" top-level type can be used, all properties must
	// be required, "additionalProperties" must be set to false, and top-level
	// $ref is not supported. This further means that only structs can be
	// used as Output types.
	ForceOutputJSONSchema bool `json:"forceOutputJsonSchema"`

	// Seed is used to control the randomness of the AI model. Default is 0.
	Seed int `json:"seed"`

	// Temperature is how creative the AI model should be.
	// Default is 0 which means not at all.
	Temperature float64 `json:"temperature"`
	// contains filtered or unexported fields
}

OpenAITextProvider is a TextProvider which provides integration with text-based OpenAI AI models.
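
A sketch of forcing the output JSON Schema; the result struct, the schema, and the prompt are illustrative assumptions which follow the limitations described in the field documentation above:

type result struct {
	Answer int `json:"answer"`
}

f := fun.Text[string, result]{
	Provider: &fun.OpenAITextProvider{
		APIKey:                os.Getenv("OPENAI_API_KEY"),
		Model:                 "gpt-4o-mini-2024-07-18",
		ForceOutputJSONSchema: true,
	},
	// The schema names the JSON Schema with "title", uses a top-level
	// "object" type, requires all properties, and sets
	// "additionalProperties" to false, per the limitations above.
	OutputJSONSchema: []byte(`{
		"title": "result",
		"type": "object",
		"properties": {
			"answer": {"type": "integer", "description": "The sum of the numbers."}
		},
		"additionalProperties": false,
		"required": ["answer"]
	}`),
	Prompt: `Sum numbers together.`,
}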

func (*OpenAITextProvider) Chat added in v0.4.0

func (o *OpenAITextProvider) Chat(ctx context.Context, message ChatMessage) (string, errors.E)

Chat implements TextProvider interface.

func (*OpenAITextProvider) Init added in v0.4.0

func (o *OpenAITextProvider) Init(_ context.Context, messages []ChatMessage) errors.E

Init implements TextProvider interface.

func (*OpenAITextProvider) InitOutputJSONSchema added in v0.4.0

func (o *OpenAITextProvider) InitOutputJSONSchema(_ context.Context, schema []byte) errors.E

InitOutputJSONSchema implements WithOutputJSONSchema interface.

func (*OpenAITextProvider) InitTools added in v0.4.0

func (o *OpenAITextProvider) InitTools(ctx context.Context, tools map[string]TextTooler) errors.E

InitTools implements WithTools interface.

func (OpenAITextProvider) MarshalJSON added in v0.4.0

func (o OpenAITextProvider) MarshalJSON() ([]byte, error)

type Text

type Text[Input, Output any] struct {
	// Provider is a text-based AI model.
	Provider TextProvider

	// InputJSONSchema is a JSON Schema to validate inputs against.
	// If not provided, it is automatically determined from the Input type.
	InputJSONSchema []byte

	// OutputJSONSchema is a JSON Schema to validate outputs against.
	// If not provided, it is automatically determined from the Output type.
	OutputJSONSchema []byte

	// Prompt is a natural language description of the logic.
	Prompt string

	// Data are example inputs with corresponding outputs for the function.
	Data []InputOutput[Input, Output]

	// Tools that can be called by the AI model.
	Tools map[string]TextTooler
	// contains filtered or unexported fields
}

Text implements the Callee interface with its logic defined by example data inputs and outputs, a natural language description, or both.

It uses a text-based AI model provided by a TextProvider.

For non-string Input types, it marshals them to JSON before providing them to the AI model, and for non-string Output types, it unmarshals model outputs from JSON to the Output type. For this to work, Input and Output types should have a JSON representation.

Example (Data)
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"gitlab.com/tozd/go/fun"
)

func main() {
	if os.Getenv("ANTHROPIC_API_KEY") == "" {
		fmt.Println("skipped")
		return
	}

	ctx := context.Background()

	f := fun.Text[int, int]{
		Provider: &fun.AnthropicTextProvider{
			APIKey: os.Getenv("ANTHROPIC_API_KEY"),
			Model:  "claude-3-haiku-20240307",
		},
		Data: []fun.InputOutput[int, int]{
			{[]int{1, 2}, 3},
			{[]int{10, 12}, 22},
			{[]int{3, 5}, 8},
		},
	}
	errE := f.Init(ctx)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}

	output, errE := f.Call(ctx, 38, 4)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}
	fmt.Println(output)

}
Output:

42
Example (Description)
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"gitlab.com/tozd/go/fun"
)

func main() {
	if os.Getenv("GROQ_API_KEY") == "" {
		fmt.Println("skipped")
		return
	}

	ctx := context.Background()

	f := fun.Text[int, int]{
		Provider: &fun.GroqTextProvider{
			APIKey: os.Getenv("GROQ_API_KEY"),
			Model:  "llama3-8b-8192",
			Seed:   42,
		},
		Prompt: `Sum numbers together. Output only the number.`,
	}
	errE := f.Init(ctx)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}

	output, errE := f.Call(ctx, 38, 4)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}
	fmt.Println(output)

}
Output:

42

func (*Text[Input, Output]) Call

func (t *Text[Input, Output]) Call(ctx context.Context, input ...Input) (Output, errors.E)

Call implements Callee interface.

func (*Text[Input, Output]) Init

func (t *Text[Input, Output]) Init(ctx context.Context) errors.E

Init implements Callee interface.

func (*Text[Input, Output]) Unary added in v0.4.0

func (t *Text[Input, Output]) Unary() func(ctx context.Context, input Input) (Output, errors.E)

Unary implements Callee interface.

func (*Text[Input, Output]) Variadic added in v0.4.0

func (t *Text[Input, Output]) Variadic() func(ctx context.Context, input ...Input) (Output, errors.E)

Variadic implements Callee interface.

type TextProvider

type TextProvider interface {
	// Init initializes the text provider with optional messages which are
	// prepended to the message at every Chat call. These messages can
	// include the system prompt and prior conversation with the AI model.
	Init(ctx context.Context, messages []ChatMessage) errors.E

	// Chat sends a message to the AI model and returns its response.
	Chat(ctx context.Context, message ChatMessage) (string, errors.E)
}

TextProvider is a provider for text-based LLMs.
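
A sketch of implementing a custom provider; mockProvider is a hypothetical type for testing, not part of this package:

type mockProvider struct {
	messages []fun.ChatMessage
}

func (m *mockProvider) Init(_ context.Context, messages []fun.ChatMessage) errors.E {
	// Store the messages to prepend at every Chat call.
	m.messages = messages
	return nil
}

func (m *mockProvider) Chat(_ context.Context, message fun.ChatMessage) (string, errors.E) {
	// A real provider would send m.messages plus message to an AI model API;
	// here we simply echo the content back.
	return message.Content, nil
}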

type TextRecorder added in v0.4.0

type TextRecorder struct {
	// contains filtered or unexported fields
}

TextRecorder is a recorder which records all communication with the AI model and tracks usage.

It can record multiple calls and can be used concurrently, but it is suggested that you create a new instance with WithTextRecorder for every call.

Example
if os.Getenv("ANTHROPIC_API_KEY") == "" || os.Getenv("OPENAI_API_KEY") == "" {
	fmt.Println("skipped")
	return
}

ctx := context.Background()

// We can define a tool implementation with another model.
tool := fun.Text[toolInput, float64]{
	Provider: &fun.AnthropicTextProvider{
		APIKey: os.Getenv("ANTHROPIC_API_KEY"),
		Model:  "claude-3-haiku-20240307",
	},
	InputJSONSchema:  jsonSchemaNumbers,
	OutputJSONSchema: jsonSchemaNumber,
	Prompt:           `Sum numbers together. Output only the number.`,
}
errE := tool.Init(ctx)
if errE != nil {
	log.Fatalf("% -+#.1v\n", errE)
}

f := fun.Text[int, int]{
	Provider: &fun.OpenAITextProvider{
		APIKey:            os.Getenv("OPENAI_API_KEY"),
		Model:             "gpt-4o-mini-2024-07-18",
		MaxContextLength:  128_000,
		MaxResponseLength: 16_384,
		Seed:              42,
	},
	Prompt: `Sum numbers together. Output only the number.`,
	Tools: map[string]fun.TextTooler{
		"sum_numbers": &fun.TextTool[toolInput, float64]{
			Description:      "Sums numbers together.",
			InputJSONSchema:  jsonSchemaNumbers,
			OutputJSONSchema: jsonSchemaNumber,
			// Here we provide the tool implemented with another model.
			Fun: tool.Unary(),
		},
	},
}
errE = f.Init(ctx)
if errE != nil {
	log.Fatalf("% -+#.1v\n", errE)
}

// We use the recorder to make sure the tool has really been called.
ctx = fun.WithTextRecorder(ctx)

output, errE := f.Call(ctx, 38, 4)
if errE != nil {
	log.Fatalf("% -+#.1v\n", errE)
}
fmt.Println(output)

calls := fun.GetTextRecorder(ctx).Calls()
// We change calls a bit for the example to be deterministic.
cleanCalls(calls)

callsJSON, err := json.MarshalIndent(calls, "", "  ")
if err != nil {
	log.Fatalf("%v\n", err)
}
fmt.Println(string(callsJSON))
Output:

42
[
  {
    "id": "id_1",
    "provider": {
      "type": "openai",
      "model": "gpt-4o-mini-2024-07-18",
      "maxContextLength": 128000,
      "maxResponseLength": 16384,
      "forceOutputJsonSchema": false,
      "seed": 42,
      "temperature": 0
    },
    "messages": [
      {
        "role": "system",
        "content": "Sum numbers together. Output only the number."
      },
      {
        "role": "user",
        "content": "[38,4]"
      },
      {
        "role": "tool_use",
        "content": "{\"numbers\":[38,4]}",
        "toolUseId": "call_1_2",
        "toolUseName": "sum_numbers"
      },
      {
        "role": "tool_result",
        "content": "42",
        "toolUseId": "call_1_2",
        "toolDuration": 100004.000,
        "toolCalls": [
          {
            "id": "id_2",
            "provider": {
              "type": "anthropic",
              "model": "claude-3-haiku-20240307",
              "maxContextLength": 200000,
              "maxResponseLength": 4096,
              "promptCaching": false,
              "temperature": 0
            },
            "messages": [
              {
                "role": "system",
                "content": "Sum numbers together. Output only the number."
              },
              {
                "role": "user",
                "content": "{\"numbers\":[38,4]}"
              },
              {
                "role": "assistant",
                "content": "42"
              }
            ],
            "usedTokens": {
              "req_2_0": {
                "maxTotal": 200000,
                "maxResponse": 4096,
                "prompt": 24,
                "response": 5,
                "total": 29
              }
            },
            "usedTime": {
              "req_2_0": {
                "apiCall": 1.000
              }
            },
            "duration": 2.000
          }
        ]
      },
      {
        "role": "assistant",
        "content": "42"
      }
    ],
    "usedTokens": {
      "req_1_0": {
        "maxTotal": 128000,
        "maxResponse": 16384,
        "prompt": 57,
        "response": 16,
        "total": 73
      },
      "req_1_1": {
        "maxTotal": 128000,
        "maxResponse": 16384,
        "prompt": 82,
        "response": 2,
        "total": 84
      }
    },
    "usedTime": {
      "req_1_0": {
        "apiCall": 1.000
      },
      "req_1_1": {
        "apiCall": 2.000
      }
    },
    "duration": 1.000
  }
]

func GetTextRecorder added in v0.4.0

func GetTextRecorder(ctx context.Context) *TextRecorder

GetTextRecorder returns the instance of TextRecorder stored in the context, if any.

func (*TextRecorder) Calls added in v0.4.0

func (t *TextRecorder) Calls() []TextRecorderCall

Calls returns call records recorded by this recorder.

It returns only completed calls. If you need access to calls while they are being recorded, use Notify.

In most cases this will be just one call record, unless you are reusing the same context across multiple calls.

func (*TextRecorder) Notify added in v0.6.0

func (t *TextRecorder) Notify(c chan<- []TextRecorderCall)

Notify sets a channel which is used to send all recorded calls (in their current, possibly incomplete, state) any time any of them changes.

Example
if os.Getenv("ANTHROPIC_API_KEY") == "" || os.Getenv("OPENAI_API_KEY") == "" {
	fmt.Println("skipped")
	return
}

ctx := context.Background()

// We can define a tool implementation with another model.
tool := fun.Text[toolInput, float64]{
	Provider: &fun.AnthropicTextProvider{
		APIKey: os.Getenv("ANTHROPIC_API_KEY"),
		Model:  "claude-3-haiku-20240307",
	},
	InputJSONSchema:  jsonSchemaNumbers,
	OutputJSONSchema: jsonSchemaNumber,
	Prompt:           `Sum numbers together. Output only the number.`,
}
errE := tool.Init(ctx)
if errE != nil {
	log.Fatalf("% -+#.1v\n", errE)
}

f := fun.Text[int, int]{
	Provider: &fun.OpenAITextProvider{
		APIKey:            os.Getenv("OPENAI_API_KEY"),
		Model:             "gpt-4o-mini-2024-07-18",
		MaxContextLength:  128_000,
		MaxResponseLength: 16_384,
		Seed:              42,
	},
	Prompt: `Sum numbers together. Output only the number.`,
	Tools: map[string]fun.TextTooler{
		"sum_numbers": &fun.TextTool[toolInput, float64]{
			Description:      "Sums numbers together.",
			InputJSONSchema:  jsonSchemaNumbers,
			OutputJSONSchema: jsonSchemaNumber,
			// Here we provide the tool implemented with another model.
			Fun: tool.Unary(),
		},
	},
}
errE = f.Init(ctx)
if errE != nil {
	log.Fatalf("% -+#.1v\n", errE)
}

// We use the recorder to make sure the tool has really been called.
ctx = fun.WithTextRecorder(ctx)

// We want to be notified as soon as a message is received or sent.
c := make(chan []fun.TextRecorderCall)
fun.GetTextRecorder(ctx).Notify(c)

var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	for n := range c {
		// We change calls a bit for the example to be deterministic.
		cleanCalls(n)

		nJSON, err := json.MarshalIndent(n, "", "  ")
		if err != nil {
			log.Fatalf("%v\n", err)
		}
		fmt.Println(string(nJSON))
	}
}()

output, errE := f.Call(ctx, 38, 4)
if errE != nil {
	log.Fatalf("% -+#.1v\n", errE)
}

close(c)
wg.Wait()

fmt.Println(output)
Output:

[
  {
    "id": "id_1",
    "provider": {
      "type": "openai",
      "model": "gpt-4o-mini-2024-07-18",
      "maxContextLength": 128000,
      "maxResponseLength": 16384,
      "forceOutputJsonSchema": false,
      "seed": 42,
      "temperature": 0
    },
    "messages": [
      {
        "role": "system",
        "content": "Sum numbers together. Output only the number."
      },
      {
        "role": "user",
        "content": "[38,4]"
      }
    ],
    "duration": 1.000
  }
]
[
  {
    "id": "id_1",
    "provider": {
      "type": "openai",
      "model": "gpt-4o-mini-2024-07-18",
      "maxContextLength": 128000,
      "maxResponseLength": 16384,
      "forceOutputJsonSchema": false,
      "seed": 42,
      "temperature": 0
    },
    "messages": [
      {
        "role": "system",
        "content": "Sum numbers together. Output only the number."
      },
      {
        "role": "user",
        "content": "[38,4]"
      },
      {
        "role": "tool_use",
        "content": "{\"numbers\":[38,4]}",
        "toolUseId": "call_1_2",
        "toolUseName": "sum_numbers"
      }
    ],
    "usedTokens": {
      "req_1_0": {
        "maxTotal": 128000,
        "maxResponse": 16384,
        "prompt": 57,
        "response": 16,
        "total": 73
      }
    },
    "usedTime": {
      "req_1_0": {
        "apiCall": 1.000
      }
    },
    "duration": 1.000
  }
]
[
  {
    "id": "id_1",
    "provider": {
      "type": "openai",
      "model": "gpt-4o-mini-2024-07-18",
      "maxContextLength": 128000,
      "maxResponseLength": 16384,
      "forceOutputJsonSchema": false,
      "seed": 42,
      "temperature": 0
    },
    "messages": [
      {
        "role": "system",
        "content": "Sum numbers together. Output only the number."
      },
      {
        "role": "user",
        "content": "[38,4]"
      },
      {
        "role": "tool_use",
        "content": "{\"numbers\":[38,4]}",
        "toolUseId": "call_1_2",
        "toolUseName": "sum_numbers"
      },
      {
        "role": "tool_result",
        "toolUseId": "call_1_2",
        "toolDuration": 100004.000,
        "toolCalls": [
          {
            "id": "id_2",
            "provider": {
              "type": "anthropic",
              "model": "claude-3-haiku-20240307",
              "maxContextLength": 200000,
              "maxResponseLength": 4096,
              "promptCaching": false,
              "temperature": 0
            },
            "messages": [
              {
                "role": "system",
                "content": "Sum numbers together. Output only the number."
              },
              {
                "role": "user",
                "content": "{\"numbers\":[38,4]}"
              }
            ],
            "duration": 2.000
          }
        ]
      }
    ],
    "usedTokens": {
      "req_1_0": {
        "maxTotal": 128000,
        "maxResponse": 16384,
        "prompt": 57,
        "response": 16,
        "total": 73
      }
    },
    "usedTime": {
      "req_1_0": {
        "apiCall": 1.000
      }
    },
    "duration": 1.000
  }
]
[
  {
    "id": "id_1",
    "provider": {
      "type": "openai",
      "model": "gpt-4o-mini-2024-07-18",
      "maxContextLength": 128000,
      "maxResponseLength": 16384,
      "forceOutputJsonSchema": false,
      "seed": 42,
      "temperature": 0
    },
    "messages": [
      {
        "role": "system",
        "content": "Sum numbers together. Output only the number."
      },
      {
        "role": "user",
        "content": "[38,4]"
      },
      {
        "role": "tool_use",
        "content": "{\"numbers\":[38,4]}",
        "toolUseId": "call_1_2",
        "toolUseName": "sum_numbers"
      },
      {
        "role": "tool_result",
        "toolUseId": "call_1_2",
        "toolDuration": 100004.000,
        "toolCalls": [
          {
            "id": "id_2",
            "provider": {
              "type": "anthropic",
              "model": "claude-3-haiku-20240307",
              "maxContextLength": 200000,
              "maxResponseLength": 4096,
              "promptCaching": false,
              "temperature": 0
            },
            "messages": [
              {
                "role": "system",
                "content": "Sum numbers together. Output only the number."
              },
              {
                "role": "user",
                "content": "{\"numbers\":[38,4]}"
              },
              {
                "role": "assistant",
                "content": "42"
              }
            ],
            "usedTokens": {
              "req_2_0": {
                "maxTotal": 200000,
                "maxResponse": 4096,
                "prompt": 24,
                "response": 5,
                "total": 29
              }
            },
            "usedTime": {
              "req_2_0": {
                "apiCall": 1.000
              }
            },
            "duration": 2.000
          }
        ]
      }
    ],
    "usedTokens": {
      "req_1_0": {
        "maxTotal": 128000,
        "maxResponse": 16384,
        "prompt": 57,
        "response": 16,
        "total": 73
      }
    },
    "usedTime": {
      "req_1_0": {
        "apiCall": 1.000
      }
    },
    "duration": 1.000
  }
]
[
  {
    "id": "id_1",
    "provider": {
      "type": "openai",
      "model": "gpt-4o-mini-2024-07-18",
      "maxContextLength": 128000,
      "maxResponseLength": 16384,
      "forceOutputJsonSchema": false,
      "seed": 42,
      "temperature": 0
    },
    "messages": [
      {
        "role": "system",
        "content": "Sum numbers together. Output only the number."
      },
      {
        "role": "user",
        "content": "[38,4]"
      },
      {
        "role": "tool_use",
        "content": "{\"numbers\":[38,4]}",
        "toolUseId": "call_1_2",
        "toolUseName": "sum_numbers"
      },
      {
        "role": "tool_result",
        "content": "42",
        "toolUseId": "call_1_2",
        "toolDuration": 100004.000,
        "toolCalls": [
          {
            "id": "id_2",
            "provider": {
              "type": "anthropic",
              "model": "claude-3-haiku-20240307",
              "maxContextLength": 200000,
              "maxResponseLength": 4096,
              "promptCaching": false,
              "temperature": 0
            },
            "messages": [
              {
                "role": "system",
                "content": "Sum numbers together. Output only the number."
              },
              {
                "role": "user",
                "content": "{\"numbers\":[38,4]}"
              },
              {
                "role": "assistant",
                "content": "42"
              }
            ],
            "usedTokens": {
              "req_2_0": {
                "maxTotal": 200000,
                "maxResponse": 4096,
                "prompt": 24,
                "response": 5,
                "total": 29
              }
            },
            "usedTime": {
              "req_2_0": {
                "apiCall": 1.000
              }
            },
            "duration": 2.000
          }
        ]
      }
    ],
    "usedTokens": {
      "req_1_0": {
        "maxTotal": 128000,
        "maxResponse": 16384,
        "prompt": 57,
        "response": 16,
        "total": 73
      }
    },
    "usedTime": {
      "req_1_0": {
        "apiCall": 1.000
      }
    },
    "duration": 1.000
  }
]
[
  {
    "id": "id_1",
    "provider": {
      "type": "openai",
      "model": "gpt-4o-mini-2024-07-18",
      "maxContextLength": 128000,
      "maxResponseLength": 16384,
      "forceOutputJsonSchema": false,
      "seed": 42,
      "temperature": 0
    },
    "messages": [
      {
        "role": "system",
        "content": "Sum numbers together. Output only the number."
      },
      {
        "role": "user",
        "content": "[38,4]"
      },
      {
        "role": "tool_use",
        "content": "{\"numbers\":[38,4]}",
        "toolUseId": "call_1_2",
        "toolUseName": "sum_numbers"
      },
      {
        "role": "tool_result",
        "content": "42",
        "toolUseId": "call_1_2",
        "toolDuration": 100004.000,
        "toolCalls": [
          {
            "id": "id_2",
            "provider": {
              "type": "anthropic",
              "model": "claude-3-haiku-20240307",
              "maxContextLength": 200000,
              "maxResponseLength": 4096,
              "promptCaching": false,
              "temperature": 0
            },
            "messages": [
              {
                "role": "system",
                "content": "Sum numbers together. Output only the number."
              },
              {
                "role": "user",
                "content": "{\"numbers\":[38,4]}"
              },
              {
                "role": "assistant",
                "content": "42"
              }
            ],
            "usedTokens": {
              "req_2_0": {
                "maxTotal": 200000,
                "maxResponse": 4096,
                "prompt": 24,
                "response": 5,
                "total": 29
              }
            },
            "usedTime": {
              "req_2_0": {
                "apiCall": 1.000
              }
            },
            "duration": 2.000
          }
        ]
      },
      {
        "role": "assistant",
        "content": "42"
      }
    ],
    "usedTokens": {
      "req_1_0": {
        "maxTotal": 128000,
        "maxResponse": 16384,
        "prompt": 57,
        "response": 16,
        "total": 73
      },
      "req_1_1": {
        "maxTotal": 128000,
        "maxResponse": 16384,
        "prompt": 82,
        "response": 2,
        "total": 84
      }
    },
    "usedTime": {
      "req_1_0": {
        "apiCall": 1.000
      },
      "req_1_1": {
        "apiCall": 2.000
      }
    },
    "duration": 1.000
  }
]
42

type TextRecorderCall added in v0.4.0

type TextRecorderCall struct {

	// ID is a random ID assigned to this call so that it is
	// possible to correlate the call with logging.
	ID string `json:"id"`

	// Provider for this call.
	Provider TextProvider `json:"provider"`

	// Messages sent to and received from the AI model. Note that
	// these messages might have been sent and received multiple times
	// in multiple requests made (e.g., when using tools).
	Messages []TextRecorderMessage `json:"messages,omitempty"`

	// UsedTokens for each request made to the AI model.
	UsedTokens map[string]TextRecorderUsedTokens `json:"usedTokens,omitempty"`

	// UsedTime for each request made to the AI model.
	UsedTime map[string]TextRecorderUsedTime `json:"usedTime,omitempty"`

	// Duration is end-to-end duration of this call.
	Duration Duration `json:"duration,omitempty"`
	// contains filtered or unexported fields
}

TextRecorderCall describes a call to an AI model.

There might be multiple requests made to an AI model during a call (e.g., when using tools).

type TextRecorderMessage added in v0.4.0

type TextRecorderMessage struct {

	// Role of the message. Possible values are "system",
	// "assistant", "user", "tool_use", and "tool_result".
	Role string `json:"role"`

	// Content is textual content of the message.
	Content *string `json:"content,omitempty"`

	// ToolUseID is the ID of the tool use to correlate
	// "tool_use" and "tool_result" messages.
	ToolUseID string `json:"toolUseId,omitempty"`

	// ToolUseName is the name of the tool to use.
	ToolUseName string `json:"toolUseName,omitempty"`

	// ToolDuration is the duration of the tool call.
	ToolDuration Duration `json:"toolDuration,omitempty"`

	// ToolCalls contains any recursive calls recorded while running the tool.
	ToolCalls []TextRecorderCall `json:"toolCalls,omitempty"`

	// IsError is true if there was an error during tool execution.
	// In this case, Content is the error message returned to the AI model.
	IsError bool `json:"isError,omitempty"`

	// IsRefusal is true if the AI model refused to respond.
	// In this case, Content is the explanation of the refusal.
	IsRefusal bool `json:"isRefusal,omitempty"`
	// contains filtered or unexported fields
}

TextRecorderMessage describes one message sent to or received from the AI model.

type TextRecorderUsedTime added in v0.4.0

type TextRecorderUsedTime struct {
	// Prompt is time taken by processing the prompt.
	Prompt Duration `json:"prompt,omitempty"`

	// Response is time taken by formulating the response.
	Response Duration `json:"response,omitempty"`

	// Total is the sum of Prompt and Response.
	Total Duration `json:"total,omitempty"`

	// APICall is end-to-end duration of the API call request.
	APICall Duration `json:"apiCall"`
}

TextRecorderUsedTime describes time taken by a request to an AI model.

type TextRecorderUsedTokens added in v0.4.0

type TextRecorderUsedTokens struct {
	// MaxTotal is the maximum total number of tokens allowed to be used
	// with the underlying AI model (i.e., the maximum context window).
	MaxTotal int `json:"maxTotal"`

	// MaxResponse is the maximum number of tokens allowed to be used in
	// a response with the underlying AI model.
	MaxResponse int `json:"maxResponse"`

	// Prompt is the number of tokens used by the prompt (including the system
	// prompt and all example inputs with corresponding outputs).
	Prompt int `json:"prompt"`

	// Response is the number of tokens used by the response.
	Response int `json:"response"`

	// Total is the sum of Prompt and Response.
	Total int `json:"total"`

	// CacheCreationInputTokens is the number of tokens written
	// to the cache when creating a new entry.
	CacheCreationInputTokens *int `json:"cacheCreationInputTokens,omitempty"`

	// CacheReadInputTokens is the number of tokens retrieved
	// from the cache for this request.
	CacheReadInputTokens *int `json:"cacheReadInputTokens,omitempty"`
}

TextRecorderUsedTokens describes number of tokens used by a request to an AI model.

type TextTool added in v0.4.0

type TextTool[Input, Output any] struct {
	// Description is a natural language description of the tool which helps
	// an AI model understand when to use this tool.
	Description string

	// InputJSONSchema is the JSON Schema for parameters passed by an AI model
	// to the tool. Consider using meaningful property names and use "description"
	// JSON Schema field to describe to the AI model what each property is.
	//
	// Depending on the provider and the model there are limitations on the JSON Schema
	// (e.g., only "object" top-level type can be used, all properties must be required,
	// "additionalProperties" must be set to false).
	//
	// It should correspond to the Input type parameter. If not provided, it is
	// automatically determined from the Input type, but the resulting JSON Schema
	// might not be supported by the provider or the model.
	InputJSONSchema []byte

	// OutputJSONSchema is the JSON Schema for the tool's output. It is used to validate
	// the output from the tool before it is passed on to the AI model.
	//
	// It should correspond to the Output type parameter. If not provided, it is
	// automatically determined from the Output type.
	OutputJSONSchema []byte

	// Fun implements the logic of the tool.
	Fun func(ctx context.Context, input Input) (Output, errors.E)
	// contains filtered or unexported fields
}

TextTool defines a tool which can be called by AI models through Text.

For non-string Input types, it marshals them to JSON before providing them to the AI model, and for non-string Output types, it unmarshals model outputs from JSON to the Output type. For this to work, Input and Output types should have a JSON representation.

Example
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"os"

	"gitlab.com/tozd/go/errors"

	"gitlab.com/tozd/go/fun"
)

var (
	// It has to be an object and not just an array of numbers.
	// This is a current limitation of AI providers.
	jsonSchemaNumbers = []byte(`
		{
			"type": "object",
			"properties": {
				"numbers": {"type": "array", "items": {"type": "number"}}
			},
			"additionalProperties": false,
			"required": [
				"numbers"
			]
		}
	`)
	jsonSchemaNumber = []byte(`{"type": "integer"}`)
)

type toolInput struct {
	Numbers []float64 `json:"numbers"`
}

func main() {
	if os.Getenv("OPENAI_API_KEY") == "" {
		fmt.Println("skipped")
		return
	}

	ctx := context.Background()

	f := fun.Text[int, int]{
		Provider: &fun.OpenAITextProvider{
			APIKey:            os.Getenv("OPENAI_API_KEY"),
			Model:             "gpt-4o-mini-2024-07-18",
			MaxContextLength:  128_000,
			MaxResponseLength: 16_384,
			Seed:              42,
		},
		Prompt: `Sum numbers together. Output only the number.`,
		Tools: map[string]fun.TextTooler{
			"sum_numbers": &fun.TextTool[toolInput, float64]{
				Description:      "Sums numbers together.",
				InputJSONSchema:  jsonSchemaNumbers,
				OutputJSONSchema: jsonSchemaNumber,
				Fun: func(_ context.Context, input toolInput) (float64, errors.E) {
					res := 0.0
					for _, n := range input.Numbers {
						res += n
					}
					return res, nil
				},
			},
		},
	}
	errE := f.Init(ctx)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}

	// We use the recorder to make sure the tool has really been called.
	ctx = fun.WithTextRecorder(ctx)

	output, errE := f.Call(ctx, 38, 4)
	if errE != nil {
		log.Fatalf("% -+#.1v\n", errE)
	}
	fmt.Println(output)

	calls := fun.GetTextRecorder(ctx).Calls()
	// We change calls a bit for the example to be deterministic.
	cleanCalls(calls)

	messages, err := json.MarshalIndent(calls[0].Messages, "", "  ")
	if err != nil {
		log.Fatalf("%v\n", err)
	}
	fmt.Println(string(messages))

}
Output:

42
[
  {
    "role": "system",
    "content": "Sum numbers together. Output only the number."
  },
  {
    "role": "user",
    "content": "[38,4]"
  },
  {
    "role": "tool_use",
    "content": "{\"numbers\":[38,4]}",
    "toolUseId": "call_1_2",
    "toolUseName": "sum_numbers"
  },
  {
    "role": "tool_result",
    "content": "42",
    "toolUseId": "call_1_2",
    "toolDuration": 100004.000
  },
  {
    "role": "assistant",
    "content": "42"
  }
]

func (*TextTool[Input, Output]) Call added in v0.4.0

func (t *TextTool[Input, Output]) Call(ctx context.Context, input ...json.RawMessage) (string, errors.E)

Call takes the raw JSON input from an AI model and converts it to a value of the Input type, calls Fun, and converts the output to a string to be passed back to the AI model as the result of the tool call.

Call also validates that inputs and outputs match respective JSON Schemas.

Call implements Callee interface.

func (*TextTool[Input, Output]) GetDescription added in v0.4.0

func (t *TextTool[Input, Output]) GetDescription() string

GetDescription implements TextTooler interface.

func (*TextTool[Input, Output]) GetInputJSONSchema added in v0.4.0

func (t *TextTool[Input, Output]) GetInputJSONSchema() []byte

GetInputJSONSchema implements TextTooler interface.

func (*TextTool[Input, Output]) Init added in v0.4.0

func (t *TextTool[Input, Output]) Init(_ context.Context) errors.E

Init implements Callee interface.

func (*TextTool[Input, Output]) Unary added in v0.4.0

func (t *TextTool[Input, Output]) Unary() func(ctx context.Context, input json.RawMessage) (string, errors.E)

Unary implements Callee interface.

func (*TextTool[Input, Output]) Variadic added in v0.4.0

func (t *TextTool[Input, Output]) Variadic() func(ctx context.Context, input ...json.RawMessage) (string, errors.E)

Variadic implements Callee interface.

type TextTooler added in v0.4.0

type TextTooler interface {
	Callee[json.RawMessage, string]

	// GetDescription returns a natural language description of the tool which helps
	// an AI model understand when to use this tool.
	GetDescription() string

	// GetInputJSONSchema returns the JSON Schema for parameters passed by an AI model
	// to the tool. Consider using meaningful property names and use "description"
	// JSON Schema field to describe to the AI model what each property is.
	//
	// Depending on the provider and the model there are limitations on the JSON Schema
	// (e.g., only "object" top-level type can be used, all properties must be required,
	// "additionalProperties" must be set to false).
	GetInputJSONSchema() []byte
}

TextTooler extends Callee interface with additional methods needed to define a tool which can be called by AI models through Text.

type WithOutputJSONSchema added in v0.4.0

type WithOutputJSONSchema interface {
	// InitOutputJSONSchema provides the JSON Schema the provider
	// should request the AI model to force for its output.
	InitOutputJSONSchema(ctx context.Context, schema []byte) errors.E
}

WithOutputJSONSchema is a TextProvider which supports forcing JSON Schema for its output.

type WithTools added in v0.4.0

type WithTools interface {
	// InitTools initializes the tool with available tools.
	InitTools(ctx context.Context, tools map[string]TextTooler) errors.E
}

WithTools is a TextProvider which supports tools.

Directories

Path Synopsis
cmd
fun
