ai

package
v1.19.3
Published: Jan 29, 2025 License: Apache-2.0 Imports: 33 Imported by: 1

Documentation

Overview

Package ai provides LLM Function Calling features.

Index

Constants

View Source
const (
	// DefaultZipperAddr is the default endpoint of the zipper
	DefaultZipperAddr = "localhost:9000"
	// RequestTimeout is the timeout for the request, default is 90 seconds
	RequestTimeout = 90 * time.Second
	// RunFunctionTimeout is the timeout for awaiting the function response, default is 60 seconds
	RunFunctionTimeout = 60 * time.Second
)

Variables

View Source
var (
	// ErrConfigNotFound is the error when the ai config was not found
	ErrConfigNotFound = errors.New("ai config was not found")
	// ErrConfigFormatError is the error when the ai config format is incorrect
	ErrConfigFormatError = errors.New("ai config format is incorrect")
)

Functions

func DecodeRequest added in v1.18.11

func DecodeRequest[T any](r *http.Request, w http.ResponseWriter, logger *slog.Logger) (T, error)

DecodeRequest decodes the request body into the given type.
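A minimal handler sketch using DecodeRequest. The import path and the invokeRequest type are assumptions for illustration; since DecodeRequest receives the ResponseWriter, the sketch assumes it writes the error response itself on failure:

package main

import (
	"log/slog"
	"net/http"

	ai "github.com/yomorun/yomo/pkg/bridge/ai" // assumed import path
)

// invokeRequest is a hypothetical request body for this sketch.
type invokeRequest struct {
	Prompt string `json:"prompt"`
}

func handleInvoke(w http.ResponseWriter, r *http.Request) {
	logger := slog.Default()
	// Decode the JSON body into invokeRequest. On failure the helper
	// presumably responds with the error itself, so we just return.
	req, err := ai.DecodeRequest[invokeRequest](r, w, logger)
	if err != nil {
		return
	}
	logger.Info("invoke", "prompt", req.Prompt)
}

func main() {
	http.HandleFunc("/invoke", handleInvoke)
	_ = http.ListenAndServe(":8000", nil)
}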

func DecorateHandler added in v1.18.14

func DecorateHandler(h http.Handler, decorates ...func(handler http.Handler) http.Handler) http.Handler

DecorateHandler decorates the http.Handler.
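A sketch of composing middleware around the mux returned by NewServeMux. The withLogging decorator and the import path are illustrative, not part of the package:

package main

import (
	"log/slog"
	"net/http"

	ai "github.com/yomorun/yomo/pkg/bridge/ai" // assumed import path
)

// withLogging is an illustrative middleware that logs each request.
func withLogging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		slog.Info("request", "method", r.Method, "path", r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	var service *ai.Service // built with ai.NewService elsewhere
	mux := ai.NewServeMux(service)
	// Wrap the mux with the logging middleware.
	handler := ai.DecorateHandler(mux, withLogging)
	_ = http.ListenAndServe(":8000", handler)
}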

func FromTracerContext added in v1.18.18

func FromTracerContext(ctx context.Context) trace.Tracer

FromTracerContext returns the tracer from the request context

func FromTransIDContext added in v1.18.8

func FromTransIDContext(ctx context.Context) string

FromTransIDContext returns the transID from the request context

func NewReducer added in v1.19.2

func NewReducer(conn *mem.FrameConn, cred *auth.Credential) yomo.StreamFunction

NewReducer creates a new instance of memory StreamFunction.

func NewServeMux added in v1.18.14

func NewServeMux(service *Service) *http.ServeMux

NewServeMux creates a new http.ServeMux for the llm bridge server.

func NewSource added in v1.19.2

func NewSource(conn *mem.FrameConn, cred *auth.Credential) yomo.Source

NewSource creates a new instance of memory Source.

func RegisterFunctionMW added in v1.18.11

func RegisterFunctionMW() core.ConnMiddleware

RegisterFunctionMW returns a ConnMiddleware that can be used to register an ai function.

func RespondWithError added in v1.18.7

func RespondWithError(w http.ResponseWriter, code int, err error, logger *slog.Logger)

RespondWithError writes an error to response according to the OpenAI API spec.
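A short sketch of the usual call site, inside an http.HandlerFunc (validate and logger are hypothetical names):

if err := validate(r); err != nil { // validate is a hypothetical check
	ai.RespondWithError(w, http.StatusBadRequest, err, logger)
	return
}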

func Serve

func Serve(config *Config, logger *slog.Logger, source yomo.Source, reducer yomo.StreamFunction) error

Serve starts the Basic API Server
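A minimal sketch of starting the server. The import paths and the yomo constructor names are assumptions; source and reducer construction is application-specific:

package main

import (
	"log/slog"

	"github.com/yomorun/yomo"
	ai "github.com/yomorun/yomo/pkg/bridge/ai" // assumed import path
)

func main() {
	logger := slog.Default()

	config, err := ai.ParseConfig(loadRawConfig())
	if err != nil {
		logger.Error("parse config", "err", err)
		return
	}

	// The yomo constructor names below are assumptions.
	source := yomo.NewSource("ai-source", ai.DefaultZipperAddr)
	reducer := yomo.NewStreamFunction("ai-reducer", ai.DefaultZipperAddr)

	if err := ai.Serve(config, logger, source, reducer); err != nil {
		logger.Error("serve", "err", err)
	}
}

// loadRawConfig stands in for reading the bridge.ai section of the yaml
// configuration file.
func loadRawConfig() map[string]any {
	return map[string]any{
		"server": map[string]any{"addr": "localhost:8000", "provider": "openai"},
	}
}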

func WithCallerContext added in v1.18.11

func WithCallerContext(ctx context.Context, caller *Caller) context.Context

WithCallerContext adds the caller to the request context

func WithTracerContext added in v1.18.18

func WithTracerContext(ctx context.Context, tracer trace.Tracer) context.Context

WithTracerContext adds the tracer to the request context

func WithTransIDContext added in v1.18.8

func WithTransIDContext(ctx context.Context, transID string) context.Context

WithTransIDContext adds the transID to the request context
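A sketch of the With*/From* round trip through a request context, based only on the signatures above (the import path is an assumption; the tracer comes from the OpenTelemetry no-op provider):

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/trace/noop"

	ai "github.com/yomorun/yomo/pkg/bridge/ai" // assumed import path
)

func main() {
	ctx := context.Background()
	ctx = ai.WithTransIDContext(ctx, "trans-12345")
	ctx = ai.WithTracerContext(ctx, noop.NewTracerProvider().Tracer("bridge"))

	// Downstream code reads the values back out of the same context.
	fmt.Println(ai.FromTransIDContext(ctx)) // trans-12345
	_, span := ai.FromTracerContext(ctx).Start(ctx, "llm-call")
	span.End()
}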

Types

type BasicAPIServer

type BasicAPIServer struct {
	// contains filtered or unexported fields
}

BasicAPIServer provides a RESTful service for end users

func NewBasicAPIServer

func NewBasicAPIServer(config *Config, provider provider.LLMProvider, source yomo.Source, reducer yomo.StreamFunction, logger *slog.Logger) (*BasicAPIServer, error)

NewBasicAPIServer creates a new RESTful service

type CallSyncer added in v1.18.11

type CallSyncer interface {
	// Call fires a batch of function calls and waits for all of their results.
	// The result only contains the messages with role=="tool".
	// If a function call fails, the message content carries the failure reason.
	Call(ctx context.Context, transID string, reqID string, toolCalls map[uint32][]*openai.ToolCall) ([]openai.ChatCompletionMessage, error)
	// Close closes the CallSyncer. After closing, the CallSyncer can no longer be used.
	Close() error
}

CallSyncer fires a batch of function calls and waits for their results. Every tool call has a toolCallID that identifies it; note that each tool call can be responded to only once.

func NewCallSyncer added in v1.18.11

func NewCallSyncer(logger *slog.Logger, sourceCh chan<- TagFunctionCall, reduceCh <-chan ReduceMessage, timeout time.Duration) CallSyncer

NewCallSyncer creates a new CallSyncer.
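A wiring sketch: the two channels bridge the syncer to the yomo source (outgoing tool calls) and the reducer (incoming results). The import paths are assumptions, and the goroutines that service the channels are application-specific and only hinted at here, so this program would time out as written:

package main

import (
	"context"
	"log/slog"

	openai "github.com/sashabaranov/go-openai" // assumed: the openai types used above
	ai "github.com/yomorun/yomo/pkg/bridge/ai" // assumed import path
)

func main() {
	sourceCh := make(chan ai.TagFunctionCall)
	reduceCh := make(chan ai.ReduceMessage)

	syncer := ai.NewCallSyncer(slog.Default(), sourceCh, reduceCh, ai.RunFunctionTimeout)
	defer syncer.Close()

	// go forwardToSource(sourceCh)  // hypothetical: writes calls to the yomo source
	// go feedFromReducer(reduceCh)  // hypothetical: feeds reducer output back

	// Tool calls are grouped by the tag that routes them to a function.
	toolCalls := map[uint32][]*openai.ToolCall{
		0x33: {{ID: "call_1", Function: openai.FunctionCall{Name: "get_weather", Arguments: `{"city":"Paris"}`}}},
	}

	// Call blocks until every tool call is answered or the timeout fires;
	// the returned messages all have role=="tool".
	msgs, err := syncer.Call(context.Background(), "trans-1", "req-1", toolCalls)
	if err != nil {
		slog.Error("call failed", "err", err)
		return
	}
	slog.Info("tool results", "count", len(msgs))
}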

type Caller added in v1.18.11

type Caller struct {
	CallSyncer
	// contains filtered or unexported fields
}

Caller calls the invoke function and keeps the metadata and system prompt.

func FromCallerContext added in v1.18.11

func FromCallerContext(ctx context.Context) *Caller

FromCallerContext returns the caller from the request context

func NewCaller added in v1.18.11

func NewCaller(source yomo.Source, reducer yomo.StreamFunction, md metadata.M, callTimeout time.Duration) (*Caller, error)

NewCaller returns a new caller.

func (*Caller) Close added in v1.18.13

func (c *Caller) Close() error

Close closes the caller.

func (*Caller) GetSystemPrompt added in v1.18.13

func (c *Caller) GetSystemPrompt() (prompt string, op SystemPromptOp)

GetSystemPrompt gets the system prompt

func (*Caller) Metadata added in v1.18.11

func (c *Caller) Metadata() metadata.M

Metadata returns the metadata of caller.

func (*Caller) SetSystemPrompt added in v1.18.11

func (c *Caller) SetSystemPrompt(prompt string, op SystemPromptOp)

SetSystemPrompt sets the system prompt

type Config

type Config struct {
	Server    Server              `yaml:"server"`    // Server is the configuration of the BasicAPIServer
	Providers map[string]Provider `yaml:"providers"` // Providers is the configuration of llm provider
}

Config is the configuration of AI bridge. The configuration looks like:

bridge:
  ai:
    server:
      host: http://localhost
      port: 8000
      credential: token:<CREDENTIAL>
      provider: openai
    providers:
      azopenai:
        api_endpoint: https://<RESOURCE>.openai.azure.com
        deployment_id: <DEPLOYMENT_ID>
        api_key: <API_KEY>
        api_version: <API_VERSION>
      openai:
        api_key:
        api_endpoint:
      gemini:
        api_key:
      cloudflare_azure:
        endpoint: https://gateway.ai.cloudflare.com/v1/<CF_GATEWAY_ID>/<CF_GATEWAY_NAME>
        api_key: <AZURE_API_KEY>
        resource: <AZURE_OPENAI_RESOURCE>
        deployment_id: <AZURE_OPENAI_DEPLOYMENT_ID>
        api_version: <AZURE_OPENAI_API_VERSION>

func ParseConfig

func ParseConfig(conf map[string]any) (config *Config, err error)

ParseConfig parses the AI config from conf
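A sketch of parsing the ai section (already unmarshaled into a map; whether the map is the section itself or its parent is an assumption here), distinguishing the two sentinel errors declared above:

package main

import (
	"errors"
	"fmt"

	ai "github.com/yomorun/yomo/pkg/bridge/ai" // assumed import path
)

func main() {
	raw := map[string]any{
		"server": map[string]any{
			"addr":     "localhost:8000",
			"provider": "openai",
		},
		"providers": map[string]any{
			"openai": map[string]any{"api_key": "<API_KEY>"},
		},
	}

	config, err := ai.ParseConfig(raw)
	switch {
	case errors.Is(err, ai.ErrConfigNotFound):
		fmt.Println("no ai section in the config")
	case errors.Is(err, ai.ErrConfigFormatError):
		fmt.Println("ai section is malformed")
	case err != nil:
		fmt.Println("parse error:", err)
	default:
		fmt.Println("serve with provider:", config.Server.Provider)
	}
}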

type Handler added in v1.18.13

type Handler struct {
	// contains filtered or unexported fields
}

Handler handles the http request.

func (*Handler) HandleChatCompletions added in v1.18.13

func (h *Handler) HandleChatCompletions(w http.ResponseWriter, r *http.Request)

HandleChatCompletions is the handler for POST /chat/completions

func (*Handler) HandleInvoke added in v1.18.13

func (h *Handler) HandleInvoke(w http.ResponseWriter, r *http.Request)

HandleInvoke is the handler for POST /invoke

func (*Handler) HandleOverview added in v1.18.13

func (h *Handler) HandleOverview(w http.ResponseWriter, r *http.Request)

HandleOverview is the handler for GET /overview

type Provider

type Provider = map[string]string

Provider is the configuration of llm provider

type ReduceMessage added in v1.18.13

type ReduceMessage struct {
	// ReqID identifies the message.
	ReqID string
	// Message is the message.
	Message openai.ChatCompletionMessage
}

ReduceMessage is the message from the reducer.

type ResponseWriter added in v1.18.18

type ResponseWriter struct {
	IsStream bool
	Err      error
	TTFT     time.Time
	// contains filtered or unexported fields
}

ResponseWriter is a wrapper for http.ResponseWriter. It is used to record TTFT (time to first token) and Err alongside the response.

func NewResponseWriter added in v1.18.18

func NewResponseWriter(w http.ResponseWriter) *ResponseWriter

NewResponseWriter returns a new ResponseWriter.
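A streaming sketch built from the methods documented below (the import path is an assumption, and the event payload is illustrative; in the bridge it would be an OpenAI-style completion chunk):

package main

import (
	"net/http"

	ai "github.com/yomorun/yomo/pkg/bridge/ai" // assumed import path
)

func streamHandler(w http.ResponseWriter, r *http.Request) {
	rw := ai.NewResponseWriter(w)
	rw.SetStreamHeader() // set the stream headers before the first write
	rw.IsStream = true

	// Write each event, then terminate the stream.
	if err := rw.WriteStreamEvent(map[string]any{"delta": "hello"}); err != nil {
		return
	}
	_ = rw.WriteStreamDone()
	rw.Flush()
}

func main() {
	http.HandleFunc("/stream", streamHandler)
	_ = http.ListenAndServe(":8000", nil)
}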

func (*ResponseWriter) Flush added in v1.18.18

func (w *ResponseWriter) Flush()

Flush flushes the underlying ResponseWriter.

func (*ResponseWriter) Header added in v1.18.18

func (w *ResponseWriter) Header() http.Header

Header returns the headers of the underlying ResponseWriter.

func (*ResponseWriter) SetStreamHeader added in v1.18.18

func (w *ResponseWriter) SetStreamHeader() http.Header

SetStreamHeader sets the stream headers of the underlying ResponseWriter.

func (*ResponseWriter) Write added in v1.18.18

func (w *ResponseWriter) Write(b []byte) (int, error)

Write writes the data to the underlying ResponseWriter.

func (*ResponseWriter) WriteHeader added in v1.18.18

func (w *ResponseWriter) WriteHeader(code int)

WriteHeader writes the header to the underlying ResponseWriter.

func (*ResponseWriter) WriteStreamDone added in v1.18.18

func (w *ResponseWriter) WriteStreamDone() error

WriteStreamDone writes the done event to the underlying ResponseWriter.

func (*ResponseWriter) WriteStreamEvent added in v1.18.18

func (w *ResponseWriter) WriteStreamEvent(event any) error

WriteStreamEvent writes the event to the underlying ResponseWriter.

type Server

type Server struct {
	Addr     string `yaml:"addr"`     // Addr is the address of the server
	Provider string `yaml:"provider"` // Provider is the llm provider to use
}

Server is the configuration of the BasicAPIServer, which is the endpoint for end-user access

type Service

type Service struct {
	// contains filtered or unexported fields
}

Service is the service layer for the llm bridge server. It is responsible for handling the logic passed down from the handler layer.

func NewService

func NewService(provider provider.LLMProvider, opt *ServiceOptions) *Service

NewService creates a new service for handling the logic from handler layer.

func (*Service) GetChatCompletions

func (srv *Service) GetChatCompletions(ctx context.Context, req openai.ChatCompletionRequest, transID string, caller *Caller, w *ResponseWriter, tracer trace.Tracer) error

GetChatCompletions accepts openai.ChatCompletionRequest and responds to http.ResponseWriter.

func (*Service) GetInvoke added in v1.18.7

func (srv *Service) GetInvoke(ctx context.Context, userInstruction, baseSystemMessage, transID string, caller *Caller, includeCallStack bool, tracer trace.Tracer) (*ai.InvokeResponse, error)

GetInvoke returns the invoke response

func (*Service) LoadOrCreateCaller added in v1.18.14

func (srv *Service) LoadOrCreateCaller(r *http.Request) (*Caller, error)

LoadOrCreateCaller loads or creates the caller according to the http request.

type ServiceOptions added in v1.18.14

type ServiceOptions struct {
	// Logger is the logger for the service
	Logger *slog.Logger
	// CredentialFunc is the function for getting the credential from the request
	CredentialFunc func(r *http.Request) (string, error)
	// CallerCacheSize is the size of the caller's cache
	CallerCacheSize int
	// CallerCacheTTL is the time to live of the callers cache
	CallerCacheTTL time.Duration
	// CallerCallTimeout is the timeout for awaiting the function response.
	CallerCallTimeout time.Duration
	// SourceBuilder should build an unconnected source.
	SourceBuilder func() yomo.Source
	// ReducerBuilder should build an unconnected reducer.
	ReducerBuilder func() yomo.StreamFunction
	// MetadataExchanger exchanges metadata from the credential.
	MetadataExchanger func(credential string) (metadata.M, error)
}

ServiceOptions are the options for creating a Service.
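A sketch of constructing a Service by hand. The import paths, the yomo constructor names, and the cache sizing are assumptions; MetadataExchanger here accepts any credential and returns empty metadata:

package main

import (
	"log/slog"
	"net/http"
	"time"

	"github.com/yomorun/yomo"
	"github.com/yomorun/yomo/core/metadata"
	ai "github.com/yomorun/yomo/pkg/bridge/ai"         // assumed import path
	"github.com/yomorun/yomo/pkg/bridge/ai/provider"   // assumed import path
)

func main() {
	opts := &ai.ServiceOptions{
		Logger:            slog.Default(),
		CallerCacheSize:   64,        // illustrative sizing
		CallerCacheTTL:    time.Hour, // illustrative TTL
		CallerCallTimeout: ai.RunFunctionTimeout,
		SourceBuilder: func() yomo.Source {
			return yomo.NewSource("ai-source", ai.DefaultZipperAddr) // assumed constructor
		},
		ReducerBuilder: func() yomo.StreamFunction {
			return yomo.NewStreamFunction("ai-reducer", ai.DefaultZipperAddr) // assumed constructor
		},
		MetadataExchanger: func(credential string) (metadata.M, error) {
			return metadata.M{}, nil // accept any credential with empty metadata
		},
	}

	var llm provider.LLMProvider // built by one of the provider subpackages
	service := ai.NewService(llm, opts)

	_ = http.ListenAndServe(":8000", ai.NewServeMux(service))
}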

type SystemPromptOp added in v1.18.15

type SystemPromptOp int

SystemPromptOp defines the operation of system prompt

const (
	SystemPromptOpOverwrite SystemPromptOp = 0
	SystemPromptOpDisabled  SystemPromptOp = 1
	SystemPromptOpPrefix    SystemPromptOp = 2
)
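A short sketch of how the three operations read at the call site of (*Caller).SetSystemPrompt; the semantics are inferred from the constant names (an assumption), and caller comes from NewCaller or FromCallerContext:

caller.SetSystemPrompt("You are a helpful assistant.", ai.SystemPromptOpOverwrite) // replace the prompt entirely
caller.SetSystemPrompt("Answer in French. ", ai.SystemPromptOpPrefix)              // prepend to the existing prompt
caller.SetSystemPrompt("", ai.SystemPromptOpDisabled)                              // turn the system prompt off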

type TagFunctionCall added in v1.18.13

type TagFunctionCall struct {
	// Tag is the tag of the request.
	Tag uint32
	// FunctionCall is the function call.
	// It contains the arguments and the function name.
	FunctionCall *ai.FunctionCall
}

TagFunctionCall is the request to the syncer. It is always sent to the source.

Directories

Path Synopsis
provider
Package provider defines the ai.Provider interface and provides a mock provider for unittest.
anthropic
Package anthropic is the anthropic llm provider, see https://docs.anthropic.com
azopenai
Package azopenai is used to provide the Azure OpenAI service
cerebras
Package cerebras is the Cerebras llm provider
cfazure
Package cfazure is used to provide the Azure OpenAI service
cfopenai
Package cfopenai is used to provide the Cloudflare OpenAI service
deepseek
Package deepseek is the DeepSeek llm provider
gemini
Package gemini is used to provide the gemini service
githubmodels
Package githubmodels is the GitHub Models llm provider, see https://github.com/marketplace/models
ollama
Package ollama is used to provide the Ollama service for YoMo Bridge.
openai
Package openai is the OpenAI llm provider
vertexai
Package vertexai is used to provide the vertexai service
xai
Package xai is the x.ai provider
register
Package register provides a register for registering and unregistering functions
