Documentation ¶
Index ¶
- func IsInvalidOutputError(err error) bool
- func NewInvalidOutputError(coarse, detail string) error
- func ParseJSONFromModel[T any](text string) (T, error)
- func StreamToDeltas(stream *openai.ChatCompletionStream) chan string
- type AccessRequest
- type CompletionCommand
- type GeneratedCommand
- type Label
- type Message
- type Resource
- type StreamingMessage
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func IsInvalidOutputError ¶
func IsInvalidOutputError(err error) bool
IsInvalidOutputError returns true if the error is an invalidOutputError.
func NewInvalidOutputError ¶
func NewInvalidOutputError(coarse, detail string) error
NewInvalidOutputError builds an error caused by the output of an LLM.
func ParseJSONFromModel ¶
func ParseJSONFromModel[T any](text string) (T, error)
ParseJSONFromModel parses a JSON object from the model output, attempting to sanitize contaminant text so that natural language bundled with the JSON does not trigger self-correction. The output type is generic, so the structure of the expected JSON varies depending on T.
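The sanitization step can be pictured with a minimal sketch. The `parseJSON` helper below is a hypothetical stand-in, not this package's implementation: it simply trims any surrounding prose down to the first top-level JSON object before unmarshaling into T.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// parseJSON is a hypothetical stand-in for ParseJSONFromModel: it strips
// natural-language text surrounding the first JSON object in the model
// output, then unmarshals the remainder into T.
func parseJSON[T any](text string) (T, error) {
	var out T
	start := strings.Index(text, "{")
	end := strings.LastIndex(text, "}")
	if start == -1 || end < start {
		return out, fmt.Errorf("no JSON object found in model output")
	}
	err := json.Unmarshal([]byte(text[start:end+1]), &out)
	return out, err
}

func main() {
	type command struct {
		Command string `json:"command"`
	}
	// The model bundled prose with its JSON answer.
	raw := `Sure! Here is the command you asked for: {"command": "ls -la"} Hope that helps.`
	cmd, err := parseJSON[command](raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd.Command) // ls -la
}
```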
func StreamToDeltas ¶
func StreamToDeltas(stream *openai.ChatCompletionStream) chan string
StreamToDeltas converts an openai.ChatCompletionStream into a channel of strings. This channel can then be consumed manually to search for specific markers, or directly converted into a StreamingMessage with NewStreamingMessage.
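Manually consuming the returned channel might look like the following sketch, which uses a locally built channel in place of an actual openai stream so it is self-contained:

```go
package main

import (
	"fmt"
	"strings"
)

// collect drains a delta channel into the full message, as a manual
// consumer of the channel returned by StreamToDeltas might do.
func collect(deltas <-chan string) string {
	var sb strings.Builder
	for delta := range deltas {
		// A real consumer could inspect each delta here, e.g. to
		// search for a marker before deciding how to route the rest.
		sb.WriteString(delta)
	}
	return sb.String()
}

func main() {
	// deltas stands in for the channel returned by StreamToDeltas.
	deltas := make(chan string)
	go func() {
		defer close(deltas)
		for _, d := range []string{"Hel", "lo, ", "wor", "ld"} {
			deltas <- d
		}
	}()
	fmt.Println(collect(deltas)) // Hello, world
}
```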
Types ¶
type AccessRequest ¶
type AccessRequest struct {
	Roles              []string   `json:"roles"`
	Resources          []Resource `json:"resources"`
	Reason             string     `json:"reason"`
	SuggestedReviewers []string   `json:"suggested_reviewers"`
}
AccessRequest represents an access request suggestion returned by OpenAI's completion API.
type CompletionCommand ¶
type CompletionCommand struct {
	Command string   `json:"command,omitempty"`
	Nodes   []string `json:"nodes,omitempty"`
	Labels  []Label  `json:"labels,omitempty"`
}
CompletionCommand represents a command suggestion returned by OpenAI's completion API.
type GeneratedCommand ¶
type GeneratedCommand struct {
Command string `json:"command"`
}
GeneratedCommand represents a Bash command generated by an LLM.
type Message ¶
type Message struct {
Content string
}
Message represents a new message within a live conversation.
type Resource ¶
type Resource struct {
	// The resource type.
	Type string `json:"type"`
	// The resource name.
	Name string `json:"id"`
	// Set if a display-friendly alternative name is available.
	FriendlyName string `json:"friendlyName,omitempty"`
}
Resource represents a resource suggestion returned by OpenAI's completion API.
type StreamingMessage ¶
type StreamingMessage struct {
Parts <-chan string
}
StreamingMessage represents a new message that is being streamed from the LLM.
func NewStreamingMessage ¶
func NewStreamingMessage(deltas <-chan string, alreadyStreamed, prefix string) (*StreamingMessage, *tokens.AsynchronousTokenCounter, error)
NewStreamingMessage takes a string channel and converts it to a StreamingMessage. If content was already streamed, it must be passed through the alreadyStreamed parameter. If the already streamed content contains a prefix that must be stripped (like a marker to identify the kind of response the model is providing), the prefix can be passed through the prefix parameter. It will be stripped but will still be reflected in the token count.
func (*StreamingMessage) WaitAndConsume ¶
func (msg *StreamingMessage) WaitAndConsume() string
WaitAndConsume waits until the message stream is over and returns the full message. This can only be called once on a message as it empties its Parts channel.
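Conceptually, WaitAndConsume drains the Parts channel until it closes and concatenates the chunks. The sketch below mirrors that documented behavior with a local copy of the type; it is an illustration of the semantics, not the package's code:

```go
package main

import (
	"fmt"
	"strings"
)

// StreamingMessage mirrors the documented type: Parts delivers the
// message in chunks as the LLM streams it.
type StreamingMessage struct {
	Parts <-chan string
}

// waitAndConsume sketches the documented behavior: block until the
// stream is over and return the full message. Like the real method,
// this can only happen once per message, since it empties Parts.
func waitAndConsume(msg *StreamingMessage) string {
	var sb strings.Builder
	for part := range msg.Parts {
		sb.WriteString(part)
	}
	return sb.String()
}

func main() {
	parts := make(chan string)
	go func() {
		defer close(parts)
		for _, p := range []string{"streamed ", "in ", "parts"} {
			parts <- p
		}
	}()
	msg := &StreamingMessage{Parts: parts}
	fmt.Println(waitAndConsume(msg)) // streamed in parts
}
```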