Documentation ¶
Index ¶
- type AudioFormat
- type Choices
- type DataType
- type Definition
- type Focus
- type HTTPMethod
- type HashMap
- type Image
- type ImageModel
- type ImageSize
- type ModelType
- type RequestFormat
- type SendImage
- type SpeechToText
- type SpeechToTextModel
- type SubordinateFunction
- type TextToSpeech
- type TextToSpeechModel
- type Voice
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AudioFormat ¶
type AudioFormat string
const JSON AudioFormat = "json"
const SRT AudioFormat = "srt"
const Text AudioFormat = "text"
const VTT AudioFormat = "vtt"
const VerboseJSON AudioFormat = "verbose-json"
type DataType ¶
type DataType string
const (
	Object  DataType = "object"
	Number  DataType = "number"
	Integer DataType = "integer"
	String  DataType = "string"
	Array   DataType = "array"
	Null    DataType = "null"
	Boolean DataType = "boolean"
	Map     DataType = "map"
	// Byte is used for audio and image data selection. If Byte is selected,
	// exactly one of Image or Audio must be non-nil; if neither (or both) is
	// set, nothing is generated and an empty byte slice is returned.
	Byte DataType = "byte"
)
type Definition ¶
type Definition struct {
	// Type specifies the data type of the schema.
	Type DataType `json:"type,omitempty"`
	// Instruction is the instruction for what to generate.
	Instruction string `json:"instruction,omitempty"`
	// Properties describes the properties of an object, if the schema type is Object.
	Properties map[string]Definition `json:"properties"`
	// Items specifies which data type an array contains, if the schema type is Array.
	Items *Definition `json:"items,omitempty"`
	// Model
	Model ModelType `json:"model,omitempty"`
	// ProcessingOrder lists the parent property keys that must be processed before this field is processed.
	ProcessingOrder []string `json:"processingOrder,omitempty"`
	// SystemPrompt allows the developer to specify their own system prompt for the processing. It currently operates at the properties level.
	SystemPrompt *string `json:"systemPrompt,omitempty"`
	// ImprovementProcess lets the user request an extra improvement pass when a very high quality completion is needed.
	ImprovementProcess bool `json:"improvementProcess,omitempty"`
	// HashMap allows a map of values to be created and returned; primarily useful in the instruction creation process.
	HashMap *HashMap

	// The other data types that need to be filled for the object to be generated within GoR.
	TextToSpeech *TextToSpeech `json:"textToSpeech,omitempty"`
	SpeechToText *SpeechToText `json:"speechToText,omitempty"`
	Image        *Image        `json:"image,omitempty"`

	// Utility fields:
	Req *RequestFormat `json:"req,omitempty"`
	// NarrowFocus
	NarrowFocus *Focus `json:"narrowFocus,omitempty"`
	// SelectFields allows multiple pieces of information to be selected; processing continues only once they are all present.
	// The selection works as an absolute path, starting from the top-most object and descending to the selected field(s):
	//	"car.color"  fetches the color field from the car field.
	//	"cars.color" returns the entire list of colours generated so far.
	SelectFields []string `json:"selectFields,omitempty"`
	// Choices determines which of the property fields should be generated.
	Choices *Choices `json:"choices,omitempty"`
	// Voters determines whether voters assess the quality of completions. This increases cost but improves quality. If available to your tier it is turned on automatically.
	Voters bool `json:"voters,omitempty"`
	// SendImage is used when the LLM is multimodal and supports reading images; the image data is passed in here.
	SendImage *SendImage `json:"sendImage,omitempty"`
	// Stream instructs when the information should be streamed. Please see the documentation for which types are supported.
	Stream bool `json:"stream,omitempty"`
	// Temp passes a temperature value for the prompt request.
	Temp float64 `json:"temp,omitempty"`
	// OverridePrompt overrides the prompt that is passed in.
	OverridePrompt *string `json:"overridePrompt,omitempty"`
}
Definition is a struct for describing a JSON Schema. It is fairly limited, and you may have better luck using a third-party library.
func (Definition) MarshalJSON ¶
func (d Definition) MarshalJSON() ([]byte, error)
func (Definition) ToMap ¶
func (d Definition) ToMap() map[string]interface{}
ToMap converts the Definition struct to a map representation
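A minimal sketch of building a Definition and inspecting it with MarshalJSON and ToMap. The import path, model choice, and instructions below are placeholders, not part of this package.

package main

import (
	"fmt"

	gor "example.com/gor" // placeholder import path for this package
)

func main() {
	// An object with two string properties; the summary is generated
	// after the title via ProcessingOrder.
	article := gor.Definition{
		Type:  gor.Object,
		Model: gor.Gpt4Mini,
		Properties: map[string]gor.Definition{
			"title": {
				Type:        gor.String,
				Instruction: "Generate a short title for the article.",
			},
			"summary": {
				Type:            gor.String,
				Instruction:     "Summarise the article in two sentences.",
				ProcessingOrder: []string{"title"},
			},
		},
	}

	data, err := article.MarshalJSON()
	if err != nil {
		panic(err) // simplistic error handling for the sketch
	}
	fmt.Println(string(data))

	// ToMap gives the same schema as a map[string]interface{}.
	fmt.Println(article.ToMap())
}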
type Focus ¶
type Focus struct {
	Prompt string `json:"prompt"`
	// Fields denotes the properties extracted from the parent's properties; these operate at a single level of generation only.
	// The order in which the fields are listed is the order in which the currently generated information is presented below the prompt value.
	Fields []string `json:"fields"`
	// KeepOriginal keeps the original prompt in cases, such as lists, where it would otherwise be removed from the context.
	KeepOriginal bool `json:"keepOriginal,omitempty"`
}
Focus allows a narrowly focused request to be sent to an LLM without including all of the additional information from prior generation.
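A sketch of attaching a Focus to a property through the Definition's NarrowFocus field; the import path, property names, and prompt are illustrative.

package main

import gor "example.com/gor" // placeholder import path

func main() {
	tagline := gor.Definition{
		Type:        gor.String,
		Instruction: "Generate a tagline for the car.",
		NarrowFocus: &gor.Focus{
			// Only the prompt plus the listed, already generated sibling
			// fields are included, in the order given.
			Prompt: "Write a one-line tagline for this car.",
			Fields: []string{"make", "color"},
		},
	}
	_ = tagline
}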
type HTTPMethod ¶
type HTTPMethod string
const (
	GET    HTTPMethod = "GET"
	POST   HTTPMethod = "POST"
	PUT    HTTPMethod = "PUT"
	DELETE HTTPMethod = "DELETE"
	PATCH  HTTPMethod = "PATCH"
)
Constants for HTTP methods
type HashMap ¶
type HashMap struct {
	KeyInstruction  string      `json:"keyInstruction,omitempty"`
	FieldDefinition *Definition `json:"fieldDefinition,omitempty"`
}
HashMap outputs a map of values, so although it occupies a single field it can produce many fields.
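A sketch of a map-valued field driven by HashMap; the import path and instructions are illustrative.

package main

import gor "example.com/gor" // placeholder import path

func main() {
	attributes := gor.Definition{
		Type: gor.Map,
		HashMap: &gor.HashMap{
			// The map keys are generated from KeyInstruction and each
			// value from FieldDefinition.
			KeyInstruction: "Generate a product attribute name, e.g. weight.",
			FieldDefinition: &gor.Definition{
				Type:        gor.String,
				Instruction: "Generate the value for this attribute.",
			},
		},
	}
	_ = attributes
}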
type Image ¶
type Image struct {
	Model ImageModel `json:"model,omitempty"`
	Size  ImageSize  `json:"size,omitempty"`
}
Image is used for image generation. If you want the URL of the image, use the DataType String; otherwise use the DataType Byte.
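A sketch of the two return modes described above; the import path and prompts are illustrative.

package main

import gor "example.com/gor" // placeholder import path

func main() {
	// DataType String: the URL of the generated image is returned.
	imageURL := gor.Definition{
		Type:        gor.String,
		Instruction: "A watercolour painting of a lighthouse at dusk.",
		Image: &gor.Image{
			Model: gor.OpenAiDalle3,
			Size:  gor.CreateImageSize1024x1024,
		},
	}

	// DataType Byte: the raw image bytes are returned.
	imageBytes := gor.Definition{
		Type:        gor.Byte,
		Instruction: "A watercolour painting of a lighthouse at dusk.",
		Image: &gor.Image{
			Model: gor.OpenAiDalle2,
			Size:  gor.CreateImageSize512x512,
		},
	}

	_, _ = imageURL, imageBytes
}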
type ImageModel ¶
type ImageModel string
const OpenAiDalle2 ImageModel = "OpenAiDalle2"
const OpenAiDalle3 ImageModel = "OpenAiDalle3"
type ImageSize ¶
type ImageSize string
const (
	// This code is nicked from go-openai.
	CreateImageSize256x256   ImageSize = "256x256"
	CreateImageSize512x512   ImageSize = "512x512"
	CreateImageSize1024x1024 ImageSize = "1024x1024"

	// dall-e-3 supported only.
	CreateImageSize1792x1024 ImageSize = "1792x1024"
	CreateImageSize1024x1792 ImageSize = "1024x1792"
)
type ModelType ¶
type ModelType string
const (
	Gpt3              ModelType = "Gpt3"
	Gpt4              ModelType = "Gpt4"
	ClaudeSonnet      ModelType = "ClaudeSonnet"
	ClaudeHaiku       ModelType = "ClaudeHaiku"
	Llama70b          ModelType = "Llama70b"
	Gpt4Mini          ModelType = "Gpt4Mini"
	Llama405b         ModelType = "Llama405"
	Llama8b           ModelType = "Llama8b"
	O1                ModelType = "o1-preview"
	O1Mini            ModelType = "o1-mini"
	GeminiFlash       ModelType = "GeminiFlash"
	GeminiFlash2      ModelType = "GeminiFlash2"
	GeminiFlash2Lite  ModelType = "GeminiFlash2Lite"
	GeminiFlash8B     ModelType = "GeminiFlash8B"
	GeminiPro         ModelType = "GeminiPro"
	Llama8bInstant    ModelType = "Llama8bInstant"
	Llama70bVersatile ModelType = "Llama70bVersatile"
	Llama1B           ModelType = "Llama1B"
	Llama3B           ModelType = "Llama3B"
	Default           ModelType = "Default"
)
type RequestFormat ¶
type RequestFormat struct {
	URL           string                 `json:"url"`
	Method        HTTPMethod             `json:"method"`
	Headers       map[string]string      `json:"headers,omitempty"`
	Body          map[string]interface{} `json:"body,omitempty"`
	Authorization string                 `json:"authorization,omitempty"`
	RequireFields []string               `json:"requirFields,omitempty"`
}
RequestFormat defines the structure of the request
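A sketch of a RequestFormat attached to a Definition through its Req field; the import path, endpoint, headers, token, and field names are placeholders.

package main

import gor "example.com/gor" // placeholder import path

func main() {
	car := gor.Definition{
		Type: gor.Object,
		Properties: map[string]gor.Definition{
			"make":  {Type: gor.String, Instruction: "Generate the make of the car."},
			"color": {Type: gor.String, Instruction: "Generate the colour of the car."},
		},
		Req: &gor.RequestFormat{
			URL:           "https://api.example.com/cars", // placeholder endpoint
			Method:        gor.POST,
			Headers:       map[string]string{"Content-Type": "application/json"},
			Authorization: "Bearer <token>", // placeholder token
			// Fields that must be present before the request is sent.
			RequireFields: []string{"make", "color"},
		},
	}
	_ = car
}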
type SendImage ¶
type SendImage struct {
	// When sending multiple images, take into account the model you have selected:
	// Gemini models support multiple images, whereas the Claude models support only one image at a time.
	ImagesData [][]byte `json:"imagesData,omitempty"`
}
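A sketch of passing image bytes to a multimodal model with SendImage; the import path and file name are placeholders, and per the comment above only the Gemini models accept more than one image.

package main

import (
	"os"

	gor "example.com/gor" // placeholder import path
)

func main() {
	img, err := os.ReadFile("photo.jpg") // placeholder file
	if err != nil {
		panic(err)
	}

	caption := gor.Definition{
		Type:        gor.String,
		Model:       gor.GeminiFlash2,
		Instruction: "Describe what is happening in the attached image.",
		SendImage: &gor.SendImage{
			ImagesData: [][]byte{img},
		},
	}
	_ = caption
}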
type SpeechToText ¶
type SpeechToText struct {
	Model             SpeechToTextModel `json:"model,omitempty"`
	AudioToTranscribe []byte            `json:"audioToTranscribe,omitempty"`
	// Language must be an ISO 639-1 code; defaults to "en" (English).
	Language   string      `json:"language,omitempty"`
	Format     AudioFormat `json:"format,omitempty"`
	ToString   bool        `json:"toString,omitempty"`
	ToCaptions bool        `json:"toCaptions,omitempty"`
}
SpeechToText is used for transcription; the DataType to use with this type is String.
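A sketch of a transcription field; as noted above the Definition uses DataType String and Language is an ISO 639-1 code. The import path and audio file name are placeholders.

package main

import (
	"os"

	gor "example.com/gor" // placeholder import path
)

func main() {
	audio, err := os.ReadFile("meeting.mp3") // placeholder file
	if err != nil {
		panic(err)
	}

	transcript := gor.Definition{
		Type: gor.String,
		SpeechToText: &gor.SpeechToText{
			Model:             gor.OpenAiWhisper,
			AudioToTranscribe: audio,
			Language:          "en",
			Format:            gor.Text,
			ToString:          true,
		},
	}
	_ = transcript
}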
type SpeechToTextModel ¶
type SpeechToTextModel string
const GroqWhisper SpeechToTextModel = "GroqWhisper"
const OpenAiWhisper SpeechToTextModel = "OpenAiWhisper"
type SubordinateFunction ¶
type SubordinateFunction struct {
	Name       string      `json:"name"`       // The name of the subordinate function.
	Definition *Definition `json:"definition"` // The schema definition of the function.
}
SubordinateFunction represents a function under the AI's control, including its name, definition, and responses.
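A sketch of wrapping a Definition in a SubordinateFunction; the import path, name, and instruction are illustrative.

package main

import gor "example.com/gor" // placeholder import path

func main() {
	fn := gor.SubordinateFunction{
		Name: "summarise_article",
		Definition: &gor.Definition{
			Type:        gor.String,
			Instruction: "Summarise the supplied article in three bullet points.",
		},
	}
	_ = fn
}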
type TextToSpeech ¶
type TextToSpeech struct {
	Model         TextToSpeechModel `json:"model,omitempty"`
	StringToAudio string            `json:"stringToAudio,omitempty"`
	Voice         Voice             `json:"voice,omitempty"`
	Format        AudioFormat       `json:"format,omitempty"`
}
TextToSpeech the DataType to use with this type is Byte
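A sketch of a speech-synthesis field; as noted above the Definition uses DataType Byte so raw audio bytes are returned. The import path is a placeholder, and Voice and Format are omitted because their supported values are not listed in this documentation.

package main

import gor "example.com/gor" // placeholder import path

func main() {
	speech := gor.Definition{
		Type: gor.Byte,
		TextToSpeech: &gor.TextToSpeech{
			Model:         gor.OpenAiTTS,
			StringToAudio: "Welcome to the show.",
			// Voice and Format omitted; see the Voice and AudioFormat types.
		},
	}
	_ = speech
}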
type TextToSpeechModel ¶
type TextToSpeechModel string
Constants for the different audio models.
const OpenAiTTS TextToSpeechModel = "tts"