Documentation ¶
Index ¶
- Constants
- func WithQuota(quota int) func(*Options)
- func WithTemperature(t float64) func(*Options)
- func WithTopP(p float64) func(*Options)
- type Chatter
- type Example
- type Options
- type Prompt
- func (prompt Prompt) MarshalText() (text []byte, err error)
- func (prompt *Prompt) WithContext(about string, context []string) *Prompt
- func (prompt *Prompt) WithExample(input, output string) *Prompt
- func (prompt *Prompt) WithInput(about string, input []string) *Prompt
- func (prompt *Prompt) WithInstruction(ins string, args ...any) *Prompt
- func (prompt *Prompt) WithRequirement(req string, args ...any) *Prompt
- func (prompt *Prompt) WithRequirements(note string) *Prompt
- func (prompt *Prompt) WithRole(role string) *Prompt
- func (prompt *Prompt) WithTask(task string, args ...any) *Prompt
- type Remark
Constants ¶
const Version = "v0.1.0"
Variables ¶
This section is empty.
Functions ¶
func WithQuota ¶
func WithQuota(quota int) func(*Options)
Token quota for the reply; the model limits the response to the given number of tokens.
func WithTemperature ¶ added in v0.0.4
func WithTemperature(t float64) func(*Options)
Temperature is the critical LLM parameter influencing the balance between predictability and creativity in generated text. Lower temperatures prioritize exploiting learned patterns, yielding more deterministic outputs, while higher temperatures encourage exploration, fostering diversity and innovation.
func WithTopP ¶ added in v0.0.4
func WithTopP(p float64) func(*Options)
Nucleus sampling, a parameter used in LLMs, impacts token selection by considering only the most likely tokens that together reach a cumulative probability mass p. This limits the number of choices to avoid overly diverse or nonsensical outputs while maintaining diversity within the top-ranked options.
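For illustration, a minimal sketch of passing these options to a Chatter.Prompt call; chat stands for any Chatter implementation, prompt for a Prompt value, and the package is assumed to be imported as chatter (context and fmt imports assumed; the concrete values are arbitrary):

reply, err := chat.Prompt(context.Background(), prompt,
	chatter.WithTemperature(0.2), // favor deterministic output
	chatter.WithTopP(0.9),        // sample from the top 90% of probability mass
	chatter.WithQuota(512),       // cap the reply at 512 tokens
)
if err != nil {
	// handle the error
}
fmt.Println(reply)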
Types ¶
type Chatter ¶
type Chatter interface {
	UsedInputTokens() int
	UsedReplyTokens() int
	Prompt(context.Context, encoding.TextMarshaler, ...func(*Options)) (string, error)
}
The generic trait to interact with LLMs.
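As a minimal sketch, a hypothetical test double satisfying the interface (mockChatter, its canned reply, and the 4-characters-per-token estimate are all invented; context and encoding imports assumed):

// mockChatter is a hypothetical stand-in that records token usage
// and answers every prompt with a canned reply.
type mockChatter struct {
	in, out int
}

func (m *mockChatter) UsedInputTokens() int { return m.in }
func (m *mockChatter) UsedReplyTokens() int { return m.out }

func (m *mockChatter) Prompt(ctx context.Context, msg encoding.TextMarshaler, opts ...func(*Options)) (string, error) {
	text, err := msg.MarshalText()
	if err != nil {
		return "", err
	}
	m.in += len(text) / 4 // crude token estimate, not a real tokenizer
	m.out++
	return "ok", nil
}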
type Example ¶ added in v0.0.5
type Example struct {
	Input  string `json:"input,omitempty"`
	Output string `json:"output,omitempty"`
}
Example is an input/output pair.
type Options ¶ added in v0.0.4
type Options struct {
	// Temperature is the critical LLM parameter influencing the balance
	// between predictability and creativity in generated text. Lower
	// temperatures prioritize exploiting learned patterns, yielding more
	// deterministic outputs, while higher temperatures encourage exploration,
	// fostering diversity and innovation.
	Temperature float64

	// TopP configures nucleus sampling, which impacts token selection by
	// considering only the most likely tokens that together represent
	// a cumulative probability mass (e.g., top-p tokens). This limits the
	// number of choices to avoid overly diverse or nonsensical outputs
	// while maintaining diversity within the top-ranked options.
	TopP float64

	// Quota is the token quota for the reply; the model limits the response
	// to the given number of tokens.
	Quota int
}
type Prompt ¶
type Prompt struct {
	// Ground-level constraint of the model behavior.
	// From the Latin "stratum", "something that has been laid down".
	// Think of it as the cornerstone of the model behavior:
	// "Act as <role>" ...
	Role string `json:"stratum,omitempty"`

	// The task is a summary of what you want the prompt to do.
	Task string `json:"task,omitempty"`

	// Instructions inform the model how to complete the task,
	// e.g. examples of how it could go about the task.
	Instructions *Remark `json:"instructions,omitempty"`

	// Requirements give as much information as possible to ensure
	// the response does not rely on any incorrect assumptions.
	Requirements *Remark `json:"requirements,omitempty"`

	// Examples of how to complete the task.
	Examples []Example `json:"examples,omitempty"`

	// Input data required to complete the task.
	Input *Remark `json:"input,omitempty"`

	// Additional information required to complete the task.
	Context *Remark `json:"context,omitempty"`
}
Prompt is a data type consisting of the context and a bag of exchange messages.
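To illustrate the builder methods documented below, a sketch of assembling a prompt (the role, task, and data are invented; the package is assumed to be imported as chatter):

var prompt chatter.Prompt
prompt.
	WithRole("helpful Go documentation assistant").
	WithTask("Summarize the public API of package %s.", "chatter").
	WithInstruction("Keep the summary under %d words.", 50).
	WithRequirement("Use plain, direct language.").
	WithExample("func WithTopP(p float64)", "Configures nucleus sampling.").
	WithInput("Exported symbols", []string{"Chatter", "Prompt", "Options"}).
	WithContext("Module metadata", []string{"Version v0.1.0"})

Each With* method returns the prompt itself, so calls chain; the formatter below renders the result as text.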
func (Prompt) MarshalText ¶ added in v0.1.0
func (prompt Prompt) MarshalText() (text []byte, err error)
Generic prompt formatter. Builds the prompt following the layout below:
{role}. {task}. {instructions}.
1. {requirements}
2. {requirements}
3. ...

Examples:

	Input: {input}
	Output: {output}

{about input}:
- {input}
- {input}
- ...

{about context}
- {context}
- {context}
- ...
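Continuing the sketch above, rendering the prompt to text (fmt import assumed; exact whitespace in the output may differ):

text, err := prompt.MarshalText()
if err != nil {
	// handle the marshalling error
}
fmt.Println(string(text))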
func (*Prompt) WithContext ¶ added in v0.0.4
func (prompt *Prompt) WithContext(about string, context []string) *Prompt
Additional information required to complete the task.
func (*Prompt) WithExample ¶ added in v0.0.5
func (prompt *Prompt) WithExample(input, output string) *Prompt
Defines an example of the expected task: an input/output pair.
func (*Prompt) WithInstruction ¶ added in v0.0.4
func (prompt *Prompt) WithInstruction(ins string, args ...any) *Prompt
Instructions inform the model how to complete the task, e.g. examples of how it could go about the task.
func (*Prompt) WithRequirement ¶ added in v0.0.4
func (prompt *Prompt) WithRequirement(req string, args ...any) *Prompt
Requirements give as much information as possible to ensure the response does not rely on any incorrect assumptions.
func (*Prompt) WithRequirements ¶ added in v0.0.4
func (prompt *Prompt) WithRequirements(note string) *Prompt
Requirements give as much information as possible to ensure the response does not rely on any incorrect assumptions.
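A sketch combining the two, under the assumption that WithRequirements sets the note introducing the numbered requirements list while each WithRequirement appends one item (the texts are invented; the package is assumed to be imported as chatter):

var prompt chatter.Prompt
prompt.
	WithRequirements("Strictly adhere to the following requirements").
	WithRequirement("Reply with valid %s only.", "JSON").
	WithRequirement("Do not speculate beyond the given input.")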