chatter

package module
v0.1.0
Published: Nov 8, 2024 License: MIT Imports: 6 Imported by: 6

README

chatter

An adapter over LLM interfaces

Sub-modules:

  • chatter types
  • AWS Bedrock LLMs
  • AWS Bedrock Batch Inference
  • OpenAI LLMs


The library is an adapter over various popular Large Language Models (LLMs) tuned for text generation: AWS Bedrock, OpenAI.

Inspiration

A good prompt has 4 key elements: Role, Task, Requirements, Instructions. "Are You AI Ready? Investigating AI Tools in Higher Education – Student Guide"

In the research community, there has been an attempt to define a standardized taxonomy of prompts for large language models (LLMs) used to solve complex tasks. It encourages the community to adopt the TELeR taxonomy to achieve meaningful comparisons among LLMs, facilitating more accurate conclusions and helping the community reach consensus on state-of-the-art LLM performance more efficiently.

The library addresses LLM comparison by:

  • Providing a generic trait to "interact" with LLMs;
  • Enabling prompt definition at seven distinct levels;
  • Supporting a variety of LLMs.
type Chatter interface {
	Prompt(context.Context, encoding.TextMarshaler, ...func(*Options)) (string, error)
}

Getting started

The latest version of the library is available on the main branch of this repository. All development, including new features and bug fixes, takes place on the main branch using forking and pull requests, as described in the contribution guidelines. The stable version is available via Go modules.

package main

import (
	"context"
	"fmt"

	"github.com/kshard/chatter"
	"github.com/kshard/chatter/bedrock"
)

func main() {
	assistant, err := bedrock.New(
		bedrock.WithLLM(bedrock.LLAMA3_0_8B_INSTRUCT),
	)
	if err != nil {
		panic(err)
	}

	var prompt chatter.Prompt
	prompt.WithTask("Extract keywords from the text: %s", /* ... */)

	reply, err := assistant.Prompt(context.Background(), &prompt)
	if err != nil {
		panic(err)
	}

	fmt.Printf("==> (%d)\n%s\n", assistant.ConsumedTokens(), reply)
}

How To Contribute

The library is MIT licensed and accepts contributions via GitHub pull requests:

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Added some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

The build and testing process requires Go version 1.21 or later.

Build and test the library:

git clone https://github.com/kshard/chatter
cd chatter
go test ./...

commit message

The commit message helps us write good release notes and speeds up the review process. The message should address two questions: what changed and why. The project follows the template defined in the chapter Contributing to a Project of the Git book.

bugs

If you experience any issues with the library, please let us know via GitHub issues. We appreciate detailed and accurate reports that help us identify and replicate the issue.

License

See LICENSE

Documentation

Index

Constants

View Source
const Version = "v0.1.0"

Variables

This section is empty.

Functions

func WithQuota added in v0.1.0

func WithQuota(quota int) func(*Options)

Limits the reply to the given token quota.

func WithTemperature added in v0.0.4

func WithTemperature(t float64) func(*Options)

LLMs' critical parameter influencing the balance between predictability and creativity in generated text. Lower temperatures prioritize exploiting learned patterns, yielding more deterministic outputs, while higher temperatures encourage exploration, fostering diversity and innovation.

func WithTopP added in v0.0.4

func WithTopP(p float64) func(*Options)

Nucleus Sampling, a parameter used in LLMs, impacts token selection by considering only the most likely tokens that together represent a cumulative probability mass (e.g., top-p tokens). This limits the number of choices to avoid overly diverse or nonsensical outputs while maintaining diversity within the top-ranked options.
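WithQuota, WithTemperature, and WithTopP follow Go's functional-options pattern: each returns a func(*Options) that the Prompt call applies over the defaults. A self-contained sketch of the mechanics (the default values and the apply helper here are illustrative, not the library's actual internals):

```go
package main

import "fmt"

type Options struct {
	Temperature float64
	TopP        float64
	Quota       int
}

// NewOptions returns defaults; the concrete values are illustrative.
func NewOptions() Options {
	return Options{Temperature: 0.5, TopP: 0.9}
}

func WithTemperature(t float64) func(*Options) { return func(o *Options) { o.Temperature = t } }
func WithTopP(p float64) func(*Options)        { return func(o *Options) { o.TopP = p } }
func WithQuota(q int) func(*Options)           { return func(o *Options) { o.Quota = q } }

// apply folds the variadic option functions over the defaults,
// as a Prompt implementation would before calling the model.
func apply(opts ...func(*Options)) Options {
	o := NewOptions()
	for _, f := range opts {
		f(&o)
	}
	return o
}

func main() {
	o := apply(WithTemperature(0.2), WithQuota(512))
	fmt.Printf("%+v\n", o) // {Temperature:0.2 TopP:0.9 Quota:512}
}
```

Unset options keep their defaults, so callers only name the parameters they care about.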

Types

type Chatter

type Chatter interface {
	UsedInputTokens() int
	UsedReplyTokens() int
	Prompt(context.Context, encoding.TextMarshaler, ...func(*Options)) (string, error)
}

The generic trait to "interact" with LLMs.

type Example added in v0.0.5

type Example struct {
	Input  string `json:"input,omitempty"`
	Output string `json:"output,omitempty"`
}

Example is an input/output pair.

type Options added in v0.0.4

type Options struct {
	// LLMs' critical parameter influencing the balance between predictability
	// and creativity in generated text. Lower temperatures prioritize exploiting
	// learned patterns, yielding more deterministic outputs, while higher
	// temperatures encourage exploration, fostering diversity and innovation.
	Temperature float64

	// Nucleus Sampling, a parameter used in LLMs, impacts token selection by
	// considering only the most likely tokens that together represent
	// a cumulative probability mass (e.g., top-p tokens). This limits the
	// number of choices to avoid overly diverse or nonsensical outputs while
	// maintaining diversity within the top-ranked options.
	TopP float64

	// Token quota for the reply; the model limits the response to the given
	// number of tokens.
	Quota int
}

func NewOptions added in v0.1.0

func NewOptions() Options

Returns the default chatter options.

type Prompt

type Prompt struct {
	// Ground-level constraint on the model behavior.
	// From the Latin stratum, "something that has been laid down".
	// Think of it as the cornerstone of the model behavior.
	// "Act as <role>" ...
	Role string `json:"stratum,omitempty"`

	// The task is a summary of what you want the prompt to do.
	Task string `json:"task,omitempty"`

	// Instructions inform the model how to complete the task,
	// with examples of how it could go about it.
	Instructions *Remark `json:"instructions,omitempty"`

	// Requirements is all about giving as much information as possible to ensure
	// your response does not use any incorrect assumptions.
	Requirements *Remark `json:"requirements,omitempty"`

	// Examples how to complete the task
	Examples []Example `json:"examples,omitempty"`

	// Input data required to complete the task.
	Input *Remark `json:"input,omitempty"`

	// Additional information required to complete the task.
	Context *Remark `json:"context,omitempty"`
}

Prompt is a data type consisting of context and a bag of exchange messages.

func (Prompt) MarshalText added in v0.1.0

func (prompt Prompt) MarshalText() (text []byte, err error)

Generic prompt formatter. Builds the prompt following the layout below:

{role}. {task}. {instructions}.
1. {requirements}
2. {requirements}
3. ...

Examples:
Input: {input}
Output: {output}

{about input}:
- {input}
- {input}
- ...

{about context}
- {context}
- {context}
- ...
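The layout above can be reproduced with a simplified formatter. The sketch below covers only the role, task, and numbered requirements sections, and is not the library's actual MarshalText implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// prompt is a reduced, illustrative stand-in for chatter.Prompt.
type prompt struct {
	Role         string
	Task         string
	Requirements []string
}

// MarshalText renders "{role}. {task}." followed by a numbered
// requirements list, mirroring the documented layout.
func (p prompt) MarshalText() ([]byte, error) {
	var b strings.Builder
	b.WriteString(p.Role + ". " + p.Task + ".\n")
	for i, req := range p.Requirements {
		fmt.Fprintf(&b, "%d. %s\n", i+1, req)
	}
	return []byte(b.String()), nil
}

func main() {
	p := prompt{
		Role:         "Act as an editor",
		Task:         "Extract keywords",
		Requirements: []string{"Return at most 5 keywords", "Use lowercase"},
	}
	text, err := p.MarshalText()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(text))
}
```

Because the rendering lives behind encoding.TextMarshaler, alternative formatters can be swapped in without touching the Chatter trait.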

func (*Prompt) WithContext added in v0.0.4

func (prompt *Prompt) WithContext(about string, context []string) *Prompt

Additional information required to complete the task.

func (*Prompt) WithExample added in v0.0.5

func (prompt *Prompt) WithExample(input, output string) *Prompt

Defines an example (input/output pair) of the expected task.

func (*Prompt) WithInput added in v0.0.4

func (prompt *Prompt) WithInput(about string, input []string) *Prompt

Input data required to complete the task.

func (*Prompt) WithInstruction added in v0.0.4

func (prompt *Prompt) WithInstruction(ins string, args ...any) *Prompt

Instructions inform the model how to complete the task, with examples of how it could go about it.

func (*Prompt) WithRequirement added in v0.0.4

func (prompt *Prompt) WithRequirement(req string, args ...any) *Prompt

Requirements is all about giving as much information as possible to ensure your response does not use any incorrect assumptions.

func (*Prompt) WithRequirements added in v0.0.4

func (prompt *Prompt) WithRequirements(note string) *Prompt

Requirements is all about giving as much information as possible to ensure your response does not use any incorrect assumptions.

func (*Prompt) WithRole added in v0.0.4

func (prompt *Prompt) WithRole(role string) *Prompt

Setting a specific role for a given prompt increases the likelihood of more accurate information, when done appropriately.

func (*Prompt) WithTask added in v0.0.4

func (prompt *Prompt) WithTask(task string, args ...any) *Prompt

The task is a summary of what you want the prompt to do.

type Remark added in v0.0.4

type Remark struct {
	Note string   `json:"note,omitempty"`
	Text []string `json:"text,omitempty"`
}

Remark is a sequence of statements annotated with a note for the model.

Directories

Path Synopsis
bedrock module
bedrockbatch module
examples module
openai module
