anthropic

package module
v0.3.9
Published: Oct 23, 2024 License: MIT Imports: 16 Imported by: 0

README

anthropic

Zero-dependency (unofficial) Go client for the Anthropic API.

Installation
go get github.com/fabiustech/anthropic
Example Usage
package main

import (
	"context"
	"flag"
	"fmt"

	"github.com/fabiustech/anthropic"
)

var key *string

func init() {
	key = flag.String("key", "", "api key")
	flag.Parse()
}

func main() {
	var client = anthropic.NewClient(*key)
	var resp, err = client.NewCompletion(context.Background(), &anthropic.Request{
		Prompt:            anthropic.NewPromptFromString("Tell me a haiku about trees"),
		Model:             anthropic.Claude,
		MaxTokensToSample: 300,
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(resp.Completion)
}
Contributing

Contributions are welcome and encouraged! Feel free to report any bugs / feature requests as issues.

Documentation

Overview

Package anthropic is a client library for interacting with the Anthropic API.

Index

Constants

const (
	// UserTypeHuman is the user type for the human in the dialogue.
	UserTypeHuman = "\n\nHuman"
	// UserTypeAssistant is the user type for the assistant in the dialogue.
	UserTypeAssistant = "\n\nAssistant"
	// UserTypeSystem is the user type for the system in the dialogue. It should always be the first "message" in the
	// dialogue.
	UserTypeSystem = "System"
)

Variables

var (
	// ErrEmptyMessages indicates a Message slice is empty.
	ErrEmptyMessages = errors.New("messages cannot be empty")
	// ErrBadSystemMessage indicates that a Message slice contains a system message that was not the first Message in
	// the slice.
	ErrBadSystemMessage = errors.New("system messages must be the first message in the dialogue")
	// ErrMissingAssistant indicates that a Message slice's last Message was not from the assistant.
	ErrMissingAssistant = errors.New("the final message in the dialogue must be from the assistant")
)
var ErrBadEvent = errors.New("bad event")

ErrBadEvent is returned when an event is received that cannot be parsed.

Functions

func Optional

func Optional[T any](v T) *T

Optional returns a pointer to |v|. Used to easily assign literals to optional parameters.
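
A minimal sketch of the intended use; the literal is illustrative, and the Request example further down shows it in context:

temp := anthropic.Optional(0.2) // temp is a *float64 pointing at 0.2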

Types

type BedrockClient added in v0.2.0

type BedrockClient struct {
	// contains filtered or unexported fields
}

func NewBedrockClient added in v0.2.0

func NewBedrockClient(p client.ConfigProvider, cfgs ...*aws.Config) *BedrockClient

NewBedrockClient returns a new client for the Bedrock API. I've implemented this client with method signatures identical to the original Client's.
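
A minimal construction sketch, assuming aws-sdk-go v1, whose *session.Session satisfies client.ConfigProvider; the region and model choice are illustrative:

package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/fabiustech/anthropic"
)

func main() {
	// *session.Session satisfies client.ConfigProvider.
	var sess = session.Must(session.NewSession())
	var bc = anthropic.NewBedrockClient(sess, aws.NewConfig().WithRegion("us-east-1"))

	// From here, bc is used exactly like Client.
	var resp, err = bc.NewCompletion(context.Background(), &anthropic.Request{
		Prompt:            anthropic.NewPromptFromString("Tell me a haiku about trees"),
		Model:             anthropic.Claude2Dot1,
		MaxTokensToSample: 300,
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(resp.Completion)
}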

func (*BedrockClient) Debug added in v0.2.0

func (bc *BedrockClient) Debug()

Debug enables debug logging. When enabled, the client will log the request's prompt.

func (*BedrockClient) NewCompletion added in v0.2.0

func (bc *BedrockClient) NewCompletion(ctx context.Context, req *Request) (*Response, error)

NewCompletion returns a completion response from the API.

func (*BedrockClient) NewCompletionStreamedBatchResponse added in v0.2.0

func (bc *BedrockClient) NewCompletionStreamedBatchResponse(ctx context.Context, req *Request) (*Response, error)

NewCompletionStreamedBatchResponse returns a completion response from the API that appears to the caller as a non-streaming response, but is a streaming response under the hood. This is useful if you are getting 524 errors from the API, which occur when the API takes too long to respond. Our theory is that the API takes too long to send its first bytes to the load balancer, which then closes the connection; since a streaming request gets a response as soon as the API has generated the first token, this should keep the load balancer from closing the connection.

Note: This may be deprecated at any time, but is currently needed as most requests are running into this issue.

func (*BedrockClient) NewStreamingCompletion added in v0.2.0

func (bc *BedrockClient) NewStreamingCompletion(ctx context.Context, req *Request) (<-chan *Response, <-chan error, error)

NewStreamingCompletion returns two channels: the first will be sent |*Response|s as they are received from the API, and the second will be sent any error(s) encountered while receiving or parsing responses.

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client is a client for the Anthropic API.

func NewClient

func NewClient(key string) *Client

NewClient returns a client with the given API key.

func (*Client) AddRequestHeaders added in v0.3.7

func (c *Client) AddRequestHeaders(headers http.Header)

AddRequestHeaders adds the custom headers to be sent with each request.

func (*Client) Debug added in v0.1.1

func (c *Client) Debug()

Debug enables debug logging. When enabled, the client will log the request's prompt.

func (*Client) NewCompletion

func (c *Client) NewCompletion(ctx context.Context, req *Request) (*Response, error)

NewCompletion returns a completion response from the API.

func (*Client) NewCompletionStreamedBatchResponse added in v0.1.4

func (c *Client) NewCompletionStreamedBatchResponse(ctx context.Context, req *Request) (*Response, error)

NewCompletionStreamedBatchResponse returns a completion response from the API that appears to the caller as a non-streaming response, but is a streaming response under the hood. This is useful if you are getting 524 errors from the API, which occur when the API takes too long to respond. Our theory is that the API takes too long to send its first bytes to the load balancer, which then closes the connection; since a streaming request gets a response as soon as the API has generated the first token, this should keep the load balancer from closing the connection.

Note: This may be deprecated at any time, but is currently needed as most requests are running into this issue.

func (*Client) NewMessageRequest added in v0.3.0

func (c *Client) NewMessageRequest(ctx context.Context, req *v3.Request[v3.Message]) (*v3.Response, error)

NewMessageRequest makes a request to the messages endpoint.

func (*Client) NewShortHandMessageRequest added in v0.3.0

func (c *Client) NewShortHandMessageRequest(ctx context.Context, req *v3.Request[v3.ShortHandMessage]) (*v3.Response, error)

NewShortHandMessageRequest makes a request to the messages endpoint.

func (*Client) NewStreamingCompletion added in v0.1.2

func (c *Client) NewStreamingCompletion(ctx context.Context, req *Request) (<-chan *Response, <-chan error, error)

NewStreamingCompletion returns two channels: the first will be sent |*Response|s as they are received from the API, and the second will be sent any error(s) encountered while receiving or parsing responses.
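
A consumption sketch; c, ctx, and req are assumed to be in scope, and the sketch assumes both channels are closed when the stream finishes (check the package source before relying on that):

respCh, errCh, err := c.NewStreamingCompletion(ctx, req)
if err != nil {
	panic(err)
}

for respCh != nil || errCh != nil {
	select {
	case resp, ok := <-respCh:
		if !ok {
			respCh = nil // assumed: closed once the stream is done
			continue
		}
		fmt.Print(resp.Completion)
	case err, ok := <-errCh:
		if !ok {
			errCh = nil
			continue
		}
		fmt.Println("stream error:", err)
	}
}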

func (*Client) NewStreamingMessageRequest added in v0.3.1

func (c *Client) NewStreamingMessageRequest(ctx context.Context, req *v3.Request[v3.Message]) (*v3.Response, <-chan string, <-chan error, error)

func (*Client) NewStreamingShortHandMessageRequest added in v0.3.1

func (c *Client) NewStreamingShortHandMessageRequest(ctx context.Context, req *v3.Request[v3.ShortHandMessage]) (*v3.Response, <-chan string, <-chan error, error)

func (*Client) SetBetaMaxOutputTokenHeader added in v0.3.7

func (c *Client) SetBetaMaxOutputTokenHeader()

SetBetaMaxOutputTokenHeader sets the |anthropic-beta| header to "max-tokens-3-5-sonnet-2024-07-15".

func (*Client) SetBetaPromptCacheHeader added in v0.3.8

func (c *Client) SetBetaPromptCacheHeader()

SetBetaPromptCacheHeader sets the |anthropic-beta| header to "prompt-caching-2024-07-31".

func (*Client) SetVersion added in v0.1.0

func (c *Client) SetVersion(version string)

SetVersion sets the value passed in the |Anthropic-Version| header for requests. The default value is "2023-06-01".
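
Putting the configuration hooks together, a sketch (key is assumed to be in scope; the custom header name is hypothetical):

var c = anthropic.NewClient(key)
c.SetVersion("2023-06-01")   // the default, shown for completeness
c.SetBetaPromptCacheHeader() // opt in to prompt caching
c.AddRequestHeaders(http.Header{"X-Request-Source": {"my-service"}}) // hypothetical header name
c.Debug()                    // log each request's prompt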

type Error added in v0.1.2

type Error struct {
	// Type represents the type of error (e.g. "invalid_request_error").
	Type string `json:"type"`
	// Message is a human-readable message about the error.
	Message string `json:"message"`
	// Code is the HTTP status code returned by the API (populated by the client).
	Code int `json:"code"`
}

Error represents an error returned from the API.

type Message

type Message struct {
	UserType UserType
	Text     string
}

Message represents a single message in a dialogue. It contains the UserType and the text of the message.

type Messages added in v0.2.4

type Messages []*Message

func (Messages) Validate added in v0.2.4

func (m Messages) Validate() error

Validate ensures that |m| is valid. It returns an error if |m| is invalid.

type Metadata

type Metadata struct {
	// UserID is a UUID, hash value, or other external identifier for the user who is associated with the request.
	// Anthropic may use this id to help detect abuse. Do not include any identifying information such as name, email
	// address, or phone number.
	UserID string `json:"user_id,omitempty"`
}

Metadata is an object describing metadata about the request.

type Model

type Model int

Model represents all models.

const (
	// UnknownModel represents an invalid model.
	UnknownModel Model = iota
	// Claude is Anthropic's largest model, ideal for a wide range of more complex tasks. This is the "major version"
	// which will automatically get updates to the model as they are released.
	Claude
	// Claude2Dot0 is Anthropic's largest model, ideal for a wide range of more complex tasks. If you rely on the exact
	// output shape, you should specify this full model version. It has a context window of 100K tokens.
	Claude2Dot0
	// Claude2Dot1 represents an improvement in specific capabilities and performance over Claude 2. With strong
	// accuracy upgrades, double the context window, and experimental tool use features, Claude can handle more complex
	// reasoning and generation while remaining honest and grounded in factual evidence.
	// Claude 2.1's context window is 200K tokens, enabling it to leverage much richer contextual information to
	// generate higher quality and more nuanced output.
	Claude2Dot1
	// ClaudeInstant is a smaller model with far lower latency, sampling at roughly 40 words/sec! Its output quality
	// is somewhat lower than the latest Claude model, particularly for complex tasks. However, it is much less
	// expensive and blazing fast. Anthropic believes that this model provides more than adequate performance on a range
	// of tasks including text classification, summarization, and lightweight chat applications, as well as search
	// result summarization. This is the "major version" which will automatically get updates to the model as they
	// are released.
	ClaudeInstant
	// ClaudeInstant1Dot1 is a smaller model with far lower latency, sampling at roughly 40 words/sec! Its output
	// quality is somewhat lower than the latest Claude model, particularly for complex tasks. However, it is much less
	// expensive and blazing fast. Anthropic believes that this model provides more than adequate performance on a range
	// of tasks including text classification, summarization, and lightweight chat applications, as well as search
	// result summarization. If you rely on the exact output shape, you should specify this full model version.
	ClaudeInstant1Dot1
)

func (Model) BedrockString added in v0.2.0

func (c Model) BedrockString() string

BedrockString returns the string representation of the model for use with AWS Bedrock.

func (Model) MarshalText

func (c Model) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (Model) String

func (c Model) String() string

String implements the fmt.Stringer interface.

func (*Model) UnmarshalText

func (c *Model) UnmarshalText(b []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface. On an unrecognized value, it sets |c| to UnknownModel.

type Prompt

type Prompt string

Prompt represents the prompt passed to the model. The text that you give Claude is designed to elicit, or "prompt", a relevant response. A prompt is usually in the form of a question or instruction. For example:

Human: Why is the sky blue?

Assistant:

Prompts sent via the API must contain \n\nHuman: and \n\nAssistant: as the signals of who's speaking. In Slack and our web interface we automatically add these for you.

func NewPromptFromMessages

func NewPromptFromMessages(msg []*Message) Prompt

NewPromptFromMessages returns a Prompt from a slice of |Message|s by wrapping them in the expected Human/Assistant format. You can use this style to "Put words in Claude's mouth." Note: this function does not validate the messages, and therefore can result in a 4xx response from the API.

func NewPromptFromString

func NewPromptFromString(s string) Prompt

NewPromptFromString returns a Prompt from a string by wrapping it in the expected Human/Assistant format.

func NewPromptFromStringWithSystemMessage added in v0.2.4

func NewPromptFromStringWithSystemMessage(system, human string) Prompt

NewPromptFromStringWithSystemMessage returns a Prompt from both a system and human string by wrapping them in the expected Human/Assistant format.

func NewValidPromptFromMessages added in v0.2.4

func NewValidPromptFromMessages(msgs Messages) (Prompt, error)

NewValidPromptFromMessages returns a Prompt from a slice of |Message|s by wrapping them in the expected Human/Assistant format. It also validates the messages to ensure they are in the correct format.
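
A sketch of building a validated prompt. The ordering rules follow from the errors documented above: a system message, if present, must come first, and the dialogue must end with an assistant message, whose text "puts words in Claude's mouth":

var msgs = anthropic.Messages{
	{UserType: anthropic.UserTypeSystem, Text: "You are a terse poet."},
	{UserType: anthropic.UserTypeHuman, Text: "Tell me a haiku about trees"},
	{UserType: anthropic.UserTypeAssistant, Text: "Here is a haiku:"},
}

prompt, err := anthropic.NewValidPromptFromMessages(msgs)
if err != nil {
	// Presumably one of ErrEmptyMessages, ErrBadSystemMessage, or ErrMissingAssistant.
	panic(err)
}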

type Request

type Request struct {
	// Prompt is the prompt you want Claude to complete. Required.
	Prompt Prompt `json:"prompt"`
	// Model controls which version of Claude answers your request. For more on the models, see the documentation in
	// models.go or visit https://console.anthropic.com/docs/api/reference. Required.
	// When making a request via AWS Bedrock, this will be zeroed out (and instead used as the model ID for the request),
	// thus the omitempty tag.
	Model Model `json:"model,omitempty"`
	// MaxTokensToSample is the maximum number of tokens to generate before stopping. Required.
	MaxTokensToSample int `json:"max_tokens_to_sample"`
	// StopSequences specifies a list of sequences to stop sampling at. Anthropic's models stop on "\n\nHuman:", and
	// may include additional built-in stop sequences in the future. By providing the stop_sequences parameter, you may
	// include additional strings that will cause the model to stop generating.
	StopSequences []string `json:"stop_sequences,omitempty"`
	// Temperature specifies the amount of randomness injected into the response. Ranges from 0 to 1. Use temp closer to
	// 0 for analytical / multiple choice, and temp closer to 1 for creative and generative tasks.
	// Optional. Defaults to 1.
	Temperature *float64 `json:"temperature,omitempty"`
	// TopK specifies to only sample from the top K options for each subsequent token. Used to remove "long tail" low
	// probability responses.
	// Optional. Defaults to -1, which disables it. You should either alter Temperature or TopP, but not both.
	TopK *int `json:"topK,omitempty"`
	// TopP does nucleus sampling, in which we compute the cumulative distribution over all the options for each
	// subsequent token in decreasing probability order and cut it off once it reaches a particular probability
	// specified by TopP.
	// Optional. Defaults to -1, which disables it. You should either alter Temperature or TopP, but not both.
	TopP *int `json:"topP,omitempty"`
	// Metadata is an object describing metadata about the request. Optional.
	Metadata *Metadata `json:"metadata,omitempty"`
}

Request represents the request to the API.
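
A fuller construction sketch; the prompt and values are illustrative, and per the comments above you should alter Temperature or TopP, but not both:

var req = &anthropic.Request{
	Prompt:            anthropic.NewPromptFromString("Classify the sentiment of this review."),
	Model:             anthropic.Claude2Dot1,
	MaxTokensToSample: 256,
	StopSequences:     []string{"\n\nHuman:"},
	Temperature:       anthropic.Optional(0.0), // analytical task: keep randomness low
	Metadata:          &anthropic.Metadata{UserID: "a1b2c3d4"}, // opaque external ID only
}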

type Response

type Response struct {
	// Completion is the resulting completion up to and excluding the stop sequences.
	Completion string `json:"completion"`
	// StopReason is the reason Anthropic stopped sampling. It will be one of "stop_sequence" or "max_tokens".
	StopReason *string `json:"stop_reason"`
	// Stop is the stop sequence that caused the model to stop sampling.
	Stop *string `json:"stop"`
	// Model is the model that performed the completion.
	Model Model `json:"model"`
}

Response represents the response from the API.

type ResponseError added in v0.2.3

type ResponseError struct {
	Err Error `json:"error"`
}

func (*ResponseError) Error added in v0.2.3

func (r *ResponseError) Error() string

Error implements the error interface.

func (*ResponseError) Retryable added in v0.2.3

func (r *ResponseError) Retryable() bool

Retryable returns true if the error is retryable. For now, we assume all 5xx errors are transient. We also return true on 429, but it's up to the caller to determine the appropriate retry strategy, given that rate limits are based on the number of concurrent requests.
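
A hedged retry sketch using errors.As; this assumes the client returns a *ResponseError (possibly wrapped), c, ctx, and req are in scope, and the backoff policy is illustrative:

var resp *anthropic.Response
var err error
for attempt := 0; attempt < 3; attempt++ {
	resp, err = c.NewCompletion(ctx, req)
	if err == nil {
		break
	}
	var respErr *anthropic.ResponseError
	if !errors.As(err, &respErr) || !respErr.Retryable() {
		break // not an API error, or not a transient one
	}
	time.Sleep(time.Duration(attempt+1) * time.Second) // naive linear backoff
}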

type UserType

type UserType string

UserType represents the types of users in the dialogue: Human, Assistant, and System.
