Documentation ¶
Index ¶
- Constants
- Variables
- func HandleAPIError(resp *http.Response) error
- func HandleNormalRequest(c Client, req *http.Request) (*http.Response, error)
- func HandleSendChatCompletionRequest(c Client, req *http.Request) (*http.Response, error)
- func HandleTimeout() (time.Duration, error)
- type APIError
- type APIModels
- type BalanceInfo
- type BalanceResponse
- type ChatCompletionMessage
- type ChatCompletionRequest
- type ChatCompletionResponse
- type ChatCompletionStream
- type Choice
- type Client
- func (c *Client) CreateChatCompletion(ctx context.Context, request *ChatCompletionRequest) (*ChatCompletionResponse, error)
- func (c *Client) CreateChatCompletionStream(ctx context.Context, request *StreamChatCompletionRequest) (ChatCompletionStream, error)
- func (c *Client) CreateFIMCompletion(ctx context.Context, request *FIMCompletionRequest) (*FIMCompletionResponse, error)
- func (c *Client) CreateFIMStreamCompletion(ctx context.Context, request *FIMStreamCompletionRequest) (FIMChatCompletionStream, error)
- type ContentToken
- type FIMChatCompletionStream
- type FIMCompletionRequest
- type FIMCompletionResponse
- type FIMStreamChoice
- type FIMStreamCompletionRequest
- type FIMStreamCompletionResponse
- type Function
- type FunctionParameters
- type HTTPDoer
- type JSONExtractor
- type Logprobs
- type Message
- type Model
- type Option
- type ResponseFormat
- type StreamChatCompletionMessage
- type StreamChatCompletionRequest
- type StreamChatCompletionResponse
- type StreamChoices
- type StreamDelta
- type StreamOptions
- type StreamUsage
- type TokenEstimate
- type Tool
- type ToolCall
- type ToolCallFunction
- type ToolChoice
- type ToolChoiceFunction
- type TopLogprobToken
- type Usage
Constants ¶
const (
    // ChatMessageRoleSystem is the role of a system message
    ChatMessageRoleSystem = constants.ChatMessageRoleSystem
    // ChatMessageRoleUser is the role of a user message
    ChatMessageRoleUser = constants.ChatMessageRoleUser
    // ChatMessageRoleAssistant is the role of an assistant message
    ChatMessageRoleAssistant = constants.ChatMessageRoleAssistant
    // ChatMessageRoleTool is the role of a tool message
    ChatMessageRoleTool = constants.ChatMessageRoleTool
)
const (
    DeepSeekChat     = "deepseek-chat"     // DeepSeekChat is the official model for chat completions
    DeepSeekCoder    = "deepseek-coder"    // DeepSeekCoder has been combined with DeepSeekChat, but you can still use it. Please read: https://api-docs.deepseek.com/updates#version-2024-09-05
    DeepSeekReasoner = "deepseek-reasoner" // DeepSeekReasoner is the official model for reasoning completions
)
Official DeepSeek Models
const (
    AzureDeepSeekR1                     = "DeepSeek-R1"                            // Azure model for DeepSeek R1
    OpenRouterDeepSeekR1                = "deepseek/deepseek-r1"                   // OpenRouter model for DeepSeek R1
    OpenRouterDeepSeekR1DistillLlama70B = "deepseek/deepseek-r1-distill-llama-70b" // DeepSeek R1 Distill Llama 70B
    OpenRouterDeepSeekR1DistillLlama8B  = "deepseek/deepseek-r1-distill-llama-8b"  // DeepSeek R1 Distill Llama 8B
    OpenRouterDeepSeekR1DistillQwen14B  = "deepseek/deepseek-r1-distill-qwen-14b"  // DeepSeek R1 Distill Qwen 14B
    OpenRouterDeepSeekR1DistillQwen1_5B = "deepseek/deepseek-r1-distill-qwen-1.5b" // DeepSeek R1 Distill Qwen 1.5B
    OpenRouterDeepSeekR1DistillQwen32B  = "deepseek/deepseek-r1-distill-qwen-32b"  // DeepSeek R1 Distill Qwen 32B
)
External Models that can be used with the API
const BaseURL string = "https://api.deepseek.com/v1"
BaseURL is the base URL for the Deepseek API
Variables ¶
var (
    // ErrChatCompletionStreamNotSupported is returned when streaming is not supported with the method.
    ErrChatCompletionStreamNotSupported = errors.New("streaming is not supported with this method")
    // ErrUnexpectedResponseFormat is returned when the response format is unexpected.
    ErrUnexpectedResponseFormat = errors.New("unexpected response format")
)
Functions ¶
func HandleAPIError ¶
HandleAPIError handles an error response from the API.
func HandleNormalRequest ¶ added in v1.1.1
HandleNormalRequest sends a request to the DeepSeek API and returns the response.
(xgfone): Do we need to export this function?
func HandleSendChatCompletionRequest ¶ added in v1.1.1
HandleSendChatCompletionRequest sends a request to the DeepSeek API and returns the response.
(xgfone): Do we need to export this function?
func HandleTimeout ¶ added in v1.1.1
HandleTimeout gets the timeout duration from the DEEPSEEK_TIMEOUT environment variable.
(xgfone): Do we need to export the function?
Types ¶
type APIError ¶
type APIError struct {
    StatusCode    int    // HTTP status code
    APICode       int    // Business error code from API response
    Message       string // Human-readable error message
    OriginalError error  // Wrapped error for debugging
    ResponseBody  string // Raw JSON response body
}
APIError represents an error returned by the API.
type APIModels ¶ added in v0.1.1
type APIModels struct {
    Object string  `json:"object"` // Object type (string)
    Data   []Model `json:"data"`   // List of Models
}
APIModels represents the response from the models list endpoint.
type BalanceInfo ¶
type BalanceInfo struct {
    Currency        string `json:"currency"`          // The currency of the balance.
    TotalBalance    string `json:"total_balance"`     // The total available balance, including the granted balance and the topped-up balance.
    GrantedBalance  string `json:"granted_balance"`   // The total not expired granted balance.
    ToppedUpBalance string `json:"topped_up_balance"` // The total topped-up balance.
}
BalanceInfo represents the balance information for a specific currency.
type BalanceResponse ¶
type BalanceResponse struct {
    IsAvailable  bool          `json:"is_available"`  // Whether the user's balance is sufficient for API calls.
    BalanceInfos []BalanceInfo `json:"balance_infos"` // List of balance infos.
}
BalanceResponse represents the response from the balance endpoint.
func GetBalance ¶
func GetBalance(c *Client, ctx context.Context) (*BalanceResponse, error)
GetBalance sends a request to the API to get the user's balance.
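A minimal usage sketch (the import path github.com/cohesion-org/deepseek-go and the NewClient constructor call are assumptions based on this page; GetBalance itself is as documented above):

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    deepseek "github.com/cohesion-org/deepseek-go" // assumed import path
)

func main() {
    client := deepseek.NewClient(os.Getenv("DEEPSEEK_API_KEY"))
    balance, err := deepseek.GetBalance(client, context.Background())
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("sufficient balance:", balance.IsAvailable)
    for _, b := range balance.BalanceInfos {
        fmt.Printf("%s: total=%s granted=%s topped_up=%s\n",
            b.Currency, b.TotalBalance, b.GrantedBalance, b.ToppedUpBalance)
    }
}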
type ChatCompletionMessage ¶
type ChatCompletionMessage struct {
    Role             string     `json:"role"`                        // The role of the message sender, e.g., "user", "assistant", "system".
    Content          string     `json:"content"`                     // The content of the message.
    Prefix           bool       `json:"prefix,omitempty"`            // The prefix of the message (optional) for Chat Prefix Completion [Beta Feature].
    ReasoningContent string     `json:"reasoning_content,omitempty"` // The reasoning content of the message (optional) when using the reasoner model with Chat Prefix Completion. When using this feature, the Prefix parameter must be set to true.
    ToolCallID       string     `json:"tool_call_id,omitempty"`      // Tool call that this message is responding to.
    ToolCalls        []ToolCall `json:"tool_calls,omitempty"`        // Optional tool calls.
}
ChatCompletionMessage represents a single message in a chat completion conversation.
func MapMessageToChatCompletionMessage ¶
func MapMessageToChatCompletionMessage(m Message) (ChatCompletionMessage, error)
MapMessageToChatCompletionMessage maps a Message to a ChatCompletionMessage
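This is handy for multi-turn conversations. A sketch of feeding the model's reply back into the next request (resp and req are assumed to come from a prior CreateChatCompletion call):

// Convert the reply in resp.Choices[0] into a request message and
// extend the conversation for the next turn.
msg, err := deepseek.MapMessageToChatCompletionMessage(resp.Choices[0].Message)
if err != nil {
    log.Fatal(err)
}
req.Messages = append(req.Messages, msg)
req.Messages = append(req.Messages, deepseek.ChatCompletionMessage{
    Role:    deepseek.ChatMessageRoleUser,
    Content: "And why is that?",
})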
type ChatCompletionRequest ¶
type ChatCompletionRequest struct {
    Model            string                  `json:"model"`                       // The ID of the model to use (required).
    Messages         []ChatCompletionMessage `json:"messages"`                    // A list of messages comprising the conversation (required).
    FrequencyPenalty float32                 `json:"frequency_penalty,omitempty"` // Penalty for new tokens based on their frequency in the text so far (optional).
    MaxTokens        int                     `json:"max_tokens,omitempty"`        // The maximum number of tokens to generate in the chat completion (optional).
    PresencePenalty  float32                 `json:"presence_penalty,omitempty"`  // Penalty for new tokens based on their presence in the text so far (optional).
    Temperature      float32                 `json:"temperature,omitempty"`       // The sampling temperature, between 0 and 2 (optional).
    TopP             float32                 `json:"top_p,omitempty"`             // The nucleus sampling parameter, between 0 and 1 (optional).
    ResponseFormat   *ResponseFormat         `json:"response_format,omitempty"`   // The desired response format (optional).
    Stop             []string                `json:"stop,omitempty"`              // A list of sequences where the model should stop generating further tokens (optional).
    Tools            []Tool                  `json:"tools,omitempty"`             // A list of tools the model may use (optional).
    ToolChoice       interface{}             `json:"tool_choice,omitempty"`       // Controls which (if any) tool is called by the model (optional).
    LogProbs         bool                    `json:"logprobs,omitempty"`          // Whether to return log probabilities of the most likely tokens (optional).
    TopLogProbs      int                     `json:"top_logprobs,omitempty"`      // The number of top most likely tokens to return log probabilities for (optional).
    JSONMode         bool                    `json:"json,omitempty"`              // [deepseek-go feature] Optional: enable JSON mode. When using JSON mode, mention "json" somewhere in your prompt and include the JSON schema in the request.
}
ChatCompletionRequest defines the structure for a chat completion request.
type ChatCompletionResponse ¶ added in v1.1.1
type ChatCompletionResponse struct {
    ID                string   `json:"id"`                           // Unique identifier for the chat completion.
    Object            string   `json:"object"`                       // Type of the object, typically "chat.completion".
    Created           int64    `json:"created"`                      // Timestamp when the chat completion was created.
    Model             string   `json:"model"`                        // The model used for generating the completion.
    Choices           []Choice `json:"choices"`                      // List of completion choices generated by the model.
    Usage             Usage    `json:"usage"`                        // Token usage statistics.
    SystemFingerprint *string  `json:"system_fingerprint,omitempty"` // Fingerprint of the system configuration.
}
ChatCompletionResponse represents a response from the chat completion endpoint.
func HandleChatCompletionResponse ¶ added in v1.1.1
func HandleChatCompletionResponse(resp *http.Response) (*ChatCompletionResponse, error)
HandleChatCompletionResponse parses the response from the chat completion endpoint.
type ChatCompletionStream ¶
type ChatCompletionStream interface {
    Recv() (*StreamChatCompletionResponse, error)
    Close() error
}
ChatCompletionStream is an interface for receiving streaming chat completion responses.
type Choice ¶ added in v1.1.1
type Choice struct {
    Index        int       `json:"index"`              // Index of the choice in the list of choices.
    Message      Message   `json:"message"`            // The message generated by the model.
    Logprobs     *Logprobs `json:"logprobs,omitempty"` // Log probabilities of the tokens, if available.
    FinishReason string    `json:"finish_reason"`      // Reason why the completion finished.
}
Choice represents a completion choice generated by the model.
type Client ¶
type Client struct {
    AuthToken  string        // The authentication token for the API
    BaseURL    string        // The base URL for the API
    Timeout    time.Duration // The timeout for the current Client
    Path       string        // The path for the API request. Defaults to "chat/completions"
    HTTPClient HTTPDoer      // The HTTP client used to send requests and receive responses
}
Client is the main struct for interacting with the Deepseek API.
func NewClient ¶
NewClient creates a new client with an authentication token and an optional custom baseURL. If no baseURL is provided, it defaults to "https://api.deepseek.com/". You can't set path with this method. If you want to set path, use NewClientWithOptions.
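A construction sketch (the exact signature is not shown on this page; a token plus a variadic base URL is inferred from the description above):

client := deepseek.NewClient(os.Getenv("DEEPSEEK_API_KEY"))
// Or point the client at a different endpoint:
betaClient := deepseek.NewClient(os.Getenv("DEEPSEEK_API_KEY"), "https://api.deepseek.com/beta/")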
func NewClientWithOptions ¶ added in v1.1.1
NewClientWithOptions creates a new client with a required authentication token and optional configurations. Defaults:
- BaseURL: "https://api.deepseek.com/"
- Timeout: 5 minutes
func (*Client) CreateChatCompletion ¶
func (c *Client) CreateChatCompletion(
    ctx context.Context,
    request *ChatCompletionRequest,
) (*ChatCompletionResponse, error)
CreateChatCompletion sends a chat completion request and returns the generated response.
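An end-to-end sketch (the import path github.com/cohesion-org/deepseek-go is an assumption; model and role constants are as documented above):

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    deepseek "github.com/cohesion-org/deepseek-go" // assumed import path
)

func main() {
    client := deepseek.NewClient(os.Getenv("DEEPSEEK_API_KEY"))
    resp, err := client.CreateChatCompletion(context.Background(), &deepseek.ChatCompletionRequest{
        Model: deepseek.DeepSeekChat,
        Messages: []deepseek.ChatCompletionMessage{
            {Role: deepseek.ChatMessageRoleSystem, Content: "You are a helpful assistant."},
            {Role: deepseek.ChatMessageRoleUser, Content: "Say hello in one word."},
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.Choices[0].Message.Content)
    fmt.Printf("prompt=%d completion=%d total=%d\n",
        resp.Usage.PromptTokens, resp.Usage.CompletionTokens, resp.Usage.TotalTokens)
}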
func (*Client) CreateChatCompletionStream ¶
func (c *Client) CreateChatCompletionStream(
    ctx context.Context,
    request *StreamChatCompletionRequest,
) (ChatCompletionStream, error)
CreateChatCompletionStream sends a chat completion request with stream set to true and returns a stream that yields delta responses.
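A consumption sketch using the Recv and Close methods defined by ChatCompletionStream above (treating io.EOF as the end-of-stream sentinel is an assumption):

// Assumes: import ("context"; "errors"; "fmt"; "io") plus the deepseek package.
func streamChat(ctx context.Context, client *deepseek.Client) error {
    stream, err := client.CreateChatCompletionStream(ctx, &deepseek.StreamChatCompletionRequest{
        Model:         deepseek.DeepSeekChat,
        Messages:      []deepseek.ChatCompletionMessage{{Role: deepseek.ChatMessageRoleUser, Content: "Count to five."}},
        StreamOptions: deepseek.StreamOptions{IncludeUsage: true},
    })
    if err != nil {
        return err
    }
    defer stream.Close()
    for {
        chunk, err := stream.Recv()
        if errors.Is(err, io.EOF) {
            return nil // end of stream
        }
        if err != nil {
            return err
        }
        for _, c := range chunk.Choices {
            fmt.Print(c.Delta.Content) // print each delta as it arrives
        }
        if chunk.Usage != nil && chunk.Usage.TotalTokens > 0 {
            // Per the StreamUsage note below, counts stay zero until the final delta.
            fmt.Printf("\nusage: %d tokens\n", chunk.Usage.TotalTokens)
        }
    }
}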
func (*Client) CreateFIMCompletion ¶ added in v1.1.2
func (c *Client) CreateFIMCompletion(
    ctx context.Context,
    request *FIMCompletionRequest,
) (*FIMCompletionResponse, error)
CreateFIMCompletion is a beta feature. It sends a FIM completion request and returns the generated response. The base URL is set to "https://api.deepseek.com/beta/".
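A sketch of a fill-in-the-middle call (fields as documented in FIMCompletionRequest below; whether deepseek-chat is the right model for FIM is an assumption):

func fillInMiddle(ctx context.Context, client *deepseek.Client) (string, error) {
    resp, err := client.CreateFIMCompletion(ctx, &deepseek.FIMCompletionRequest{
        Model:     deepseek.DeepSeekChat,
        Prompt:    "func add(a, b int) int {\n\treturn ",
        Suffix:    "\n}",
        MaxTokens: 32,
    })
    if err != nil {
        return "", err
    }
    return resp.Choices[0].Text, nil // the text generated between prompt and suffix
}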
func (*Client) CreateFIMStreamCompletion ¶ added in v1.1.2
func (c *Client) CreateFIMStreamCompletion(
    ctx context.Context,
    request *FIMStreamCompletionRequest,
) (FIMChatCompletionStream, error)
CreateFIMStreamCompletion sends a FIM completion request with stream set to true and returns a stream that yields delta responses.
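A consumption sketch mirroring the chat stream, but with FIMRecv and FIMClose (io.EOF as the end marker is again an assumption):

// Assumes: import ("context"; "errors"; "fmt"; "io") plus the deepseek package.
func streamFIM(ctx context.Context, client *deepseek.Client) error {
    stream, err := client.CreateFIMStreamCompletion(ctx, &deepseek.FIMStreamCompletionRequest{
        Model:  deepseek.DeepSeekChat,
        Prompt: "def fib(n):",
        Stream: true,
    })
    if err != nil {
        return err
    }
    defer stream.FIMClose()
    for {
        chunk, err := stream.FIMRecv()
        if errors.Is(err, io.EOF) {
            return nil // end of stream
        }
        if err != nil {
            return err
        }
        for _, c := range chunk.Choices {
            fmt.Print(c.Text)
        }
    }
}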
type ContentToken ¶ added in v1.2.0
type ContentToken struct {
    Token       string            `json:"token"`           // The token string.
    Logprob     float64           `json:"logprob"`         // The log probability of this token. -9999.0 if not in top 20.
    Bytes       []int             `json:"bytes,omitempty"` // UTF-8 byte representation of the token. Can be nil.
    TopLogprobs []TopLogprobToken `json:"top_logprobs"`    // List of the most likely tokens and their log probabilities.
}
ContentToken represents a single token within the content with its log probability and byte information.
type FIMChatCompletionStream ¶ added in v1.2.0
type FIMChatCompletionStream interface {
    FIMRecv() (*FIMStreamCompletionResponse, error)
    FIMClose() error
}
FIMChatCompletionStream is an interface for receiving streaming FIM completion responses.
type FIMCompletionRequest ¶ added in v1.1.2
type FIMCompletionRequest struct {
    Model            string   `json:"model"`                       // Model name to use for completion.
    Prompt           string   `json:"prompt"`                      // The prompt to start the completion from.
    Suffix           string   `json:"suffix,omitempty"`            // Optional: The suffix to complete the prompt with.
    MaxTokens        int      `json:"max_tokens,omitempty"`        // Optional: Maximum tokens to generate, > 1 and <= 4000.
    Temperature      float64  `json:"temperature,omitempty"`       // Optional: Sampling temperature, between 0 and 1.
    TopP             float64  `json:"top_p,omitempty"`             // Optional: Nucleus sampling probability threshold.
    N                int      `json:"n,omitempty"`                 // Optional: Number of completions to generate.
    Logprobs         int      `json:"logprobs,omitempty"`          // Optional: Number of log probabilities to return.
    Echo             bool     `json:"echo,omitempty"`              // Optional: Whether to echo the prompt in the completion.
    Stop             []string `json:"stop,omitempty"`              // Optional: List of stop sequences.
    PresencePenalty  float64  `json:"presence_penalty,omitempty"`  // Optional: Penalty for new tokens based on their presence in the text so far.
    FrequencyPenalty float64  `json:"frequency_penalty,omitempty"` // Optional: Penalty for new tokens based on their frequency in the text so far.
}
FIMCompletionRequest represents the request body for a Fill-In-the-Middle (FIM) completion.
type FIMCompletionResponse ¶ added in v1.1.2
type FIMCompletionResponse struct {
    ID      string `json:"id"`      // Unique ID for the completion.
    Object  string `json:"object"`  // The object type, e.g., "text_completion".
    Created int    `json:"created"` // Timestamp of when the completion was created.
    Model   string `json:"model"`   // Model used for the completion.
    Choices []struct {
        Text         string   `json:"text"`          // The generated completion text.
        Index        int      `json:"index"`         // Index of the choice.
        Logprobs     Logprobs `json:"logprobs"`      // Log probabilities of the generated tokens (if requested).
        FinishReason string   `json:"finish_reason"` // Reason for finishing the completion, e.g., "stop", "length".
    } `json:"choices"`
    Usage struct {
        PromptTokens     int `json:"prompt_tokens"`     // Number of tokens in the prompt.
        CompletionTokens int `json:"completion_tokens"` // Number of tokens in the completion.
        TotalTokens      int `json:"total_tokens"`      // Total number of tokens used.
    } `json:"usage"`
}
FIMCompletionResponse represents the response body for a Fill-In-the-Middle (FIM) completion.
func HandleFIMCompletionRequest ¶ added in v1.1.2
func HandleFIMCompletionRequest(resp *http.Response) (*FIMCompletionResponse, error)
HandleFIMCompletionRequest parses the response from the FIM completion endpoint.
type FIMStreamChoice ¶ added in v1.2.0
type FIMStreamChoice struct {
    // Text generated by the model for this choice.
    Text string `json:"text"`
    // Index of this choice within the list of choices.
    Index int `json:"index"`
    // Log probabilities for the generated tokens (if available). May be `nil`.
    Logprobs Logprobs `json:"logprobs,omitempty"`
    // Reason why the generation finished (e.g., "stop", "length"). May be `nil`.
    FinishReason interface{} `json:"finish_reason,omitempty"`
}
FIMStreamChoice represents a single choice within a streaming Fill-In-the-Middle (FIM) completion response.
type FIMStreamCompletionRequest ¶ added in v1.1.2
type FIMStreamCompletionRequest struct {
    Model            string        `json:"model"`                       // Model name to use for completion.
    Prompt           string        `json:"prompt"`                      // The prompt to start the completion from.
    Stream           bool          `json:"stream"`                      // Whether to stream the completion. This is the key difference.
    StreamOptions    StreamOptions `json:"stream_options,omitempty"`    // Optional: Options for streaming the completion.
    Suffix           string        `json:"suffix,omitempty"`            // Optional: The suffix to complete the prompt with.
    MaxTokens        int           `json:"max_tokens,omitempty"`        // Optional: Maximum tokens to generate, > 1 and <= 4000.
    Temperature      float64       `json:"temperature,omitempty"`       // Optional: Sampling temperature, between 0 and 1.
    TopP             float64       `json:"top_p,omitempty"`             // Optional: Nucleus sampling probability threshold.
    N                int           `json:"n,omitempty"`                 // Optional: Number of completions to generate.
    Logprobs         int           `json:"logprobs,omitempty"`          // Optional: Number of log probabilities to return.
    Echo             bool          `json:"echo,omitempty"`              // Optional: Whether to echo the prompt in the completion.
    Stop             []string      `json:"stop,omitempty"`              // Optional: List of stop sequences.
    PresencePenalty  float64       `json:"presence_penalty,omitempty"`  // Optional: Penalty for new tokens based on their presence in the text so far.
    FrequencyPenalty float64       `json:"frequency_penalty,omitempty"` // Optional: Penalty for new tokens based on their frequency in the text so far.
}
FIMStreamCompletionRequest represents the request body for a streaming Fill-In-the-Middle (FIM) completion. It's similar to FIMCompletionRequest but includes a `Stream` field.
type FIMStreamCompletionResponse ¶ added in v1.2.0
type FIMStreamCompletionResponse struct {
    // Unique identifier for the completion response.
    ID string `json:"id"`
    // List of choices generated by the model. Each choice represents a possible completion.
    Choices []FIMStreamChoice `json:"choices"`
    // Unix timestamp (seconds since the epoch) of when the completion was created.
    Created int64 `json:"created"`
    // Name of the model used for the completion.
    Model string `json:"model"`
    // Fingerprint of the system that generated the completion.
    SystemFingerprint string `json:"system_fingerprint"`
    // Type of object returned (always "text_completion" for FIM completions).
    Object string `json:"object"`
    // Usage statistics for the completion request (if available). May be `nil`.
    Usage *StreamUsage `json:"usage,omitempty"`
}
FIMStreamCompletionResponse represents the full response body for a streaming Fill-In-the-Middle (FIM) completion. It contains metadata about the completion request and a list of choices generated by the model.
type Function ¶
type Function struct {
    Name        string              `json:"name"`                 // The name of the function (required).
    Description string              `json:"description"`          // A description of the function (required).
    Parameters  *FunctionParameters `json:"parameters,omitempty"` // The parameters of the function (optional).
}
Function defines the structure of a function tool.
type FunctionParameters ¶ added in v1.1.2
type FunctionParameters struct {
    Type       string                 `json:"type"`                 // The type of the parameters, e.g., "object" (required).
    Properties map[string]interface{} `json:"properties,omitempty"` // The properties of the parameters (optional).
    Required   []string               `json:"required,omitempty"`   // A list of required parameter names (optional).
}
FunctionParameters defines the parameters for a function.
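Putting Tool, Function, and FunctionParameters together, a request that exposes one function might look like this sketch (the get_weather function is hypothetical):

weatherTool := deepseek.Tool{
    Type: "function",
    Function: deepseek.Function{
        Name:        "get_weather", // hypothetical function name
        Description: "Get the current weather for a city",
        Parameters: &deepseek.FunctionParameters{
            Type: "object",
            Properties: map[string]interface{}{
                "city": map[string]interface{}{"type": "string"},
            },
            Required: []string{"city"},
        },
    },
}
req := &deepseek.ChatCompletionRequest{
    Model:    deepseek.DeepSeekChat,
    Messages: []deepseek.ChatCompletionMessage{{Role: deepseek.ChatMessageRoleUser, Content: "What's the weather in Paris?"}},
    Tools:    []deepseek.Tool{weatherTool},
}
// Pass req to client.CreateChatCompletion as usual.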
type JSONExtractor ¶ added in v1.1.1
type JSONExtractor struct {
// contains filtered or unexported fields
}
JSONExtractor helps extract structured data from LLM responses
func NewJSONExtractor ¶ added in v1.1.1
func NewJSONExtractor(schema json.RawMessage) *JSONExtractor
NewJSONExtractor creates a new JSONExtractor instance
func (*JSONExtractor) ExtractJSON ¶ added in v1.1.1
func (je *JSONExtractor) ExtractJSON(response *ChatCompletionResponse, target interface{}) error
ExtractJSON attempts to extract and parse JSON from an LLM response
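A sketch of pulling typed data out of a completion (passing nil as the schema to skip validation is an assumption):

type CityFacts struct {
    Name       string `json:"name"`
    Population int    `json:"population"`
}

func extractCity(ctx context.Context, client *deepseek.Client) (*CityFacts, error) {
    resp, err := client.CreateChatCompletion(ctx, &deepseek.ChatCompletionRequest{
        Model:    deepseek.DeepSeekChat,
        JSONMode: true, // remember to mention "json" in the prompt
        Messages: []deepseek.ChatCompletionMessage{{
            Role:    deepseek.ChatMessageRoleUser,
            Content: `Return json like {"name": "...", "population": 0} for Paris.`,
        }},
    })
    if err != nil {
        return nil, err
    }
    var city CityFacts
    je := deepseek.NewJSONExtractor(nil) // nil schema: skip validation (assumption)
    if err := je.ExtractJSON(resp, &city); err != nil {
        return nil, err
    }
    return &city, nil
}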
type Logprobs ¶ added in v1.2.0
type Logprobs struct {
Content []ContentToken `json:"content"` // A list of message content tokens with log probability information.
}
Logprobs represents log probability information for a choice or token.
type Message ¶ added in v1.1.1
type Message struct {
    Role             string     `json:"role"`                        // Role of the message sender (e.g., "user", "assistant").
    Content          string     `json:"content"`                     // Content of the message.
    ReasoningContent string     `json:"reasoning_content,omitempty"` // Optional reasoning content.
    ToolCalls        []ToolCall `json:"tool_calls,omitempty"`        // Optional tool calls.
}
Message represents a message generated by the model.
type Model ¶ added in v0.1.1
type Model struct {
    ID      string `json:"id"`       // The ID of the model (string).
    Object  string `json:"object"`   // The object type of the model (string).
    OwnedBy string `json:"owned_by"` // The owner of the model (usually deepseek).
}
Model represents a model that can be used with the API
type Option ¶ added in v1.1.1
Option configures a Client instance
func WithBaseURL ¶ added in v1.1.1
WithBaseURL sets the base URL for the API client
func WithHTTPClient ¶ added in v1.2.5
WithHTTPClient sets the http client for the API client.
func WithPath ¶ added in v1.2.4
WithPath sets the path for the API request. Defaults to "chat/completions" if not set. Example usage: "/c/chat/" or any path appended after the base URL.
func WithTimeout ¶ added in v1.1.1
WithTimeout sets the timeout for API requests
func WithTimeoutString ¶ added in v1.1.1
WithTimeoutString parses a duration string and sets the timeout. Example valid values: "5s", "2m", "1h".
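A construction sketch combining several options (the exact signatures of NewClientWithOptions and the With* helpers are inferred from the descriptions above):

// Assumes: import ("log"; "os") plus the deepseek package.
client, err := deepseek.NewClientWithOptions(
    os.Getenv("DEEPSEEK_API_KEY"),
    deepseek.WithBaseURL("https://api.deepseek.com/"),
    deepseek.WithTimeoutString("2m"), // equivalent to WithTimeout(2 * time.Minute)
    deepseek.WithPath("chat/completions"),
)
if err != nil {
    log.Fatal(err)
}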
type ResponseFormat ¶
type ResponseFormat struct {
Type string `json:"type"` // The desired response format, either "text" or "json_object".
}
ResponseFormat defines the structure for the response format.
type StreamChatCompletionMessage ¶
type StreamChatCompletionMessage struct {
    Role    string `json:"role"`
    Content string `json:"content"`
}
StreamChatCompletionMessage represents a single message in a chat completion stream.
type StreamChatCompletionRequest ¶
type StreamChatCompletionRequest struct {
    Stream           bool                    `json:"stream,omitempty"`            // Defaults to true, since this is a streaming request.
    StreamOptions    StreamOptions           `json:"stream_options,omitempty"`    // Optional: Stream options for the request.
    Model            string                  `json:"model"`                       // Required: Model ID, e.g., "deepseek-chat".
    Messages         []ChatCompletionMessage `json:"messages"`                    // Required: List of messages.
    FrequencyPenalty float32                 `json:"frequency_penalty,omitempty"` // Optional: Frequency penalty, >= -2 and <= 2.
    MaxTokens        int                     `json:"max_tokens,omitempty"`        // Optional: Maximum tokens, > 1.
    PresencePenalty  float32                 `json:"presence_penalty,omitempty"`  // Optional: Presence penalty, >= -2 and <= 2.
    Temperature      float32                 `json:"temperature,omitempty"`       // Optional: Sampling temperature, <= 2.
    TopP             float32                 `json:"top_p,omitempty"`             // Optional: Nucleus sampling parameter, <= 1.
    ResponseFormat   *ResponseFormat         `json:"response_format,omitempty"`   // Optional: Custom response format; currently broken for streaming requests.
    Stop             []string                `json:"stop,omitempty"`              // Optional: Stop sequences.
    Tools            []Tool                  `json:"tools,omitempty"`             // Optional: List of tools.
    LogProbs         bool                    `json:"logprobs,omitempty"`          // Optional: Enable log probabilities.
    TopLogProbs      int                     `json:"top_logprobs,omitempty"`      // Optional: Number of top tokens with log probabilities, <= 20.
}
StreamChatCompletionRequest represents the request body for a streaming chat completion API call.
type StreamChatCompletionResponse ¶
type StreamChatCompletionResponse struct {
    ID      string          `json:"id"`              // ID of the response.
    Object  string          `json:"object"`          // Type of object.
    Created int64           `json:"created"`         // Creation timestamp.
    Model   string          `json:"model"`           // Model used.
    Choices []StreamChoices `json:"choices"`         // Choices generated.
    Usage   *StreamUsage    `json:"usage,omitempty"` // Usage statistics (optional).
}
StreamChatCompletionResponse represents a single response from a streaming chat completion API call.
type StreamChoices ¶
type StreamChoices struct {
    Index        int         `json:"index"` // Index of the choice.
    Delta        StreamDelta // Delta information for the choice.
    FinishReason string      `json:"finish_reason,omitempty"` // Reason for finishing the generation.
    Logprobs     Logprobs    `json:"logprobs,omitempty"`      // Log probabilities for the generated tokens.
}
StreamChoices represents a choice in the chat completion stream.
type StreamDelta ¶
type StreamDelta struct {
    Role             string `json:"role,omitempty"`              // Role of the message.
    Content          string `json:"content"`                     // Content of the message.
    ReasoningContent string `json:"reasoning_content,omitempty"` // Reasoning content of the message.
}
StreamDelta represents a delta in the chat completion stream.
type StreamOptions ¶
type StreamOptions struct {
IncludeUsage bool `json:"include_usage"` // Whether to include usage information in the stream. The API sometimes returns usage even when this is set to false.
}
StreamOptions provides options for streaming chat completion responses.
type StreamUsage ¶
type StreamUsage struct {
    PromptTokens     int `json:"prompt_tokens"`     // Number of tokens in the prompt.
    CompletionTokens int `json:"completion_tokens"` // Number of tokens in the completion.
    TotalTokens      int `json:"total_tokens"`      // Total number of tokens used.
}
StreamUsage represents token usage statistics for a streaming chat completion response. All counts are zero ({0 0 0}) until the final stream delta.
type TokenEstimate ¶ added in v0.1.1
type TokenEstimate struct {
EstimatedTokens int `json:"estimated_tokens"` // The total estimated prompt tokens. These are different from the total tokens used.
}
TokenEstimate represents an estimated token count
func EstimateTokenCount ¶ added in v0.1.1
func EstimateTokenCount(text string) *TokenEstimate
EstimateTokenCount estimates the number of tokens in a text based on character type ratios
func EstimateTokensFromMessages ¶ added in v0.1.1
func EstimateTokensFromMessages(messages *ChatCompletionRequest) *TokenEstimate
EstimateTokensFromMessages estimates the number of tokens in a list of chat messages
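A quick sketch of both estimators (these are heuristics, not billed counts; see the TokenEstimate note above):

est := deepseek.EstimateTokenCount("Hello, how are you?")
fmt.Println("estimated text tokens:", est.EstimatedTokens)

req := &deepseek.ChatCompletionRequest{
    Model:    deepseek.DeepSeekChat,
    Messages: []deepseek.ChatCompletionMessage{{Role: deepseek.ChatMessageRoleUser, Content: "Hello!"}},
}
fmt.Println("estimated message tokens:", deepseek.EstimateTokensFromMessages(req).EstimatedTokens)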
type Tool ¶ added in v1.1.2
type Tool struct {
    Type     string   `json:"type"`     // The type of the tool, e.g., "function" (required).
    Function Function `json:"function"` // The function details (required).
}
Tool defines the structure for a tool.
type ToolCall ¶ added in v1.1.2
type ToolCall struct {
    Index    int              `json:"index"`    // Index of the tool call.
    ID       string           `json:"id"`       // Unique identifier for the tool call.
    Type     string           `json:"type"`     // Type of the tool call, e.g., "function".
    Function ToolCallFunction `json:"function"` // The function details for the call.
}
ToolCall represents a tool call in the completion.
type ToolCallFunction ¶ added in v1.1.2
type ToolCallFunction struct {
    Name      string `json:"name"`      // Name of the function (required).
    Arguments string `json:"arguments"` // JSON string of arguments passed to the function (required).
}
ToolCallFunction represents a function call in the tool.
type ToolChoice ¶ added in v1.1.2
type ToolChoice struct {
    Type     string             `json:"type"`               // The type of the tool, e.g., "function" (required).
    Function ToolChoiceFunction `json:"function,omitempty"` // The function details (optional, but required if type is "function").
}
ToolChoice defines the structure for a tool choice.
type ToolChoiceFunction ¶ added in v1.1.2
type ToolChoiceFunction struct {
Name string `json:"name"` // The name of the function to call (required).
}
ToolChoiceFunction defines the function details within ToolChoice.
type TopLogprobToken ¶ added in v1.2.0
type TopLogprobToken struct {
    Token   string  `json:"token"`           // The token string.
    Logprob float64 `json:"logprob"`         // The log probability of this token. -9999.0 if not in top 20.
    Bytes   []int   `json:"bytes,omitempty"` // UTF-8 byte representation of the token. Can be nil.
}
TopLogprobToken represents a single token within the top log probabilities with its log probability and byte information.
type Usage ¶ added in v1.1.1
type Usage struct {
    PromptTokens          int `json:"prompt_tokens"`            // Number of tokens used in the prompt.
    CompletionTokens      int `json:"completion_tokens"`        // Number of tokens used in the completion.
    TotalTokens           int `json:"total_tokens"`             // Total number of tokens used.
    PromptCacheHitTokens  int `json:"prompt_cache_hit_tokens"`  // Number of tokens served from cache.
    PromptCacheMissTokens int `json:"prompt_cache_miss_tokens"` // Number of tokens not served from cache.
}
Usage represents token usage statistics.
Directories ¶
Path | Synopsis
---|---
constants | package constants contains the constants used in the deepseek-go package
examples |
internal |
testutil | Package testutil provides testing utilities for the DeepSeek client.