Documentation
Overview
Package aws provides the AWS adaptor for the relay service.
Index
- Constants
- Variables
- func Handler(meta *meta.Meta, c *gin.Context) (*relaymodel.ErrorWithStatusCode, *relaymodel.Usage)
- func RenderPrompt(messages []*relaymodel.Message) string
- func ResponseLlama2OpenAI(llamaResponse *Response) *openai.TextResponse
- func StreamHandler(meta *meta.Meta, c *gin.Context) (*relaymodel.ErrorWithStatusCode, *relaymodel.Usage)
- func StreamResponseLlama2OpenAI(llamaResponse *StreamResponse) *openai.ChatCompletionsStreamResponse
- type Adaptor
- type Request
  - func ConvertRequest(textRequest *relaymodel.GeneralOpenAIRequest) *Request
- type Response
- type StreamResponse
Constants
const (
ConvertedRequest = "convertedRequest"
)
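ConvertedRequest reads like a gin context key under which the adaptor stashes the converted Bedrock request between the conversion and handling phases. A minimal sketch of that convention (the helpers are hypothetical; the actual flow is internal to the package):

func storeConvertedRequest(c *gin.Context, req *Request) {
	c.Set(ConvertedRequest, req) // key defined by this package
}

func loadConvertedRequest(c *gin.Context) (*Request, bool) {
	v, ok := c.Get(ConvertedRequest)
	if !ok {
		return nil, false
	}
	req, ok := v.(*Request)
	return req, ok
}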
Variables

var AwsModelIDMap = map[string]awsModelItem{
	"llama3-8b-8192": {
		ModelConfig: model.ModelConfig{
			Model: "llama3-8b-8192",
			Type:  relaymode.ChatCompletions,
			Owner: model.ModelOwnerMeta,
		},
		ID: "meta.llama3-8b-instruct-v1:0",
	},
	"llama3-70b-8192": {
		ModelConfig: model.ModelConfig{
			Model: "llama3-70b-8192",
			Type:  relaymode.ChatCompletions,
			Owner: model.ModelOwnerMeta,
		},
		ID: "meta.llama3-70b-instruct-v1:0",
	},
}
AwsModelIDMap maps internal model identifiers to AWS Bedrock model identifiers. It currently supports only the Llama 3 8B and 70B instruct models. For more details, see: https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html
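Resolving a model is a plain map index; awsModelItem is unexported, but the ModelConfig and ID fields shown above are readable on the returned value. A quick sketch:

item, ok := AwsModelIDMap["llama3-8b-8192"]
if !ok {
	// the adaptor does not support this model
}
bedrockID := item.ID // "meta.llama3-8b-instruct-v1:0"
_ = bedrockID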
Functions

func Handler
func Handler(meta *meta.Meta, c *gin.Context) (*relaymodel.ErrorWithStatusCode, *relaymodel.Usage)
func RenderPrompt
func RenderPrompt(messages []*relaymodel.Message) string
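RenderPrompt flattens an OpenAI-style message list into the single prompt string that Request.Prompt expects. A hedged sketch of a call; the Role and Content field names on relaymodel.Message are assumptions not shown on this page:

messages := []*relaymodel.Message{
	{Role: "system", Content: "You are a helpful assistant."},
	{Role: "user", Content: "Hello"},
}
prompt := RenderPrompt(messages) // Llama-formatted transcript, ready for Request.Prompt
_ = prompt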
func ResponseLlama2OpenAI
func ResponseLlama2OpenAI(llamaResponse *Response) *openai.TextResponse
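The conversion is a field mapping from the Bedrock payload to OpenAI's completion shape, including token usage. An illustrative sketch (the openai.TextResponse fields are not reproduced on this page):

llamaResp := &Response{
	Generation:           "Hi there!",
	StopReason:           "stop",
	PromptTokenCount:     15,
	GenerationTokenCount: 4,
}
openaiResp := ResponseLlama2OpenAI(llamaResp)
_ = openaiResp // *openai.TextResponse with text and usage filled in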
func StreamHandler
func StreamHandler(meta *meta.Meta, c *gin.Context) (*relaymodel.ErrorWithStatusCode, *relaymodel.Usage)
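Handler and StreamHandler share a signature, so a caller typically branches on whether the client requested streaming. A hedged dispatch sketch (how the meta.Meta is built and where the stream flag lives are assumptions):

func dispatch(m *meta.Meta, c *gin.Context, stream bool) (*relaymodel.ErrorWithStatusCode, *relaymodel.Usage) {
	if stream {
		return StreamHandler(m, c)
	}
	return Handler(m, c)
}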
func StreamResponseLlama2OpenAI
func StreamResponseLlama2OpenAI(llamaResponse *StreamResponse) *openai.ChatCompletionsStreamResponse
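Each stream chunk converts independently into one OpenAI-style delta frame; the final chunk carries the stop reason and token counts. Using the sample chunk shown under StreamResponse below (a sketch; the openai.ChatCompletionsStreamResponse fields are not reproduced here):

chunk := &StreamResponse{Generation: "Hi", PromptTokenCount: 15, GenerationTokenCount: 1}
delta := StreamResponseLlama2OpenAI(chunk)
_ = delta // one OpenAI-style streaming delta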
Types

type Request

type Request struct {
	Temperature *float64 `json:"temperature,omitempty"`
	TopP        *float64 `json:"top_p,omitempty"`
	Prompt      string   `json:"prompt"`
	MaxGenLen   int      `json:"max_gen_len,omitempty"`
}
Request is the request to AWS Llama3
https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-meta.html
func ConvertRequest
func ConvertRequest(textRequest *relaymodel.GeneralOpenAIRequest) *Request
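ConvertRequest maps the OpenAI-style request onto the Llama parameter set documented under Request: the message list becomes Prompt (presumably via RenderPrompt), and the sampling parameters carry over. A sketch; the field names on relaymodel.GeneralOpenAIRequest are assumptions:

temp := 0.7
textReq := &relaymodel.GeneralOpenAIRequest{
	Model:       "llama3-8b-8192", // assumed fields, shown for illustration
	Messages:    []*relaymodel.Message{{Role: "user", Content: "Hello"}},
	Temperature: &temp, // assumed *float64, mirroring Request.Temperature
}
awsReq := ConvertRequest(textReq)
_ = awsReq.Prompt // rendered Llama prompt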
type Response

type Response struct {
	Generation           string `json:"generation"`
	StopReason           string `json:"stop_reason"`
	PromptTokenCount     int    `json:"prompt_token_count"`
	GenerationTokenCount int    `json:"generation_token_count"`
}
Response is the response from AWS Llama3
https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-meta.html
type StreamResponse

type StreamResponse struct {
	Generation           string `json:"generation"`
	StopReason           string `json:"stop_reason"`
	PromptTokenCount     int    `json:"prompt_token_count"`
	GenerationTokenCount int    `json:"generation_token_count"`
}
StreamResponse is a single streaming chunk from AWS Llama3. An example chunk: {'generation': 'Hi', 'prompt_token_count': 15, 'generation_token_count': 1, 'stop_reason': None}