package ollama

v1.18.12
Published: Aug 6, 2024 License: Apache-2.0

README

Ollama LLM Inference Provider

Build up your YoMo AI architecture with fully open-source dependencies.

1. Install Ollama

Follow the installation instructions in the Ollama doc:

https://github.com/ollama/ollama?tab=readme-ov-file#ollama

2. Run the Mistral model

Note that only Mistral v0.3+ models are currently supported.

ollama run mistral:7b

3. Start YoMo Zipper

By default, the Ollama server listens on port 11434.

The Zipper config file could then look like this:

name: Service
host: 0.0.0.0
port: 9000

bridge:
  ai:
    server:
      addr: "localhost:8000"
      provider: ollama
    providers:
      ollama:
        api_endpoint: "http://localhost:11434/"

Start the Zipper with:

yomo serve -c config.yml
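
Assuming the AI bridge exposes an OpenAI-compatible chat completions endpoint at the configured addr (localhost:8000 above), a minimal client sketch using the go-openai library could look like the following; verify the exact endpoint and base path against the YoMo documentation:

package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Point an OpenAI-compatible client at the bridge address from the config
	// above. The /v1 base path is an assumption; adjust it to your deployment.
	cfg := openai.DefaultConfig("")
	cfg.BaseURL = "http://localhost:8000/v1"
	client := openai.NewClientWithConfig(cfg)

	resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
		Model: "mistral:7b",
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "Hello"},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}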

4. Start the YoMo serverless function

See the example in the YoMo repository.
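
For orientation, here is a rough sketch of an LLM tool function, assuming the Description / InputSchema / Handler / DataTags layout and the ReadLLMArguments and WriteLLMResult context methods used in recent YoMo AI examples; the linked example is the authoritative reference and the exact API may differ between versions:

package main

import "github.com/yomorun/yomo/serverless"

// Parameter is a hypothetical argument schema, for illustration only.
type Parameter struct {
	City string `json:"city" jsonschema:"description=the city to look up"`
}

// Description tells the LLM when this tool should be called.
func Description() string {
	return "Get the current weather for a city"
}

// InputSchema declares the arguments the LLM should generate.
func InputSchema() any {
	return &Parameter{}
}

// Handler reads the LLM-generated arguments and writes the tool result back.
func Handler(ctx serverless.Context) {
	var p Parameter
	ctx.ReadLLMArguments(&p)                       // assumed context method
	ctx.WriteLLMResult("It is sunny in " + p.City) // assumed context method
}

// DataTags declares the tags this function observes.
func DataTags() []uint32 {
	return []uint32{0x61}
}

With the Zipper from step 3 running, the function is typically started with the YoMo CLI, for example: yomo run app.go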

Documentation

Overview

Package ollama provides the Ollama service for the YoMo Bridge.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Provider

type Provider struct {
	Endpoint string
}

Provider is the provider for Ollama

func NewProvider

func NewProvider(endpoint string) *Provider

NewProvider creates a new OllamaProvider
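
A minimal construction sketch, assuming this package is imported as github.com/yomorun/yomo/pkg/bridge/ai/provider/ollama (the import path is inferred from the YoMo repository layout):

package main

import (
	"fmt"

	"github.com/yomorun/yomo/pkg/bridge/ai/provider/ollama"
)

func main() {
	// Point the provider at a locally running Ollama server (default port 11434).
	p := ollama.NewProvider("http://localhost:11434/")
	fmt.Println(p.Name()) // the provider name referenced in the bridge config
}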

func (*Provider) GetChatCompletions

func (p *Provider) GetChatCompletions(ctx context.Context, req openai.ChatCompletionRequest, _ metadata.M) (openai.ChatCompletionResponse, error)

GetChatCompletions implements ai.LLMProvider.

func (*Provider) GetChatCompletionsStream

func (p *Provider) GetChatCompletionsStream(ctx context.Context, req openai.ChatCompletionRequest, _ metadata.M) (provider.ResponseRecver, error)

GetChatCompletionsStream implements ai.LLMProvider.
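
A hedged sketch of calling the streaming API directly with the signature above; in normal use the YoMo bridge drives this call, and the ollama import path is inferred from the repository layout:

package main

import (
	"context"
	"log"

	openai "github.com/sashabaranov/go-openai"

	"github.com/yomorun/yomo/pkg/bridge/ai/provider/ollama"
)

func main() {
	p := ollama.NewProvider("http://localhost:11434/")

	req := openai.ChatCompletionRequest{
		Model: "mistral:7b",
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "Say hello"},
		},
	}

	// The signature above shows the metadata argument is ignored by this
	// provider, so nil is passed here.
	recver, err := p.GetChatCompletionsStream(context.Background(), req, nil)
	if err != nil {
		log.Fatal(err)
	}
	_ = recver // the bridge consumes the stream via provider.ResponseRecver
}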

func (*Provider) Name

func (p *Provider) Name() string

Name implements ai.LLMProvider.
