ollama

package
v1.19.3
Published: Jan 29, 2025 License: Apache-2.0 Imports: 5 Imported by: 0

README

Ollama LLM Inference Provider

Build up your YoMo AI architecture with fully open-source dependencies.

1. Install Ollama

Follow the Ollama doc:

https://github.com/ollama/ollama?tab=readme-ov-file#ollama

2. Run the model

ollama run mistral

3. Start YoMo Zipper

By default, the Ollama server listens on port 11434.

The Zipper config file could then be:

name: Service
host: 0.0.0.0
port: 9000

bridge:
  ai:
    server:
      addr: "localhost:8000"
      provider: ollama
    providers:
      ollama:
        api_endpoint: "http://localhost:11434/v1"
Then start the Zipper:

yomo serve -c config.yml
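
Once the Zipper is up, the AI bridge listens on the addr from the config (localhost:8000) and speaks the OpenAI chat-completions protocol, so any OpenAI-compatible client can point at it. A minimal sketch using the go-openai client; the "/v1" base path and the placeholder API key are assumptions, adjust them to your deployment:

package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Point an OpenAI-compatible client at the YoMo AI bridge (addr from config.yml).
	// The "/v1" base path is an assumption based on the bridge's OpenAI compatibility.
	cfg := openai.DefaultConfig("ollama") // Ollama needs no real API key; any placeholder works
	cfg.BaseURL = "http://localhost:8000/v1"
	client := openai.NewClientWithConfig(cfg)

	resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
		Model: "mistral",
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "Say hello in one short sentence."},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}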

4. Start YoMo serverless function

See the example for a complete serverless function.
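
For orientation, here is a rough, unverified sketch of the shape such a function usually takes, following YoMo's LLM tool-calling examples (the exported Description/InputSchema/Handler/DataTags functions and the serverless.Context helpers shown here are assumptions and may differ between YoMo versions):

package main

import (
	"fmt"

	"github.com/yomorun/yomo/serverless"
)

// Parameter describes the arguments the LLM should fill in when it calls this tool.
type Parameter struct {
	City string `json:"city" jsonschema:"description=the city to query"`
}

// Description tells the LLM what this function does.
func Description() string {
	return "Get the current weather for a city"
}

// InputSchema exposes Parameter as the tool's argument schema.
func InputSchema() any {
	return &Parameter{}
}

// Handler is invoked when the LLM decides to call this tool; error handling is omitted.
func Handler(ctx serverless.Context) {
	var p Parameter
	ctx.ReadLLMArguments(&p)
	ctx.WriteLLMResult(fmt.Sprintf("It is sunny in %s today.", p.City))
}

// DataTags declares the tag this function observes.
func DataTags() []uint32 {
	return []uint32{0x10}
}

Run it against the Zipper with the yomo CLI; see the linked example for the exact commands.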

Documentation

Overview

Package ollama provides the Ollama service for the YoMo Bridge.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Provider

type Provider struct {
	// APIEndpoint is the Ollama OpenAI-compatible API endpoint
	APIEndpoint string
	// Model is the default model for Ollama
	Model string
	// contains filtered or unexported fields
}

Provider is the provider for Ollama

func NewProvider

func NewProvider(apiEndpoint string, model string) *Provider

NewProvider creates a new Ollama Provider.

func (*Provider) GetChatCompletions

func (p *Provider) GetChatCompletions(ctx context.Context, req openai.ChatCompletionRequest, _ metadata.M) (openai.ChatCompletionResponse, error)

GetChatCompletions implements ai.LLMProvider.
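
A minimal sketch of using the provider directly, outside the bridge: construct it with NewProvider and issue a blocking chat completion. The import path and the nil metadata argument are assumptions (the bridge normally constructs the provider and supplies metadata itself):

package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"

	"github.com/yomorun/yomo/pkg/bridge/ai/provider/ollama"
)

func main() {
	// Point the provider at a local Ollama instance; "mistral" is used as the default model.
	p := ollama.NewProvider("http://localhost:11434/v1", "mistral")

	resp, err := p.GetChatCompletions(context.Background(), openai.ChatCompletionRequest{
		Model: "mistral",
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "Why is the sky blue?"},
		},
	}, nil) // nil metadata.M is an assumption; the bridge supplies real metadata
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}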

func (*Provider) GetChatCompletionsStream

func (p *Provider) GetChatCompletionsStream(ctx context.Context, req openai.ChatCompletionRequest, _ metadata.M) (provider.ResponseRecver, error)

GetChatCompletionsStream implements ai.LLMProvider.
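
A hedged sketch of consuming the stream: assuming the returned provider.ResponseRecver exposes a Recv() method that yields openai.ChatCompletionStreamResponse chunks and io.EOF when the stream ends (mirroring go-openai's stream semantics), a caller can drain it like this:

package main

import (
	"context"
	"errors"
	"fmt"
	"io"

	openai "github.com/sashabaranov/go-openai"

	"github.com/yomorun/yomo/pkg/bridge/ai/provider/ollama"
)

func main() {
	p := ollama.NewProvider("http://localhost:11434/v1", "mistral")

	recver, err := p.GetChatCompletionsStream(context.Background(), openai.ChatCompletionRequest{
		Model:  "mistral",
		Stream: true,
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "Tell me a short joke."},
		},
	}, nil)
	if err != nil {
		panic(err)
	}

	for {
		chunk, err := recver.Recv()
		if errors.Is(err, io.EOF) {
			break // stream finished
		}
		if err != nil {
			panic(err)
		}
		if len(chunk.Choices) > 0 {
			fmt.Print(chunk.Choices[0].Delta.Content)
		}
	}
	fmt.Println()
}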

func (*Provider) Name

func (p *Provider) Name() string

Name implements ai.LLMProvider.
