ollama

package
v0.27.0-beta
Published: Sep 10, 2024 License: MIT Imports: 12 Imported by: 0

README

---
title: "Ollama"
lang: "en-US"
draft: false
description: "Learn how to set up a VDP Ollama component https://github.com/instill-ai/instill-core"
---

The Ollama component is an AI component that allows users to connect to AI models served with the Ollama library.
It can carry out the following tasks:

- [Text Generation Chat](#text-generation-chat)
- [Text Embeddings](#text-embeddings)



## Release Stage

`Alpha`



## Configuration

The component configuration is defined and maintained [here](https://github.com/instill-ai/component/blob/main/ai/ollama/v0/config/definition.json).




## Setup




In order to communicate with Ollama, the following connection details need to be
provided. You may specify them directly in a pipeline recipe as key-value pairs
within the component's `setup` block, or you can create a **Connection** from
the [**Integration Settings**](https://www.instill.tech/docs/vdp/integration)
page and reference the whole `setup` as
`setup: ${connection.<my-connection-id>}`.

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Endpoint (required) | `endpoint` | string | Fill in your Ollama hosting endpoint. **Warning:** as of 2024-07-26, the Ollama component does not support authentication methods. To prevent unauthorized access to your Ollama serving resources, please implement additional security measures such as IP whitelisting. |
| Model Auto-Pull (required) | `auto-pull` | boolean | Automatically pull the requested models from the Ollama server if the model is not found in the local cache. |
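
For reference, here is a minimal sketch of how these two fields serialize, using a local copy of the package's `OllamaSetup` type (documented below). The endpoint value is a placeholder; substitute your own Ollama host.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// OllamaSetup mirrors the two setup fields in the table above.
type OllamaSetup struct {
	AutoPull bool   `json:"auto-pull"`
	Endpoint string `json:"endpoint"`
}

func main() {
	setup := OllamaSetup{
		Endpoint: "192.168.178.88:11434", // placeholder endpoint
		AutoPull: true,
	}
	b, err := json.MarshalIndent(setup, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"auto-pull": true, "endpoint": "192.168.178.88:11434"}
}
```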




## Supported Tasks

### Text Generation Chat

Provide text outputs in response to text/image inputs.


| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_GENERATION_CHAT` |
| Model Name (required) | `model` | string | The OSS model to be used; see https://ollama.com/library for a list of available models |
| Prompt (required) | `prompt` | string | The prompt text |
| System message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message such as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images |
| Chat history | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed for reproducible sampling |
| Temperature | `temperature` | number | The temperature for sampling |
| Top K | `top-k` | integer | Top k for sampling |
| Max new tokens | `max-new-tokens` | integer | The maximum number of tokens for the model to generate |



| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Text | `text` | string | Model Output |
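
To illustrate how this task maps onto the package's client API (`NewClient` and `Chat`, documented below), here is a minimal sketch. The import path, endpoint, and model name are assumptions, not part of this documentation.

```go
package main

import (
	"fmt"
	"log"

	ollama "github.com/instill-ai/component/ai/ollama/v0" // assumed import path
	"go.uber.org/zap"
)

func main() {
	// Placeholder endpoint; auto-pull fetches the model if it is not cached.
	client := ollama.NewClient("192.168.178.88:11434", true, zap.NewNop())

	resp, err := client.Chat(ollama.ChatRequest{
		Model: "llama3", // any chat model from https://ollama.com/library
		Messages: []ollama.OllamaChatMessage{
			{Role: "system", Content: "You are a helpful assistant."},
			{Role: "user", Content: "Why is the sky blue?"},
		},
		Stream:  false,
		Options: ollama.OllamaOptions{Temperature: 0.7, TopK: 40, Seed: 42},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Message.Content)
}
```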


#### Local Ollama Instance

To set up an Ollama instance on your local machine, follow the instructions below:

> Note: These instructions only work for Instill Core CE

1. Follow the tutorial on the official [GitHub repository](https://github.com/ollama/ollama) to install Ollama on your machine.
2. Follow the instructions in the [FAQ section](https://github.com/ollama/ollama/blob/main/docs/faq.md) to modify the variable `OLLAMA_HOST` to `0.0.0.0`, then restart Ollama.
3. Get the IP address of your machine on the local network.
    - On Linux and macOS, open the terminal and type `ifconfig`.
    - On Windows, open the command prompt and type `ipconfig`.
4. If the IP address is `192.168.178.88`, the Ollama hosting endpoint would be `192.168.178.88:11434` (11434 is Ollama's default port); a quick way to verify the endpoint is reachable is sketched after this list.
5. Enjoy fast LLM inference on your local machine and integration with 💧 Instill VDP.
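
Before wiring the endpoint into a pipeline, you can check that it is reachable. The sketch below queries Ollama's `/api/tags` endpoint (the list-local-models call that `ListLocalModelsResponse` below corresponds to); the address is the placeholder from step 4.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Placeholder address from step 4; substitute your machine's IP.
	resp, err := http.Get("http://192.168.178.88:11434/api/tags")
	if err != nil {
		log.Fatalf("Ollama endpoint not reachable: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)  // expect "200 OK"
	fmt.Println(string(body)) // JSON list of locally pulled models
}
```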



### Text Embeddings

Turn text into a vector of numbers that capture its meaning, unlocking use cases like semantic search.


| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_EMBEDDINGS` |
| Model Name (required) | `model` | string | The OSS model to be used; see https://ollama.com/library for a list of available models |
| Text (required) | `text` | string | The text to be embedded |



| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Embedding | `embedding` | array[number] | Embedding of the input text |
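
As with the chat task, here is a minimal sketch against the client API (`Embed`, documented below); the import path and model name are assumptions. Note that the client-level request carries the text in a `Prompt` field.

```go
package main

import (
	"fmt"
	"log"

	ollama "github.com/instill-ai/component/ai/ollama/v0" // assumed import path
	"go.uber.org/zap"
)

func main() {
	client := ollama.NewClient("192.168.178.88:11434", true, zap.NewNop())

	resp, err := client.Embed(ollama.EmbedRequest{
		Model:  "all-minilm", // hypothetical choice of embedding model
		Prompt: "Turn this sentence into a vector.",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("embedding dimension: %d\n", len(resp.Embedding))
}
```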







Documentation

Index

Constants

const (
	TaskTextGenerationChat = "TASK_TEXT_GENERATION_CHAT"
	TaskTextEmbeddings     = "TASK_TEXT_EMBEDDINGS"
)

Variables

This section is empty.

Functions

func Init

func Init(bc base.Component) *component

Types

type ChatMessage

type ChatMessage struct {
	Role    string              `json:"role"`
	Content []MultiModalContent `json:"content"`
}

type ChatRequest

type ChatRequest struct {
	Model    string              `json:"model"`
	Messages []OllamaChatMessage `json:"messages"`
	Stream   bool                `json:"stream"`
	Options  OllamaOptions       `json:"options"`
}

type ChatResponse

type ChatResponse struct {
	Model              string            `json:"model"`
	CreatedAt          string            `json:"created_at"`
	Message            OllamaChatMessage `json:"message"`
	Done               bool              `json:"done"`
	DoneReason         string            `json:"done_reason"`
	TotalDuration      int               `json:"total_duration"`
	LoadDuration       int               `json:"load_duration"`
	PromptEvalCount    int               `json:"prompt_eval_count"`
	PromptEvalDuration int               `json:"prompt_eval_duration"`
	EvalCount          int               `json:"eval_count"`
	EvalDuration       int               `json:"eval_duration"`
}

type EmbedRequest

type EmbedRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
}

type EmbedResponse

type EmbedResponse struct {
	Embedding []float32 `json:"embedding"`
}

type ListLocalModelsRequest

type ListLocalModelsRequest struct {
}

type ListLocalModelsResponse

type ListLocalModelsResponse struct {
	Models []OllamaModelInfo `json:"models"`
}

type MultiModalContent

type MultiModalContent struct {
	ImageURL URL    `json:"image-url"`
	Text     string `json:"text"`
	Type     string `json:"type"`
}

type OllamaChatMessage

type OllamaChatMessage struct {
	Role    string   `json:"role"`
	Content string   `json:"content"`
	Images  []string `json:"images,omitempty"`
}

type OllamaClient

type OllamaClient struct {
	// contains filtered or unexported fields
}

func NewClient

func NewClient(endpoint string, autoPull bool, logger *zap.Logger) *OllamaClient

func (*OllamaClient) Chat

func (c *OllamaClient) Chat(request ChatRequest) (ChatResponse, error)

func (*OllamaClient) CheckModelAvailability

func (c *OllamaClient) CheckModelAvailability(modelName string) bool

func (*OllamaClient) Embed

func (c *OllamaClient) Embed(request EmbedRequest) (EmbedResponse, error)

func (*OllamaClient) IsAutoPull

func (c *OllamaClient) IsAutoPull() bool

func (*OllamaClient) Pull

func (c *OllamaClient) Pull(modelName string) error
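
Together with CheckModelAvailability and IsAutoPull, Pull supports an ensure-model-then-use flow. A minimal sketch, under the same assumed import path as the README examples:

```go
package main

import (
	"log"

	ollama "github.com/instill-ai/component/ai/ollama/v0" // assumed import path
	"go.uber.org/zap"
)

func main() {
	client := ollama.NewClient("192.168.178.88:11434", true, zap.NewNop())

	// Pull the model only when it is absent locally and auto-pull is enabled.
	if !client.CheckModelAvailability("llama3") && client.IsAutoPull() {
		if err := client.Pull("llama3"); err != nil {
			log.Fatalf("pull failed: %v", err)
		}
	}
}
```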

type OllamaClientInterface

type OllamaClientInterface interface {
	Chat(ChatRequest) (ChatResponse, error)
	Embed(EmbedRequest) (EmbedResponse, error)
	IsAutoPull() bool
}

type OllamaClientInterfaceMock

type OllamaClientInterfaceMock struct {
	ChatMock mOllamaClientInterfaceMockChat

	EmbedMock mOllamaClientInterfaceMockEmbed

	IsAutoPullMock mOllamaClientInterfaceMockIsAutoPull
	// contains filtered or unexported fields
}

OllamaClientInterfaceMock implements OllamaClientInterface

func NewOllamaClientInterfaceMock

func NewOllamaClientInterfaceMock(t minimock.Tester) *OllamaClientInterfaceMock

NewOllamaClientInterfaceMock returns a mock for OllamaClientInterface
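
A minimal sketch of wiring this mock into a test, assuming minimock's standard Expect/Return helpers and the import path used in the README examples:

```go
package ollama_test

import (
	"testing"

	"github.com/gojuno/minimock/v3"
	ollama "github.com/instill-ai/component/ai/ollama/v0" // assumed import path
)

func TestChatWithMock(t *testing.T) {
	mc := minimock.NewController(t)

	mock := ollama.NewOllamaClientInterfaceMock(mc)
	mock.ChatMock.Expect(ollama.ChatRequest{Model: "llama3"}).
		Return(ollama.ChatResponse{Message: ollama.OllamaChatMessage{Content: "hi"}}, nil)

	// The mock satisfies OllamaClientInterface, so it can stand in for the real client.
	var client ollama.OllamaClientInterface = mock
	resp, err := client.Chat(ollama.ChatRequest{Model: "llama3"})
	if err != nil || resp.Message.Content != "hi" {
		t.Fatalf("unexpected result: %+v, %v", resp, err)
	}
}
```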

func (*OllamaClientInterfaceMock) Chat

func (mmChat *OllamaClientInterfaceMock) Chat(c1 ChatRequest) (c2 ChatResponse, err error)

Chat implements OllamaClientInterface

func (*OllamaClientInterfaceMock) ChatAfterCounter

func (mmChat *OllamaClientInterfaceMock) ChatAfterCounter() uint64

ChatAfterCounter returns a count of finished OllamaClientInterfaceMock.Chat invocations

func (*OllamaClientInterfaceMock) ChatBeforeCounter

func (mmChat *OllamaClientInterfaceMock) ChatBeforeCounter() uint64

ChatBeforeCounter returns a count of OllamaClientInterfaceMock.Chat invocations

func (*OllamaClientInterfaceMock) Embed

func (mmEmbed *OllamaClientInterfaceMock) Embed(e1 EmbedRequest) (e2 EmbedResponse, err error)

Embed implements OllamaClientInterface

func (*OllamaClientInterfaceMock) EmbedAfterCounter

func (mmEmbed *OllamaClientInterfaceMock) EmbedAfterCounter() uint64

EmbedAfterCounter returns a count of finished OllamaClientInterfaceMock.Embed invocations

func (*OllamaClientInterfaceMock) EmbedBeforeCounter

func (mmEmbed *OllamaClientInterfaceMock) EmbedBeforeCounter() uint64

EmbedBeforeCounter returns a count of OllamaClientInterfaceMock.Embed invocations

func (*OllamaClientInterfaceMock) IsAutoPull

func (mmIsAutoPull *OllamaClientInterfaceMock) IsAutoPull() (b1 bool)

IsAutoPull implements OllamaClientInterface

func (*OllamaClientInterfaceMock) IsAutoPullAfterCounter

func (mmIsAutoPull *OllamaClientInterfaceMock) IsAutoPullAfterCounter() uint64

IsAutoPullAfterCounter returns a count of finished OllamaClientInterfaceMock.IsAutoPull invocations

func (*OllamaClientInterfaceMock) IsAutoPullBeforeCounter

func (mmIsAutoPull *OllamaClientInterfaceMock) IsAutoPullBeforeCounter() uint64

IsAutoPullBeforeCounter returns a count of OllamaClientInterfaceMock.IsAutoPull invocations

func (*OllamaClientInterfaceMock) MinimockChatDone

func (m *OllamaClientInterfaceMock) MinimockChatDone() bool

MinimockChatDone returns true if the count of the Chat invocations corresponds to the number of defined expectations

func (*OllamaClientInterfaceMock) MinimockChatInspect

func (m *OllamaClientInterfaceMock) MinimockChatInspect()

MinimockChatInspect logs each unmet expectation

func (*OllamaClientInterfaceMock) MinimockEmbedDone

func (m *OllamaClientInterfaceMock) MinimockEmbedDone() bool

MinimockEmbedDone returns true if the count of the Embed invocations corresponds to the number of defined expectations

func (*OllamaClientInterfaceMock) MinimockEmbedInspect

func (m *OllamaClientInterfaceMock) MinimockEmbedInspect()

MinimockEmbedInspect logs each unmet expectation

func (*OllamaClientInterfaceMock) MinimockFinish

func (m *OllamaClientInterfaceMock) MinimockFinish()

MinimockFinish checks that all mocked methods have been called the expected number of times

func (*OllamaClientInterfaceMock) MinimockIsAutoPullDone

func (m *OllamaClientInterfaceMock) MinimockIsAutoPullDone() bool

MinimockIsAutoPullDone returns true if the count of the IsAutoPull invocations corresponds to the number of defined expectations

func (*OllamaClientInterfaceMock) MinimockIsAutoPullInspect

func (m *OllamaClientInterfaceMock) MinimockIsAutoPullInspect()

MinimockIsAutoPullInspect logs each unmet expectation

func (*OllamaClientInterfaceMock) MinimockWait

func (m *OllamaClientInterfaceMock) MinimockWait(timeout mm_time.Duration)

MinimockWait waits for all mocked methods to be called the expected number of times

type OllamaClientInterfaceMockChatExpectation

type OllamaClientInterfaceMockChatExpectation struct {
	Counter uint64
	// contains filtered or unexported fields
}

OllamaClientInterfaceMockChatExpectation specifies expectation struct of the OllamaClientInterface.Chat

func (*OllamaClientInterfaceMockChatExpectation) Then

Then sets up OllamaClientInterface.Chat return parameters for the expectation previously defined by the When method

type OllamaClientInterfaceMockChatParamPtrs

type OllamaClientInterfaceMockChatParamPtrs struct {
	// contains filtered or unexported fields
}

OllamaClientInterfaceMockChatParamPtrs contains pointers to parameters of the OllamaClientInterface.Chat

type OllamaClientInterfaceMockChatParams

type OllamaClientInterfaceMockChatParams struct {
	// contains filtered or unexported fields
}

OllamaClientInterfaceMockChatParams contains parameters of the OllamaClientInterface.Chat

type OllamaClientInterfaceMockChatResults

type OllamaClientInterfaceMockChatResults struct {
	// contains filtered or unexported fields
}

OllamaClientInterfaceMockChatResults contains results of the OllamaClientInterface.Chat

type OllamaClientInterfaceMockEmbedExpectation

type OllamaClientInterfaceMockEmbedExpectation struct {
	Counter uint64
	// contains filtered or unexported fields
}

OllamaClientInterfaceMockEmbedExpectation specifies expectation struct of the OllamaClientInterface.Embed

func (*OllamaClientInterfaceMockEmbedExpectation) Then

Then sets up OllamaClientInterface.Embed return parameters for the expectation previously defined by the When method

type OllamaClientInterfaceMockEmbedParamPtrs

type OllamaClientInterfaceMockEmbedParamPtrs struct {
	// contains filtered or unexported fields
}

OllamaClientInterfaceMockEmbedParamPtrs contains pointers to parameters of the OllamaClientInterface.Embed

type OllamaClientInterfaceMockEmbedParams

type OllamaClientInterfaceMockEmbedParams struct {
	// contains filtered or unexported fields
}

OllamaClientInterfaceMockEmbedParams contains parameters of the OllamaClientInterface.Embed

type OllamaClientInterfaceMockEmbedResults

type OllamaClientInterfaceMockEmbedResults struct {
	// contains filtered or unexported fields
}

OllamaClientInterfaceMockEmbedResults contains results of the OllamaClientInterface.Embed

type OllamaClientInterfaceMockIsAutoPullExpectation

type OllamaClientInterfaceMockIsAutoPullExpectation struct {
	Counter uint64
	// contains filtered or unexported fields
}

OllamaClientInterfaceMockIsAutoPullExpectation specifies expectation struct of the OllamaClientInterface.IsAutoPull

type OllamaClientInterfaceMockIsAutoPullResults

type OllamaClientInterfaceMockIsAutoPullResults struct {
	// contains filtered or unexported fields
}

OllamaClientInterfaceMockIsAutoPullResults contains results of the OllamaClientInterface.IsAutoPull

type OllamaModelInfo

type OllamaModelInfo struct {
	Name       string `json:"name"`
	ModifiedAt string `json:"modified_at"`
	Size       int    `json:"size"`
	Dijest     string `json:"digest"`
	Details    struct {
		Format            string `json:"format"`
		Family            string `json:"family"`
		Families          string `json:"families"`
		ParameterSize     string `json:"parameter_size"`
		QuantizationLevel string `json:"quantization_level"`
	} `json:"details"`
}

type OllamaOptions

type OllamaOptions struct {
	Temperature float32 `json:"temperature,omitempty"`
	TopK        int     `json:"top_k,omitempty"`
	Seed        int     `json:"seed,omitempty"`
}

type OllamaSetup

type OllamaSetup struct {
	AutoPull bool   `json:"auto-pull"`
	Endpoint string `json:"endpoint"`
}

type PullModelRequest

type PullModelRequest struct {
	Name   string `json:"name"`
	Stream bool   `json:"stream"`
}

type PullModelResponse

type PullModelResponse struct {
}

type TaskTextEmbeddingsInput

type TaskTextEmbeddingsInput struct {
	Text  string `json:"text"`
	Model string `json:"model"`
}

type TaskTextEmbeddingsOutput

type TaskTextEmbeddingsOutput struct {
	Embedding []float32 `json:"embedding"`
}

type TaskTextGenerationChatInput

type TaskTextGenerationChatInput struct {
	ChatHistory  []ChatMessage `json:"chat-history"`
	MaxNewTokens int           `json:"max-new-tokens"`
	Model        string        `json:"model"`
	Prompt       string        `json:"prompt"`
	PromptImages []string      `json:"prompt-images"`
	Seed         int           `json:"seed"`
	SystemMsg    string        `json:"system-message"`
	Temperature  float32       `json:"temperature"`
	TopK         int           `json:"top-k"`
}

type TaskTextGenerationChatOuput

type TaskTextGenerationChatOuput struct {
	Text string `json:"text"`
}

type URL

type URL struct {
	URL string `json:"url"`
}
