Grafana LLM App (Public Preview)

A Grafana plugin designed to centralize access to LLMs, providing authentication, proxying, streaming, and custom extensions. Installing this plugin will enable various pieces of LLM-based functionality throughout Grafana.

Note: The Grafana LLM App plugin is currently in Public preview. Grafana Labs offers support on a best-effort basis, and there might be breaking changes before the feature is generally available.

Install the plugin on Grafana Cloud

Prerequisites:

  • Any Grafana Cloud environment (including Free)
  • API connection details from an account with OpenAI or Azure OpenAI

Steps:

  1. In your Grafana instance, open Administration → Plugins
  2. Select "All" instead of "Installed" and search for "LLM"
  3. Click "Install via grafana.com"
  4. On the LLM plugin's page, you should see your instance listed; click "Install plugin"
  5. Return to Grafana, and search installed plugins, reloading until the LLM plugin is listed (this may take a minute or two)
  6. In the plugin configuration, choose your provider (OpenAI or Azure OpenAI) and fill in the required fields
  7. Save settings, then click "Enable" (upper right) to enable the plugin

Install the plugin directly

To install this plugin, use the GF_INSTALL_PLUGINS environment variable when running Grafana:

GF_INSTALL_PLUGINS=grafana-llm-app

Alternatively, install it using the Grafana CLI:

grafana cli plugins install grafana-llm-app
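
For example, when running Grafana via the official Docker image (a minimal sketch; the image tag and port mapping are assumptions, adjust them to your setup):

docker run -p 3000:3000 \
  -e GF_INSTALL_PLUGINS=grafana-llm-app \
  grafana/grafana:latest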

The plugin can then be configured either in the UI or using provisioning, as shown below.

Provisioning this plugin

To provision this plugin, set the following environment variable when running Grafana:

OPENAI_API_KEY=sk-...

and use the following provisioning file (e.g. in /etc/grafana/provisioning/plugins/grafana-llm-app, when running in Docker):

apiVersion: 1

apps:
  - type: 'grafana-llm-app'
    disabled: false
    jsonData:
      openAI:
        url: https://api.openai.com
    secureJsonData:
      openAIKey: $OPENAI_API_KEY
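
When running in Docker, the same setup can be expressed by passing the key into the container and mounting the provisioning file (a minimal sketch; the host path and image tag are assumptions):

docker run -p 3000:3000 \
  -e GF_INSTALL_PLUGINS=grafana-llm-app \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -v ./provisioning/plugins:/etc/grafana/provisioning/plugins \
  grafana/grafana:latest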
Using Azure OpenAI

To provision the plugin to use Azure OpenAI, use settings similar to this:

apiVersion: 1

apps:
  - type: 'grafana-llm-app'
    disabled: false
    jsonData:
      openAI:
        provider: azure
        url: https://<resource>.openai.azure.com
        azureModelMapping:
          - ["gpt-3.5-turbo", "gpt-35-turbo"]
    secureJsonData:
      openAIKey: $OPENAI_API_KEY

where:

  • <resource> is your Azure OpenAI resource name
  • the azureModelMapping field contains [model, deployment] pairs, so that features know which Azure deployment to use in place of each requested model (see the example below).
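
For example, to map several models, list one [model, deployment] pair per model (the second deployment name below is a hypothetical placeholder; use the deployment names you created in Azure):

azureModelMapping:
  - ["gpt-3.5-turbo", "gpt-35-turbo"]
  - ["gpt-4", "my-gpt-4-deployment"]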

Adding LLM features to your plugin or Grafana core

To make use of this plugin when adding LLM-based features, you can use the helper functions in the @grafana/experimental package.

First, add the correct version of @grafana/experimental to your dependencies in package.json:

{
  "dependencies": {
    "@grafana/experimental": "1.7.0"
  }
}

Then in your components you can use the llm object from @grafana/experimental like so:

import React, { useState } from 'react';
import { useAsync } from 'react-use';
import { scan } from 'rxjs/operators';

import { llms } from '@grafana/experimental';
import { PluginPage } from '@grafana/runtime';

import { Button, Input, Spinner } from '@grafana/ui';

const MyComponent = (): JSX.Element | null => {
  const [input, setInput] = useState('');
  const [message, setMessage] = useState('');
  const [reply, setReply] = useState('');

  const { loading, error } = useAsync(async () => {
    const enabled = await llms.openai.enabled();
    if (!enabled) {
      return false;
    }
    if (message === '') {
      return;
    }
    // Stream the completions. Each element is the next stream chunk.
    const stream = llms.openai.streamChatCompletions({
      model: 'gpt-3.5-turbo',
      messages: [
        { role: 'system', content: 'You are a cynical assistant.' },
        { role: 'user', content: message },
      ],
    }).pipe(
      // Extract the content delta from each chunk, then accumulate the
      // deltas into a single string.
      llms.openai.extractContent(),
      scan((acc, delta) => acc + delta, '')
    );
    // Subscribe to the stream and update the state for each returned value.
    return stream.subscribe(setReply);
  }, [message]);

  if (error) {
    // TODO: handle errors.
    return null;
  }

  return (
    <div>
      <Input
        value={input}
        onChange={(e) => setInput(e.currentTarget.value)}
        placeholder="Enter a message"
      />
      <br />
      <Button type="submit" onClick={() => setMessage(input)}>Submit</Button>
      <br />
      <div>{loading ? <Spinner /> : reply}</div>
    </div>
  );
}
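
If you don't need streaming, the same package exposes a one-shot helper. A minimal sketch, assuming llms.openai.chatCompletions mirrors OpenAI's chat completions response shape:

const askOnce = async (message: string): Promise<string | undefined> => {
  // Bail out early if the LLM plugin is not installed or enabled.
  const enabled = await llms.openai.enabled();
  if (!enabled) {
    return undefined;
  }
  // Request the full completion in a single call rather than a stream.
  const response = await llms.openai.chatCompletions({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a cynical assistant.' },
      { role: 'user', content: message },
    ],
  });
  // choices[0].message.content holds the assistant's reply (an assumption
  // here, based on OpenAI's response shape).
  return response.choices[0].message.content;
};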

Developing this plugin

Backend
  1. Update Grafana plugin SDK for Go dependency to the latest minor version:

    go get -u github.com/grafana/grafana-plugin-sdk-go
    go mod tidy
    
  2. Build backend plugin binaries for Linux, Windows and Darwin:

    mage -v
    
  3. List all available Mage targets for additional commands:

    mage -l
    
Frontend
  1. Install dependencies

    npm install
    
  2. Build plugin in development mode and run in watch mode

    npm run dev
    
  3. Build plugin in production mode

    npm run build
    
  4. Run the tests (using Jest)

    # Runs the tests and watches for changes, requires git init first
    npm run test
    
    # Exits after running all the tests
    npm run test:ci
    
  5. Spin up a Grafana instance and run the plugin inside it (using Docker)

    npm run server
    
  6. Run the E2E tests (using Cypress)

    # Spins up a Grafana instance first that we test against
    npm run server
    
    # Starts the tests
    npm run e2e
    
  7. Run the linter

    npm run lint
    
    # or
    
    npm run lint:fix
    
Developing with the Example App

The LLM example app can be a quick way to test out changes to the LLM plugin.

To use the example app in conjunction with the LLM plugin:

  1. Clone the llm example app
  2. Update the following fields in docker-compose.yaml in the llm example app (see the sketch below):
  • Comment out the GF_INSTALL_PLUGINS: grafana-llm-app line
  • Add the following volume:
    <some-parent-path>/grafana-llm-app/dist:/var/lib/grafana/plugins/grafana-llm-app
  3. Follow the instructions in the llm example app to run the app
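
A minimal docker-compose.yaml fragment illustrating those edits (the service name, image tag, and GF_DEFAULT_APP_MODE setting are assumptions; adapt them to the example app's actual file):

services:
  grafana:
    image: grafana/grafana:latest
    environment:
      # Commented out so the locally built plugin is used instead:
      # GF_INSTALL_PLUGINS: grafana-llm-app
      GF_DEFAULT_APP_MODE: development  # allow loading the unsigned local build
    volumes:
      - <some-parent-path>/grafana-llm-app/dist:/var/lib/grafana/plugins/grafana-llm-app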

Release process

  • Bump version in package.json (e.g., 0.2.0 to 0.2.1)
  • Add notes to changelog describing changes since last release
  • Merge PR for a branch containing those changes into main
  • Go to Drone and identify the build corresponding to the merge into main
  • Promote to target 'publish'

