# Grafana LLM App (Public Preview)

A Grafana plugin designed to centralize access to LLMs, providing authentication, proxying, streaming, and custom extensions.

Installing this plugin will enable various pieces of LLM-based functionality throughout Grafana.

> **Note:** The Grafana LLM App plugin is currently in public preview. Grafana Labs offers support on a best-effort basis, and there might be breaking changes before the feature is generally available.
## Install the plugin on Grafana Cloud

Prerequisites:

- Any Grafana Cloud environment (including Free)
- API connection details from an account with OpenAI or Azure OpenAI

Steps:

- In your Grafana instance, open Administration → Plugins
- Select "All" instead of "Installed", and search for "LLM"
- Click "Install via grafana.com"
- On the LLM plugin's page, you should see your instance listed; click "Install plugin"
- Return to Grafana and search your installed plugins, reloading until the LLM plugin is listed (this may take a minute or two)
- Configuration: choose your provider (OpenAI or Azure) and fill in the required fields
- Save the settings, then click "Enable" (upper right) to enable the plugin
## Install the plugin directly

To install this plugin, use the `GF_INSTALL_PLUGINS` environment variable when running Grafana:

```sh
GF_INSTALL_PLUGINS=grafana-llm-app
```

or install it using the Grafana CLI:

```sh
grafana cli plugins install grafana-llm-app
```

The plugin can then be configured either in the UI or using provisioning, as shown below.
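For example, when running Grafana in Docker, the environment variable can be passed on the command line (the image tag and port mapping here are illustrative, not prescribed by the plugin):

```shell
# Run Grafana with the LLM plugin installed at startup.
docker run -d -p 3000:3000 \
  -e GF_INSTALL_PLUGINS=grafana-llm-app \
  grafana/grafana
```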
## Provisioning this plugin

To provision this plugin, set the following environment variable when running Grafana:

```sh
OPENAI_API_KEY=sk-...
```

and use the following provisioning file (e.g. in `/etc/grafana/provisioning/plugins/grafana-llm-app`, when running in Docker):

```yaml
apiVersion: 1

apps:
  - type: 'grafana-llm-app'
    disabled: false
    jsonData:
      openAI:
        url: https://api.openai.com
    secureJsonData:
      openAIKey: $OPENAI_API_KEY
```
### Using Azure OpenAI

To provision the plugin to use Azure OpenAI, use settings similar to this:

```yaml
apiVersion: 1

apps:
  - type: 'grafana-llm-app'
    disabled: false
    jsonData:
      openAI:
        provider: azure
        url: https://<resource>.openai.azure.com
        azureModelMapping:
          - ["gpt-3.5-turbo", "gpt-35-turbo"]
    secureJsonData:
      openAIKey: $OPENAI_API_KEY
```
where:

- `<resource>` is your Azure OpenAI resource name
- the `azureModelMapping` field contains `[model, deployment]` pairs, so that features know which Azure deployment to use in place of each model
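Conceptually, the mapping is a simple model-to-deployment lookup. A minimal sketch of that lookup (the `resolveDeployment` helper is hypothetical, for illustration only, and is not part of the plugin's API):

```typescript
// [model, deployment] pairs, as in the provisioning example above.
const azureModelMapping: [string, string][] = [
  ["gpt-3.5-turbo", "gpt-35-turbo"],
];

// Hypothetical helper: find the Azure deployment for a requested model,
// falling back to the model name itself when no mapping exists.
function resolveDeployment(model: string): string {
  const pair = azureModelMapping.find(([m]) => m === model);
  return pair ? pair[1] : model;
}

console.log(resolveDeployment("gpt-3.5-turbo")); // "gpt-35-turbo"
```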
## Adding LLM features to your plugin or Grafana core

To make use of this plugin when adding LLM-based features, you can use the helper functions in the `@grafana/experimental` package.

First, add the correct version of `@grafana/experimental` to the dependencies in your `package.json`:

```json
{
  "dependencies": {
    "@grafana/experimental": "1.7.0"
  }
}
```

Then in your components you can use the `llms` object from `@grafana/experimental` like so:
```tsx
import React, { useState } from 'react';
import { useAsync } from 'react-use';
import { scan } from 'rxjs/operators';
import { llms } from '@grafana/experimental';
import { PluginPage } from '@grafana/runtime';
import { Button, Input, Spinner } from '@grafana/ui';

const MyComponent = (): JSX.Element | null => {
  const [input, setInput] = useState('');
  const [message, setMessage] = useState('');
  const [reply, setReply] = useState('');

  const { loading, error } = useAsync(async () => {
    // Check if the LLM plugin is enabled and configured.
    const enabled = await llms.openai.enabled();
    if (!enabled) {
      return false;
    }
    if (message === '') {
      return;
    }

    // Stream the completions. Each element is the next stream chunk.
    const stream = llms.openai
      .streamChatCompletions({
        model: 'gpt-3.5-turbo',
        messages: [
          { role: 'system', content: 'You are a cynical assistant.' },
          { role: 'user', content: message },
        ],
      })
      .pipe(
        // Accumulate the stream chunks into a single string.
        scan((acc, delta) => acc + delta, '')
      );

    // Subscribe to the stream and update the state for each returned value.
    return stream.subscribe(setReply);
  }, [message]);

  if (error) {
    // TODO: handle errors.
    return null;
  }

  return (
    <div>
      <Input
        value={input}
        onChange={(e) => setInput(e.currentTarget.value)}
        placeholder="Enter a message"
      />
      <br />
      <Button type="submit" onClick={() => setMessage(input)}>
        Submit
      </Button>
      <br />
      <div>{loading ? <Spinner /> : reply}</div>
    </div>
  );
};
```
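The `scan` operator in the component above simply concatenates streamed content deltas into the full reply. The same accumulation, written as a plain reducer independent of rxjs (a standalone sketch; `accumulateChunks` is an illustrative helper, not part of any API here):

```typescript
// Accumulate streamed content deltas into the full reply string,
// mirroring scan((acc, delta) => acc + delta, '') in the component.
function accumulateChunks(chunks: string[]): string {
  return chunks.reduce((acc, delta) => acc + delta, "");
}

console.log(accumulateChunks(["Hel", "lo, ", "world"])); // "Hello, world"
```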
## Developing this plugin

### Backend

- Update the Grafana plugin SDK for Go dependency to the latest minor version:

  ```bash
  go get -u github.com/grafana/grafana-plugin-sdk-go
  go mod tidy
  ```

- Build backend plugin binaries for Linux, Windows and Darwin:

  ```bash
  mage -v
  ```

- List all available Mage targets for additional commands:

  ```bash
  mage -l
  ```
### Frontend

- Install dependencies:

  ```bash
  npm install
  ```

- Build plugin in development mode and run in watch mode:

  ```bash
  npm run dev
  ```

- Build plugin in production mode:

  ```bash
  npm run build
  ```

- Run the tests (using Jest):

  ```bash
  # Runs the tests and watches for changes; requires git init first
  npm run test

  # Exits after running all the tests
  npm run test:ci
  ```

- Spin up a Grafana instance and run the plugin inside it (using Docker):

  ```bash
  npm run server
  ```

- Run the E2E tests (using Cypress):

  ```bash
  # Spins up a Grafana instance first that we test against
  npm run server

  # Starts the tests
  npm run e2e
  ```

- Run the linter:

  ```bash
  npm run lint

  # or
  npm run lint:fix
  ```
## Developing with the Example App

The LLM example app can be a quick way to test out changes to the LLM plugin. To use the example app in conjunction with the LLM plugin:

- Clone the llm example app
- Update the following fields in `docker-compose.yaml` in the llm example app:
  - Comment out the plugin install line: `# GF_INSTALL_PLUGINS: grafana-llm-app`
  - Add the following volume: `<some-parent-path>/grafana-llm-app/dist:/var/lib/grafana/plugins/grafana-llm-app`
- Follow the instructions in the llm example app to run the app
## Release process

- Bump the version in `package.json` (e.g., 0.2.0 to 0.2.1)
- Add notes to the changelog describing the changes since the last release
- Merge the PR for a branch containing those changes into `main`
- Go to Drone and identify the build corresponding to the merge into `main`
- Promote it to the 'publish' target