# maubot_llm

This is a very basic maubot plugin for invoking LLMs.

It's very new and very rough. Use at your own risk, and expect problems.

The LLM must be supplied by an OpenAI-compatible server. For example, if you run LM Studio, this plugin can connect to its server. Anthropic is also supported.
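Concretely, "OpenAI-compatible" means the server accepts a standard chat completions request. A minimal sketch (LM Studio serves this API at `http://localhost:1234/v1` by default; adjust the host, port, and model name for your own server):

```sh
# Minimal OpenAI-style chat completions request (sketch only).
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```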

You can and probably should configure it to only respond to messages from specific users.
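As a hypothetical sketch of what such an allowlist could look like in the plugin's config (the actual key name in `base-config.yaml` may differ; `allowed_users` is an assumption):

```yaml
# Hypothetical key name; check base-config.yaml for the real schema.
allowed_users:
  - "@alice:example.com"
  - "@bob:example.com"
```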

## Installation
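The standard maubot plugin workflow should apply here (a sketch, assuming the maubot CLI `mbc` is installed and logged in to your maubot instance):

```sh
# Build the plugin into an .mbp archive and upload it to the
# maubot instance configured via `mbc login`.
mbc build --upload
```

Alternatively, upload the built `.mbp` file through the maubot admin web interface, then create an instance of the plugin and edit its config.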

## Usage

Once it's added to a room, every message from any user on the allowlist will cause the bot to invoke the LLM and respond with its output.

You can configure multiple backends. One of them should be designated as the default, but the bot can also use a different backend in each room. You can also use different models and system prompts in different rooms.
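As a rough sketch of how such a configuration might look (the `backends` map is part of the config, as noted below, but the per-backend field names shown here are assumptions, not the plugin's actual schema):

```yaml
# Sketch only: per-backend field names are assumptions.
default_backend: lmstudio
backends:
  lmstudio:
    type: openai
    base_url: http://localhost:1234/v1
    model: local-model
  claude:
    type: anthropic
    api_key: YOUR_API_KEY
    model: claude-3-opus-20240229
```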

The following commands are available for managing the bot in a room (an example session follows the list):

- To see the current backend, model, and system prompt, along with a list of available models (not supported for Anthropic backends), use `!llm info`.
- To switch to a different backend, use `!llm backend KEY`, where `KEY` is a key from the `backends` map in the configuration.
- To use a specific model, use `!llm model NAME`. Currently the name is passed directly as the `model` field in the request JSON when invoking the server.
- To change the system prompt, use `!llm system WRITE YOUR PROMPT HERE`.
- To clear the context (forget all past messages in the room), use `!llm clear`.
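For example, a room session switching to a hypothetical `claude` backend might look like:

```
!llm backend claude
!llm model claude-3-opus-20240229
!llm system You are a terse assistant. Answer in one sentence.
!llm clear
```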