Obsidian Local LLM Plugin

Obsidian Local LLM is a plugin for Obsidian that provides access to a powerful neural network, allowing users to generate text in a wide range of styles and formats using a local LLM from the LLaMA family.

The plugin lets users enter a prompt in a canvas block and receive the answer in a new block. The LLM can be configured with a variety of models and settings, allowing users to tailor the output to their specific needs.


See gallery below.

Dependencies

The plugin uses a server from llama-cpp-python as the API backend, which relies on the llama.cpp library. The plugin also requires Python 3.7 or later and the pip package manager to be installed on your system.

To install llama-cpp-python and its dependencies, run the following command:

pip install llama-cpp-python[server]
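Note that in zsh (and some other shells) square brackets are glob characters, so the extras specifier needs quoting; a minimal variant, assuming a zsh or bash shell:

```shell
# quote the extras specifier so the shell passes the brackets
# through to pip literally; the unquoted form above works in bash
pip install 'llama-cpp-python[server]'
```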

Large Language Models

Follow the link to the repository, click 'Show Table with models', and choose a model in the 'ggml' format that suits your needs.

Install plugin

For now, there are two ways to install the plugin:

Install using BRAT

  1. Install BRAT through Community Plugins
  2. Open the BRAT settings
  3. Click 'Add Beta plugin'
  4. Insert the link to this repository
  5. Click 'Add Plugin'
  6. Enable the 'Obsidian Local LLM' plugin in Community Plugins

Install from sources

  1. Clone the repository: git clone https://github.com/zatevakhin/obsidian-local-llm
  2. Change into the directory: cd obsidian-local-llm
  3. Install dependencies: npm install
  4. Build the plugin: npm run build
  5. Create an obsidian-local-llm directory under your vault's plugins folder, e.g. $HOME/MyObsidian/.obsidian/plugins
  6. Move main.js, manifest.json, and styles.css into that directory
  7. Open Obsidian > Settings > Community Plugins and toggle on Obsidian Local LLM
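The build-and-install steps above can be condensed into a short script; a sketch, where the vault path is a placeholder you should replace with your own:

```shell
# VAULT is a placeholder -- point it at your own Obsidian vault
VAULT="$HOME/MyObsidian"
PLUGIN_DIR="$VAULT/.obsidian/plugins/obsidian-local-llm"

# clone, install dependencies, and build the plugin
git clone https://github.com/zatevakhin/obsidian-local-llm
cd obsidian-local-llm
npm install
npm run build

# copy the built artifacts into the vault's plugin directory
mkdir -p "$PLUGIN_DIR"
cp main.js manifest.json styles.css "$PLUGIN_DIR/"
```

After copying, the plugin still has to be enabled from Obsidian's Community Plugins settings as described in step 7.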

Usage

To use the plugin, follow these steps:

  1. Open a terminal
  2. Set the MODEL environment variable to the path of your ggml model, for example: export MODEL=/.../ggml-model-name.bin
  3. Run the API server with python3 -m llama_cpp.server
  4. Keep the terminal open for as long as you want to use the plugin
  5. Open a canvas in Obsidian
  6. Create a new block
  7. Type in a prompt
  8. Right-click the block to open the context menu
  9. Click the "LLM Instruction" option
  10. Wait for the generated text to appear in a new block
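Steps 1–4 above amount to two commands; a sketch, where the model path is a placeholder for the ggml file you downloaded:

```shell
# point MODEL at your ggml model file (the path here is a placeholder),
# then start the llama-cpp-python API server; leave this running for as
# long as you want to use the plugin
export MODEL="$HOME/models/ggml-model-name.bin"
python3 -m llama_cpp.server
```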

Gallery

obsidian-local-llm-canvas.mp4
obsidian-local-llm-canvas-typewriter.webm


Contributing

Contributions to the plugin are welcome! If you would like to contribute, please fork the repository and submit a pull request.

License

This project is licensed under the MIT License - see the LICENSE file for details.
