
nextjs-vllm-ui

Fully-featured & beautiful web interface for vLLM

Get up and running with Large Language Models quickly, locally, and even offline. This project aims to be the easiest way to get started with LLMs, with no tedious and annoying setup required!

Features ✨

  • Beautiful & intuitive UI: Inspired by ChatGPT for a familiar user experience.
  • Fully local: Chats are stored in the browser's localStorage, so there's no need to run a database.
  • Fully responsive: Chat from your phone with the same ease as on desktop.
  • Easy setup: Just clone the repo and you're good to go!
  • Code syntax highlighting: Messages that include code are highlighted for easy reading.
  • Copy code blocks easily: Copy the highlighted code with one click.
  • Chat history: Chats are saved and easily accessed.
  • Light & dark mode: Switch between light and dark themes.

Preview

Demo video: ollama-Original.MOV

Prerequisites ⚙️

To use the web interface, the following requirements must be met:

  1. vLLM installed and running, either natively or in a Docker container (see the example below).
  2. Node.js (18+) and npm (download from nodejs.org).
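
For reference, a vLLM server exposing the OpenAI-compatible API can typically be started like this (a minimal sketch; the model name is only an example, swap in whatever model you want to serve):

python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2 --port 8000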

Usage 🚀

The easiest way to get started is to use the pre-built Docker image.

docker run --rm -d -p 3000:3000 -e VLLM_URL=http://host.docker.internal:8000 ghcr.io/yoziru/nextjs-vllm-ui:latest
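
To check that the UI can reach your backend, you can query vLLM's OpenAI-compatible models endpoint directly (assuming vLLM is on the default port 8000):

curl http://localhost:8000/v1/models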

If you're using Ollama as the backend, you also need to set VLLM_MODEL (this example additionally sets a token limit for the UI):

docker run --rm -d -p 3000:3000 -e VLLM_URL=http://host.docker.internal:11434 -e NEXT_PUBLIC_TOKEN_LIMIT=8192 -e VLLM_MODEL=llama3 ghcr.io/yoziru/nextjs-vllm-ui:latest
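
The model named in VLLM_MODEL must already be available in your Ollama instance; if it isn't, pull it first:

ollama pull llama3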

Then go to localhost:3000 and start chatting with your favourite model!

Development 📖

To install and run the web interface locally, follow the steps below.

1. Clone the repository to a directory on your machine:

git clone https://github.com/yoziru/nextjs-vllm-ui

2. Open the folder:

cd nextjs-vllm-ui

3. Rename .example.env to .env:

mv .example.env .env

4. If your vLLM instance is NOT running on the default IP address and port, change the variables in the .env file to fit your use case:

VLLM_URL="http://localhost:8000"
VLLM_API_KEY="your-api-key"
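
You can sanity-check these values before starting the app. If your vLLM server was launched with an API key, it is passed as a bearer token (the key below is just the placeholder from the .env example):

curl -H "Authorization: Bearer your-api-key" http://localhost:8000/v1/models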

5. Install dependencies:

npm install

6. Start the development server:

npm run dev

7. Go to localhost:3000 and start chatting with your favourite model!
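
Once you're happy with your changes, you can also build and serve a production bundle (assuming the standard Next.js scripts in package.json):

npm run build
npm start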

Tech stack

NextJS - React Framework for the Web

TailwindCSS - Utility-first CSS framework

shadcn-ui - UI components built using Radix UI and Tailwind CSS

shadcn-chat - Chat components for NextJS/React projects