ahmetkca/PolyOllama

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

16 Commits
 
 
 
 
 
 
 
 
 
 

Repository files navigation

# PolyOllama

Run multiple open source large language models, the same model or different ones such as Llama 2, Mistral, and Gemma, in parallel, powered by Ollama.

## Demo

Screen.Recording.April.4.mov

## Instructions to run it locally

You need Ollama installed on your computer.

Press cmd + k to open the chat prompt (alt + k on Windows).

Start the backend:

```sh
cd backend
bun install
bun run index.ts
```

Then, in a separate terminal, start the frontend:

```sh
cd frontend
bun install
bun run dev
```
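The backend's job is to fan a single prompt out to several Ollama models at once. A minimal TypeScript sketch of that idea is below; it assumes a local Ollama server on its default port (11434) with the named models already pulled, and it is an illustration, not the actual PolyOllama backend code:

```typescript
// One Ollama /api/generate request body per model (pure function, easy to test).
type GenerateRequest = { model: string; prompt: string; stream: boolean };

function buildRequests(models: string[], prompt: string): GenerateRequest[] {
  return models.map((model) => ({ model, prompt, stream: false }));
}

// Send all requests concurrently and collect each model's response text.
// Assumes an Ollama server at http://localhost:11434 with the models pulled.
async function askAll(
  models: string[],
  prompt: string,
): Promise<Record<string, string>> {
  const entries = await Promise.all(
    buildRequests(models, prompt).map(async (body) => {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });
      const json = await res.json();
      return [body.model, json.response as string] as const;
    }),
  );
  return Object.fromEntries(entries);
}
```

For example, `await askAll(["llama2", "mistral"], "Why is the sky blue?")` would return a map from each model name to its answer, with both models generating at the same time rather than one after the other.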

## Running in Docker containers (frontend + backend + Ollama)

On Windows:

```sh
docker compose -f docker-compose.windows.yml up
```

On Linux/macOS:

```sh
docker compose -f docker-compose.unix.yml up
```

The frontend is available at http://localhost:5173.

⚠️ Still a work in progress.
