llama-cpp
78 public repositories match this topic.
- Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely. (Dart, updated Jun 10, 2024)
- Local ML voice chat using high-end models. (C, updated Jun 9, 2024)
- Local character AI chatbot with a Chroma vector store for memory, plus scripts to process documents for Chroma. (Python, updated Jun 8, 2024)
- A custom framework for easy use of LLMs, VLMs, etc., supporting various modes and settings via a web UI. (Python, updated Jun 10, 2024)
- Workbench for learning and practising AI techniques in real scenarios on Android devices, powered by GGML (Georgi Gerganov Machine Learning), NCNN (Tencent NCNN), and FFmpeg. (C++, updated Jun 5, 2024)
- A Genshin Impact question-answering project powered by Qwen1.5-14B-Chat. (Python, updated Jun 4, 2024)
- Static builds of llama.cpp (currently only amd64 server builds are available). (Dockerfile, updated Jun 4, 2024)
- Genshin Impact character chat models tuned with LoRA on LLMs. (Python, updated Jun 3, 2024)
- Unofficial Gradio repo for the ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji. (Jupyter Notebook, updated Jun 3, 2024)
- PowerShell automation to download large language models (LLMs) from Git repositories and quantize them with llama.cpp into the GGUF format. (PowerShell, updated May 30, 2024)