
Local LLM as backend for DemoGPT agent #41

Open
paluigi opened this issue Aug 23, 2023 · 4 comments
Labels
enhancement New feature or request

Comments


paluigi commented Aug 23, 2023

Is your feature request related to a problem? Please describe.
Using local LLMs instead of the OpenAI API as the backend.

Describe the solution you'd like
Create a DemoGPT agent from a locally available model (ideally a quantized Llama 2 model via llama-cpp-python).

Describe alternatives you've considered
If that's already possible, a guide or some instructions on how to do it would be greatly appreciated!

Additional context
NA
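
A minimal sketch of what this could look like, assuming llama-cpp-python as the backend. This is not DemoGPT's actual API; the model path, prompt format, and function names below are illustrative assumptions.

```python
# Hedged sketch (not DemoGPT's actual API): generating text from a local
# quantized Llama 2 model with llama-cpp-python. The model path and the
# simplified prompt format are assumptions for illustration.

def build_llama2_prompt(user_msg: str) -> str:
    """Wrap a user message in the (simplified) Llama 2 chat instruction format."""
    return f"[INST] {user_msg} [/INST]"

def run_local_llama(user_msg: str, model_path: str) -> str:
    """Generate a completion from a local quantized model via llama-cpp-python."""
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(model_path=model_path, n_ctx=2048)
    out = llm(build_llama2_prompt(user_msg), max_tokens=128)
    return out["choices"][0]["text"]
```

An agent backend would then call `run_local_llama` wherever it currently calls the OpenAI API, keeping the rest of the pipeline unchanged.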

@melih-unsal melih-unsal added the enhancement New feature or request label Aug 23, 2023
melih-unsal (Owner) commented

Hi @paluigi,

Thank you for highlighting this feature request. We truly value your feedback and are always eager to improve DemoGPT based on our community's suggestions.

At present, our primary focus is enhancing DemoGPT's capabilities by adding more tools. That said, integrating local models is definitely on our roadmap, and Llama2 is indeed at the top of our list for such integrations.

We appreciate your input and dedication to the growth of DemoGPT. 🙏 Stay tuned for updates!

wusiyaodiudiu commented

Hi @paluigi
You might consider referring to the implementation here, which utilizes FastChat to encapsulate other open-source models for invocation through the OpenAI API. I've employed the chatglm-6b model here, though I'm uncertain whether it supports the llama-cpp model.

Link: https://github.com/chatchat-space/Langchain-Chatchat/blob/master/server/llm_api.py

If necessary, I can submit the code to GitHub. @melih-unsal
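
The FastChat approach described above means existing OpenAI-based code only needs its base URL redirected to a local server. A hedged sketch of that idea, assuming a server on localhost:8000 and the pre-1.0 `openai` client used in mid-2023 (the host, port, and model name are assumptions):

```python
# Hedged sketch: FastChat exposes local models behind an OpenAI-compatible
# HTTP API, so OpenAI-based code can be pointed at it by changing the base
# URL. Host, port, and model name are illustrative assumptions.

def local_api_base(host: str = "localhost", port: int = 8000) -> str:
    """Base URL of a locally running OpenAI-compatible server (e.g. FastChat)."""
    return f"http://{host}:{port}/v1"

def chat_with_local_model(prompt: str, model: str = "chatglm-6b") -> str:
    """Send a chat completion request to the local server (openai<1.0 style)."""
    import openai  # pip install "openai<1"

    openai.api_base = local_api_base()
    openai.api_key = "EMPTY"  # local servers typically ignore the key
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]
```

The appeal of this design is that the agent code stays unchanged; only the client configuration differs between the OpenAI backend and a local one.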


paluigi commented Sep 3, 2023

Thanks @wusiyaodiudiu, I will have a look at your repo!

wusiyaodiudiu commented

Hi @paluigi
I have already forked this project and committed the relevant implementation code for the local LLM. This time I only committed the part related to the local LLM, so there may be some minor errors. However, I believe the implementation approach is relatively easy to understand and extend.

Newly added files:
demogpt/model_config.py
demogpt/server/llm_api.py
Modified files:
demogpt/app.py
demogpt/model.py

3 participants