Local AI #7

Open
BBC-Esq opened this issue Oct 5, 2023 · 4 comments

Comments


BBC-Esq commented Oct 5, 2023

Your project caught my attention. Feel free to check out my project on my GitHub as well. Would it be possible to adjust your code to work with a local LLM instead of GPT-4?

@vishwasg217 (Owner)

Hey,

Thank you for showing interest in my project.

Yes, you should be able to use a local LLM. You just need to add the code for accessing the local LLM to the get_model() method in src/utils.py. That should be enough if you're looking to use Finsight locally.
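
For illustration, that branch could look roughly like this (a hypothetical sketch, not code from the repo; the "local" model name, the weights path, and the choice of llama-cpp-python via LangChain are assumptions):

    from langchain.llms import LlamaCpp

    def get_model(model_name):
        if model_name == "openai":
            ...  # existing OpenAI setup stays as-is
        elif model_name == "local":
            # Hypothetical branch: load local weights with llama-cpp-python.
            # The model path is a placeholder; point it at your own weights.
            return LlamaCpp(model_path="/path/to/model.gguf", n_ctx=4096)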

However, if you're planning to deploy, I'd suggest you have a look at the pricing plan of the cloud service you intend to use. Local LLM weights can be quite large, so the bill can shoot up significantly.

Let me know if you have any other questions.

Thanks


BBC-Esq commented Oct 6, 2023

If I have a local LLM that is being run on a server like localhost, where would I modify the code to add my specific server information so it connects to it just as if it were the ChatGPT/OpenAI model?


BBC-Esq commented Oct 6, 2023

Also, any chance you can share some screenshots?


styck commented Dec 26, 2023

If I have a local LLM that is being run on a server like localhost, where would I modify the code to add my specific server information so it connects to it just as if it were the ChatGPT/OpenAI model?

I'm using Windows and LM Studio, which lets you start a server on a local port. I just modified get_model() in utils.py as follows:

    from langchain_openai import ChatOpenAI

    def get_model(model_name):
        # OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
        if model_name == "openai":
            # Point the OpenAI-compatible client at the LM Studio server;
            # the api_key is ignored by a local server but must be non-empty.
            model = ChatOpenAI(base_url="http://192.168.50.201:1234/v1", api_key="xxx")
            return model

I wasn't actually on the same computer: my desktop had the finsight code and was using the LLM on a laptop, so replace the IP address with your localhost. The api_key is not needed and will be ignored for a local LLM. A better solution would be to allow selecting either OpenAI or a local LLM, and use the API key only when it's needed.
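
For example, that selection could look roughly like this (a sketch, not code from the repo; the "local" model name and the LM_STUDIO_BASE_URL environment variable are made up for illustration):

    import os
    from langchain_openai import ChatOpenAI

    def get_model(model_name):
        if model_name == "openai":
            # Real OpenAI endpoint: a valid key is required here.
            return ChatOpenAI(api_key=os.environ["OPENAI_API_KEY"])
        elif model_name == "local":
            # LM Studio (or any OpenAI-compatible server): the key is ignored,
            # but the client expects a non-empty string.
            base_url = os.environ.get("LM_STUDIO_BASE_URL", "http://localhost:1234/v1")
            return ChatOpenAI(base_url=base_url, api_key="not-needed")
        raise ValueError(f"Unknown model name: {model_name}")

Swapping the base URL works because LM Studio exposes the same /v1 chat-completions API that the OpenAI client speaks.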

I also modified \1_📊_Finance_Metrics_Review.py so it only asks me for the API key if it isn't defined. I'm using Visual Studio Code, so I just put the API keys in my launch.json for debugging so I don't have to enter them every time.
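
Something along these lines, assuming the page is a Streamlit app (the widget label is illustrative):

    import os
    import streamlit as st

    # Prompt for the key only when it isn't already in the environment
    # (e.g. set via the "env" block of a VS Code launch.json configuration).
    openai_api_key = os.environ.get("OPENAI_API_KEY")
    if not openai_api_key:
        openai_api_key = st.text_input("OpenAI API Key", type="password")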
