How to use llama2 or llama3 #1458
Comments
Specify the API base in the UI; it should be the same as GEN_AI_API_ENDPOINT, so try http://host.docker.internal:11434
Thanks! Setting GEN_AI_API_ENDPOINT to http://host.docker.internal:11434/ is right.
I'm having the same issue: that endpoint (http://host.docker.internal:11434/) does not reach Ollama on Windows 11 Home.
As an additional test, I ran a curl query against the llama2 model directly, and it worked.
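For reference, a direct query against a locally running Ollama server can be sketched like this (assuming the default port 11434 and that the llama2 model has already been pulled; the prompt text is just an example):

```shell
# Query the local Ollama generate API directly, bypassing Danswer.
# If this succeeds, Ollama itself is up; the problem is container networking.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Hello", "stream": false}'
```

If this works from the host but the same request fails from inside a container using http://host.docker.internal:11434, the issue is how the container resolves the host, not Ollama.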
It seems a Docker configuration change was needed to make it work.
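The thread doesn't say which Docker change was needed; one common fix on Linux and some Windows setups is mapping host.docker.internal to the host gateway via extra_hosts in a compose override (a sketch; the service name here is hypothetical and should match the one in Danswer's own compose file):

```yaml
# docker-compose override: make host.docker.internal resolve inside the container
services:
  api_server:   # hypothetical service name; use the actual one from Danswer's compose file
    extra_hosts:
      - "host.docker.internal:host-gateway"
```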
I installed llama2 and llama3 through Ollama on Windows, and Danswer is also installed on Windows,
but I cannot use the local llama2 or llama3 models. Is something wrong with my environment?