Error: llama runner process no longer running: -1 #3904
Comments
Can you get logs for the server? I’ve seen this error when I couldn’t load some linear algebra libraries.
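For context, on a standard systemd-based Linux install the server logs can be pulled from the journal; a minimal sketch, assuming the service unit is named `ollama` (the default for the official installer):

```shell
# Tail the most recent ollama server logs (systemd installs only):
journalctl -u ollama --no-pager | tail -n 100

# Filter for the runner failure and any shared-library load errors
# (e.g. missing CUDA or BLAS .so files) that precede it:
journalctl -u ollama --no-pager | grep -iE "llama runner|error|\.so"
```

The lines immediately before the `llama runner process no longer running` message are usually the informative ones.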
Also happens when trying to load a Phi-3 GGUF created with current llama.cpp versions.
@kannon92 > logs for the server?

Apr 25 11:18:52 vassar-Latitude-5490 ollama[6500]: [GIN] 2024/04/25 - 11:18:52 | 200 | 345.721µs | 127.0.0.1 | HEAD "/"
I have the same issue with the Phi-3 model.
I have the same issue after updating manually (this is on a Raspberry Pi 4B), but with Apple's new OpenELM. I used the conversion and quantization scripts from the PR by joshcarp and successfully built a GGUF file for OpenELM-270M. I want to test it, but this error is preventing me. Interestingly, I can still run gemma:2b, although that was pulled with an older version of ollama.
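For reference, the rough path from a converted checkpoint to a runnable model looks like this; a sketch, assuming llama.cpp's `convert-hf-to-gguf.py` script and `quantize` tool (for OpenELM these come from the joshcarp PR branch, not mainline), with illustrative file and model names:

```shell
# Convert the Hugging Face checkpoint to GGUF (script from llama.cpp):
python convert-hf-to-gguf.py ./OpenELM-270M --outfile openelm-270m-f16.gguf

# Quantize to a smaller format suitable for a Raspberry Pi 4B:
./quantize openelm-270m-f16.gguf openelm-270m-q4_0.gguf q4_0

# Register the GGUF with ollama via a one-line Modelfile, then run it:
echo 'FROM ./openelm-270m-q4_0.gguf' > Modelfile
ollama create openelm-270m -f Modelfile
ollama run openelm-270m
```

If the GGUF was produced by a newer llama.cpp than the one bundled in the installed ollama release, the runner can fail to load it, which matches the error above.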
Initial support for Phi-3 was added in 0.1.32, and conversion should be working in 0.1.33. Please give the latest RC a try and let us know if you're still having problems.
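If useful, a quick way to upgrade and confirm which build is actually running; a sketch, assuming the official Linux install script (a pre-release RC may instead need to be downloaded from the GitHub releases page):

```shell
# Reinstall/upgrade via the official script (fetches the latest release):
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the installed version before retrying the model:
ollama -v
```

A version mismatch between a manually updated client and a still-running older server process is another way to end up with runner errors, so restarting the service after upgrading is worth checking too.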
What is the issue?
Trying to run a fine-tuned version of Llama 2 with a GGUF of 13.5 GB.
OS
Linux
GPU
Nvidia
CPU
Intel
Ollama version
v0.1.32