An error is reported when the model is loaded #149
Comments
Hello, thank you for your interest. Can you provide more information, such as which command you used to load the model?
I encountered the same issue. I downloaded the model files from https://modelscope.cn/models/AI-ModelScope/InternVL-Chat-V1-5/files and followed the instructions at https://github.com/OpenGVLab/InternVL/blob/main/document/how_to_deploy_a_local_demo.md to run the Gradio demo. I executed the following command: The console output was:

Discovered apex.normalization.FusedRMSNorm - will use it instead of LlamaRMSNorm
Do you wish to run the custom code? [y/N] y

Could you please help resolve this issue? Thanks in advance! P.S. `pip list` output (truncated): accelerate 0.30.1
Oh, I figured it out. You should use:

python3.10 -m internvl.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000/ --port 40000 --device auto --worker http://localhost:40000/ --model-path /root/onethingai-tmp/models/InternVL-Chat-V1-5

That is, use the `internvl.serve.model_worker` entry point and make sure `--model-path` points at the checkpoint folder.
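For context, the deploy guide linked above starts three processes; the worker command in this comment is step 2. A hedged sketch of the full sequence follows, where the model path is a placeholder and the exact module names should be checked against the current deploy doc:

```shell
MODEL_PATH=/path/to/InternVL-Chat-V1-5   # placeholder: your local checkpoint directory

# 1. Start the controller (the worker registers with it on port 10000)
python3.10 -m internvl.serve.controller --host 0.0.0.0 --port 10000 &

# 2. Start the model worker -- note internvl.serve.model_worker,
#    matching the command that resolved this issue
python3.10 -m internvl.serve.model_worker --host 0.0.0.0 \
    --controller http://localhost:10000/ --port 40000 --device auto \
    --worker http://localhost:40000/ --model-path "$MODEL_PATH" &

# 3. Start the Gradio web UI (module name per the linked deploy doc;
#    verify it against your InternVL version)
python3.10 -m internvl.serve.gradio_web_server --controller http://localhost:10000 &
```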
Yes, I missed that critical information in the documentation. It worked after I changed the folder path and used `internvl.serve.model_worker`. Thank you very much! P.S. Is there any plan to support Apple M-series chips? They are a very cost-effective choice for LLM application development. Thanks.
Yes, I have considered Mac support, but I've been quite busy with work lately and haven't had time to try it out. On the other hand, my Mac only has 16GB of memory, which might not be sufficient to debug this 26B model. But we recently released smaller models with 2B and 4B parameters, which I might be able to deploy on my device.
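The 16GB concern is easy to verify with a back-of-envelope weight-memory estimate. The sketch below is illustrative only: it counts weights alone and ignores activations, KV cache, and framework overhead, so real usage is higher.

```python
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GB (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

# InternVL-Chat-V1-5 has roughly 26B parameters.
print(weight_memory_gb(26e9, 2))    # fp16/bf16: 52.0 GB -> far beyond 16 GB
print(weight_memory_gb(26e9, 0.5))  # 4-bit quantized: 13.0 GB, still tight
print(weight_memory_gb(2e9, 2))     # a 2B model in fp16: 4.0 GB, comfortable
```

This is why the 2B/4B variants are the realistic targets for a 16GB machine.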
Great! |
Some weights of LlavaLlamaForCausalLM were not initialized from the model checkpoint at /data/workspace/models/InternVL-Chat-V1-5 and are newly initialized:
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
AttributeError: 'NoneType' object has no attribute 'is_loaded'
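The traceback above is consistent with the fix discussed in this thread: a LLaVA-style worker looks up a vision tower from the checkpoint config, and loading an InternVL checkpoint through the wrong entry point can leave that lookup returning `None`, so any attribute access (`.is_loaded`) fails. The names `get_vision_tower`, `is_loaded`, and `load_model` below mirror the LLaVA-style API only loosely; this standalone sketch is an assumption about the failure mode, not the actual library code.

```python
class VisionTower:
    """Stand-in for a vision encoder wrapper (hypothetical, for illustration)."""
    def __init__(self):
        self.is_loaded = False

    def load_model(self):
        self.is_loaded = True


class FakeModel:
    """Stand-in for the multimodal model held by the worker."""
    def __init__(self, vision_tower):
        self._vision_tower = vision_tower

    def get_vision_tower(self):
        return self._vision_tower


def ensure_vision_tower(model):
    tower = model.get_vision_tower()
    if tower is None:
        # This is the situation behind "'NoneType' object has no attribute
        # 'is_loaded'": the checkpoint was loaded via the wrong serving
        # entry point, so no vision tower was attached.
        raise RuntimeError(
            "No vision tower found - load the checkpoint with "
            "internvl.serve.model_worker, not the llava worker."
        )
    if not tower.is_loaded:
        tower.load_model()
    return tower
```

With a guard like this the worker fails with an actionable message instead of an opaque `AttributeError`.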