
Asking a question fails after loading the BELLE and Vicuna models? #97

Open
emiyadavid opened this issue Jun 27, 2023 · 2 comments
Labels
bug Something isn't working

Comments

@emiyadavid

The chatglm-6B-int8 model loads and answers questions normally, but after loading the BELLE-7b or Vicuna-7b model, asking a question shows ERROR on the page and the backend reports the following error:
TypeError: The current model class (LlamaModel) is not compatible with .generate(), as it doesn't have a language model head. Please use one of the following classes instead: {'LlamaForCausalLM'}

A breakpoint locates the failure at this line in the get_knowledge_based_answer function of the KnowledgeBasedChatLLM class:
result = knowledge_chain({"query": query})
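
The error message itself points at the likely cause: the checkpoint was loaded as a bare LlamaModel, which has no language-model head and therefore cannot call .generate(). A minimal sketch of the probable fix, assuming the model is loaded with Hugging Face transformers (the local path below is hypothetical):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./models/BELLE-7B"  # hypothetical path; the same applies to Vicuna-7b

tokenizer = AutoTokenizer.from_pretrained(model_path)
# AutoModel (or LlamaModel) returns the bare transformer without a language
# model head and does not support .generate(). AutoModelForCausalLM resolves
# to LlamaForCausalLM for Llama-based checkpoints such as BELLE and Vicuna.
model = AutoModelForCausalLM.from_pretrained(model_path)

inputs = tokenizer("你好", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))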

@thomas-yanxin thomas-yanxin added the bug Something isn't working label Jan 8, 2024
@Yanllan

Yanllan commented Jan 15, 2024

Judging from the error, it seems the Vicuna-7b model does not support the .generate() method. If possible, please switch to another model, for example a chat-gpt 7B model, and try again.
A generative LLM is recommended.

@emiyadavid
Author

emiyadavid commented Jan 15, 2024 via email
