
evaluation #422

Closed
1 of 2 tasks
ZHANGJINKUI opened this issue Apr 2, 2024 · 2 comments · Fixed by #488
Comments

@ZHANGJINKUI

System Info

python eval.py --model hf --model_args pretrained=/mnt/sdb/zjk/llama2/llama-recipes/Llama-2-7b-hf,dtype="float",peft=/mnt/sdb/zjk/llama2/llama2-lora --tasks hellaswag --num_fewshot 10 --device cuda:0 --batch_size 8
Error:
2024-04-02:06:58:12,156 ERROR [eval.py:226] An error occurred during evaluation: module 'lm_eval.tasks' has no attribute 'initialize_tasks'

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

Was the evaluation script changed?
How can I evaluate a LoRA-finetuned model now?

Error logs

2024-04-02:06:58:12,156 ERROR [eval.py:226] An error occurred during evaluation: module 'lm_eval.tasks' has no attribute 'initialize_tasks'
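The error indicates that the installed lm_eval release no longer exposes `lm_eval.tasks.initialize_tasks`, which the script calls. A minimal sketch for diagnosing this, assuming lm_eval is installed locally (the attribute names checked below are assumptions based on the error message in this issue, not a definitive map of the library's API):

```python
"""Sketch: report which task-initialization attribute the installed
lm_eval exposes, to help pick a compatible eval.py / lm_eval pairing."""
import importlib.util


def describe_lm_eval_api() -> str:
    # Gracefully handle the case where lm_eval is not installed at all.
    if importlib.util.find_spec("lm_eval") is None:
        return "lm_eval not installed"
    import lm_eval.tasks as tasks
    # Older releases exposed a module-level initialize_tasks() helper.
    if hasattr(tasks, "initialize_tasks"):
        return "old API: lm_eval.tasks.initialize_tasks()"
    # Newer releases restructured task setup (assumed name: TaskManager).
    if hasattr(tasks, "TaskManager"):
        return "new API: lm_eval.tasks.TaskManager"
    return "unknown lm_eval API"


if __name__ == "__main__":
    print(describe_lm_eval_api())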

Expected behavior

Evaluate the LoRA-finetuned model and report accuracy on a specific dataset such as CoLA.

@LHQUer

LHQUer commented Apr 24, 2024

Hello! Have you solved this issue? I have the same question as you. If you have solved it, please tell me how. Thanks!

@mreso
Contributor

mreso commented May 2, 2024

Hi @LHQUer @ZHANGJINKUI

What version of lm_eval are you using?
