[bug] LlamaHandler format #430
Thank you for catching this! This is indeed an oversight on our end. Currently, we are observing minor improvements in both the Llama3-8B and Llama3-70B models, which do not affect their positions on the leaderboard. We will verify the results and address this issue in the next release by updating the handler and adjusting the leaderboard scores accordingly. This update will ensure that the Llama handler's prompt template aligns with the official one.
No problem! Awesome -- thanks for your great work on this benchmark. I have another (mostly unrelated) question as well -- are the benchmark numbers reported when running with temperature=0? I was unable to reproduce the scores for Llama-3-8B-Instruct and noticed the default temperature setting was 0.7.
We are using
If we are sampling but not seeding, then results will naturally not be deterministic. There is some unavoidable non-determinism that vLLM introduces, but I imagine most of the discrepancy stems from sampling with non-zero temperature.
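To illustrate why non-zero temperature alone explains most of the run-to-run variance, here is a minimal sketch of temperature-scaled sampling. `sample_token` is a hypothetical helper for illustration, not code from this repository:

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index from raw logits.
    temperature == 0 -> greedy argmax (deterministic);
    temperature > 0  -> sample from the temperature-scaled softmax (stochastic).
    Hypothetical helper for illustration only."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.5]
# At temperature 0, every run returns the argmax, so outputs are reproducible.
assert all(sample_token(logits, 0) == 0 for _ in range(100))
```

With temperature=0.7 the same call can return any index with non-zero probability, which is why unseeded runs diverge even before vLLM's own scheduling non-determinism enters the picture.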
I think there is a bug in the way formatting is handled for Llama models here in this function:
Given placeholder system and user messages, applying the Llama-3 tokenizer's chat template yields the following output:
'<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nSYSTEM<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nUSER<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n'
The current format omits the "\n\n" that follows each `<|end_header_id|>` token.
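A minimal sketch of a formatter that reproduces the tokenizer output above, including the missing "\n\n" after each header. `format_llama3_prompt` is a hypothetical name, not the actual handler function in this repository:

```python
def format_llama3_prompt(messages):
    """Build a Llama-3 chat prompt from a list of {"role", "content"} dicts,
    inserting the "\n\n" that follows each <|end_header_id|> token.
    Hypothetical sketch of the fix, not the repo's handler code."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "SYSTEM"},
    {"role": "user", "content": "USER"},
]
expected = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "SYSTEM<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "USER<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
# Matches the apply_chat_template output quoted above.
assert format_llama3_prompt(messages) == expected
```

Without the "\n\n", the role headers run directly into the message content, so prompts diverge from what the model saw during instruction tuning.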