[bug] LlamaHandler format #430

Open
justinwangx opened this issue May 17, 2024 · 4 comments

@justinwangx (Contributor)

I think there is a bug in the way formatting is handled for Llama models in this function:

def _format_prompt(prompt, function, test_category):
    conversations = f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>{SYSTEM_PROMPT_FOR_CHAT_MODEL}<|eot_id|><|start_header_id|>user<|end_header_id|>{USER_PROMPT_FOR_CHAT_MODEL.format(user_prompt=prompt, functions=str(function))}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""
    return conversations

Given placeholder system and user messages, if I use the Llama-3 tokenizer as follows:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
messages = [
    {"role": "system", "content": "SYSTEM"},
    {"role": "user", "content": "USER"},
]
# Render the conversation with the model's official chat template,
# appending the assistant header so the model continues from there.
conversations = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(conversations)

then I get the following output:

'<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nSYSTEM<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nUSER<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n'

The current format is missing the "\n\n" that the official template inserts after each <|end_header_id|> token.
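
For reference, here is a rough sketch of the template with the missing newlines added. It reuses the SYSTEM_PROMPT_FOR_CHAT_MODEL and USER_PROMPT_FOR_CHAT_MODEL constants from the function quoted above, and is only meant to illustrate the expected format, not necessarily how the maintainers will fix it:

def _format_prompt(prompt, function, test_category):
    # Insert "\n\n" after each <|end_header_id|>, matching the output of the
    # official Llama-3 chat template shown above.
    conversations = (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{SYSTEM_PROMPT_FOR_CHAT_MODEL}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{USER_PROMPT_FOR_CHAT_MODEL.format(user_prompt=prompt, functions=str(function))}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
    return conversations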

@CharlieJCJ (Contributor) commented May 22, 2024

Thank you for catching this! This is indeed an oversight on our end.

With the corrected template, we are observing minor score improvements for both Llama3-8B and Llama3-70B, neither of which changes their positions on the leaderboard. We will verify the results and address this issue in the next release by updating the handler and adjusting the leaderboard scores accordingly, so that the Llama handler's prompt template aligns with the official one.
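
One way to keep the handler aligned with the official template going forward is to build the prompt with tokenizer.apply_chat_template rather than a hand-written string. This is only a sketch, assuming the handler can load the HF tokenizer; format_prompt_via_template is a hypothetical name, and the constants are the handler's existing ones:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

def format_prompt_via_template(prompt, function):
    # Build the messages from the handler's existing system/user prompt constants,
    # then let the official chat template supply all special tokens and newlines.
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT_FOR_CHAT_MODEL},
        {"role": "user", "content": USER_PROMPT_FOR_CHAT_MODEL.format(
            user_prompt=prompt, functions=str(function))},
    ]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )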

@justinwangx (Contributor, Author)

No problem! Awesome -- thanks for your great work on this benchmark.

I have another (mostly unrelated) question as well: were the reported benchmark numbers obtained with temperature=0? I was unable to reproduce the scores for Llama-3-8B-Instruct and noticed that the default temperature setting is 0.7.

@CharlieJCJ (Contributor)

We used temperature = 0.7 for the reported Llama3 numbers. If you can share more details about what is different in your setup, we can investigate why you are not able to reproduce the scores.

@justinwangx (Contributor, Author)

If we are sampling but not seeding, then results will naturally not be deterministic. There is some unavoidable non-determinism that vLLM introduces, but I imagine most of the discrepancy stems from sampling with non-zero temperature.
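
For anyone trying to reproduce the scores, here is a minimal sketch of reducing that variance with vLLM, assuming the standard LLM / SamplingParams API and a vLLM version whose SamplingParams supports a per-request seed; the model name and parameter values are illustrative, not the benchmark's actual configuration:

from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")

# Option 1: greedy decoding removes sampling variance entirely.
greedy = SamplingParams(temperature=0.0, max_tokens=512)

# Option 2: keep temperature=0.7 but fix a per-request seed so the sampling
# itself is repeatable (some vLLM-level nondeterminism can still remain).
seeded = SamplingParams(temperature=0.7, max_tokens=512, seed=0)

outputs = llm.generate(["<formatted prompt here>"], greedy)
print(outputs[0].outputs[0].text)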
