ValueError: Transformers now supports natively BetterTransformer optimizations (torch.nn.functional.scaled_dot_product_attention) for the model type llama. As such, there is no need to use model.to_bettertransformers() or BetterTransformer.transform(model) from the Optimum library. Please upgrade to transformers>=4.36 and torch>=2.1.1 to use it.
I updated the version of transformers to 4.36 and torch to 2.1.1. However, I am still encountering this error.
Thanks @huangjf11 for opening this issue. Is this happening during inference? It's legacy code in chat_example that needs to be removed; the code has been updated here and in the fine-tuning path as well.
This PR fixes the issue in chat_example. If you are also hitting it during fine-tuning, you may be running the pip-installed package; please install from src instead.
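The error above is a version guard: Optimum refuses to patch models whose transformers build already ships native torch.nn.functional.scaled_dot_product_attention for llama. As a minimal sketch (the helper name supports_native_sdpa is hypothetical, not part of either library), the check amounts to comparing version numbers:

```python
def supports_native_sdpa(transformers_version: str, torch_version: str) -> bool:
    """True when native SDPA makes BetterTransformer.transform(model) redundant."""
    def parse(v: str) -> tuple:
        # Keep only the numeric release segment, e.g. "2.1.1+cu118" -> (2, 1, 1).
        return tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())

    return parse(transformers_version) >= (4, 36) and parse(torch_version) >= (2, 1, 1)

# Legacy call that now raises the ValueError quoted above:
#   from optimum.bettertransformer import BetterTransformer
#   model = BetterTransformer.transform(model)
# With transformers>=4.36 and torch>=2.1.1, just load the model directly;
# SDPA is used natively without any Optimum transform.
```

So upgrading the libraries alone is not enough: the legacy BetterTransformer.transform call must also be removed from the calling code, which is what the linked PR does.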
I am running a script that is a small modification of the example code in the docs here.
Within the stack trace is:
File "/usr/local/lib/python3.10/dist-packages/llama_recipes/finetuning.py", line 109, in main
model = BetterTransformer.transform(model)
which seems unexpected since this is not the code I see in the source.