
Machine can't install Flash Attention #180

Open
huilong-chen opened this issue Feb 27, 2024 · 0 comments

huilong-chen commented Feb 27, 2024

[screenshot attached]
My CUDA version is 11.2, so I can't install Flash Attention on my machine. I tried setting use_flash_attn to False when running fine-tune.py, but I still get the following error:
Traceback (most recent call last):
  File "/mnt1/dataln1/xxx/repo/LongLoRA/fine-tune.py", line 26, in <module>
    from llama_attn_replace import replace_llama_attn
  File "/mnt1/dataln1/xxx/repo/LongLoRA/llama_attn_replace.py", line 10, in <module>
    from flash_attn import __version__ as flash_attn_version
ModuleNotFoundError: No module named 'flash_attn'
Looking forward to your answer :(
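
For context, the error occurs because llama_attn_replace.py imports flash_attn at module level (line 10 in the traceback), so the import fails before the use_flash_attn flag is ever consulted. Below is a minimal sketch of a guarded import that tolerates a missing package; the simplified replace_llama_attn stub and its parameters are illustrative assumptions, not the repository's actual implementation.

# Sketch for llama_attn_replace.py: make the flash_attn import optional
# so the module can still be imported on machines without Flash Attention.
try:
    from flash_attn import __version__ as flash_attn_version
    HAS_FLASH_ATTN = True
except ImportError:
    flash_attn_version = None
    HAS_FLASH_ATTN = False

def replace_llama_attn(use_flash_attn=True):
    # Illustrative stub: only require flash_attn when it is actually requested.
    if use_flash_attn and not HAS_FLASH_ATTN:
        raise ImportError(
            "flash_attn is not installed; rerun fine-tune.py with use_flash_attn=False"
        )
    # ... the repository's real attention-patching logic would go here ...

With a guard like this, passing use_flash_attn=False to fine-tune.py would no longer trip the ModuleNotFoundError at import time.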
