No module named 'transformers.models.qwen_parallel.utils_qwen' #206

Open
ifromeast opened this issue Apr 18, 2024 · 4 comments

@ifromeast

When running

```bash
python3 tools/convert_mp.py \
    --input_path meta-llama/Llama-2-7b-hf \
    --source_mp_size 1 \
    --target_mp_size 4 \
    --model_type llama2 # choose from opt and llama
```

I get the following error:

```text
Traceback (most recent call last):
  File "/home/LMOps/minillm/tools/convert_mp.py", line 6, in <module>
    from transformers import (
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "/home/LMOps/minillm/transformers/src/transformers/utils/import_utils.py", line 1344, in __getattr__
    value = getattr(module, name)
  File "/home/LMOps/minillm/transformers/src/transformers/utils/import_utils.py", line 1343, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/home/LMOps/minillm/transformers/src/transformers/utils/import_utils.py", line 1355, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.qwen_parallel.utils_qwen because of the following error (look up to see its traceback):
No module named 'transformers.models.qwen_parallel.utils_qwen'
```
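
Note that convert_mp.py pulls in every per-model helper through a single `from transformers import (...)` statement (line 6 above), so the missing qwen module aborts the run even though `--model_type llama2` never touches it. A minimal sketch of a defensive import, assuming the qwen helper names used elsewhere in the script:

```python
# Sketch of a defensive import (helper names are an assumption based on
# convert_mp.py): skip the optional qwen helpers when the vendored fork
# is missing transformers.models.qwen_parallel.utils_qwen.
try:
    from transformers import decrease_mp_qwen, increase_mp_qwen
except (ImportError, RuntimeError):  # the lazy importer raises RuntimeError
    decrease_mp_qwen = increase_mp_qwen = None
```
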
@ifromeast (Author)

There is no utils_qwen.py in minillm/transformers/src/transformers/models/qwen_parallel.
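
A quick way to confirm this is to list which `*_parallel` packages in the vendored fork actually ship a utils module. A minimal sketch, assuming the repo layout shown in the traceback (run from the minillm directory):

```python
# Sketch: report whether each *_parallel model package ships a utils_*.py.
# The path below assumes the layout from the traceback above.
from pathlib import Path

models = Path("transformers/src/transformers/models")
for pkg in sorted(models.glob("*_parallel")):
    utils = [p.name for p in pkg.glob("utils_*.py")]
    print(f"{pkg.name}: {utils or 'no utils_* module'}")
```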

@haiduo commented Apr 18, 2024

You can comment out lines 11 and 20 of tools/convert_mp.py, i.e. the qwen import and the "qwen" entry in func_map:

```python
    decrease_mp_qwen, increase_mp_qwen,  # line 11: comment this out
)
func_map = {
    "opt": (decrease_mp_opt, increase_mp_opt),
    "gptj": (decrease_mp_gptj, increase_mp_gptj),
    "llama": (decrease_mp_llama, increase_mp_llama),
    "llama2": (decrease_mp_llama, increase_mp_llama),
    "mistral": (decrease_mp_mistral, increase_mp_mistral),
    "qwen": (decrease_mp_qwen, increase_mp_qwen),  # line 20: comment this out
}
```
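
For concreteness, a sketch of what the top of tools/convert_mp.py would look like after that edit; the import list is reconstructed from the func_map above, so the exact neighboring lines may differ:

```python
from transformers import (
    decrease_mp_opt, increase_mp_opt,
    decrease_mp_gptj, increase_mp_gptj,
    decrease_mp_llama, increase_mp_llama,
    decrease_mp_mistral, increase_mp_mistral,
    # decrease_mp_qwen, increase_mp_qwen,  # line 11, commented out
)

func_map = {
    "opt": (decrease_mp_opt, increase_mp_opt),
    "gptj": (decrease_mp_gptj, increase_mp_gptj),
    "llama": (decrease_mp_llama, increase_mp_llama),
    "llama2": (decrease_mp_llama, increase_mp_llama),
    "mistral": (decrease_mp_mistral, increase_mp_mistral),
    # "qwen": (decrease_mp_qwen, increase_mp_qwen),  # line 20, commented out
}
```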

@ifromeast (Author)

> You can comment out lines 11 and 20 of tools/convert_mp.py, i.e. the qwen import and the "qwen" entry in func_map: ...

Well, if I want to use a Qwen model, where can I find this file?

@haiduo commented Apr 18, 2024

> Well, if I want to use a Qwen model, where can I find this file?

I don't know either; I'm not the original author.
