
OSError #69

Open · qspang opened this issue Jan 17, 2024 · 3 comments

Comments


qspang commented Jan 17, 2024

python gen_model_answer_baseline.py --model-path /data/transformers/vicuna-7b-v1.3 --model-id vicuna-7b-v1.3-0
python gen_model_answer_medusa.py --model-path /data/transformers/medusa_vicuna-7b-v1.3 --model-id medusa-vicuna-7b-v1.3-0
My vicuna-7b-v1.3 download comes from: https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3/tree/main
My medusa-vicuna-7b-v1.3 download comes from: https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3/tree/main
I used these commands to run the local models, and then an error was reported. How can I fix it?
[Screenshot of the OSError message (file: 微信截图_20240117161746, a WeChat screenshot)]
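As a first check, a minimal sketch (an assumption, not a step from the thread: it presumes transformers is installed and reuses the path from the commands above) to confirm the local checkpoint loads with no network access at all:

```python
import os

# Must be set before importing transformers so nothing touches the Hub.
os.environ["HF_HUB_OFFLINE"] = "1"

from transformers import AutoConfig, AutoTokenizer

# local_files_only=True raises immediately if files are missing locally
# instead of falling back to a (possibly blocked) network download.
path = "/data/transformers/vicuna-7b-v1.3"
cfg = AutoConfig.from_pretrained(path, local_files_only=True)
tok = AutoTokenizer.from_pretrained(path, local_files_only=True)
print(cfg.model_type, type(tok).__name__)
```

If this raises an OSError, some files are missing from the local directory; if it passes but the scripts above still fail, the scripts themselves are reaching out to the Hub.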


ctlllll (Contributor) commented Jan 24, 2024

Thanks for your interest! It seems to be a network issue and may be due to the GFW. Could you please check if that's the case?
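If the Hub is indeed unreachable, two common workarounds (assumptions on my part, not steps confirmed in this thread) are routing huggingface_hub through a mirror endpoint, or forcing fully offline loading once all files are on disk:

```bash
# Option 1: route Hub traffic through a reachable mirror.
# (hf-mirror.com is a community-run mirror; substitute any endpoint you trust.)
export HF_ENDPOINT=https://hf-mirror.com

# Option 2: forbid network lookups entirely; this works only if every file
# the script needs is already present in the local model directories.
export HF_HUB_OFFLINE=1

python gen_model_answer_medusa.py --model-path /data/transformers/medusa_vicuna-7b-v1.3 --model-id medusa-vicuna-7b-v1.3-0
```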


qspang (Author) commented Jan 24, 2024

Thank you for your reply! I have fixed that problem. Can you take a look at the question I asked here: #45?


ctlllll (Contributor) commented Jan 25, 2024

Sorry, I haven't tried llama-chat yet, but you may find our new training environment https://github.com/ctlllll/axolotl helpful. You can refer to the configs there and start training with a command like `accelerate launch -m axolotl.cli.train examples/medusa/your_config.yml`.
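For reference, the end-to-end flow that comment describes might look like the following (the repo URL, config path, and launch command come from the comment itself; the install step is an assumption, so check the repo README for the exact setup):

```bash
git clone https://github.com/ctlllll/axolotl
cd axolotl
pip install -e .  # assumption: install axolotl from the checkout

# Adapt one of the configs under examples/medusa/ and launch training.
accelerate launch -m axolotl.cli.train examples/medusa/your_config.yml
```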
