Sorry, I haven't tried llama-chat yet, but you may find our new training environment https://github.com/ctlllll/axolotl helpful. You can refer to the configs there and start training with a command like accelerate launch -m axolotl.cli.train examples/medusa/your_config.yml.
python gen_model_answer_baseline.py --model-path /data/transformers/vicuna-7b-v1.3 --model-id vicuna-7b-v1.3-0
python gen_model_answer_medusa.py --model-path /data/transformers/medusa_vicuna-7b-v1.3 --model-id medusa-vicuna-7b-v1.3-0
My vicuna-7b-v1.3 download comes from: https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3/tree/main
My medusa-vicuna-7b-v1.3 download comes from: https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3/tree/main
I ran the commands above with the local models, and an error was reported. How can I fix it?