The configuration for Llama-7b on 4 RTX4090 #269
Comments
Actor, critic, reward model (rm), and initial (ref) model nodes = 1, 1, 1, 1, with Adam offload + gradient checkpointing.
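Mapped onto the Ray launch command, that suggestion might look roughly like the sketch below. The entry-point module and several flag names (`--reward_num_gpus_per_node`, `--ref_num_gpus_per_node`, `--adam_offload`, `--gradient_checkpointing`) are assumptions pieced together from the names mentioned in this thread; check them against your OpenRLHF version and the original train_ppo_llama_ray.sh.

```bash
# Hypothetical sketch: one role (actor / critic / reward model / reference model)
# per GPU on a single 4x RTX 4090 machine, plus the memory-saving options above.
# Flag names and the entry-point module are assumptions, not verified against the repo.
ray job submit --address="http://127.0.0.1:8265" \
  --runtime-env-json='{"working_dir": "."}' \
  -- python3 -m openrlhf.cli.train_ppo_ray \
     --actor_num_nodes 1 --actor_num_gpus_per_node 1 \
     --critic_num_nodes 1 --critic_num_gpus_per_node 1 \
     --reward_num_nodes 1 --reward_num_gpus_per_node 1 \
     --ref_num_nodes 1 --ref_num_gpus_per_node 1 \
     --adam_offload \
     --gradient_checkpointing
     # ...plus the model/dataset arguments already present in train_ppo_llama_ray.sh
```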
@hijkzzz Thank you for replying! But I ran into a problem; do you know how to solve it?
Could you share more detailed logs, your running environment, and the launch command?
I succeeded with the following configuration: `ray job submit --address="http://127.0.0.1:8265" ...`
@LinkyLiu A Ray actor has died unexpectedly; please check the Ray log in ...
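When a Ray actor dies like this, the per-node Ray logs are usually the place to look. A minimal sketch, assuming the default Ray log directory (`/tmp/ray/session_latest/logs`):

```bash
# Inspect the Ray worker/raylet logs on the node where the actor died.
# /tmp/ray/session_latest/logs is Ray's default log location; adjust it if you
# started Ray with a custom --temp-dir.
ls /tmp/ray/session_latest/logs/
grep -ri "error\|died\|OOM" /tmp/ray/session_latest/logs/ | tail -n 50
```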
Hello, I want to run train_ppo_llama_ray.sh on 4 RTX 4090s. Should I modify actor_num_gpus_per_node / critic_num_gpus_per_node in train_ppo_llama_ray.sh? Since the default script is written for 8 GPUs, what else should I pay attention to or modify?