Update _dedup_save_plans.py #126569
base: main
Conversation
To resolve pytorch#125740, save each tensor on the lowest rank.
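To make the approach concrete, here is a minimal sketch of lowest-rank deduplication. This is an illustration of the idea only, not the actual torch.distributed.checkpoint implementation; the helper name and data shapes are invented:

```python
from collections import defaultdict

def dedup_to_lowest_rank(plans):
    """plans: one list of item keys per rank, indexed by rank."""
    # Map each item key to the set of ranks whose plan contains it.
    key_to_ranks = defaultdict(set)
    for rank, plan in enumerate(plans):
        for key in plan:
            key_to_ranks[key].add(rank)

    # Keep each duplicated item only on the lowest rank that has it.
    keep = {key: min(ranks) for key, ranks in key_to_ranks.items()}
    return [
        [key for key in plan if keep[key] == rank]
        for rank, plan in enumerate(plans)
    ]

# A tensor "w" replicated on all four ranks is saved only by rank 0.
print(dedup_to_lowest_rank([["w", "a0"], ["w", "a1"], ["w"], ["w"]]))
# -> [['w', 'a0'], ['a1'], [], []]
```

With this scheme, all replicas of a tensor land in the same (lowest-rank) file, which is the property the rest of the thread debates.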
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/126569. Note: links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
⏳ 1 Pending, 1 Unrelated Failure: as of commit c102e7f with merge base 64c581a: UNSTABLE - the following job failed but was likely due to flakiness present on trunk and has been marked as unstable.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
At minimum, we would need this to be optional, and it should not replace the current deduplication logic. The current logic is a storage optimization, and this change would cause a regression when calling dcp.save on replica-heavy models.
I think in the originally linked issue we considered not applying this logic to scalars, as a fix for minimizing the number of files that need to be loaded.
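For context, here is a minimal sketch of the size-balancing selection the current dedup performs, reconstructed from the code excerpt quoted later in this thread; the names and numbers are illustrative:

```python
plan_to_size = {0: 0, 1: 0, 2: 0}  # bytes assigned to each rank's plan so far

def select_balanced(plan_indices, item_size):
    # Pick the plan that currently stores the least data...
    idx = min(plan_indices, key=lambda plan_idx: plan_to_size[plan_idx])
    # ...and charge the item's size to it (non-tensors count as 1 byte).
    plan_to_size[idx] += item_size or 1
    return idx

print(select_balanced([0, 1, 2], 100))  # 0
print(select_balanced([0, 1, 2], 100))  # 1 (rank 0 is now "heavier")
```

This balances save-time storage across ranks, but it also means a duplicated tensor can end up in any rank's file, which is the loading concern raised below.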
it's "optimization" only in terms of saving balance. But it hurts the loading performance in multi-node case.
i don't think that works. The root cause is the duplicated tensors are saved in different files, no matter if it's scalar tensor or not. |
@LucasLLC, I replied to your comment. Could you please take a look?
@bigning, I believe this generally isn't seen as a large issue during loading, since the files are all expected to live in the same NFS directory. Additionally, I think we would prioritize saving latency over loading, at least in this case, since users typically save much more often. Could we change this PR to make the de-duplication optional (and true by default)?
```python
# essentially ignores the storage size of anything that is not a tensor, since
# we don't know how much storage they represent
plan_to_size[select_plan_idx] += write_item.tensor_storage_size() or 1
select_plan_idx = min(plan_indices, key=lambda plan_idx: plan_idx)
```
Suggested change:
```diff
-select_plan_idx = min(plan_indices, key=lambda plan_idx: plan_idx)
+select_plan_idx = min(plan_indices)
```
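(The dropped `key=lambda plan_idx: plan_idx` is the identity function, so the suggested `min(plan_indices)` is behavior-preserving: both pick the lowest plan index, i.e. the lowest rank.)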
If we skip the dedup, it fails here https://github.com/pytorch/pytorch/blob/main/torch/distributed/checkpoint/default_planner.py#L386-L387.
For NFS, the issue is not about whether the file exists on NFS; it's that each node needs to download Nx more files. If cloud storage is used, this introduces additional download traffic and latency.
Can I just add a flag to make this behavior configurable?
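A rough back-of-the-envelope illustration of the "Nx more files" point above; the node and rank counts are hypothetical, and the worst-case assumption (fully replicated tensors scattered across all files) is an editorial illustration, not a figure from the thread:

```python
# Hypothetical numbers illustrating the download cost at load time.
nodes = 4
ranks_per_node = 8
world_size = nodes * ranks_per_node  # 32 checkpoint files, one per rank

# Balanced dedup: a fully replicated tensor may be assigned to any rank's
# file, so in the worst case every node must fetch all files.
files_per_node_balanced = world_size  # 32

# Lowest-rank dedup: every replicated tensor lives in rank 0's file, so a
# node fetches its own ranks' files plus (at most) rank 0's.
files_per_node_lowest = ranks_per_node + 1  # 9

print(files_per_node_balanced / files_per_node_lowest)  # ~3.6x fewer downloads
```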
@bigning I understand the pain points of having to download multiple files. I think we would accept a PR which makes this behavior configurable.
@LucasLLC, this makes sense. I added a param to make the deduplication behavior configurable.
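For readers following along, a minimal usage sketch of what such a configurable planner flag could look like. The parameter name `dedup_save_to_lowest_rank` and its placement on `DefaultSavePlanner` are assumptions based on this thread, not confirmed API:

```python
# Sketch only: assumes a process group is already initialized and that the
# planner exposes a flag like the one discussed above.
import torch
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.default_planner import DefaultSavePlanner

state_dict = {"w": torch.ones(4)}  # toy state dict for illustration

dcp.save(
    state_dict,
    checkpoint_id="checkpoint/",
    # Assumed flag name: opt in to saving each duplicated tensor on the
    # lowest rank so all replicas land in a single file.
    planner=DefaultSavePlanner(dedup_save_to_lowest_rank=True),
)
```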
Thanks @bigning! This looks good to me. Will merge if tests pass.
Thanks @LucasLLC, it seems two lint jobs failed. I can't find a useful error message; do you know how to fix them, or how to re-run?
Never mind, I just submitted another commit. Looks like all tests are green now.
@LucasLLC, can you help merge?
@pytorchbot merge
Pull workflow has not been scheduled for the PR yet. It could be because author doesn't have permissions to run those or skip-checks keywords were added to PR/commits, aborting merge. Please get/give approval for the workflows and/or remove skip ci decorators before next merge attempt. If you think this is a mistake, please contact PyTorch Dev Infra.
@pytorchbot merge
Pull workflow has not been scheduled for the PR yet. It could be because author doesn't have permissions to run those or skip-checks keywords were added to PR/commits, aborting merge. Please get/give approval for the workflows and/or remove skip ci decorators before next merge attempt. If you think this is a mistake, please contact PyTorch Dev Infra.
@pytorchbot merge
Merge started: your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed; the first few of them are: trunk / macos-13-py3-arm64 / build. Details for Dev Infra team: raised by workflow job.
Fixes #125740
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225 @chauhang @d4l3k @LucasLLC