Torchbench models that don't run in dynamo runners #1901
My dedup script has some mistakes but is mostly right; just keeping this here for now so I can preprocess.
pytorchmergebot pushed a commit to pytorch/pytorch that referenced this issue (Sep 19, 2023): "Helps debug pytorch/benchmark#1901. I will wait until the ONNX beartype sev is fixed before merging. Pull Request resolved: #109536. Approved by: https://github.com/xuzhao9"
There are small nuances in how the dynamo runners benchmark models that can make certain torchbench models fail. Some models might be explicitly skipped; others might fail because of a dtype conversion. This can be frustrating because if you add a model to torchbench, like clip or cm3leon, you won't see it on the pt2 dashboard. I'm creating this tracker issue to solve this.
To repro: look at the logs in HUD, e.g. https://ossci-raw-job-status.s3.amazonaws.com/log/16535270177, and compare the model names there to the directories in `models/` and `canary_models/`. If a model is unique to torchbench, that means it's not showing up on the pt2 dashboard.
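The comparison above can be sketched as a small set difference. This is a minimal sketch, not the actual dedup script mentioned earlier; `find_untested_models` is a hypothetical name, and it assumes the model names from the HUD log have already been extracted into a set:

```python
from pathlib import Path


def find_untested_models(benchmark_repo: str, models_in_log: set[str]) -> set[str]:
    """Return torchbench models that never appear in the dynamo runner log.

    Each subdirectory of models/ and canary_models/ is treated as one model
    name, matching torchbench's directory-per-model layout.
    """
    repo = Path(benchmark_repo)
    on_disk: set[str] = set()
    for subdir in ("models", "canary_models"):
        d = repo / subdir
        if d.is_dir():
            on_disk |= {p.name for p in d.iterdir() if p.is_dir()}
    # Models present on disk but absent from the log are the ones
    # missing from the pt2 dashboard.
    return on_disk - models_in_log
```

Anything this returns is a candidate for the tracker: a model that exists in the benchmark suite but never ran in the dynamo runners.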
There are some concrete things we could do better in the dynamo runners, starting with erroring loudly, but we should also track what these failures are. Notably, I found this problem while investigating stable diffusion and cm3leon.