use total_L is None for dense_to_jagged #2599
Conversation
This pull request was exported from Phabricator. Differential Revision: D57465638
Summary:

# Could not guard on data-dependent expression Ne(u0, 0)

```
RuntimeError: Failed running call_function fbgemm.dense_to_jagged(*(FakeTensor(..., device='cuda:0', size=(s1, 2540, 512), dtype=torch.bfloat16), [FakeTensor(..., device='cuda:0', size=(s1 + 1,), dtype=torch.int64)]), **{'total_L': None}): Could not guard on data-dependent expression Ne(u0, 0) (unhinted: Ne(u0, 0)). (Size-like symbols: u0)

ATTENTION: guard_size_oblivious would fix the error, evaluating expression to True. Maybe you need to add guard_size_oblivious to framework code, see doc below for more guidance.

Potential framework code culprit (scroll up for full backtrace):
  File "/mnt/xarfuse/uid-119376/a7f0c177-seed-nspid4026531836_cgpid3083025-ns-4026531841/fbgemm_gpu/sparse_ops.py", line 467, in dense_to_jagged_forward
    if not total_L:

User Stack (most recent call last): (snipped, see stack below for prefix)
  File "<eval_with_key>.82", line 424, in forward
    dense_to_jagged = torch.ops.fbgemm.dense_to_jagged(matmul_22, [asynchronous_complete_cumsum_9], total_L = None); matmul_22 = None
```

Reviewed By: frank-wei

Differential Revision: D57465638
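The root cause is the difference between a truthiness check and an identity check. `not total_L` asks whether `total_L != 0`, which is a data-dependent question when `total_L` is a traced symbolic value (the `Ne(u0, 0)` guard in the error above), while `total_L is None` is a plain Python identity comparison that never inspects the value. A minimal sketch of the distinction in plain Python (the function names and the `"recompute"` sentinel are illustrative, not the actual FBGEMM code):

```python
def pick_total_L_truthy(total_L):
    # Old check: `not total_L` treats 0 the same as "unset", so the
    # runtime must decide whether total_L != 0 -- under symbolic
    # tracing that is the data-dependent guard Ne(u0, 0).
    return "recompute" if not total_L else total_L


def pick_total_L_is_none(total_L):
    # New check: only an explicit None triggers recomputation; the
    # value itself is never inspected, so no guard on it is needed.
    return "recompute" if total_L is None else total_L


# The two checks agree for None and for positive lengths...
assert pick_total_L_truthy(None) == "recompute"
assert pick_total_L_is_none(None) == "recompute"
assert pick_total_L_truthy(7) == 7
assert pick_total_L_is_none(7) == 7

# ...but diverge exactly when total_L == 0, which is the case the
# symbolic tracer could not resolve.
assert pick_total_L_truthy(0) == "recompute"
assert pick_total_L_is_none(0) == 0
```

Because `is None` only compares object identity, the symbolic tracer never has to evaluate `Ne(u0, 0)`, which is why the change removes the guard failure.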
Force-pushed 4154af8 to 93fc517
Force-pushed 93fc517 to 2711c98
Force-pushed 2711c98 to 9c742f1
Force-pushed 9c742f1 to b433d97
Force-pushed b433d97 to c9c1bd6
Force-pushed c9c1bd6 to f1aa0ad
This pull request was exported from Phabricator. Differential Revision: D57465638
LGTM!
This pull request has been merged in b633904.
Summary: Could not guard on data-dependent expression Ne(u0, 0); using `total_L is None` instead will avoid this guard.

Differential Revision: D57465638