
use total_L is None for dense_to_jagged #2599

Closed
wants to merge 1 commit

Conversation

ColinPeppler
Contributor

@ColinPeppler commented May 16, 2024

Summary:

# Could not guard on data-dependent expression Ne(u0, 0)

```
RuntimeError: Failed running call_function fbgemm.dense_to_jagged(*(FakeTensor(..., device='cuda:0', size=(s1, 2540, 512), dtype=torch.bfloat16), [FakeTensor(..., device='cuda:0', size=(s1 + 1,), dtype=torch.int64)]), **{'total_L': None}):
Could not guard on data-dependent expression Ne(u0, 0) (unhinted: Ne(u0, 0)).  (Size-like symbols: u0)
ATTENTION: guard_size_oblivious would fix the error, evaluating expression to True.
Maybe you need to add guard_size_oblivious to framework code, see doc below for more guidance.
Potential framework code culprit (scroll up for full backtrace):
  File "/mnt/xarfuse/uid-119376/a7f0c177-seed-nspid4026531836_cgpid3083025-ns-4026531841/fbgemm_gpu/sparse_ops.py", line 467, in dense_to_jagged_forward
    if not total_L:


User Stack (most recent call last):
  (snipped, see stack below for prefix)
  File "<eval_with_key>.82", line 424, in forward
    dense_to_jagged = torch.ops.fbgemm.dense_to_jagged(matmul_22, [asynchronous_complete_cumsum_9], total_L = None);  matmul_22 = None
```

Using `total_L is None` instead avoids this guard.
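
For context, a minimal sketch of the shape of the change in `dense_to_jagged_forward` (the conditional is the one named in the traceback; `compute_total_L` is a hypothetical placeholder, not the actual FBGEMM fallback):

```python
# Sketch only: `compute_total_L` stands in for whatever fallback
# dense_to_jagged_forward uses when total_L is not provided.

# Before: Python truthiness on total_L. If total_L is an unbacked
# SymInt (u0), `not total_L` has to decide whether u0 == 0, which
# emits the data-dependent guard Ne(u0, 0) shown above.
if not total_L:
    total_L = compute_total_L(offsets)

# After: an identity check against None never inspects the symbolic
# value, so no guard on u0 is needed.
if total_L is None:
    total_L = compute_total_L(offsets)
```

The error message also points at `torch.fx.experimental.symbolic_shapes.guard_size_oblivious` as a framework-side workaround, but the explicit `None` check sidesteps the guard entirely.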

Differential Revision: D57465638

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D57465638


netlify bot commented May 16, 2024

Deploy Preview for pytorch-fbgemm-docs ready!

| Name | Link |
| --- | --- |
| 🔨 Latest commit | f1aa0ad |
| 🔍 Latest deploy log | https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/66562c31924e200008203258 |
| 😎 Deploy Preview | https://deploy-preview-2599--pytorch-fbgemm-docs.netlify.app |

ColinPeppler added a commit to ColinPeppler/FBGEMM that referenced this pull request May 16, 2024

ColinPeppler added a commit to ColinPeppler/FBGEMM that referenced this pull request May 16, 2024

ColinPeppler added a commit to ColinPeppler/FBGEMM that referenced this pull request May 28, 2024

ColinPeppler added a commit to ColinPeppler/FBGEMM that referenced this pull request May 28, 2024

ColinPeppler added a commit to ColinPeppler/FBGEMM that referenced this pull request May 28, 2024



@frank-wei left a comment


LGTM!

@facebook-github-bot
Contributor

This pull request has been merged in b633904.
