fix constant tagging in mps backend #3503

Closed

Conversation

@cccclai (Contributor) commented May 3, 2024

Summary:
Tested with #3399; this command passes:

```
python -m examples.models.llama2.export_llama -kv --mps
```

Without this diff, it errors out with:

```
in _verify_exported_program_signature
    raise SpecViolationError(
torch._export.verifier.SpecViolationError: Buffer output getitem_1 does not point to a buffer that exists.
Dict of buffers that are mutated, in order: {'getitem_1': 'layers_0_attention_SDPA_kv_cache_k_cache', 'getitem': 'layers_0_attention_SDPA_kv_cache_v_cache', 'getitem_3': 'layers_1_attention_SDPA_kv_cache_k_cache', 'getitem_2': 'layers_1_attention_SDPA_kv_cache_v_cache', 'getitem_5': 'layers_2_attention_SDPA_kv_cache_k_cache', 'getitem_4': 'layers_2_attention_SDPA_kv_cache_v_cache', 'getitem_7': 'layers_3_attention_SDPA_kv_cache_k_cache', 'getitem_6': 'layers_3_attention_SDPA_kv_cache_v_cache', 'getitem_9': 'layers_4_attention_SDPA_kv_cache_k_cache', 'getitem_8': 'layers_4_attention_SDPA_kv_cache_v_cache'}
Buffer nodes available: []
```

The root cause is that the `is_parameter` check tags all data, including mutable buffers, as constants.

Differential Revision: D56941763
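
For context, a minimal sketch of the idea (not the actual diff; the helper name `is_constant_data` is hypothetical, and the graph-signature fields used are standard `torch.export` attributes): constant tagging should accept parameters and non-mutated buffers, but leave mutated buffers such as the KV caches alone.

```python
# Illustrative sketch only; is_constant_data is a hypothetical helper,
# not the function changed in this PR.
from torch.export import ExportedProgram
from torch.fx import Node


def is_constant_data(ep: ExportedProgram, node: Node) -> bool:
    """Tag only parameters and non-mutated buffers as constant data."""
    if node.op != "placeholder":
        return False
    sig = ep.graph_signature
    if node.name in sig.inputs_to_parameters:
        # Plain weights can safely be tagged as constants.
        return True
    if node.name in sig.inputs_to_buffers:
        buffer_fqn = sig.inputs_to_buffers[node.name]
        # Mutated buffers (e.g. the layers_*_kv_cache_* entries above) must
        # remain buffers; lifting them as constants removes the buffer nodes
        # that _verify_exported_program_signature expects, producing the
        # SpecViolationError shown above.
        return buffer_fqn not in sig.buffers_to_mutate.values()
    return False
```

The key point is the `buffers_to_mutate` check: an `is_parameter`-style test alone lumps mutable buffers in with constants, which is exactly what the verifier error complains about.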

@pytorch-bot (bot) commented May 3, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/3503

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 25eae44 with merge base a116d89:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label May 3, 2024
@facebook-github-bot (Contributor) commented:
This pull request was exported from Phabricator. Differential Revision: D56941763

@cccclai requested a review from DenisVieriu97 May 3, 2024 17:27
@cccclai added the module: mps label May 3, 2024
@DenisVieriu97 (Collaborator) left a comment:

Looks good. Thanks @cccclai

cccclai added a commit to cccclai/executorch-1 that referenced this pull request May 7, 2024

Summary: same as above.

Reviewed By: larryliu0820

Differential Revision: D56941763

@facebook-github-bot (Contributor) commented:
This pull request has been merged in 50e9ee9.

Labels: CLA Signed, fb-exported, Merged, module: mps

4 participants