Fix latents.dtype before vae.decode() at ROCm devices in StableDiffusionPipelines #7886

Open · tolgacangoz wants to merge 2 commits into main

Conversation

@tolgacangoz (Contributor) commented May 8, 2024

Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>

What does this PR do?

This was discussed in a previous PR: #7858.
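For context, here is a minimal sketch of the kind of change this PR makes (not the exact diff; it assumes the usual `self.vae` and `scaling_factor` attributes of the diffusers pipelines): the latents are cast back to the VAE's weight dtype just before decoding, in case an earlier step promoted them to float32.

```python
# Sketch only: make sure the latents match the VAE's weight dtype before
# decoding, since a scheduler step may have silently upcast them to float32.
if latents.dtype != self.vae.dtype:
    latents = latents.to(self.vae.dtype)
image = self.vae.decode(
    latents / self.vae.config.scaling_factor, return_dict=False
)[0]
```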

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@yiyixuxu @bghira

@tolgacangoz tolgacangoz changed the title Fix latents.dtype before vae.decode() at ROCm Fix latents.dtype before vae.decode() at ROCm in StableDiffusionPipelines May 8, 2024
@tolgacangoz tolgacangoz changed the title Fix latents.dtype before vae.decode() at ROCm in StableDiffusionPipelines Fix latents.dtype before vae.decode() at ROCm devices in StableDiffusionPipelines May 8, 2024
@bghira (Contributor) commented May 8, 2024

So I can actually hit this one on CUDA, but it only happens during training without autocast 🤔

The weights are in bf16 precision, not fp32; maybe that is what causes it?
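A hypothetical repro of the promotion described above (the tensor names and the update rule are illustrative, not the scheduler's actual code): mixing a bf16 latent with a float32 intermediate silently upcasts it, so the tensor reaching `vae.decode()` no longer matches bf16 VAE weights.

```python
import torch

# bf16 latents, as a bf16 UNet would produce them without autocast
latents = torch.randn(1, 4, 64, 64, dtype=torch.bfloat16)

# a scheduler-style update whose noise term defaults to float32
noise = torch.randn(1, 4, 64, 64)  # float32 by default
latents = latents + 0.5 * noise    # type promotion: result is float32

print(latents.dtype)  # torch.float32 -> mismatches a bf16 vae.decode()
```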

@tolgacangoz (Contributor, Author) commented May 31, 2024

Since I don't have a ROCm device, I can't work on this PR directly :/ What should we do here?
Doesn't running inference with bf16 produce a casting error?

@bghira (Contributor) commented May 31, 2024

Inference is when the error occurs, but generally at training time, since the components can be initialised with different weight dtypes. It's not entirely clear why the scheduler can change the dtype, other than that certain calculations fall back to torch's default dtype of float32 when none is specified; autocast normally takes care of this. But we can't rely on autocast existing or being in use, because not all platforms support it, and future training runs might not use it at all, relying instead on bf16 optimiser states.
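To illustrate why autocast masks the mismatch (a sketch assuming a CUDA device, a bf16 `vae`, and float32 `latents_fp32`; both names are hypothetical): inside autocast, eligible ops cast their inputs on the fly, while outside it the float32 latents hit bf16 convolution weights and fail unless cast explicitly.

```python
import torch

# inside autocast: conv/linear ops in vae.decode() run in bf16,
# casting the float32 input on the fly
with torch.autocast("cuda", dtype=torch.bfloat16):
    image = vae.decode(latents_fp32).sample  # works

# outside autocast: float32 input vs. bf16 weights raises a dtype error,
# unless the latents are cast explicitly first (what this PR does)
image = vae.decode(latents_fp32.to(vae.dtype)).sample
```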
