
Add support for an fp16 model to save memory #11

Open · junne14105 opened this issue May 3, 2024 · 6 comments

@junne14105

https://huggingface.co/IDM-VTON-F16
The previous model's ~12 GB memory requirement could be reduced to a ~5 GB model. That would also let more people run and use it. What do you think?

@TemryL (Owner) commented May 3, 2024

Hey! The link seems to be broken...
The current implementation already loads the model in torch.float16; I will look into how to load it in 8 bits.
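
For reference, loading a diffusers pipeline in half precision looks roughly like the sketch below. This is illustrative only, not the extension's actual code: the repo id matches the upstream project cited later in this thread, and the real node may use a custom pipeline class rather than the generic loader.

```python
import torch
from diffusers import DiffusionPipeline

# torch_dtype=torch.float16 casts the weights as they are loaded,
# roughly halving memory versus float32.
pipe = DiffusionPipeline.from_pretrained(
    "yisol/IDM-VTON",          # assumption: upstream HF repo referenced later in this thread
    torch_dtype=torch.float16,
)
pipe.to("cuda")  # or "mps" on Apple Silicon
```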

@junne14105 (Author)

https://huggingface.co/camenduru/IDM-VTON-F16 I tried this before and it wasn't supported. That's why I'm asking.

@junne14105 (Author)

[screenshot: iShot_2024-05-04_12 11 50]

@TemryL (Owner) commented May 4, 2024

Are you running on the MPS accelerator?

@junne14105 (Author)

Intel x86, macOS 14.5 Beta (23F5074a). I'd like to know how to add and use fp8 models, and what results you get on your side! As for fp16, I give up.
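
For context, an "fp8 model" in this sense usually means weight-only fp8 storage: weights live in one-byte fp8 tensors and are upcast before each matmul, since most hardware cannot compute in fp8 directly. A minimal sketch, assuming PyTorch >= 2.1 (which added torch.float8_e4m3fn); all names here are illustrative, and none of this is the extension's code:

```python
import torch
import torch.nn.functional as F

layer = torch.nn.Linear(4096, 4096)

# Store the weight in fp8: one byte per element, ~4x smaller than fp32.
w_fp8 = layer.weight.detach().to(torch.float8_e4m3fn)
bias = layer.bias.detach()

def fp8_linear(x: torch.Tensor) -> torch.Tensor:
    # Upcast to the activation dtype just before the matmul; the savings
    # come from storage, not from computing in fp8.
    return F.linear(x, w_fp8.to(x.dtype), bias.to(x.dtype))

y = fp8_linear(torch.randn(1, 4096))
```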

@deepfree2023

This VRAM usage info might be useful:
[VRAM usage screenshot]

From: yisol/IDM-VTON#47
