Loading in 4-bit alone is not enough for Cog with 12 GB VRAM #86
-
Hi Peeps,
Now I always get OOM whether 4-bit is checked or not (using one beam). Does anyone have tips on how to optimize loading so that I can run the Cog models again? @jhc13 Thanks
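Some back-of-envelope arithmetic helps explain why 4-bit alone can still OOM on a 12 GB card. The sketch below assumes a ~17B-parameter model (the usual size quoted for CogVLM-17B; treat the exact count as an assumption) and estimates weight memory only, ignoring activations, the vision encoder's image tokens, and the KV cache, which all add on top:

```python
def weight_vram_gib(n_params: float, bits_per_param: float) -> float:
    """Approximate memory needed for the model weights alone, in GiB."""
    return n_params * bits_per_param / 8 / 2**30

# Assumed parameter count for a CogVLM-sized model (illustrative).
n_params = 17e9

for bits, label in [(16, "fp16"), (8, "int8"), (4, "4-bit")]:
    print(f"{label}: ~{weight_vram_gib(n_params, bits):.1f} GiB")
```

At 4-bit the weights come to roughly 8 GiB, so they fit in 12 GB, but only a few GiB remain for activations and generation state; that slim margin is why extra beams or other overhead can still push the load over the edge.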
-
What change are you talking about exactly? The Cog models always required some monkey-patching.
How did you provide these arguments? Did you edit TagGUI's source code?
Try editing line 160 of captioning_thread.py.
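As a hedged sketch of what such an edit usually involves (the actual contents of line 160 of captioning_thread.py may differ between TagGUI versions): the keyword arguments below follow the Hugging Face transformers `from_pretrained` API and are illustrative, not the project's real code.

```python
# Hypothetical sketch -- NOT the actual TagGUI source. The argument
# names follow the Hugging Face transformers from_pretrained API;
# the real loading call in captioning_thread.py may look different.
load_kwargs = {
    "load_in_4bit": True,       # quantize weights to 4-bit via bitsandbytes
    "low_cpu_mem_usage": True,  # avoid materializing a full-precision CPU copy
    "trust_remote_code": True,  # Cog models ship custom modeling code
}

# These would then be unpacked into the loading call, e.g.:
# model = AutoModelForCausalLM.from_pretrained(model_id, **load_kwargs)
print(sorted(load_kwargs))
```

Tweaking the arguments at this point in the code is what determines whether the 4-bit quantized path is actually taken when the model loads.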