Fixed by lstein in commits c775b59 and 532f82c (closes #6375).
Is there an existing issue for this problem?
Operating system: Linux
GPU vendor: Nvidia (CUDA)
GPU model: No response
GPU VRAM: 24
Version number: 4.2.1
Browser: FF
Python dependencies: No response
What happened
When the RAM cache is set too low (presumably to a value too small to hold a LoRA), the LoRA may not be applied correctly.
The following examples use the same prompt and seed; only the RAM cache and LoRA settings are changed.
Base test case metadata:

{
  "generation_mode": "sdxl_txt2img",
  "positive_prompt": "super cute tiger cub alienzkin",
  "negative_prompt": "",
  "width": 1024,
  "height": 1024,
  "seed": 3944440447,
  "rand_device": "cpu",
  "cfg_scale": 5.5,
  "cfg_rescale_multiplier": 0,
  "steps": 50,
  "scheduler": "dpmpp_2m_sde_k",
  "model": {
    "key": "e790edfe-1614-48fa-9802-58b83c0159b7",
    "hash": "random:6cab136d48faf77462cab64f1f810971f1a6b925c94f1c6890bf8ae748936177",
    "name": "Juggernaut-XL-v9",
    "base": "sdxl",
    "type": "main"
  },
  "loras": [
    {
      "model": {
        "key": "3c106f7a-cdbc-4445-b11c-3915ac5886ee",
        "hash": "random:bbebed7dd0694eeb822c9b7fac85d01f49738cf59f0a1c2b7116a38f8409257f",
        "name": "alienzkin-sdxl",
        "base": "sdxl",
        "type": "lora"
      },
      "weight": 0.75
    }
  ],
  "positive_style_prompt": "super cute tiger cub alienzkin",
  "negative_style_prompt": "",
  "control_layers": {"layers": [], "version": 2},
  "app_version": "4.2.1"
}
[Image: LoRA disabled (any ram setting)]
[Image: ram: 7.5, LoRA enabled]
[Image: ram: 0.25, LoRA enabled]
What you expected to happen
LoRAs still work when the RAM cache setting is low.
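The behavior described above is consistent with a byte-budgeted model cache silently refusing or evicting an entry that exceeds its limit. The sketch below illustrates the general failure mode only; the `RamCache` class, its `put`/`get` methods, and the 0.4 GB LoRA size are hypothetical and are not InvokeAI's actual cache implementation.

```python
# Minimal sketch of a byte-budgeted LRU cache that silently drops an
# entry larger than its budget. Hypothetical names; not InvokeAI code.
from collections import OrderedDict

GB = 2**30

class RamCache:
    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self._entries: "OrderedDict[str, int]" = OrderedDict()  # key -> size in bytes

    def put(self, key: str, size: int) -> bool:
        """Store an entry, evicting least-recently-used entries to make room.
        Returns False if the entry alone exceeds the budget."""
        if size > self.max_bytes:
            # Silently refused: a caller that ignores the return value
            # proceeds as if the model were cached, and the LoRA is skipped.
            return False
        while sum(self._entries.values()) + size > self.max_bytes:
            self._entries.popitem(last=False)  # evict least-recently-used
        self._entries[key] = size
        return True

    def get(self, key: str) -> bool:
        """Return True if the entry is cached; mark it most-recently-used."""
        if key in self._entries:
            self._entries.move_to_end(key)
            return True
        return False

# A ~0.4 GB LoRA fits in a 7.5 GB cache but not in a 0.25 GB one.
big = RamCache(int(7.5 * GB))
small = RamCache(int(0.25 * GB))
lora_size = int(0.4 * GB)
print(big.put("alienzkin-sdxl", lora_size))    # True  -> LoRA applied
print(small.put("alienzkin-sdxl", lora_size))  # False -> LoRA silently skipped
```

If the cache reports the rejection to its caller (or raises), the LoRA can still be loaded from disk on demand instead of being dropped, which matches the expected behavior stated above.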
How to reproduce the problem: No response
Additional context: No response
Discord username: No response