
[bug]: When RAM cache is too low, LoRAs are not applied correctly #6375

Closed
1 task done
psychedelicious opened this issue May 15, 2024 · 0 comments
Labels: bug (Something isn't working)

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

No response

GPU VRAM

24 GB

Version number

4.2.1

Browser

Firefox

Python dependencies

No response

What happened

When the RAM cache is set too low (presumably to a value too small to hold a LoRA), the LoRA may not be applied correctly.
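A minimal sketch of the suspected failure mode (hypothetical — `SizeCappedCache` and the sizes below are illustrative, not InvokeAI's actual cache code): if LoRA weights pass through a size-capped RAM cache, an entry larger than the whole cache can never be retained, so the patch silently goes missing.

```python
# Hypothetical sketch: a size-capped model cache that cannot retain an
# entry larger than its total capacity. A too-small `ram` setting would
# then mean the LoRA weights are never available when they are needed.
from collections import OrderedDict


class SizeCappedCache:
    def __init__(self, max_size_gb: float):
        self.max_bytes = int(max_size_gb * 2**30)
        self._store = OrderedDict()  # key -> (size_bytes, value)
        self._used = 0

    def put(self, key, value, size_bytes) -> bool:
        if size_bytes > self.max_bytes:
            return False  # entry can never fit: silently dropped
        self._store[key] = (size_bytes, value)
        self._used += size_bytes
        while self._used > self.max_bytes:  # evict oldest entries
            _, (evicted_size, _) = self._store.popitem(last=False)
            self._used -= evicted_size
        return True

    def get(self, key):
        entry = self._store.get(key)
        return entry[1] if entry else None


# An SDXL LoRA of roughly 0.9 GB (illustrative size) fits in a 7.5 GB
# cache but can never be held by a 0.25 GB cache.
small = SizeCappedCache(0.25)
large = SizeCappedCache(7.5)
lora_bytes = int(0.9 * 2**30)
small.put("alienzkin-sdxl", "weights", lora_bytes)
large.put("alienzkin-sdxl", "weights", lora_bytes)
```

Under this model, the 0.25 GB run would generate without the LoRA patch while the 7.5 GB run applies it, matching the images below.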

The following examples have the same prompt and seed - only the RAM cache and LoRA settings are changed.

Base test case metadata:

{
    "generation_mode": "sdxl_txt2img",
    "positive_prompt": "super cute tiger cub alienzkin",
    "negative_prompt": "",
    "width": 1024,
    "height": 1024,
    "seed": 3944440447,
    "rand_device": "cpu",
    "cfg_scale": 5.5,
    "cfg_rescale_multiplier": 0,
    "steps": 50,
    "scheduler": "dpmpp_2m_sde_k",
    "model": {
        "key": "e790edfe-1614-48fa-9802-58b83c0159b7",
        "hash": "random:6cab136d48faf77462cab64f1f810971f1a6b925c94f1c6890bf8ae748936177",
        "name": "Juggernaut-XL-v9",
        "base": "sdxl",
        "type": "main"
    },
    "loras": [
        {
            "model": {
                "key": "3c106f7a-cdbc-4445-b11c-3915ac5886ee",
                "hash": "random:bbebed7dd0694eeb822c9b7fac85d01f49738cf59f0a1c2b7116a38f8409257f",
                "name": "alienzkin-sdxl",
                "base": "sdxl",
                "type": "lora"
            },
            "weight": 0.75
        }
    ],
    "positive_style_prompt": "super cute tiger cub alienzkin",
    "negative_style_prompt": "",
    "control_layers": {"layers": [], "version": 2},
    "app_version": "4.2.1"
}

LoRA disabled (any ram setting): (image)

ram: 7.5, LoRA enabled: (image)

ram: 0.25, LoRA enabled: (image)

What you expected to happen

LoRAs should still be applied correctly when the RAM cache setting is low.

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

@psychedelicious psychedelicious added the bug Something isn't working label May 15, 2024
@lstein lstein self-assigned this May 18, 2024
lstein pushed a commit that referenced this issue May 19, 2024
@lstein lstein closed this as completed in 532f82c May 24, 2024