
A small question about the requirement for Ram/VRam #50

Open
brbisheng opened this issue Jan 1, 2024 · 3 comments

Comments

@brbisheng

Hello, Xavier, this is a brilliant contribution, thank you very much.

I encountered a small issue: when running run_gradio_demo.py, the process kept getting killed without any explicit error message. The output ends like this:

...
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
DiffusionWrapper has 865.91 M params.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 16, 16) = 1024 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
^C

Could this be because my system RAM is too low (12 GB)? Monitoring the process, I noticed that during the cross-attention setup phase my CPU RAM hits its 12 GB ceiling, while GPU VRAM usage stays almost nil.

Could you provide some suggestions for potential causes of such an issue? ❤
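For anyone debugging a silent kill like this: a SIGKILL from the kernel OOM killer leaves no Python traceback, so one quick check is whether free physical memory collapses just before the process dies. A minimal, Linux-only stdlib sketch (these `os.sysconf` names are not exposed on every platform; this helper is not part of the repo):

```python
import os

def available_ram_gb():
    # Linux-only: free physical pages * page size, reported in GiB.
    page_size = os.sysconf("SC_PAGE_SIZE")
    avail_pages = os.sysconf("SC_AVPHYS_PAGES")
    return page_size * avail_pages / 1024**3

print(f"Available RAM: {available_ram_gb():.1f} GiB")
```

If this drops toward zero right before the process is killed, the OOM killer is the likely culprit (the kernel log, e.g. via `dmesg`, would confirm it).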

@XavierCHEN34
Collaborator

XavierCHEN34 commented Jan 2, 2024

This might be caused by the mask refinement module. You could set its default to False:
reference_mask_refine = gr.Checkbox(label='Reference Mask Refine', value=False, interactive = True)

@brbisheng
Author

Thank you very much, Xavier.

I checked the corresponding line and experimented with the default value, but the problem persisted.

I tried to trace the problem; it appears the process never gets past the following line:

model = create_model(model_config).cpu()

More precisely, it fails inside the instantiate_from_config function:

import importlib


def instantiate_from_config(config):
    # Build an object from a config dict: "target" is a dotted import
    # path, "params" an optional dict of constructor kwargs.
    if "target" not in config:
        if config == '__is_first_stage__':
            return None
        elif config == "__is_unconditional__":
            return None
        raise KeyError("Expected key `target` to instantiate.")
    return get_obj_from_str(config["target"])(**config.get("params", dict()))


def get_obj_from_str(string, reload=False):
    # Resolve "package.module.Name" to the attribute itself,
    # optionally reloading the module first.
    module, cls = string.rsplit(".", 1)
    if reload:
        module_imp = importlib.import_module(module)
        importlib.reload(module_imp)
    return getattr(importlib.import_module(module, package=None), cls)
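To separate an import failure from a memory failure, the same dynamic-import logic can be exercised with a cheap stdlib target instead of the heavy model classes. A self-contained sketch (`datetime.date` is just an illustrative target, not part of the repo):

```python
import importlib

def get_obj_from_str(string, reload=False):
    # Resolve a dotted path like "datetime.date" to the object itself.
    module, cls = string.rsplit(".", 1)
    if reload:
        importlib.reload(importlib.import_module(module))
    return getattr(importlib.import_module(module, package=None), cls)

def instantiate_from_config(config):
    # Same idea as the repo's helper: look up "target", pass "params" as kwargs.
    if "target" not in config:
        raise KeyError("Expected key `target` to instantiate.")
    return get_obj_from_str(config["target"])(**config.get("params", dict()))

# A lightweight target proves the import machinery works in isolation:
obj = instantiate_from_config(
    {"target": "datetime.date", "params": {"year": 2024, "month": 1, "day": 1}}
)
print(obj)  # 2024-01-01
```

If this runs fine but create_model still dies, the bottleneck is the memory needed to construct the 865.91 M-parameter model on CPU, not the import path.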

@bbytiger

I'm running into a similar issue as described above.
