
Noise Inversion Error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm) #348

Open
Vigilence opened this issue Feb 8, 2024 · 0 comments


@Vigilence

I am attempting to upscale an image using this extension, and it is the only extension I have enabled. The features enabled are Tiled VAE, Noise Inversion, and Tiled Diffusion.

If I enable Noise Inversion, I get the error below. If I don't enable it, I get no errors.

---
Noise Inversion:   0%|                                                                          | 0/10 [00:00<?, ?it/s]
[Tiled Diffusion] upscaling image with ESRGAN-UltraSharp-4x...
Upscale script freed memory successfully.
MixtureOfDiffusers Sampling:   0%|                                                             | 0/230 [00:40<?, ?it/s]
tiled upscale: 100%|█████████████████████████████████████████████████████████████████| 112/112 [00:07<00:00, 14.20it/s]
*** Error running process: I:\Stable Diffusion Forge\extensions\multidiffusion-upscaler-for-automatic1111\scripts\tilevae.py
    Traceback (most recent call last):
      File "I:\Stable Diffusion Forge\modules\scripts.py", line 798, in process
        script.process(p, *script_args)
      File "I:\Stable Diffusion Forge\extensions\multidiffusion-upscaler-for-automatic1111\scripts\tilevae.py", line 716, in process
        if devices.get_optimal_device_name().startswith('cuda') and vae.device == devices.cpu and not vae_to_gpu:
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1695, in __getattr__
        raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
    AttributeError: 'AutoencoderKL' object has no attribute 'device'

---
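For context, the AttributeError above comes from the Tiled VAE hook reading `vae.device`, but a plain `torch.nn.Module` (which is what `AutoencoderKL` is here) has no `device` attribute, so `__getattr__` raises. The usual generic PyTorch way to infer a module's device (purely illustrative; this is not the extension's actual code) is to look at its parameters:

```python
import torch
import torch.nn as nn

def module_device(module: nn.Module) -> torch.device:
    # nn.Module has no `.device` attribute; infer it from the first
    # parameter (or buffer) instead, falling back to CPU for empty modules.
    for p in module.parameters():
        return p.device
    for b in module.buffers():
        return b.device
    return torch.device("cpu")
```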
warn: noise inversion only supports the "Euler" sampler, switch to it sliently...
Mixture of Diffusers hooked into 'Euler' sampler, Tile size: 96x96, Tile count: 91, Batch size: 4, Tile batches: 23 (ext: NoiseInv)
MixtureOfDiffusers Sampling:   0%|                                                             | 0/230 [00:00<?, ?it/s]
Moving model(s) skipped. Freeing memory has taken 0.60 seconds
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.27 seconds
*** Error completing request
*** Arguments: ('task(6jw4y30z5o2pv2o)', 0, '(thick impasto painting:2), a vibrant textured impasto painting of a wave crashing against against the ocean in the foreground, with a colorful sky in the background, The wave is depicted in shades of blue, white, and turquoise with the foamy crest contrasting against the deep blue of the ocean, The sky is painted in hues of pink, orange, and yellow suggesting a sunrise, (oil painting:1.5), (masterpiece:1.25), 8k, cinematic lighting, (best quality:1.5), (detailed:1.5), (thick brushstrokes:1.5), (detailed brushstrokes:1.75), very high resolution, palette knife painting,   <lora:Etremely Detailed Sliders (Detail Improvement Effect) - V1.0 - SDXL- ntc:1>\n', 'ugly, (worst quality, normal quality, low quality:2.5), out of focus, bad painting, bad drawing, blurry, low resolution, (logo, text, signature, name, artist name, artist signature:2.5),  NegativeXL - A -Standard - gsdf, Pallets, wood, wood pallets, watermark, rocks, stones, (beach:1.5), sand, planks, log, (anime:2.5), (cartoon:2.5), manga, living room, bedroom, house, mountain, hill, beach, (noise:1.5)\n', [], <PIL.Image.Image image mode=RGBA size=2560x1440 at 0x1C24850DD20>, None, None, None, None, None, None, 60, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 7, 1.5, 0.55, 0.0, 2880, 5120, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000001C1243F5780>, 0, False, 1, 0.5, 4, 0, 0.5, 2, True, 'SDXL\\Refiner\\Stable Diffusion XL (Refiner) 1.0 - SDXL - StabilityAI.safetensors', 0.8, -1, False, -1, 0, 0, 0, UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), UiControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced'), False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0.5, 2, False, True, 'Mixture of Diffusers', False, True, 1024, 1024, 96, 96, 48, 4, 'ESRGAN-UltraSharp-4x', 2, True, 10, 1, 0.5, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, 
-1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 3072, 192, True, True, True, False, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "I:\Stable Diffusion Forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "I:\Stable Diffusion Forge\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "I:\Stable Diffusion Forge\modules\img2img.py", line 235, in img2img
        processed = process_images(p)
      File "I:\Stable Diffusion Forge\modules\processing.py", line 749, in process_images
        res = process_images_inner(p)
      File "I:\Stable Diffusion Forge\modules\processing.py", line 920, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "I:\Stable Diffusion Forge\modules\processing.py", line 1703, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "I:\Stable Diffusion Forge\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 249, in wrapper
        return fn(*args, **kwargs)
      File "I:\Stable Diffusion Forge\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 249, in wrapper
        return fn(*args, **kwargs)
      File "I:\Stable Diffusion Forge\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\abstractdiffusion.py", line 643, in sample_img2img
        latent = self.find_noise_for_image_sigma_adjustment(sampler.model_wrap, self.noise_inverse_steps, prompts)
      File "I:\Stable Diffusion Forge\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 249, in wrapper
        return fn(*args, **kwargs)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "I:\Stable Diffusion Forge\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\abstractdiffusion.py", line 725, in find_noise_for_image_sigma_adjustment
        eps = self.get_noise(x_in * c_in, t, cond_in, steps - i)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "I:\Stable Diffusion Forge\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\mixtureofdiffusers.py", line 200, in get_noise
        return self.apply_model_hijack(x_in, sigma_in, cond=cond_in, noise_inverse_step=step)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "I:\Stable Diffusion Forge\extensions\multidiffusion-upscaler-for-automatic1111\tile_utils\utils.py", line 249, in wrapper
        return fn(*args, **kwargs)
      File "I:\Stable Diffusion Forge\extensions\multidiffusion-upscaler-for-automatic1111\tile_methods\mixtureofdiffusers.py", line 119, in apply_model_hijack
        x_tile_out = shared.sd_model.apply_model_original_md(x_tile, t_tile, c_tile)
      File "I:\Stable Diffusion Forge\modules\sd_models_xl.py", line 45, in apply_model
        return self.model(x, t, cond, *args, **kwargs)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "I:\Stable Diffusion Forge\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "I:\Stable Diffusion Forge\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "I:\Stable Diffusion Forge\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "I:\Stable Diffusion Forge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 847, in forward
        emb = self.time_embed(t_emb)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\container.py", line 215, in forward
        input = module(input)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "I:\Stable Diffusion Forge\ldm_patched\modules\ops.py", line 46, in forward
        return super().forward(*args, **kwargs)
      File "I:\Stable Diffusion Forge\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
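The final RuntimeError is raised inside the UNet's time-embedding Linear layer: one operand passed to `F.linear` is still on the CPU while the layer's weights are on cuda:0. A minimal, self-contained sketch of the same class of failure and the generic remedy (illustrative only; the tensor names here are made up, not taken from the extension):

```python
import torch
import torch.nn.functional as F

if torch.cuda.is_available():
    # Linear weights on CUDA, input left on the CPU.
    weight = torch.randn(4, 8, device="cuda")
    bias = torch.randn(4, device="cuda")
    x_cpu = torch.randn(2, 8)  # never moved to CUDA

    try:
        F.linear(x_cpu, weight, bias)
    except RuntimeError as e:
        print(e)  # "Expected all tensors to be on the same device, ..."

    # Generic remedy: move the input to the weight's device first.
    out = F.linear(x_cpu.to(weight.device), weight, bias)
    print(out.shape)  # torch.Size([2, 4])
```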