
[Tiled VAE] Warning: Unknown attention optimization method . Please try to update the extension. #351

Open
Vektor8298 opened this issue Feb 18, 2024 · 0 comments

Comments


Vektor8298 commented Feb 18, 2024

I updated A1111 to version v1.7.0 (commit hash cf2772fab0af5573da775e7437e6acdca424f26e), and ever since then Tiled VAE has stopped working with this error:
Traceback (most recent call last):
  File "/mnt/ts512/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/mnt/ts512/stable-diffusion-webui/modules/call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "/mnt/ts512/stable-diffusion-webui/modules/txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "/mnt/ts512/stable-diffusion-webui/modules/processing.py", line 734, in process_images
    res = process_images_inner(p)
  File "/mnt/ts512/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 41, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/mnt/ts512/stable-diffusion-webui/modules/processing.py", line 868, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/mnt/ts512/stable-diffusion-webui/modules/processing.py", line 1157, in sample
    return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
  File "/mnt/ts512/stable-diffusion-webui/modules/processing.py", line 1216, in sample_hr_pass
    samples = images_tensor_to_samples(decoded_samples, approximation_indexes.get(opts.sd_vae_encode_method))
  File "/mnt/ts512/stable-diffusion-webui/modules/sd_samplers_common.py", line 110, in images_tensor_to_samples
    x_latent = model.get_first_stage_encoding(model.encode_first_stage(image))
  File "/mnt/ts512/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/mnt/ts512/stable-diffusion-webui/modules/sd_hijack_utils.py", line 26, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "/mnt/ts512/stable-diffusion-webui/modules/sd_hijack_unet.py", line 79, in <lambda>
    first_stage_sub = lambda orig_func, self, x, **kwargs: orig_func(self, x.to(devices.dtype_vae), **kwargs)
  File "/mnt/ts512/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/mnt/ts512/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "/mnt/ts512/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py", line 84, in encode
    moments = self.quant_conv(h)
  File "/mnt/ts512/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/mnt/ts512/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/ts512/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 501, in network_Conv2d_forward
    return originals.Conv2d_forward(self, input)
  File "/mnt/ts512/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/mnt/ts512/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
GPU is an RX 6600 XT, on Arch Linux, Python 3.10.6.
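
For context, the final RuntimeError is a generic PyTorch dtype mismatch: a float32 tensor reaches the VAE's fp16 `quant_conv` during the hires-fix re-encode. Here is a minimal sketch (my own illustration, not code from the webui or the extension) that reproduces the same error and shows the kind of cast that the `x.to(devices.dtype_vae)` lambda in `modules/sd_hijack_unet.py` is meant to perform:

```python
import torch
import torch.nn as nn

# Illustration only: a conv whose parameters are fp16 (c10::Half), fed fp32 input.
conv = nn.Conv2d(8, 8, kernel_size=1).half()  # weight/bias -> torch.float16
x = torch.randn(1, 8, 64, 64)                 # input stays torch.float32

try:
    conv(x)
except RuntimeError as e:
    # "Input type (float) and bias type (c10::Half) should be the same"
    print(e)

# Casting the input to the module's dtype avoids the mismatch. (fp16 convolutions
# on CPU may be unsupported in older PyTorch builds, so this may need a GPU there.)
out = conv(x.to(conv.weight.dtype))
print(out.dtype)  # torch.float16
```

Since the traceback shows the cast to `devices.dtype_vae` does run, presumably `dtype_vae` resolves to float32 here while the VAE weights are fp16. If so, launching the webui with `--no-half-vae` (which keeps the VAE in fp32 end to end) might work around this class of mismatch, though I have not confirmed it fixes this specific case.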
