This is a great work! But the web demo has a problem shown as follows: #6

QvQKing opened this issue Feb 8, 2024 · 0 comments
QvQKing commented Feb 8, 2024

Traceback (most recent call last):
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/gradio/routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/gradio/blocks.py", line 1352, in process_api
    result = await self.call_function(
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/gradio/blocks.py", line 1093, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/gradio/utils.py", line 341, in async_iteration
    return await iterator.__anext__()
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/gradio/utils.py", line 334, in __anext__
    return await anyio.to_thread.run_sync(
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/gradio/utils.py", line 317, in run_sync_iterator_async
    return next(iterator)
  File "/hy-tmp/LLMGA-master/llmga/serve/gradio_web_server.py", line 198, in generation_bot
    image = pipe(caption,num_inference_steps=num_inference_steps).images[0]
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/hy-tmp/LLMGA-master/llmga/diffusers/pipeline_stable_diffusion_xl_lpw.py", line 903, in __call__
    latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
  File "/usr/local/miniconda3/envs/llmga/lib/python3.9/site-packages/diffusers/schedulers/scheduling_euler_discrete.py", line 498, in step
    pred_original_sample = sample - sigma_hat * model_output
RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1
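The final RuntimeError is raised inside the scheduler's `sample - sigma_hat * model_output` step, which requires the latents and the UNet's noise prediction to have the same number of channels. One plausible cause (an assumption, not confirmed by the traceback alone) is a checkpoint/pipeline mismatch, e.g. a 9-channel inpainting-style latent tensor (4 latent + 4 masked-image + 1 mask channels) combined with a UNet that predicts only 4 channels. The shape conflict can be reproduced in isolation; the sketch below uses NumPy purely for illustration (NumPy raises ValueError where PyTorch raises RuntimeError):

```python
import numpy as np

# Hypothetical shapes inferred from the error message:
# tensor a has 9 channels, tensor b has 4, at dimension 1.
latents = np.zeros((1, 9, 64, 64))       # e.g. inpainting-style latents (4 + 4 + 1 channels)
model_output = np.zeros((1, 4, 64, 64))  # a plain text-to-image UNet predicts 4 channels
sigma_hat = 1.0                          # placeholder scalar, as in the Euler scheduler step

try:
    # Mirrors pred_original_sample = sample - sigma_hat * model_output
    pred_original_sample = latents - sigma_hat * model_output
except ValueError as e:
    print("shape mismatch:", e)
```

A quick sanity check when this happens is to compare `pipe.unet.config.in_channels` against the channel count of the latents being generated, to confirm the loaded checkpoint matches the pipeline class being used.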
