Is your feature request related to a problem? Please describe.
When I feed the `out_dim` argument to `__init__` in the Attention block, it raises a shape error because `query_dim != out_dim`: the code referenced below tries to keep the input channel count of `hidden_states` instead of the projected one.
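For context, a hypothetical minimal reproduction; the parameter values (`query_dim=64`, `out_dim=128`, the input shape) are illustrative and not from the original report:

```python
# Hypothetical minimal reproduction (illustrative values; assumes diffusers
# around commit b69fd99, where Attention.__init__ accepts `out_dim`).
import torch
from diffusers.models.attention_processor import Attention

attn = Attention(query_dim=64, heads=4, dim_head=32, out_dim=128)

# 4D input (batch, channel, height, width); channel matches query_dim.
hidden_states = torch.randn(2, 64, 16, 16)

# The attention output carries out_dim (128) channels, but the final
# reshape back to 4D uses the input channel (64), raising a shape error.
out = attn(hidden_states)
```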
Describe the solution you'd like.
I suggest changing this line in the code base:
`diffusers/src/diffusers/models/attention_processor.py`, line 1393 in `b69fd99`
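Judging from the proposed replacement below, that line presumably reshapes with the input channel count, along these lines (the exact variable name at that commit may differ):

```python
# Presumed current code (inferred from the proposed change; the exact
# variable name at commit b69fd99 may differ):
hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
```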
to:

```python
hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, -1, height, width)
```

With `-1`, the reshape respects the actual channel count of `hidden_states`.
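As a self-contained illustration of why `-1` resolves the mismatch (all dimensions here are made up for the example):

```python
# Standalone sketch of the shape fix; all dimensions are illustrative.
import torch

batch_size, height, width = 2, 8, 8
query_dim, out_dim = 64, 128

# After attention with out_dim set, the flattened output has out_dim channels.
hidden_states = torch.randn(batch_size, height * width, out_dim)

# Reshaping with the *input* channel count fails:
#   hidden_states.transpose(-1, -2).reshape(batch_size, query_dim, height, width)
#   -> RuntimeError (16384 elements cannot fill a (2, 64, 8, 8) tensor)

# The proposed reshape infers the channel count instead:
fixed = hidden_states.transpose(-1, -2).reshape(batch_size, -1, height, width)
print(fixed.shape)  # torch.Size([2, 128, 8, 8])
```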
Maybe I will make a PR later.
Describe alternatives you've considered.
None.
Additional context.
None.