
Can ConSinGAN apply to SR and animation task? #6

Open
danielkaifeng opened this issue Apr 9, 2020 · 7 comments
danielkaifeng commented Apr 9, 2020

This is a very impressive project. I am wondering how to apply this model to SR and animation as SinGAN does. In SinGAN's SR process, the image needs to be up-sampled several times, and those generators can't be run separately.


tohinz commented Apr 9, 2020

Hi, we haven't tested animation and super-resolution with our model yet, but I would be interested to see how well they work.
For animation I believe it should work just as well as in SinGAN, since the main idea is simply to perturb z_opt at test time. However, if I remember correctly, SinGAN changes some hyperparameters for animation training (a different --min_size, different noise padding, possibly others). An easy way to test animation would be to train a ConSinGAN model on the image of interest and then slightly perturb the input noise z_opt at test time to see the results (check how exactly SinGAN perturbs the noise). I think SinGAN does something like 0.95*z_opt + 0.05*random_noise, but the details might be more involved.
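As a rough sketch of that blending idea (the 0.95/0.05 weighting is only my guess from above, and the tensor shape and `generator` are hypothetical placeholders):

```python
import torch

def perturb_noise(z_opt, alpha=0.95):
    """Blend the optimized noise map with fresh random noise.

    Smaller alpha -> larger frame-to-frame variation in the animation.
    """
    return alpha * z_opt + (1 - alpha) * torch.randn_like(z_opt)

# Generating a few animation frames from one trained generator might
# then look like (`generator` is a placeholder, not code from this repo):
# frames = [generator(perturb_noise(z_opt)) for _ in range(10)]
```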
For SR it's a bit more challenging, as you observed. One idea for our model would be to upsample the feature maps produced by the final generator (but before applying self.tail()) and feed them back into the final generator block. This could be repeated several times until the desired resolution is reached. I haven't tried this, so I'm not sure how well it will work, and you might have to experiment a little with the upsampling operation, i.e. by how much you upsample before feeding the upsampled features back into the final generator block.
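A minimal sketch of that feature-upsampling loop. Here `body` and `tail` are stand-in modules for the final generator block and its output layer (only self.tail() is named above; everything else is a hypothetical placeholder, not the actual ConSinGAN code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def iterative_sr(features, body, tail, scale=1.5, steps=2):
    """Repeatedly upsample intermediate features and pass them back
    through the final generator block, then render with the tail."""
    for _ in range(steps):
        # Upsample the feature maps before re-feeding them ...
        features = F.interpolate(features, scale_factor=scale,
                                 mode='bilinear', align_corners=False)
        # ... into the (reused) final generator block.
        features = body(features)
    # Map the upsampled features back to image space.
    return tail(features)
```

The `scale` and `steps` values would need tuning per image, as suggested above.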

tohinz commented Apr 9, 2020

Edit: I just added code/examples (see readme) for image animation. Training is the same as for unconditional image generation. At test time we simply add random noise to z_opt to create minor variations for the animation.

@danielkaifeng

Thank you! I will try this out and give more feedback.

@Yang-Yajie

> Thank you! I will try this out and give more feedback.

Hi, I have also been working on this project recently. Have you successfully applied ConSinGAN to the super-resolution task?


marisanest commented Mar 14, 2021

Hi @tohinz, thanks for the great work! I would also be very interested to hear whether there are any updates on the super-resolution task. Or has someone else perhaps already addressed it?


tohinz commented Mar 18, 2021

Hi, we haven't spent any time testing ConSinGAN for super-resolution. There are some ways this could be implemented, but we are not planning on working on it. I'm happy to help with any specific questions, though, or if you run into problems with the code.

@marisanest

Hi, ok, thanks for your response! I might come back to you when I start implementing it myself.
