Cartoonify

PyTorch implementation of translating real images to cartoon images using pix2pix (Image-to-Image Translation with Conditional Adversarial Networks, by Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros).

Generated Data Animation

Abstract

We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
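
For concreteness, the pix2pix generator is trained with a conditional GAN loss plus an L1 reconstruction term, while the discriminator scores (input, target) pairs. Below is a minimal PyTorch sketch of these objectives; the names (generator inputs, discriminator, lambda_l1) are illustrative placeholders, not the exact code in src/models.

import torch
import torch.nn as nn

# Illustrative sketch of the pix2pix objectives; names are placeholders,
# not the actual identifiers used in this repository.
bce = nn.BCEWithLogitsLoss()   # adversarial loss on discriminator logits
l1 = nn.L1Loss()               # reconstruction loss between fake and target
lambda_l1 = 100                # L1 weight suggested in the paper

def generator_loss(discriminator, real_input, fake_output, real_target):
    # The generator tries to make (input, fake) pairs look real to the discriminator
    pred_fake = discriminator(real_input, fake_output)
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    rec = l1(fake_output, real_target)
    return adv + lambda_l1 * rec

def discriminator_loss(discriminator, real_input, real_target, fake_output):
    # The discriminator labels (input, real) pairs as real and (input, fake) pairs as fake
    pred_real = discriminator(real_input, real_target)
    pred_fake = discriminator(real_input, fake_output.detach())
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                  bce(pred_fake, torch.zeros_like(pred_fake)))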

Architecture

DCGAN Generator
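
The generator follows an encoder-decoder design built from strided convolution and transposed convolution blocks. A rough PyTorch sketch of one such down/up block pair is shown below; channel sizes and normalization choices are illustrative and may differ from the definitions in src/models.

import torch.nn as nn

# Minimal sketch of a downsampling / upsampling block pair, in the spirit of
# the pix2pix generator. Layer choices here are illustrative only.
class Down(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2),
        )

    def forward(self, x):
        return self.block(x)

class Up(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.block(x)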

Directory Structure

.
├── assets
├── data
├── docs
├── logs
├── pipelines
├── research
├── src
│   ├── data
│   ├── models
│   └── utils
├── tests
├── weights
├── LICENSE
├── README.md
├── requirements.txt
├── train.py
└── inference.py


Run Training

python train.py \
    --wandbkey={{WANDB KEY}} \
    --projectname=Cartoonify \
    --wandbentity={{WANDB USERNAME}} \
    --tensorboard=True \
    --kaggle_user={{KAGGLE USERNAME}} \
    --kaggle_key={{KAGGLE API KEY}} \
    --batch_size=2 \
    --epoch=5 \
    --load_checkpoints=True
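
These flags correspond to command-line arguments of train.py. A hypothetical argparse sketch of how such a CLI could be declared is shown below; the actual argument definitions live in train.py and may differ.

import argparse

def str2bool(value):
    # argparse does not convert the strings "True"/"False" to booleans by itself
    return str(value).lower() in ("true", "1", "yes")

# Hypothetical sketch of the CLI surface implied by the command above;
# the real definitions in train.py may differ.
parser = argparse.ArgumentParser(description="Train the Cartoonify pix2pix model")
parser.add_argument("--wandbkey", type=str, help="Weights & Biases API key")
parser.add_argument("--projectname", type=str, default="Cartoonify", help="W&B project name")
parser.add_argument("--wandbentity", type=str, help="W&B user or team name")
parser.add_argument("--tensorboard", type=str2bool, default=False, help="also log to TensorBoard")
parser.add_argument("--kaggle_user", type=str, help="Kaggle username for dataset download")
parser.add_argument("--kaggle_key", type=str, help="Kaggle API key for dataset download")
parser.add_argument("--batch_size", type=int, default=2, help="training batch size")
parser.add_argument("--epoch", type=int, default=5, help="number of training epochs")
parser.add_argument("--load_checkpoints", type=str2bool, default=False, help="resume from saved weights")
args = parser.parse_args()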

References

  1. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros. Image-to-Image Translation with Conditional Adversarial Networks. CVPR 2017. [arxiv]
  2. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. Generative Adversarial Nets. NIPS 2014. [arxiv]
  3. Ian Goodfellow. Tutorial: Generative Adversarial Networks. NIPS 2016. [arxiv]
  4. PyTorch Docs. [https://pytorch.org/docs/stable/index.html]
