1 U-GAT-IT

1.1 Principle

Like CycleGAN, U-GAT-IT performs image translation on unpaired data: given two sets of images with different styles, it learns to transfer style between them automatically. What sets it apart is that U-GAT-IT is a novel method for unsupervised image-to-image translation that incorporates a new attention module and a new learnable normalization function (AdaLIN) in an end-to-end manner.
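To make the "learnable normalization" concrete, here is a minimal NumPy sketch of AdaLIN (Adaptive Layer-Instance Normalization): a learnable per-channel weight `rho` blends instance normalization with layer normalization, and `gamma`/`beta` apply the adaptive affine transform. This is an illustration of the idea, not the repository's implementation (which lives in the framework's layers).

```python
import numpy as np

def adalin(x, gamma, beta, rho, eps=1e-5):
    """Sketch of AdaLIN from U-GAT-IT.

    x:     feature map of shape (N, C, H, W)
    gamma: per-channel scale, shape (C,)
    beta:  per-channel shift, shape (C,)
    rho:   learnable blend weight in [0, 1], shape (C,)
    """
    # Instance norm: statistics per sample and per channel, over (H, W)
    mu_in = x.mean(axis=(2, 3), keepdims=True)
    var_in = x.var(axis=(2, 3), keepdims=True)
    x_in = (x - mu_in) / np.sqrt(var_in + eps)

    # Layer norm: statistics per sample, over (C, H, W)
    mu_ln = x.mean(axis=(1, 2, 3), keepdims=True)
    var_ln = x.var(axis=(1, 2, 3), keepdims=True)
    x_ln = (x - mu_ln) / np.sqrt(var_ln + eps)

    # rho interpolates between the two normalizations per channel
    rho = rho.reshape(1, -1, 1, 1)
    x_hat = rho * x_in + (1 - rho) * x_ln
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)
```

With `rho = 1` this reduces to instance normalization; with `rho = 0`, to layer normalization. The network learns, per layer and channel, which statistics matter for the translation task.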

1.2 How to use

1.2.1 Prepare Datasets

The selfie2anime dataset used by U-GAT-IT can be downloaded from here. You can also use your own dataset; its structure should be as follows:

  ├── dataset
      └── YOUR_DATASET_NAME
          ├── trainA
          ├── trainB
          ├── testA
          └── testB
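Missing split folders are a common cause of cryptic loader errors, so a quick check of the layout above can save a failed run. The helper below is illustrative, not part of the repo's tooling:

```python
import os

# The four unpaired splits the layout above requires
REQUIRED_SPLITS = ("trainA", "trainB", "testA", "testB")

def check_dataset(root):
    """Return the list of missing split folders under root
    (e.g. dataset/YOUR_DATASET_NAME); an empty list means
    the layout is complete."""
    return [s for s in REQUIRED_SPLITS
            if not os.path.isdir(os.path.join(root, s))]
```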

1.2.2 Train/Test

The dataset used in this example is selfie2anime; you can switch to your own dataset in the config file.

Train a model:

   python -u tools/main.py --config-file configs/ugatit_selfie2anime_light.yaml

Test the model:

   python tools/main.py --config-file configs/ugatit_selfie2anime_light.yaml --evaluate-only --load ${PATH_OF_WEIGHT}

1.3 Results

1.4 Model Download

| Model | Dataset | Download |
| --- | --- | --- |
| ugatit_light | selfie2anime | ugatit_light |

References