
CHAN for Image-to-Image Translation

PyTorch implementations of CHAN for Image-to-Image Translation.

Fei Gao, Xingxin Xu, Jun Yu, Meimei Shang, Xiang Li, and Dacheng Tao, Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation, IEEE Transactions on Image Processing, 2021. (Accepted)

Our Proposed Framework

Generator

Discriminator (Multi-layer Integration Discriminator, MID)

Generated Examples

Prerequisites

Getting Started

Installation

The pre-trained model needs to be saved at ./checkpoint.

Then you can test the model.
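As a rough illustration of how the test step fits together, the sketch below loads a pre-trained generator from ./checkpoint and translates a single image. The `Generator` import, the checkpoint file name `latest_net_G.pth`, the 256x256 input size, and the image paths are assumptions for illustration only; for actual experiments, use the repository's own test script and options.

```python
# Minimal sketch (model/class/file names are assumed, not taken from this repo):
# load a pre-trained generator saved under ./checkpoint and translate one image.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

from models.networks import Generator  # assumed module/class name

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build the generator and load the pre-trained weights from ./checkpoint.
netG = Generator().to(device)
state = torch.load("./checkpoint/latest_net_G.pth", map_location=device)  # assumed file name
netG.load_state_dict(state)
netG.eval()

# Preprocess a test image (resize and normalize to [-1, 1], matching common pix2pix-style pipelines).
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
x = preprocess(Image.open("example_input.jpg").convert("RGB")).unsqueeze(0).to(device)

# Translate and save the output, mapping values back from [-1, 1] to [0, 1].
with torch.no_grad():
    y = netG(x)
save_image((y + 1.0) / 2.0, "example_output.png")
```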

Results

Our final results can be downloaded here.

Our quantitative performance on a variety of image-to-image translation tasks is shown below. We assign a score of +1, 0, or -1 to the best, intermediate, and worst model according to each performance index. For each method, we report the total score on each dataset as well as the total across all datasets.
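To make the scoring rule concrete, the following sketch aggregates +1/0/-1 scores over performance indices and datasets. The method names, index names, and numbers are placeholders for illustration, not results from the paper, and all indices are assumed to be higher-is-better.

```python
# Illustrative sketch of the +1 / 0 / -1 scoring rule (placeholder values only):
# per performance index, the best method gets +1, the worst -1, the rest 0;
# scores are then summed per dataset and across all datasets.
metrics = {
    # dataset -> index -> {method: value}; higher is assumed better for every index here
    "dataset_A": {"index_1": {"CHAN": 0.90, "method_B": 0.70, "method_C": 0.80}},
    "dataset_B": {"index_2": {"CHAN": 0.85, "method_B": 0.80, "method_C": 0.75}},
}

per_dataset = {}
for dataset, indices in metrics.items():
    for index, values in indices.items():
        ranked = sorted(values, key=values.get)          # worst ... best
        scores = {ranked[0]: -1, ranked[-1]: +1}         # middle methods default to 0
        for method in values:
            per_dataset.setdefault(dataset, {}).setdefault(method, 0)
            per_dataset[dataset][method] += scores.get(method, 0)

overall = {}
for dataset, method_scores in per_dataset.items():
    for method, score in method_scores.items():
        overall[method] = overall.get(method, 0) + score

print(per_dataset)  # total score on each dataset
print(overall)      # total score across all datasets
```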

Training/Test Tips

Best practices for training and testing your models. Feel free to ask any questions about the code: Xingxin Xu, jehovahxu@gmail.com

Citation

If you find this useful for your research, please cite our paper as:

Fei Gao, Xingxin Xu, Jun Yu, Meimei Shang, Xiang Li, and Dacheng Tao, Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation, IEEE Transactions on Image Processing, 2021. (Accepted)

@article{gao2021chan,
	title = {Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation},
	author = {Fei Gao and Xingxin Xu and Jun Yu and Meimei Shang and Xiang Li and Dacheng Tao},
	journal = {IEEE Transactions on Image Processing},
	year = {2021},
	url = {https://github.com/fei-hdu/chan},
}

Acknowledgments

Our code is inspired by pytorch-CycleGAN-and-pix2pix.