One-Shot Generative Domain Adaptation
Ceyuan Yang1*,  Yujun Shen2*,  Zhiyi Zhang2,5,  Yinghao Xu1,
Jiapeng Zhu3,  Zhirong Wu4,  Bolei Zhou1
1 The Chinese University of Hong Kong, 2 ByteDance Inc.
3 Hong Kong University of Science and Technology, 4 Microsoft Research Asia
5 University of Southern California
This work aims at transferring a Generative Adversarial Network (GAN) pre-trained on one image domain to a new domain, given as few as just one target image. Unlike existing approaches that adopt a vanilla fine-tuning strategy, we introduce two lightweight modules, an attribute adaptor and an attribute classifier, into the generator and the discriminator respectively. By efficiently learning only these two modules, we reuse the prior knowledge of the source model and thereby enable one-shot transfer with impressively high diversity. Our method demonstrates substantial improvements over existing baselines across a wide range of settings.
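To illustrate the idea of a lightweight, trainable module placed in front of a frozen generator, here is a minimal sketch. The class name `AttributeAdaptor` and its placement as a per-dimension affine transform on the latent code are assumptions for illustration, not the authors' released implementation.

```python
import numpy as np

class AttributeAdaptor:
    """Hypothetical sketch: a per-dimension affine transform on the
    latent code w. During adaptation, only gamma and beta would be
    trained while the pre-trained generator stays frozen."""

    def __init__(self, dim):
        self.gamma = np.ones(dim)   # scale, initialized to identity
        self.beta = np.zeros(dim)   # shift, initialized to zero

    def __call__(self, w):
        return self.gamma * w + self.beta

# At initialization the adaptor is the identity mapping, so adaptation
# starts exactly from the source-domain generator's behavior.
adaptor = AttributeAdaptor(512)
w = np.random.randn(4, 512)
w_adapted = adaptor(w)
assert w_adapted.shape == (4, 512)
assert np.allclose(w_adapted, w)
```

Because the frozen generator carries all the prior knowledge, only the few adaptor parameters need to fit the single target image, which is what makes one-shot transfer feasible in this framing.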
Here we show some synthesized samples after one-shot adaptation.
@article{yang2021one,
  title   = {One-Shot Generative Domain Adaptation},
  author  = {Yang, Ceyuan and Shen, Yujun and Zhang, Zhiyi and Xu, Yinghao and Zhu, Jiapeng and Wu, Zhirong and Zhou, Bolei},
  journal = {arXiv preprint arXiv:2111.09876},
  year    = {2021}
}
Related Work
S. Mo, M. Cho, and J. Shin. FreezeD: a Simple Baseline for Fine-tuning GANs. CVPR AI for Content Creation Workshop, 2020.
Comment: Proposes to freeze the lower layers of the discriminator for generative domain adaptation.
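The FreezeD baseline above can be sketched in a few lines. This is a schematic stand-in, not the FreezeD code: `Layer` and its `requires_grad` flag merely mirror the PyTorch convention of disabling gradients for frozen parameters.

```python
class Layer:
    """Stand-in for a discriminator layer with a trainability flag."""
    def __init__(self, name):
        self.name = name
        self.requires_grad = True

def freeze_lower_layers(layers, num_frozen):
    """Freeze the first `num_frozen` layers (closest to the input) so
    that only the remaining upper layers are fine-tuned, following the
    FreezeD idea of reusing generic low-level discriminator features."""
    for i, layer in enumerate(layers):
        layer.requires_grad = i >= num_frozen
    return layers

disc = [Layer(f"conv{i}") for i in range(6)]
freeze_lower_layers(disc, num_frozen=4)
trainable = [l.name for l in disc if l.requires_grad]
# Only the top two layers remain trainable: ["conv4", "conv5"]
```

The design choice is that early discriminator layers capture domain-agnostic statistics and can be reused, so fine-tuning only the upper layers reduces overfitting in low-data regimes.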
U. Ojha, Y. Li, J. Lu, A. A. Efros, Y. J. Lee, E. Shechtman, and R. Zhang. Few-shot Image Generation via Cross-domain Correspondence. CVPR, 2021.
Comment: Proposes cross-domain consistency as a regularization to maintain diversity.