Data-Efficient Instance Generation from Instance Discrimination
Ceyuan Yang¹, Yujun Shen², Yinghao Xu¹, Bolei Zhou¹
¹ The Chinese University of Hong Kong
² ByteDance Inc.
Overview
In this work, we develop a novel data-efficient Instance Generation (InsGen) method for training GANs with limited data. With instance discrimination as an auxiliary task, our method makes the best use of both real and fake images to train the discriminator. In turn, the discriminator is exploited to train the generator to synthesize as many diverse images as possible. Experiments under different data regimes show that InsGen brings a substantial improvement over the baseline in terms of both image quality and image diversity, and outperforms previous data augmentation algorithms by a large margin.
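To make the mechanism concrete, below is a minimal sketch (not the official InsGen code) of an instance-discrimination auxiliary loss computed on discriminator features and applied to both real and fake images. It assumes a SimCLR-style in-batch InfoNCE objective in PyTorch; the names aug, extract_features, and proj_head, as well as the loss weights, are placeholders rather than the paper's implementation.

# Minimal sketch (assumption, not the official InsGen code): an
# instance-discrimination auxiliary loss on discriminator features,
# applied to both real and fake images on top of the adversarial loss.
import torch
import torch.nn.functional as F

def info_nce(features_a, features_b, temperature=0.1):
    """InfoNCE loss where features_a[i] and features_b[i] are two views of image i."""
    a = F.normalize(features_a, dim=1)
    b = F.normalize(features_b, dim=1)
    logits = a @ b.t() / temperature            # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

def instance_discrimination_loss(extract_features, proj_head, aug, images, temperature=0.1):
    """Two random augmentations of each image must map to nearby embeddings."""
    z1 = proj_head(extract_features(aug(images)))
    z2 = proj_head(extract_features(aug(images)))
    return info_nce(z1, z2, temperature)

# During the discriminator update, the auxiliary loss is added for both real
# and fake batches (weights w_real, w_fake are hypothetical):
#   d_loss = adv_loss \
#            + w_real * instance_discrimination_loss(..., real_images) \
#            + w_fake * instance_discrimination_loss(..., fake_images.detach())
# The overview above also notes that the discriminator is in turn used to push
# the generator toward diverse outputs; one way to realize this (an assumption
# here) is to apply the same instance-discrimination signal to fake images
# during the generator update.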
Results
Here we provide synthesized samples under different numbers of training images, together with the corresponding FID scores.
BibTeX
@article{yang2021insgen,
  title   = {Data-Efficient Instance Generation from Instance Discrimination},
  author  = {Yang, Ceyuan and Shen, Yujun and Xu, Yinghao and Zhou, Bolei},
  journal = {arXiv preprint arXiv:2106.04566},
  year    = {2021}
}
Related Work
T. Karras, M. Aittala, J. Hellsten, S. Laine, J. Lehtinen, T. Aila. Training Generative Adversarial Networks with Limited Data. NeurIPS, 2020.
Comment: Proposes an adaptive discriminator augmentation (ADA) mechanism that significantly stabilizes training in limited-data regimes (see the sketch after this list).
S. Zhao, Z. Liu, J. Lin, J.-Y. Zhu, S. Han. Differentiable Augmentation for Data-Efficient GAN Training. NeurIPS, 2020.
Comment: Imposes various types of differentiable augmentations on both real and fake samples (see the sketch after this list).
J. Jeong, J. Shin. Training GANs with Stronger Augmentations via Contrastive Discriminator. ICLR, 2021.
Comment: Proposes a novel GAN discriminator, showing that contrastive representation learning (e.g., SimCLR) and GAN training can benefit each other when they are jointly trained.
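The two augmentation-based works above lend themselves to a short illustration. The following is a minimal sketch (not the official ADA or DiffAugment code) of (1) applying the same differentiable augmentation to both real and fake batches so that gradients still reach the generator, and (2) adapting the augmentation probability from a discriminator-overfitting heuristic. The brightness jitter stands in for the actual color/translation/cutout policies, and the constants and loss form are assumptions.

# Minimal sketch (assumption, not the official ADA/DiffAugment code).
import torch
import torch.nn.functional as F

def diff_augment(x, p):
    """Apply a differentiable per-sample brightness shift with probability p."""
    shift = torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5
    apply = (torch.rand(x.size(0), 1, 1, 1, device=x.device) < p).float()
    return x + apply * shift

def discriminator_loss(D, real_images, fake_images, p):
    # Non-saturating GAN loss with the same augmentation on real and fake inputs.
    real_logits = D(diff_augment(real_images, p))
    fake_logits = D(diff_augment(fake_images.detach(), p))
    loss = F.softplus(-real_logits).mean() + F.softplus(fake_logits).mean()
    return loss, real_logits

def generator_loss(D, fake_images, p):
    # The generator step sees the same augmentation, kept differentiable.
    return F.softplus(-D(diff_augment(fake_images, p))).mean()

def update_augment_probability(p, real_logits, target=0.6, step=0.01):
    # A value of E[sign(D(real))] close to 1 indicates the discriminator is
    # overfitting to real data; raise p when above the target, else lower it.
    r_t = real_logits.sign().mean().item()
    p = p + step if r_t > target else p - step
    return min(max(p, 0.0), 1.0)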