Improving GANs with A Dynamic Discriminator
Ceyuan Yang1,3,*,  Yujun Shen2,*,  Yinghao Xu1,  Deli Zhao2,  Bo Dai3,  Bolei Zhou4
1 CUHK, 2 Ant Group, 3 Shanghai AI Laboratory, 4 UCLA
Overview
This work aims at adjusting the capacity of the discriminator on the fly to better accommodate the time-varying bi-classification task. A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves the synthesis performance without incurring any additional computational cost or training objectives. Two capacity-adjusting schemes are developed for training GANs under different data regimes: i) given a sufficient amount of training data, the discriminator benefits from a progressively increased learning capacity, and ii) when the training data is limited, gradually decreasing the layer width mitigates the over-fitting issue of the discriminator. Experiments on both 2D and 3D-aware image synthesis tasks, conducted on a range of datasets, substantiate the generalizability of DynamicD as well as its substantial improvement over the baselines. Furthermore, DynamicD is complementary to other discriminator-improving approaches (including data augmentation, regularizers, and pre-training), and brings further performance gains when combined with them for learning GANs.
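To make the two schemes concrete, the sketch below adjusts a discriminator's effective layer width on the fly by activating only a leading slice of each layer's filters, with the slice size following a simple linear schedule over training. This is a minimal PyTorch illustration; all names (DynamicConv2d, DynamicDiscriminator, width_schedule) and the linear schedule itself are our assumptions for exposition, not the paper's official implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Conv layer whose effective output width can change at run time."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.max_out, self.stride, self.padding = out_ch, stride, padding

    def forward(self, x, width_ratio=1.0):
        # Keep only the first k output filters; slicing the input axis
        # matches the (possibly shrunken) width of the previous layer.
        k = max(1, int(self.max_out * width_ratio))
        w = self.weight[:k, :x.shape[1]]
        return F.conv2d(x, w, self.bias[:k],
                        stride=self.stride, padding=self.padding)

class DynamicDiscriminator(nn.Module):
    def __init__(self, img_ch=3, base=64):
        super().__init__()
        self.convs = nn.ModuleList([
            DynamicConv2d(img_ch, base, stride=2),
            DynamicConv2d(base, base * 2, stride=2),
            DynamicConv2d(base * 2, base * 4, stride=2),
        ])
        self.head = nn.Linear(base * 4, 1)  # applied to pooled features

    def forward(self, x, width_ratio=1.0):
        for conv in self.convs:
            x = F.leaky_relu(conv(x, width_ratio), 0.2)
        x = x.mean(dim=(2, 3))                # global average pooling
        w = self.head.weight[:, :x.shape[1]]  # match active channel count
        return x @ w.t() + self.head.bias

def width_schedule(step, total_steps, sufficient_data=True,
                   low=0.5, high=1.0):
    """Grow capacity when data is plentiful; shrink it to curb
    discriminator over-fitting when data is limited."""
    t = step / max(1, total_steps)
    return low + (high - low) * t if sufficient_data else high - (high - low) * t

# Usage: the same discriminator is evaluated at a width that evolves
# with the training step, e.g. under the limited-data regime.
D = DynamicDiscriminator()
fake = torch.randn(4, 3, 64, 64)
for step in (0, 500, 1000):
    ratio = width_schedule(step, 1000, sufficient_data=False)
    logits = D(fake, width_ratio=ratio)  # shape: (4, 1)

Because inactive filters are merely skipped rather than removed, all parameters stay in the model and no extra training objective or computation is introduced, which matches the overview's claim above.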
Results
Here we provide synthesized samples together with their corresponding FID scores. Numbers in blue highlight the improvements over the baselines.
BibTeX
@article{yang2022improving,
  title   = {Improving GANs with A Dynamic Discriminator},
  author  = {Yang, Ceyuan and Shen, Yujun and Xu, Yinghao and Zhao, Deli and Dai, Bo and Zhou, Bolei},
  journal = {arXiv preprint arXiv:2209.09897},
  year    = {2022}
}
Related Work
H. Cai, C. Gan, J. Lin, and S. Han. Network Augmentation for Tiny Deep Learning. ICLR, 2022.
Comment: Introduces network augmentation to improve the performance of tiny neural networks.