Improving the Fairness of Deep Generative Models
without Retraining
Shuhan Tan1, Yujun Shen2, Bolei Zhou2
1 Sun Yat-sen University
2 The Chinese University of Hong Kong
Overview
We propose a simple yet effective method to improve the fairness of image generation for a pre-trained GAN model without retraining. Generative Adversarial Networks (GANs) have recently advanced face synthesis by learning the underlying distribution of observed data. However, they can produce biased image generation due to imbalanced training data or mode collapse. This work leverages a recent GAN interpretation method together with a Gaussian Mixture Model (GMM) to guide the sampling of latent codes so that the generated images follow a fairer attribute distribution. We call this method FairGen. Experiments show that FairGen substantially improves the fairness of image generation. The generated images are further used to reveal and quantify biases in commercial face classifiers and a face super-resolution model.
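The overview can be read as a two-step recipe: score a pool of sampled latent codes by an attribute (e.g., via a latent-space boundary from the interpretation method or an off-the-shelf attribute classifier on the generated images), then fit a GMM over each attribute subgroup and sample evenly from the fitted mixtures. The following is a minimal sketch of that idea, not the paper's exact FairGen procedure; the function names, the per-subgroup GMMs, the diagonal covariance, the even-split sampling, and the toy labeling rule are all illustrative assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_subgroup_gmms(latent_codes, attribute_labels, n_components=5, seed=0):
    """Fit one GMM over the latent codes of each attribute subgroup."""
    gmms = {}
    for value in np.unique(attribute_labels):
        # Diagonal covariance keeps fitting stable in a high-dimensional
        # latent space; this is an assumption, not the paper's choice.
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=seed)
        gmm.fit(latent_codes[attribute_labels == value])
        gmms[value] = gmm
    return gmms

def fair_sample(gmms, n_samples, seed=0):
    """Draw latent codes evenly from each subgroup GMM so that images
    generated from them have a roughly balanced attribute distribution."""
    rng = np.random.default_rng(seed)
    per_group = n_samples // len(gmms)
    codes = np.concatenate([gmm.sample(per_group)[0] for gmm in gmms.values()], axis=0)
    rng.shuffle(codes)  # interleave the subgroups
    return codes

# Illustrative usage with stand-in data: `latent_codes` would come from the
# GAN's latent prior, and `attribute_labels` from an attribute classifier
# applied to the corresponding generated images (or from a latent boundary).
latent_codes = np.random.randn(10_000, 512)
attribute_labels = (latent_codes[:, 0] > 0.8).astype(int)  # imbalanced toy labels
gmms = fit_subgroup_gmms(latent_codes, attribute_labels)
balanced_codes = fair_sample(gmms, n_samples=1_000)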
Results
Note that all of the following images are synthesized with StyleGAN2.

Fair Image Generation

Age - Eyeglasses

Gender - Black Hair

Identifying Bias in Existing Models

Mis-classified Images by Commercial APIs

Attribute Alteration by a Face Super-resolution Model

BibTeX
@article{tan2020fairgen,
  title   = {Improving the Fairness of Deep Generative Models without Retraining},
  author  = {Tan, Shuhan and Shen, Yujun and Zhou, Bolei},
  journal = {arXiv preprint arXiv:2012.04842},
  year    = {2020}
}
Related Work
S. Zhao, H. Ren, A. Yuan, J. Song, N. Goodman, S. Ermon. Bias and Generalization in Deep Generative Models: An Empirical Study. NeurIPS 2018.
Comment: Presents an empirical study of the bias and generalization introduced by the training process of deep generative models.
K. Choi, A. Grover, T. Singh, R. Shu, S. Ermon. Fair Generative Modeling via Weak Supervision. ICML 2020.
Comment: Offsets the bias in GANs by learning a weighting function that reweights the importance of each instance during GAN training.
N. Yu, K. Li, P. Zhou, J. Malik, L. Davis, M. Fritz. Inclusive GAN: Improving Data and Minority Coverage in Generative Models. ECCV 2020.
Comment: Mitigates GAN bias by encouraging data and minority coverage during training with new objective functions.
Y. Shen, J. Gu, X. Tang, B. Zhou. Interpreting the Latent Space of GANs for Semantic Face Editing. CVPR 2020.
Comment: Interprets the face semantics emerging in the latent space of GANs with the help of off-the-shelf classifiers.