Interpreting the Latent Space of GANs for Semantic Face Editing
Yujun Shen1, Jinjin Gu2, Xiaoou Tang1, Bolei Zhou1
1The Chinese University of Hong Kong
2The Chinese University of Hong Kong, Shenzhen
Overview
We find that the latent space of well-trained generative models, such as PGGAN and StyleGAN, actually encodes a disentangled representation after some linear transformations. Based on this analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in the latent space. We manage to control the pose as well as other facial attributes, such as gender, age, and eyeglasses. More importantly, we are able to correct some of the artifacts produced by GANs.
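The editing idea above can be sketched in a few lines: fit a linear boundary that separates latent codes by an attribute, then move a code along the boundary's normal. Everything below is a stand-in (random latent codes, synthetic labels, a hypothetical 512-dim latent space), not the paper's actual training setup.

```python
import numpy as np
from sklearn import svm

# Stand-in data: latent codes sampled from the GAN's latent distribution and
# binary attribute labels (in practice, scores from an off-the-shelf
# attribute classifier run on the corresponding generated images).
rng = np.random.default_rng(0)
latents = rng.standard_normal((2000, 512))            # codes z in latent space
true_dir = rng.standard_normal(512)
labels = (latents @ true_dir > 0).astype(int)         # synthetic attribute labels

# Fit a linear SVM; its unit normal vector n defines the attribute hyperplane.
clf = svm.LinearSVC(max_iter=10000).fit(latents, labels)
n = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Semantic edit: z' = z + alpha * n, where alpha sets strength and direction
# (positive alpha pushes toward the attribute, negative away from it).
z = latents[0]
z_edited = z + 3.0 * n

# Conditional manipulation: to edit attribute 1 while preserving attribute 2,
# project n onto the subspace orthogonal to the second boundary's normal n2.
n2 = rng.standard_normal(512)
n2 /= np.linalg.norm(n2)
n_cond = n - (n @ n2) * n2
n_cond /= np.linalg.norm(n_cond)
```

The projection step is what makes the edits disentangled: after removing the component along `n2`, walking along `n_cond` leaves the second attribute's classification score (to first order) unchanged.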
Results
We manipulate the following attributes with PGGAN: pose, age, gender, expression, eyeglasses, and artifact correction.
See more results in the following video.
BibTeX
@inproceedings{shen2020interpreting,
  title     = {Interpreting the Latent Space of GANs for Semantic Face Editing},
  author    = {Shen, Yujun and Gu, Jinjin and Tang, Xiaoou and Zhou, Bolei},
  booktitle = {CVPR},
  year      = {2020}
}

@article{shen2020interfacegan,
  title   = {InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs},
  author  = {Shen, Yujun and Yang, Ceyuan and Tang, Xiaoou and Zhou, Bolei},
  journal = {TPAMI},
  year    = {2020}
}
Related Work
A. Jahanian, L. Chai, P. Isola. On the "Steerability" of Generative Adversarial Networks. ICLR, 2020.
Comment: Steers the latent code to achieve camera movements and color changes, shifting the generated data distribution.