Master's Thesis

My master's thesis consisted of developing a generative model for multi-view-consistent, full-body, textured human avatars. The proposed method combined a simple differentiable point-based rendering module with a lightweight generative adversarial network (GAN) to learn human appearance in the form of SMPL-X UV texture maps, from single-view photographs only. The generated textures are rendered on the underlying SMPL-X geometry, and the resulting images are passed to a discriminator that classifies them as real or fake against the ground-truth photographs.
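The data flow of such a pipeline can be sketched in a few lines. This is a minimal, hypothetical illustration only: the layer shapes, resolutions, and the stubbed texture-lookup renderer are my assumptions, not the thesis implementation, and the "networks" are reduced to single random linear maps just to show how a latent code becomes a UV texture, the texture becomes a rendered image via per-pixel UV lookup, and the image is scored by a discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the actual values used in the thesis are not stated.
LATENT_DIM, TEX_RES, IMG_RES = 64, 32, 24

def generator(z):
    """Map a latent code to an RGB UV texture map (sketch: one linear layer)."""
    W = rng.standard_normal((TEX_RES * TEX_RES * 3, LATENT_DIM)) * 0.01
    tex = np.tanh(W @ z)                      # texture values in [-1, 1]
    return tex.reshape(TEX_RES, TEX_RES, 3)

def render(texture, uv):
    """Stand-in for the differentiable renderer: sample the UV map at each
    pixel's (u, v) coordinate, as produced by rasterizing the body mesh
    (the rasterization itself is stubbed out here)."""
    u = (uv[..., 0] * (TEX_RES - 1)).astype(int)
    v = (uv[..., 1] * (TEX_RES - 1)).astype(int)
    return texture[v, u]                      # (IMG_RES, IMG_RES, 3) image

def discriminator(img):
    """Score an image as real/fake (sketch: logistic regression on pixels)."""
    w = rng.standard_normal(img.size) * 0.01
    return 1.0 / (1.0 + np.exp(-(w @ img.ravel())))

z = rng.standard_normal(LATENT_DIM)           # random appearance code
uv = rng.random((IMG_RES, IMG_RES, 2))        # stand-in for rasterized UVs
texture = generator(z)
fake_img = render(texture, uv)
score = discriminator(fake_img)               # probability of being "real"
print(texture.shape, fake_img.shape)
```

In a real implementation the generator and discriminator would be trained adversarially with gradients flowing through the renderer back into the texture; here the lookup-based renderer only illustrates why the texture itself, not any single rendered view, is the learned representation.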

This pipeline offers several advantages over competing techniques: because UV maps are view-consistent by construction, the generated appearance is robust to novel body poses and camera views, and new appearances can be rendered in real time.

The generator did not converge, owing to the instabilities typical of GAN training; this work therefore primarily serves as a basis for future research. The code is available on my GitHub.


Report and Presentation