
Conditional Adversarial Synthesis of 3D Facial Action Units.

Authors
Zhilei Liu, Guoxian Song, Jianfei Cai, Tat-Jen Cham, Juyong Zhang

Employing deep learning-based approaches for fine-grained facial expression analysis, such as those involving the estimation of Action Unit (AU) intensities, is difficult due to the lack of a large-scale dataset of real faces with sufficiently diverse AU labels for training. In this paper, we consider how AU-level facial image synthesis can be used to substantially augment such a dataset. We propose an AU synthesis framework that combines the well-known 3D Morphable Model (3DMM), which intrinsically disentangles expression parameters from other face attributes, with models that adversarially generate 3DMM expression parameters conditioned on given target AU labels, in contrast to the more conventional approach of generating facial images directly. In this way, we are able to synthesize new combinations of expression parameters and facial images from desired AU labels. Extensive quantitative and qualitative results on the benchmark DISFA dataset demonstrate the effectiveness of our method on 3DMM facial expression parameter synthesis and data augmentation for deep learning-based AU intensity estimation.
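The core idea, generating 3DMM expression parameters conditioned on target AU labels rather than synthesizing images directly, can be sketched as a conditional generator that maps a noise vector concatenated with an AU-label vector to an expression-parameter vector. The sketch below is a minimal, hypothetical illustration: the dimensions (a 12-dimensional AU vector, a 29-dimensional expression-parameter output), the fixed random projection standing in for a trained network, and the function name are all assumptions for illustration, not the paper's actual architecture.

```python
import random

def generate_expression_params(au_labels, noise_dim=16, param_dim=29, seed=0):
    """Hypothetical sketch of AU-conditioned 3DMM parameter synthesis.

    au_labels: target AU intensity vector (the conditioning signal).
    Returns a param_dim-length list of synthetic expression parameters.
    A real system would use a trained adversarial generator here; this
    uses a fixed random linear projection purely to show the data flow.
    """
    rng = random.Random(seed)
    # Sample a latent noise vector and concatenate the AU condition,
    # as in a standard conditional-GAN generator input.
    z = [rng.gauss(0.0, 1.0) for _ in range(noise_dim)]
    cond_input = z + list(au_labels)
    # Stand-in for the trained generator network (assumption).
    weights = [[rng.gauss(0.0, 0.1) for _ in range(len(cond_input))]
               for _ in range(param_dim)]
    return [sum(w * c for w, c in zip(row, cond_input)) for row in weights]

# Example: request a face with AU12 (lip corner puller) at intensity 3.
au_vector = [0.0] * 12
au_vector[7] = 3.0  # index of AU12 in this hypothetical 12-AU layout
params = generate_expression_params(au_vector)
```

The generated parameter vector would then be fed through the 3DMM's expression basis to render a face image, so the AU label controls expression while identity and other attributes stay disentangled.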
