Representation Learning and Identity Adversarial Training for Facial Behavior Understanding

Facial Action Unit (AU) detection has gained significant attention as it enables the breakdown of complex facial expressions into individual muscle movements. In this paper, we revisit two fundamental factors in AU detection: diverse and large-scale data and subject identity regularization. Motivated by recent advances in foundation models, we highlight the importance of data and introduce Face9M, a diverse dataset comprising 9 million facial images from multiple public sources. Pretraining a masked autoencoder on Face9M yields strong performance in AU detection and facial expression tasks. More importantly, we emphasize that Identity Adversarial Training (IAT) has not been well explored in AU tasks. To fill this gap, we first show that subject identity in AU datasets creates shortcut learning for the model and leads to sub-optimal solutions for AU prediction. Second, we demonstrate that strong IAT regularization is necessary to learn identity-invariant features. Finally, we elucidate the design space of IAT and empirically show that IAT circumvents identity-based shortcut learning and yields a better solution. Our proposed methods, Facial Masked Autoencoder (FMAE) and IAT, are simple, generic, and effective. Remarkably, the proposed FMAE-IAT approach achieves new state-of-the-art F1 scores on the BP4D (67.1\%), BP4D+ (66.8\%), and DISFA (70.1\%) databases, significantly outperforming previous work. We release the code and model at https://github.com/forever208/FMAE-IAT.
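
The abstract leaves the IAT mechanism implicit; a standard way to realize identity-adversarial regularization is a gradient reversal layer in front of an auxiliary identity classifier, as in domain-adversarial training. The PyTorch sketch below is an illustrative assumption, not the authors' released implementation: the names (`IATModel`, `lam`, `training_step`) and the choice of BCE for multi-label AU detection plus cross-entropy for identity are ours, and the actual design space explored in the paper may differ.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lam in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient w.r.t. lam

class IATModel(nn.Module):
    """Backbone with an AU head and an adversarial identity head behind gradient reversal."""
    def __init__(self, backbone, feat_dim, num_aus, num_ids, lam=1.0):
        super().__init__()
        self.backbone = backbone          # e.g. a ViT encoder pretrained as a masked autoencoder
        self.au_head = nn.Linear(feat_dim, num_aus)
        self.id_head = nn.Linear(feat_dim, num_ids)
        self.lam = lam                    # strength of the adversarial regularization

    def forward(self, x):
        feat = self.backbone(x)
        au_logits = self.au_head(feat)
        # Reversed gradients push the backbone to discard identity information.
        id_logits = self.id_head(GradientReversal.apply(feat, self.lam))
        return au_logits, id_logits

def training_step(model, images, au_labels, id_labels):
    # AU detection is multi-label (BCE with logits); identity is multi-class (CE).
    au_logits, id_logits = model(images)
    loss = nn.functional.binary_cross_entropy_with_logits(au_logits, au_labels)
    loss = loss + nn.functional.cross_entropy(id_logits, id_labels)
    return loss
```

Because the identity head is trained to succeed while the reversed gradients train the backbone to make it fail, the shared features converge toward identity invariance; increasing `lam` strengthens this regularization, consistent with the abstract's claim that strong IAT is needed to block identity-based shortcut learning.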