MTCAE-DFER: Multi-Task Cascaded Autoencoder for Dynamic Facial Expression Recognition

This paper expands the cascaded network branch of an autoencoder-based multi-task learning (MTL) framework for dynamic facial expression recognition, namely the Multi-Task Cascaded Autoencoder for Dynamic Facial Expression Recognition (MTCAE-DFER). MTCAE-DFER builds a plug-and-play cascaded decoder module, which is based on the Vision Transformer (ViT) architecture and employs the decoder concept of the Transformer to reconstruct the multi-head attention module. The decoder output from the previous task serves as the query (Q), representing local dynamic features, while the Video Masked Autoencoder (VideoMAE) shared encoder output acts as both the key (K) and value (V), representing global dynamic features. This design facilitates interaction between global and local dynamic features across related tasks. Additionally, this proposal aims to alleviate the overfitting of complex large models. We utilize an autoencoder-based multi-task cascaded learning approach to explore the impact of dynamic face detection and dynamic face landmark detection on dynamic facial expression recognition, which enhances the model's generalization ability. Extensive ablation experiments and comparisons with state-of-the-art (SOTA) methods on multiple public dynamic facial expression recognition datasets demonstrate the robustness of the MTCAE-DFER model and the effectiveness of the global-local dynamic feature interaction among related tasks.
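
To make the cascaded query/key-value interaction concrete, below is a minimal PyTorch sketch of one such decoder block, assuming standard multi-head cross-attention with residual connections. The class name CascadedDecoderBlock, the token shapes, and the dimensions are illustrative assumptions, not details taken from the paper.

# Illustrative sketch (names and shapes are assumptions, not from the paper):
# Q comes from the previous task's decoder (local dynamic features);
# K and V come from the shared VideoMAE encoder (global dynamic features).
import torch
import torch.nn as nn

class CascadedDecoderBlock(nn.Module):
    def __init__(self, dim=768, num_heads=12, mlp_ratio=4.0):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_mlp = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, prev_task_tokens, shared_encoder_tokens):
        # Cross-attention: previous decoder output attends to global features.
        q = self.norm_q(prev_task_tokens)
        kv = self.norm_kv(shared_encoder_tokens)
        attn_out, _ = self.cross_attn(query=q, key=kv, value=kv)
        x = prev_task_tokens + attn_out       # residual connection
        x = x + self.mlp(self.norm_mlp(x))    # feed-forward with residual
        return x

# Hypothetical usage: cascade face detection -> landmarks -> expression.
global_feats = torch.randn(2, 196, 768)  # shared VideoMAE encoder output (K, V)
det_out = torch.randn(2, 196, 768)       # face-detection decoder output (Q)
landmark_block = CascadedDecoderBlock()
landmark_feats = landmark_block(det_out, global_feats)  # shape (2, 196, 768)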