Cascaded Dual Vision Transformer for Accurate Facial Landmark Detection

Facial landmark detection is a fundamental problem in computer vision for many downstream applications. This paper introduces a new facial landmark detector based on vision transformers, which consists of two unique designs: Dual Vision Transformer (D-ViT) and Long Skip Connections (LSC). Based on the observation that the channel dimension of feature maps essentially represents the linear bases of the heatmap space, we propose learning the interconnections between these linear bases to model the inherent geometric relations among landmarks via a Channel-split ViT. We integrate this channel-split ViT with the standard vision transformer (i.e., spatial-split ViT), forming our Dual Vision Transformer, which constitutes the prediction blocks. We also suggest using long skip connections to deliver low-level image features to all prediction blocks, thereby preventing useful information from being discarded by intermediate supervision. Extensive experiments on the widely used WFLW, COFW, and 300W benchmarks demonstrate that our model outperforms the previous state-of-the-art methods on all three benchmarks.
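
To make the dual design concrete, below is a minimal sketch of a prediction block that runs self-attention over spatial tokens (the standard spatial-split ViT) and, in parallel, over channel tokens (the channel-split ViT, where each channel of the flattened feature map is treated as a token). This is not the authors' implementation: the class name `DualViTBlock`, the layer sizes, and the fusion-by-summation of the two branches are illustrative assumptions, and the long skip connections are omitted.

```python
import torch
import torch.nn as nn


class DualViTBlock(nn.Module):
    """Hypothetical dual-attention block: spatial-split + channel-split ViT branches."""

    def __init__(self, num_tokens: int, dim: int, heads: int = 4):
        super().__init__()
        # Spatial-split branch: tokens are spatial positions, embedding dim = channels.
        self.spatial_norm = nn.LayerNorm(dim)
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Channel-split branch: tokens are channels, embedding dim = spatial positions,
        # so attention models interconnections between the channel-wise linear bases.
        self.channel_norm = nn.LayerNorm(num_tokens)
        self.channel_attn = nn.MultiheadAttention(num_tokens, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                 nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim) flattened feature map.
        s = self.spatial_norm(x)
        s, _ = self.spatial_attn(s, s, s)            # attention across spatial tokens
        c = self.channel_norm(x.transpose(1, 2))     # (batch, dim, num_tokens)
        c, _ = self.channel_attn(c, c, c)            # attention across channel tokens
        x = x + s + c.transpose(1, 2)                # fuse both branches (assumed: sum)
        return x + self.mlp(x)


if __name__ == "__main__":
    block = DualViTBlock(num_tokens=16 * 16, dim=64)
    feats = torch.randn(2, 16 * 16, 64)              # e.g. a 16x16 feature map, 64 channels
    print(block(feats).shape)                        # torch.Size([2, 256, 64])
```

In a full detector following the paper's description, several such blocks would be stacked with intermediate heatmap supervision after each, and low-level backbone features would be injected into every block through long skip connections.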