STAR Loss: Reducing Semantic Ambiguity in Facial Landmark Detection

Recently, deep learning-based facial landmark detection has achieved significant improvement. However, the semantic ambiguity problem degrades detection performance. Specifically, semantic ambiguity causes inconsistent annotations and negatively affects the model's convergence, leading to worse accuracy and unstable predictions. To solve this problem, we propose a Self-adapTive Ambiguity Reduction (STAR) loss that exploits the properties of semantic ambiguity. We find that semantic ambiguity results in an anisotropic predicted distribution, which inspires us to use the predicted distribution to represent semantic ambiguity. Based on this, we design the STAR loss to measure the anisotropy of the predicted distribution. Compared with the standard regression loss, STAR loss is small when the predicted distribution is anisotropic, and thus it adaptively mitigates the impact of semantic ambiguity. Moreover, we propose two kinds of eigenvalue restriction methods that avoid both abnormal changes in the distribution and premature convergence of the model. Finally, comprehensive experiments demonstrate that STAR loss outperforms state-of-the-art methods on three benchmarks, i.e., COFW, 300W, and WFLW, with negligible computation overhead. Code is available at https://github.com/ZhenglinZhou/STAR.
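
The abstract describes STAR loss as a regression loss re-weighted by the anisotropy of the predicted landmark distribution. Below is a minimal PyTorch sketch of that idea, not the authors' reference implementation (see the linked repository for that); the function name star_loss, the heatmap tensor layout, the standard-deviation normalisation, and the choice of a detach-style eigenvalue restriction are illustrative assumptions.

import torch


def star_loss(heatmap: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """heatmap: (N, K, H, W) non-negative score maps, one per landmark.
    target:  (N, K, 2) ground-truth (x, y) coordinates in pixel units."""
    n, k, h, w = heatmap.shape
    prob = heatmap.flatten(2)
    prob = prob / (prob.sum(dim=-1, keepdim=True) + eps)          # normalise each map to a distribution

    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=heatmap.dtype, device=heatmap.device),
        torch.arange(w, dtype=heatmap.dtype, device=heatmap.device),
        indexing="ij",
    )
    grid = torch.stack([xs.flatten(), ys.flatten()], dim=-1)       # (H*W, 2) pixel coordinates

    mu = prob @ grid                                               # (N, K, 2) soft-argmax mean
    diff = grid.unsqueeze(0).unsqueeze(0) - mu.unsqueeze(2)        # (N, K, H*W, 2)
    cov = torch.einsum("nkp,nkpi,nkpj->nkij", prob, diff, diff)    # (N, K, 2, 2) covariance

    # Eigen-decomposition of the 2x2 covariance gives the principal axes of the
    # predicted distribution and their spreads (eigenvalues): the anisotropy.
    evals, evecs = torch.linalg.eigh(cov)
    # Detach-style restriction (one possible reading of the abstract's eigenvalue
    # restriction): stop gradients through the eigenvalues so the model cannot
    # shrink the loss simply by inflating the predicted spread.
    evals = evals.detach().clamp_min(eps)

    err = target - mu                                              # (N, K, 2) prediction error
    proj = torch.einsum("nkd,nkdi->nki", err, evecs)               # error projected on each principal axis
    # Error along a high-variance (ambiguous) axis is divided by a larger spread,
    # so it contributes less to the total loss, mirroring the adaptive mitigation
    # described in the abstract.
    per_axis = proj.abs() / evals.sqrt()
    return per_axis.sum(dim=-1).mean()

Usage would follow the usual heatmap-regression setup: feed the network's per-landmark heatmaps and the annotated coordinates to star_loss and backpropagate; the detached eigenvalues keep the re-weighting adaptive without letting the spread itself become a shortcut.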