
Pre-training strategies and datasets for facial representation learning

Adrian Bulat, Shiyang Cheng, Jing Yang, Andrew Garbett, Enrique Sanchez, Georgios Tzimiropoulos
Abstract

What is the best way to learn a universal face representation? Recent work on deep learning in the area of face analysis has focused on supervised learning for specific tasks of interest (e.g. face recognition, facial landmark localization, etc.) but has overlooked the overarching question of how to find a facial representation that can be readily adapted to several facial analysis tasks and datasets. To this end, we make the following four contributions: (a) we introduce, for the first time, a comprehensive evaluation benchmark for facial representation learning consisting of 5 important face analysis tasks. (b) We systematically investigate two ways of large-scale representation learning applied to faces: supervised and unsupervised pre-training. Importantly, we focus our evaluations on the case of few-shot facial learning. (c) We investigate important properties of the training datasets, including their size and quality (labelled, unlabelled, or even uncurated). (d) To draw our conclusions, we conducted a very large number of experiments. Our two main findings are: (1) unsupervised pre-training on completely in-the-wild, uncurated data provides consistent and, in some cases, significant accuracy improvements for all facial tasks considered; (2) many existing facial video datasets seem to have a large amount of redundancy. We will release code and pre-trained models to facilitate future research.
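To make the pipeline the abstract contrasts concrete, below is a minimal, hypothetical sketch of the unsupervised-pre-training-then-few-shot-adaptation recipe: a self-supervised stage on unlabelled (possibly uncurated) face images, followed by fitting a small task head on a handful of labels. The SimCLR-style NT-Xent loss, ResNet-50 backbone, and dummy batches are illustrative assumptions, not the authors' released code or their specific pre-training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


def nt_xent(z1, z2, tau=0.1):
    """Contrastive (NT-Xent) loss: pull two augmented views of one image together."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D) unit-norm embeddings
    sim = z @ z.t() / tau                          # pairwise cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))              # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)


backbone = torchvision.models.resnet50(weights=None)
backbone.fc = nn.Identity()                        # expose 2048-d pooled features
projector = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 128))
opt = torch.optim.SGD([*backbone.parameters(), *projector.parameters()], lr=0.05)

# Stage 1: self-supervised pre-training on unlabelled faces. Dummy tensors stand
# in for a loader that yields two random augmentations of each uncurated image.
view1, view2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
loss = nt_xent(projector(backbone(view1)), projector(backbone(view2)))
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: few-shot adaptation. Freeze the backbone and fit a linear head on a
# handful of labelled examples for the downstream facial task.
for p in backbone.parameters():
    p.requires_grad_(False)
head = nn.Linear(2048, 5)                          # e.g. 5-way few-shot task
few_x, few_y = torch.randn(10, 3, 224, 224), torch.randint(0, 5, (10,))
head_opt = torch.optim.Adam(head.parameters(), lr=1e-3)
head_loss = F.cross_entropy(head(backbone(few_x)), few_y)
head_opt.zero_grad(); head_loss.backward(); head_opt.step()
```

The key point mirrored here is that stage 1 never touches labels, so it can run on arbitrarily large in-the-wild data; the few-shot stage then measures how readily the learned representation transfers with scarce supervision.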
