Blind Face Restoration via Deep Multi-scale Component Dictionaries

Recent reference-based face restoration methods have received considerable attention due to their great capability in recovering high-frequency details on real low-quality images. However, most of these methods require a high-quality reference image of the same identity, making them applicable only in limited scenes. To address this issue, this paper proposes a deep face dictionary network (termed DFDNet) to guide the restoration of degraded observations. To begin with, we use K-means to generate deep dictionaries for perceptually significant face components (\ie, left/right eyes, nose, and mouth) from high-quality images. Next, given the degraded input, we match and select the most similar component features from the corresponding dictionaries and transfer the high-quality details to the input via the proposed dictionary feature transfer (DFT) block. In particular, component AdaIN is leveraged to eliminate the style discrepancy between the input and dictionary features (\eg, illumination), and a confidence score is proposed to adaptively fuse the dictionary feature into the input. Finally, multi-scale dictionaries are adopted in a progressive manner to enable coarse-to-fine restoration. Experiments show that our proposed method achieves plausible performance in both quantitative and qualitative evaluation and, more importantly, generates realistic and promising results on real degraded images without requiring a reference of the same identity. The source code and models are available at \url{https://github.com/csxmli2016/DFDNet}.
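To make the offline dictionary-building and "match and select" steps concrete, the following is a minimal sketch using scikit-learn's K-means over flattened component features. The function names, atom count, and the use of cosine similarity for matching are illustrative assumptions, not the paper's exact pipeline.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def build_component_dictionary(component_feats, n_atoms=512):
    # component_feats: (N, D) flattened deep features of one facial
    # component (e.g., left eye) cropped from many high-quality faces.
    # n_atoms is an assumed dictionary size, not the paper's setting.
    km = KMeans(n_clusters=n_atoms, n_init=10, random_state=0)
    km.fit(component_feats)
    return km.cluster_centers_               # (n_atoms, D) dictionary atoms

def match_atom(query_feat, dictionary, eps=1e-8):
    # "Match and select": return the atom with the highest cosine
    # similarity to the degraded component's feature.
    q = query_feat / (np.linalg.norm(query_feat) + eps)
    d = dictionary / (np.linalg.norm(dictionary, axis=1, keepdims=True) + eps)
    return dictionary[int(np.argmax(d @ q))]
\end{verbatim}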
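The component AdaIN and confidence-score fusion inside the DFT block can be sketched as below, assuming standard AdaIN statistics matching and a small convolutional head that predicts a per-pixel confidence from the residual. The module structure and layer choices are assumptions for illustration, not DFDNet's released architecture.

\begin{verbatim}
import torch
import torch.nn as nn

def component_adain(dict_feat, input_feat, eps=1e-5):
    # Re-normalize the dictionary feature so its channel-wise mean/std
    # match the degraded input's, removing style gaps such as illumination.
    b, c = input_feat.shape[:2]
    i_mean = input_feat.reshape(b, c, -1).mean(2).reshape(b, c, 1, 1)
    i_std = input_feat.reshape(b, c, -1).std(2).reshape(b, c, 1, 1) + eps
    d_mean = dict_feat.reshape(b, c, -1).mean(2).reshape(b, c, 1, 1)
    d_std = dict_feat.reshape(b, c, -1).std(2).reshape(b, c, 1, 1) + eps
    return (dict_feat - d_mean) / d_std * i_std + i_mean

class ConfidenceFusion(nn.Module):
    # Predict a spatial confidence map from the residual between the
    # style-normalized dictionary feature and the input feature, then
    # add the confidence-weighted residual back to the input.
    def __init__(self, channels):
        super().__init__()
        self.conf = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, input_feat, dict_feat):
        dict_feat = component_adain(dict_feat, input_feat)
        residual = dict_feat - input_feat
        score = self.conf(residual)          # per-pixel weight in [0, 1]
        return input_feat + residual * score

# Usage sketch: fuse a matched dictionary feature into a decoder feature.
# fuse = ConfidenceFusion(256)
# out = fuse(torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32))
\end{verbatim}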
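Finally, the progressive coarse-to-fine application of multi-scale dictionaries might look like the loop below, where each decoder scale fuses the dictionary feature matched at that scale before upsampling. The plain additive fusion and constant channel width here are simplifications purely for illustration; DFDNet's per-scale DFT block with confidence scoring would take the fusion's place.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveDFT(nn.Module):
    # Coarse-to-fine sketch: at each scale, fuse the dictionary feature
    # matched at that resolution, refine, then upsample to the next scale.
    def __init__(self, channels, n_scales=4):
        super().__init__()
        self.refine = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1)
            for _ in range(n_scales)
        )

    def forward(self, feat, dict_feats):
        # dict_feats: matched dictionary features, coarsest scale first,
        # each matching feat's spatial size at that scale.
        for i, (conv, d) in enumerate(zip(self.refine, dict_feats)):
            feat = conv(feat + d)            # additive fusion, then refine
            if i < len(self.refine) - 1:     # upsample between scales
                feat = F.interpolate(feat, scale_factor=2.0, mode="nearest")
        return feat
\end{verbatim}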