Towards Flexible Blind JPEG Artifacts Removal

Training a single deep blind model to handle different quality factors for JPEG image artifacts removal has been attracting considerable attention due to its convenience for practical usage. However, existing deep blind methods usually reconstruct the image directly without predicting the quality factor, and thus lack the flexibility to control the output that non-blind methods offer. To remedy this problem, in this paper, we propose a flexible blind convolutional neural network, namely FBCNN, that can predict an adjustable quality factor to control the trade-off between artifacts removal and details preservation. Specifically, FBCNN decouples the quality factor from the JPEG image via a decoupler module and then embeds the predicted quality factor into the subsequent reconstructor module through a quality factor attention block for flexible control. Besides, we find that existing methods are prone to fail on non-aligned double JPEG images, even with only a one-pixel shift, and we thus propose a double JPEG degradation model to augment the training data. Extensive experiments on single JPEG images, more general double JPEG images, and real-world JPEG images demonstrate that our proposed FBCNN achieves favorable performance against state-of-the-art methods in terms of both quantitative metrics and visual quality.
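As an illustration of the non-aligned double JPEG setting described above, the following is a minimal sketch (not the authors' implementation) of a double JPEG degradation for augmenting training data: a patch is JPEG-compressed once, shifted by a small pixel offset so that the 8x8 block grids of the two compressions no longer align, and then compressed a second time. The quality-factor ranges, the shift range, and the file names are illustrative assumptions.

```python
import io
import random
from PIL import Image


def jpeg_compress(img: Image.Image, quality: int) -> Image.Image:
    """Round-trip an image through in-memory JPEG encoding at the given quality."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def double_jpeg_degradation(img: Image.Image,
                            qf1: int = None,
                            qf2: int = None,
                            max_shift: int = 1) -> Image.Image:
    """Apply non-aligned double JPEG compression to a clean training patch."""
    qf1 = qf1 if qf1 is not None else random.randint(10, 95)
    qf2 = qf2 if qf2 is not None else random.randint(10, 95)

    # First JPEG compression.
    out = jpeg_compress(img, qf1)

    # Crop with a small random offset (e.g. one pixel) so the 8x8 block grid
    # of the second compression is misaligned with that of the first.
    dx = random.randint(0, max_shift)
    dy = random.randint(0, max_shift)
    w, h = out.size
    out = out.crop((dx, dy, w, h))

    # Second JPEG compression on the shifted result.
    return jpeg_compress(out, qf2)


if __name__ == "__main__":
    clean = Image.open("example.png").convert("RGB")  # hypothetical input path
    degraded = double_jpeg_degradation(clean)
    degraded.save("double_jpeg_example.jpg")
```

With max_shift set to 0 this reduces to aligned double JPEG compression, which is the easier case; the one-pixel misalignment is what the abstract identifies as the failure mode of existing methods.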