Bringing Old Films Back to Life

We present a learning-based framework, the recurrent transformer network (RTN), to restore heavily degraded old films. Instead of performing frame-wise restoration, our method draws on hidden knowledge learned from adjacent frames, which contain abundant information about occlusion; this knowledge helps remove challenging artifacts from each frame while ensuring temporal coherency. Moreover, contrasting the representation of the current frame with the hidden knowledge makes it possible to infer scratch positions in an unsupervised manner, and such defect localization generalizes well to real-world degradations. To better resolve mixed degradations and compensate for flow estimation errors during frame alignment, we propose to leverage more expressive transformer blocks for spatial restoration. Experiments on both a synthetic dataset and real-world old films demonstrate the significant superiority of the proposed RTN over existing solutions. In addition, the same framework can effectively propagate color from keyframes to the whole video, ultimately yielding compelling restored films. The implementation and model will be released at https://github.com/raywzy/Bringing-Old-Films-Back-to-Life.
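To make the recurrent design concrete, here is a minimal sketch (not the authors' released code) of the per-frame restoration step the abstract describes: a hidden state carries knowledge across frames, is backward-warped into alignment with the current frame by estimated optical flow, and a transformer block fuses the two for spatial restoration. All module names, channel sizes, and the residual prediction are illustrative assumptions; the real RTN's architecture is in the linked repository.

```python
# A hedged sketch of a recurrent transformer restoration step.
# Assumptions (not from the paper): module structure, dim=64, zero flow
# in the toy usage, and a residual RGB prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(x: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp a feature map x (N, C, H, W) with flow (N, 2, H, W)."""
    n, _, h, w = x.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=x.device, dtype=x.dtype),
        torch.arange(w, device=x.device, dtype=x.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow  # absolute coords
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(
        x, torch.stack((grid_x, grid_y), dim=-1), align_corners=True
    )


class RecurrentRestorer(nn.Module):
    """Hypothetical per-frame step: encode, align hidden state, fuse, decode."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.encode = nn.Conv2d(3, dim, 3, padding=1)
        self.to_hidden = nn.Conv2d(2 * dim, dim, 3, padding=1)
        # Transformer block operating on flattened spatial tokens.
        self.fuse = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.decode = nn.Conv2d(dim, 3, 3, padding=1)

    def forward(self, frame, hidden, flow):
        feat = self.encode(frame)                   # current-frame features
        aligned = warp(hidden, flow)                # align hidden knowledge
        n, c, h, w = feat.shape
        fused = self.to_hidden(torch.cat([feat, aligned], dim=1))
        tokens = fused.flatten(2).transpose(1, 2)   # (N, H*W, C)
        tokens = self.fuse(tokens)                  # transformer restoration
        new_hidden = tokens.transpose(1, 2).reshape(n, c, h, w)
        restored = self.decode(new_hidden) + frame  # residual RGB prediction
        return restored, new_hidden


# Toy usage: run the recurrent step over a short clip with placeholder flow.
if __name__ == "__main__":
    model = RecurrentRestorer()
    clip = torch.rand(1, 5, 3, 64, 64)              # (N, T, C, H, W)
    hidden = torch.zeros(1, 64, 64, 64)
    flow = torch.zeros(1, 2, 64, 64)                # would come from a flow net
    for t in range(clip.shape[1]):
        restored, hidden = model(clip[:, t], hidden, flow)
    print(restored.shape)  # torch.Size([1, 3, 64, 64])
```

The recurrence is why the transformer blocks matter: warping with imperfect flow leaves misalignments in `aligned`, and attention over spatial tokens can compensate for those errors rather than trusting the warp pixel-for-pixel.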