HDR image reconstruction from a single exposure using deep CNNs

Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well-suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed to take into account the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution, visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and show high-quality results also for image-based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements as compared to existing methods.
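The data augmentation described above, simulating sensor saturation on HDR training images, can be illustrated with a minimal sketch. This is not the paper's exact pipeline; it assumes a virtual exposure scaling, a simple power-law stand-in for the camera response function, and a chosen quantization depth, all of which are illustrative parameters:

```python
import numpy as np

def simulate_saturation(hdr, exposure=1.0, gamma=2.2, bits=8):
    """Simulate a limited-dynamic-range capture of a linear HDR image.

    hdr:      float array of linear radiance values, e.g. shape (H, W, 3)
    exposure: virtual exposure scaling applied before the sensor clips
    gamma:    placeholder camera response curve (a simple power law)
    bits:     quantization depth of the simulated sensor
    """
    # Scale by the virtual exposure, then clip to emulate sensor saturation.
    ldr = np.clip(hdr * exposure, 0.0, 1.0)
    # Apply the stand-in camera response function.
    ldr = ldr ** (1.0 / gamma)
    # Quantize to the sensor's bit depth.
    levels = 2 ** bits - 1
    return np.round(ldr * levels) / levels

# A synthetic HDR pixel whose blue channel exceeds the clipping point:
hdr = np.array([[[0.2, 0.5, 3.0]]])
ldr = simulate_saturation(hdr, exposure=1.0)
```

Training pairs for the CNN would then consist of such clipped, quantized images as input and the original HDR radiance as target, so the network learns to predict the values lost in saturated regions.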