HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting

High dynamic range (HDR) novel view synthesis (NVS) aims to create photorealistic images from novel viewpoints using HDR imaging techniques. The rendered HDR images capture a wider range of brightness levels and contain more details of the scene than normal low dynamic range (LDR) images. Existing HDR NVS methods are mainly based on NeRF and suffer from long training times and slow inference speeds. In this paper, we propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS), which can efficiently render novel HDR views and reconstruct LDR images with a user-input exposure time. Specifically, we design a Dual Dynamic Range (DDR) Gaussian point cloud model that uses spherical harmonics to fit HDR color and employs an MLP-based tone-mapper to render LDR color. The HDR and LDR colors are then fed into two Parallel Differentiable Rasterization (PDR) processes to reconstruct HDR and LDR views. To establish the data foundation for research on 3D Gaussian splatting-based methods in HDR NVS, we recalibrate the camera parameters and compute the initial positions for the Gaussian point clouds. Experiments demonstrate that our HDR-GS surpasses the state-of-the-art NeRF-based method by 3.84 and 1.91 dB on LDR and HDR NVS, respectively, while enjoying 1000x inference speed and requiring only 6.3% of the training time.
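
To make the DDR design concrete, below is a minimal sketch of an MLP-based tone-mapper that maps per-Gaussian HDR color and a user-input exposure time to LDR color. This is an illustrative assumption in PyTorch, not the paper's released implementation; the class name `ToneMapperMLP`, the layer widths, and the log-exposure input are hypothetical choices.

```python
import torch
import torch.nn as nn


class ToneMapperMLP(nn.Module):
    """Hypothetical MLP tone-mapper: maps per-Gaussian HDR color plus a
    user-supplied exposure value to LDR color in [0, 1]."""

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden_dim),   # 3 HDR color channels + 1 exposure value
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 3),   # LDR RGB
            nn.Sigmoid(),               # clamp to the displayable [0, 1] range
        )

    def forward(self, hdr_color: torch.Tensor, exposure: torch.Tensor) -> torch.Tensor:
        # hdr_color: (N, 3) HDR radiance fitted by spherical harmonics
        # exposure:  (N, 1) log exposure time broadcast to every Gaussian
        return self.net(torch.cat([hdr_color, exposure], dim=-1))


# Usage: evaluate LDR color for N Gaussians at a chosen exposure time.
n_gaussians = 1024
hdr_color = torch.rand(n_gaussians, 3) * 10.0               # unbounded HDR radiance
exposure = torch.full((n_gaussians, 1), 0.125).log()        # user-input exposure time
ldr_color = ToneMapperMLP()(hdr_color, exposure)            # (N, 3) in [0, 1]
# hdr_color and ldr_color would then feed the two parallel rasterization passes.
```

In this sketch, the HDR branch keeps the unbounded radiance for HDR view reconstruction, while the tone-mapped LDR branch is conditioned on exposure so that one point cloud can render LDR views at arbitrary user-specified exposure times.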