You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement

Low-Light Image Enhancement (LLIE) aims to restore detail and visual information in corrupted low-light images. Most existing methods learn the mapping function between low- and normal-light images with Deep Neural Networks (DNNs) in the sRGB and HSV color spaces. Nevertheless, enhancement involves amplifying image signals, and applying these color spaces to low-light images with a low signal-to-noise ratio can introduce sensitivity and instability into the enhancement process, resulting in color and brightness artifacts in the enhanced images. To alleviate this problem, we propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI). It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but also adapts to low-light images across different illumination ranges thanks to its trainable parameters. Further, we design a novel Color and Intensity Decoupling Network (CIDNet) with two branches dedicated to processing the decoupled image brightness and color in the HVI space. Within CIDNet, we introduce a Lightweight Cross-Attention (LCA) module to facilitate interaction between image structure and content information in both branches, while also suppressing noise in low-light images. Finally, we conduct 22 quantitative and qualitative experiments showing that the proposed CIDNet outperforms state-of-the-art methods on 11 datasets. The code is available at https://github.com/Fediory/HVI-CIDNet.
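
To make the decoupling idea concrete, below is a minimal, illustrative PyTorch sketch of an HVI-style transform. It is not the paper's actual transform: the intensity channel (taken here as max(R,G,B)), the hue/saturation mapping, and the fixed exponent k standing in for HVI's trainable parameters are all simplifying assumptions, and the function name rgb_to_hvi_sketch is hypothetical; the authors' trainable implementation is available at https://github.com/Fediory/HVI-CIDNet.

import math
import torch

def rgb_to_hvi_sketch(rgb: torch.Tensor, k: float = 1.0) -> torch.Tensor:
    """Map an RGB batch (B, 3, H, W) in [0, 1] to a sketch of (H, V, Intensity)."""
    r, g, b = rgb.unbind(dim=1)
    i_max = rgb.amax(dim=1)  # intensity channel, shape (B, H, W)
    i_min = rgb.amin(dim=1)
    eps = 1e-8
    delta = i_max - i_min
    # Hue and saturation as in the standard RGB -> HSV conversion.
    hue = torch.zeros_like(i_max)
    m_r = (i_max == r) & (delta > eps)
    m_g = (i_max == g) & (delta > eps)
    m_b = (i_max == b) & (delta > eps)
    hue[m_r] = (((g - b) / (delta + eps)) % 6.0)[m_r]
    hue[m_g] = ((b - r) / (delta + eps) + 2.0)[m_g]
    hue[m_b] = ((r - g) / (delta + eps) + 4.0)[m_b]
    hue = hue / 6.0  # normalized to [0, 1)
    sat = delta / (i_max + eps)
    # Collapse the color plane toward the origin for dark pixels, so that
    # near-black colors (where sRGB/HSV become unstable) map to nearby points;
    # the fixed exponent k is a stand-in for HVI's trainable parameterization.
    radius = sat * i_max.clamp(min=0.0) ** k
    h = radius * torch.cos(2.0 * math.pi * hue)  # "Horizontal" axis
    v = radius * torch.sin(2.0 * math.pi * hue)  # "Vertical" axis
    return torch.stack([h, v, i_max], dim=1)  # shape (B, 3, H, W)

# Example: two dim pixels with very different hues stay close in HVI.
x = torch.tensor([[[[0.02]], [[0.00]], [[0.00]]],   # dim red pixel
                  [[[0.00]], [[0.00]], [[0.02]]]])  # dim blue pixel
print(rgb_to_hvi_sketch(x))

Run on these two dim pixels of very different hue, the H/V coordinates land close together, illustrating why an intensity-scaled, decoupled color plane is less sensitive to the noise that dominates low-light signals than HSV, where the hue of near-black pixels can swing arbitrarily.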