LYT-NET: Lightweight YUV Transformer-based Network for Low-light Image Enhancement

This letter introduces LYT-Net, a novel lightweight transformer-based model for low-light image enhancement (LLIE). LYT-Net consists of several layers and detachable blocks, including our novel blocks--Channel-Wise Denoiser (CWD) and Multi-Stage Squeeze & Excite Fusion (MSEF)--along with the traditional Transformer block, Multi-Headed Self-Attention (MHSA). In our method, we adopt a dual-path approach, treating the chrominance channels U and V and the luminance channel Y as separate entities to help the model better handle illumination adjustment and corruption restoration. Our comprehensive evaluation on established LLIE datasets demonstrates that, despite its low complexity, our model outperforms recent LLIE methods. The source code and pre-trained models are available at https://github.com/albrateanu/LYT-Net
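
The dual-path idea can be illustrated with a minimal sketch: the input is converted to YUV, and the luminance channel Y and chrominance channels U, V are routed through separate branches before being fused back together. The branch contents below (plain convolutional stacks) and the class name `DualPathSketch` are illustrative placeholders, not the authors' CWD/MSEF/MHSA blocks; only the channel split reflects the abstract.

```python
# Minimal sketch of the dual-path YUV decomposition described in the abstract.
# Branch internals are placeholders, not the LYT-Net blocks themselves.
import torch
import torch.nn as nn


def rgb_to_yuv(rgb: torch.Tensor) -> torch.Tensor:
    """Convert an (N, 3, H, W) RGB tensor to YUV using BT.601 coefficients."""
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return torch.cat([y, u, v], dim=1)


class DualPathSketch(nn.Module):
    """Processes luminance (Y) and chrominance (U, V) in separate branches."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Luminance branch: stand-in for illumination adjustment on Y.
        self.lum_branch = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )
        # Chrominance branch: stand-in for denoising / corruption restoration on U, V.
        self.chroma_branch = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        yuv = rgb_to_yuv(rgb)
        y, uv = yuv[:, 0:1], yuv[:, 1:3]   # split luminance from chrominance
        y_out = self.lum_branch(y)         # enhance illumination via the Y path
        uv_out = self.chroma_branch(uv)    # restore color via the U/V path
        return torch.cat([y_out, uv_out], dim=1)  # fused YUV output


if __name__ == "__main__":
    model = DualPathSketch()
    out = model(torch.rand(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```

Keeping the two paths separate lets the luminance branch focus on brightness while the chrominance branch handles noise and color corruption independently, which is the motivation the abstract gives for the split; the actual block designs are described in the paper and repository linked above.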