Multispectral Fusion for Object Detection with Cyclic Fuse-and-Refine Blocks

Multispectral images (e.g. visible and infrared) may be particularly useful when detecting objects with the same model in different environments (e.g. day/night outdoor scenes). To effectively use the different spectra, the main technical challenge lies in the information fusion process. In this paper, we propose a new halfway feature fusion method for neural networks that leverages the complementarity/consistency balance existing in multispectral features by adding to the network architecture a dedicated module that cyclically fuses and refines each spectral feature. We evaluate the effectiveness of our fusion method on two challenging multispectral datasets for object detection. Our results show that implementing our Cyclic Fuse-and-Refine module in any network improves the performance on both datasets compared to other state-of-the-art multispectral object detection methods.
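
To make the fuse-and-refine idea concrete, the sketch below shows a minimal PyTorch-style module that alternates between fusing two spectral feature maps and refining each spectrum from the fused result. The specific choices here (average fusion, sigmoid gating with a residual connection, the `num_cycles` parameter, and the class name `CyclicFuseRefine`) are illustrative assumptions for exposition, not the exact design described in the paper.

```python
import torch
import torch.nn as nn


class CyclicFuseRefine(nn.Module):
    """Illustrative sketch of a cyclic fuse-and-refine style block.

    Assumption: fusion is a simple average and refinement is a gated
    residual update; the paper's actual block may differ.
    """

    def __init__(self, channels: int, num_cycles: int = 2):
        super().__init__()
        self.num_cycles = num_cycles
        # One gating convolution per spectrum, used to refine that
        # spectrum's features from the fused representation.
        self.gate_vis = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.gate_ir = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat_vis: torch.Tensor, feat_ir: torch.Tensor):
        for _ in range(self.num_cycles):
            # Fuse: average the visible and infrared feature maps.
            fused = 0.5 * (feat_vis + feat_ir)
            # Refine: gate each spectral feature with the fused one,
            # keeping a residual connection to the original feature.
            feat_vis = feat_vis * torch.sigmoid(self.gate_vis(fused)) + feat_vis
            feat_ir = feat_ir * torch.sigmoid(self.gate_ir(fused)) + feat_ir
        # Final fused feature passed on to the detection head.
        fused = 0.5 * (feat_vis + feat_ir)
        return fused, feat_vis, feat_ir


if __name__ == "__main__":
    # Toy usage: two 256-channel feature maps from visible and infrared branches.
    block = CyclicFuseRefine(channels=256, num_cycles=2)
    vis = torch.randn(1, 256, 32, 32)
    ir = torch.randn(1, 256, 32, 32)
    fused, vis_refined, ir_refined = block(vis, ir)
    print(fused.shape)  # torch.Size([1, 256, 32, 32])
```

In this sketch, each cycle lets the fused representation feed back into both spectral branches, which is one way to balance consistent and complementary information across spectra before the halfway-fused feature is consumed by the detector.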