HS3-Bench: A Benchmark and Strong Baseline for Hyperspectral Semantic Segmentation in Driving Scenarios

Semantic segmentation is an essential step for many vision applications in order to understand a scene and the objects within it. Recent progress in hyperspectral imaging technology enables its application in driving scenarios, and the hope is that the devices' perceptive abilities provide an advantage over RGB cameras. Even though some datasets exist, there is no standard benchmark available to systematically measure progress on this task and to evaluate the benefit of hyperspectral data. In this paper, we work towards closing this gap by providing the HyperSpectral Semantic Segmentation benchmark (HS3-Bench). It combines annotated hyperspectral images from three driving-scenario datasets and provides standardized metrics, implementations, and evaluation protocols. We use the benchmark to derive two strong baseline models that surpass the previous state-of-the-art performances, both with and without pre-training, on the individual datasets. Further, our results indicate that the existing learning-based methods benefit more from leveraging additional RGB training data than from leveraging the additional hyperspectral channels. This poses important questions for future research on hyperspectral imaging for semantic segmentation in driving scenarios. Code to run the benchmark and the strong baseline approaches is available at https://github.com/nickstheisen/hyperseg.