MonoLSS: Learnable Sample Selection For Monocular 3D Detection

In the field of autonomous driving, monocular 3D detection is a critical task that estimates the 3D properties (depth, dimension, and orientation) of objects in a single RGB image. Previous works have used features in a heuristic way to learn 3D properties, without considering that inappropriate features could have adverse effects. In this paper, sample selection is introduced: only suitable samples should be trained to regress the 3D properties. To select samples adaptively, we propose a Learnable Sample Selection (LSS) module, which is based on Gumbel-Softmax and a relative-distance sample divider. The LSS module works under a warm-up strategy, which improves training stability. Additionally, since the LSS module dedicated to 3D-property sample selection relies on object-level features, we further develop a data augmentation method named MixUp3D to enrich 3D-property samples; it conforms to imaging principles without introducing ambiguity. As two orthogonal methods, the LSS module and MixUp3D can be used independently or in combination. Extensive experiments show that their combined use yields synergistic effects, with improvements exceeding the mere sum of their individual contributions. Leveraging the LSS module and MixUp3D, without any extra data, our method, named MonoLSS, ranks 1st in all three categories (Car, Cyclist, and Pedestrian) on the KITTI 3D object detection benchmark, and achieves competitive results on both the Waymo dataset and the KITTI-nuScenes cross-dataset evaluation. The code is included in the supplementary material and will be released to facilitate related academic and industrial studies.
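The abstract mentions that the LSS module builds on Gumbel-Softmax to select samples differentiably. As background only (not the authors' implementation, whose exact logits, temperature schedule, and sample divider are described in the paper body), the standard Gumbel-Softmax reparameterization can be sketched in plain Python: Gumbel(0, 1) noise is added to the logits, and a temperature-scaled softmax produces soft, differentiable selection weights.

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    """Illustrative Gumbel-Softmax sampling (Jang et al. / Maddison et al.).

    logits: per-sample scores (e.g., one per candidate 3D-property sample).
    tau:    temperature; lower values push the output toward one-hot.
    Returns soft selection weights that sum to 1.
    """
    # Gumbel(0, 1) noise via inverse transform: g = -log(-log(u)), u ~ U(0, 1)
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    scores = [(l + g) / tau for l, g in zip(logits, gumbels)]
    # Numerically stable softmax over the perturbed, scaled scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

weights = gumbel_softmax([2.0, 0.5, -1.0], tau=0.5)
```

In a learnable sample-selection setting, such weights let the gradient flow through the (stochastic) choice of which samples contribute to the 3D-property regression loss; at low temperature the weights approach a hard one-hot selection.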