MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection

Zhang, Renrui ; Qiu, Han ; Wang, Tai ; Guo, Ziyu ; Tang, Yiwen ; Xu, Xuanzhuo ; Cui, Ziteng ; Qiao, Yu ; Gao, Peng ; Li, Hongsheng
Abstract

Monocular 3D object detection has long been a challenging task in autonomous driving. Most existing methods follow conventional 2D detectors to first localize object centers and then predict 3D attributes from neighboring features. However, using only local visual features is insufficient to understand scene-level 3D spatial structures and ignores long-range inter-object depth relations. In this paper, we introduce the first DETR framework for Monocular DEtection with a depth-guided TRansformer, named MonoDETR. We modify the vanilla transformer to be depth-aware and guide the whole detection process by contextual depth cues. Specifically, concurrent to the visual encoder that captures object appearances, we predict a foreground depth map and specialize a depth encoder to extract non-local depth embeddings. Then, we formulate 3D object candidates as learnable queries and propose a depth-guided decoder to conduct object-scene depth interactions. In this way, each object query estimates its 3D attributes adaptively from depth-guided regions on the image and is no longer constrained to local visual features. On the KITTI benchmark with monocular images as input, MonoDETR achieves state-of-the-art performance and requires no extra dense depth annotations. Besides, our depth-guided modules can be plugged in to enhance multi-view 3D object detectors on the nuScenes dataset, demonstrating superior generalization capacity. Code is available at https://github.com/ZrrSkywalker/MonoDETR.
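
To make the depth-guided decoding idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the module name, dimensions, and the exact ordering of depth and visual cross-attention are assumptions for illustration. It shows learnable object queries first attending to non-local depth embeddings and then aggregating visual features from the depth-guided context, as the abstract describes.

```python
import torch
import torch.nn as nn


class DepthGuidedDecoderLayer(nn.Module):
    """Sketch of a decoder layer where object queries cross-attend to
    depth embeddings and visual features before regressing 3D attributes."""

    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Cross-attention to non-local depth embeddings from the depth encoder.
        self.depth_cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Cross-attention to appearance features from the visual encoder.
        self.visual_cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])

    def forward(self, queries, depth_embed, visual_embed):
        # queries:      (B, num_queries, d_model) learnable 3D object candidates
        # depth_embed:  (B, HW, d_model) flattened depth-encoder output
        # visual_embed: (B, HW, d_model) flattened visual-encoder output
        q = self.norms[0](queries + self.self_attn(queries, queries, queries)[0])
        # Depth-guided step: queries first gather scene-level depth cues ...
        q = self.norms[1](q + self.depth_cross_attn(q, depth_embed, depth_embed)[0])
        # ... then aggregate appearance features from the depth-guided regions.
        q = self.norms[2](q + self.visual_cross_attn(q, visual_embed, visual_embed)[0])
        return self.norms[3](q + self.ffn(q))


if __name__ == "__main__":
    layer = DepthGuidedDecoderLayer()
    queries = torch.randn(2, 50, 256)        # 50 object queries per image
    depth_embed = torch.randn(2, 1200, 256)  # e.g. a 30x40 feature map, flattened
    visual_embed = torch.randn(2, 1200, 256)
    out = layer(queries, depth_embed, visual_embed)
    print(out.shape)  # torch.Size([2, 50, 256])
```

In the full model these refined queries would feed prediction heads for class, 2D/3D box, depth, and orientation; refer to the linked repository for the actual architecture and training details.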