VAD: Vectorized Scene Representation for Efficient Autonomous Driving

Jiang, Bo; Chen, Shaoyu; Xu, Qing; Liao, Bencheng; Chen, Jiajie; Zhou, Helong; Zhang, Qian; Liu, Wenyu; Huang, Chang; Wang, Xinggang
Abstract

Autonomous driving requires a comprehensive understanding of the surrounding environment for reliable trajectory planning. Previous works rely on dense rasterized scene representations (e.g., agent occupancy and semantic map) to perform planning, which is computationally intensive and misses instance-level structure information. In this paper, we propose VAD, an end-to-end vectorized paradigm for autonomous driving, which models the driving scene as a fully vectorized representation. The proposed vectorized paradigm has two significant advantages. On one hand, VAD exploits the vectorized agent motion and map elements as explicit instance-level planning constraints, which effectively improves planning safety. On the other hand, VAD runs much faster than previous end-to-end planning methods by getting rid of the computation-intensive rasterized representation and hand-designed post-processing steps. VAD achieves state-of-the-art end-to-end planning performance on the nuScenes dataset, outperforming the previous best method by a large margin. Our base model, VAD-Base, greatly reduces the average collision rate by 29.0% and runs 2.5x faster. Besides, a lightweight variant, VAD-Tiny, greatly improves the inference speed (up to 9.3x) while achieving comparable planning performance. We believe the excellent performance and high efficiency of VAD are critical for the real-world deployment of an autonomous driving system. Code and models are available at https://github.com/hustvl/VAD to facilitate future research.
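To make the idea of "vectorized map elements as explicit instance-level planning constraints" concrete, here is a minimal, hypothetical sketch (not VAD's actual implementation): a map boundary is stored as a polyline of 2D vertices, and a planned ego trajectory is checked against each boundary instance by computing point-to-segment distances. The function names, the `safe_margin` parameter, and the example coordinates are all illustrative assumptions.

```python
import numpy as np

def point_to_polyline_distance(point, polyline):
    """Minimum Euclidean distance from a 2D point to a polyline
    given as an (N, 2) array of vertices."""
    dists = []
    for a, b in zip(polyline[:-1], polyline[1:]):
        ab = b - a
        # Project the point onto segment a-b, clamped to the segment.
        t = np.clip(np.dot(point - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        dists.append(np.linalg.norm(point - (a + t * ab)))
    return min(dists)

def violates_boundary(trajectory, boundaries, safe_margin=0.5):
    """Instance-level constraint check (illustrative): True if any planned
    waypoint comes within `safe_margin` meters of any boundary polyline."""
    return any(
        point_to_polyline_distance(wp, polyline) < safe_margin
        for polyline in boundaries
        for wp in trajectory
    )

# Hypothetical planned ego waypoints and one lane-boundary instance
# running parallel to the trajectory, 2 m to the left.
trajectory = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.3]])
boundary = [np.array([[0.0, 2.0], [3.0, 2.0]])]
print(violates_boundary(trajectory, boundary))  # -> False (closest gap ~1.7 m)
```

Because each boundary is a separate polyline instance, the constraint can be evaluated per map element, which is the structural information a rasterized occupancy grid discards.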