Bench2Drive: Towards Multi-Ability Benchmarking of Closed-Loop End-To-End Autonomous Driving

In an era marked by the rapid scaling of foundation models, autonomous driving technologies are approaching a transformative threshold where end-to-end autonomous driving (E2E-AD) emerges due to its potential to scale up in a data-driven manner. However, existing E2E-AD methods are mostly evaluated in an open-loop log-replay manner with L2 errors and collision rate as metrics (e.g., in nuScenes), which cannot fully reflect the driving performance of algorithms, as recently acknowledged in the community. E2E-AD methods that are evaluated under closed-loop protocols are tested on fixed routes (e.g., Town05Long and Longest6 in CARLA) with the driving score as the metric, which is known for high variance due to the unsmoothed metric function and the large randomness of long routes. Besides, these methods usually collect their own data for training, which makes fair algorithm-level comparison infeasible.

To fulfill the paramount need for comprehensive, realistic, and fair testing environments for Full Self-Driving (FSD), we present Bench2Drive, the first benchmark for evaluating the multiple abilities of E2E-AD systems in a closed-loop manner. Bench2Drive's official training data consists of 2 million fully annotated frames, collected from 13,638 short clips uniformly distributed over 44 interactive scenarios (cut-in, overtaking, detour, etc.), 23 weathers (sunny, foggy, rainy, etc.), and 12 towns (urban, village, university, etc.) in CARLA v2. Its evaluation protocol requires E2E-AD models to pass the 44 interactive scenarios under different locations and weathers, which sum to 220 routes, and thus provides a comprehensive and disentangled assessment of their driving capability in different situations. We implement state-of-the-art E2E-AD models and evaluate them in Bench2Drive, providing insights regarding the current status and future directions.