Lifting Multi-View Detection and Tracking to the Bird's Eye View

Taking advantage of multi-view aggregation presents a promising solution to tackle challenges such as occlusion and missed detections in multi-object tracking and detection. Recent advancements in multi-view detection and 3D object recognition have significantly improved performance by strategically projecting all views onto the ground plane and conducting detection analysis from a Bird's Eye View. In this paper, we compare modern lifting methods, both parameter-free and parameterized, for multi-view aggregation. Additionally, we present an architecture that aggregates the features of multiple time steps to learn robust detection and combines appearance- and motion-based cues for tracking. Most current tracking approaches focus on either pedestrians or vehicles. In our work, we combine both branches and add new challenges to multi-view detection with cross-scene setups. Our method generalizes to three public datasets across two domains: (1) pedestrian: Wildtrack and MultiviewX, and (2) roadside perception: Synthehicle, achieving state-of-the-art performance in detection and tracking. Code: https://github.com/tteepe/TrackTacular
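
The abstract mentions parameter-free lifting that projects all views onto the ground plane before detecting in the Bird's Eye View. The sketch below is not the paper's implementation; it is a minimal, hedged illustration of such a projection, assuming a flat ground plane (z = 0), a per-view 3x4 projection matrix, and hypothetical parameters (`resolution`, `img_size`, the function name `lift_to_bev`).

```python
import torch
import torch.nn.functional as F

def lift_to_bev(feat, proj, bev_x, bev_y, resolution=0.1, img_size=(480, 640)):
    """Project one camera's feature map onto a ground-plane (z=0) BEV grid.

    feat: (C, Hf, Wf) feature map from one view.
    proj: (3, 4) projection matrix (intrinsics @ extrinsics) mapping world
          coordinates to pixel coordinates of an image of size img_size.
    bev_x, bev_y: number of BEV cells along x and y (assumed grid size).
    resolution: metres per BEV cell (assumed value).
    """
    C, Hf, Wf = feat.shape
    H, W = img_size

    # World coordinates of every BEV cell centre on the ground plane (z = 0).
    xs = torch.arange(bev_x, dtype=torch.float32) * resolution
    ys = torch.arange(bev_y, dtype=torch.float32) * resolution
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")                 # (bev_y, bev_x)
    ones = torch.ones_like(gx)
    world = torch.stack([gx, gy, torch.zeros_like(gx), ones], -1)  # (bev_y, bev_x, 4)

    # Perspective projection of each ground-plane point into the image.
    pix = world.reshape(-1, 4) @ proj.T                            # (N, 3)
    uv = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)                  # pixel coordinates

    # Normalise to [-1, 1] for grid_sample; the feature map is assumed to
    # cover the same field of view as the original image.
    u = uv[:, 0] / (W - 1) * 2 - 1
    v = uv[:, 1] / (H - 1) * 2 - 1
    grid = torch.stack([u, v], -1).reshape(1, bev_y, bev_x, 2)

    # Sample features at the projected locations; cells outside the view or
    # behind the camera receive zeros.
    valid = (pix[:, 2] > 0).float().reshape(1, 1, bev_y, bev_x)
    bev = F.grid_sample(feat[None], grid, align_corners=True, padding_mode="zeros")
    return bev[0] * valid[0]

# Multi-view aggregation could then be a simple mean over the lifted maps:
# bev = torch.stack([lift_to_bev(f, P, 120, 360) for f, P in zip(feats, projs)]).mean(0)
```

In this parameter-free variant the projection is fixed by camera calibration; a parameterized lifting method would instead learn the view-to-BEV mapping (e.g., with depth or attention), which is one of the alternatives the paper compares.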