
Number it: Temporal Grounding Videos like Flipping Manga

Yongliang Wu, Xinting Hu, Yuyang Sun, Yizhou Zhou, Wenbo Zhu, Fengyun Rao, Bernt Schiele, Xu Yang
Abstract

Video Large Language Models (Vid-LLMs) have made remarkable advancements in comprehending video content for QA dialogue. However, they struggle to extend this visual understanding to tasks requiring precise temporal localization, known as Video Temporal Grounding (VTG). To address this gap, we introduce Number-Prompt (NumPro), a novel method that empowers Vid-LLMs to bridge visual comprehension with temporal grounding by adding unique numerical identifiers to each video frame. Treating a video as a sequence of numbered frame images, NumPro transforms VTG into an intuitive process: flipping through manga panels in sequence. This allows Vid-LLMs to "read" event timelines, accurately linking visual content with corresponding temporal information. Our experiments demonstrate that NumPro significantly boosts the VTG performance of top-tier Vid-LLMs without additional computational cost. Furthermore, fine-tuning on a NumPro-enhanced dataset defines a new state of the art for VTG, surpassing previous top-performing methods by up to 6.9% in mIoU for moment retrieval and 8.5% in mAP for highlight detection. The code will be available at https://github.com/yongliang-wu/NumPro.
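The core idea of NumPro, stamping a unique numerical identifier onto each frame before the video is passed to the Vid-LLM, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the font, color, position, and size of the overlaid numbers are assumptions here, and the helper name `number_frames` is hypothetical.

```python
# Minimal sketch of NumPro-style frame numbering.
# Assumption: frames are PIL RGB images; the paper's exact rendering
# (font, size, placement) may differ.
from PIL import Image, ImageDraw

def number_frames(frames):
    """Overlay each frame's index as a visible numerical identifier."""
    numbered = []
    for idx, frame in enumerate(frames):
        frame = frame.copy()  # leave the originals untouched
        draw = ImageDraw.Draw(frame)
        w, h = frame.size
        # Draw the frame index near the bottom-right corner (hypothetical placement).
        draw.text((w - 40, h - 30), str(idx), fill=(255, 0, 0))
        numbered.append(frame)
    return numbered

# Usage: ten dummy black 320x240 frames standing in for a decoded video.
frames = [Image.new("RGB", (320, 240), (0, 0, 0)) for _ in range(10)]
out = number_frames(frames)
```

The numbered frames would then be fed to the Vid-LLM as usual, letting the model report start/end moments by citing the numbers it "reads" off the frames.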