
VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs

Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, Lidong Bing
Abstract

In this paper, we present VideoLLaMA 2, a set of Video Large Language Models (Video-LLMs) designed to enhance spatial-temporal modeling and audio understanding in video- and audio-oriented tasks. Building upon its predecessor, VideoLLaMA 2 incorporates a tailor-made Spatial-Temporal Convolution (STC) connector, which effectively captures the intricate spatial and temporal dynamics of video data. Additionally, we integrate an Audio Branch into the model through joint training, thereby enriching its multimodal understanding capabilities by seamlessly incorporating audio cues. Comprehensive evaluations on multiple-choice video question answering (MC-VQA), open-ended video question answering (OE-VQA), and video captioning (VC) tasks demonstrate that VideoLLaMA 2 consistently achieves competitive results among open-source models and even comes close to some proprietary models on several benchmarks. Furthermore, VideoLLaMA 2 exhibits reasonable improvements over existing models on audio-only and audio-video question-answering (AQA & OE-AVQA) benchmarks. These advancements underline VideoLLaMA 2's strong multimodal comprehension, setting a new standard for intelligent video analysis systems. All models are public to facilitate further research.
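To make the role of the STC connector concrete, the sketch below shows one plausible way such a module could sit between a vision encoder and an LLM: per-frame patch features are downsampled jointly in time and space with a 3D convolution and then projected into the LLM embedding space. This is a minimal illustration under assumed layer choices and dimensions (`STCConnectorSketch`, `vision_dim`, `llm_dim`, `ds`), not the paper's exact architecture or configuration.

```python
import torch
import torch.nn as nn


class STCConnectorSketch(nn.Module):
    """Hypothetical sketch of a Spatial-Temporal Convolution (STC) connector.

    Downsamples per-frame vision-encoder features jointly in time and space
    with a 3D convolution, then projects the resulting tokens into the LLM
    embedding space. Layer choices and sizes are illustrative assumptions.
    """

    def __init__(self, vision_dim=1024, llm_dim=4096, ds=2):
        super().__init__()
        # Joint spatial-temporal downsampling with stride `ds` in T, H, W.
        self.stc = nn.Conv3d(vision_dim, vision_dim, kernel_size=ds, stride=ds)
        # Project pooled tokens into the LLM's embedding dimension.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, frame_feats):
        # frame_feats: (B, T, H, W, C) patch features from the vision encoder.
        x = frame_feats.permute(0, 4, 1, 2, 3)   # -> (B, C, T, H, W)
        x = self.stc(x)                          # downsample in T, H, W
        x = x.flatten(2).transpose(1, 2)         # -> (B, T'*H'*W', C)
        return self.proj(x)                      # video tokens for the LLM


# Example: 8 frames of 24x24 patch features reduced to 4x12x12 tokens.
tokens = STCConnectorSketch()(torch.randn(1, 8, 24, 24, 1024))
print(tokens.shape)  # torch.Size([1, 576, 4096])
```

The key design point the abstract highlights is that the connector compresses video features along both the temporal and spatial axes before they reach the language model, rather than treating frames independently.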
