LLaVA-MR: Large Language-and-Vision Assistant for Video Moment Retrieval

Weiheng Lu, Jian Li, An Yu, Ming-Ching Chang, Shengpeng Ji, Min Xia
Abstract

Multimodal Large Language Models (MLLMs) are widely used for visual perception, understanding, and reasoning. However, long-video processing and precise moment retrieval remain challenging due to LLMs' limited context size and coarse frame extraction. We propose the Large Language-and-Vision Assistant for Moment Retrieval (LLaVA-MR), which enables accurate moment retrieval and contextual grounding in videos using MLLMs. LLaVA-MR combines Dense Frame and Time Encoding (DFTE) for spatial-temporal feature extraction, Informative Frame Selection (IFS) for capturing brief visual and motion patterns, and Dynamic Token Compression (DTC) to manage LLM context limitations. Evaluations on benchmarks like Charades-STA and QVHighlights demonstrate that LLaVA-MR outperforms 11 state-of-the-art methods, achieving improvements of 1.82% and 1.29% on QVHighlights evaluation metrics. Our implementation will be open-sourced upon acceptance.
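The abstract names the three components (DFTE, IFS, DTC) only at a high level. The sketch below is one illustrative way such a pipeline could be wired together, not the authors' implementation: the function names, the motion-based frame-scoring heuristic, the pooling-based compression, and all shapes are assumptions made for illustration.

```python
# Hypothetical sketch of the DFTE -> IFS -> DTC pipeline described in the abstract.
# All heuristics, names, and dimensions are illustrative assumptions.
import numpy as np

def dense_frame_time_encoding(frames: np.ndarray, fps: float) -> np.ndarray:
    """DFTE (assumed form): append a normalized timestamp to each frame feature."""
    n = frames.shape[0]
    t = np.arange(n, dtype=np.float32) / fps
    t = (t / max(t.max(), 1e-6)).reshape(-1, 1)        # normalize timestamps to [0, 1]
    return np.concatenate([frames, t], axis=1)         # (n, feat_dim + 1)

def informative_frame_selection(feats: np.ndarray, keep: int) -> np.ndarray:
    """IFS (assumed heuristic): keep frames whose features change most between neighbors."""
    diffs = np.linalg.norm(np.diff(feats, axis=0), axis=1)   # crude motion/novelty proxy
    scores = np.concatenate([[diffs[0]], diffs])             # pad so every frame has a score
    idx = np.sort(np.argsort(scores)[-keep:])                # top-k frames, kept in temporal order
    return feats[idx]

def dynamic_token_compression(feats: np.ndarray, max_tokens: int) -> np.ndarray:
    """DTC (assumed form): average-pool frame tokens down to a fixed LLM context budget."""
    if feats.shape[0] <= max_tokens:
        return feats
    chunks = np.array_split(feats, max_tokens, axis=0)
    return np.stack([c.mean(axis=0) for c in chunks])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video_feats = rng.normal(size=(300, 512)).astype(np.float32)  # toy features for 300 frames
    encoded = dense_frame_time_encoding(video_feats, fps=30.0)    # DFTE
    selected = informative_frame_selection(encoded, keep=64)      # IFS
    tokens = dynamic_token_compression(selected, max_tokens=32)   # DTC
    print(tokens.shape)  # (32, 513): a token budget the LLM context can hold
```

The point of the sketch is the ordering of concerns the abstract lays out: dense per-frame features are first tagged with time, only the most informative frames are kept, and the surviving tokens are compressed to fit the LLM's context window.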
