
MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding

Bo He, Hengduo Li, Young Kyun Jang, Menglin Jia, Xuefei Cao, Ashish Shah, Abhinav Shrivastava, Ser-Nam Lim

Abstract

With the success of large language models (LLMs), integrating the vision model into LLMs to build vision-language foundation models has gained much more interest recently. However, existing LLM-based large multimodal models (e.g., Video-LLaMA, VideoChat) can only take in a limited number of frames for short video understanding. In this study, we mainly focus on designing an efficient and effective model for long-term video understanding. Instead of trying to process more frames simultaneously like most existing work, we propose to process videos in an online manner and store past video information in a memory bank. This allows our model to reference historical video content for long-term analysis without exceeding LLMs' context length constraints or GPU memory limits. Our memory bank can be seamlessly integrated into current multimodal LLMs in an off-the-shelf manner. We conduct extensive experiments on various video understanding tasks, such as long-video understanding, video question answering, and video captioning, and our model achieves state-of-the-art performance across multiple datasets. Code is available at https://boheumd.github.io/MA-LMM/.
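
The abstract's core idea is to process frames online and keep only a bounded visual history in a memory bank. The sketch below illustrates that idea in minimal form: frames are encoded one at a time, features accumulate in a fixed-capacity bank, and when the bank overflows, the two most similar adjacent entries are merged so the downstream LLM never sees more than `capacity` visual tokens. The `MemoryBank` class, the `capacity` parameter, and the merge-by-averaging rule are illustrative assumptions for this sketch, not necessarily the paper's exact mechanism; see the full paper and code for the actual design.

```python
# Minimal sketch of an online memory bank for long-video features.
# Assumption: overflow is handled by merging the most similar adjacent
# pair of stored features, which keeps temporal order and a fixed size.
import torch


class MemoryBank:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.features: list[torch.Tensor] = []  # one feature vector per stored step

    def add(self, feat: torch.Tensor) -> None:
        """Append a new frame feature; compress if the bank overflows."""
        self.features.append(feat)
        if len(self.features) > self.capacity:
            self._compress()

    def _compress(self) -> None:
        # Merge the most similar adjacent pair by averaging, so the bank
        # stays at `capacity` entries while preserving temporal order.
        sims = torch.stack([
            torch.cosine_similarity(a, b, dim=0)
            for a, b in zip(self.features[:-1], self.features[1:])
        ])
        i = int(sims.argmax())
        merged = (self.features[i] + self.features[i + 1]) / 2
        self.features[i : i + 2] = [merged]

    def as_tensor(self) -> torch.Tensor:
        return torch.stack(self.features)  # shape: (<= capacity, dim)


# Usage: stream 1000 "frames" through a bank that holds at most 64 entries.
bank = MemoryBank(capacity=64)
for _ in range(1000):
    frame_feature = torch.randn(256)  # stand-in for a visual encoder output
    bank.add(frame_feature)
print(bank.as_tensor().shape)  # torch.Size([64, 256])
```

Because the bank is bounded, the cost of attending over visual history stays constant regardless of video length, which is what lets the model stay within the LLM's context window and GPU memory budget.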

