Understanding Long Videos with Multimodal Language Models

Kanchana Ranasinghe, Xiang Li, Kumara Kahatapitiya, Michael S. Ryoo

Abstract

Large Language Models (LLMs) have allowed recent LLM-based approaches to achieve excellent performance on long-video understanding benchmarks. We investigate how the extensive world knowledge and strong reasoning skills of the underlying LLMs influence this strong performance. Surprisingly, we discover that LLM-based approaches can yield good accuracy on long-video tasks with limited video information, sometimes even with no video-specific information. Building on this, we explore injecting video-specific information into an LLM-based framework. We utilize off-the-shelf vision tools to extract three object-centric information modalities from videos, and then leverage natural language as a medium for fusing this information. Our resulting Multimodal Video Understanding (MVU) framework demonstrates state-of-the-art performance across multiple video understanding benchmarks. Strong performance on robotics-domain tasks also establishes its generality. Code: https://github.com/kahnchana/mvu
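
As a rough illustration of the fusion step described in the abstract, the sketch below turns per-frame outputs of off-the-shelf vision tools into natural language and hands the result to an LLM. The helper functions (`extract_objects`, `describe_scene`, `query_llm`) are hypothetical placeholders standing in for real vision models and an LLM API; this is a minimal sketch of the idea, not the released MVU implementation.

```python
# Sketch of language-as-fusion for long-video question answering.
# Placeholders below would wrap off-the-shelf vision tools and an LLM.

def extract_objects(frame):
    # Placeholder: would run an off-the-shelf object detector on the frame.
    return ["person", "cup"]

def describe_scene(frame):
    # Placeholder: would run an off-the-shelf image captioner on the frame.
    return "a person holding a cup in a kitchen"

def query_llm(prompt: str) -> str:
    # Placeholder: would send the prompt to any chat-style LLM and return its reply.
    return "The person is making coffee."

def answer_video_question(frames, question: str) -> str:
    # Express each sampled frame's visual evidence as natural language.
    lines = []
    for t, frame in enumerate(frames):
        objects = ", ".join(extract_objects(frame))
        caption = describe_scene(frame)
        lines.append(f"Frame {t}: {caption} (objects: {objects})")

    # Natural language is the fusion medium: concatenate the evidence
    # and let the LLM's world knowledge and reasoning do the rest.
    prompt = (
        "You are answering a question about a long video.\n"
        "Frame-level evidence:\n" + "\n".join(lines) +
        f"\n\nQuestion: {question}\nAnswer:"
    )
    return query_llm(prompt)

if __name__ == "__main__":
    print(answer_video_question(frames=[None, None],
                                question="What is the person doing?"))
```

Dropping the frame-level evidence and sending only the question to the LLM corresponds to the "no video-specific information" setting probed in the abstract.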
