InternVideo2: Scaling Foundation Models for Multimodal Video Understanding

We introduce InternVideo2, a new family of video foundation models (ViFM) that achieve state-of-the-art results in video recognition, video-text tasks, and video-centric dialogue. Our core design is a progressive training approach that unifies masked video modeling, cross-modal contrastive learning, and next token prediction, scaling the video encoder size up to 6B parameters. At the data level, we prioritize spatiotemporal consistency by semantically segmenting videos and generating video-audio-speech captions, which improves the alignment between video and text. Through extensive experiments, we validate our designs and demonstrate superior performance on over 60 video and audio tasks. Notably, our model outperforms others on various video-related dialogue and long video understanding benchmarks, highlighting its ability to reason about and comprehend longer contexts. Code and models are available at https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2/.