Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams

Benefiting from advances in large language models and cross-modal alignment, existing multi-modal video understanding methods have achieved prominent performance in offline scenarios. However, online video streams, one of the most common media forms in the real world, have seldom received attention. Compared to offline videos, the 'dynamic' nature of online video streams poses challenges for the direct application of existing models and introduces new problems, such as storing extremely long-term information and handling the interaction between continuous visual content and 'asynchronous' user questions. Therefore, in this paper we present Flash-VStream, a video-language model that simulates the human memory mechanism. Our model is able to process extremely long video streams in real time and respond to user queries simultaneously. Compared to existing models, Flash-VStream achieves significant reductions in inference latency and VRAM consumption, both of which are critical for understanding online streaming video. In addition, given that existing video understanding benchmarks predominantly concentrate on offline scenarios, we propose VStream-QA, a novel question answering benchmark specifically designed for online video stream understanding. Comparisons with popular existing methods on the proposed benchmark demonstrate the superiority of our method in this challenging setting. To verify the generalizability of our approach, we further evaluate it on existing video understanding benchmarks, where it achieves state-of-the-art performance in offline scenarios as well. All code, models, and datasets are available at https://invinciblewyq.github.io/vstream-page/
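
To make the streaming setup concrete, below is a minimal, hypothetical Python sketch of the producer-consumer pattern the abstract implies: a frame-encoding process continuously writes features into a bounded memory while a separate process answers 'asynchronous' user questions from the current memory state. The names (StreamingMemory, encode_frame, answer) and the simple sliding-window eviction policy are illustrative assumptions for exposition, not the paper's actual memory design, which the abstract does not specify.

```python
import threading
import queue
import time
from collections import deque

class StreamingMemory:
    """Fixed-capacity memory: retains features of the most recent frames,
    so memory (and VRAM, in a real model) stays bounded however long the
    stream runs. The eviction policy here is a plain sliding window, an
    assumption for illustration only."""
    def __init__(self, capacity: int = 64):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted on overflow
        self.lock = threading.Lock()

    def write(self, feature):
        with self.lock:
            self.buffer.append(feature)

    def read(self):
        with self.lock:
            return list(self.buffer)

def encode_frame(frame):
    # Placeholder for a visual encoder; here a "feature" is just the frame id.
    return frame

def answer(question, memory_snapshot):
    # Placeholder for the language model; reports the memory state it saw.
    return f"{question} -> answered from {len(memory_snapshot)} memorized features"

def frame_producer(memory, num_frames=200, fps=100):
    # Simulates the incoming video stream: encode and memorize each frame.
    for frame_id in range(num_frames):
        memory.write(encode_frame(frame_id))
        time.sleep(1.0 / fps)

def question_consumer(memory, questions):
    # Answers user questions whenever they arrive, independently of the stream.
    while True:
        q = questions.get()
        if q is None:  # sentinel: stop consuming
            break
        print(answer(q, memory.read()))

if __name__ == "__main__":
    memory = StreamingMemory(capacity=64)
    questions = queue.Queue()
    writer = threading.Thread(target=frame_producer, args=(memory,))
    reader = threading.Thread(target=question_consumer, args=(memory, questions))
    writer.start(); reader.start()
    time.sleep(0.5)
    questions.put("What happened recently?")  # question arrives mid-stream
    writer.join(); questions.put(None); reader.join()
```

The key property this sketch illustrates is the decoupling of the two processes: because questions are served from a fixed-size memory rather than from the full frame history, per-query latency and storage are independent of stream length, which is the constraint the abstract's latency and VRAM claims address.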