
A Simple Baseline for Streaming Video Understanding

Yujiao Shen Shulin Tian Jingkang Yang Ziwei Liu

Abstract

Recent streaming video understanding methods increasingly rely on complex memory mechanisms to handle long video streams. We challenge this trend with a simple finding: a sliding-window baseline that feeds only the last N frames to an off-the-shelf VLM already matches or surpasses published streaming models. We formalize this baseline as SimpleStream and evaluate it against 13 leading video LLM baselines operating in both offline and online settings, on OVO-Bench and StreamingBench. Despite its simplicity, SimpleStream delivers strong and consistent performance: using only four recent frames, it achieves an average accuracy of 67.7% on OVO-Bench and 80.59% on StreamingBench. Controlled ablations further show that the value of longer context is backbone-dependent rather than increasing uniformly with model scale, and they reveal a consistent perception-memory trade-off: adding more historical context may improve recall, but often degrades real-time perception. This suggests that stronger memory, retrieval, or compression modules should not be taken as evidence of progress unless they clearly outperform SimpleStream under the same protocol. Accordingly, we argue that future streaming benchmarks should separate recent-scene perception from long-term memory, enabling a clearer assessment of performance gains attributed to added complexity.

One-sentence Summary

Researchers from Nanyang Technological University introduce SIMPLESTREAM, a minimal baseline that feeds only recent frames to off-the-shelf VLMs, outperforming complex memory-centric models on OVO-Bench and StreamingBench while revealing a critical perception-memory trade-off.

Key Contributions

  • The paper introduces SIMPLESTREAM, a minimal streaming baseline that processes only the most recent N frames with an off-the-shelf VLM, eliminating the need for complex memory banks, retrieval systems, or compression modules.
  • Comprehensive evaluations on OVO-Bench and StreamingBench demonstrate that this simple recent-context approach achieves state-of-the-art performance while maintaining lower peak GPU memory usage and competitive latency compared to prior streaming methods.
  • Controlled ablation studies reveal that the benefit of longer context is backbone-dependent rather than uniform across model scales, and that adding historical context often improves memory recall at the expense of real-time perception.

Introduction

Streaming video understanding is critical for real-time applications where models must process continuous video feeds under strict causal and memory constraints. Prior research has increasingly relied on complex memory mechanisms, such as external banks, retrieval systems, or compression modules, based on the assumption that managing long-term history requires elaborate architectural designs. However, these sophisticated approaches often yield modest gains while introducing significant computational overhead and a trade-off where enhanced memory recall can degrade real-time scene perception. The authors introduce SIMPLESTREAM, a minimal baseline that feeds only the most recent N frames directly to an off-the-shelf VLM without additional memory or training. They demonstrate that this simple recency-based approach matches or surpasses complex streaming models on major benchmarks like OVO-Bench and StreamingBench, revealing that longer context benefits are backbone-dependent rather than universal and arguing for a new evaluation standard that separates perception from memory performance.

Method

The authors introduce SimpleStream as a deliberately simple baseline designed to isolate the capabilities of current off-the-shelf Vision Language Models (VLMs) using only recent visual context. Unlike prior streaming systems that incorporate mechanisms for managing long-range history, SimpleStream relies on a sliding window approach. Refer to the framework diagram below, which illustrates how the system processes a continuous video stream by selecting a "Recent N-frames window" centered around the current frame to feed into the Vision Language Model.

Let the video stream be represented as a sequence of frames, where f_i denotes the visual frame at time step i. Given a question q_t at time t, the method feeds the base VLM only the most recent N frames and the text query. This process is formalized as:

SIMPLESTREAM(t) = VLM({f_{t-N+1}, ..., f_t}, q_t)

By construction, SimpleStream omits additional memory mechanisms, meaning frames outside the sliding window are discarded. Consequently, per-query memory and computation remain bounded by N and do not grow with the stream length. The method introduces no architectural modification, memory module, or additional training; it functions strictly as an inference-time input policy applied to an off-the-shelf VLM.
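This inference-time policy can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' released implementation; `vlm_answer(frames, question)` is a hypothetical stand-in for any off-the-shelf VLM inference call.

```python
from collections import deque

def make_simplestream(vlm_answer, n_frames=4):
    """Sliding-window streaming policy: retain only the most recent
    n_frames; on each query, pass the window plus the question to a VLM.
    vlm_answer(frames, question) is a placeholder for any VLM call."""
    window = deque(maxlen=n_frames)  # frames older than N are discarded

    def on_frame(frame):
        window.append(frame)  # O(1); memory stays bounded by n_frames

    def on_query(question):
        # SIMPLESTREAM(t) = VLM({f_{t-N+1}, ..., f_t}, q_t)
        return vlm_answer(list(window), question)

    return on_frame, on_query
```

Because the deque caps its own length, per-query cost is fixed regardless of how long the stream has been running, which mirrors the bounded-memory property stated above.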

The architectural comparison below highlights how SimpleStream differs from other context management strategies. While alternative approaches utilize External Memory, Retrieval, Compression, or Latent Memory to handle long-term dependencies, SimpleStream bypasses these components entirely. It serves as a controlled reference baseline to determine how much streaming performance can be obtained from recent visual context alone while minimizing confounding effects from additional training or system-level engineering.

Experiment

  • Experiments on OVO-Bench and StreamingBench validate that SIMPLESTREAM, a minimalist approach using only a fixed recent frame window, outperforms complex streaming systems with dedicated memory banks or retrieval modules, particularly in real-time visual perception tasks.
  • Ablation studies on recency window size and model scale demonstrate that performance does not improve monotonically with longer context; while modest window expansions help, further increases often yield diminishing returns or degradation, indicating that more historical context is not universally beneficial.
  • Visual-RAG analysis reveals a distinct perception-memory trade-off where retrieving historical chunks improves episodic memory recall but consistently degrades real-time perception, suggesting that current memory injection techniques often corrupt the model's understanding of the present scene.
  • Efficiency evaluations confirm that SIMPLESTREAM maintains low latency and stable GPU memory usage regardless of stream length, proving that persistent historical state is not required for competitive streaming inference.
  • Overall conclusions indicate that current benchmarks heavily favor recent perception capabilities, and future progress requires methods that can leverage historical evidence without sacrificing the clarity of current-scene understanding.

