Context Forcing: Consistent Autoregressive Video Generation with Long Context

Shuo Chen Cong Wei Sun Sun Ping Nie Kai Zhou Ge Zhang Ming-Hsuan Yang Wenhu Chen

Abstract

Recent approaches to real-time long video generation typically adopt a streaming tuning strategy, attempting to train a long-context student model with a short-window (memoryless) teacher model. In such frameworks, the student performs long rollouts while the teacher can only provide supervision confined to a short window of roughly 5 seconds, resulting in a structural mismatch between student and teacher: because the teacher has no access to long-range history, it cannot properly guide the student on global temporal dependencies, which fundamentally limits the student's effective context length. To address this problem, we propose a novel framework called Context Forcing. Our method trains a long-context student with a long-context teacher, giving the teacher visibility into the entire generation history and thereby eliminating the mismatch in supervision. This enables robust training of models that maintain long-term consistency. Furthermore, to overcome the computational challenges of extremely long generations (e.g., 2 minutes), we introduce a context management system that converts the linearly growing context into a Slow-Fast Memory architecture, substantially reducing visual redundancy. Extensive experiments show that our method achieves an effective context length of more than 20 seconds, 2-10x longer than state-of-the-art methods such as LongLive and Infinite-RoPE. Leveraging this extended context, Context Forcing maintains superior consistency in long video generation and outperforms state-of-the-art baselines on a range of long-video evaluation metrics.

One-sentence Summary

Shuo Chen, Cong Wei, and colleagues from UC Merced and Tsinghua propose Context Forcing, a framework that uses long-context teachers to train students for 20s+ video generation, overcoming the forgetting-drifting trade-off via a Slow-Fast Memory architecture and outperforming LongLive and Infinite-RoPE in long-term consistency.

Key Contributions

  • We identify and resolve a critical student-teacher mismatch in long video generation, where short-context teachers fail to supervise long-context students on global temporal dependencies, by introducing Context Forcing—a framework that trains students using long-context teachers aware of full generation history.
  • To enable computationally efficient training for extreme durations (e.g., 2 minutes), we design a Slow-Fast Memory architecture that compresses linearly growing context by reducing visual redundancy, allowing stable training and inference with 20+ seconds of effective context.
  • Evaluated on long video benchmarks, Context Forcing achieves 2–10× longer usable context than state-of-the-art methods like LongLive and Infinite-RoPE, significantly improving long-term consistency and outperforming baselines on key temporal coherence metrics.

Introduction

The authors leverage causal video diffusion models to tackle the challenge of generating long, temporally consistent videos—critical for applications like digital storytelling and professional editing—where prior methods suffer from either forgetting past context or drifting due to error accumulation. Existing approaches rely on short-context teachers to train long-context students, creating a mismatch that limits learnable temporal dependencies and forces a trade-off between memory and stability. Their main contribution is Context Forcing, a framework that trains a long-context student using a long-context teacher, eliminating this mismatch and enabling robust generation over 20+ seconds via a Slow-Fast Memory architecture that compresses redundant visual information while preserving global coherence.

Method

The authors leverage a two-stage curriculum within a causal autoregressive framework to train a long-context video diffusion model capable of maintaining temporal consistency over extended durations. The overall objective is to minimize the global KL divergence between the student's induced distribution $p_\theta(X_{1:N})$ and the real data distribution $p_{\text{data}}(X_{1:N})$, where $N$ spans tens to hundreds of seconds. Direct optimization of this global objective is computationally infeasible, so the authors decompose it into local dynamics $\mathcal{L}_{\text{local}}$ and global continuation dynamics $\mathcal{L}_{\text{context}}$, enabling a tractable, staged training procedure.
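
For concreteness, one natural way to write this decomposition is a chain-rule split of the global KL divergence at frame $k$; this is a sketch of the standard identity, and the paper's exact weighting may differ:

$$
D_{\mathrm{KL}}\big(p_\theta(X_{1:N}) \,\|\, p_{\mathrm{data}}(X_{1:N})\big)
= \underbrace{D_{\mathrm{KL}}\big(p_\theta(X_{1:k}) \,\|\, p_{\mathrm{data}}(X_{1:k})\big)}_{\approx\,\mathcal{L}_{\text{local}}}
+ \underbrace{\mathbb{E}_{X_{1:k}\sim p_\theta}\Big[D_{\mathrm{KL}}\big(p_\theta(X_{k+1:N}\mid X_{1:k}) \,\|\, p_{\mathrm{data}}(X_{k+1:N}\mid X_{1:k})\big)\Big]}_{\approx\,\mathcal{L}_{\text{context}}}
$$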

In Stage 1, the student is warmed up by minimizing $\mathcal{L}_{\text{local}}$, which aligns the distribution of short video windows $X_{1:k}$ (typically 1–5 seconds) with a high-quality teacher distribution $p_T(X_{1:k})$. This is achieved via Distribution Matching Distillation (DMD), where gradients are estimated using score matching between the student and teacher models on diffused versions of generated frames. This stage ensures the student generates high-fidelity short sequences, providing stable context for the subsequent stage.
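
As a reference point, the standard DMD gradient estimator has roughly the following form (a sketch of the generic formulation; the authors' exact parameterization and weighting may differ), where $G_\theta$ is the student generator and $s_{\text{fake}}$, $s_{\text{real}}$ are the critic and teacher scores:

$$
\nabla_\theta \mathcal{L}_{\text{local}}
\approx \mathbb{E}_{t,\,\epsilon}\Big[\big(s_{\text{fake}}(x_t, t) - s_{\text{real}}(x_t, t)\big)\,\frac{\partial x}{\partial \theta}\Big],
\qquad x = G_\theta(\cdot),\quad x_t = \alpha_t\, x + \sigma_t\, \epsilon
$$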

Stage 2 targets $\mathcal{L}_{\text{context}}$, which enforces alignment between the student's continuation $p_\theta(X_{k+1:N} \mid X_{1:k})$ and the true data continuation $p_{\text{data}}(X_{k+1:N} \mid X_{1:k})$. Since the true data continuation is inaccessible for arbitrary student-generated contexts, the authors introduce a pretrained Context Teacher $T$ that provides a reliable proxy $p_T(X_{k+1:N} \mid X_{1:k})$. This is justified under two assumptions: (1) the teacher remains accurate when conditioned on contexts near the real data manifold, and (2) Stage 1 ensures the student's rollouts remain within this reliable region. The resulting Contextual DMD (CDMD) objective is optimized using a conditional score-based gradient estimator, where both student and teacher scores are computed on the same student-generated context, mitigating exposure bias.
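
The following minimal PyTorch-style sketch illustrates what one CDMD update could look like under these assumptions. The module names (`student`, `teacher_score`, `fake_score`) and their signatures are hypothetical, not the authors' code, and the critic's own denoising update is omitted:

```python
import torch

def cdmd_step(student, teacher_score, fake_score, context, prompt,
              num_new_frames, optimizer, num_timesteps=1000):
    """One CDMD update: both scores are conditioned on the SAME student-generated context."""
    # 1. The student continues its own rollout (the context was produced by earlier student steps).
    frames = student.generate(context, prompt, num_frames=num_new_frames)  # (B, F, C, H, W)

    # 2. Diffuse the continuation to a random timestep.
    t = torch.randint(0, num_timesteps, (frames.shape[0],), device=frames.device)
    noise = torch.randn_like(frames)
    noisy = student.add_noise(frames, noise, t)

    # 3. The frozen context teacher gives the "real" score; the trainable critic gives the "fake" one.
    #    Both see the identical student-generated context, mitigating exposure bias.
    with torch.no_grad():
        s_real = teacher_score(noisy, t, context, prompt)
        s_fake = fake_score(noisy, t, context, prompt)

    # 4. DMD-style pseudo-target: moving the samples against (s_fake - s_real)
    #    pushes the student's continuation distribution toward the teacher's.
    grad = s_fake - s_real
    target = (frames - grad).detach()
    loss = 0.5 * torch.mean((frames - target) ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```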

To handle the computational burden of long contexts, the authors design a Context Management System that organizes the KV cache into three functional components: an Attention Sink, Slow Memory, and Fast Memory. The Attention Sink retains initial tokens to stabilize attention, while Fast Memory acts as a rolling FIFO queue for immediate local context. Slow Memory stores high-entropy keyframes selected via a surprisal-based consolidation policy: a new token $x_t$ is promoted to Slow Memory if the similarity between its key vector $k_t$ and the preceding key $k_{t-1}$ falls below a threshold $\tau$, ensuring only salient temporal transitions are retained. This architecture enables efficient context retention without linear growth in memory or attention cost.
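
A minimal sketch of such a cache is given below, assuming cosine similarity between consecutive key vectors as the surprisal test; the component sizes, class name, and helper methods are illustrative assumptions, not the authors' implementation:

```python
from collections import deque
import torch
import torch.nn.functional as F

class SlowFastKVCache:
    def __init__(self, sink_size=4, fast_size=32, slow_size=64, tau=0.9):
        self.sink = []                       # first tokens, kept forever to stabilize attention
        self.fast = deque(maxlen=fast_size)  # rolling FIFO of the most recent tokens
        self.slow = deque(maxlen=slow_size)  # sparse "keyframe" tokens marking salient transitions
        self.sink_size = sink_size
        self.tau = tau
        self._prev_key = None

    def append(self, key, value):
        """Insert one token's (key, value) pair, routing it by the surprisal rule."""
        if len(self.sink) < self.sink_size:
            self.sink.append((key, value))
        else:
            # Promote to Slow Memory only when the key differs enough from the previous one,
            # i.e. cosine similarity drops below tau (a salient temporal transition).
            if self._prev_key is not None:
                sim = F.cosine_similarity(key.flatten(), self._prev_key.flatten(), dim=0)
                if sim < self.tau:
                    self.slow.append((key, value))
            self.fast.append((key, value))
        self._prev_key = key

    def kv(self):
        """Return the concatenated keys and values that attention actually operates over."""
        entries = self.sink + list(self.slow) + list(self.fast)
        keys = torch.stack([k for k, _ in entries])
        values = torch.stack([v for _, v in entries])
        return keys, values
```

Because both deques are bounded, the attended context stays a fixed size no matter how long the rollout runs, which is the property the paragraph above attributes to the Slow-Fast design.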

Refer to the framework diagram, which illustrates the evolution from short-context to long-context training with memory management. The diagram shows how the student progressively learns to generate longer sequences by leveraging the teacher’s supervision and the structured memory system. The memory components are dynamically updated: Fast Memory slides through recent frames, while Slow Memory compresses salient events into a fixed-size buffer. Bounded positional encoding is applied to all tokens, constraining their RoPE indices to a fixed range regardless of generation step, thereby stabilizing attention over long sequences.
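
One possible reading of the bounded positional encoding is sketched below, assuming the goal is simply to keep all RoPE indices inside a fixed training-time range regardless of how far generation has progressed; the re-indexing rule and the `max_pos` value are assumptions for illustration only:

```python
import torch

def bounded_rope_positions(num_cached_tokens: int, num_new_tokens: int,
                           max_pos: int = 1024) -> torch.Tensor:
    """Return RoPE indices for (cached + new) tokens, constrained to [0, max_pos)."""
    total = num_cached_tokens + num_new_tokens
    if total <= max_pos:
        # Early in generation: ordinary 0..total-1 positions.
        return torch.arange(total)
    # Late in generation: keep the relative order but squeeze indices into the fixed range,
    # so attention never sees positions outside what the model was trained on.
    return torch.linspace(0, max_pos - 1, steps=total).round().long()
```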

The training process further incorporates a Long Self-Rollout Curriculum, where the context horizon $k$ grows linearly with training steps to gradually expose the model to long-range dependencies. A Clean Context Policy ensures that context frames $X_{1:k}$ are fully denoised, while target frames $X_{k+1:N}$ are supervised via random timestep selection, preserving gradient coverage across all diffusion steps. To enhance the robustness of the Context Teacher, the authors employ Error-Recycling Fine-Tuning, injecting realistic accumulated errors into the teacher's context during training to ensure it can correct for student drift during inference.
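
The curriculum and clean-context policy could look roughly like the following sketch, where the schedule constants and the deliberately simplified noise model are illustrative assumptions rather than the authors' settings:

```python
import torch

def context_horizon(step: int, k_start: int = 1, k_max: int = 40, ramp_steps: int = 10_000) -> int:
    """Linearly grow the number of context frames k from k_start to k_max over ramp_steps."""
    frac = min(step / ramp_steps, 1.0)
    return int(k_start + frac * (k_max - k_start))

def split_and_noise(frames: torch.Tensor, step: int, num_timesteps: int = 1000):
    """Clean Context Policy: context frames stay fully denoised, targets get a random timestep.

    frames: (B, F, C, H, W) latent video; the first k frames serve as clean context.
    The per-sample noise level below is a crude linear stand-in for a real diffusion schedule.
    """
    k = context_horizon(step)
    context, target = frames[:, :k], frames[:, k:]
    t = torch.randint(0, num_timesteps, (target.shape[0],))
    noise_scale = t.float().view(-1, 1, 1, 1, 1) / num_timesteps
    noisy_target = target + torch.randn_like(target) * noise_scale
    return context, noisy_target, t
```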

Experiment

  • The robust context teacher successfully generates coherent video continuations from student-generated contexts, validating its ability to maintain long-term consistency across 10-second sequences.
  • The method achieves competitive performance on short video generation (5s) while significantly outperforming baselines in 60-second generation, particularly in preserving subject and background consistency over extended durations.
  • Ablation studies confirm that similarity-based slow memory sampling, Context DMD distillation, and bounded positional encoding are each critical for maintaining semantic and temporal coherence in long videos.
  • Error-Recycling Fine-Tuning enhances the context teacher’s robustness to accumulated generation errors, leading to cleaner rollouts and improved distillation quality.
  • Compared to LongLive and other long-video baselines, the proposed method avoids abrupt scene resets and cyclic motion artifacts, demonstrating superior qualitative stability despite comparable quantitative scores.

Across these evaluations, the student maintains high consistency over 60-second sequences as measured by DINOv2, CLIP-F, and CLIP-T scores, outperforming baselines such as FramePack, LongLive, and Infinite-RoPE in subject and background stability, particularly beyond 20 seconds. Ablation studies show that similarity-based slow memory sampling, Contextual DMD distillation, and bounded positional encoding each contribute to background and subject consistency over long sequences; the full model achieves the highest overall score, confirming that these components jointly sustain long-term semantic and temporal coherence.

