
WorldCam: Interactive Autoregressive 3D Game Worlds with Camera Poses as a Unifying Geometric Representation

Abstract

Recent advances in video diffusion transformers have enabled interactive gaming world models that let users explore generated environments over long horizons. However, existing approaches remain limited in precise action control and long-term 3D consistency. Most prior work treats user actions as abstract conditioning signals, overlooking the fundamental geometric coupling between actions and the 3D world: actions induce relative camera motions that accumulate into a global camera pose within a 3D world. In this work, we establish the camera pose as a unifying geometric representation that jointly grounds both immediate action control and long-term 3D consistency. First, we define a physics-based continuous action space and represent user inputs in the Lie algebra to derive precise 6-DoF camera poses, which are injected into the generative model via a camera embedder to ensure exact action alignment. Second, we use global camera poses as spatial indices to retrieve relevant past observations, enabling geometrically consistent revisiting of locations during long-horizon navigation. To support this research, we introduce a large-scale dataset comprising 3,000 minutes of authentic human gameplay, annotated with camera trajectories and textual descriptions. Extensive experiments show that our approach significantly outperforms existing state-of-the-art interactive gaming world models in action controllability, long-horizon visual quality, and 3D spatial consistency.

One-sentence Summary

Researchers from KAIST, Adobe Research, and MAUM AI introduce WorldCam, a foundation model that unifies precise action control and long-horizon 3D consistency by mapping user inputs to Lie algebra-based camera poses, outperforming prior methods in interactive gaming scenarios through a novel pose-indexed memory retrieval system.

Key Contributions

  • The paper introduces a physics-based continuous action space that translates user inputs into precise 6-DoF camera poses using Lie algebra, which are then injected into a video diffusion transformer via a camera embedder to ensure accurate action alignment.
  • A retrieval mechanism is presented that uses global camera poses as spatial indices to fetch relevant past observations, enabling geometrically consistent revisiting of locations during long-horizon navigation.
  • The authors release a large-scale dataset containing 3,000 minutes of authentic human gameplay annotated with camera trajectories and textual descriptions to support the training and evaluation of interactive gaming world models.

Introduction

Interactive gaming world models built on video diffusion transformers aim to generate playable environments, yet they struggle with precise action control and maintaining 3D consistency over long horizons. Prior approaches often treat user inputs as abstract signals or rely on simplified linear approximations, which fail to capture the complex geometric coupling between actions and camera motion in a 3D space. The authors introduce WorldCam, a framework that establishes camera pose as a unifying geometric representation to simultaneously ground immediate action control and long-term spatial consistency. They achieve this by translating user inputs into precise 6-DoF poses using Lie algebra and leveraging these poses to retrieve past observations for geometrically coherent revisiting of locations. Additionally, the team addresses data scarcity by releasing WorldCam-50h, a large-scale dataset of authentic human gameplay annotated with camera trajectories and text descriptions.

Dataset

  • Dataset Composition and Sources: The authors introduce WorldCam-50h, a large-scale dataset of human gameplay videos designed to capture authentic action dynamics. Data is sourced from three games: Counter-Strike (closed license), plus Xonotic and Unvanquished (open-licensed under CC BY-SA 2.5 and GPL v3). The collection focuses on single-player exploration within static environments to ensure reproducibility and visual diversity.

  • Key Details for Each Subset: The dataset comprises over 100 videos per game, with each video averaging 8 minutes to yield approximately 17 hours of footage per title. Participants were instructed to perform diverse behaviors such as navigation, rapid camera movements, and revisiting locations. The total collection amounts to roughly 50 hours of gameplay.

  • Model Usage and Training Strategy: The authors utilize the entire dataset for training foundational gaming world models. Unlike prior works that discard textual guidance, this approach leverages detailed captions to maintain frame quality and scene style during the training process.

  • Processing and Metadata Construction:

    • Captioning: Each training video chunk is annotated with detailed textual descriptions generated by Qwen2.5-VL-7B. These prompts focus on global layout, visual themes, and ambient environmental conditions.
    • Camera Annotation: Global camera pose information, including intrinsics and extrinsics, is extracted for every one-minute segment using ViPE.
    • Filtering: To ensure data quality, the authors apply a filtering step that removes camera pose estimates with unrealistically large translation magnitudes.
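The translation-magnitude filter above could be sketched as follows. This is an illustrative per-frame variant under assumed conventions (4×4 pose matrices, comparison against the last kept pose, and a placeholder threshold); the summary does not specify the exact criterion ViPE's estimates are checked against.

```python
import numpy as np

def filter_poses(poses, max_translation):
    """Drop pose estimates whose translation jump relative to the last kept
    pose is unrealistically large. `poses` is a list of 4x4 camera-to-world
    matrices; `max_translation` is a hand-chosen plausibility threshold."""
    kept = [poses[0]]
    for cur in poses[1:]:
        delta = np.linalg.norm(cur[:3, 3] - kept[-1][:3, 3])
        if delta <= max_translation:
            kept.append(cur)
    return kept
```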

Method

The authors propose WorldCam, an interactive 3D world model designed to autoregressively generate video sequences that accurately follow user actions while maintaining long-term spatial consistency. The system takes an initial RGB observation, a text prompt, and a sequence of user actions as input to generate future frames.

Refer to the framework diagram below for an overview of the system architecture, which integrates action-to-camera mapping, camera-controlled generation, and a pose-anchored memory mechanism.

The core generative backbone is a pretrained Video Diffusion Transformer (DiT), specifically Wan-2.1-T2V. Given an input video $V$, a VAE encoder maps it to a latent sequence $\mathbf{z}_0$. The DiT learns to predict the velocity field that transports noisy latents $\mathbf{z}_t$ toward the clean latents $\mathbf{z}_0$ using a flow matching objective:

$$L_{\mathrm{FM}} = \mathbb{E}_{\mathbf{z}_0, c_{\mathrm{text}}, t} \Big[ \big\| v_{\theta}(\mathbf{z}_t, c_{\mathrm{text}}, t) - \frac{\mathbf{z}_0 - \mathbf{z}_t}{1 - t} \big\|_2^2 \Big].$$
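A minimal sketch of one training step for this objective, in framework-agnostic NumPy. The text conditioning $c_{\mathrm{text}}$ is omitted for brevity, and the interpolation convention $\mathbf{z}_t = t\,\mathbf{z}_0 + (1-t)\,\boldsymbol{\epsilon}$ is an assumption (not stated in the summary) under which the target velocity $(\mathbf{z}_0 - \mathbf{z}_t)/(1-t)$ reduces to $\mathbf{z}_0 - \boldsymbol{\epsilon}$:

```python
import numpy as np

def flow_matching_loss(v_theta, z0, rng):
    """One flow-matching step on a batch of latents (illustrative sketch).

    v_theta: any velocity network, called as v_theta(z_t, t).
    z0:      clean latents, shape (batch, ...).
    """
    b = z0.shape[0]
    t = rng.uniform(0.0, 1.0, size=(b,) + (1,) * (z0.ndim - 1))  # t ~ U(0, 1)
    eps = rng.standard_normal(z0.shape)          # pure-noise endpoint
    z_t = t * z0 + (1.0 - t) * eps               # noisy latent on the path
    target = (z0 - z_t) / (1.0 - t)              # simplifies to z0 - eps
    pred = v_theta(z_t, t)
    return np.mean((pred - target) ** 2)
```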

To ensure precise control over camera motion, the authors define the action space in the Lie algebra $\mathfrak{se}(3)$. User actions are represented as twist vectors $A_i = [\mathbf{v}_i; \boldsymbol{\omega}_i] \in \mathbb{R}^6$, containing linear and angular velocities. These are converted into relative camera poses $\Delta P_i \in SE(3)$ via the matrix exponential map:

$$\Delta P_i = \exp(\hat{A}_i) = \begin{bmatrix} \Delta R_i & \Delta t_i \\ \mathbf{0}^\top & 1 \end{bmatrix},$$

where $\hat{A}_i$ is the $4 \times 4$ matrix representation of the twist. This formulation jointly integrates linear and angular velocities on the $SE(3)$ manifold, avoiding the geometric inconsistencies found in decoupled linear approximations.
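The $\mathfrak{se}(3)$ exponential map has a standard closed form. The NumPy sketch below uses the Rodrigues-style formula with a small-angle fallback; it illustrates the math, not the authors' implementation:

```python
import numpy as np

def se3_exp(twist):
    """Map a twist [v; w] in se(3) to a 4x4 pose in SE(3).

    Uses the closed-form exponential: R from the Rodrigues formula and
    translation V @ v, falling back to first-order terms near theta = 0.
    """
    v, w = twist[:3], twist[3:]
    theta = np.linalg.norm(w)
    # Skew-symmetric matrix of the angular velocity.
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    if theta < 1e-8:
        R = np.eye(3) + W            # first-order approximation
        V = np.eye(3) + 0.5 * W
    else:
        A = np.sin(theta) / theta
        B = (1.0 - np.cos(theta)) / theta**2
        C = (1.0 - A) / theta**2
        R = np.eye(3) + A * W + B * (W @ W)
        V = np.eye(3) + B * W + C * (W @ W)
    P = np.eye(4)
    P[:3, :3] = R
    P[:3, 3] = V @ v
    return P
```

Because rotation and translation are integrated jointly through $V$, a twist with both components traces a curved path, which is exactly what a decoupled linear approximation misses.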

The derived camera poses are then used to condition the generative model. The poses are converted into Plücker embeddings $\hat{P} \in \mathbb{R}^{F \times 6}$ to provide explicit view-dependent geometric information. A lightweight camera embedding module $c_{\phi}$ consisting of two MLP layers processes these embeddings. To align with the temporally compressed latent sequence, $r$ consecutive Plücker embeddings are concatenated for each latent frame. The resulting camera embeddings are added to the DiT features $\mathbf{d}$ after each self-attention layer:

$$\mathbf{d} \gets \mathbf{d} + c_{\phi}(\hat{\mathbf{p}}).$$
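A sketch of this conditioning path in NumPy. The per-frame 6-D Plücker coordinates $(\mathbf{d}, \mathbf{o} \times \mathbf{d})$, the temporal grouping of $r$ consecutive embeddings, and the explicit MLP weights are all illustrative placeholders; the summary does not specify these details:

```python
import numpy as np

def plucker_ray(origin, direction):
    """6-D Plücker coordinates (d, o x d) of a ray: a view-dependent encoding."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(origin, d)])

def camera_embed(pluckers, r, w1, b1, w2, b2):
    """Two-layer MLP camera embedder c_phi (weights are placeholders).

    pluckers: per-frame embeddings, shape (F, 6). Groups of r consecutive
    frames are concatenated to match the temporally compressed latent
    sequence, giving inputs of shape (F // r, 6 * r).
    """
    F = pluckers.shape[0]
    grouped = pluckers[: F - F % r].reshape(-1, 6 * r)
    h = np.maximum(grouped @ w1 + b1, 0.0)   # hidden layer with ReLU
    return h @ w2 + b2                       # (F // r, d_model)

# Injection after each self-attention layer: d <- d + camera_embed(...)
```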

To maintain 3D consistency over long horizons, the system employs a pose-anchored long-term memory pool $\mathcal{M}$. This pool stores previously generated latents along with their global camera poses. The global pose $P_i^{\mathrm{global}}$ is computed by accumulating relative poses. During generation, a hierarchical retrieval strategy is used to find relevant context. First, the system selects the top-$K$ candidates based on translation distance to the current position. From these, it further selects $L$ entries whose viewing directions are most aligned with the current orientation, measured by the trace of the relative rotation matrix. These retrieved latents are concatenated with the current input sequence, and their associated poses are realigned and injected into the DiT to enforce spatial coherence.
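The two-stage retrieval can be sketched as follows. The memory layout (a list of latent/pose pairs) is an assumed simplification; the alignment score uses $\operatorname{tr}(R_{\mathrm{rel}}) = 1 + 2\cos\theta$, so a larger trace means a smaller relative rotation angle:

```python
import numpy as np

def retrieve_memory(memory, current_pose, K, L):
    """Hierarchical pose-anchored retrieval (sketch of the described strategy).

    memory:       list of (latent, global_pose) pairs, poses as 4x4 matrices.
    current_pose: 4x4 global pose of the current frame.
    Stage 1 keeps the K entries nearest in translation; stage 2 keeps the L
    of those whose viewing direction best aligns with the current one.
    """
    cur_t = current_pose[:3, 3]
    cur_R = current_pose[:3, :3]
    # Stage 1: translation distance to the current position.
    dists = [np.linalg.norm(pose[:3, 3] - cur_t) for _, pose in memory]
    top_k = np.argsort(dists)[:K]
    # Stage 2: orientation alignment via the relative-rotation trace.
    scores = [np.trace(cur_R.T @ memory[i][1][:3, :3]) for i in top_k]
    top_l = [top_k[j] for j in np.argsort(scores)[::-1][:L]]
    return [memory[i][0] for i in top_l]
```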

Finally, the model utilizes a progressive autoregressive inference strategy. A progressive per-frame noise schedule assigns monotonically increasing noise levels to latent frames within each denoising window. This provides a low-noise anchor in early frames while keeping future frames at higher noise levels for correction. During inference, the latent sequence is shifted forward after completing all denoising stages, with the earliest frame evicted and a new pure-noise latent appended. An attention sink mechanism is also incorporated to stabilize attention and preserve frame fidelity during long rollouts.
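The progressive schedule and the window shift can be sketched as two small helpers. The linear noise ramp and the $[\sigma_{\min}, \sigma_{\max}]$ range are assumptions; the summary only states that noise levels increase monotonically across the window:

```python
import numpy as np

def progressive_schedule(window_len, sigma_min=0.0, sigma_max=1.0):
    """Monotonically increasing per-frame noise levels within one window.

    Early frames sit near sigma_min (low-noise anchors); later frames stay
    near sigma_max, leaving room for correction. A linear ramp is assumed.
    """
    return np.linspace(sigma_min, sigma_max, window_len)

def shift_window(latents, rng):
    """After all denoising stages: evict the earliest latent frame and
    append a fresh pure-noise latent at the end of the sequence."""
    noise = rng.standard_normal(latents.shape[1:])
    return np.concatenate([latents[1:], noise[None]], axis=0)
```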

Experiment

  • Comparison with state-of-the-art interactive gaming and camera-controlled models validates that the proposed method achieves superior action controllability, visual quality, and 3D consistency over long-horizon sequences, whereas baselines suffer from visual drift, coarse control, or inability to maintain geometric coherence.
  • Qualitative analysis confirms the model faithfully follows complex user inputs and preserves consistent 3D scene structures even when revisiting previously seen locations, while prior methods often fail to maintain geometry beyond short generation windows.
  • Ablation studies demonstrate that Lie algebra-based action-to-camera mapping provides more accurate motion control than linear approximations, and that increasing long-term memory latents alongside attention sinks significantly enhances 3D consistency and reduces long-horizon error drift.
  • Human evaluation and quantitative metrics collectively verify that the approach outperforms existing baselines across all key aspects, establishing it as a robust solution for interactive 3D world modeling.
