HyperAI


Zhifei Yang Guangyao Zhai Keyang Lu YuYang Yin Chao Zhang Zhen Xiao Jieyi Long Nassir Navab Yikai Wang

Abstract

Scene generation extends to broad industrial applications, demanding high realism together with precise control over geometry and appearance. Language-driven retrieval methods assemble plausible scenes from large object databases, but they overlook control at the level of individual objects and often fail to enforce style coherence across the scene as a whole. In contrast, graph-based formulations offer greater object-level controllability and promote global consistency by modeling relationships explicitly; however, existing approaches struggle to produce high-fidelity textured results, which limits their practical utility. In this work we present FlowScene, a tri-branch generative model conditioned on multimodal graphs that collaboratively generates scene layouts, object shapes, and object textures. At its core lies a tightly coupled rectified flow model that exchanges object information during generation, enabling collaborative inference across the graph. This approach affords fine-grained control over object shapes, textures, and relationships while enforcing scene-level style coherence across both structure and appearance. Extensive experiments show that FlowScene outperforms language-conditioned and graph-conditioned baselines in generation realism, style consistency, and alignment with human preferences.

One-sentence Summary

Researchers from Peking University and Technical University of Munich introduce FlowScene, a tri-branch generative model that leverages a tightly coupled rectified flow mechanism to collaboratively synthesize layouts, shapes, and textures from multimodal graphs, achieving superior style coherence and object-level control compared to existing language- or graph-driven baselines.

Key Contributions

  • The paper introduces FlowScene, a tri-branch generative model conditioned on multimodal graphs that collaboratively produces scene layouts, object shapes, and object textures to ensure fine-grained control and scene-level style coherence.
  • A tightly coupled rectified flow mechanism is presented as the core engine, which exchanges node information during the sampling process to satisfy both individual object conditions and holistic scene constraints while accelerating generation compared to diffusion-based methods.
  • Extensive experiments demonstrate that the method outperforms language-conditioned and graph-conditioned baselines in generation realism, style consistency, and alignment with human preferences, supported by a workflow for diverse input sources.

Introduction

Scene generation is critical for industries like interior design, VR/AR, and robotics, where applications demand high realism alongside precise control over geometry and appearance. Prior language-driven methods often fail to enforce scene-level style coherence or provide granular object control, while existing graph-based approaches struggle to produce high-fidelity textured results in an end-to-end manner. The authors introduce FlowScene, a tri-branch generative model that leverages Multimodal Graph Rectified Flow to collaboratively generate scene layouts, object shapes, and textures. By tightly coupling node information exchange during the sampling process, this approach enables fine-grained control over individual objects while ensuring consistent style across the entire scene structure and appearance.

Method

The proposed framework, FlowScene, operates as a tri-branch generator conditioned on a multimodal graph to synthesize indoor scenes. The system accepts diverse inputs, including natural language descriptions and reference images, which are parsed into a structured scene graph. This graph serves as the central conditioning mechanism for the generation process, ensuring consistency across object layouts, shapes, and textures.

The process begins with the construction of a multimodal scene graph $\mathcal{G}_M = (\mathcal{V}_M, \mathcal{E})$. Nodes in $\mathcal{V}_M$ represent objects and can be text-only, image-only, or multimodal, aggregating learnable embeddings with foundation features from CLIP or DINOv2. Edges in $\mathcal{E}$ encode spatial and semantic relationships such as "left of" or "bigger than." An LLM or VLM parses user inputs to populate this graph, allowing for fine-grained control over the scene configuration.

To generate the scene, the authors employ a Multimodal Graph Rectified Flow backbone. This module adapts rectified flow models to handle multiple content generation jointly. The core of this mechanism is the InfoExchangeUnit, which utilizes a Triplet-Graph Convolutional Network (Triplet-GCN) to perform message passing and feature aggregation across the graph edges. During the denoising process, this unit incorporates temporal denoising data $\mathcal{D}_t$ and projects it alongside node features to produce time-dependent conditions $C_t$. This ensures that the generation of each object respects the global constraints imposed by the graph structure.
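A minimal NumPy sketch of Triplet-GCN-style message passing over (subject, relation, object) triplets, assuming one shared projection matrix, a ReLU nonlinearity, and mean aggregation with a residual update (these specifics are assumptions, not the paper's exact layer):

```python
import numpy as np

def triplet_gcn_layer(node_feats, edge_index, edge_feats, W):
    """One round of message passing over (subject, relation, object) triplets.

    node_feats: (N, d) array of node features
    edge_index: list of (src, dst) pairs, one per edge
    edge_feats: (E, d) array of learned relation embeddings
    W:          (3*d, d) projection applied to each concatenated triplet
    """
    N, d = node_feats.shape
    messages = np.zeros_like(node_feats)
    counts = np.zeros(N)
    for e, (src, dst) in enumerate(edge_index):
        # Concatenate subject, relation, and object features into one triplet.
        triplet = np.concatenate([node_feats[src], edge_feats[e], node_feats[dst]])
        msg = np.maximum(triplet @ W, 0.0)  # ReLU-projected message
        messages[dst] += msg
        counts[dst] += 1
    counts = np.maximum(counts, 1.0)  # avoid division by zero for isolated nodes
    # Residual update: each node keeps its feature plus the mean incoming message.
    return node_feats + messages / counts[:, None]
```

Stacking a few such layers lets every node's condition reflect its relational context, which is how the InfoExchangeUnit can propagate constraints across the graph during denoising.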

The training objective minimizes the least-squares error between the predicted velocity field $v_{\theta}$ and the target velocity derived from linear interpolation between data and noise. The loss function is defined as:

$$\mathcal{L}_{\mathrm{GRF}} = \mathbb{E}_{\mathcal{D}, C, t}\left[ \left\| \Theta_{\mathcal{D}}(\mathcal{D}_t, C_t, t) - v \right\|_2^2 \right]$$

where $\Theta_{\mathcal{D}}$ is the denoiser network. During inference, the model integrates the reverse-time ODE starting from Gaussian noise to recover the target data distribution.
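The objective and sampler can be illustrated with a minimal sketch, assuming the standard rectified-flow linear interpolation $x_t = (1-t)\,x_0 + t\,x_1$ between noise $x_0$ and data $x_1$ (so the target velocity is $v = x_1 - x_0$) and a plain Euler integrator; the time convention and step count here are assumptions:

```python
import numpy as np

def rf_training_pair(data, noise, t):
    """Return (x_t, v): the linearly interpolated sample and its target velocity."""
    x_t = (1.0 - t) * noise + t * data
    v = data - noise  # constant along the straight path from noise to data
    return x_t, v

def euler_sample(velocity_fn, shape, cond, steps=20, rng=None):
    """Integrate dx/dt = velocity_fn(x, cond, t) from t=0 (noise) to t=1 (data)."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(shape)  # start from Gaussian noise
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_fn(x, cond, t)
    return x
```

Because rectified flow targets straight transport paths, far fewer integration steps are typically needed than in diffusion sampling, which matches the speed-up claimed in the contributions.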

FlowScene decomposes the generation task into three coordinated branches, each backed by the graph rectified flow module. The Layout Branch generates 3D bounding boxes defined by location, size, and rotation, utilizing a specialized LayoutExchangeUnit to enforce spatial constraints. The Shape Branch operates in parallel to generate voxelized object shapes. It employs a Shape VQ-VAE to encode sparse voxel structures into compact latent codes, which are then denoised and decoded. The Texture Branch is subordinate to the shape branch, anchoring Gaussian noise to the geometric structure to generate textures. It uses a Texture VQ-VAE and a TextureExchangeUnit to ensure style consistency across objects, which is particularly crucial for text-only nodes where appearance is inferred from relational context.
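Under stated assumptions about the branch interfaces, the coordination described above can be sketched as a single sampling loop in which each branch's exchange unit reads the current latents of all branches before every step; the function signatures are hypothetical, not the authors' API:

```python
import numpy as np

def sample_scene(branches, exchange_units, latents, graph, steps=20):
    """Jointly integrate the layout, shape, and texture flows.

    branches:       dict name -> velocity fn(x, cond, t)
    exchange_units: dict name -> fn(all_latents, graph, t) producing conditions
    latents:        dict name -> initial noise array (updated in place)
    """
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        # Each exchange unit sees the *current* latents of every branch,
        # so information flows across layout, shape, and texture per step.
        conds = {k: exchange_units[k](latents, graph, t) for k in branches}
        for k in branches:
            latents[k] = latents[k] + dt * branches[k](latents[k], conds[k], t)
    return latents
```

The key design point this sketch captures is that conditions are recomputed at every step from all branches jointly, rather than each branch sampling independently and being merged afterwards.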

Prior to training the generative branches, objects undergo preprocessing to prepare the data for the VQ-VAEs. Objects are voxelized into sparse structures, and multi-view images are rendered to extract DINOv2 features. These features are reprojected onto the voxel grid and averaged to create feature voxels. The Shape VQ-VAE learns to reconstruct the voxelized geometry, while the Texture VQ-VAE learns to reconstruct the object appearance from the feature voxels. This preprocessing ensures that the generative models operate on efficient latent representations rather than raw high-dimensional data.
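The reprojection-and-averaging step can be sketched as a scatter-add over voxel indices; the input layout (per-view voxel indices and per-pixel patch features) is an assumption for illustration:

```python
import numpy as np

def build_feature_voxels(voxel_ids, view_feats, num_voxels, feat_dim):
    """Average multi-view features into per-voxel features.

    voxel_ids:  list, per view, of (P,) voxel indices each pixel projects to
    view_feats: list, per view, of (P, feat_dim) DINOv2 patch features
    """
    acc = np.zeros((num_voxels, feat_dim))
    cnt = np.zeros(num_voxels)
    for ids, feats in zip(voxel_ids, view_feats):
        # Scatter-add features and hit counts onto the voxel grid.
        np.add.at(acc, ids, feats)
        np.add.at(cnt, ids, 1.0)
    observed = cnt > 0
    acc[observed] /= cnt[observed, None]  # mean over all views hitting a voxel
    return acc, observed
```

The resulting feature voxels are what the Texture VQ-VAE compresses into latent codes, so the generative branches never touch raw multi-view renders.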

The system supports two primary application modes for user interaction. In the language-driven mode, users provide natural language descriptions which are parsed into a scene graph by an LLM. In the interactive GUI mode, users select object candidates and define relations through a visual interface. Both modes feed into the multimodal graph, which drives the FlowScene backend to generate a high-fidelity textured 3D scene consistent with the specified configuration.

Experiment

  • Experiments on SG-FRONT and 3D-FRONT datasets validate that FlowScene outperforms training-free language-based methods and graph-conditioned generative models in scene-level realism, object-level geometric fidelity, and style consistency.
  • Quantitative and qualitative comparisons demonstrate that FlowScene achieves superior alignment with human preferences, producing scenes with higher visual quality and more accurate layout adherence to textual and graph constraints.
  • Ablation studies confirm that the InfoExchangeUnit is critical for spatial coherence and appearance consistency, while the graph flow backbone significantly enhances shape generation quality compared to diffusion baselines.
  • Robustness tests show that training with multi-view inputs enables flexible adaptation to varying visual conditions, and the model maintains high performance even with sparse relational information in the input graph.
  • Additional evaluations highlight FlowScene's ability to preserve inter-shape consistency for identical objects and effectively propagate local graph updates to maintain holistic scene structure.
