BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion

Recent text-to-image diffusion models have demonstrated an astonishing capacity to generate high-quality images. However, researchers have mainly studied ways of synthesizing images with only text prompts. While some works have explored using other modalities as conditions, considerable paired data, e.g., box/mask-image pairs, and fine-tuning time are required to train such models. As such paired data is time-consuming and labor-intensive to acquire and is restricted to a closed set, this potentially becomes a bottleneck for applications in an open world. This paper focuses on the simplest form of user-provided conditions, e.g., box or scribble. To mitigate the aforementioned problem, we propose a training-free method to control the objects and contexts in synthesized images so that they adhere to the given spatial conditions. Specifically, three spatial constraints, i.e., the Inner-Box, Outer-Box, and Corner Constraints, are designed and seamlessly integrated into the denoising step of diffusion models, requiring neither additional training nor massive annotated layout data. Extensive experimental results demonstrate that the proposed constraints can control what and where to present in the images while retaining the ability of diffusion models to synthesize with high fidelity and diverse concept coverage. The code is publicly available at https://github.com/showlab/BoxDiff.
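As an illustration of how box constraints of this kind might be imposed during denoising, the sketch below applies an inner-box and an outer-box penalty to a cross-attention map and nudges the latent along the resulting gradient. This is not the authors' implementation: the functions box_mask, inner_box_loss, outer_box_loss, and apply_box_guidance, as well as the toy attention function in the usage lines, are hypothetical stand-ins, and the top-k penalties and single gradient step are simplifying assumptions.

    import torch

    def box_mask(h, w, box):
        # Binary (h, w) mask that is 1 inside the box (x0, y0, x1, y1), 0 elsewhere.
        x0, y0, x1, y1 = box
        mask = torch.zeros(h, w)
        mask[y0:y1, x0:x1] = 1.0
        return mask

    def inner_box_loss(attn, mask, k=10):
        # Encourage the strongest attention responses inside the box to be high.
        inside = attn[mask.bool()]
        topk = torch.topk(inside, min(k, inside.numel())).values
        return (1.0 - topk).mean()

    def outer_box_loss(attn, mask, k=10):
        # Suppress the strongest attention responses outside the box.
        outside = attn[~mask.bool()]
        topk = torch.topk(outside, min(k, outside.numel())).values
        return topk.mean()

    def apply_box_guidance(latent, attn_fn, mask, step_size=0.1):
        # One guidance step: recompute the attention map from the latent,
        # evaluate both constraints, and move the latent down their gradient.
        latent = latent.detach().requires_grad_(True)
        attn = attn_fn(latent)  # (h, w) map for the target token (stand-in here)
        loss = inner_box_loss(attn, mask) + outer_box_loss(attn, mask)
        grad = torch.autograd.grad(loss, latent)[0]
        return (latent - step_size * grad).detach()

    # Toy usage: a random latent and a stand-in differentiable attention function.
    latent = torch.randn(4, 64, 64)
    mask = box_mask(64, 64, (16, 16, 48, 48))
    attn_fn = lambda z: torch.softmax(z.mean(0).flatten(), dim=0).view(64, 64)
    latent = apply_box_guidance(latent, attn_fn, mask)

In an actual diffusion pipeline the attention map would come from the UNet's cross-attention layers for the prompt token tied to the box, and a step like this would be repeated at selected denoising timesteps.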