
High-Resolution Image Synthesis with Latent Diffusion Models

Rombach, Robin; Blattmann, Andreas; Lorenz, Dominik; Esser, Patrick; Ommer, Björn
Abstract

By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days, and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows us, for the first time, to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes, and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion .
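For intuition only, below is a minimal PyTorch-style sketch of the core idea described in the abstract: a pretrained autoencoder maps images into a compact latent space, a denoiser with cross-attention attends to conditioning tokens (e.g., text embeddings), and the diffusion forward/reverse process operates entirely on latents. All module names, shapes, and the noise schedule here are illustrative assumptions, not the authors' actual implementation in the linked repository.

```python
# Illustrative latent-diffusion training step (assumed shapes and modules; not the CompVis code).
import torch
import torch.nn as nn

class ToyAutoencoder(nn.Module):
    """Stand-in for the pretrained autoencoder: 256x256 RGB -> 4x32x32 latent."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(128, 4, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 128, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

class CrossAttentionDenoiser(nn.Module):
    """Stand-in for the U-Net: predicts latent noise, conditioned via cross-attention."""
    def __init__(self, latent_ch=4, dim=128, ctx_dim=512):
        super().__init__()
        self.proj_in = nn.Conv2d(latent_ch, dim, 1)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, kdim=ctx_dim,
                                          vdim=ctx_dim, batch_first=True)
        self.proj_out = nn.Conv2d(dim, latent_ch, 1)
        self.time_emb = nn.Linear(1, dim)

    def forward(self, z_t, t, context):
        b, c, h, w = z_t.shape
        x = self.proj_in(z_t)                                  # (B, dim, H, W)
        x = x + self.time_emb(t.float().view(-1, 1))[:, :, None, None]
        seq = x.flatten(2).transpose(1, 2)                     # (B, H*W, dim)
        attn_out, _ = self.attn(seq, context, context)         # cross-attend to conditioning
        x = (seq + attn_out).transpose(1, 2).reshape(b, -1, h, w)
        return self.proj_out(x)                                # predicted noise, same shape as z_t

# One training step: encode image to latent, add noise at a random timestep, predict that noise.
ae, denoiser = ToyAutoencoder(), CrossAttentionDenoiser()
image = torch.randn(2, 3, 256, 256)                            # dummy image batch
context = torch.randn(2, 77, 512)                              # assumed text-encoder token shape
betas = torch.linspace(1e-4, 0.02, 1000)                       # assumed linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

with torch.no_grad():
    z0 = ae.encoder(image)                                     # diffusion runs in this latent space
t = torch.randint(0, 1000, (z0.shape[0],))
noise = torch.randn_like(z0)
a = alphas_cumprod[t].view(-1, 1, 1, 1)
z_t = a.sqrt() * z0 + (1 - a).sqrt() * noise                   # forward (noising) process on latents
loss = nn.functional.mse_loss(denoiser(z_t, t, context), noise)
loss.backward()
print(loss.item())
```

Because the denoiser sees 32x32x4 latents instead of 256x256x3 pixels, each training and sampling step is far cheaper, which is the computational saving the abstract refers to; at sampling time the denoised latent would be passed through the decoder to recover an image.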