DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation

Large text-to-image models achieved a remarkable leap in the evolution of AI, enabling high-quality and diverse synthesis of images from a given text prompt. However, these models lack the ability to mimic the appearance of subjects in a given reference set and synthesize novel renditions of them in different contexts. In this work, we present a new approach for "personalization" of text-to-image diffusion models. Given as input just a few images of a subject, we fine-tune a pretrained text-to-image model such that it learns to bind a unique identifier with that specific subject. Once the subject is embedded in the output domain of the model, the unique identifier can be used to synthesize novel photorealistic images of the subject contextualized in different scenes. By leveraging the semantic prior embedded in the model with a new autogenous class-specific prior preservation loss, our technique enables synthesizing the subject in diverse scenes, poses, views and lighting conditions that do not appear in the reference images. We apply our technique to several previously-unassailable tasks, including subject recontextualization, text-guided view synthesis, and artistic rendering, all while preserving the subject's key features. We also provide a new dataset and evaluation protocol for this new task of subject-driven generation. Project page: https://dreambooth.github.io/
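The mechanism the abstract describes is a two-term fine-tuning objective: the usual diffusion denoising loss on the few subject images (captioned with a unique identifier token), plus a class-specific prior preservation term computed on class images that the frozen pretrained model generates itself (hence "autogenous"). A minimal sketch of such a combined objective is below, assuming a standard epsilon-prediction diffusion loss; the model interface, noise schedule, tensor shapes, and prompts are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a prior-preservation fine-tuning step for a
# text-to-image diffusion model, assuming standard epsilon-prediction
# training. All names (`model`, `alphas_cumprod`, prompts, shapes)
# are illustrative assumptions, not the paper's actual code.
import torch
import torch.nn.functional as F

def denoising_loss(model, x0, cond, alphas_cumprod):
    """Standard diffusion loss: predict the noise added at a random timestep."""
    t = torch.randint(0, alphas_cumprod.shape[0], (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise  # forward process
    return F.mse_loss(model(x_t, t, cond), noise)

def dreambooth_step(model, x_subject, c_subject, x_prior, c_prior,
                    alphas_cumprod, lam=1.0):
    """Denoising loss on the few subject images (e.g. "a [V] dog") plus a
    weighted prior-preservation loss on class images (e.g. "a dog") that
    were pre-generated by the frozen pretrained model."""
    loss_subject = denoising_loss(model, x_subject, c_subject, alphas_cumprod)
    loss_prior = denoising_loss(model, x_prior, c_prior, alphas_cumprod)
    return loss_subject + lam * loss_prior

if __name__ == "__main__":
    # Toy smoke test with a dummy denoiser that ignores t and cond.
    dummy = lambda x_t, t, cond: torch.zeros_like(x_t)
    abar = torch.linspace(0.999, 0.01, 1000)
    xs, xp = torch.randn(2, 3, 8, 8), torch.randn(2, 3, 8, 8)
    print(dreambooth_step(dummy, xs, None, xp, None, abar).item())
```

Per the abstract, the prior term lets the model keep its semantic prior over the subject's class while it binds the identifier to the specific subject, which is what allows novel poses, views, and lighting conditions not present in the handful of reference images.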