
FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention

Xiao, Guangxuan; Yin, Tianwei; Freeman, William T.; Durand, Frédo; Han, Song
Abstract

Diffusion models excel at text-to-image generation, especially in subject-driven generation for personalized images. However, existing methods are inefficient due to subject-specific fine-tuning, which is computationally intensive and hampers efficient deployment. Moreover, existing methods struggle with multi-subject generation, as they often blend features among subjects. We present FastComposer, which enables efficient, personalized, multi-subject text-to-image generation without fine-tuning. FastComposer uses subject embeddings extracted by an image encoder to augment the generic text conditioning in diffusion models, enabling personalized image generation based on subject images and textual instructions with only forward passes. To address the identity-blending problem in multi-subject generation, FastComposer proposes cross-attention localization supervision during training, enforcing the attention of reference subjects to be localized to the correct regions in the target images. Naively conditioning on subject embeddings results in subject overfitting; FastComposer therefore proposes delayed subject conditioning in the denoising step to maintain both identity and editability in subject-driven image generation. FastComposer generates images of multiple unseen individuals with different styles, actions, and contexts. It achieves a 300$\times$-2500$\times$ speedup compared to fine-tuning-based methods and requires zero extra storage for new subjects. FastComposer paves the way for efficient, personalized, and high-quality multi-subject image creation. Code, model, and dataset are available at https://github.com/mit-han-lab/fastcomposer.
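To make the cross-attention localization idea concrete, here is a minimal PyTorch sketch of one plausible localization loss; it is not the authors' implementation. It gathers each subject token's cross-attention map and penalizes attention mass that falls outside that subject's segmentation mask. The tensor shapes, the function name `localization_loss`, and the exact loss form are illustrative assumptions.

```python
import torch


def localization_loss(attn, masks, token_idx):
    """Localization loss over subject cross-attention maps (a sketch, not official code).

    attn:      (B, heads, HW, T) softmaxed cross-attention from image queries to text tokens
    masks:     (B, S, HW)        float 0/1 segmentation mask per subject, spatially flattened
    token_idx: (B, S) long       index of each subject's token in the text sequence
    """
    B, H, HW, T = attn.shape
    S = token_idx.shape[1]
    # Pick out the attention map of each subject token: (B, H, HW, S).
    idx = token_idx[:, None, None, :].expand(B, H, HW, S)
    subj_attn = attn.gather(dim=3, index=idx)
    # Average over heads and move subjects first: (B, S, HW).
    subj_attn = subj_attn.mean(dim=1).permute(0, 2, 1)
    # Mean attention inside vs. outside each subject's mask.
    inside = (subj_attn * masks).sum(-1) / masks.sum(-1).clamp(min=1e-6)
    outside = (subj_attn * (1 - masks)).sum(-1) / (1 - masks).sum(-1).clamp(min=1e-6)
    # Reward attention inside the mask, penalize leakage outside it.
    return (outside - inside).mean()
```

During training this term would be added to the usual diffusion denoising loss with a small weight, so that each reference subject's token attends only to its own region and features stop blending across subjects.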

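Delayed subject conditioning can likewise be sketched as a small change to a standard denoising loop. The sketch below is patterned on diffusers-style `unet`/`scheduler` APIs and is an assumption about the mechanism, not the released code: text-only embeddings drive the early steps (laying out the scene and preserving editability), and subject-augmented embeddings take over afterwards (injecting identity). The switch ratio `alpha` is a hypothetical hyperparameter.

```python
import torch


@torch.no_grad()
def denoise_with_delayed_conditioning(unet, scheduler, latents,
                                      text_emb, aug_emb, alpha=0.3):
    """Denoising loop with delayed subject conditioning (illustrative sketch).

    text_emb: text-only prompt embeddings (early steps: scene layout)
    aug_emb:  prompt embeddings with subject image embeddings patched in
              (later steps: subject identity)
    alpha:    fraction of initial steps that stay text-only (assumed value)
    Assumes `scheduler.set_timesteps(...)` has already been called.
    """
    timesteps = scheduler.timesteps
    switch = int(len(timesteps) * alpha)  # step index where conditioning switches
    for i, t in enumerate(timesteps):
        cond = text_emb if i < switch else aug_emb
        noise_pred = unet(latents, t, encoder_hidden_states=cond).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```

Switching later (larger `alpha`) favors editability and prompt adherence, while switching earlier favors identity fidelity, which is the trade-off the abstract describes.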