InstanceDiffusion: Instance-level Control for Image Generation

Text-to-image diffusion models produce high quality images but do not offer control over individual instances in the image. We introduce InstanceDiffusion that adds precise instance-level control to text-to-image diffusion models. InstanceDiffusion supports free-form language conditions per instance and allows flexible ways to specify instance locations such as simple single points, scribbles, bounding boxes or intricate instance segmentation masks, and combinations thereof. We propose three major changes to text-to-image models that enable precise instance-level control. Our UniFusion block enables instance-level conditions for text-to-image models, the ScaleU block improves image fidelity, and our Multi-instance Sampler improves generations for multiple instances. InstanceDiffusion significantly surpasses specialized state-of-the-art models for each location condition. Notably, on the COCO dataset, we outperform previous state-of-the-art by 20.4% AP$_{50}^\text{box}$ for box inputs, and 25.4% IoU for mask inputs.