
ReCo: Region-Controlled Text-to-Image Generation

Zhengyuan Yang, Jianfeng Wang, Zhe Gan, Linjie Li, Kevin Lin, Chenfei Wu, Nan Duan, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang
Abstract

Recently, large-scale text-to-image (T2I) models have shown impressive performance in generating high-fidelity images, but with limited controllability, e.g., precisely specifying the content in a specific region with a free-form text description. In this paper, we propose an effective technique for such regional control in T2I generation. We augment T2I models' inputs with an extra set of position tokens, which represent the quantized spatial coordinates. Each region is specified by four position tokens representing its top-left and bottom-right corners, followed by an open-ended natural language regional description. We then fine-tune a pre-trained T2I model with this new input interface. Our model, dubbed ReCo (Region-Controlled T2I), enables region control for arbitrary objects described by open-ended regional texts rather than by object labels from a constrained category set. Empirically, ReCo achieves better image quality than a T2I model strengthened by positional words (FID: 8.82 -> 7.36, SceneFID: 15.54 -> 6.51 on COCO), together with more accurately placed objects, amounting to a 20.40% region classification accuracy improvement on COCO. Furthermore, we demonstrate that ReCo can better control object count, spatial relationships, and region attributes such as color/size with the free-form regional description. Human evaluation on PaintSkill shows that ReCo is +19.28% and +17.21% more accurate than the T2I model in generating images with the correct object count and spatial relationship.
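To make the input interface concrete, here is a minimal sketch (not the authors' code) of how a region-controlled prompt could be assembled: coordinates normalized to [0, 1] are quantized into discrete position tokens, and each region contributes four such tokens (top-left and bottom-right corners) followed by its free-form description. The token format "<bin_k>", the bin count, and the helper names below are illustrative assumptions; the paper's actual tokenization may differ.

```python
# Illustrative sketch of a ReCo-style input sequence (assumed token format, not the official implementation).

def quantize_coord(value: float, num_bins: int = 1000) -> str:
    """Map a normalized coordinate in [0, 1] to a discrete position token."""
    bin_id = min(int(value * num_bins), num_bins - 1)
    return f"<bin_{bin_id}>"

def region_tokens(box, description: str, num_bins: int = 1000) -> str:
    """Encode one region: four position tokens (x1, y1, x2, y2) plus free-form regional text."""
    x1, y1, x2, y2 = box  # top-left and bottom-right corners, normalized to [0, 1]
    coords = " ".join(quantize_coord(v, num_bins) for v in (x1, y1, x2, y2))
    return f"{coords} {description}"

def build_reco_prompt(image_caption: str, regions) -> str:
    """Concatenate the global caption with per-region specifications."""
    parts = [image_caption]
    parts += [region_tokens(box, desc) for box, desc in regions]
    return " ".join(parts)

if __name__ == "__main__":
    prompt = build_reco_prompt(
        "A living room with furniture.",
        [
            ((0.10, 0.55, 0.45, 0.95), "a small red sofa"),
            ((0.55, 0.20, 0.90, 0.60), "a large window with curtains"),
        ],
    )
    print(prompt)
```

The resulting token sequence would then be fed to the fine-tuned T2I model in place of a plain caption, which is how the paper describes adding region control without restricting objects to a fixed label set.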
