
Context-Aware Layout to Image Generation with Enhanced Object Appearance

Sen He, Wentong Liao, Michael Ying Yang, Yongxin Yang, Yi-Zhe Song, Bodo Rosenhahn, Tao Xiang
Abstract

A layout to image (L2I) generation model aims to generate a complicated image containing multiple objects (things) against natural background (stuff), conditioned on a given layout. Built upon the recent advances in generative adversarial networks (GANs), existing L2I models have made great progress. However, a close inspection of their generated images reveals two major limitations: (1) the object-to-object as well as object-to-stuff relations are often broken and (2) each object's appearance is typically distorted, lacking the key defining characteristics associated with the object class. We argue that these are caused by the lack of context-aware object and stuff feature encoding in their generators, and location-sensitive appearance representation in their discriminators. To address these limitations, two new modules are proposed in this work. First, a context-aware feature transformation module is introduced in the generator to ensure that the generated feature encoding of either object or stuff is aware of other co-existing objects/stuff in the scene. Second, instead of feeding location-insensitive image features to the discriminator, we use the Gram matrix computed from the feature maps of the generated object images to preserve location-sensitive information, resulting in much enhanced object appearance. Extensive experiments show that the proposed method achieves state-of-the-art performance on the COCO-Thing-Stuff and Visual Genome benchmarks.
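The two mechanisms named in the abstract can be illustrated in a minimal numpy sketch. This is a hypothetical simplification, not the paper's actual modules: `context_transform` stands in for the context-aware feature transformation (here approximated by a plain self-attention step over per-region feature vectors), and `gram_matrix` shows the standard channel-wise Gram computation over an object's feature maps that the discriminator consumes. All function names, shapes, and the softmax-attention choice are assumptions for illustration only.

```python
import numpy as np

def context_transform(feats):
    """Mix each region's feature with all co-existing regions' features.

    feats: (N, D) array, one D-dim feature vector per object/stuff region.
    Returns an (N, D) array where every row is a context-aware mixture
    (a generic self-attention step; the paper's module may differ).
    """
    n, d = feats.shape
    scores = feats @ feats.T / np.sqrt(d)            # (N, N) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ feats                           # context-weighted features

def gram_matrix(feat):
    """Gram matrix of a feature map, as fed to the discriminator.

    feat: (C, H, W) feature maps of one generated object crop.
    Returns the (C, C) matrix of channel co-occurrence statistics.
    """
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.T) / (h * w)                       # normalize by spatial size

# Tiny usage example with random stand-in features.
rng = np.random.default_rng(0)
region_feats = rng.normal(size=(5, 16))              # 5 regions, 16-dim each
ctx = context_transform(region_feats)                # (5, 16)

obj_feat = rng.normal(size=(8, 4, 4))                # C=8, H=W=4
g = gram_matrix(obj_feat)                            # (8, 8), symmetric
```

Note that the Gram matrix is symmetric by construction (`g[i, j] == g[j, i]`), since entry (i, j) is the inner product of channels i and j over all spatial positions.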