Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis

With the remarkable recent progress in learning deep generative models, it becomes increasingly interesting to develop models for controllable image synthesis from reconfigurable inputs. This paper focuses on a recently emerged task, layout-to-image, which aims to learn generative models capable of synthesizing photo-realistic images from a spatial layout (i.e., object bounding boxes configured in an image lattice) and a style (i.e., structural and appearance variations encoded by latent vectors). This paper first proposes an intuitive paradigm for the task, layout-to-mask-to-image, which learns to unfold object masks from the bounding boxes in an input layout to bridge the gap between the input layout and the synthesized image. It then presents a method built on Generative Adversarial Networks for the proposed layout-to-mask-to-image paradigm with style control at both the image and the object-mask levels. Object masks are learned from the input layout and iteratively refined along the stages of the generator network. Style control at the image level is the same as in vanilla GANs, while style control at the object-mask level is realized by a proposed novel feature normalization scheme, Instance-Sensitive and Layout-Aware Normalization. In experiments, the proposed method is tested on the COCO-Stuff dataset and the Visual Genome dataset, obtaining state-of-the-art performance.
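
To make the mask-level style control concrete, below is a minimal PyTorch sketch of an instance-sensitive, layout-aware normalization layer: per-object style codes are projected to channel-wise affine parameters and spread over the feature map according to (soft) object masks before modulating the normalized features. All names, shapes, and projection choices here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceLayoutNorm(nn.Module):
    """Normalize features, then modulate them with per-object affine
    parameters predicted from per-object style codes and weighted by
    soft object masks derived from the input layout (illustrative sketch)."""

    def __init__(self, num_channels, style_dim):
        super().__init__()
        # Parameter-free normalization; scale and shift come from the styles.
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        # Project each object's style code to channel-wise gamma and beta.
        self.to_gamma = nn.Linear(style_dim, num_channels)
        self.to_beta = nn.Linear(style_dim, num_channels)

    def forward(self, feat, styles, masks):
        # feat:   (B, C, H, W) generator feature map
        # styles: (B, N, style_dim) per-object latent style codes
        # masks:  (B, N, Hm, Wm) soft object masks aligned with the layout
        B, C, H, W = feat.shape
        x = self.bn(feat)

        gamma = self.to_gamma(styles)                 # (B, N, C)
        beta = self.to_beta(styles)                   # (B, N, C)
        masks = F.interpolate(masks, size=(H, W), mode='bilinear',
                              align_corners=False)    # (B, N, H, W)

        # Spread per-object affine parameters over spatial positions,
        # weighted by the object masks, then modulate the normalized map.
        gamma_map = torch.einsum('bnc,bnhw->bchw', gamma, masks)
        beta_map = torch.einsum('bnc,bnhw->bchw', beta, masks)
        return x * (1.0 + gamma_map) + beta_map
```

In such a design, a layer of this kind would replace the usual normalization inside each generator block, so that every object region receives its own style-dependent modulation while regions outside all masks fall back to the plain normalized features.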