StarGAN v2: Diverse Image Synthesis for Multiple Domains

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address only one of these issues, offering either limited diversity or requiring multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, a dataset of high-quality animal face images with large inter- and intra-domain differences. The code, pretrained models, and dataset can be found at https://github.com/clovaai/stargan-v2.