Dual Discriminator Generative Adversarial Nets

We propose in this paper a novel approach to tackle the problem of mode collapse encountered in generative adversarial networks (GANs). Our idea is intuitive yet proven to be very effective, especially in addressing some key limitations of GAN. In essence, it combines the Kullback-Leibler (KL) and reverse KL divergences into a unified objective function, thereby exploiting the complementary statistical properties of these divergences to effectively diversify the estimated density when capturing multiple modes. We term our method dual discriminator generative adversarial nets (D2GAN), which, unlike GAN, has two discriminators; together with a generator, it also admits the analogue of a minimax game, wherein one discriminator rewards high scores for samples from the data distribution whilst the other discriminator, conversely, favors data from the generator, and the generator produces data to fool both discriminators. We develop theoretical analysis to show that, given the optimal discriminators, optimizing the generator of D2GAN reduces to minimizing both the KL and reverse KL divergences between the data distribution and the distribution induced by the data generated by the generator, hence effectively avoiding the mode collapse problem. We conduct extensive experiments on synthetic and real-world large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet), where we have made our best effort to compare our D2GAN with the latest state-of-the-art GAN variants in comprehensive qualitative and quantitative evaluations. The experimental results demonstrate the competitive and superior performance of our approach in generating good-quality, diverse samples over baselines, and the capability of our method to scale up to the ImageNet database.
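The two-discriminator objective described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the softplus mapping (standing in for discriminators that output positive reals rather than probabilities), the weighting coefficients `alpha` and `beta`, and the Gaussian toy scores are all assumptions made for illustration.

```python
import math
import random

def softplus(x):
    # Numerically stable softplus: maps any real score to a positive value,
    # since both discriminators here are assumed to output positive reals.
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mean(xs):
    return sum(xs) / len(xs)

def d2gan_discriminator_loss(d1_real, d1_fake, d2_real, d2_fake,
                             alpha=0.2, beta=0.1):
    # Negative of the objective J that D1 and D2 jointly maximize (sketch):
    #   J = alpha*E[log D1(x)] - E[D1(G(z))] - E[D2(x)] + beta*E[log D2(G(z))]
    j = (alpha * mean([math.log(v) for v in d1_real])
         - mean(d1_fake)
         - mean(d2_real)
         + beta * mean([math.log(v) for v in d2_fake]))
    return -j  # the discriminators ascend J, so we minimize -J

def d2gan_generator_loss(d1_fake, d2_fake, beta=0.1):
    # The generator minimizes its share of J: it wants D1(G(z)) large
    # (fooling the discriminator that rewards real data) and D2(G(z)) small
    # (fooling the discriminator that favors generated data).
    return -mean(d1_fake) + beta * mean([math.log(v) for v in d2_fake])

# Toy scalar "scores" standing in for network outputs on a minibatch.
rng = random.Random(0)
real_scores = [rng.gauss(1.0, 0.5) for _ in range(64)]
fake_scores = [rng.gauss(-1.0, 0.5) for _ in range(64)]
d1_real = [softplus(s) for s in real_scores]
d1_fake = [softplus(s) for s in fake_scores]
d2_real = [softplus(-s) for s in real_scores]
d2_fake = [softplus(-s) for s in fake_scores]

print(d2gan_discriminator_loss(d1_real, d1_fake, d2_real, d2_fake))
print(d2gan_generator_loss(d1_fake, d2_fake))
```

Under this formulation, with the discriminators held at their optima, the generator's problem reduces (up to the constants `alpha` and `beta`) to minimizing a weighted sum of the KL and reverse KL divergences, which is the mechanism the abstract credits for avoiding mode collapse.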