
Unite and Conquer: Plug & Play Multi-Modal Synthesis using Diffusion Models

Nithin Gopalakrishnan Nair; Wele Gedara Chaminda Bandara; Vishal M. Patel
Abstract

Generating photos that satisfy multiple constraints finds broad utility in the content creation industry. A key hurdle to accomplishing this task is the need for paired data consisting of all modalities (i.e., constraints) and their corresponding output. Moreover, existing methods need retraining with paired data across all modalities to introduce a new condition. This paper proposes a solution to this problem based on denoising diffusion probabilistic models (DDPMs). Our motivation for choosing diffusion models over other generative models comes from the flexible internal structure of diffusion models. Since each sampling step in the DDPM follows a Gaussian distribution, we show that there exists a closed-form solution for generating an image given various constraints. Our method can unite multiple diffusion models trained on multiple sub-tasks and conquer the combined task through our proposed sampling strategy. We also introduce a novel reliability parameter that allows different off-the-shelf diffusion models, trained across various datasets, to be used at sampling time alone to guide generation toward an outcome satisfying multiple constraints. We perform experiments on various standard multimodal tasks to demonstrate the effectiveness of our approach. More details can be found at https://nithin-gk.github.io/projectpages/Multidiff/index.html
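The abstract's core idea, combining the per-step noise predictions of several separately trained diffusion models at sampling time with per-model reliability weights, can be illustrated with a minimal sketch. This is not the paper's exact closed-form rule (the abstract does not give it); it assumes a guidance-style combination in which each conditional model contributes a weighted offset from a shared unconditional prediction, and all function and variable names are hypothetical.

```python
import numpy as np

def combined_noise_prediction(eps_uncond, cond_preds, reliabilities):
    """Combine noise predictions from several condition-specific diffusion
    models at a single denoising step.

    eps_uncond    -- unconditional noise prediction (ndarray)
    cond_preds    -- list of per-condition noise predictions, same shape
    reliabilities -- per-model weights, standing in for the paper's
                     reliability parameter (exact formula assumed)
    """
    eps = eps_uncond.copy()
    for eps_i, lam in zip(cond_preds, reliabilities):
        # Each model pulls the estimate toward its own condition,
        # scaled by how much we trust that model.
        eps += lam * (eps_i - eps_uncond)
    return eps

# Toy usage on a small array standing in for an image-shaped prediction.
rng = np.random.default_rng(0)
eps_u = rng.standard_normal((2, 2))
preds = [rng.standard_normal((2, 2)) for _ in range(2)]
eps = combined_noise_prediction(eps_u, preds, reliabilities=[1.0, 0.5])
print(eps.shape)  # same shape as the inputs
```

Setting a model's reliability to zero removes its influence entirely, which is the plug-and-play property the paper emphasizes: conditions can be added or dropped at sampling time without retraining.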
