
Unite and Conquer: Plug & Play Multi-Modal Synthesis using Diffusion Models

Nithin Gopalakrishnan Nair, Wele Gedara Chaminda Bandara, Vishal M. Patel

Abstract

Generating photos that satisfy multiple constraints finds broad utility in the content creation industry. A key hurdle to accomplishing this task is the need for paired data consisting of all modalities (i.e., constraints) and their corresponding output. Moreover, existing methods need retraining using paired data across all modalities to introduce a new condition. This paper proposes a solution to this problem based on denoising diffusion probabilistic models (DDPMs). Our motivation for choosing diffusion models over other generative models comes from the flexible internal structure of diffusion models. Since each sampling step in the DDPM follows a Gaussian distribution, we show that there exists a closed-form solution for generating an image given various constraints. Our method can unite multiple diffusion models trained on multiple sub-tasks and conquer the combined task through our proposed sampling strategy. We also introduce a novel reliability parameter that allows different off-the-shelf diffusion models, trained across various datasets, to be used at sampling time alone to guide the process toward an outcome satisfying multiple constraints. We perform experiments on various standard multimodal tasks to demonstrate the effectiveness of our approach. More details can be found at https://nithin-gk.github.io/projectpages/Multidiff/index.html
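To make the idea concrete, here is a minimal sketch of one reverse-diffusion step that fuses the noise predictions of several pre-trained models using reliability weights. This is an illustration only: the function and parameter names are assumptions, and the simple normalized weighted average used here is a stand-in for the paper's actual closed-form combination rule, which is derived from the Gaussian form of each model's sampling step.

```python
import numpy as np

def combined_denoise_step(x_t, t, eps_models, reliabilities, alpha, alpha_bar, rng):
    """One illustrative DDPM reverse step fusing multiple noise predictors.

    NOTE: the normalized weighted-average fusion below is a simplified
    stand-in for the paper's closed-form combination; names are assumptions.
    """
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()  # normalize reliability weights so they sum to 1
    # Fuse the per-model noise estimates eps_i(x_t, t).
    eps = sum(wi * m(x_t, t) for wi, m in zip(w, eps_models))
    # Standard DDPM posterior mean computed from the fused noise estimate.
    mean = (x_t - (1.0 - alpha[t]) / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha[t])
    if t > 0:
        # Add Gaussian noise on all but the final step, as in vanilla DDPM.
        sigma = np.sqrt(1.0 - alpha[t])
        return mean + sigma * rng.standard_normal(x_t.shape)
    return mean  # final step is deterministic
```

In practice each entry of `eps_models` would be a diffusion network trained on one sub-task (e.g. one conditioning modality), and the reliability weights control how strongly each model steers the shared sample, so no joint retraining across modalities is needed.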

