MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models

Music is a universal language that can communicate emotions and feelings. It forms an essential part of the whole spectrum of creative media, ranging from movies to social media posts. Machine learning models that can synthesize music are predominantly conditioned on textual descriptions of the desired music. Inspired by how musicians compose music not just from a movie script, but also through visualizations, we propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music. MeLFusion is a text-to-music diffusion model with a novel "visual synapse", which effectively infuses the semantics of the visual modality into the generated music. To facilitate research in this area, we introduce a new dataset, MeLBench, and propose a new evaluation metric, IMSM. Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music, measured both objectively and subjectively, with a relative gain of up to 67.98% on the FAD score. We hope that our work will draw attention to this pragmatic, yet relatively under-explored, research area.