Co-Separating Sounds of Visual Objects

Ruohan Gao; Kristen Grauman

Abstract

Learning how objects sound from video is challenging, since they often heavily overlap in a single audio channel. Current methods for visually-guided audio source separation sidestep the issue by training with artificially mixed video clips, but this puts unwieldy restrictions on training data collection and may even prevent learning the properties of "true" mixed sounds. We introduce a co-separation training paradigm that permits learning object-level sounds from unlabeled multi-source videos. Our novel training objective requires that the deep neural network's separated audio for similar-looking objects be consistently identifiable, while simultaneously reproducing accurate video-level audio tracks for each source training pair. Our approach disentangles sounds in realistic test videos, even in cases where an object was not observed individually during training. We obtain state-of-the-art results on visually-guided audio source separation and audio denoising for the MUSIC, AudioSet, and AV-Bench datasets.
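The abstract describes a training objective with two parts: the separated per-object audio must sum back to each video's observed mixture, and a classifier on each separated track must consistently identify the visual object it was conditioned on. The sketch below is a toy NumPy illustration of that two-term objective, not the paper's implementation; the function names, the L1 reconstruction term, and the `weight` hyperparameter are all assumptions for illustration.

```python
import numpy as np

def cross_entropy(logits, label):
    # Softmax cross-entropy for one separated source's object-category prediction.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def co_separation_loss(separated, mixture, logits, labels, weight=0.05):
    """Toy objective in the spirit of co-separation training (illustrative only).

    separated : list of per-object magnitude spectrograms predicted for one video
    mixture   : that video's observed (mixed) spectrogram
    logits    : per-object classifier outputs computed on the separated audio
    labels    : visual object categories detected in the video frames
    """
    # Video-level consistency: separated sources should add up to the mixture.
    reconstruction = np.abs(sum(separated) - mixture).mean()
    # Object-level consistency: each separated track should be identifiable
    # as the object that guided its separation.
    consistency = np.mean([cross_entropy(l, y) for l, y in zip(logits, labels)])
    return reconstruction + weight * consistency
```

In training, this loss would be summed over pairs of unlabeled multi-source videos, so the network never needs a clean single-source recording of any object.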
