
Learning how objects sound from video is challenging, since they often heavily overlap in a single audio channel. Current methods for visually-guided audio source separation sidestep the issue by training with artificially mixed video clips, but this puts unwieldy restrictions on training data collection and may even prevent learning the properties of "true" mixed sounds. We introduce a co-separation training paradigm that permits learning object-level sounds from unlabeled multi-source videos. Our novel training objective requires that the deep neural network's separated audio for similar-looking objects be consistently identifiable, while simultaneously reproducing accurate video-level audio tracks for each source training pair. Our approach disentangles sounds in realistic test videos, even in cases where an object was not observed individually during training. We obtain state-of-the-art results on visually-guided audio source separation and audio denoising for the MUSIC, AudioSet, and AV-Bench datasets.
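A minimal sketch of what such a training objective could look like; all symbols below are introduced here for illustration and are not defined in the abstract. For a training pair of videos, with a separation network predicting one audio track per detected object, one could combine a per-video reconstruction term with an object-consistency term:

\[
\mathcal{L} \;=\; \sum_{v \in \{1,2\}} \Big\| \sum_{o \in \mathcal{O}_v} \hat{s}_{o} \;-\; s_{v} \Big\|^{2} \;+\; \lambda \sum_{o} \mathcal{L}_{\mathrm{cls}}\big(f(\hat{s}_{o}),\, c_{o}\big),
\]

where \(\hat{s}_{o}\) is the separated audio for object \(o\), \(\mathcal{O}_v\) and \(s_{v}\) are the detected objects and observed audio track of video \(v\), \(f\) predicts an object category from separated audio, \(c_{o}\) is the visually inferred category of object \(o\), and \(\lambda\) weights the consistency term. The first term asks each video's separated sounds to re-compose its original track; the second asks that audio separated for similar-looking objects be consistently identifiable.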