Masked Generative Video-to-Audio Transformers with Enhanced Synchronicity

Santiago Pascual, Chunghsin Yeh, Ioannis Tsiamas, Joan Serrà
Abstract

Video-to-audio (V2A) generation leverages visual-only video features to render plausible sounds that match the scene. Importantly, the generated sound onsets should match the visual actions that are aligned with them, otherwise unnatural synchronization artifacts arise. Recent works have progressed from conditioning sound generators on still images to conditioning them on video features, focusing on quality and semantic matching while ignoring synchronization, or have sacrificed some amount of quality to focus on improving synchronization alone. In this work, we propose a V2A generative model, named MaskVAT, that interconnects a full-band high-quality general audio codec with a sequence-to-sequence masked generative model. This combination allows modeling high audio quality, semantic matching, and temporal synchronicity at the same time. Our results show that, by combining a high-quality codec with the proper pre-trained audio-visual features and a sequence-to-sequence parallel structure, our model yields highly synchronized results while remaining competitive with the state of the art of non-codec generative audio models. Sample videos and generated audio are available at https://maskvat.github.io .
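
The abstract describes the pipeline only at a high level. Below is a minimal sketch of a masked generative objective over audio-codec tokens conditioned on video features; all dimensions, module names, and the masking ratio are illustrative assumptions, not the paper's actual MaskVAT configuration.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the real codec codebook and feature dims are not given here.
NUM_AUDIO_TOKENS = 1024      # codebook size of the neural audio codec
MASK_ID = NUM_AUDIO_TOKENS   # extra id reserved for the [MASK] token
D_MODEL = 512

class MaskedV2ATransformer(nn.Module):
    """Sequence-to-sequence masked generative model: video features feed the
    encoder, and the decoder predicts masked audio-codec tokens."""
    def __init__(self):
        super().__init__()
        self.audio_emb = nn.Embedding(NUM_AUDIO_TOKENS + 1, D_MODEL)  # +1 for [MASK]
        self.video_proj = nn.Linear(768, D_MODEL)  # assume 768-d visual features
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=8,
            num_encoder_layers=4, num_decoder_layers=4,
            batch_first=True,
        )
        self.head = nn.Linear(D_MODEL, NUM_AUDIO_TOKENS)

    def forward(self, video_feats, audio_tokens, mask):
        # Replace a random subset of audio tokens with [MASK].
        masked = audio_tokens.masked_fill(mask, MASK_ID)
        enc = self.video_proj(video_feats)   # (B, Tv, D) video conditioning
        dec = self.audio_emb(masked)         # (B, Ta, D) partially masked audio
        hidden = self.transformer(enc, dec)  # decoder cross-attends to video
        return self.head(hidden)             # logits over the codec codebook

# Training step: cross-entropy computed only on the masked positions,
# the standard masked-generative (MaskGIT-style) objective.
model = MaskedV2ATransformer()
video = torch.randn(2, 50, 768)                          # 2 clips, 50 frames
tokens = torch.randint(0, NUM_AUDIO_TOKENS, (2, 200))    # codec token sequences
mask = torch.rand(2, 200) < 0.5                          # random masking ratio
logits = model(video, tokens, mask)
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
```

At inference, such models typically decode iteratively: starting from a fully masked sequence, they unmask the most confident token predictions over a few parallel steps, cross-attending to the video features throughout, which is what enables temporal alignment between sound onsets and visual actions.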
