Audio-visual Generalised Zero-shot Learning with Cross-modal Attention and Language

Learning to classify video data from classes not included in the training data, i.e. video-based zero-shot learning, is challenging. We conjecture that the natural alignment between the audio and visual modalities in video data provides a rich training signal for learning discriminative multi-modal representations. Focusing on the relatively underexplored task of audio-visual zero-shot learning, we propose to learn multi-modal representations from audio-visual data using cross-modal attention and to exploit textual label embeddings for transferring knowledge from seen classes to unseen classes. Taking this one step further, in our generalised audio-visual zero-shot learning setting, we include all the training classes in the test-time search space, where they act as distractors and increase the difficulty while making the setting more realistic. Due to the lack of a unified benchmark in this domain, we introduce a (generalised) zero-shot learning benchmark on three audio-visual datasets of varying sizes and difficulty, VGGSound, UCF, and ActivityNet, ensuring that the unseen test classes do not appear in the dataset used for supervised training of the backbone deep models. Comparing multiple relevant and recent methods, we demonstrate that our proposed AVCA model achieves state-of-the-art performance on all three datasets. Code and data are available at \url{https://github.com/ExplainableML/AVCA-GZSL}.
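
To illustrate the two ingredients named above, the sketch below combines cross-modal attention between pre-extracted audio and visual embeddings with nearest-neighbour matching against textual class-label embeddings. It is a minimal, hedged example: the module names, dimensions, and single-head attention are illustrative assumptions, not the exact AVCA architecture or training objective.

```python
# Minimal sketch (not the exact AVCA model): each modality attends to the other,
# and the resulting embeddings are matched to textual label embeddings.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim=300, num_heads=1):
        super().__init__()
        # Audio queries attend over visual features, and vice versa.
        self.audio_to_visual = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.visual_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, audio, visual):
        # audio, visual: (batch, 1, dim) -- one pre-extracted embedding per modality.
        a_enhanced, _ = self.audio_to_visual(query=audio, key=visual, value=visual)
        v_enhanced, _ = self.visual_to_audio(query=visual, key=audio, value=audio)
        return a_enhanced.squeeze(1), v_enhanced.squeeze(1)

def classify(fused, class_embeddings):
    # Zero-shot prediction: nearest class-label embedding (e.g. word2vec vectors
    # of class names) in the shared space; unseen classes are included at test time.
    # fused: (batch, dim); class_embeddings: (num_classes, dim)
    scores = fused @ class_embeddings.t()
    return scores.argmax(dim=-1)

# Usage with random placeholders standing in for real features and label embeddings.
attn = CrossModalAttention(dim=300)
audio, visual = torch.randn(4, 1, 300), torch.randn(4, 1, 300)
a_out, v_out = attn(audio, visual)
labels = classify((a_out + v_out) / 2, class_embeddings=torch.randn(10, 300))
```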