Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos

Multimodal self-supervised learning is gaining increasing attention, as it allows not only training large networks without human supervision but also searching for and retrieving data across various modalities. In this context, this paper proposes a self-supervised training framework that learns a common multimodal embedding space that, in addition to sharing representations across different modalities, enforces a grouping of semantically similar instances. To this end, we extend the concept of instance-level contrastive learning with a multimodal clustering step in the training pipeline to capture semantic similarities across modalities. The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains. To evaluate our approach, we train our model on the HowTo100M dataset and evaluate its zero-shot retrieval capabilities in two challenging domains, namely text-to-video retrieval and temporal action localization, showing state-of-the-art results on four different datasets.
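The two ingredients the abstract names, instance-level contrastive learning and a clustering step over the joint embedding space, can be sketched in a toy NumPy example. This is an illustrative approximation only: the loss form, the use of k-means, and all function names are assumptions, not the paper's actual training pipeline, which the abstract does not specify.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def info_nce(video, text, temperature=0.07):
    """Instance-level contrastive loss (InfoNCE-style, an assumption here):
    each matching video/text pair is a positive, every other pair in the
    batch is a negative."""
    v, t = l2_normalize(video), l2_normalize(text)
    logits = v @ t.T / temperature                       # (B, B) similarity matrix
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(v))                              # i-th video pairs with i-th text
    return -log_probs[idx, idx].mean()

def kmeans_pseudo_labels(embeddings, k=3, iters=10, seed=0):
    """Toy spherical k-means over the joint embedding space; the cluster
    assignments act as pseudo-labels that group semantically similar
    instances across modalities."""
    rng = np.random.default_rng(seed)
    x = l2_normalize(embeddings)
    centroids = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(x @ centroids.T, axis=1)      # nearest centroid by cosine
        for c in range(k):
            if np.any(assign == c):
                mean = x[assign == c].mean(axis=0, keepdims=True)
                centroids[c] = l2_normalize(mean)[0]
    return assign

# Synthetic paired embeddings standing in for video and text encoder outputs.
rng = np.random.default_rng(1)
video_emb = rng.normal(size=(8, 16))
text_emb = video_emb + 0.1 * rng.normal(size=(8, 16))    # paired, slightly noisy

loss = info_nce(video_emb, text_emb)
labels = kmeans_pseudo_labels(np.vstack([video_emb, text_emb]), k=3)
print(float(loss), labels.shape)
```

In a real training loop the contrastive term would pull matching clips and captions together in the shared space, while the periodic clustering step would supply pseudo-labels that also pull together different instances of the same semantic concept, which is what enables the cross-modal and cross-dataset retrieval the abstract describes.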