Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos

Abstract

Multimodal self-supervised learning is getting more and more attention as it allows not only to train large networks without human supervision but also to search and retrieve data across various modalities. In this context, this paper proposes a self-supervised training framework that learns a common multimodal embedding space that, in addition to sharing representations across different modalities, enforces a grouping of semantically similar instances. To this end, we extend the concept of instance-level contrastive learning with a multimodal clustering step in the training pipeline to capture semantic similarities across modalities. The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains. To evaluate our approach, we train our model on the HowTo100M dataset and evaluate its zero-shot retrieval capabilities in two challenging domains, namely text-to-video retrieval and temporal action localization, showing state-of-the-art results on four different datasets.
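The two ingredients the abstract names, instance-level contrastive learning and a clustering step over the joint embedding space, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the InfoNCE-style loss, and the toy k-means routine are assumptions chosen for clarity, and the real method operates on learned video/audio/text encoders rather than raw arrays.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def instance_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Instance-level cross-modal contrastive (InfoNCE-style) loss:
    each video is pulled toward its own caption and pushed away from
    the other captions in the batch, and symmetrically for text."""
    v = l2_normalize(video_emb)
    t = l2_normalize(text_emb)
    logits = v @ t.T / temperature          # (N, N) similarity matrix
    idx = np.arange(len(v))                 # positives lie on the diagonal

    def xent(lg):
        # Row-wise softmax cross-entropy against the diagonal targets.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the video->text and text->video directions.
    return 0.5 * (xent(logits) + xent(logits.T))

def cluster_embeddings(embeddings, k, iters=10, seed=0):
    """Toy spherical k-means over the joint embedding space. In a full
    training pipeline, the resulting cluster assignments could serve as
    semantic targets that group similar instances across modalities."""
    rng = np.random.default_rng(seed)
    x = l2_normalize(embeddings)
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(x @ centroids.T, axis=1)   # nearest centroid by cosine
        for j in range(k):
            members = x[assign == j]
            if len(members):
                centroids[j] = l2_normalize(members.mean(axis=0))
    return centroids, assign
```

As a sanity check, the contrastive loss is lower when video and text embeddings are correctly paired than when the pairing is shuffled, which is the signal that drives the shared embedding space.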
