
VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding

Hu Xu Gargi Ghosh Po-Yao Huang Dmytro Okhonko Armen Aghajanyan Florian Metze Luke Zettlemoyer Christoph Feichtenhofer

Abstract

We present VideoCLIP, a contrastive approach to pre-train a unified model for zero-shot video and text understanding, without using any labels on downstream tasks. VideoCLIP trains a transformer for video and text by contrasting temporally overlapping positive video-text pairs with hard negatives from nearest neighbor retrieval. Our experiments on a diverse series of downstream tasks, including sequence-level text-video retrieval, VideoQA, token-level action localization, and action segmentation reveal state-of-the-art performance, surpassing prior work, and in some cases even outperforming supervised approaches. Code is made available at https://github.com/pytorch/fairseq/tree/main/examples/MMPT.
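The objective described above is a symmetric contrastive loss over paired video and text embeddings, where each temporally overlapping video-text pair is a positive and other pairs in the batch serve as negatives. The sketch below illustrates that kind of InfoNCE-style objective, assuming pre-computed embeddings; the function name, temperature value, and the omission of the nearest-neighbor hard-negative batch construction are illustrative simplifications, not taken from the released MMPT code.

```python
import torch
import torch.nn.functional as F

def video_text_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    video_emb, text_emb: (B, D) tensors; row i of each forms a positive
    video-text pair. All other rows in the batch act as negatives. In
    VideoCLIP, batches are additionally built so that some negatives are
    hard examples found via nearest-neighbor retrieval (not shown here).
    """
    # Normalize so dot products are cosine similarities.
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)

    # (B, B) similarity matrix; diagonal entries are the positive pairs.
    logits = v @ t.t() / temperature
    targets = torch.arange(v.size(0), device=v.device)

    # Contrast in both directions (video->text and text->video), then average.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_v2t + loss_t2v)
```

Because the loss is symmetric, the same trained encoders can be used zero-shot in either direction, e.g. for text-to-video retrieval or for scoring text labels against video clips in action localization.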

