
End-to-End Learning of Visual Representations from Uncurated Instructional Videos

Miech, Antoine; Alayrac, Jean-Baptiste; Smaira, Lucas; Laptev, Ivan; Sivic, Josef; Zisserman, Andrew
Abstract

Annotating videos is cumbersome, expensive and not scalable. Yet, many strong video models still rely on manually annotated data. With the recent introduction of the HowTo100M dataset, narrated videos now offer the possibility of learning video representations without manual supervision. In this work we propose a new learning approach, MIL-NCE, capable of addressing misalignments inherent to narrated videos. With this approach we are able to learn strong video representations from scratch, without the need for any manual annotation. We evaluate our representations on a wide range of four downstream tasks over eight datasets: action recognition (HMDB-51, UCF-101, Kinetics-700), text-to-video retrieval (YouCook2, MSR-VTT), action localization (YouTube-8M Segments, CrossTask) and action segmentation (COIN). Our method outperforms all published self-supervised approaches for these tasks as well as several fully supervised baselines.
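The core idea behind MIL-NCE is to combine multiple-instance learning with noise-contrastive estimation: because a narration may be temporally misaligned with the clip it describes, the loss scores a *bag* of candidate narrations near the clip against negatives, and is low as long as *some* candidate matches. The following is a minimal NumPy sketch of such a loss for a single clip; the function name, argument shapes, and embeddings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mil_nce_loss(video_emb, pos_text_embs, neg_text_embs):
    """Sketch of a MIL-NCE-style loss for one video clip (illustrative).

    video_emb:     (d,)   embedding of the video clip
    pos_text_embs: (P, d) bag of candidate narrations near the clip
                          (any of them may be the truly aligned one)
    neg_text_embs: (N, d) narrations sampled from other videos
    """
    pos_scores = pos_text_embs @ video_emb   # similarity to each positive candidate
    neg_scores = neg_text_embs @ video_emb   # similarity to each negative
    all_scores = np.concatenate([pos_scores, neg_scores])

    # Numerically stable log-sum-exp; the loss is
    # -log( sum_pos exp(s) / sum_all exp(s) ), which is small when
    # at least one candidate in the positive bag dominates the negatives.
    m = all_scores.max()
    log_num = m + np.log(np.exp(pos_scores - m).sum())
    log_den = m + np.log(np.exp(all_scores - m).sum())
    return log_den - log_num
```

The multiple-instance aspect is the sum over the positive bag in the numerator: the model never has to commit to which narration is aligned, which is what makes the objective robust to the misalignments inherent to uncurated instructional videos.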
