Moments in Time Dataset: one million videos for event understanding

We present the Moments in Time Dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds. Modeling the spatial-audio-temporal dynamics even for actions occurring in 3-second videos poses many challenges: meaningful events do not include only people, but also objects, animals, and natural phenomena; visual and auditory events can be symmetrical in time ("opening" is "closing" in reverse), and either transient or sustained. We describe the annotation process of our dataset (each video is tagged with one action or activity label among 339 different classes), analyze its scale and diversity in comparison to other large-scale video datasets for action recognition, and report results of several baseline models addressing separately, and jointly, three modalities: spatial, temporal and auditory. The Moments in Time dataset, designed to have a large coverage and diversity of events in both visual and auditory modalities, can serve as a new challenge to develop models that scale to the level of complexity and abstract reasoning that a human processes on a daily basis.
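To make the "separately, and jointly" baseline setup concrete, the sketch below shows one plausible late-fusion arrangement: each modality (spatial frames, temporal clips, audio) is scored by its own branch over the 339 classes, and the joint prediction averages the per-modality logits. This is an illustrative assumption, not the authors' actual baseline architectures; the module names, feature dimensions, and fusion rule are placeholders.

```python
# Minimal late-fusion sketch (assumed, not the paper's exact baselines):
# each modality gets its own classifier over the 339 Moments in Time
# classes, and the joint model averages their logits.
import torch
import torch.nn as nn

NUM_CLASSES = 339  # action/activity labels in Moments in Time


class ModalityBranch(nn.Module):
    """Placeholder per-modality scorer over pre-extracted features.
    Real baselines would use e.g. a 2D CNN on frames, a 3D CNN or
    optical-flow network on clips, and a CNN on audio spectrograms."""

    def __init__(self, in_dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, NUM_CLASSES),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class LateFusion(nn.Module):
    """Joint model: average the class logits from the three branches."""

    def __init__(self, spatial_dim: int, temporal_dim: int, audio_dim: int):
        super().__init__()
        self.spatial = ModalityBranch(spatial_dim)
        self.temporal = ModalityBranch(temporal_dim)
        self.audio = ModalityBranch(audio_dim)

    def forward(self, spatial_feat, temporal_feat, audio_feat):
        logits = (self.spatial(spatial_feat)
                  + self.temporal(temporal_feat)
                  + self.audio(audio_feat)) / 3.0
        return logits


# Example: pre-extracted feature vectors for one 3-second video
# (dimensions are arbitrary placeholders).
model = LateFusion(spatial_dim=2048, temporal_dim=1024, audio_dim=128)
pred = model(torch.randn(1, 2048), torch.randn(1, 1024), torch.randn(1, 128))
print(pred.argmax(dim=1))  # index of the most likely of the 339 classes
```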