VideoGraph: Recognizing Minutes-Long Human Activities in Videos

Many human activities take minutes to unfold. To represent them, related works opt for statistical pooling, which neglects the temporal structure. Others opt for convolutional methods, such as CNN and Non-Local. While successful in learning temporal concepts, they fall short of modeling minutes-long temporal dependencies. We propose VideoGraph, a method to achieve the best of both worlds: represent minutes-long human activities and learn their underlying temporal structure. VideoGraph learns a graph-based representation for human activities. The graph, its nodes, and its edges are learned entirely from video datasets, making VideoGraph applicable to problems without node-level annotation. The result is improvements over related works on two benchmarks: Epic-Kitchens and Breakfast. In addition, we demonstrate that VideoGraph is able to learn the temporal structure of human activities in minutes-long videos.
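For intuition, the sketch below illustrates one way such learned graph nodes could work: a set of latent node embeddings is trained jointly with the network, and each video segment is soft-assigned to the nodes by attention, so that nodes emerge from the data without node-level annotation. This is a minimal simplification written for illustration, not the authors' implementation; the module name, dimensions, and attention form are assumptions.

```python
import torch
import torch.nn as nn

class LatentNodeAttention(nn.Module):
    """Hypothetical sketch of learned graph nodes: latent node
    embeddings trained end-to-end, with video segments soft-assigned
    to nodes via attention (a simplification, not VideoGraph's code)."""

    def __init__(self, feat_dim=1024, num_nodes=128):
        super().__init__()
        # Latent node embeddings, learned entirely from video data.
        self.nodes = nn.Parameter(torch.randn(num_nodes, feat_dim))

    def forward(self, x):
        # x: (batch, timesteps, feat_dim) segment-level backbone features.
        # Similarity of each segment to each latent node.
        sim = torch.einsum('btd,nd->btn', x, self.nodes)
        attn = torch.sigmoid(sim)               # soft assignment (B, T, N)
        # Node activations: each segment re-expressed over the node set.
        return attn.unsqueeze(-1) * self.nodes  # (B, T, N, D)

# Usage: features from a CNN backbone over 64 video segments.
feats = torch.randn(2, 64, 1024)
out = LatentNodeAttention()(feats)
print(out.shape)  # torch.Size([2, 64, 128, 1024])
```

A subsequent graph-embedding stage could then learn relations (edges) between these node activations over time, which is where the minutes-long temporal structure would be captured.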