Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors

Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features and deep-learned features. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features are twofold: (i) TDDs are automatically learned and are more discriminative than hand-crafted features; (ii) TDDs take into account the intrinsic characteristics of the temporal dimension and introduce trajectory-constrained sampling and pooling strategies for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features and deep-learned features. Our method also achieves superior performance to the state of the art on these datasets (HMDB51 65.9%, UCF101 91.5%).
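To make the two normalization schemes and the trajectory-constrained pooling step concrete, the following is a minimal NumPy sketch under simplifying assumptions: the (T, H, W, N) array layout, the `ratio` map-to-video scale factor, the small epsilon, and the function names are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def spatiotemporal_normalize(C):
    """Divide each channel of the feature map C (T, H, W, N) by its
    maximum over the whole spatiotemporal extent of the video."""
    vmax = C.max(axis=(0, 1, 2), keepdims=True)   # one max per channel
    return C / (vmax + 1e-8)

def channel_normalize(C):
    """Divide each spatiotemporal position of C (T, H, W, N) by its
    maximum across channels."""
    vmax = C.max(axis=3, keepdims=True)           # one max per position
    return C / (vmax + 1e-8)

def trajectory_pooled_descriptor(C, trajectory, ratio):
    """Sum-pool normalized feature values along one trajectory.

    C          : normalized feature map, shape (T, H, W, N)
    trajectory : sequence of (t, x, y) points in video coordinates
    ratio      : scale factor from video resolution to this conv layer's map
    """
    desc = np.zeros(C.shape[3])
    for t, x, y in trajectory:
        xm = int(round(x * ratio))                # video coords -> map coords
        ym = int(round(y * ratio))
        desc += C[t, ym, xm, :]
    return desc
```

In this sketch, a TDD for one trajectory and one conv layer is simply the pooled vector returned by `trajectory_pooled_descriptor`; applying it to feature maps transformed by either normalization yields the two TDD variants described above.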