ST-HOI: A Spatial-Temporal Baseline for Human-Object Interaction Detection in Videos

Detecting human-object interactions (HOI) is an important step toward a comprehensive visual understanding of machines. While detecting non-temporal HOIs (e.g., sitting on a chair) from static images is feasible, it is unlikely even for humans to guess temporal-related HOIs (e.g., opening/closing a door) from a single video frame, where the neighboring frames play an essential role. However, conventional HOI methods operating on only static images have been used to predict temporal-related interactions, which is essentially guessing without temporal contexts and may lead to sub-optimal performance. In this paper, we bridge this gap by detecting video-based HOIs with explicit temporal information. We first show that a naive temporal-aware variant of a common action detection baseline does not work on video-based HOIs due to a feature-inconsistency issue. We then propose a simple yet effective architecture named Spatial-Temporal HOI Detection (ST-HOI) utilizing temporal information such as human and object trajectories, correctly-localized visual features, and spatial-temporal masking pose features. We construct a new video HOI benchmark dubbed VidHOI where our proposed approach serves as a solid baseline.
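To make the idea of trajectory-based, correctly-localized visual features concrete, the sketch below shows one plausible way to pool instance features along a trajectory: RoIAlign is applied with each frame's own box and the results are averaged over time. This is a minimal illustration under assumed shapes and parameter names (frame_feats, traj_boxes, spatial_scale), not the authors' actual ST-HOI implementation.

```python
# Minimal sketch (not the paper's exact implementation): extract
# trajectory-aligned visual features by applying RoIAlign with each
# frame's own box, then pooling the per-frame features over time.
import torch
from torchvision.ops import roi_align

def trajectory_roi_features(frame_feats, traj_boxes, output_size=7, spatial_scale=1.0 / 16):
    """
    frame_feats: [T, C, H, W] backbone feature maps for T frames.
    traj_boxes:  [T, 4] (x1, y1, x2, y2) box of one tracked instance per frame,
                 given in input-image coordinates.
    Returns a [C, output_size, output_size] feature averaged over time.
    """
    T = frame_feats.shape[0]
    # Prepend the frame index as the "batch index" expected by roi_align: [T, 5].
    frame_idx = torch.arange(T, dtype=frame_feats.dtype).unsqueeze(1)
    rois = torch.cat([frame_idx, traj_boxes.to(frame_feats.dtype)], dim=1)
    # Each frame uses its own box, so the pooled feature follows the instance
    # as it moves instead of reusing a single keyframe box for all frames.
    per_frame = roi_align(frame_feats, rois, output_size=output_size,
                          spatial_scale=spatial_scale, aligned=True)  # [T, C, S, S]
    return per_frame.mean(dim=0)

# Toy usage: 8 frames, 256-channel feature maps, a box drifting to the right.
feats = torch.randn(8, 256, 14, 14)
boxes = torch.tensor([[10.0 + 2 * t, 20.0, 80.0 + 2 * t, 120.0] for t in range(8)])
pooled = trajectory_roi_features(feats, boxes)
print(pooled.shape)  # torch.Size([256, 7, 7])
```

Temporal average pooling is used here only for brevity; any temporal aggregation (e.g., 3D convolution or attention over the per-frame features) could take its place.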