Human-Object Interaction Prediction in Videos through Gaze Following

Understanding human-object interactions (HOIs) in a video is essential to fully comprehend a visual scene. This line of research has been addressed by detecting HOIs from images and, more recently, from videos. However, the video-based HOI anticipation task in the third-person view remains understudied. In this paper, we design a framework to detect current HOIs and anticipate future HOIs in videos. We propose to leverage human gaze information, since people often fixate on an object before interacting with it. These gaze features, together with the scene contexts and the visual appearances of human-object pairs, are fused through a spatio-temporal transformer. To evaluate the model on the HOI anticipation task in a multi-person scenario, we propose a set of person-wise multi-label metrics. Our model is trained and validated on the VidHOI dataset, which contains videos capturing daily life and is currently the largest video HOI dataset. Experimental results on the HOI detection task show that our approach improves the baseline by a large relative margin of 36.3%. Moreover, we conduct an extensive ablation study to demonstrate the effectiveness of our modifications and extensions to the spatio-temporal transformer. Our code is publicly available at https://github.com/nizhf/hoi-prediction-gaze-transformer.