Visual Compositional Learning for Human-Object Interaction Detection

Human-Object Interaction (HOI) detection aims to localize and infer relationships between humans and objects in an image. It is challenging because the enormous number of possible combinations of object and verb types forms a long-tail distribution. We devise a deep Visual Compositional Learning (VCL) framework, a simple yet efficient framework that effectively addresses this problem. VCL first decomposes an HOI representation into object- and verb-specific features, and then composes new interaction samples in the feature space by stitching the decomposed features. The integration of decomposition and composition enables VCL to share object and verb features among different HOI samples and images, and to generate new interaction samples and new types of HOI, which largely alleviates the long-tail distribution problem and benefits low-shot and zero-shot HOI detection. Extensive experiments demonstrate that the proposed VCL effectively improves the generalization of HOI detection on HICO-DET and V-COCO and outperforms recent state-of-the-art methods on HICO-DET. Code is available at https://github.com/zhihou7/VCL.
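
The decompose-and-compose step described above can be illustrated with a minimal sketch, assuming hypothetical verb/object branches and a joint HOI classifier; the module and parameter names below are illustrative and not taken from the released code.

```python
import torch
import torch.nn as nn


class VCLSketch(nn.Module):
    """Minimal sketch of the decompose-and-compose idea (illustrative only)."""

    def __init__(self, feat_dim: int, num_hoi_classes: int):
        super().__init__()
        # Decompose an HOI region into verb- and object-specific features.
        self.verb_branch = nn.Linear(feat_dim, feat_dim)
        self.obj_branch = nn.Linear(feat_dim, feat_dim)
        # Classifier applied to a concatenated (verb, object) feature pair.
        self.classifier = nn.Linear(2 * feat_dim, num_hoi_classes)

    def forward(self, human_feats: torch.Tensor, object_feats: torch.Tensor):
        verb = self.verb_branch(human_feats)   # (N, D) verb-specific features
        obj = self.obj_branch(object_feats)    # (N, D) object-specific features

        # Real HOI samples: pair each verb feature with its own object feature.
        real_logits = self.classifier(torch.cat([verb, obj], dim=-1))

        # Composed HOI samples: stitch verb features with object features taken
        # from other pairs in the batch, yielding new (possibly unseen) verb-object
        # combinations whose labels follow from the composed verb and object labels.
        perm = torch.randperm(obj.size(0))
        composed_logits = self.classifier(torch.cat([verb, obj[perm]], dim=-1))
        return real_logits, composed_logits
```

In this sketch, both the real and composed logits would be trained with the same HOI classification loss, so the classifier sees verb-object combinations beyond those annotated in any single image; the actual VCL framework composes features both within and across images.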