DVANet: Disentangling View and Action Features for Multi-View Action Recognition

Siddiqui, Nyle; Tirupattur, Praveen; Shah, Mubarak
Abstract

In this work, we present a novel approach to multi-view action recognition in which we guide learned action representations to be separated from view-relevant information in a video. Classifying action instances captured from multiple viewpoints is harder because background, occlusion, and the visibility of the captured action all vary across camera angles. To tackle the various problems introduced in multi-view action recognition, we propose a novel configuration of learnable transformer decoder queries, in conjunction with two supervised contrastive losses, to enforce the learning of action features that are robust to shifts in viewpoint. Our disentangled feature learning occurs in two stages: the transformer decoder uses separate queries to learn action and view information separately, which are then further disentangled using our two contrastive losses. We show that our model and training method significantly outperform all other uni-modal models on four multi-view action recognition datasets: NTU RGB+D, NTU RGB+D 120, PKU-MMD, and N-UCLA. Compared to previous RGB works, we see maximal improvements of 1.5%, 4.8%, 2.2%, and 4.8% on each dataset, respectively.
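
To make the two-stage idea concrete, below is a minimal PyTorch sketch of the general pattern the abstract describes: separate learnable decoder queries decode the same video tokens into an action embedding and a view embedding, and a generic supervised contrastive loss (in the style of Khosla et al., 2020) can then be applied to each factor. All class names, dimensions, and the exact loss pairing are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentanglingDecoder(nn.Module):
    """Sketch: one learnable query per factor (action, view) decodes
    shared video tokens into two separate embeddings."""

    def __init__(self, dim=256, num_heads=8, num_layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.action_query = nn.Parameter(torch.randn(1, 1, dim))
        self.view_query = nn.Parameter(torch.randn(1, 1, dim))

    def forward(self, video_tokens):
        # video_tokens: (batch, num_tokens, dim) from a video backbone.
        b = video_tokens.size(0)
        queries = torch.cat([self.action_query, self.view_query], dim=1)
        out = self.decoder(queries.expand(b, -1, -1), video_tokens)
        return out[:, 0], out[:, 1]  # action embedding, view embedding


def supervised_contrastive(features, labels, temperature=0.1):
    """Generic supervised contrastive loss: pull embeddings sharing a
    label together, push all others apart."""
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature
    n = z.size(0)
    off_diag = (~torch.eye(n, dtype=torch.bool, device=z.device)).float()
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)).float() * off_diag
    log_prob = sim - torch.log((torch.exp(sim) * off_diag).sum(1, keepdim=True))
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()


# Illustrative usage: one contrastive loss over action labels and one over
# view (camera) labels, mirroring the abstract's two-loss setup at a high level.
decoder = DisentanglingDecoder()
tokens = torch.randn(8, 49, 256)            # dummy backbone features
action_feat, view_feat = decoder(tokens)
action_labels = torch.randint(0, 5, (8,))   # dummy action classes
view_labels = torch.randint(0, 3, (8,))     # dummy camera ids
loss = supervised_contrastive(action_feat, action_labels) \
     + supervised_contrastive(view_feat, view_labels)
```

In this sketch, the action-label contrastive term encourages action embeddings to cluster regardless of camera, while the view-label term pushes view information into the second query's output, which is one plausible reading of how the two losses separate the factors.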