Enhancing Video Transformers for Action Understanding with VLM-aided Training

Lu, Hui; Jian, Hu; Poppe, Ronald; Salah, Albert Ali
Abstract

Owing to their ability to extract relevant spatio-temporal video embeddings, Vision Transformers (ViTs) are currently the best performing models in video action understanding. However, their generalization over domains or datasets is somewhat limited. In contrast, Visual Language Models (VLMs) have demonstrated exceptional generalization performance, but are currently unable to process videos. Consequently, they cannot extract spatio-temporal patterns that are crucial for action understanding. In this paper, we propose the Four-tiered Prompts (FTP) framework that takes advantage of the complementary strengths of ViTs and VLMs. We retain ViTs' strong spatio-temporal representation ability but improve the visual encodings to be more comprehensive and general by aligning them with VLM outputs. The FTP framework adds four feature processors that focus on specific aspects of human action in videos: action category, action components, action description, and context information. The VLMs are only employed during training, and inference incurs a minimal computation cost. Our approach consistently yields state-of-the-art performance. For instance, we achieve remarkable top-1 accuracy of 93.8% on Kinetics-400 and 83.4% on Something-Something V2, surpassing VideoMAEv2 by 2.8% and 2.6%, respectively.
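To make the idea concrete, the sketch below illustrates one plausible way such a training-only alignment could be wired up. It is not the authors' implementation: the module names (FeatureProcessor, FTPHead), the cosine alignment loss, and the projection layers are all illustrative assumptions; only the high-level structure (four aspect-specific processors on top of a ViT embedding, aligned with VLM outputs during training and dropped at inference) follows the abstract.

```python
# Minimal sketch (assumed design, not the paper's code): a ViT video embedding
# is refined by four "feature processors", one per aspect (action category,
# action components, action description, context). During training each
# processor output is aligned with a frozen VLM text embedding of that aspect;
# at inference the VLM embeddings are simply omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureProcessor(nn.Module):
    """Projects the ViT video embedding toward one aspect-specific space."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim))

    def forward(self, x):
        return self.proj(x)


class FTPHead(nn.Module):
    """Four aspect processors plus a classifier over their concatenation."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.processors = nn.ModuleList(FeatureProcessor(dim) for _ in range(4))
        self.classifier = nn.Linear(4 * dim, num_classes)

    def forward(self, video_emb, vlm_text_embs=None):
        # video_emb: (B, D) pooled ViT embedding of the clip
        feats = [p(video_emb) for p in self.processors]        # 4 tensors of (B, D)
        logits = self.classifier(torch.cat(feats, dim=-1))     # (B, num_classes)

        align_loss = video_emb.new_zeros(())
        if vlm_text_embs is not None:                          # training only
            # Pull each processor output toward the corresponding VLM text
            # embedding via a cosine-distance loss (assumed choice of loss).
            for f, t in zip(feats, vlm_text_embs):             # each t: (B, D)
                align_loss = align_loss + (1.0 - F.cosine_similarity(f, t, dim=-1)).mean()
        return logits, align_loss


# Usage: during training, pass the four VLM text embeddings to get the
# alignment loss; at inference, omit them so the VLM adds no compute cost:
#   logits, align_loss = ftp_head(vit(video), vlm_text_embs)   # training
#   logits, _          = ftp_head(vit(video))                  # inference
```

Because the alignment targets are only needed to compute the auxiliary loss, dropping them at inference leaves just four small projections and a linear classifier on top of the ViT, consistent with the abstract's claim of minimal extra inference cost.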