
Syntactically Guided Generative Embeddings for Zero-Shot Skeleton Action Recognition

Gupta, Pranay; Sharma, Divyanshu; Sarvadevabhatla, Ravi Kiran
Abstract

We introduce SynSE, a novel syntactically guided generative approach for Zero-Shot Learning (ZSL). Our end-to-end approach learns progressively refined generative embedding spaces constrained within and across the involved modalities (visual, language). The inter-modal constraints are defined between action sequence embeddings and embeddings of Parts of Speech (PoS) tagged words in the corresponding action description. We deploy SynSE for the task of skeleton-based action sequence recognition. Our design choices enable SynSE to generalize compositionally, i.e., recognize sequences whose action descriptions contain words not encountered during training. We also extend our approach to the more challenging Generalized Zero-Shot Learning (GZSL) problem via a confidence-based gating mechanism. We are the first to present zero-shot skeleton action recognition results on the large-scale NTU-60 and NTU-120 skeleton action datasets with multiple splits. Our results demonstrate SynSE's state-of-the-art performance in both ZSL and GZSL settings compared to strong baselines on the NTU-60 and NTU-120 datasets. The code and pretrained models are available at https://github.com/skelemoa/synse-zsl
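To illustrate the confidence-based gating idea mentioned for the GZSL setting, here is a minimal sketch: a sample is routed to a seen-class classifier when that classifier is confident, and otherwise falls back to unseen-class scores. Note this is only an assumption-laden illustration of the general technique, not the authors' implementation (the paper's actual gate and its parameters may differ; the function name, fixed threshold, and toy inputs below are hypothetical).

```python
import numpy as np

def gated_gzsl_predict(seen_probs, unseen_scores, threshold):
    """Route each sample to a seen- or unseen-class prediction based on
    the seen classifier's maximum softmax confidence (hypothetical gate;
    SynSE's actual gating mechanism may be learned rather than a fixed
    threshold)."""
    preds = []
    for p_seen, s_unseen in zip(seen_probs, unseen_scores):
        if p_seen.max() >= threshold:
            # confident: trust the seen-class classifier
            preds.append(("seen", int(np.argmax(p_seen))))
        else:
            # uncertain: defer to the zero-shot (unseen-class) scores
            preds.append(("unseen", int(np.argmax(s_unseen))))
    return preds

# Toy example: 2 samples, 3 seen classes, 2 unseen classes.
seen_probs = np.array([[0.9, 0.05, 0.05],   # confident -> seen branch
                       [0.4, 0.3, 0.3]])    # uncertain -> unseen branch
unseen_scores = np.array([[0.2, 0.8],
                          [0.7, 0.3]])
print(gated_gzsl_predict(seen_probs, unseen_scores, threshold=0.6))
# → [('seen', 0), ('unseen', 0)]
```

The gate matters in GZSL because a classifier trained only on seen classes tends to assign high scores to seen labels even for unseen-class inputs; a confidence-based split mitigates this bias.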
