
MotionAGFormer: Enhancing 3D Human Pose Estimation with a Transformer-GCNFormer Network

Mehraban, Soroush; Adeli, Vida; Taati, Babak
Abstract

Recent transformer-based approaches have demonstrated excellent performance in 3D human pose estimation. However, they take a holistic view and, by encoding global relationships between all the joints, they do not capture local dependencies precisely. In this paper, we present a novel Attention-GCNFormer (AGFormer) block that splits the channels between two parallel Transformer and GCNFormer streams. Our proposed GCNFormer module exploits the local relationships between adjacent joints, producing a new representation that is complementary to the Transformer output. By fusing these two representations in an adaptive way, AGFormer is better able to learn the underlying 3D structure. By stacking multiple AGFormer blocks, we propose MotionAGFormer in four different variants, which can be chosen based on the speed-accuracy trade-off. We evaluate our model on two popular benchmark datasets: Human3.6M and MPI-INF-3DHP. MotionAGFormer-B achieves state-of-the-art results, with P1 errors of 38.4mm and 16.2mm, respectively. Remarkably, it uses a quarter of the parameters and is three times more computationally efficient than the previous leading model on the Human3.6M dataset. Code and models are available at https://github.com/TaatiTeam/MotionAGFormer.
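The core idea of the AGFormer block, two parallel streams (a global attention stream and a local graph stream over the skeleton) fused by per-channel adaptive weights, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the single-head attention, normalized-adjacency graph convolution, and fixed `alpha` fusion weights are simplifying assumptions; the real model uses learned modules (see the linked repository).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def agformer_block(x, adjacency, alpha):
    """Sketch of one AGFormer-style block.

    x:         (J, C) per-joint feature matrix
    adjacency: (J, J) skeleton adjacency with self-loops (local stream)
    alpha:     (C,) per-channel fusion weights in [0, 1]
                (hypothetical stand-in for the learned adaptive fusion)
    """
    # Global stream: self-attention over all joints, capturing
    # global relationships between every pair of joints.
    scores = x @ x.T / np.sqrt(x.shape[1])
    global_out = softmax(scores, axis=-1) @ x

    # Local stream: graph convolution that mixes each joint only
    # with its skeletal neighbours (degree-normalized adjacency).
    deg = adjacency.sum(axis=1, keepdims=True)
    local_out = (adjacency / deg) @ x

    # Adaptive fusion: per-channel convex combination of the
    # complementary global and local representations.
    return alpha * global_out + (1 - alpha) * local_out
```

Stacking several such blocks (with the channel split, residuals, and MLPs of a standard Transformer layer around them) yields the MotionAGFormer family; the four variants trade depth and width for speed.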
