MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose Estimation in Video

Recent transformer-based solutions have been introduced to estimate 3D human pose from 2D keypoint sequences by considering body joints among all frames globally to learn spatio-temporal correlation. We observe that the motions of different joints differ significantly. However, previous methods cannot efficiently model the solid inter-frame correspondence of each joint, leading to insufficient learning of spatio-temporal correlation. We propose MixSTE (Mixed Spatio-Temporal Encoder), which has a temporal transformer block to separately model the temporal motion of each joint and a spatial transformer block to learn inter-joint spatial correlation. These two blocks are utilized alternately to obtain better spatio-temporal feature encoding. In addition, the network output is extended from the central frame to all frames of the input video, thereby improving the coherence between the input and output sequences. Extensive experiments are conducted on three benchmarks (Human3.6M, MPI-INF-3DHP, and HumanEva). The results show that our model outperforms the state-of-the-art approach by 10.9% P-MPJPE and 7.6% MPJPE. The code is available at https://github.com/JinluZhang1126/MixSTE.
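The alternating spatial/temporal design described above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration, not the paper's implementation: the function names (`mixste_sketch`, `self_attention`), the identity-projection single-head attention, and the tensor layout `(frames, joints, channels)` are all hypothetical simplifications. The spatial block attends across joints within each frame; the temporal block attends across frames for each joint separately; the output keeps one feature per input frame (seq2seq), not just the central frame.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (N, C) tokens; single-head attention with identity Q/K/V
    # projections (a simplification -- the real blocks use learned
    # projections, multiple heads, MLPs, and residual connections)
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

def mixste_sketch(x, loops=2):
    # x: (T, J, C) = (frames, joints, channels)
    T, J, C = x.shape
    for _ in range(loops):
        # spatial block: attend across the J joints within each frame
        x = np.stack([self_attention(x[t]) for t in range(T)])
        # temporal block: attend across the T frames of each joint separately,
        # so every joint gets its own temporal motion model
        x = np.stack([self_attention(x[:, j]) for j in range(J)], axis=1)
    # seq2seq: one output feature per input frame, not only the center frame
    return x

# usage: 9 frames, 17 joints (Human3.6M-style skeleton), 8-dim features
rng = np.random.default_rng(0)
out = mixste_sketch(rng.normal(size=(9, 17, 8)))
```

Note the design point this makes concrete: because the temporal block operates on each joint's own frame sequence, joints with very different motion patterns (e.g. wrist vs. hip) are modeled independently rather than being mixed into a single per-frame token.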