GaitMixer: Skeleton-based Gait Representation Learning via Wide-spectrum Multi-axial Mixer

Most existing gait recognition methods are appearance-based, relying on silhouettes extracted from video of human walking. The less-investigated skeleton-based methods instead learn gait dynamics directly from 2D/3D human skeleton sequences, which are in principle more robust to appearance changes caused by clothing, hairstyles, and carried objects. However, the performance of skeleton-based solutions still lags far behind that of appearance-based ones. This paper aims to close that gap by proposing a novel network model, GaitMixer, which learns a more discriminative gait representation from skeleton sequence data. In particular, GaitMixer follows a heterogeneous multi-axial mixer architecture: a spatial self-attention mixer followed by a temporal large-kernel convolution mixer, which together capture rich multi-frequency signals in the gait feature maps. Experiments on the widely used CASIA-B gait database demonstrate that GaitMixer outperforms previous state-of-the-art skeleton-based methods by a large margin while achieving competitive performance against representative appearance-based solutions. Code will be available at https://github.com/exitudio/gaitmixer
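The heterogeneous multi-axial design described above can be illustrated with a minimal PyTorch sketch: one mixer attends across the joint axis with self-attention, the other mixes across the frame axis with a large-kernel depthwise convolution. This is not the authors' implementation; the module names, embedding size, head count, and kernel size below are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SpatialAttentionMixer(nn.Module):
    """Mix information across skeleton joints with multi-head self-attention."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (batch*frames, joints, dim)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out  # residual connection


class TemporalConvMixer(nn.Module):
    """Mix information across frames with a large-kernel depthwise 1D conv."""

    def __init__(self, dim, kernel_size=31):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # groups=dim makes the convolution depthwise (per-channel temporal filter)
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)

    def forward(self, x):  # x: (batch*joints, frames, dim)
        h = self.norm(x).transpose(1, 2)          # -> (batch*joints, dim, frames)
        return x + self.conv(h).transpose(1, 2)   # residual connection


class GaitMixerBlock(nn.Module):
    """One heterogeneous block: spatial attention, then temporal convolution."""

    def __init__(self, dim, heads=4, kernel_size=31):
        super().__init__()
        self.spatial = SpatialAttentionMixer(dim, heads)
        self.temporal = TemporalConvMixer(dim, kernel_size)

    def forward(self, x):  # x: (batch, frames, joints, dim)
        B, T, J, C = x.shape
        # Spatial mixing: fold frames into the batch, attend over joints.
        x = self.spatial(x.reshape(B * T, J, C)).reshape(B, T, J, C)
        # Temporal mixing: fold joints into the batch, convolve over frames.
        x = x.permute(0, 2, 1, 3).reshape(B * J, T, C)
        x = self.temporal(x).reshape(B, J, T, C).permute(0, 2, 1, 3)
        return x
```

Alternating the two mixing axes in this way lets self-attention model joint-to-joint structure within a frame while the wide temporal kernel captures low-frequency gait periodicity across many frames; the shapes in and out of the block are identical, so blocks can be stacked.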