GaitMM: Multi-Granularity Motion Sequence Learning for Gait Recognition

Gait recognition aims to identify individual-specific walking patterns by observing the distinct periodic movements of each body part. However, most existing methods treat all parts equally and fail to account for the data redundancy caused by the differing step frequencies and sampling rates of gait sequences. In this study, we propose a multi-granularity motion representation network (GaitMM) for gait sequence learning. In GaitMM, we design a combined full-body and fine-grained sequence learning module (FFSL) to explore part-independent spatio-temporal representations. Moreover, we utilize a frame-wise compression strategy, referred to as multi-scale motion aggregation (MSMA), to capture discriminative information in the gait sequence. Experiments on two public datasets, CASIA-B and OUMVLP, show that our approach achieves state-of-the-art performance.
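
As a concrete illustration of the multi-granularity idea described above, below is a minimal PyTorch sketch: an FFSL-style module that fuses a full-body branch with part-wise (height-split) branches, followed by an MSMA-style frame-wise aggregation over several temporal scales. The module internals, layer sizes, part-splitting scheme, and class names are assumptions made here for illustration only, not the authors' implementation.

# Illustrative sketch (not the authors' code): combine full-body and part-level
# spatio-temporal features, then aggregate frames at multiple temporal scales.
import torch
import torch.nn as nn


class FFSL(nn.Module):
    # Assumed combined full-body and fine-grained (part-wise) sequence learning.
    def __init__(self, in_ch=1, out_ch=64, num_parts=4):
        super().__init__()
        self.num_parts = num_parts
        self.full_body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True)
        )
        # One lightweight conv branch per body part for part-independent features.
        self.part_convs = nn.ModuleList(
            nn.Sequential(nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True))
            for _ in range(num_parts)
        )

    def forward(self, x):  # x: (N, C, T, H, W) silhouette sequence
        full = self.full_body(x)
        parts = torch.chunk(x, self.num_parts, dim=3)  # split along height
        fine = torch.cat([conv(p) for conv, p in zip(self.part_convs, parts)], dim=3)
        return full + fine  # fuse global and part-level features


class MSMA(nn.Module):
    # Assumed multi-scale frame-wise aggregation: temporal pooling at several scales.
    def __init__(self, scales=(1, 3, 5)):
        super().__init__()
        self.scales = scales

    def forward(self, x):  # x: (N, C, T, H, W)
        pooled = []
        for s in self.scales:
            # Local temporal max pooling at scale s, then collapse the time axis.
            y = nn.functional.max_pool3d(
                x, kernel_size=(s, 1, 1), stride=(1, 1, 1), padding=(s // 2, 0, 0)
            )
            pooled.append(y.max(dim=2).values)
        return torch.stack(pooled, dim=0).mean(dim=0)  # (N, C, H, W)


class GaitMMSketch(nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        self.ffsl = FFSL()
        self.msma = MSMA()
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim))

    def forward(self, x):  # x: (N, 1, T, H, W) -> (N, embed_dim) gait embedding
        return self.head(self.msma(self.ffsl(x)))


if __name__ == "__main__":
    seq = torch.randn(2, 1, 30, 64, 44)  # two 30-frame silhouette sequences
    print(GaitMMSketch()(seq).shape)  # torch.Size([2, 256])

In such a sketch, the resulting embeddings would typically be trained with a metric-learning objective (e.g., a triplet loss) so that sequences of the same subject map to nearby points in the embedding space.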