Learning Clothing and Pose Invariant 3D Shape Representation for Long-Term Person Re-Identification

Long-Term Person Re-Identification (LT-ReID) has become increasingly important in computer vision and biometrics. In this work, we aim to extend LT-ReID beyond pedestrian recognition to include a wider range of real-world human activities while still accounting for cloth-changing scenarios over large time gaps. This setting poses additional challenges due to the geometric misalignment and appearance ambiguity caused by the diversity of human pose and clothing. To address these challenges, we propose a new approach, 3DInvarReID, for (i) disentangling identity from non-identity components (pose, clothing shape, and texture) of 3D clothed humans, and (ii) jointly reconstructing accurate 3D clothed body shapes and learning discriminative features of naked body shapes for person ReID. To better evaluate our study of LT-ReID, we collect a real-world dataset called CCDA, which contains a wide variety of human activities and clothing changes. Experiments demonstrate the superior performance of our approach for person ReID.
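To make the disentangling-plus-joint-training idea concrete, below is a minimal PyTorch sketch of the general pattern the abstract describes: an encoder splits an input representation into an identity latent (naked body shape) and non-identity latents (pose, clothing shape/texture), a decoder reconstructs the clothed representation from all latents, and an identity classifier trains only on the identity latent. All module names, latent dimensions, and loss weights here are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch of disentangled identity / non-identity latents with a
# joint reconstruction + ReID objective. Names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledEncoder(nn.Module):
    """Split an input feature into identity and non-identity latents."""

    def __init__(self, in_dim=2048, id_dim=256, nonid_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU())
        self.id_head = nn.Linear(1024, id_dim)        # naked body shape (identity)
        self.pose_head = nn.Linear(1024, nonid_dim)   # pose (non-identity)
        self.cloth_head = nn.Linear(1024, nonid_dim)  # clothing shape + texture

    def forward(self, feat):
        h = self.backbone(feat)
        return self.id_head(h), self.pose_head(h), self.cloth_head(h)


class JointModel(nn.Module):
    """Jointly reconstruct the clothed representation and classify identity."""

    def __init__(self, num_ids=1000, in_dim=2048, id_dim=256, nonid_dim=256):
        super().__init__()
        self.encoder = DisentangledEncoder(in_dim, id_dim, nonid_dim)
        # Decoder maps all latents back to the clothed representation.
        self.decoder = nn.Sequential(
            nn.Linear(id_dim + 2 * nonid_dim, 1024), nn.ReLU(),
            nn.Linear(1024, in_dim),
        )
        # ReID features come from the identity latent only, so pose and
        # clothing changes do not perturb the matching feature.
        self.classifier = nn.Linear(id_dim, num_ids)

    def forward(self, feat):
        z_id, z_pose, z_cloth = self.encoder(feat)
        recon = self.decoder(torch.cat([z_id, z_pose, z_cloth], dim=-1))
        logits = self.classifier(z_id)
        return recon, logits, z_id


def joint_loss(recon, feat, logits, labels, w_recon=1.0, w_id=1.0):
    # Reconstruction keeps the clothed representation accurate, while
    # cross-entropy makes the identity latent discriminative for ReID.
    return (w_recon * F.mse_loss(recon, feat)
            + w_id * F.cross_entropy(logits, labels))
```

At test time, only the identity latent `z_id` would be used for gallery-probe matching (e.g., by cosine similarity), which is what makes the learned representation clothing- and pose-invariant under this sketch's assumptions.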