
Clothes-Changing Person Re-identification with RGB Modality Only

Gu, Xinqian; Chang, Hong; Ma, Bingpeng; Bai, Shutao; Shan, Shiguang; Chen, Xilin
Abstract

The key to addressing clothes-changing person re-identification (re-id) is to extract clothes-irrelevant features, e.g., face, hairstyle, body shape, and gait. Most current works mainly focus on modeling body shape from multi-modality information (e.g., silhouettes and sketches), but do not make full use of the clothes-irrelevant information in the original RGB images. In this paper, we propose a Clothes-based Adversarial Loss (CAL) to mine clothes-irrelevant features from the original RGB images by penalizing the predictive power of the re-id model w.r.t. clothes. Extensive experiments demonstrate that using RGB images only, CAL outperforms all state-of-the-art methods on widely used clothes-changing person re-id benchmarks. Besides, compared with images, videos contain richer appearance and additional temporal information, which can be used to model proper spatiotemporal patterns to assist clothes-changing re-id. Since there is no publicly available clothes-changing video re-id dataset, we contribute a new dataset named CCVID and show that there exists much room for improvement in modeling spatiotemporal information. The code and new dataset are available at: https://github.com/guxinqian/Simple-CCReID.
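To illustrate the general idea of penalizing a feature extractor's predictive power w.r.t. clothes, below is a minimal PyTorch sketch. It uses a standard gradient-reversal adversary over per-image clothes labels; this is not the paper's exact CAL formulation (see the linked repository for that), and names such as ClothesAdversary, num_clothes, and the feature dimension are illustrative assumptions.

```python
# Hedged sketch: an adversarial clothes classifier on top of re-id features.
# Assumptions: backbone features of dim 2048 and integer clothes-ID labels;
# the real CAL defined in the paper differs from this gradient-reversal variant.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward,
    so minimizing the clothes loss for the classifier maximizes it for the backbone."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class ClothesAdversary(nn.Module):
    """Clothes-ID classifier attached through gradient reversal, pushing the
    backbone to discard clothes-predictive information."""

    def __init__(self, feat_dim: int, num_clothes: int):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_clothes)

    def forward(self, feats: torch.Tensor, clothes_labels: torch.Tensor) -> torch.Tensor:
        logits = self.classifier(GradReverse.apply(feats))
        return F.cross_entropy(logits, clothes_labels)


if __name__ == "__main__":
    feats = torch.randn(8, 2048, requires_grad=True)  # backbone features (stand-in)
    clothes_labels = torch.randint(0, 10, (8,))       # per-image clothes IDs (stand-in)
    adv = ClothesAdversary(feat_dim=2048, num_clothes=10)
    adv_loss = adv(feats, clothes_labels)
    adv_loss.backward()  # in training, combine with the identity loss, e.g. id_loss + lam * adv_loss
```

In practice such a term would be weighted against the usual identity-classification loss; the weighting scheme and the precise adversarial objective used by CAL should be taken from the official code rather than this sketch.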
