Benchmarks for Corruption Invariant Person Re-identification

When deploying person re-identification (ReID) models in safety-critical applications, it is pivotal to understand the robustness of the model against a diverse array of image corruptions. However, current evaluations of person ReID only consider performance on clean datasets and ignore images in various corrupted scenarios. In this work, we comprehensively establish six ReID benchmarks for learning corruption invariant representations. In the field of ReID, we are the first to conduct an exhaustive study on corruption invariant learning in single- and cross-modality datasets, including Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01. After reproducing and examining the robustness performance of 21 recent ReID methods, we have some observations: 1) transformer-based models are more robust to corrupted images than CNN-based models; 2) increasing the probability of random erasing (a commonly used augmentation method) hurts corruption robustness; 3) cross-dataset generalization improves as corruption robustness increases. Building on these observations, we propose a strong baseline for both single- and cross-modality ReID datasets that achieves improved robustness against diverse corruptions. Our code is available at https://github.com/MinghuiChen43/CIL-ReID.
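
To make the evaluation setting concrete, below is a minimal sketch of how corrupted query images might be generated for a robustness benchmark, and where the random-erasing probability from observation 2 appears in a standard training pipeline. It assumes the third-party `imagecorruptions` package (pip install imagecorruptions) and torchvision; the image path, input size, and the helper name `corrupted_queries` are illustrative assumptions, not the benchmark's actual code.

```python
import numpy as np
from PIL import Image
from imagecorruptions import corrupt, get_corruption_names
import torchvision.transforms as T

# Observation 2 concerns the probability of random erasing; in torchvision
# this is the `p` argument below. Per the paper's finding, increasing p
# hurts corruption robustness, so this knob trades clean-set augmentation
# strength against robustness. The 256x128 input size is a common ReID
# convention, assumed here for illustration.
train_transform = T.Compose([
    T.Resize((256, 128)),
    T.ToTensor(),
    T.RandomErasing(p=0.5),  # the augmentation probability under study
])

def corrupted_queries(path, severity=3):
    """Yield one corrupted copy of a query image per corruption type.

    `severity` ranges from 1 (mild) to 5 (severe), following the
    ImageNet-C-style protocol implemented by `imagecorruptions`.
    """
    img = np.asarray(Image.open(path).convert("RGB"))
    for name in get_corruption_names():  # e.g. 'gaussian_noise', 'fog', ...
        yield name, corrupt(img, corruption_name=name, severity=severity)

# Evaluation loop (sketch): extract features from each corrupted query
# and compute Rank-1 / mAP against the gallery, averaging over
# corruption types and severities.
for name, img in corrupted_queries("query/0001_c1s1.jpg"):
    pass  # feed `img` to the ReID feature extractor here
```

A robustness score can then be reported as the mean Rank-1/mAP over all corruption types and severities, alongside the usual clean-set metrics.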