Noisy-Correspondence Learning for Text-to-Image Person Re-identification

Text-to-image person re-identification (TIReID) is a compelling topic in the cross-modal community, which aims to retrieve the target person based on a textual query. Although numerous TIReID methods have been proposed and have achieved promising performance, they implicitly assume that the training image-text pairs are correctly aligned, which is not always the case in real-world scenarios. In practice, under-correlated or even false-correlated image-text pairs, a.k.a. noisy correspondence (NC), inevitably exist due to low image quality and annotation errors. To address this problem, we propose a novel Robust Dual Embedding method (RDE) that can learn robust visual-semantic associations even with NC. Specifically, RDE consists of two main components: 1) a Confident Consensus Division (CCD) module that leverages the dual-grained decisions of dual embedding modules to obtain a consensus set of clean training data, which enables the model to learn correct and reliable visual-semantic associations; and 2) a Triplet Alignment Loss (TAL) that relaxes the conventional triplet ranking loss, which relies on only the hardest negative samples, to a log-exponential upper bound over all negative ones, thus preventing model collapse under NC while still emphasizing hard negative samples for promising performance. We conduct extensive experiments on three public benchmarks, namely CUHK-PEDES, ICFG-PEDES, and RSTPReID, to evaluate the performance and robustness of our RDE. Our method achieves state-of-the-art results both with and without synthetic noisy correspondences on all three datasets. Code is available at https://github.com/QinYang79/RDE.
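
The abstract describes TAL only at a high level; the sketch below illustrates the underlying idea in PyTorch, namely replacing the hardest-negative max in a triplet ranking loss with its log-sum-exp upper bound over all negatives. It assumes a batch of L2-normalized embeddings, and the margin and temperature values are hypothetical placeholders rather than the authors' exact formulation (see the repository above for the official implementation).

```python
import torch
import torch.nn.functional as F

def triplet_alignment_loss(sim: torch.Tensor, margin: float = 0.2,
                           tau: float = 0.02) -> torch.Tensor:
    """TAL-style loss sketch.

    sim: (B, B) image-text similarity matrix where sim[i, i] is the
    matched (positive) pair. The conventional triplet ranking loss uses
    only the single hardest negative, max_j sim[i, j]; here that max is
    relaxed to its smooth upper bound
        tau * logsumexp(sim_neg / tau) >= max(sim_neg),
    so every negative contributes, weighted by its difficulty.
    """
    B = sim.size(0)
    pos = sim.diag()  # similarity of each matched pair, shape (B,)

    # Exclude the positive (diagonal) entries from the negative set.
    diag_mask = torch.eye(B, dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(diag_mask, float('-inf'))  # exp(-inf) = 0

    # Log-sum-exp bound over negatives, in both retrieval directions.
    lse_i2t = tau * torch.logsumexp(neg / tau, dim=1)      # image -> text
    lse_t2i = tau * torch.logsumexp(neg.t() / tau, dim=1)  # text -> image

    # Hinge on the relaxed margin violation, averaged over the batch.
    loss = F.relu(margin - pos + lse_i2t) + F.relu(margin - pos + lse_t2i)
    return loss.mean()
```

With L2-normalized image and text embeddings, `sim = img_emb @ txt_emb.t()` yields the required similarity matrix. As `tau` shrinks toward 0 the bound tightens to the hardest-negative triplet loss, while larger `tau` spreads the gradient over all negatives, which is what makes the objective less brittle when some "positives" are in fact noisy correspondences.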