Learning to Rematch Mismatched Pairs for Robust Cross-Modal Retrieval

Collecting well-matched multimedia datasets is crucial for training cross-modal retrieval models. However, in real-world scenarios, massive multimodal data are harvested from the Internet, which inevitably contain Partially Mismatched Pairs (PMPs). Such semantically irrelevant data undoubtedly harm cross-modal retrieval performance. Previous efforts tend to mitigate this problem by estimating a soft correspondence to down-weight the contribution of PMPs. In this paper, we address the challenge from a new perspective: the potential semantic similarity among unpaired samples makes it possible to excavate useful knowledge from mismatched pairs. To this end, we propose L2RM, a general framework based on Optimal Transport (OT) that learns to rematch mismatched pairs. Specifically, L2RM generates refined alignments by seeking a minimal-cost transport plan across modalities. To formalize the rematching idea in OT, we first propose a self-supervised cost function that automatically learns an explicit similarity-to-cost mapping. Second, we model a partial OT problem that restricts transport among false positives to further boost the refined alignments. Extensive experiments on three benchmarks demonstrate that L2RM significantly improves the robustness of existing models against PMPs. The code is available at https://github.com/hhc1997/L2RM.
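The core rematching idea can be illustrated with a minimal sketch: given a cross-modal similarity matrix for a batch, derive a cost matrix and solve an entropic-regularized OT problem with Sinkhorn iterations, then read refined alignments off the transport plan. Note the assumptions: the cost here is simply `1 - similarity`, whereas L2RM *learns* the similarity-to-cost mapping, and this sketch solves full (not partial) OT with uniform marginals.

```python
import numpy as np

def sinkhorn_plan(cost, eps=0.05, n_iters=200):
    """Entropic-regularized OT via Sinkhorn iterations.

    Returns a transport plan whose marginals are uniform over
    rows (e.g. images) and columns (e.g. texts).
    """
    n, m = cost.shape
    a = np.full(n, 1.0 / n)          # uniform source marginal
    b = np.full(m, 1.0 / m)          # uniform target marginal
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)            # scale columns to match b
        u = a / (K @ v)              # scale rows to match a
    return u[:, None] * K * v[None, :]

# Toy batch with mismatched pairs: the dataset pairs image i with
# text i, but images 1 and 2 actually match texts 2 and 1.
sim = np.array([[0.9, 0.1, 0.2],
                [0.2, 0.1, 0.8],
                [0.1, 0.9, 0.1]])
plan = sinkhorn_plan(1.0 - sim)      # hypothetical cost: 1 - similarity
rematched = plan.argmax(axis=1)      # refined alignment per image
```

Here `rematched` recovers the corrected pairing `[0, 2, 1]`: mass in the transport plan concentrates on the semantically closest cross-modal partners rather than the noisy dataset annotation.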