Cross-Modal Retrieval With Noisy 3
Metrics
Image-to-text R@1
Image-to-text R@5
Image-to-text R@10
Text-to-image R@1
Text-to-image R@5
Text-to-image R@10
R-Sum (the sum of the six recall scores above; a sketch of how these recalls are computed follows this list)
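For reference, the sketch below shows how Recall@K and R-Sum are typically computed from a query-by-candidate similarity matrix. It is a minimal illustration, not this benchmark's official evaluation code: it assumes a single ground-truth match per query (placed on the diagonal of the matrix), whereas image-text benchmarks often pair each image with several captions and count a hit if any of them ranks in the top k. The `recall_at_k` helper and the random placeholder scores are illustrative only.

```python
# Minimal sketch of Recall@K and R-Sum, assuming one ground-truth
# candidate per query on the diagonal of the similarity matrix.
import numpy as np

def recall_at_k(sim: np.ndarray, k: int) -> float:
    """Percentage of queries whose ground-truth match (index i for query i)
    appears among the k highest-scoring candidates."""
    top_k = np.argsort(-sim, axis=1)[:, :k]          # top-k candidate indices per query
    hits = (top_k == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return 100.0 * hits.mean()

# sim_i2t[i, j]: similarity of image i to caption j (e.g. cosine similarity
# of embeddings); text-to-image retrieval uses the transpose.
rng = np.random.default_rng(0)
sim_i2t = rng.standard_normal((1000, 1000))          # placeholder scores
sim_t2i = sim_i2t.T

recalls = [recall_at_k(s, k) for s in (sim_i2t, sim_t2i) for k in (1, 5, 10)]
r_sum = sum(recalls)                                 # R-Sum: sum of the six recalls
print(recalls, r_sum)
```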
Results
Performance of various models on this benchmark, reported on the metrics listed above.
Comparison Table
Model Name | Image-to-text R@1 | Image-to-text R@5 | Image-to-text R@10 | Text-to-image R@1 | Text-to-image R@5 | Text-to-image R@10 | R-Sum
---|---|---|---|---|---|---|---
nac-mitigating-noisy-correspondence-in-cross | 80.3 | 96.2 | 98.5 | 63.2 | 90.3 | 96.0 | 524.5
learning-with-noisy-correspondence | 78.2 | 95.8 | 98.5 | 62.6 | 89.4 | 95.4 | 519.9
repair-rank-correlation-and-noisy-pair-half | 78.3 | 96.8 | 98.3 | 62.5 | 89.8 | 95.5 | 521.2
bicro-noisy-correspondence-rectification-for | 78.8 | 96.1 | 98.6 | 63.7 | 90.3 | 95.7 | 523.2
noisy-correspondence-learning-with-self | 78.5 | 96.8 | 98.8 | 63.8 | 90.4 | 95.8 | 524.1
cross-modal-active-complementary-learning-1 | 79.6 | 96.1 | 98.7 | 64.7 | 90.6 | 95.9 | 525.6
recon-enhancing-true-correspondence-1 | 80.9 | 96.6 | 98.8 | 65.2 | 91.0 | 96.0 | 528.6
learning-to-rematch-mismatched-pairs-for | 80.2 | 96.3 | 98.5 | 64.2 | 90.1 | 95.4 | 524.7
ugncl-uncertainty-guided-noisy-correspondence | 79.5 | 97.2 | 99.0 | 63.7 | 90.9 | 96.0 | 526.3
learning-from-noisy-correspondence-with-tri | 79.8 | 96.6 | 98.9 | 63.8 | 91.2 | 96.7 | 527.0
deep-evidential-learning-with-noisy | 77.5 | 95.9 | 98.4 | 61.7 | 89.3 | 95.4 | 518.2
noisy-correspondence-learning-with-meta | 78.1 | 97.2 | 98.8 | 64.3 | 90.4 | 95.8 | 524.6
integrating-language-guidance-into-image-text | 79.6 | 96.5 | 98.5 | 64.4 | 90.0 | 95.9 | 524.9
cross-modal-retrieval-with-noisy | 78.9 | 96.3 | 98.6 | 63.3 | 90.1 | 95.8 | 523.0
cross-modal-retrieval-with-partially | 77.0 | 95.5 | 98.1 | 61.3 | 88.8 | 94.8 | 515.5
mitigating-noisy-correspondence-by | 79.5 | 96.4 | 98.9 | 64.4 | 90.6 | 95.9 | 525.7
learning-with-noisy-correspondence-for-cross | 77.7 | 95.5 | 98.2 | 62.5 | 89.3 | 95.3 | 518.5
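As a quick sanity check, R-Sum in each row equals the sum of its six recall columns (first row: 80.3 + 96.2 + 98.5 + 63.2 + 90.3 + 96.0 = 524.5). The sketch below assumes the table has been exported to a hypothetical results.csv file with the column names above; it verifies that relationship and ranks the entries by R-Sum.

```python
# Minimal sketch: check R-Sum consistency and rank models, assuming the
# comparison table above is available as a hypothetical results.csv.
import pandas as pd

df = pd.read_csv("results.csv")
recall_cols = [f"{d} R@{k}" for d in ("Image-to-text", "Text-to-image") for k in (1, 5, 10)]

# R-Sum should equal the sum of the six recall columns in every row.
assert (df[recall_cols].sum(axis=1).round(1) == df["R-Sum"].round(1)).all()

# Rank entries by overall retrieval quality (higher R-Sum is better).
print(df.sort_values("R-Sum", ascending=False)[["Model Name", "R-Sum"]].head())
```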