Image Classification on mini WebVision 1.0
Metrics
ImageNet Top-1 Accuracy
ImageNet Top-5 Accuracy
Top-1 Accuracy
Top-5 Accuracy
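The Top-1/Top-5 columns are typically reported on the WebVision validation set, while the ImageNet Top-1/Top-5 columns are typically obtained by evaluating the same model on the ImageNet validation set. As a minimal sketch (not taken from any benchmark codebase), the snippet below shows how top-1 and top-5 accuracy are usually computed from model logits; the function name `top_k_accuracy` and the array shapes are illustrative assumptions.

```python
# Minimal sketch of top-k accuracy, assuming logits of shape
# (num_samples, num_classes) and integer labels of shape (num_samples,).
import numpy as np

def top_k_accuracy(logits: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    # Indices of the k highest-scoring classes per sample.
    top_k = np.argsort(logits, axis=1)[:, -k:]
    # A sample counts as correct if its true label is among the top k.
    hits = np.any(top_k == labels[:, None], axis=1)
    return float(hits.mean()) * 100.0

# Example usage with random predictions over 50 classes
# (mini WebVision uses the first 50 WebVision classes).
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 50))
labels = rng.integers(0, 50, size=1000)
print("Top-1:", top_k_accuracy(logits, labels, k=1))
print("Top-5:", top_k_accuracy(logits, labels, k=5))
```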
Results
Performance results of various models on this benchmark
Comparison table
Model name | ImageNet Top-1 Accuracy | ImageNet Top-5 Accuracy | Top-1 Accuracy | Top-5 Accuracy |
---|---|---|---|---|
co-teaching-robust-training-of-deep-neural | 61.48 | 84.70 | 63.58 | 85.20 |
contrast-to-divide-self-supervised-pre-1 | 78.57 ± 0.37 | 93.04 ± 0.10 | 79.42 ± 0.34 | 92.32 ± 0.33 |
psscl-a-progressive-sample-selection | 79.68 | 95.16 | 79.56 | 94.84 |
robust-temporal-ensembling-for-learning-with | 80.84 | 97.24 | - | - |
dynamic-loss-for-robust-learning | 74.76 | 93.08 | 80.12 | 93.64 |
longremix-robust-learning-with-high | - | - | 78.92 | 92.32 |
label-retrieval-augmented-diffusion-models-1 | 82.56 | - | 84.16 | - |
noisy-concurrent-training-for-efficient | 71.73 | 91.61 | 75.16 | 90.77 |
robust-long-tailed-learning-under-label-noise | 74.64 | 92.48 | 77.64 | 92.44 |
dimensionality-driven-learning-with-noisy | 57.80 | 81.36 | 62.68 | 84.00 |
centrality-and-consistency-two-stage-clean | 76.08 | 93.86 | 79.36 | 93.64 |
sample-prior-guided-robust-model-learning-to | 75.45 | 93.11 | 81.47 | 94.03 |
psscl-a-progressive-sample-selection | 79.40 | 94.84 | 78.52 | 93.80 |
dividemix-learning-with-noisy-labels-as-semi-1 | 74.42 ± 0.29 | 91.21 ± 0.12 | 76.32 ± 0.36 | 90.65 ± 0.16 |
making-deep-neural-networks-robust-to-label | 57.36 | 82.36 | 61.12 | 82.68 |
coresets-for-robust-training-of-neural | 67.36 | 87.84 | 72.40 | 89.56 |
twin-contrastive-learning-with-noisy-labels | 75.4 | 92.4 | 79.1 | 92.3 |
multi-objective-interpolation-training-for | - | - | 78.76 | - |
faster-meta-update-strategy-for-noise-robust | 77 | 92.76 | 79.4 | 92.80 |
robust-and-on-the-fly-dataset-denoising-for | 66.7 | 86.3 | 74.6 | 90.6 |
learning-with-neighbor-consistency-for-noisy-1 | - | - | 80.5 | - |
codim-learning-with-noisy-labels-via | 77.24 | 92.48 | 80.12 | 93.52 |
dividemix-learning-with-noisy-labels-as-semi-1 | - | - | 76.08 | - |
bootstrapping-the-relationship-between-images | 75.96 | 92.20 | 80.88 | 92.76 |
dividemix-learning-with-noisy-labels-as-semi-1 | 75.20 | 91.64 | 77.32 | 91.64 |
robust-early-learning-hindering-the | 61.85 | - | - | - |
two-wrongs-don-t-make-a-right-combating | 75.48 | 93.76 | 81.84 | 94.12 |
confidence-adaptive-regularization-for-deep | 74.09 | 92.09 | 77.41 | 92.25 |
learning-with-neighbor-consistency-for-noisy-1 | - | - | 79.4 | - |
understanding-and-utilizing-deep-neural | 61.6 | 85.0 | 65.2 | 85.3 |
scanmix-learning-from-severe-label-noise-via | - | - | 77.72 | - |
early-learning-regularization-prevents | 70.29 | 89.76 | 77.78 | 91.68 |
learning-with-neighbor-consistency-for-noisy-1 | - | - | 77.1 | - |
generalized-jensen-shannon-divergence-loss | 75.50 | 91.27 | 79.28 | 91.22 |
s3-supervised-self-supervised-learning-under-1 | 75.76 | 91.76 | 80.92 | 92.80 |
mentornet-learning-data-driven-curriculum-for | 63.8 | 85.8 | - | - |
ngc-a-unified-framework-for-learning-with | 74.44 | 91.04 | 79.16 | 91.84 |
synthetic-vs-real-deep-learning-on-controlled-1 | 72.9 | 91.1 | 76.0 | 90.2 |
sample-selection-with-uncertainty-of-losses | - | - | 77.53 | - |
cmw-net-learning-a-class-aware-sample | 75.72 | 92.52 | 78.08 | 92.96 |
hard-sample-aware-noise-robust-learning-for | - | - | 77.52 | - |
cmw-net-learning-a-class-aware-sample | 77.36 | 93.48 | 80.44 | 93.36 |
class-prototype-based-cleaner-for-label-noise | 75.75 ± 0.14 | 93.49 ± 0.25 | 79.63 ± 0.08 | 93.46 ± 0.10 |
normalized-loss-functions-for-deep-learning | 62.64 | - | - | - |
codim-learning-with-noisy-labels-via | 76.52 | 91.96 | 80.88 | 92.48 |
selective-supervised-contrastive-learning | 76.84 | 93.04 | 79.96 | 92.64 |
normalized-loss-functions-for-deep-learning | 62.36 | - | - | - |