Learning With Noisy Labels on CIFAR-10N-2
Metrics
Accuracy (mean)
Results
Performance of the listed methods on this benchmark, reported as mean test accuracy (%).
Comparison Table
| Model Name | Accuracy (mean, %) |
|---|---|
| psscl-a-progressive-sample-selection | 96.21 |
| robust-training-under-label-noise-by-over | 95.31 |
| imprecise-label-learning-a-unified-framework | 95.04 |
| learning-with-instance-dependent-label-noise-1 | 94.88 |
| early-learning-regularization-prevents | 94.20 |
| early-learning-regularization-prevents | 91.61 |
| generative-noisy-label-learning-by-implicit | 91.42 |
| dividemix-learning-with-noisy-labels-as-semi-1 | 90.90 |
| clusterability-as-an-alternative-to-anchor | 90.75 |
| understanding-generalized-label-smoothing | 90.37 |
| co-teaching-robust-training-of-deep-neural | 90.30 |
| combating-noisy-labels-by-agreement-a-joint | 90.21 |
| learning-with-instance-dependent-label-noise-1 | 89.91 |
| when-optimizing-f-divergence-is-robust-with-1 | 89.79 |
| how-does-disagreement-help-generalization | 89.47 |
| does-label-smoothing-mitigate-label-noise | 89.35 |
| peer-loss-functions-learning-from-noisy | 88.76 |
| provably-end-to-end-label-noise-learning | 88.27 |
| 1906.00189 | 87.71 |
| generalized-cross-entropy-loss-for-training | 87.70 |
| Model 16 | 86.46 |
| making-deep-neural-networks-robust-to-label | 86.28 |
| making-deep-neural-networks-robust-to-label | 86.14 |
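As a minimal illustration of the "Accuracy (mean)" metric used above, the sketch below averages per-run test accuracy over several training runs (e.g. different random seeds). The function and variable names are assumptions for illustration, not part of any benchmark's official evaluation code.

```python
def accuracy(preds, labels):
    """Percentage of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(preds, labels))
    return 100.0 * correct / len(labels)

def mean_accuracy(runs, labels):
    """Mean accuracy across multiple runs of the same model."""
    return sum(accuracy(p, labels) for p in runs) / len(runs)

# Toy example: two runs on a 5-example test set.
labels = [0, 1, 1, 0, 2]
runs = [
    [0, 1, 1, 0, 2],  # run 1: 5/5 correct -> 100.0
    [0, 1, 0, 0, 2],  # run 2: 4/5 correct -> 80.0
]
print(mean_accuracy(runs, labels))  # -> 90.0
```

Averaging over runs smooths out seed-to-seed variance, which is especially relevant under label noise, where training is less stable than on clean data.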