Learning with Noisy Labels on CIFAR-100N
Metrics
Accuracy (mean)
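
The reported metric is the mean top-1 classification accuracy on the CIFAR-100N test set, typically averaged over repeated runs. A minimal sketch of how such a value could be computed is shown below; the per-run accuracies and variable names are hypothetical placeholders, not benchmark data.

```python
# Minimal sketch: mean top-1 accuracy over repeated training runs.
# All values below are hypothetical placeholders, not benchmark results.

def top1_accuracy(predictions, labels):
    """Percentage of test samples whose predicted class equals the true label."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# Example: accuracies from three runs with different random seeds (hypothetical).
run_accuracies = [57.4, 58.1, 57.8]
mean_accuracy = sum(run_accuracies) / len(run_accuracies)
print(f"Accuracy (mean): {mean_accuracy:.2f}")  # -> Accuracy (mean): 57.77
```
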
Results
Performance results of various models on this benchmark
Comparison table
| Model name | Accuracy (mean) |
|---|---|
| peer-loss-functions-learning-from-noisy | 57.59 |
| imprecise-label-learning-a-unified-framework | 65.84 |
| when-optimizing-f-divergence-is-robust-with-1 | 57.10 |
| sample-prior-guided-robust-model-learning-to | 74.08 |
| 190600189 | 51.55 |
| provably-end-to-end-label-noise-learning | 57.80 |
| Model 7 | 55.50 |
| making-deep-neural-networks-robust-to-label | 57.14 |
| co-teaching-robust-training-of-deep-neural | 60.37 |
| does-label-smoothing-mitigate-label-noise | 55.84 |
| clusterability-as-an-alternative-to-anchor | 61.73 |
| early-learning-regularization-prevents | 66.72 |
| dividemix-learning-with-noisy-labels-as-semi-1 | 71.13 |
| understanding-generalized-label-smoothing | 58.59 |
| learning-with-instance-dependent-label-noise-1 | 55.72 |
| early-learning-regularization-prevents | 58.94 |
| making-deep-neural-networks-robust-to-label | 57.01 |
| combating-noisy-labels-by-agreement-a-joint | 59.97 |
| learning-with-instance-dependent-label-noise-1 | 61.15 |
| generalized-cross-entropy-loss-for-training | 56.73 |
| how-does-disagreement-help-generalization | 57.88 |
| robust-training-under-label-noise-by-over | 67.81 |
| psscl-a-progressive-sample-selection | 72.00 |
| promix-combating-label-noise-via-maximizing | 73.39 |