Image Classification On Clothing1M
Metrics
Accuracy
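Accuracy here refers to top-1 classification accuracy on the Clothing1M clean test set: the percentage of test images whose predicted class matches the human-verified label across the dataset's 14 clothing categories. A minimal PyTorch-style sketch of how this metric is typically computed; `model` and `test_loader` are hypothetical placeholders, not part of any specific method listed below:

```python
import torch

@torch.no_grad()
def top1_accuracy(model, test_loader, device="cuda"):
    """Compute top-1 accuracy (%) over a labeled test set."""
    model.eval()
    correct, total = 0, 0
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        logits = model(images)            # (batch, 14) for Clothing1M's 14 classes
        preds = logits.argmax(dim=1)      # top-1 prediction per image
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return 100.0 * correct / total        # percentage, as reported in the table
```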
Results
Performance results of various models on this benchmark
Comparison table
Model name | Accuracy |
---|---|
unsupervised-label-noise-modeling-and-loss | 71% |
centrality-and-consistency-two-stage-clean | 75.4% |
knockoffs-spr-clean-sample-selection-in | 75.20% |
learning-advisor-networks-for-noisy-image-1 | 75.35% |
longremix-robust-learning-with-high | 74.38% |
contrastive-learning-improves-model | 73.27% |
learning-to-learn-from-noisy-labeled-data | 73.47% |
contrastive-learning-improves-model | 73.36% |
co-teaching-robust-training-of-deep-neural | 70.15% |
scalable-penalized-regression-for-noise | 71.16% |
winning-ticket-in-noisy-image-classification | 74.37% |
jigsaw-vit-learning-jigsaw-puzzles-in-vision | 75.4% |
label-retrieval-augmented-diffusion-models-1 | 75.7% |
improving-mae-against-cce-under-label-noise | 73.2% |
push-the-student-to-learn-right-progressive | 73.72% |
early-learning-regularization-prevents | 74.81% |
when-optimizing-f-divergence-is-robust-with-1 | 73.09% |
combating-noisy-labels-by-agreement-a-joint | 70.3% |
error-bounded-correction-of-noisy-labels-1 | 71.74% |
learning-with-instance-dependent-label-noise-1 | 73.24% |
s3-supervised-self-supervised-learning-under-1 | 74.91% |
probabilistic-end-to-end-noise-correction-for | 73.49% |
compressing-features-for-learning-with-noisy | 75% |
understanding-generalized-label-smoothing | 74.24% |
dividemix-learning-with-noisy-labels-as-semi-1 | 74.76% |
emphasis-regularisation-by-gradient-rescaling | 73.3% |
symmetric-cross-entropy-for-robust-learning | 71.02% |
a-second-order-approach-to-learning-with | 74.17% |
dimensionality-driven-learning-with-noisy | 69.47% |
masking-a-new-perspective-of-noisy | 71.1% |
instance-dependent-noisy-label-learning-via | 74.40% |
safeguarded-dynamic-label-regression-for | 73.07% |
l_dmi-an-information-theoretic-noise-robust | 72.46% |
class-prototype-based-cleaner-for-label-noise | 75.40 ± 0.10% |
joint-optimization-framework-for-learning | 72.23% |
adaptive-sample-selection-for-robust-learning | 72.28% |
beyond-class-conditional-assumption-a-primary | 70.63% |
contrast-to-divide-self-supervised-pre-1 | 74.58 ± 0.15% |
contrastive-learning-improves-model | 73.35% |
boosting-co-teaching-with-compression | 74.9% |
generalized-cross-entropy-loss-for-training | 69.75% |
augmentation-strategies-for-learning-with | 75.11% |
sample-prior-guided-robust-model-learning-to | 75.19% |
noiserank-unsupervised-label-noise-reduction | 73.82% |
which-strategies-matter-for-noisy-label | 73.8% |
adaptive-sample-selection-for-robust-learning | 68.94% |
clusterability-as-an-alternative-to-anchor | 73.39% |
learning-with-noisy-labels-via-self | 75.63% |
l_dmi-a-novel-information-theoretic-loss | 72.46% |
cross-to-merge-training-with-class-balance | 74.61% |