HyperAI

Image Classification on mini WebVision 1.0

Metrics

ImageNet Top-1 Accuracy (ImageNet validation set)
ImageNet Top-5 Accuracy (ImageNet validation set)
Top-1 Accuracy (WebVision validation set)
Top-5 Accuracy (WebVision validation set)
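Top-k accuracy counts a prediction as correct when the true label appears among the model's k highest-scoring classes. A minimal sketch of that computation, assuming per-sample class scores (logits) as a NumPy array; the function name `top_k_accuracy` is illustrative, not from any particular library:

```python
import numpy as np

def top_k_accuracy(logits, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    # Indices of the k largest scores per row (order within the k is irrelevant).
    top_k = np.argpartition(logits, -k, axis=1)[:, -k:]
    # A sample is a hit if its label matches any of its top-k indices.
    hits = (top_k == np.asarray(labels)[:, None]).any(axis=1)
    return hits.mean()

# Toy example: 3 samples, 4 classes (scores chosen to avoid ties).
logits = np.array([
    [0.1, 0.6, 0.2, 0.1],
    [0.5, 0.2, 0.2, 0.1],
    [0.2, 0.1, 0.4, 0.3],
])
labels = np.array([1, 2, 0])

print(top_k_accuracy(logits, labels, k=1))  # only the first sample is a top-1 hit
print(top_k_accuracy(logits, labels, k=5 - 1))  # larger k can only raise the score
```

Top-5 accuracy is always at least as high as top-1, which is why the Top-5 columns below dominate the Top-1 columns for every model.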

Results

Performance of various models on this benchmark

Comparison Table
| Model Name | ImageNet Top-1 Accuracy | ImageNet Top-5 Accuracy | Top-1 Accuracy | Top-5 Accuracy |
| --- | --- | --- | --- | --- |
| co-teaching-robust-training-of-deep-neural | 61.48 | 84.70 | 63.58 | 85.20 |
| contrast-to-divide-self-supervised-pre-1 | 78.57 ± 0.37 | 93.04 ± 0.10 | 79.42 ± 0.34 | 92.32 ± 0.33 |
| psscl-a-progressive-sample-selection | 79.68 | 95.16 | 79.56 | 94.84 |
| robust-temporal-ensembling-for-learning-with | 80.84 | 97.24 | - | - |
| dynamic-loss-for-robust-learning | 74.76 | 93.08 | 80.12 | 93.64 |
| longremix-robust-learning-with-high | - | - | 78.92 | 92.32 |
| label-retrieval-augmented-diffusion-models-1 | 82.56 | - | 84.16 | - |
| noisy-concurrent-training-for-efficient | 71.73 | 91.61 | 75.16 | 90.77 |
| robust-long-tailed-learning-under-label-noise | 74.64 | 92.48 | 77.64 | 92.44 |
| dimensionality-driven-learning-with-noisy | 57.80 | 81.36 | 62.68 | 84.00 |
| centrality-and-consistency-two-stage-clean | 76.08 | 93.86 | 79.36 | 93.64 |
| sample-prior-guided-robust-model-learning-to | 75.45 | 93.11 | 81.47 | 94.03 |
| psscl-a-progressive-sample-selection | 79.40 | 94.84 | 78.52 | 93.80 |
| dividemix-learning-with-noisy-labels-as-semi-1 | 74.42 ± 0.29 | 91.21 ± 0.12 | 76.32 ± 0.36 | 90.65 ± 0.16 |
| making-deep-neural-networks-robust-to-label | 57.36 | 82.36 | 61.12 | 82.68 |
| coresets-for-robust-training-of-neural | 67.36 | 87.84 | 72.40 | 89.56 |
| twin-contrastive-learning-with-noisy-labels | 75.4 | 92.4 | 79.1 | 92.3 |
| multi-objective-interpolation-training-for | - | - | 78.76 | - |
| faster-meta-update-strategy-for-noise-robust | 77 | 92.76 | 79.4 | 92.80 |
| robust-and-on-the-fly-dataset-denoising-for | 66.7 | 86.3 | 74.6 | 90.6 |
| learning-with-neighbor-consistency-for-noisy-1 | - | - | 80.5 | - |
| codim-learning-with-noisy-labels-via | 77.24 | 92.48 | 80.12 | 93.52 |
| dividemix-learning-with-noisy-labels-as-semi-1 | - | - | 76.08 | - |
| bootstrapping-the-relationship-between-images | 75.96 | 92.20 | 80.88 | 92.76 |
| dividemix-learning-with-noisy-labels-as-semi-1 | 75.20 | 91.64 | 77.32 | 91.64 |
| robust-early-learning-hindering-the | 61.85 | - | - | - |
| two-wrongs-don-t-make-a-right-combating | 75.48 | 93.76 | 81.84 | 94.12 |
| confidence-adaptive-regularization-for-deep | 74.09 | 92.09 | 77.41 | 92.25 |
| learning-with-neighbor-consistency-for-noisy-1 | - | - | 79.4 | - |
| understanding-and-utilizing-deep-neural | 61.6 | 85.0 | 65.2 | 85.3 |
| scanmix-learning-from-severe-label-noise-via | - | - | 77.72 | - |
| early-learning-regularization-prevents | 70.29 | 89.76 | 77.78 | 91.68 |
| learning-with-neighbor-consistency-for-noisy-1 | - | - | 77.1 | - |
| generalized-jensen-shannon-divergence-loss | 75.50 | 91.27 | 79.28 | 91.22 |
| s3-supervised-self-supervised-learning-under-1 | 75.76 | 91.76 | 80.92 | 92.80 |
| mentornet-learning-data-driven-curriculum-for | 63.8 | 85.8 | - | - |
| ngc-a-unified-framework-for-learning-with | 74.44 | 91.04 | 79.16 | 91.84 |
| synthetic-vs-real-deep-learning-on-controlled-1 | 72.9 | 91.1 | 76.0 | 90.2 |
| sample-selection-with-uncertainty-of-losses | - | - | 77.53 | - |
| cmw-net-learning-a-class-aware-sample | 75.72 | 92.52 | 78.08 | 92.96 |
| hard-sample-aware-noise-robust-learning-for | - | - | 77.52 | - |
| cmw-net-learning-a-class-aware-sample | 77.36 | 93.48 | 80.44 | 93.36 |
| class-prototype-based-cleaner-for-label-noise | 75.75 ± 0.14 | 93.49 ± 0.25 | 79.63 ± 0.08 | 93.46 ± 0.10 |
| normalized-loss-functions-for-deep-learning | 62.64 | - | - | - |
| codim-learning-with-noisy-labels-via | 76.52 | 91.96 | 80.88 | 92.48 |
| selective-supervised-contrastive-learning | 76.84 | 93.04 | 79.96 | 92.64 |
| normalized-loss-functions-for-deep-learning | 62.36 | - | - | - |