Task: Learning with Noisy Labels
Benchmark: Learning with Noisy Labels on CIFAR-100N
Metric: Accuracy (mean)

Results
Performance of the different models on this benchmark. A sketch of how the metric is typically computed follows the results table.
Model         | Accuracy (mean) | Paper Title
PGDF          | 74.08           | Sample Prior Guided Robust Model Learning to Suppress Noisy Labels
ProMix        | 73.39           | ProMix: Combating Label Noise via Maximizing Clean Sample Utility
PSSCL         | 72.00           | PSSCL: A progressive sample selection framework with contrastive loss designed for noisy labels
Divide-Mix    | 71.13           | DivideMix: Learning with Noisy Labels as Semi-supervised Learning
SOP+          | 67.81           | Robust Training under Label Noise by Over-parameterization
ELR+          | 66.72           | Early-Learning Regularization Prevents Memorization of Noisy Labels
ILL           | 65.84           | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations
CAL           | 61.73           | Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels
CORES         | 61.15           | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach
Co-Teaching   | 60.37           | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels
JoCoR         | 59.97           | Combating noisy labels by agreement: A joint training method with co-regularization
ELR           | 58.94           | Early-Learning Regularization Prevents Memorization of Noisy Labels
Negative-LS   | 58.59           | To Smooth or Not? When Label Smoothing Meets Noisy Labels
Co-Teaching+  | 57.88           | How does Disagreement Help Generalization against Label Corruption?
VolMinNet     | 57.80           | Provably End-to-end Label-Noise Learning without Anchor Points
Peer Loss     | 57.59           | Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates
Backward-T    | 57.14           | Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach
F-div         | 57.10           | When Optimizing $f$-divergence is Robust with Label Noise
Forward-T     | 57.01           | Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach
GCE           | 56.73           | Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
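
The "Accuracy (mean)" figures above are top-1 accuracies on the clean CIFAR-100 test set for models trained on the CIFAR-100N human-annotated noisy labels, typically averaged over several training runs. The sketch below illustrates this evaluation loop in PyTorch; it assumes the noisy labels come from the CIFAR-N release file "CIFAR-100_human.pt" with a "noisy_label" key, and train_with_noisy_labels is a hypothetical stand-in for whichever method is being benchmarked.

import numpy as np
import torch
import torchvision
import torchvision.transforms as T

# CIFAR-100 normalization statistics (commonly used values).
CIFAR100_MEAN = (0.5071, 0.4865, 0.4409)
CIFAR100_STD = (0.2673, 0.2564, 0.2762)

def test_accuracy(model, device="cuda", batch_size=256):
    """Top-1 accuracy (%) of `model` on the clean CIFAR-100 test set."""
    tf = T.Compose([T.ToTensor(), T.Normalize(CIFAR100_MEAN, CIFAR100_STD)])
    test_set = torchvision.datasets.CIFAR100(
        root="./data", train=False, download=True, transform=tf)
    loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size)
    model = model.to(device).eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return 100.0 * correct / total

def mean_accuracy(trained_models):
    """'Accuracy (mean)': clean test accuracy averaged over several runs."""
    return float(np.mean([test_accuracy(m) for m in trained_models]))

# Training data: CIFAR-100 train images paired with the human-annotated noisy
# labels. The file name and key below are assumed from the CIFAR-N release.
# noisy_labels = torch.load("CIFAR-100_human.pt")["noisy_label"]
#
# Usage (train_with_noisy_labels is a hypothetical placeholder for the method
# being benchmarked, e.g. PGDF, ProMix, or DivideMix):
# runs = [train_with_noisy_labels(noisy_labels, seed=s) for s in range(3)]
# print(f"Accuracy (mean): {mean_accuracy(runs):.2f}")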