HyperAI
Learning with Noisy Labels

Learning with Noisy Labels on CIFAR-10N (Worst)
Metric: Accuracy (mean)

Results: performance of various models on this benchmark.
| Model | Accuracy (mean) | Paper Title |
|---|---|---|
| ProMix | 96.16 | ProMix: Combating Label Noise via Maximizing Clean Sample Utility |
| PSSCL | 95.12 | PSSCL: A progressive sample selection framework with contrastive loss designed for noisy labels |
| PGDF | 93.65 | Sample Prior Guided Robust Model Learning to Suppress Noisy Labels |
| ILL | 93.58 | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations |
| SOP+ | 93.24 | Robust Training under Label Noise by Over-parameterization |
| DivideMix | 92.56 | DivideMix: Learning with Noisy Labels as Semi-supervised Learning |
| CORES* | 91.66 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach |
| ELR+ | 91.09 | Early-Learning Regularization Prevents Memorization of Noisy Labels |
| GNL | 86.99 | Partial Label Supervision for Agnostic Generative Noisy Label Learning |
| CAL | 85.36 | Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels |
| Co-Teaching | 83.83 | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels |
| CORES | 83.60 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach |
| ELR | 83.58 | Early-Learning Regularization Prevents Memorization of Noisy Labels |
| JoCoR | 83.37 | Combating noisy labels by agreement: A joint training method with co-regularization |
| Co-Teaching+ | 83.26 | How does Disagreement Help Generalization against Label Corruption? |
| Negative-LS | 82.99 | Understanding Generalized Label Smoothing when Learning with Noisy Labels |
| Positive-LS | 82.76 | Does label smoothing mitigate label noise? |
| F-div | 82.53 | When Optimizing $f$-divergence is Robust with Label Noise |
| Peer Loss | 82.53 | Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates |
| GCE | 80.66 | Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels |