Learning with Noisy Labels
Learning With Noisy Labels On Cifar 10N
Metrics: Accuracy (mean)

Results
Performance results of various models on this benchmark.

| Model Name | Accuracy (mean) | Paper Title |
| --- | --- | --- |
| ProMix | 97.39 | ProMix: Combating Label Noise via Maximizing Clean Sample Utility |
| PSSCL | 96.41 | PSSCL: A progressive sample selection framework with contrastive loss designed for noisy labels |
| PGDF | 96.11 | Sample Prior Guided Robust Model Learning to Suppress Noisy Labels |
| SOP+ | 95.61 | Robust Training under Label Noise by Over-parameterization |
| ILL | 95.47 | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations |
| CORES* | 95.25 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach |
| Divide-Mix | 95.01 | DivideMix: Learning with Noisy Labels as Semi-supervised Learning |
| ELR+ | 94.83 | Early-Learning Regularization Prevents Memorization of Noisy Labels |
| PES (Semi) | 94.66 | Understanding and Improving Early Stopping for Learning with Noisy Labels |
| GNL | 92.57 | Partial Label Supervision for Agnostic Generative Noisy Label Learning |
| ELR | 92.38 | Early-Learning Regularization Prevents Memorization of Noisy Labels |
| CAL | 91.97 | Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels |
| Negative-LS | 91.97 | To Smooth or Not? When Label Smoothing Meets Noisy Labels |
| F-div | 91.64 | When Optimizing $f$-divergence is Robust with Label Noise |
| Positive-LS | 91.57 | Does label smoothing mitigate label noise? |
| JoCoR | 91.44 | Combating noisy labels by agreement: A joint training method with co-regularization |
| CORES | 91.23 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach |
| Co-Teaching | 91.20 | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels |
| Peer Loss | 90.75 | Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates |
| Co-Teaching+ | 90.61 | How does Disagreement Help Generalization against Label Corruption? |
Showing 20 of 26 entries.
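The Accuracy (mean) values above are percentages: classification accuracy on the clean CIFAR-10 test set after training with CIFAR-10N's noisy labels. The exact averaging convention (over repeated runs or over the benchmark's noisy label sets) is not stated on this page. Below is a minimal sketch of the metric, assuming averaging over repeated runs and using hypothetical random predictions in place of real model outputs.

```python
import numpy as np

def accuracy(preds: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of test samples whose predicted class matches the clean CIFAR-10 label."""
    return float((preds == labels).mean())

def mean_accuracy(per_run_preds: list[np.ndarray], labels: np.ndarray) -> float:
    """Average test accuracy over several runs (the assumed meaning of 'Accuracy (mean)')."""
    return float(np.mean([accuracy(p, labels) for p in per_run_preds]))

# Hypothetical usage: random predictions stand in for real model outputs.
rng = np.random.default_rng(0)
clean_labels = rng.integers(0, 10, size=10_000)   # CIFAR-10 test set: 10,000 images, 10 classes
runs = [rng.integers(0, 10, size=10_000) for _ in range(3)]
print(f"Accuracy (mean) over {len(runs)} runs: {100 * mean_accuracy(runs, clean_labels):.2f}%")
```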