HyperAI초신경
Learning With Noisy Labels
Learning With Noisy Labels On CIFAR-100N
Evaluation Metric
Accuracy (mean)

Evaluation Results
Performance of each model on this benchmark:

| Model | Accuracy (mean) | Paper Title |
| --- | --- | --- |
| Peer Loss | 57.59 | Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates |
| ILL | 65.84 | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations |
| F-div | 57.10 | When Optimizing $f$-divergence is Robust with Label Noise |
| PGDF | 74.08 | Sample Prior Guided Robust Model Learning to Suppress Noisy Labels |
| T-Revision | 51.55 | Are Anchor Points Really Indispensable in Label-Noise Learning? |
| VolMinNet | 57.80 | Provably End-to-end Label-Noise Learning without Anchor Points |
| CE | 55.50 | - |
| Backward-T | 57.14 | Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach |
| Co-Teaching | 60.37 | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels |
| Positive-LS | 55.84 | Does label smoothing mitigate label noise? |
| CAL | 61.73 | Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels |
| ELR+ | 66.72 | Early-Learning Regularization Prevents Memorization of Noisy Labels |
| Divide-Mix | 71.13 | DivideMix: Learning with Noisy Labels as Semi-supervised Learning |
| Negative-LS | 58.59 | To Smooth or Not? When Label Smoothing Meets Noisy Labels |
| CORES* | 55.72 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach |
| ELR | 58.94 | Early-Learning Regularization Prevents Memorization of Noisy Labels |
| Forward-T | 57.01 | Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach |
| JoCoR | 59.97 | Combating noisy labels by agreement: A joint training method with co-regularization |
| CORES | 61.15 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach |
| GCE | 56.73 | Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels |
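As a concrete illustration of one entry in the table, the GCE loss from "Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels" is defined as L_q = (1 - p_y^q) / q, where p_y is the predicted probability of the labeled class: as q → 0 it recovers standard cross-entropy, and at q = 1 it becomes a bounded, more noise-tolerant MAE-like loss. The sketch below is a minimal standalone implementation of that formula, not the authors' reference code; the function and variable names are our own.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def gce_loss(logits, target, q=0.7):
    """Generalized Cross Entropy: L_q = (1 - p_target^q) / q.

    q -> 0 recovers cross-entropy (-log p); q = 1 gives the
    bounded 1 - p loss, which is less sensitive to label noise.
    q=0.7 is the default reported in the GCE paper.
    """
    p = softmax(logits)[target]
    return (1.0 - p ** q) / q
```

Because L_q is bounded by 1/q for any prediction, a single mislabeled example cannot dominate the gradient the way an unbounded cross-entropy term can, which is the intuition behind its robustness on noisy benchmarks like CIFAR-100N.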