HyperAI
Learning with Noisy Labels
Learning With Noisy Labels on CIFAR-100N
Evaluation Metric
Accuracy (mean)
Evaluation Results
Performance of each model on this benchmark
| Model | Accuracy (mean) | Paper Title |
| --- | --- | --- |
| PGDF | 74.08 | Sample Prior Guided Robust Model Learning to Suppress Noisy Labels |
| ProMix | 73.39 | ProMix: Combating Label Noise via Maximizing Clean Sample Utility |
| PSSCL | 72.00 | PSSCL: A progressive sample selection framework with contrastive loss designed for noisy labels |
| Divide-Mix | 71.13 | DivideMix: Learning with Noisy Labels as Semi-supervised Learning |
| SOP+ | 67.81 | Robust Training under Label Noise by Over-parameterization |
| ELR+ | 66.72 | Early-Learning Regularization Prevents Memorization of Noisy Labels |
| ILL | 65.84 | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations |
| CAL | 61.73 | Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels |
| CORES | 61.15 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach |
| Co-Teaching | 60.37 | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels |
| JoCoR | 59.97 | Combating noisy labels by agreement: A joint training method with co-regularization |
| ELR | 58.94 | Early-Learning Regularization Prevents Memorization of Noisy Labels |
| Negative-LS | 58.59 | To Smooth or Not? When Label Smoothing Meets Noisy Labels |
| Co-Teaching+ | 57.88 | How does Disagreement Help Generalization against Label Corruption? |
| VolMinNet | 57.80 | Provably End-to-end Label-Noise Learning without Anchor Points |
| Peer Loss | 57.59 | Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates |
| Backward-T | 57.14 | Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach |
| F-div | 57.10 | When Optimizing $f$-divergence is Robust with Label Noise |
| Forward-T | 57.01 | Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach |
| GCE | 56.73 | Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels |
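The board ranks methods by mean test accuracy: models are trained on CIFAR-100N's human-annotated noisy labels and evaluated against the clean test labels, typically averaged over several runs. A minimal sketch of that metric, assuming per-run prediction lists (the function names here are illustrative, not HyperAI's API):

```python
# Hypothetical sketch of the "Accuracy (mean)" leaderboard metric.
# Assumes each run yields a list of predicted class indices for the
# CIFAR-100N test set; `runs` and `labels` are illustrative toy data.

def accuracy(preds, labels):
    """Percentage of test examples whose prediction matches the clean label."""
    correct = sum(p == y for p, y in zip(preds, labels))
    return 100.0 * correct / len(labels)

def mean_accuracy(run_predictions, labels):
    """Average test accuracy over independent training runs."""
    scores = [accuracy(preds, labels) for preds in run_predictions]
    return sum(scores) / len(scores)

# Toy example: 4 test labels, 2 training runs.
labels = [3, 1, 0, 2]
runs = [[3, 1, 0, 0],   # 3 of 4 correct
        [3, 1, 2, 2]]   # 3 of 4 correct
print(mean_accuracy(runs, labels))  # 75.0
```

Averaging over seeds matters here because training under label noise is high-variance: a single run can over- or under-state a method by a point or more.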
Top 20 of 24 entries shown.