Sparse Learning on ImageNet
Evaluation Metric
Top-1 Accuracy
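Top-1 accuracy is the fraction of validation images whose single highest-scoring prediction matches the ground-truth label. A minimal sketch of the computation, assuming PyTorch; the helper name is illustrative, not taken from any cited paper:

```python
import torch

def top1_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of samples whose highest-scoring class matches the label."""
    preds = logits.argmax(dim=1)  # predicted class index per sample
    return (preds == labels).float().mean().item()

# Toy usage: 4 samples, 10 classes.
logits = torch.randn(4, 10)
labels = torch.tensor([3, 1, 0, 7])
print(f"Top-1 accuracy: {top1_accuracy(logits, labels):.2%}")
```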
Evaluation Results
Performance of each model on this benchmark:
| Model Name | Top-1 Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| ResNet-50 (90% sparse, 100 epochs) | 73.82 | Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training | |
| SINDy | 6 | Sparse learning of stochastic dynamic equations | |
| ResNet-50 (90% sparse, 100 epochs) | 74.5 | Sparse Training via Boosting Pruning Plasticity with Neuroregeneration | |
| MobileNet-v1 (75% sparse) | 71.9 | Rigging the Lottery: Making All Tickets Winners | |
| ResNet-50 (80% sparse, 100 epochs) | 76 | Sparse Training via Boosting Pruning Plasticity with Neuroregeneration | |
| ResNet-50 (80% sparse) | 77.1 | Rigging the Lottery: Making All Tickets Winners | |
| ResNet-50 (80% sparse, 100 epochs) | 75.84 | Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training | |
| MobileNet-v1 (90% sparse) | 68.1 | Rigging the Lottery: Making All Tickets Winners | |
| ResNet-50 (90% sparse) | 76.4 | Rigging the Lottery: Making All Tickets Winners | |
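The sparsity levels in the model names (e.g. "90% sparse") denote the fraction of weights held at zero. A minimal sketch of how one might measure it, assuming a PyTorch model; the helper is illustrative and not from the cited papers:

```python
import torch
import torch.nn as nn

def weight_sparsity(model: nn.Module) -> float:
    """Fraction of parameter entries that are exactly zero.
    Counts every parameter tensor for simplicity; papers typically
    report sparsity over conv/linear weights only."""
    zeros, total = 0, 0
    for p in model.parameters():
        zeros += (p == 0).sum().item()
        total += p.numel()
    return zeros / total

# Toy usage: a linear layer with roughly half its weights pruned to zero.
layer = nn.Linear(8, 8)
with torch.no_grad():
    mask = torch.rand_like(layer.weight) < 0.5
    layer.weight[mask] = 0.0
print(f"Sparsity: {weight_sparsity(layer):.1%}")
```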