Sparse Learning on ImageNet
Metrics
Top-1 Accuracy (%): the percentage of ImageNet validation images for which the model's single highest-scoring prediction matches the ground-truth label.
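A minimal PyTorch sketch of how this metric is typically evaluated; `model` and `loader` are generic placeholders, not taken from any of the cited papers:

```python
import torch

@torch.no_grad()
def top1_accuracy(model: torch.nn.Module, loader, device: str = "cpu") -> float:
    """Fraction of examples whose argmax prediction equals the label."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        logits = model(images.to(device))  # shape: (batch, 1000) for ImageNet
        preds = logits.argmax(dim=1)       # top-1 prediction per image
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
    return correct / total
```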
Results
Top-1 accuracy of sparse models on this benchmark, as reported in the cited papers.
Model Name | Top-1 Accuracy (%) | Paper Title | Repository |
---|---|---|---|
ResNet-50: 80% Sparse | 77.1 | Rigging the Lottery: Making All Tickets Winners | |
ResNet-50: 90% Sparse | 76.4 | Rigging the Lottery: Making All Tickets Winners | |
ResNet-50: 80% Sparse, 100 epochs | 76.0 | Sparse Training via Boosting Pruning Plasticity with Neuroregeneration | |
ResNet-50: 80% Sparse, 100 epochs | 75.84 | Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training | |
ResNet-50: 90% Sparse, 100 epochs | 74.5 | Sparse Training via Boosting Pruning Plasticity with Neuroregeneration | |
ResNet-50: 90% Sparse, 100 epochs | 73.82 | Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training | |
MobileNet-v1: 75% Sparse | 71.9 | Rigging the Lottery: Making All Tickets Winners | |
MobileNet-v1: 90% Sparse | 68.1 | Rigging the Lottery: Making All Tickets Winners | |
SINDy | 6 | Sparse learning of stochastic dynamic equations | |
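The sparsity levels in the model names refer to the fraction of weights that are zero after training. As a rough illustration, here is a sketch of how that fraction can be measured for a PyTorch model; counting only tensors named `weight` is an assumption, since the cited papers differ on whether biases, BatchNorm, and first/last layers count toward the sparsity budget:

```python
import torch
from torchvision import models

def weight_sparsity(model: torch.nn.Module) -> float:
    """Fraction of weight entries that are exactly zero across the model."""
    zeros, total = 0, 0
    for name, param in model.named_parameters():
        if name.endswith("weight"):  # assumption: biases excluded from the count
            zeros += (param == 0).sum().item()
            total += param.numel()
    return zeros / total

# A dense torchvision ResNet-50 should report a sparsity near 0%.
print(f"{weight_sparsity(models.resnet50()):.2%}")
```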