HyperAI | SOTA
Network Pruning On Imagenet
Metric: Accuracy

Results
Performance results of various models on this benchmark.
| Model Name | Accuracy (top-1, %) | Paper Title |
|---|---|---|
| ResNet50-2.3 GFLOPs | 78.79 | Pruning Filters for Efficient ConvNets |
| ResNet50-1.5 GFLOPs | 78.07 | Pruning Filters for Efficient ConvNets |
| ResNet50-2.5 GFLOPs | 78.0 | Knapsack Pruning with Inner Distillation |
| RegX-1.6G | 77.97 | Group Fisher Pruning for Practical Network Compression |
| ResNet50-2.0 GFLOPs | 77.70 | Knapsack Pruning with Inner Distillation |
| ResNet50-3G FLOPs | 77.1 | EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning |
| ResNet50-2G FLOPs | 76.4 | EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning |
| ResNet50-1G FLOPs | 76.376 | Pruning Filters for Efficient ConvNets |
| TAS-pruned ResNet-50 | 76.20 | Network Pruning via Transformable Architecture Search |
| ResNet50 | 75.59 | Network Pruning That Matters: A Case Study on Retraining Variants |
| ResNet50-1G FLOPs | 74.2 | EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning |
| MobileNetV2 | 73.42 | Group Fisher Pruning for Practical Network Compression |
| ResNet50 | 73.14 | AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks |
| MobileNetV1-50% FLOPs | 70.7 | EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning |
| SqueezeNet (6-bit Deep Compression) | 57.5 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size |
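Several entries in this leaderboard (e.g., "Pruning Filters for Efficient ConvNets") rank convolutional filters by their L1 norm and remove the smallest ones before fine-tuning. A minimal sketch of that ranking step, using NumPy arrays in place of real network weights (the function name and shapes here are illustrative, not from any listed paper's code):

```python
import numpy as np

def l1_filter_ranking(weights: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Return indices of the conv filters with the smallest L1 norm.

    weights: conv kernel of shape (out_channels, in_channels, kH, kW)
    prune_ratio: fraction of output filters to mark for removal
    """
    # L1 importance score of each output filter: sum of absolute weights
    norms = np.abs(weights).sum(axis=(1, 2, 3))
    n_prune = int(round(prune_ratio * weights.shape[0]))
    # Filters with the smallest L1 norm are pruned first
    return np.argsort(norms)[:n_prune]

# Example: a hypothetical 3x3 conv layer with 8 output filters, 16 input channels
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16, 3, 3))
to_prune = l1_filter_ranking(w, prune_ratio=0.25)  # marks 2 of 8 filters
```

The FLOP budgets in the model names above (e.g., 1G vs. 3G FLOPs) correspond to how aggressively such a ratio is applied per layer; higher pruning ratios cut more compute but, as the table shows, cost accuracy.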