HyperAI
Quantization on ImageNet
Metrics
Top-1 Accuracy (%)
Results
Performance results of various models on this benchmark
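The metric reported below is top-1 accuracy: the percentage of ImageNet validation images for which the model's single highest-scoring class matches the ground-truth label. A minimal sketch of how it is computed (function name and toy data are illustrative, not from the benchmark):

```python
def top1_accuracy(predictions, labels):
    """Top-1 accuracy: fraction of samples whose highest-scoring class
    equals the ground-truth label, reported as a percentage."""
    correct = sum(
        1 for scores, label in zip(predictions, labels)
        # argmax over class scores for this sample
        if max(range(len(scores)), key=scores.__getitem__) == label
    )
    return 100.0 * correct / len(labels)
```

For example, with per-sample class scores `[[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]` and labels `[1, 0, 0]`, two of three predictions are correct.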
| Model name | Top-1 Accuracy (%) | Paper Title | Repository |
| --- | --- | --- | --- |
| UniQ (Ours) | 71.5 | Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer | |
| EfficientNet-B0-W4A4 | 76 | HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs | |
| ResNet50-W3A4 | 75.45 | HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs | |
| FQ-ViT (DeiT-T) | 71.61 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | |
| FQ-ViT (Swin-S) | 82.71 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | |
| FQ-ViT (ViT-B) | 83.31 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | |
| FQ-ViT (DeiT-B) | 81.20 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | |
| DenseNet-121 W8A8 | 73.356 | HPTQ: Hardware-Friendly Post Training Quantization | |
| ResNet-18 + PACT + R2Loss | 68.45 | R2 Loss: Range Restriction Loss for Model Compression and Quantization | - |
| MobileNetV2 W8A8 | 71.46 | HPTQ: Hardware-Friendly Post Training Quantization | |
| ResNet50-W4A4 (paper) | 76.7 | Learned Step Size Quantization | |
| EfficientNet-B0 W8A8 | 74.216 | HPTQ: Hardware-Friendly Post Training Quantization | |
| FQ-ViT (Swin-T) | 80.51 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | |
| MPT (80) +BN | 74.03 | Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network | |
| ADLIK-MO-ResNet50-W4A4 | 77.878 | Learned Step Size Quantization | |
| FQ-ViT (ViT-L) | 85.03 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer | |
| MobileNet-v1 + EWGS + R2Loss | 69.79 | R2 Loss: Range Restriction Loss for Model Compression and Quantization | - |
| EfficientNet-W4A4 | 73.8 | LSQ+: Improving low-bit quantization through learnable offsets and better initialization | |
| ADLIK-MO-ResNet50-W3A4 | 77.34 | Learned Step Size Quantization | |
| MixNet-W4A4 | 71.7 | LSQ+: Improving low-bit quantization through learnable offsets and better initialization | |
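The WxAy suffixes in the model names denote x-bit weights and y-bit activations (e.g. W4A4 is 4-bit weights and 4-bit activations, W8A8 is 8-bit for both). A minimal sketch of the symmetric uniform quantization underlying such schemes, assuming a single per-tensor scale chosen from the largest magnitude (function name and values are illustrative, not from any listed paper):

```python
def quantize_symmetric(values, bits):
    """Symmetric uniform quantization: map floats to signed `bits`-bit
    integers with one per-tensor scale, then dequantize back."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8 bits, 7 for 4 bits
    peak = max(abs(v) for v in values)    # largest magnitude sets the range
    scale = peak / qmax if peak else 1.0  # avoid dividing by zero
    # round-to-nearest, then clip to the representable integer range
    q = [max(-qmax, min(qmax, round(v / scale))) for v in values]
    dequant = [qi * scale for qi in q]    # values actually used at inference
    return q, dequant, scale
```

Lowering `bits` from 8 to 4 coarsens the grid from 255 levels to 15, which is why the W4A4 rows generally trail their full-precision baselines while W8A8 rows stay close.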