
Quantization on ImageNet

Evaluation Metric

Top-1 Accuracy (%)
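
Top-1 accuracy is the fraction of test images for which the model's single highest-scoring class matches the ground-truth label. A minimal NumPy sketch of the metric (the function name and toy data below are illustrative, not taken from any of the listed papers):

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Percentage of samples whose highest-scoring class equals the label."""
    preds = logits.argmax(axis=1)          # index of the top-scoring class per sample
    return float((preds == labels).mean() * 100.0)

# Toy example: 4 samples, 3 classes.
logits = np.array([[2.0, 1.0, 0.1],
                   [0.2, 3.1, 0.5],
                   [1.5, 0.3, 0.9],
                   [0.1, 0.2, 2.2]])
labels = np.array([0, 1, 2, 2])
print(top1_accuracy(logits, labels))       # 75.0 — three of four predictions match
```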

Benchmark Results

Performance of each model on this benchmark:

| Model | Top-1 Accuracy (%) | Paper Title |
|---|---|---|
| UniQ (Ours) | 71.5 | Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer |
| EfficientNet-B0-W4A4 | 76 | HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs |
| ResNet50-W3A4 | 75.45 | HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs |
| FQ-ViT (DeiT-T) | 71.61 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer |
| FQ-ViT (Swin-S) | 82.71 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer |
| FQ-ViT (ViT-B) | 83.31 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer |
| FQ-ViT (DeiT-B) | 81.20 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer |
| DenseNet-121 W8A8 | 73.356 | HPTQ: Hardware-Friendly Post Training Quantization |
| ResNet-18 + PACT + R2Loss | 68.45 | R2 Loss: Range Restriction Loss for Model Compression and Quantization |
| MobileNetV2 W8A8 | 71.46 | HPTQ: Hardware-Friendly Post Training Quantization |
| ResNet50-W4A4 (paper) | 76.7 | Learned Step Size Quantization |
| EfficientNet-B0 W8A8 | 74.216 | HPTQ: Hardware-Friendly Post Training Quantization |
| FQ-ViT (Swin-T) | 80.51 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer |
| MPT (80) + BN | 74.03 | Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network |
| ADLIK-MO-ResNet50-W4A4 | 77.878 | Learned Step Size Quantization |
| FQ-ViT (ViT-L) | 85.03 | FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer |
| MobileNet-v1 + EWGS + R2Loss | 69.79 | R2 Loss: Range Restriction Loss for Model Compression and Quantization |
| EfficientNet-W4A4 | 73.8 | LSQ+: Improving low-bit quantization through learnable offsets and better initialization |
| ADLIK-MO-ResNet50-W3A4 | 77.34 | Learned Step Size Quantization |
| MixNet-W4A4 | 71.7 | LSQ+: Improving low-bit quantization through learnable offsets and better initialization |
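In the table above, WxAy denotes x-bit weights and y-bit activations (e.g., W4A4 means 4-bit weights with 4-bit activations). Several of the listed methods (LSQ, LSQ+) learn the quantizer step size by gradient descent through a straight-through estimator for rounding. Below is a simplified PyTorch sketch of LSQ-style uniform fake quantization; `fake_quantize` is a hypothetical helper, and the per-layer gradient scale factor from the LSQ paper is omitted:

```python
import torch

def fake_quantize(x: torch.Tensor, step: torch.Tensor,
                  n_bits: int, signed: bool = True) -> torch.Tensor:
    """Uniform fake quantization with a learnable step size (LSQ-style sketch)."""
    if signed:                                 # e.g. weights: symmetric range
        qn, qp = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    else:                                      # e.g. post-ReLU activations: non-negative
        qn, qp = 0, 2 ** n_bits - 1
    q = torch.clamp(x / step, qn, qp)          # scale and clip to the integer grid range
    q = q + (q.round() - q).detach()           # round; straight-through gradient
    return q * step                            # dequantize ("fake" quantization)

# Hypothetical usage for a W4A4 layer: 4-bit signed weights.
w = torch.randn(64, 64)
step_w = torch.nn.Parameter(2 * w.abs().mean() / 7 ** 0.5)  # LSQ-style init: 2<|w|>/sqrt(Q_P)
w_q = fake_quantize(w, step_w, n_bits=4, signed=True)
w_q.sum().backward()                           # gradients reach step_w via the STE
print(step_w.grad is not None)                 # True
```

The `detach` trick keeps the forward pass identical to hard rounding while letting gradients flow to both the input and the step size, which is what makes the quantizer trainable end to end.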