
Generative Low-bitwidth Data Free Quantization

Shoukai Xu, Haokun Li, Bohan Zhuang, Jing Liu, Jiezhang Cao, Chuangrun Liang, Mingkui Tan
Abstract

Neural network quantization is an effective way to compress deep models and improve their execution latency and energy efficiency, so that they can be deployed on mobile or embedded devices. Existing quantization methods require original data for calibration or fine-tuning to get better performance. However, in many real-world scenarios, the data may not be available due to confidentiality or privacy issues, thereby making existing quantization methods not applicable. Moreover, due to the absence of original data, the recently developed generative adversarial networks (GANs) cannot be applied to generate data. Although the full-precision model may contain rich data information, such information alone is hard to exploit for recovering the original data or generating new meaningful data. In this paper, we investigate a simple-yet-effective method called Generative Low-bitwidth Data Free Quantization (GDFQ) to remove the data dependence burden. Specifically, we propose a knowledge matching generator to produce meaningful fake data by exploiting classification boundary knowledge and distribution information in the pre-trained model. With the help of the generated data, we can quantize a model by learning knowledge from the pre-trained model. Extensive experiments on three data sets demonstrate the effectiveness of our method. More critically, our method achieves much higher accuracy on 4-bit quantization than the existing data-free quantization method. Code is available at https://github.com/xushoukai/GDFQ.
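The abstract describes two technical components: a knowledge matching generator trained against the fixed full-precision model, and training of the quantized model on the generated fake data. The PyTorch sketch below is a minimal illustration of that general idea, not the paper's implementation; the names (KnowledgeMatchingGenerator, generator_loss, quantized_model_loss, collect_bn_stats), the generator architecture, and the loss weightings are all assumptions made for illustration. The authors' actual code is at the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeMatchingGenerator(nn.Module):
    """Illustrative conditional generator: maps (noise, class label) to a fake image."""
    def __init__(self, num_classes=10, latent_dim=100, img_size=32, channels=3):
        super().__init__()
        self.num_classes = num_classes
        self.init_size = img_size // 4
        self.fc = nn.Linear(latent_dim + num_classes, 128 * self.init_size ** 2)
        self.conv_blocks = nn.Sequential(
            nn.BatchNorm2d(128),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(64, channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z, labels):
        # Condition the noise on the target class via one-hot concatenation.
        y = F.one_hot(labels, self.num_classes).float()
        out = self.fc(torch.cat([z, y], dim=1))
        out = out.view(out.size(0), 128, self.init_size, self.init_size)
        return self.conv_blocks(out)

def collect_bn_stats(fp_model):
    # Running statistics stored in the pre-trained model's BatchNorm layers
    # serve as the "distribution information" the abstract refers to.
    return [(m.running_mean, m.running_var)
            for m in fp_model.modules() if isinstance(m, nn.BatchNorm2d)]

def generator_loss(fp_model, fake_images, labels, bn_stats, bn_weight=0.1):
    """Cross-entropy on the conditioning labels (classification boundary
    knowledge) plus matching of BatchNorm statistics (distribution knowledge).
    bn_weight = 0.1 is an illustrative choice, not a value from the paper."""
    feats = []
    # Capture the inputs to each BN layer during the forward pass; this assumes
    # module iteration order matches forward execution order (true for typical CNNs).
    hooks = [m.register_forward_hook(lambda m, i, o: feats.append(i[0]))
             for m in fp_model.modules() if isinstance(m, nn.BatchNorm2d)]
    logits = fp_model(fake_images)
    for h in hooks:
        h.remove()

    ce = F.cross_entropy(logits, labels)
    bns = 0.0
    for x, (mu, var) in zip(feats, bn_stats):
        bns = bns + F.mse_loss(x.mean(dim=(0, 2, 3)), mu) \
                  + F.mse_loss(x.var(dim=(0, 2, 3)), var)
    return ce + bn_weight * bns

def quantized_model_loss(q_logits, fp_logits, labels, T=4.0):
    """Train the quantized model on fake data: cross-entropy on the generator's
    labels plus standard KL distillation from the full-precision teacher."""
    kd = F.kl_div(F.log_softmax(q_logits / T, dim=1),
                  F.softmax(fp_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return F.cross_entropy(q_logits, labels) + kd
```

In a full training loop, the generator and the quantized model would be updated alternately on batches of sampled (noise, label) pairs, with the full-precision model frozen throughout, consistent with the data-free setting the abstract describes.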
