GMem: A Modular Approach for Ultra-Efficient Generative Models
Yi Tang, Peng Sun, Zhenglin Cheng, Tao Lin
Abstract
Recent studies indicate that the denoising process in deep generative diffusion models implicitly learns and memorizes semantic information from the data distribution. These findings suggest that capturing more complex data distributions requires larger neural networks, leading to a substantial increase in computational demands, which in turn becomes the primary bottleneck in both training and inference of diffusion models. To this end, we introduce GMem: A Modular Approach for Ultra-Efficient Generative Models. Our approach, GMem, decouples memory capacity from the model and implements it as a separate, immutable memory set that preserves the essential semantic information in the data. By reducing the network's reliance on memorizing complex data distributions, this design improves training efficiency, sampling efficiency, and generation diversity. On ImageNet at 256×256 resolution, GMem achieves a 50× training speedup compared to SiT, reaching FID = 7.66 in fewer than 28 epochs (∼4 hours of training), whereas SiT requires 1400 epochs. Without classifier-free guidance, GMem attains state-of-the-art (SoTA) performance with FID = 1.53 in 160 epochs and only ∼20 hours of training, outperforming LightningDiT, which requires 800 epochs and ∼95 hours to reach FID = 2.17.
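To make the core idea concrete, below is a minimal, hypothetical sketch of what decoupling memory from the network could look like: an immutable bank of semantic vectors is kept outside the denoiser's trainable parameters, and the denoiser is simply conditioned on a retrieved bank entry. This is not the authors' implementation; names such as `MemoryBank`, `memory_dim`, and the toy MLP denoiser are illustrative assumptions.

```python
# Hypothetical sketch of the GMem idea: an immutable memory bank holds semantic
# vectors distilled from the data, and the denoising network is conditioned on a
# sampled bank entry rather than memorizing the distribution in its own weights.
import torch
import torch.nn as nn


class MemoryBank(nn.Module):
    """Frozen (immutable) bank of semantic vectors; never updated by the optimizer."""

    def __init__(self, num_entries: int, memory_dim: int):
        super().__init__()
        # register_buffer keeps the bank out of model.parameters(), so it stays fixed.
        self.register_buffer("bank", torch.randn(num_entries, memory_dim))

    def sample(self, batch_size: int) -> torch.Tensor:
        idx = torch.randint(0, self.bank.size(0), (batch_size,))
        return self.bank[idx]


class ConditionedDenoiser(nn.Module):
    """Toy denoiser that takes the retrieved memory vector as extra conditioning."""

    def __init__(self, data_dim: int, memory_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim + memory_dim + 1, hidden),  # +1 for the timestep
            nn.SiLU(),
            nn.Linear(hidden, data_dim),
        )

    def forward(self, x_t, t, mem):
        return self.net(torch.cat([x_t, t.unsqueeze(-1), mem], dim=-1))


if __name__ == "__main__":
    bank = MemoryBank(num_entries=10_000, memory_dim=64)
    model = ConditionedDenoiser(data_dim=128, memory_dim=64)
    x_t = torch.randn(8, 128)   # noisy batch
    t = torch.rand(8)           # diffusion timesteps in [0, 1)
    mem = bank.sample(8)        # retrieved semantic memory
    pred = model(x_t, t, mem)   # denoiser output conditioned on the memory
    print(pred.shape)           # torch.Size([8, 128])
```

Under this reading, only the (comparatively small) denoiser is trained, while the memory set carries the semantic information, which is one plausible way the reported training and sampling speedups could arise.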