Simple and Effective Masked Diffusion Language Models

Subham Sekhar Sahoo, Marianne Arriola, Yair Schiff, Aaron Gokaslan, Edgar Marroquin, Justin T Chiu, Alexander Rush, Volodymyr Kuleshov
Abstract

While diffusion models excel at generating high-quality images, prior work reports a significant performance gap between diffusion and autoregressive (AR) methods in language modeling. In this work, we show that simple masked discrete diffusion is more performant than previously thought. We apply an effective training recipe that improves the performance of masked diffusion models and derive a simplified, Rao-Blackwellized objective that results in additional improvements. Our objective has a simple form -- it is a mixture of classical masked language modeling losses -- and can be used to train encoder-only language models that admit efficient samplers, including ones that can generate arbitrary lengths of text semi-autoregressively like a traditional language model. On language modeling benchmarks, a range of masked diffusion models trained with modern engineering practices achieves a new state-of-the-art among diffusion models, and approaches AR perplexity. We release our code at: https://github.com/kuleshov-group/mdlm
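Because the objective reduces to a weighted mixture of masked language modeling losses, it can be written very compactly. Below is a minimal, hypothetical PyTorch sketch of such a loss under the common log-linear noise schedule alpha_t = 1 - t, for which the ELBO weight becomes 1/t; the function name, `mask_id` argument, and epsilon clamp are illustrative assumptions rather than the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def masked_diffusion_loss(model, x, mask_id, eps=1e-3):
    """Sketch of a continuous-time masked-diffusion objective:
    masked-LM cross-entropy reweighted by 1/t, assuming the
    schedule alpha_t = 1 - t (masking probability t at time t)."""
    b, l = x.shape
    # Sample one diffusion time per sequence; eps avoids the 1/t singularity.
    t = torch.rand(b, device=x.device) * (1 - eps) + eps
    # Mask each token independently with probability t.
    masked = torch.rand(b, l, device=x.device) < t[:, None]
    z_t = torch.where(masked, torch.full_like(x, mask_id), x)
    # Encoder-only model: predicts all positions in parallel, no causal mask.
    logits = model(z_t)  # (b, l, vocab)
    ce = F.cross_entropy(logits.transpose(1, 2), x, reduction="none")  # (b, l)
    # Cross-entropy on masked positions only, weighted by 1/t
    # (the alpha_t' / (1 - alpha_t) factor for this schedule).
    return (ce * masked / t[:, None]).sum() / masked.sum().clamp(min=1)
```

A reverse-process sampler then iterates this in the other direction: starting from an all-mask sequence, it repeatedly queries the model and unmasks a subset of positions, which is what enables the semi-autoregressive, arbitrary-length generation described above.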