NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale

Prevailing autoregressive (AR) models for text-to-image generation either rely on heavy, computationally intensive diffusion models to process continuous image tokens, or employ vector quantization (VQ) to obtain discrete tokens at the cost of quantization loss. In this paper, we push the autoregressive paradigm forward with NextStep-1, a 14B autoregressive model paired with a 157M flow matching head, trained on discrete text tokens and continuous image tokens with next-token prediction objectives. NextStep-1 achieves state-of-the-art performance among autoregressive models on text-to-image generation tasks, exhibiting strong capabilities in high-fidelity image synthesis. Furthermore, our method shows strong performance in image editing, highlighting the power and versatility of our unified approach. To facilitate open research, we will release our code and models to the community.
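
To make the training setup concrete, the sketch below illustrates (under assumptions, not the released implementation) how an autoregressive backbone's hidden states could drive two losses: a standard cross-entropy next-token loss on discrete text tokens, and a small flow matching head that regresses a velocity field for the next continuous image token. All module names, dimensions, and hyperparameters here are hypothetical placeholders.

    # Hypothetical sketch of joint next-token training on discrete text tokens
    # (cross-entropy) and continuous image tokens (flow matching head).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FlowMatchingHead(nn.Module):
        """Predicts a velocity v(x_t, t | h) conditioned on the AR hidden state h."""
        def __init__(self, token_dim: int, hidden_dim: int, cond_dim: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(token_dim + cond_dim + 1, hidden_dim),
                nn.SiLU(),
                nn.Linear(hidden_dim, hidden_dim),
                nn.SiLU(),
                nn.Linear(hidden_dim, token_dim),
            )

        def forward(self, x_t, t, cond):
            # x_t: noisy token (B, D_tok); t: time (B, 1); cond: hidden state (B, D_cond)
            return self.net(torch.cat([x_t, t, cond], dim=-1))

    def flow_matching_loss(head, x1, cond):
        """Rectified-flow-style objective: regress the velocity (x1 - x0) along the
        straight path x_t = (1 - t) * x0 + t * x1, with x0 drawn from a Gaussian."""
        x0 = torch.randn_like(x1)
        t = torch.rand(x1.size(0), 1, device=x1.device)
        x_t = (1 - t) * x0 + t * x1
        v_pred = head(x_t, t, cond)
        return F.mse_loss(v_pred, x1 - x0)

    # Toy usage with stand-in hidden states from an assumed AR backbone.
    B, D_cond, D_tok, vocab = 4, 64, 16, 1000
    hidden_text = torch.randn(B, D_cond)    # positions whose next token is text
    hidden_image = torch.randn(B, D_cond)   # positions whose next token is an image token

    text_logits = nn.Linear(D_cond, vocab)(hidden_text)
    next_text_ids = torch.randint(0, vocab, (B,))
    loss_text = F.cross_entropy(text_logits, next_text_ids)   # discrete next-token loss

    head = FlowMatchingHead(token_dim=D_tok, hidden_dim=128, cond_dim=D_cond)
    next_image_tokens = torch.randn(B, D_tok)                  # continuous targets
    loss_image = flow_matching_loss(head, next_image_tokens, hidden_image)

    (loss_text + loss_image).backward()

At inference time, such a head would be integrated over t (e.g., by a few Euler steps from Gaussian noise) to sample the next continuous image token, while text tokens are sampled from the softmax as usual; the specifics above are illustrative only.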