A Survey on Latent Reasoning

Rui-Jie Zhu, Tianhao Peng, Tianhao Cheng, Xingwei Qu, Jinfa Huang, Dawei Zhu, Hao Wang, Kaiwen Xue, Xuanliang Zhang, Yong Shan, Tianle Cai, Taylor Kergan, Assel Kembay, Andrew Smith, Chenghua Lin, Binh Nguyen, Yuqi Pan, Yuhong Chou, Zefan Cai, Zhenhe Wu, Yongchi Zhao, Tianyu Liu, Jian Yang, Wangchunshu Zhou, Chujie Zheng, Chongxuan Li, Yuyin Zhou, Zhoujun Li, Zhaoxiang Zhang, Jiaheng Liu, Ge Zhang, Wenhao Huang, Jason Eshraghian
Abstract

Large Language Models (LLMs) have demonstrated impressive reasoning capabilities, especially when guided by explicit chain-of-thought (CoT) reasoning that verbalizes intermediate steps. While CoT improves both interpretability and accuracy, its dependence on natural language reasoning limits the model's expressive bandwidth. Latent reasoning tackles this bottleneck by performing multi-step inference entirely in the model's continuous hidden state, eliminating token-level supervision. To advance research in this area, this survey provides a comprehensive overview of the emerging field of latent reasoning. We begin by examining the foundational role of neural network layers as the computational substrate for reasoning, highlighting how hierarchical representations support complex transformations. Next, we explore diverse latent reasoning methodologies, including activation-based recurrence, hidden state propagation, and fine-tuning strategies that compress or internalize explicit reasoning traces. Finally, we discuss advanced paradigms such as infinite-depth latent reasoning via masked diffusion models, which enable globally consistent and reversible reasoning processes. By unifying these perspectives, we aim to clarify the conceptual landscape of latent reasoning and chart future directions for research at the frontier of LLM cognition. An associated GitHub repository collecting the latest papers and repos is available at: https://github.com/multimodal-art-projection/LatentCoT-Horizon/.
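
To make the core idea concrete, below is a minimal, self-contained sketch (not taken from the survey) of activation-based latent recurrence: instead of decoding intermediate reasoning steps into tokens, the same block is applied repeatedly to the hidden state, so the "chain of thought" unfolds in continuous activation space and iteration depth plays the role that intermediate CoT tokens play in explicit reasoning. All class names, dimensions, and the step count here are illustrative assumptions.

```python
# Illustrative sketch of activation-based latent recurrence; names and
# sizes are hypothetical, not the survey's reference implementation.
import torch
import torch.nn as nn


class LatentReasoningBlock(nn.Module):
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.mixer = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # One latent "reasoning step": a residual update of the hidden state,
        # with no token emitted and no token-level supervision.
        return h + self.mixer(self.norm(h))


def latent_reason(h: torch.Tensor, block: nn.Module, steps: int) -> torch.Tensor:
    # Apply the same block repeatedly; more steps means more latent
    # computation per input, analogous to a longer explicit CoT.
    for _ in range(steps):
        h = block(h)
    return h


if __name__ == "__main__":
    block = LatentReasoningBlock()
    h0 = torch.randn(1, 10, 64)            # (batch, sequence, hidden)
    hT = latent_reason(h0, block, steps=8)  # 8 latent reasoning steps
    print(hT.shape)                         # torch.Size([1, 10, 64])
```

In a full model, the final hidden state would be decoded into an answer only after the latent iterations complete, which is what distinguishes this family of methods from token-by-token CoT decoding.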