
Step-Audio 2 Technical Report

Boyong Wu, Chao Yan, Chen Hu, Cheng Yi, Chengli Feng, Fei Tian, Feiyu Shen, Gang Yu, Haoyang Zhang, Jingbei Li, Mingrui Chen, Peng Liu, Wang You, Xiangyu Tony Zhang, Xingyuan Li, Xuerui Yang, Yayue Deng, Yechang Huang, Yuxin Li, Yuxin Zhang, Zhao You, Brian Li, Changyi Wan, Hanpeng Hu, Jiangjie Zhen, Siyu Chen, Song Yuan, Xuelin Zhang, Yimin Jiang, Yu Zhou, Yuxiang Yang, Bingxin Li, Buyun Ma, Changhe Song, Dongqing Pang, Guoqiang Hu, Haiyang Sun, Kang An, Na Wang, Shuli Gao, Wei Ji, Wen Li, Wen Sun, Xuan Wen, Yong Ren, Yuankai Ma, Yufan Lu, Bin Wang, Bo Li, Changxin Miao, Che Liu, Chen Xu, Dapeng Shi, Dingyuan Hu, Donghang Wu, Enle Liu, Guanzhe Huang, Gulin Yan, Han Zhang, Hao Nie, Haonan Jia, Hongyu Zhou, Jianjian Sun, Jiaoren Wu, Jie Wu, Jie Yang, Jin Yang, Junzhe Lin, Kaixiang Li, Lei Yang, Liying Shi, Li Zhou, Longlong Gu, Ming Li, Mingliang Li, Mingxiao Li, Nan Wu, Qi Han, Qinyuan Tan, Shaoliang Pang, Shengjie Fan, Siqi Liu, Tiancheng Cao, Wanying Lu, Wenqing He, Wuxun Xie, Xu Zhao, Xueqi Li, Yanbo Yu, Yang Yang, Yi Liu, Yifan Lu, Yilei Wang, Yuanhao Ding, Yuanwei Liang, Yuanwei Lu, Yuchu Luo, Yuhe Yin, Yumeng Zhan, Yuxiang Zhang, Zidong Yang, Zixin Zhang, Binxing Jiao, Daxin Jiang, Heung-Yeung Shum, Jiansheng Chen, Jing Li, Xiangyu Zhang, Yibo Zhu
Abstract

This paper presents Step-Audio 2, an end-to-end multi-modal large language model designed for industry-strength audio understanding and speech conversation. By integrating a latent audio encoder and reasoning-centric reinforcement learning (RL), Step-Audio 2 achieves promising performance in automatic speech recognition (ASR) and audio understanding. To facilitate genuine end-to-end speech conversation, Step-Audio 2 incorporates the generation of discrete audio tokens into language modeling, significantly enhancing its responsiveness to paralinguistic information such as speaking styles and emotions. To effectively leverage the rich textual and acoustic knowledge in real-world data, Step-Audio 2 integrates retrieval-augmented generation (RAG) and is able to call external tools such as web search to mitigate hallucination and audio search to switch timbres. Trained on millions of hours of speech and audio data, Step-Audio 2 delivers intelligence and expressiveness across diverse conversational scenarios. Evaluation results demonstrate that Step-Audio 2 achieves state-of-the-art performance on various audio understanding and conversational benchmarks compared to other open-source and commercial solutions. Please visit https://github.com/stepfun-ai/Step-Audio2 for more information.