
InternLM2 Technical Report

Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui Chen, Zhi Chen, Pei Chu, Xiaoyi Dong, Haodong Duan, Qi Fan, Zhaoye Fei, Yang Gao, Jiaye Ge, Chenya Gu, Yuzhe Gu, Tao Gui, Aijia Guo, Qipeng Guo, Conghui He, Yingfan Hu, Ting Huang, Tao Jiang, Penglong Jiao, Zhenjiang Jin, Zhikai Lei, Jiaxing Li, Jingwen Li, Linyang Li, Shuaibin Li, Wei Li, Yining Li, Hongwei Liu, Jiangning Liu, Jiawei Hong, Kaiwen Liu, Kuikun Liu, Xiaoran Liu, Chengqi Lv, Haijun Lv, Kai Lv, Li Ma, Runyuan Ma, Zerun Ma, Wenchang Ning, Linke Ouyang, Jiantao Qiu, Yuan Qu, Fukai Shang, Yunfan Shao, Demin Song, Zifan Song, Zhihao Sui, Peng Sun, Yu Sun, Huanze Tang, Bin Wang, Guoteng Wang, Jiaqi Wang, Jiayu Wang, Rui Wang, Yudong Wang, Ziyi Wang, Xingjian Wei, Qizhen Weng, Fan Wu, Yingtong Xiong, Chao Xu, Ruiliang Xu, Hang Yan, Yirong Yan, Xiaogui Yang, Haochen Ye, Huaiyuan Ying, Jia Yu, Jing Yu, Yuhang Zang, Chuyu Zhang, Li Zhang, Pan Zhang, Peng Zhang, Ruijie Zhang, Shuo Zhang, Songyang Zhang, Wenjian Zhang, Wenwei Zhang, Xingcheng Zhang, Xinyue Zhang, Hui Zhao, Qian Zhao, Xiaomeng Zhao, Fengzhe Zhou, Zaida Zhou, Jingming Zhuo, Yicheng Zou, Xipeng Qiu, Yu Qiao, Dahua Lin
Abstract

The rapid evolution of large language models (LLMs) such as ChatGPT and GPT-4 has sparked broad discussion about the imminent arrival of artificial general intelligence (AGI). However, replicating such advances in open-source models remains challenging. This paper introduces InternLM2, an open-source large language model that outperforms its predecessors in comprehensive evaluations across six dimensions and thirty benchmarks, in long-context modeling, and in open-ended subjective evaluations, owing to innovative pre-training and optimization techniques. The pre-training process of InternLM2 is described in detail, with emphasis on the preparation of diverse data types, including text, code, and long-context data. The model's context length is extended progressively during pre-training and fine-tuning, from an initial 4K tokens to 32K tokens, allowing it to capture long-term dependencies efficiently, and InternLM2 performs remarkably well on the 200K "Needle-in-a-Haystack" test. To further align the model, InternLM2 is trained with supervised fine-tuning (SFT) and a novel Conditional Online Reinforcement Learning from Human Feedback (COOL RLHF) strategy, which addresses conflicting human preferences and reward hacking. By releasing InternLM2 models at different training stages and model sizes, this paper gives the research community insight into the model's evolution.
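As a rough illustration of the "Needle-in-a-Haystack" evaluation mentioned above, the sketch below shows the general idea only: a short "needle" fact is buried at a chosen depth inside long filler text, and the model is asked to retrieve it. The function names (build_haystack_prompt, needle_retrieved) and the word-count approximation of context length are illustrative assumptions, not the report's actual evaluation harness.

```python
import random

def build_haystack_prompt(needle: str, filler_sentence: str,
                          context_words: int, depth: float) -> str:
    """Bury a 'needle' fact at a relative depth (0.0-1.0) inside filler text.

    Context size is approximated by word count here; a real harness would
    count tokenizer tokens (e.g. up to 200K tokens in the report's setting).
    """
    words_per_sentence = max(len(filler_sentence.split()), 1)
    n_filler = max(context_words // words_per_sentence, 1)
    haystack = [filler_sentence] * n_filler
    haystack.insert(int(len(haystack) * depth), needle)
    context = " ".join(haystack)
    question = "What is the magic number mentioned in the text above?"
    return f"{context}\n\n{question}"

def needle_retrieved(model_generate, prompt: str, answer: str) -> bool:
    """model_generate is a placeholder for any text-generation callable."""
    return answer in model_generate(prompt)

if __name__ == "__main__":
    needle = "The magic number is 7481."
    filler = "The grass is green and the sky is blue."
    prompt = build_haystack_prompt(needle, filler,
                                   context_words=2000, depth=random.random())
    # Example scoring call with your own model:
    # ok = needle_retrieved(your_model.generate, prompt, "7481")
```

Sweeping the depth parameter and the context length, and recording retrieval accuracy at each point, yields the heatmap-style results typically reported for this test.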
