MiMo-VL Technical Report

Xiaomi LLM-Core Team, Zihao Yue, Zhenru Lin, Yifan Song, Weikun Wang, Shuhuai Ren, Shuhao Gu, Shicheng Li, Peidian Li, Liang Zhao, Lei Li, Kainan Bao, Hao Tian, Hailin Zhang, Gang Wang, Dawei Zhu, Cici, Chenhong He, Bowen Ye, Bowen Shen, Zihan Zhang, Zihan Jiang, Zhixian Zheng, Zhichao Song, Zhenbo Luo, Yue Yu, Yudong Wang, Yuanyuan Tian, Yu Tu, Yihan Yan, Yi Huang, Xu Wang, Xinzhe Xu, Xingchen Song, Xing Zhang, Xing Yong, Xin Zhang, Xiangwei Deng, Wenyu Yang, Wenhan Ma, Weiwei Lv, Weiji Zhuang, Wei Liu, Sirui Deng, Shuo Liu, Shimao Chen, Shihua Yu, Shaohui Liu, Shande Wang, Rui Ma, Qiantong Wang, Peng Wang, Nuo Chen, Menghang Zhu, Kangyang Zhou, Kang Zhou, Kai Fang, Jun Shi, Jinhao Dong, Jiebao Xiao, Jiaming Xu, Huaqiu Liu, Hongshen Xu, Heng Qu, Haochen Zhao, Hanglong Lv, Guoan Wang, Duo Zhang, Dong Zhang, Di Zhang, Chong Ma, Chang Liu, Can Cai, Bingquan Xia
Published: 6/5/2025
Abstract

We open-source MiMo-VL-7B-SFT and MiMo-VL-7B-RL, two powerful vision-language models delivering state-of-the-art performance in both general visual understanding and multimodal reasoning. MiMo-VL-7B-RL outperforms Qwen2.5-VL-7B on 35 out of 40 evaluated tasks, and scores 59.4 on OlympiadBench, surpassing models with up to 78B parameters. For GUI grounding applications, it sets a new standard with 56.1 on OSWorld-G, even outperforming specialized models such as UI-TARS. Our training combines four-stage pre-training (2.4 trillion tokens) with Mixed On-policy Reinforcement Learning (MORL) integrating diverse reward signals. We identify the importance of incorporating high-quality reasoning data with long Chain-of-Thought into pre-training stages, and the benefits of mixed RL despite challenges in simultaneous multi-domain optimization. We also contribute a comprehensive evaluation suite covering 50+ tasks to promote reproducibility and advance the field. The model checkpoints and full evaluation suite are available at https://github.com/XiaomiMiMo/MiMo-VL.