
Advancing Multimodal Reasoning: From Optimized Cold Start to Staged Reinforcement Learning

Shuang Chen, Yue Guo, Zhaochen Su, Yafu Li, Yulun Wu, Jiacheng Chen, Jiayu Chen, Weijie Wang, Xiaoye Qu, Yu Cheng
Published: 6/5/2025
Abstract

Inspired by the remarkable reasoning capabilities of DeepSeek-R1 in complex textual tasks, many works attempt to incentivize similar capabilities in Multimodal Large Language Models (MLLMs) by directly applying reinforcement learning (RL). However, they still struggle to activate complex reasoning. In this paper, rather than examining multimodal RL in isolation, we delve into current training pipelines and identify three crucial phenomena: 1) Effective cold start initialization is critical for enhancing MLLM reasoning. Intriguingly, we find that initializing with carefully selected text data alone can lead to performance surpassing many recent multimodal reasoning models, even before multimodal RL. 2) Standard GRPO applied to multimodal RL suffers from gradient stagnation, which degrades training stability and performance. 3) Subsequent text-only RL training, following the multimodal RL phase, further enhances multimodal reasoning. This staged training approach effectively balances perceptual grounding and cognitive reasoning development. By incorporating the above insights and addressing multimodal RL issues, we introduce ReVisual-R1, achieving a new state-of-the-art among open-source 7B MLLMs on challenging benchmarks including MathVerse, MathVision, WeMath, LogicVista, DynaMath, and the challenging AIME2024 and AIME2025.
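The gradient stagnation noted in point 2) can be seen directly from GRPO's group-normalized advantage: each rollout's reward is standardized against the mean and standard deviation of its sampling group, so a group in which every rollout receives the same reward yields all-zero advantages and contributes no policy gradient. The following is a minimal illustrative sketch, not the paper's implementation; the function name, tensor shapes, and 0/1 correctness rewards are assumptions made for the example.

import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # rewards: (num_groups, group_size) scalar reward per sampled rollout.
    # GRPO standardizes each reward against its own group's statistics.
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# With sparse 0/1 correctness rewards, a group whose rollouts are all wrong
# (or all right) has zero variance, so every advantage is zero and the group
# produces no learning signal -- the stagnation described in the abstract.
rewards = torch.tensor([[0.0, 0.0, 0.0, 0.0],   # uniform group: advantages all 0
                        [1.0, 0.0, 1.0, 0.0]])  # mixed group: nonzero advantages
print(grpo_advantages(rewards))

On hard multimodal problems, uniform all-wrong groups are common early in training, which is why this degenerate case matters in practice.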