Intern-S1: A Scientific Multimodal Foundation Model

In recent years, a plethora of open-source foundation models have emerged, achieving remarkable progress in widely followed fields, with performance close to that of closed-source models. However, in high-value but more challenging scientific domains, either the field still relies on expert models, or the progress of general foundation models lags significantly behind that in popular areas, remaining far from sufficient to transform scientific research and leaving a substantial gap between open-source and closed-source models in these scientific domains. To mitigate this gap and take a further step toward Artificial General Intelligence (AGI), we introduce Intern-S1, a specialized generalist that combines general understanding and reasoning capabilities with the expertise to analyze multiple scientific data modalities. Intern-S1 is a multimodal Mixture-of-Experts (MoE) model with 28 billion activated parameters and 241 billion total parameters, continually pre-trained on 5T tokens, including over 2.5T tokens from scientific domains. In the post-training stage, Intern-S1 undergoes offline and then online reinforcement learning (RL) in InternBootCamp, where we propose Mixture-of-Rewards (MoR) to synergize RL training on more than 1000 tasks simultaneously. Through integrated innovations in algorithms, data, and training systems, Intern-S1 achieves top-tier performance in online RL training. On comprehensive evaluation benchmarks, Intern-S1 demonstrates competitive performance on general reasoning tasks among open-source models and significantly outperforms open-source models in scientific domains, surpassing closed-source state-of-the-art models on professional tasks such as molecular synthesis planning, reaction condition prediction, and predicting thermodynamic stabilities of crystals. Our models are available at https://huggingface.co/internlm/Intern-S1.
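For readers who want to try the released checkpoint, the following is a minimal sketch of loading the model from the Hugging Face Hub. It assumes the checkpoint exposes the standard `transformers` causal-LM interface; the specific classes (`AutoTokenizer`, `AutoModelForCausalLM`), the `trust_remote_code` flag, and the generation settings are assumptions for illustration, not details specified in this abstract.

```python
# Hypothetical usage sketch: loading Intern-S1 from the Hugging Face Hub.
# Assumes the repository follows the standard `transformers` causal-LM
# interface; class names and flags below are assumptions, not confirmed here.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "internlm/Intern-S1"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the multimodal MoE architecture may ship custom code
    device_map="auto",       # shard the 241B-parameter model across available devices
)

# Example text-only prompt drawn from the scientific tasks named above.
prompt = "Propose a retrosynthetic route for aspirin."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```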