Aria: An Open Multimodal Native Mixture-of-Experts Model

Information comes in diverse modalities. Multimodal native AI models are essential to integrate real-world information and deliver comprehensive understanding. While proprietary multimodal native models exist, their lack of openness imposes obstacles to adoption, let alone adaptation. To fill this gap, we introduce Aria, an open multimodal native model with best-in-class performance across a wide range of multimodal, language, and coding tasks. Aria is a mixture-of-experts model with 3.9B and 3.5B activated parameters per visual token and text token, respectively. It outperforms Pixtral-12B and Llama3.2-11B, and is competitive with the best proprietary models on various multimodal tasks. We pre-train Aria from scratch following a 4-stage pipeline, which progressively equips the model with strong capabilities in language understanding, multimodal understanding, long-context handling, and instruction following. We open-source the model weights along with a codebase that facilitates easy adoption and adaptation of Aria in real-world applications.
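As a minimal sketch of what "easy adoption" could look like, the snippet below loads the released weights through Hugging Face transformers and runs one image-plus-text query. The repository id rhymes-ai/Aria, the use of trust_remote_code, and the placeholder image URL are assumptions about the released artifacts, not details stated in the abstract.

```python
# Hedged usage sketch: loading an open-weight multimodal model such as Aria
# via Hugging Face transformers. Repository id and remote-code requirement
# are assumptions; consult the released codebase for the authoritative API.
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "rhymes-ai/Aria"  # assumed Hugging Face repository id
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # MoE weights are large; bf16 halves memory
    device_map="auto",
    trust_remote_code=True,
)

# Placeholder image URL for illustration only.
image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```

Because the model is a mixture-of-experts, only a subset of parameters (3.9B per visual token, 3.5B per text token) is activated at inference time, so per-token compute is closer to a dense ~4B model than to the full parameter count.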