
Eagle 2.5: Boosting Long-Context Post-Training for Frontier Vision-Language Models

Guo Chen, Zhiqi Li, Shihao Wang, Jindong Jiang, Yicheng Liu, Lidong Lu, De-An Huang, Wonmin Byeon, Matthieu Le, Tuomas Rintamaki, Tyler Poon, Max Ehrlich, Tong Lu, Limin Wang, Bryan Catanzaro, Jan Kautz, Andrew Tao, Zhiding Yu, Guilin Liu
Publication date: 4/23/2025
Abstract

We introduce Eagle 2.5, a family of frontier vision-language models (VLMs) for long-context multimodal learning. Our work addresses the challenges of long video comprehension and high-resolution image understanding, introducing a generalist framework for both tasks. The proposed training framework incorporates Automatic Degrade Sampling and Image Area Preservation, two techniques that preserve contextual integrity and visual details. The framework also includes numerous efficiency optimizations in the pipeline for long-context data training. Finally, we propose Eagle-Video-110K, a novel dataset that integrates both story-level and clip-level annotations, facilitating long-video understanding. Eagle 2.5 demonstrates substantial improvements on long-context multimodal benchmarks, providing a robust solution to the limitations of existing VLMs. Notably, our best model, Eagle 2.5-8B, achieves 72.4% on Video-MME with 512 input frames, matching the results of top-tier commercial models such as GPT-4o and large-scale open-source models like Qwen2.5-VL-72B and InternVL2.5-78B.