
InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks

Chen, Zhe; Wu, Jiannan; Wang, Wenhai; Su, Weijie; Chen, Guo; Xing, Sen; Zhong, Muyan; Zhang, Qinglong; Zhu, Xizhou; Lu, Lewei; Li, Bin; Luo, Ping; Lu, Tong; Qiao, Yu; Dai, Jifeng
Abstract

The exponential growth of large language models (LLMs) has opened up numerous possibilities for multimodal AGI systems. However, progress in vision and vision-language foundation models, which are also critical elements of multi-modal AGI, has not kept pace with LLMs. In this work, we design a large-scale vision-language foundation model (InternVL), which scales up the vision foundation model to 6 billion parameters and progressively aligns it with the LLM using web-scale image-text data from various sources. The model can be broadly applied to, and achieves state-of-the-art performance on, 32 generic visual-linguistic benchmarks, covering visual perception tasks such as image-level or pixel-level recognition, vision-language tasks such as zero-shot image/video classification and zero-shot image/video-text retrieval, and linking with LLMs to create multi-modal dialogue systems. It has powerful visual capabilities and is a strong alternative to ViT-22B. We hope our research contributes to the development of multi-modal large models. Code and models are available at https://github.com/OpenGVLab/InternVL.
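The zero-shot classification and retrieval tasks mentioned above follow the standard CLIP-style protocol: the aligned vision and text encoders embed an image and a set of text prompts, and the similarity between the embeddings scores each label. Below is a minimal sketch of that protocol using an openly available CLIP checkpoint from Hugging Face transformers as a stand-in; InternVL's own checkpoints and loading code differ and are documented in the repository linked above.

```python
# Minimal sketch of CLIP-style zero-shot image classification, the protocol
# on which models like InternVL are evaluated. Uses a public CLIP checkpoint
# as a stand-in; see the InternVL repo for its actual checkpoints/loading.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder path: any RGB image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-to-text similarity logits, softmaxed into per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

The same embedding-similarity machinery drives zero-shot retrieval: instead of ranking labels for one image, the text embedding of a query ranks a gallery of image embeddings (or vice versa).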
