CogVLM: Visual Expert for Pretrained Language Models

We introduce CogVLM, a powerful open-source visual language foundation model. Unlike the popular shallow alignment methods that map image features into the input space of a language model, CogVLM bridges the gap between the frozen pretrained language model and the image encoder with a trainable visual expert module in the attention and FFN layers. As a result, CogVLM enables deep fusion of vision and language features without sacrificing any performance on NLP tasks. CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flickr30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA, and TDIUC, and ranks second on VQAv2, OKVQA, TextVQA, COCO captioning, etc., surpassing or matching PaLI-X 55B. Code and checkpoints are available at https://github.com/THUDM/CogVLM.
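
To make the visual expert idea concrete, the following is a minimal sketch (not the official implementation) of how an attention layer could route tokens: image tokens pass through trainable QKV and output projections while text tokens keep using the frozen language-model weights. Module and argument names (`VisualExpertAttention`, `hidden_size`, `num_heads`, `vision_mask`) are illustrative assumptions, not names from the released codebase.

```python
import torch
import torch.nn as nn


class VisualExpertAttention(nn.Module):
    """Attention layer with a trainable visual expert alongside frozen LM weights."""

    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        # Projections inherited from the pretrained language model, kept frozen.
        self.text_qkv = nn.Linear(hidden_size, 3 * hidden_size, bias=False)
        self.text_out = nn.Linear(hidden_size, hidden_size, bias=False)
        for p in (*self.text_qkv.parameters(), *self.text_out.parameters()):
            p.requires_grad = False
        # Trainable visual-expert copies, applied only to image-token positions.
        self.vision_qkv = nn.Linear(hidden_size, 3 * hidden_size, bias=False)
        self.vision_out = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, hidden: torch.Tensor, vision_mask: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, hidden_size)
        # vision_mask: (batch, seq) bool, True where the position holds an image token.
        B, S, H = hidden.shape
        qkv = torch.where(
            vision_mask.unsqueeze(-1),
            self.vision_qkv(hidden),   # image tokens use the visual expert
            self.text_qkv(hidden),     # text tokens use the frozen LM weights
        )
        q, k, v = qkv.chunk(3, dim=-1)
        q = q.view(B, S, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, S, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, S, self.num_heads, self.head_dim).transpose(1, 2)
        # Standard multi-head attention over the mixed image/text sequence,
        # so vision and language features fuse deeply in every layer.
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(B, S, H)
        return torch.where(
            vision_mask.unsqueeze(-1),
            self.vision_out(ctx),
            self.text_out(ctx),
        )
```

Because text tokens only ever see the frozen projections, pure-language behavior is unchanged, which is the sense in which NLP performance is not sacrificed; the same routing pattern would apply to the FFN sublayer.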