CogVLM: Visual Expert for Pretrained Language Models

Abstract

We introduce CogVLM, a powerful open-source visual language foundation model. Different from the popular shallow alignment method, which maps image features into the input space of the language model, CogVLM bridges the gap between the frozen pretrained language model and the image encoder with a trainable visual expert module in the attention and FFN layers. As a result, CogVLM enables deep fusion of vision and language features without sacrificing any performance on NLP tasks. CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flickr30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA, and TDIUC, and ranks 2nd on VQAv2, OKVQA, TextVQA, COCO captioning, etc., surpassing or matching PaLI-X 55B. Code and checkpoints are available at https://github.com/THUDM/CogVLM.
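To make the visual expert idea concrete, the sketch below shows a minimal, illustrative PyTorch attention layer in which image-token positions are routed through a trainable copy of the QKV projection (the visual expert) while text-token positions keep the frozen projection of the pretrained language model. The class name `VisualExpertAttention`, the `image_mask` argument, and all shapes are assumptions for illustration only, not the actual implementation from the THUDM/CogVLM repository; the same token-conditional routing would apply analogously to the FFN block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualExpertAttention(nn.Module):
    """Illustrative attention layer with a trainable 'visual expert'.

    Text-token positions use the frozen QKV weights of the pretrained
    language model; image-token positions use a parallel, trainable copy
    of those weights (the visual expert).
    """

    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        # Frozen projection inherited from the pretrained language model.
        self.qkv_text = nn.Linear(hidden_size, 3 * hidden_size, bias=False)
        self.qkv_text.weight.requires_grad = False
        # Trainable visual-expert projection with the same shape.
        self.qkv_image = nn.Linear(hidden_size, 3 * hidden_size, bias=False)
        self.out_proj = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, hidden_states: torch.Tensor,
                image_mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        # image_mask:    (batch, seq_len) bool, True at image-token positions
        qkv = torch.where(
            image_mask.unsqueeze(-1),        # route each position by token type
            self.qkv_image(hidden_states),   # visual expert (trainable)
            self.qkv_text(hidden_states),    # frozen language-model weights
        )
        q, k, v = qkv.chunk(3, dim=-1)
        b, s, _ = hidden_states.shape
        q, k, v = (t.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        # Standard causal self-attention over the mixed image/text sequence.
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        attn = attn.transpose(1, 2).reshape(b, s, -1)
        return self.out_proj(attn)


if __name__ == "__main__":
    layer = VisualExpertAttention(hidden_size=64, num_heads=4)
    x = torch.randn(2, 10, 64)
    mask = torch.zeros(2, 10, dtype=torch.bool)
    mask[:, :4] = True  # pretend the first 4 positions are image tokens
    print(layer(x, mask).shape)  # torch.Size([2, 10, 64])
```

Because the text pathway reuses the frozen weights unchanged, a pure-text sequence passes through exactly the original language model, which is how this design avoids degrading NLP performance while the trainable expert parameters adapt the image pathway.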
