
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Li, Junnan; Li, Dongxu; Xiong, Caiming; Hoi, Steven
Abstract

Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released at https://github.com/salesforce/BLIP.
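The abstract's caption bootstrapping (captioner plus filter over noisy web pairs) can be illustrated with a minimal sketch. This is not the authors' implementation; the `generate_caption` and `is_matched` callables below are hypothetical stand-ins for the finetuned BLIP captioner and image-text matching filter (see the released code at https://github.com/salesforce/BLIP for the actual models).

```python
# Minimal sketch of caption bootstrapping (CapFilt-style), assuming hypothetical
# captioner/filter interfaces. The real pipeline uses finetuned BLIP modules.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Pair:
    image: str  # placeholder for an image reference (e.g. a file path)
    text: str   # paired caption, either web-collected or synthetic


def bootstrap_captions(
    web_pairs: List[Pair],
    generate_caption: Callable[[str], str],  # captioner: image -> synthetic caption
    is_matched: Callable[[str, str], bool],  # filter: (image, text) -> keep?
) -> List[Pair]:
    """Return a cleaned dataset: for each image, keep the original web caption
    and/or the synthetic caption only if the filter judges it image-matched."""
    clean: List[Pair] = []
    for pair in web_pairs:
        synthetic = generate_caption(pair.image)
        for text in (pair.text, synthetic):
            if is_matched(pair.image, text):
                clean.append(Pair(pair.image, text))
    return clean


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; not real models.
    pairs = [Pair("img_001.jpg", "random blog title unrelated to the photo")]
    cleaned = bootstrap_captions(
        pairs,
        generate_caption=lambda img: f"a photo generated as a caption for {img}",
        is_matched=lambda img, txt: "photo" in txt,  # crude proxy for ITM filtering
    )
    print(cleaned)
```

The bootstrapped pairs (cleaned web captions plus filtered synthetic captions) then serve as the pre-training corpus in place of the raw noisy web data.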
