BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models

Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi
Abstract

The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
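The bridging role of the Querying Transformer (Q-Former) can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration of the idea, not the authors' implementation: a small set of learned query tokens cross-attends to features from a frozen image encoder, and the resulting query outputs are linearly projected into the frozen LLM's embedding space as soft visual prompts. The 32 queries and 768-dim hidden size match the paper's setup, but the single-block structure, the module/class names (QFormerSketch, to_llm), and llm_dim are simplifying assumptions; the real Q-Former is a multi-layer BERT-style transformer pre-trained in two stages.

import torch
import torch.nn as nn

class QFormerSketch(nn.Module):
    def __init__(self, num_queries=32, d_model=768, llm_dim=2560, n_heads=12):
        super().__init__()
        # Learned query embeddings: the only trainable "interface" between
        # the frozen image encoder and the frozen LLM in this sketch.
        self.queries = nn.Parameter(torch.randn(num_queries, d_model) * 0.02)
        # Cross-attention from the queries to the frozen image features.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # Projection into the LLM's token-embedding space (llm_dim is an
        # assumed value, e.g. the hidden size of an OPT-2.7B decoder).
        self.to_llm = nn.Linear(d_model, llm_dim)

    def forward(self, image_feats):
        # image_feats: (batch, num_patches, d_model) from a frozen image encoder.
        b = image_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        attn_out, _ = self.cross_attn(q, image_feats, image_feats)
        q = self.norm1(q + attn_out)
        q = self.norm2(q + self.ffn(q))
        # (batch, num_queries, llm_dim): these vectors would be prepended to
        # the frozen LLM's text embeddings as soft visual prompts.
        return self.to_llm(q)

if __name__ == "__main__":
    # Fake frozen image features for a 2-image batch (e.g. 257 ViT tokens).
    feats = torch.randn(2, 257, 768)
    soft_prompts = QFormerSketch()(feats)
    print(soft_prompts.shape)  # torch.Size([2, 32, 2560])

Because both the image encoder and the LLM stay frozen, only the query tokens, the Q-Former weights, and the output projection are trained, which is what keeps the trainable parameter count far below that of end-to-end models such as Flamingo80B.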
