Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone

Vision-language (VL) pre-training has recently received considerable attention. However, most existing end-to-end pre-training approaches either only aim to tackle VL tasks such as image-text retrieval, visual question answering (VQA), and image captioning that test high-level understanding of images, or only target region-level understanding for tasks such as phrase grounding and object detection. We present FIBER (Fusion-In-the-Backbone-based transformER), a new VL model architecture that can seamlessly handle both these types of tasks. Instead of having dedicated transformer layers for fusion after the uni-modal backbones, FIBER pushes multimodal fusion deep into the model by inserting cross-attention into the image and text backbones, bringing gains in terms of memory and performance. In addition, unlike previous work that is either only pre-trained on image-text data or on fine-grained data with box-level annotations, we present a two-stage pre-training strategy that uses both these kinds of data efficiently: (i) coarse-grained pre-training based on image-text data, followed by (ii) fine-grained pre-training based on image-text-box data. We conduct comprehensive experiments on a wide range of VL tasks, ranging from VQA, image captioning, and retrieval, to phrase grounding, referring expression comprehension, and object detection. Using deep multimodal fusion coupled with the two-stage pre-training, FIBER provides consistent performance improvements over strong baselines across all tasks, often outperforming methods using magnitudes more data. Code is available at https://github.com/microsoft/FIBER.
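To make the "fusion in the backbone" idea concrete, below is a minimal, illustrative sketch of a transformer block augmented with a cross-attention sub-layer that attends to tokens from the other modality. This is not the authors' implementation (see the repository above for that); the module and parameter names (`FusionBlock`, `gate`) and the specific layer ordering are assumptions made for illustration.

```python
# Hypothetical sketch of a backbone block with inserted cross-attention fusion.
import torch
import torch.nn as nn


class FusionBlock(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        # Standard uni-modal self-attention sub-layer.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        # Cross-attention sub-layer inserted into the backbone:
        # queries come from this modality, keys/values from the other one.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_cross = nn.LayerNorm(dim)
        # Learnable gate (assumed here) so the block can start close to the
        # behavior of the original uni-modal backbone.
        self.gate = nn.Parameter(torch.zeros(1))
        # Feed-forward sub-layer.
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # x:     (batch, seq_len, dim) tokens of this modality (image or text)
        # other: (batch, seq_len_other, dim) tokens of the other modality
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm_cross(x)
        x = x + self.gate * self.cross_attn(h, other, other, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x


if __name__ == "__main__":
    image_tokens = torch.randn(2, 196, 768)  # e.g., patch tokens from a vision backbone
    text_tokens = torch.randn(2, 32, 768)    # e.g., token features from a text backbone
    block = FusionBlock()
    fused = block(image_tokens, text_tokens)
    print(fused.shape)  # torch.Size([2, 196, 768])
```

In this sketch, fusion happens inside the backbone block itself rather than in a separate fusion module stacked on top of the uni-modal encoders, which is the design choice the abstract attributes the memory and performance gains to.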