Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models

Large Multimodal Models (LMMs) have shown promise in vision-language tasks but struggle with high-resolution input and detailed scene understanding. Addressing these challenges, we introduce Monkey to enhance LMM capabilities. Firstly, Monkey processes input images by dividing them into uniform patches, each matching the size (e.g., 448x448) used in the original training of the well-trained vision encoder. Equipped with an individual adapter for each patch, Monkey can handle higher resolutions up to 1344x896 pixels, enabling the detailed capture of complex visual information. Secondly, it employs a multi-level description generation method, enriching the context for scene-object associations. This two-part strategy ensures more effective learning from generated data: the higher resolution allows for a more detailed capture of visuals, which in turn enhances the effectiveness of comprehensive descriptions. Extensive ablative results validate the effectiveness of our designs. Additionally, experiments on 18 datasets further demonstrate that Monkey surpasses existing LMMs in many tasks such as Image Captioning and various Visual Question Answering formats. Notably, in qualitative tests focused on dense text question answering, Monkey has exhibited encouraging results compared with GPT4V. Code is available at https://github.com/Yuliang-Liu/Monkey.
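
To make the patch-splitting idea concrete, below is a minimal sketch (not the authors' released code) of how a high-resolution input could be tiled into uniform patches matching the vision encoder's native size. The patch size (448x448) and target resolution (1344x896) come from the abstract; the function name, resizing step, and grid layout are illustrative assumptions.

```python
# Minimal sketch: tile a high-resolution image into uniform 448x448 patches.
# Hypothetical helper for illustration only; not the paper's implementation.
from PIL import Image

PATCH = 448  # native input size of the pre-trained vision encoder (per the abstract)

def split_into_patches(image: Image.Image, width: int = 1344, height: int = 896):
    """Resize the image to the target resolution, then cut it into
    non-overlapping PATCH x PATCH tiles (a 3 x 2 grid at 1344x896)."""
    image = image.resize((width, height))
    patches = []
    for top in range(0, height, PATCH):
        for left in range(0, width, PATCH):
            patches.append(image.crop((left, top, left + PATCH, top + PATCH)))
    return patches

# Each patch would then be passed through the shared vision encoder,
# with a separate lightweight adapter per patch (as described above).
# patches = split_into_patches(Image.open("example.jpg"))
```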