
Multi-modal Self-Instruct Multimodal Benchmark Dataset

Date

9 months ago

Size

3.16 GB

Organization

Chinese Academy of Sciences
Zhejiang University

Publish URL

github.com

License

CC BY-SA 4.0


This dataset was jointly released in 2024 by Zhejiang University, the Institute of Software of the Chinese Academy of Sciences, ShanghaiTech University, and other institutions. The related paper is "Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model".

The dataset contains a total of 11,193 abstract images with associated questions, covering 8 categories: dashboards, road maps, charts, tables, flowcharts, relation graphs, visual puzzles, and 2D floor plans. It also provides an additional 62,476 instruction samples for fine-tuning models.

Multi-modal-Self-instruct.torrent
  • Multi-modal-Self-instruct/
    • README.md
      1.32 KB
    • README.txt
      2.64 KB
    • data/
      • Multi-modal-Self-instruct.zip
        3.16 GB
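
After downloading the torrent, the archive can be inspected with a short Python script. The following is a minimal sketch using only the standard library; it assumes the zip unpacks into image folders plus JSON/JSONL annotation files, but the internal layout is not documented on this page, so consult the bundled README.md before relying on any specific file name.

```python
# Minimal sketch for extracting and inspecting the archive.
# Assumption: annotation files are JSON/JSONL; exact names are unknown here.
import zipfile
from pathlib import Path

archive = Path("Multi-modal-Self-instruct/data/Multi-modal-Self-instruct.zip")
out_dir = Path("Multi-modal-Self-instruct/data/extracted")

# Extract the 3.16 GB archive once.
if not out_dir.exists():
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(out_dir)

# List annotation files and peek at the first record of each.
for ann in sorted(out_dir.rglob("*.json*")):
    with open(ann, encoding="utf-8") as f:
        first_line = f.readline().strip()
    print(ann.relative_to(out_dir), "->", first_line[:120])
```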