Visual Question Answering
Visual Question Answering (VQA) is a task at the intersection of computer vision and natural language processing: given an image and a natural-language question about it, a model must produce an accurate answer. The core challenge is integrating visual and linguistic information into a joint understanding of the scene. VQA has significant practical value in applications such as intelligent assistance systems, image search, and content moderation, enabling a more natural human-machine interaction experience.
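As a deliberately simplified illustration of this fusion of modalities, the sketch below combines an image feature vector with a question feature vector and scores candidate answers. Every vector, weight, and name here is a hand-made assumption for illustration only; real VQA systems learn the encoders and answer weights end-to-end using CNN/ViT image backbones and language models.

```python
# Toy late-fusion VQA scorer. All feature vectors and answer weights are
# hand-made illustrations, not outputs of a real model.

def fuse(img_feat, q_feat):
    # Elementwise-product fusion of image and question features,
    # one common way to combine the two modalities.
    return [i * q for i, q in zip(img_feat, q_feat)]

def answer_scores(fused, answer_weights):
    # Score each candidate answer by a dot product with its weight vector.
    return {ans: sum(f * w for f, w in zip(fused, w_vec))
            for ans, w_vec in answer_weights.items()}

def predict(img_feat, q_feat, answer_weights):
    # Return the highest-scoring candidate answer.
    scores = answer_scores(fuse(img_feat, q_feat), answer_weights)
    return max(scores, key=scores.get)

# Hypothetical features: the image "contains a cat", the question asks about it.
img_feat = [1.0, 0.0]
q_feat = [1.0, 1.0]
answer_weights = {"cat": [1.0, 0.0], "dog": [0.0, 1.0]}
print(predict(img_feat, q_feat, answer_weights))  # -> cat
```

In practice the fusion step is where VQA architectures differ most: elementwise products, concatenation, bilinear pooling, and cross-attention are all common choices.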
Benchmarks and the top-performing models reported for them (where listed):

- AMBER: RLAIF-V 12B
- BenchLMM: GPT-4V
- CLEVR: NeSyCoCo (Neuro-Symbolic)
- MS COCO (COCO Visual Question Answering (VQA) real images 2.0, open-ended)
- EarthVQA: SOBA
- GQA
- GRIT: OFA
- MapEval-Visual
- MM-Vet: Gemini 1.5 Pro (gemini-1.5-pro-002)
- MM-Vet v2
- MM-Vet (w/o External Tools): Emu-14B
- MMBench: LLaVA-InternLM2-ViT + MoSLoRA
- MMHal-Bench
- MSRVTT-QA: Aurora (r=64)
- MSVD-QA: Aurora (r=64)
- PlotQA-D1
- PlotQA-D2
- TextVQA test-standard: PromptCap
- V*bench: IVM-Enhanced GPT4-V
- ViP-Bench: GPT-4V-turbo-detail:high (Visual Prompt)
- VisualMRC: LayoutT5 (Large)
- VizWiz: Emu-I *
- VQA v2: RLHF-V
- VQA v2 test-dev: BLIP-2 ViT-G OPT 6.7B (fine-tuned)
- VQA v2 test-std: LXMERT (low-magnitude pruning)
- VQA v2 val
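The VQA v2 splits listed above (test-dev, test-std, val) are scored with a consensus-based accuracy metric: each question comes with ten human answers, a prediction earns min(matches/3, 1) credit, and the official evaluation averages this over every leave-one-out subset of nine annotators. A minimal sketch (the function name is mine, and real evaluation also normalizes answers, e.g. lowercasing and punctuation stripping, which is omitted here):

```python
# Consensus-based VQA accuracy (VQA v2 style): a prediction gets full credit
# when at least 3 of the human annotators gave the same answer.

def vqa_accuracy(pred, human_answers):
    # The official metric averages min(matches/3, 1) over every subset that
    # leaves one of the (typically 10) human answers out.
    scores = []
    for i in range(len(human_answers)):
        others = human_answers[:i] + human_answers[i + 1:]
        matches = sum(1 for a in others if a == pred)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

humans = ["red"] * 3 + ["blue"] * 7
print(vqa_accuracy("red", humans))    # 0.9: exactly three annotators agree
print(vqa_accuracy("blue", humans))   # 1.0: clear consensus
print(vqa_accuracy("green", humans))  # 0.0: no agreement
```

The min(matches/3, 1) rule means an answer given by three or more annotators counts as fully correct, which makes the metric robust to the inherent ambiguity of open-ended answers.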