Visual Question Answering on VQA v2 test-dev
Evaluation Metric: Accuracy
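For reference, accuracy on VQA v2 is conventionally the consensus-based VQA accuracy: each question has ten human-annotated answers, and a predicted answer scores min(#matching annotators / 3, 1). Below is a minimal sketch of that formula; the function name and example answers are illustrative, and the official evaluator additionally normalizes answer strings and averages over leave-one-annotator-out subsets, which this sketch omits.

```python
from collections import Counter

def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Consensus VQA accuracy for a single question.

    An answer gets full credit if at least 3 of the 10 human
    annotators gave the same answer:
        acc = min(#matching_annotators / 3, 1.0)
    (Answer normalization and averaging over annotator subsets,
    done by the official evaluator, are omitted for brevity.)
    """
    matches = Counter(human_answers)[predicted]
    return min(matches / 3.0, 1.0)

# 4 of 10 annotators said "yes" -> full credit
print(vqa_accuracy("yes", ["yes"] * 4 + ["no"] * 6))  # 1.0
# 1 of 10 annotators said "red" -> partial credit
print(vqa_accuracy("red", ["red"] + ["blue"] * 9))    # 0.333...
```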
Evaluation Results
Performance of each model on this benchmark:
| Model Name | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| Florence | 80.16 | Florence: A New Foundation Model for Computer Vision | - |
| LXMERT (low-magnitude pruning) | 70.72 | LXMERT Model Compression for Visual Question Answering | - |
| VK-OOD | 76.8 | Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis | - |
| BLIP-2 ViT-G OPT 6.7B (fine-tuned) | 82.30 | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | - |
| BLIP-2 ViT-G OPT 2.7B (fine-tuned) | 81.74 | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | - |
| LocVLM-L | 56.2 | Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs | - |
| BLIP-2 ViT-G FlanT5 XL (fine-tuned) | 81.66 | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | - |
| OFA | 82.0 | OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | - |
| Aurora (ours, r=64) | 77.69 | - | - |
| CoCa | 82.3 | CoCa: Contrastive Captioners are Image-Text Foundation Models | - |
| mPLUG-2 | 81.11 | mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video | - |