HyperAI

Visual Question Answering on VQA v2 test-dev 1

Evaluation Metric

Accuracy

Evaluation Results

Performance results of each model on this benchmark
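The Accuracy column follows the standard VQA accuracy metric, which scores a predicted answer against the ten human-annotated answers for each question. A minimal sketch of the core rule (simplified: the official evaluator additionally averages over all ten subsets of nine annotators and normalizes answer strings before matching):

```python
# Simplified VQA accuracy: a prediction gets full credit if at least
# 3 of the 10 human annotators gave that same answer, partial credit otherwise.
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Return min(#annotators who agree with `predicted` / 3, 1)."""
    matches = sum(1 for ans in human_answers if ans == predicted)
    return min(matches / 3.0, 1.0)

# Example: 4 of 10 annotators said "yes" -> full credit.
score = vqa_accuracy("yes", ["yes"] * 4 + ["no"] * 6)
```

Leaderboard numbers are this per-question score averaged over the test-dev split, reported as a percentage.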

| Model Name | Accuracy | Paper Title |
| --- | --- | --- |
| Florence | 80.16 | Florence: A New Foundation Model for Computer Vision |
| LXMERT (low-magnitude pruning) | 70.72 | LXMERT Model Compression for Visual Question Answering |
| VK-OOD | 76.8 | Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis |
| BLIP-2 ViT-G OPT 6.7B (fine-tuned) | 82.30 | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models |
| BLIP-2 ViT-G OPT 2.7B (fine-tuned) | 81.74 | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models |
| LocVLM-L | 56.2 | Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs |
| BLIP-2 ViT-G FlanT5 XL (fine-tuned) | 81.66 | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models |
| OFA | 82.0 | OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework |
| Aurora (ours, r=64) | 77.69 | -- |
| CoCa | 82.3 | CoCa: Contrastive Captioners are Image-Text Foundation Models |
| mPLUG-2 | 81.11 | mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video |