HyperAI超神经
Visual Question Answering on VQA v2 test-dev

Evaluation Metric

Accuracy
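On VQA v2, "Accuracy" is the benchmark's consensus measure: each question has ten human-provided answers, and a prediction is scored as min(matches/3, 1), so an answer given by at least three annotators counts as fully correct. A minimal sketch of this simplified form (the official scorer additionally normalizes answer strings and averages over annotator subsets; the function name below is illustrative, not from this page):

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Simplified VQA v2 consensus accuracy: min(#matching annotators / 3, 1)."""
    matches = sum(1 for answer in human_answers if answer == predicted)
    return min(matches / 3.0, 1.0)

# Example: with 10 annotators, 4 of whom answered "yes",
# predicting "yes" scores full credit; 2 matches scores 2/3.
full = vqa_accuracy("yes", ["yes"] * 4 + ["no"] * 6)     # 1.0
partial = vqa_accuracy("yes", ["yes"] * 2 + ["no"] * 8)  # 2/3
```

The leaderboard numbers below are this accuracy, averaged over all test-dev questions and reported as a percentage.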

Evaluation Results

Performance of each model on this benchmark

| Model | Accuracy | Paper Title |
|---|---|---|
| CoCa | 82.3 | CoCa: Contrastive Captioners are Image-Text Foundation Models |
| BLIP-2 ViT-G OPT 6.7B (fine-tuned) | 82.30 | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models |
| OFA | 82.0 | OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework |
| BLIP-2 ViT-G OPT 2.7B (fine-tuned) | 81.74 | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models |
| BLIP-2 ViT-G FlanT5 XL (fine-tuned) | 81.66 | BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models |
| mPLUG-2 | 81.11 | mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video |
| Florence | 80.16 | Florence: A New Foundation Model for Computer Vision |
| Aurora (ours, r=64) | 77.69 | -- |
| VK-OOD | 76.8 | Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis |
| LXMERT (low-magnitude pruning) | 70.72 | LXMERT Model Compression for Visual Question Answering |
| LocVLM-L | 56.2 | Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs |