Visual Reasoning on NLVR2 (dev)
Metric: Accuracy (%)

The table below lists the reported accuracy of each model on the NLVR2 development set; a sketch of how this metric is computed follows the table.
Model Name                      Accuracy (%)   Paper Title
BEiT-3                          91.51          Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks
X²-VLM (large)                  88.7           X²-VLM: All-In-One Pre-trained Model For Vision-Language Tasks
XFM (base)                      87.6           Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks
X²-VLM (base)                   86.2           X²-VLM: All-In-One Pre-trained Model For Vision-Language Tasks
CoCa                            86.1           CoCa: Contrastive Captioners are Image-Text Foundation Models
VLMo                            85.64          VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts
VK-OOD                          84.6           Implicit Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis
SimVLM                          84.53          SimVLM: Simple Visual Language Model Pretraining with Weak Supervision
X-VLM (base)                    84.41          Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts
VK-OOD                          83.9           Implicit Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis
ALBEF (14M)                     83.14          Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
SOHO                            76.37          Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning
ViLT-B/32                       75.7           ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision
LXMERT (Pre-train + scratch)    74.9           LXMERT: Learning Cross-Modality Encoder Representations from Transformers
VisualBERT                      66.7           VisualBERT: A Simple and Performant Baseline for Vision and Language
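NLVR2 frames visual reasoning as binary classification: each example pairs a natural-language statement with two images, and the model predicts whether the statement is true of the image pair. Accuracy is simply the fraction of examples whose predicted True/False label matches the gold label. Below is a minimal sketch of that computation, assuming gold labels and predictions are stored as JSON Lines files; the field names (identifier, label, prediction) and file paths are illustrative, not an official schema.

```python
import json

def nlvr2_accuracy(gold_path: str, pred_path: str) -> float:
    """Accuracy = correct binary predictions / total examples.

    Assumes one JSON object per line in both files; the field names
    used here are illustrative, not an official NLVR2 schema.
    """
    # Gold True/False labels keyed by example identifier.
    gold = {}
    with open(gold_path) as f:
        for line in f:
            ex = json.loads(line)
            gold[ex["identifier"]] = ex["label"]

    # Count predictions that match the gold label for the same example.
    correct = 0
    with open(pred_path) as f:
        for line in f:
            p = json.loads(line)
            correct += p["prediction"] == gold[p["identifier"]]

    return correct / len(gold)

# Hypothetical usage:
print(f"accuracy = {nlvr2_accuracy('dev.jsonl', 'preds.jsonl'):.2%}")
```

On this scale, BEiT-3's 91.51 means roughly 91.5% of dev-set statements are classified correctly, against a 50% chance baseline for balanced binary labels.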