Visual Reasoning On NLVR2 Test
Metrics
Accuracy

Results
Performance results of the various models on this benchmark (a short sketch of how NLVR2 accuracy is computed follows the table).
| Model Name | Accuracy (%) | Paper Title |
| --- | --- | --- |
| BEiT-3 | 92.58 | Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks |
| X2-VLM (large) | 89.4 | X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks |
| XFM (base) | 88.4 | Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks |
| CoCa | 87.0 | CoCa: Contrastive Captioners are Image-Text Foundation Models |
| X2-VLM (base) | 87.0 | X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks |
| VLMo | 86.86 | VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts |
| SimVLM | 85.15 | SimVLM: Simple Visual Language Model Pretraining with Weak Supervision |
| X-VLM (base) | 84.76 | Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts |
| BLIP-129M | 83.09 | BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation |
| ALBEF (14M) | 82.55 | Align before Fuse: Vision and Language Representation Learning with Momentum Distillation |
| UNITER (Large) | 79.5 | UNITER: UNiversal Image-TExt Representation Learning |
| SOHO | 77.32 | Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning |
| LXMERT | 76.2 | LXMERT: Learning Cross-Modality Encoder Representations from Transformers |
| ViLT-B/32 | 76.13 | ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision |
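The single metric reported on this leaderboard is accuracy: each NLVR2 test example pairs a natural-language statement with two images, the model predicts whether the statement is true of the pair, and the score is the percentage of examples answered correctly. Below is a minimal sketch of that computation, not the official NLVR2 evaluation script; the JSONL file layout and the "identifier"/"label" field names are assumptions for illustration.

```python
# Minimal sketch: accuracy (%) over NLVR2-style examples, assuming one JSON
# object per line with an "identifier" and a True/False "label" field.
import json

def load_labels(path: str) -> dict[str, str]:
    """Map each example identifier to its True/False label."""
    labels = {}
    with open(path) as f:
        for line in f:
            example = json.loads(line)
            labels[example["identifier"]] = example["label"]
    return labels

def nlvr2_accuracy(pred_path: str, gold_path: str) -> float:
    """Accuracy (%) = correct predictions / number of test examples * 100."""
    preds = load_labels(pred_path)
    golds = load_labels(gold_path)
    correct = sum(1 for ex_id, label in golds.items() if preds.get(ex_id) == label)
    return 100.0 * correct / len(golds)

# Hypothetical usage with assumed file names:
# print(f"NLVR2 test accuracy: {nlvr2_accuracy('predictions.jsonl', 'test1.jsonl'):.2f}")
```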