Visual Entailment on SNLI-VE (val)
Metrics
Accuracy

Results
Performance results of various models on this benchmark.
| Model Name | Accuracy | Paper Title | Repository |
|---|---|---|---|
| CLIP-ViL | 80.20 | How Much Can CLIP Benefit Vision-and-Language Tasks? | - |
| EVE-ROI* | 70.81 | Visual Entailment: A Novel Task for Fine-Grained Image Understanding | - |
| OFA | 91.0 | OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | - |
| SimVLM | 86.21 | SimVLM: Simple Visual Language Model Pretraining with Weak Supervision | - |
| UNITER | 78.98 | UNITER: UNiversal Image-TExt Representation Learning | - |
| Prompt Tuning | 90.04 | Prompt Tuning for Generative Multimodal Pretrained Models | - |
| VILLA-LARGE | 80.18 | Large-Scale Adversarial Training for Vision-and-Language Representation Learning | - |
| CoCa | 87.0 | CoCa: Contrastive Captioners are Image-Text Foundation Models | - |
| SOHO | 85.00 | Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning | - |
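
The accuracy values above are standard classification accuracy on the SNLI-VE validation split: the fraction of image-hypothesis pairs whose predicted label (entailment, neutral, or contradiction) matches the gold label. A minimal sketch of that computation, assuming predictions and gold labels are available as parallel lists of class names (the label set and sample data below are illustrative assumptions, not taken from the leaderboard):

```python
from typing import Sequence

# Standard SNLI-VE three-way label set (assumption for this sketch).
LABELS = {"entailment", "neutral", "contradiction"}

def accuracy(predictions: Sequence[str], gold: Sequence[str]) -> float:
    """Fraction of image-hypothesis pairs whose predicted label matches the gold label."""
    assert len(predictions) == len(gold), "prediction and gold lists must align"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

if __name__ == "__main__":
    # Hypothetical example data to illustrate the metric.
    preds = ["entailment", "neutral", "contradiction", "neutral"]
    golds = ["entailment", "contradiction", "contradiction", "neutral"]
    print(f"Accuracy: {accuracy(preds, golds) * 100:.2f}")  # prints 75.00
```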