Video Question Answering on iVQA
Metrics
Accuracy
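The accuracy figures below are percentages over the benchmark's test questions. As a rough illustration only, here is a minimal Python sketch of a VQA-style soft accuracy that assumes several annotator answers per question and a min(count / 2, 1) credit rule; this is an assumption for illustration, not the official iVQA evaluation code.

```python
from collections import Counter

def ivqa_style_accuracy(predictions, references, normalize=str.lower):
    """Hedged sketch of a VQA-style soft accuracy.

    predictions: one predicted answer string per question.
    references: a list of annotator answer strings per question
    (iVQA collects several answers per question; the exact scoring
    rule below is an assumption, not the official scorer).
    """
    total = 0.0
    for pred, refs in zip(predictions, references):
        counts = Counter(normalize(r) for r in refs)
        # Partial credit when one annotator agrees, full credit for two or more
        # (assumed rule; consult the benchmark's official script for the exact scheme).
        total += min(counts.get(normalize(pred), 0) / 2.0, 1.0)
    return 100.0 * total / len(predictions)

# Example: one question with five annotator answers
print(ivqa_style_accuracy(["guitar"], [["guitar", "guitar", "ukulele", "guitar", "banjo"]]))  # 100.0
```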
Results
Performance results of various models on this benchmark
| Model name | Accuracy | Paper title | Repository |
|---|---|---|---|
| Just Ask (0-shot) | 12.2 | Just Ask: Learning to Answer Questions from Millions of Narrated Videos | |
| Just Ask (fine-tune) | 35.4 | Just Ask: Learning to Answer Questions from Millions of Narrated Videos | |
| Text + Text (no Multimodal Pretext Training) | 40.2 | Towards Fast Adaptation of Pretrained Contrastive Models for Multi-channel Video-Language Retrieval | |
| Co-Tokenization | 38.2 | Video Question Answering with Iterative Video-Text Co-Tokenization | |
| VideoCoCa | 39.0 | VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners | |
| FrozenBiLM (0-shot) | 26.8 | Zero-Shot Video Question Answering via Frozen Bidirectional Language Models | |
| FrozenBiLM | 39.6 | Zero-Shot Video Question Answering via Frozen Bidirectional Language Models | |