Video Question Answering on How2QA
Metrics
Accuracy
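How2QA is a multiple-choice task, so accuracy here is simply the fraction of questions for which the model selects the correct candidate answer. A minimal sketch of that computation (the function name and inputs are hypothetical, not from the benchmark's official tooling):

```python
# Minimal sketch of multiple-choice accuracy, assuming predictions and
# ground truth are given as lists of chosen answer indices (hypothetical I/O).

def accuracy(predictions: list[int], ground_truth: list[int]) -> float:
    """Return the fraction of exact matches, as a percentage."""
    assert len(predictions) == len(ground_truth)
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return 100.0 * correct / len(ground_truth)

# Example: 3 of 4 questions answered correctly -> 75.0
print(accuracy([0, 2, 1, 3], [0, 2, 1, 0]))
```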
Results
Performance results of various models on this benchmark
Model | Accuracy | Paper Title | Repository |
---|---|---|---|
Text + Text (no Multimodal Pretext Training) | 93.2 | Towards Fast Adaptation of Pretrained Contrastive Models for Multi-channel Video-Language Retrieval | - |
Just Ask | 84.4 | Just Ask: Learning to Answer Questions from Millions of Narrated Videos | - |
FrozenBiLM | 86.7 | Zero-Shot Video Question Answering via Frozen Bidirectional Language Models | - |
ATP | 65.1 | Revisiting the "Video" in Video-Language Understanding | - |
SeViLA | 83.7 | - | - |
Just Ask (0-shot) | 51.1 | Just Ask: Learning to Answer Questions from Millions of Narrated Videos | - |
Hero w/ pre-training | 77.75 | HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training | - |
FrozenBiLM (0-shot) | 58.4 | Zero-Shot Video Question Answering via Frozen Bidirectional Language Models | - |