Video Question Answering on TVQA
Evaluation Metric
Accuracy
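The accuracy reported below is plain multiple-choice accuracy: the fraction of questions for which the model picks the correct answer out of the five candidates per TVQA question. A minimal sketch of the metric, assuming predictions and labels are given as answer indices (the function name and variables are illustrative, not from any official evaluation script):

```python
def multiple_choice_accuracy(predictions: list[int], ground_truth: list[int]) -> float:
    """Fraction of questions where the predicted answer index matches the label."""
    assert len(predictions) == len(ground_truth) and predictions
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Example: 4 of 5 questions answered correctly -> 80.0 (%)
print(100 * multiple_choice_accuracy([0, 3, 2, 1, 4], [0, 3, 2, 1, 0]))
```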
Evaluation Results
Performance of each model on this benchmark
| Model | Accuracy | Paper Title |
|---|---|---|
| LLaMA-VQA | 82.2 | Large Language Models are Temporal and Causal Reasoners for Video Question Answering |
| FrozenBiLM | 82.0 | Zero-Shot Video Question Answering via Frozen Bidirectional Language Models |
| VindLU | 79.0 | VindLU: A Recipe for Effective Video-and-Language Pretraining |
| iPerceive (Chadha et al., 2020) | 76.96 | iPerceive: Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering |
| HERO w/ pre-training | 74.24 | HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training |
| STAGE (Lei et al., 2019) | 70.50 | TVQA+: Spatio-Temporal Grounding for Video Question Answering |