Video Question Answering on WildQA
Metrics
ROUGE-1
ROUGE-2
ROUGE-L
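All three metrics score n-gram or subsequence overlap between a generated answer and a reference answer. As a minimal sketch (whitespace tokenization, F1 variant; official implementations add stemming, sentence splitting, and bootstrap aggregation, which is where the ± intervals in the table below come from):

```python
from collections import Counter

def _ngrams(tokens, n):
    # Multiset of n-grams, so repeated n-grams are counted correctly.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n):
    """ROUGE-N F1 between two whitespace-tokenized strings."""
    cand, ref = _ngrams(candidate.split(), n), _ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def rouge_l(candidate, reference):
    """ROUGE-L F1 via the longest common subsequence of tokens."""
    a, b = candidate.split(), reference.split()
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(a), lcs / len(b)
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_n("the cat sat on the mat", "the cat is on the mat", 1)` gives 5/6, since five of the six unigrams match; ROUGE-L rewards in-order matches rather than contiguous ones.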
Results
Performance of various models on this benchmark
| Model name | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title | Repository |
|---|---|---|---|---|---|
| T5 (text + video) | 33.1 ± 0.3 | 17.3 ± 0.4 | 31.9 ± 0.2 | WildQA: In-the-Wild Video Question Answering | |
| T5 (text) | 33.8 ± 0.2 | 17.7 ± 0.1 | 32.4 ± 0.3 | WildQA: In-the-Wild Video Question Answering | |
| Multi (text + video, IO) | 34.0 ± 0.5 | 18.8 ± 0.7 | 32.8 ± 0.6 | WildQA: In-the-Wild Video Question Answering | |
| T5 (text, zero-shot) | 0.8 ± 0.0 | 0.0 ± 0.0 | 0.8 ± 0.0 | WildQA: In-the-Wild Video Question Answering | |
| Multi (text + video, SE) | 33.8 ± 0.8 | 18.5 ± 0.7 | 32.5 ± 0.8 | WildQA: In-the-Wild Video Question Answering | |