Generative Question Answering on CICERO
Metrics
ROUGE
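The leaderboard below reports a single ROUGE score per model; the exact ROUGE variant used by the CICERO evaluation is not specified on this page. As a minimal sketch, the following Python snippet shows how such a score could be computed with the `rouge_score` package, using ROUGE-L F1 as an illustrative choice and placeholder strings instead of real CICERO predictions and references.

```python
# Minimal sketch of computing a corpus-level ROUGE score.
# Assumptions: the `rouge_score` package is installed (pip install rouge-score),
# and ROUGE-L F-measure is used as the reported metric; the CICERO leaderboard
# may use a different variant. The strings below are placeholders, not CICERO data.
from rouge_score import rouge_scorer


def mean_rouge_l(predictions, references):
    """Average ROUGE-L F1 over paired model predictions and gold references."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    scores = [
        scorer.score(ref, pred)["rougeL"].fmeasure
        for pred, ref in zip(predictions, references)
    ]
    return sum(scores) / len(scores)


# Hypothetical usage with placeholder generated answers and gold answers.
preds = ["The speaker wants to apologize for being late."]
refs = ["The speaker intends to apologize for arriving late."]
print(f"ROUGE-L F1: {mean_rouge_l(preds, refs):.4f}")
```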
Results
Performance of various models on this benchmark
Model name | ROUGE | Paper Title | Repository |
---|---|---|---|
T5-large pre-trained on GLUCOSE | 0.2980 | CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues | |
T5-large | 0.2946 | CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues | |
T5-large pre-trained on COMET | 0.2878 | CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues | |
BART | 0.2837 | CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues | |