Generative Question Answering on CICERO
Metrics
ROUGE
Results
Performance of various models on this benchmark
| Model | ROUGE | Paper Title |
|---|---|---|
| T5-large pre-trained on GLUCOSE | 0.2980 | CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues |
| T5-large | 0.2946 | CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues |
| T5-large pre-trained on COMET | 0.2878 | CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues |
| BART | 0.2837 | CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues |
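As a quick illustration of the ROUGE metric used in this leaderboard, below is a minimal sketch of sentence-level ROUGE-L F1, assuming simple whitespace tokenization. The CICERO paper's exact ROUGE variant and tokenization may differ, so this is illustrative, not a reproduction of the scores above.

```python
def lcs_len(a, b):
    # Longest common subsequence length via dynamic programming.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f1(reference: str, candidate: str) -> float:
    # ROUGE-L: precision and recall based on the LCS between
    # reference and candidate token sequences, combined into F1.
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_l_f1("the cat sat on the mat", "the cat is on the mat")` scores 5/6, since the LCS "the cat on the mat" covers five of the six tokens on each side.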