# Generative Question Answering On Cicero
## Metrics

Models on this benchmark are evaluated with **ROUGE**, which scores a generated answer by its n-gram and longest-common-subsequence overlap with a reference answer.
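As an illustration, the snippet below sketches how a ROUGE score can be computed for a generated answer against a reference, assuming Google's open-source `rouge-score` package (`pip install rouge-score`). The leaderboard does not specify which implementation or ROUGE variant was used, so both the package choice and the example sentence pair are assumptions for demonstration only.

```python
# Minimal sketch of ROUGE evaluation for generative QA outputs,
# assuming the `rouge-score` package; the prediction/reference pair
# below is hypothetical, not taken from the CICERO dataset.
from rouge_score import rouge_scorer

prediction = "The speaker cancelled the trip because of the storm."
reference = "The trip was cancelled due to the storm."

# ROUGE-1/2 count unigram/bigram overlap; ROUGE-L uses the longest
# common subsequence. Stemming normalizes inflected word forms.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, prediction)

for name, result in scores.items():
    # Each result carries precision, recall, and F-measure.
    print(f"{name}: P={result.precision:.4f} R={result.recall:.4f} F1={result.fmeasure:.4f}")
```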
## Results

ROUGE scores reported for models evaluated on this benchmark. All four entries carry the slug of the dataset paper ("CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues") and appear to be submissions associated with it.
## Comparison Table

| Model Name | ROUGE |
|---|---|
| cicero-a-dataset-for-contextualized | 0.2980 |
| cicero-a-dataset-for-contextualized | 0.2946 |
| cicero-a-dataset-for-contextualized | 0.2878 |
| cicero-a-dataset-for-contextualized | 0.2837 |