
Generative Question Answering on CICERO

Metrics

ROUGE

Results

Performance results of various models on this benchmark

Comparison Table
Model Name                             ROUGE
cicero-a-dataset-for-contextualized    0.2980
cicero-a-dataset-for-contextualized    0.2946
cicero-a-dataset-for-contextualized    0.2878
cicero-a-dataset-for-contextualized    0.2837
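
For reference, a ROUGE score like the one reported above can be computed with a standard library. The sketch below uses Google's rouge-score package and ROUGE-L as an illustrative variant; the leaderboard does not specify which ROUGE variant or implementation was used, and the example strings are placeholders rather than CICERO data.

```python
# Minimal sketch: scoring a generated answer against a reference with ROUGE-L.
# Assumes the rouge-score package (pip install rouge-score); the strings below
# are illustrative placeholders, not examples from the CICERO dataset.
from rouge_score import rouge_scorer

# ROUGE-L measures overlap via the longest common subsequence.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

reference = "The speaker cancelled the trip because of the storm."
prediction = "The trip was cancelled due to the storm."

scores = scorer.score(reference, prediction)
print(scores["rougeL"].fmeasure)  # F1 of the LCS-based overlap
```

In practice, scores are averaged over all reference-prediction pairs in the test set to produce a single number like those in the table.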