Text Summarization on Reddit TIFU
Metrics
ROUGE-1 (unigram overlap between candidate and reference summaries)
ROUGE-2 (bigram overlap)
ROUGE-L (longest common subsequence)
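As a quick illustration of how these metrics are computed in practice, the sketch below scores a candidate summary against a reference with the open-source `rouge-score` package. The example strings are placeholders, not samples from this benchmark, and the package choice is an assumption; any ROUGE implementation with the same conventions would do.

```python
# Minimal sketch: ROUGE-1, ROUGE-2, and ROUGE-L with the `rouge-score`
# package (pip install rouge-score). The texts are illustrative placeholders.
from rouge_score import rouge_scorer

reference = "i spilled coffee on my laptop and lost a week of work"
candidate = "spilled coffee on my laptop, losing a week of work"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

# Each entry is a Score tuple with precision, recall, and F-measure;
# leaderboard numbers like those below are typically F-measures.
for name, score in scores.items():
    print(f"{name}: F1 = {score.fmeasure:.4f}")
```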
Results
Performance of various models on this benchmark (ROUGE scores; higher is better).
Comparison Table
| Model Name | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|
| extractive-summarization-as-text-matching | 25.09 | 6.17 | 20.13 |
| muppet-massive-multi-task-representations | 30.3 | 11.25 | 24.92 |
| summareranker-a-multi-task-mixture-of-experts-1 | 29.83 | 9.5 | 23.47 |
| calibrating-sequence-likelihood-improves | 32.03 | 11.13 | 25.51 |
| better-fine-tuning-by-reducing | 30.31 | 10.98 | 24.74 |
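For reproducing numbers like those above, models are evaluated on the Reddit TIFU corpus of r/tifu posts paired with their author-written TLDR summaries. A minimal sketch of loading the data, assuming the Hugging Face `datasets` library and its `reddit_tifu` loader (exact loading options, such as a `trust_remote_code` flag, depend on the library version):

```python
# Minimal sketch: loading Reddit TIFU with the Hugging Face `datasets` library.
# The "long" configuration pairs full posts ("documents") with TLDR summaries
# ("tldr"). The corpus ships as a single split, so the train/test partition
# used for a given paper's scores is left to the evaluator.
from datasets import load_dataset

dataset = load_dataset("reddit_tifu", "long", split="train")

example = dataset[0]
print(example["documents"][:200])  # source post (truncated for display)
print(example["tldr"])             # reference summary
```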