Extractive Document Summarization on CNN
Metrics
ROUGE-1
ROUGE-2
ROUGE-L
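The ROUGE metrics above can be sketched in a few lines. Below is a minimal, recall-only illustration of ROUGE-N (n-gram overlap) and ROUGE-L (longest common subsequence); note that the official ROUGE toolkit used for leaderboard numbers additionally applies stemming and reports precision/recall/F1, so these simplified functions are an assumption for illustration only:

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int) -> float:
    """ROUGE-N recall: fraction of reference n-grams also found in the candidate."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    total = sum(ref.values())
    if total == 0:
        return 0.0
    # clipped overlap: each reference n-gram is matched at most as often as it occurs
    overlap = sum((cand & ref).values())
    return overlap / total

def rouge_l(candidate: str, reference: str) -> float:
    """ROUGE-L recall: longest common subsequence length over reference length."""
    a, b = candidate.split(), reference.split()
    if not b:
        return 0.0
    # classic dynamic-programming table for LCS length
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)] / len(b)

# toy example (hypothetical sentences, not from the benchmark)
reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print(rouge_n(candidate, reference, 1))  # unigram overlap recall
print(rouge_n(candidate, reference, 2))  # bigram overlap recall
print(rouge_l(candidate, reference))     # LCS-based recall
```

For extractive summarization, the candidate is the concatenation of the selected sentences and the reference is the human-written abstract; the table values below are on the same 0-100 ROUGE scale.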
Results
Performance of various models on this benchmark.
Comparison Table
| Model Name | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|
| ranking-sentences-for-extractive | 40.0 | 18.2 | 36.6 |
| hibert-document-level-pre-training-of | 42.37 | 19.95 | 38.83 |
| get-to-the-point-summarization-with-pointer | 40.34 | 17.70 | 36.57 |
| neural-document-summarization-by-jointly | 41.59 | 19.01 | 37.98 |
| align-and-attend-multimodal-summarization | 44.11 | 20.31 | 35.92 |
| neural-latent-extractive-document | 41.05 | 18.77 | 37.54 |
| iterative-document-representation-learning | 30.80 | 12.6 | - |
| searching-for-effective-neural-extractive | 42.69 | 19.60 | 38.85 |
| summary-level-training-of-sentence-rewriting | 42.76 | 19.87 | 39.11 |
| banditsum-extractive-summarization-as-a | 41.5 | 18.7 | 37.6 |
| considering-nested-tree-structure-in-sentence | 43.86 | 20.64 | 40.20 |
| extractive-summarization-as-text-matching | 44.41 | 20.86 | 40.55 |
| neural-extractive-summarization-with | 44.68 | 21.30 | 40.75 |
| reading-like-her-human-reading-inspired | 42.3 | 18.9 | 37.9 |