HyperAI — SOTA Benchmark: Text Summarization on PubMed
Metrics: ROUGE-1, ROUGE-2, ROUGE-L

Results: performance of various models on this benchmark.
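The ROUGE scores in the table measure n-gram and subsequence overlap between a generated summary and a reference summary. As an illustrative sketch only (published numbers are typically computed with the official ROUGE tooling, which adds stemming and other normalization), ROUGE-1, ROUGE-2, and ROUGE-L F1 can be computed from first principles like this; the example sentences are invented:

```python
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams in a token sequence
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n):
    # F1 over n-gram overlap between candidate and reference summaries
    c, r = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def rouge_l(candidate, reference):
    # F1 based on the longest common subsequence (in-order, not contiguous)
    m, n = len(candidate), len(reference)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if candidate[i] == reference[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / m, lcs / n
    return 2 * precision * recall / (precision + recall)

# Hypothetical candidate/reference pair for illustration
cand = "the patients showed improved outcomes".split()
ref = "patients showed significantly improved outcomes".split()
print(round(rouge_n(cand, ref, 1), 3))  # ROUGE-1 F1 -> 0.8
print(round(rouge_n(cand, ref, 2), 3))  # ROUGE-2 F1 -> 0.5
print(round(rouge_l(cand, ref), 3))     # ROUGE-L F1 -> 0.8
```

Leaderboard values are conventionally reported as F1 × 100, so a printed 0.498 corresponds to a table entry of 49.8.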
| Model Name | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title |
|---|---|---|---|---|
| GoSum (extractive) | 49.83 | 23.56 | 45.10 | GoSum: Extractive Summarization of Long Documents by Reinforcement Learning and Graph-Organized Discourse State |
| LongT5 | 50.23 | 24.76 | 46.67 | LongT5: Efficient Text-To-Text Transformer for Long Sequences |
| MatchSum (BERT-base) | 41.21 | 14.91 | 36.75 | Extractive Summarization as Text Matching |
| PEGASUS | 45.09 | - | - | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization |
| Sent-CLF | 45.01 | - | - | On Extractive and Abstractive Neural Document Summarization with Transformer Language Models |
| Discourse | 38.93 | - | - | A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents |
| DANCER RUM | 43.98 | 17.65 | 40.25 | A Divide-and-Conquer Approach to the Summarization of Long Documents |
| BART-LS | 50.3 | - | - | Adapting Pretrained Text-to-Text Models for Long Text Sequences |
| Lodoss-full-large (extractive) | 49.38 | 23.89 | 44.84 | Toward Unifying Text Segmentation and Long Document Summarization |
| ExtSum-LG+MMR-Select+ | 45.39 | 20.37 | 40.99 | Systematically Exploring Redundancy Reduction in Summarizing Long Documents |
| ExtSum-LG+RdLoss | 45.3 | 20.42 | 40.95 | Systematically Exploring Redundancy Reduction in Summarizing Long Documents |
| HAT-BART | 48.25 | 21.35 | 36.69 | Hierarchical Learning for Generation with Long Source Sequences |
| BigBird-Pegasus | 46.32 | 20.65 | 42.33 | Big Bird: Transformers for Longer Sequences |
| DANCER PEGASUS | 46.34 | 19.97 | 42.42 | A Divide-and-Conquer Approach to the Summarization of Long Documents |
| Fastformer | 38.09 | 15.44 | 34.81 | Fastformer: Additive Attention Can Be All You Need |
| GRETEL | 48.20 | 21.20 | 43.16 | GRETEL: Graph Contrastive Topic Enhanced Language Model for Long Document Extractive Summarization |
| ExtSum-LG | 44.81 | 19.74 | - | Extractive Summarization of Long Documents by Combining Global and Local Context |
| Top Down Transformer (AdaPool, 464M) | 51.05 | 23.26 | 46.47 | Long Document Summarization with Top-down and Bottom-up Inference |
| DANCER LSTM | 44.09 | 17.69 | 40.27 | A Divide-and-Conquer Approach to the Summarization of Long Documents |
| EYEGLAXS | 50.34 | 24.57 | 45.96 | Scaling Up Summarization: Leveraging Large Language Models for Long Text Extractive Summarization |