Text Summarization on PubMed
Metrics: ROUGE-1, ROUGE-2, ROUGE-L
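All three metrics score lexical overlap between a generated summary and a reference: ROUGE-1 and ROUGE-2 over unigrams and bigrams, ROUGE-L over the longest common subsequence. A minimal, self-contained sketch of the F1 variants is below; it is illustrative only, since the official ROUGE toolkit additionally applies tokenization rules, optional stemming, and bootstrap resampling, so its numbers will differ slightly:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n(candidate, reference, n):
    """F1 over n-gram overlap (clipped counts via Counter intersection)."""
    cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def lcs_len(a, b):
    """Classic dynamic-programming longest-common-subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate, reference):
    """F1 based on longest common subsequence."""
    lcs = lcs_len(candidate, reference)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(candidate), lcs / len(reference)
    return 2 * precision * recall / (precision + recall)

cand = "the cat sat on the mat".split()
ref = "the cat lay on the mat".split()
print(f"{rouge_n(cand, ref, 1):.3f}")  # 0.833 (5 of 6 unigrams shared)
print(f"{rouge_n(cand, ref, 2):.3f}")  # 0.600 (3 of 5 bigrams shared)
print(f"{rouge_l(cand, ref):.3f}")     # 0.833 (LCS "the cat on the mat")
```

Leaderboard scores are these fractions multiplied by 100, averaged over the test set.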
Results
Performance of various models on this benchmark:
| Model Name | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title |
|---|---|---|---|---|
| Top Down Transformer (AdaPool) (464M) | 51.05 | 23.26 | 46.47 | Long Document Summarization with Top-down and Bottom-up Inference |
| eyeglaxs | 50.34 | 24.57 | 45.96 | Scaling Up Summarization: Leveraging Large Language Models for Long Text Extractive Summarization |
| BART-LS | 50.3 | - | - | Adapting Pretrained Text-to-Text Models for Long Text Sequences |
| LongT5 | 50.23 | 24.76 | 46.67 | LongT5: Efficient Text-To-Text Transformer for Long Sequences |
| GoSum (extractive) | 49.83 | 23.56 | 45.10 | GoSum: Extractive Summarization of Long Documents by Reinforcement Learning and Graph Organized discourse state |
| Lodoss-full-large (extractive) | 49.38 | 23.89 | 44.84 | Toward Unifying Text Segmentation and Long Document Summarization |
| MemSum (extractive) | 49.25 | 22.94 | 44.42 | MemSum: Extractive Summarization of Long Documents Using Multi-Step Episodic Markov Decision Processes |
| Lodoss-full-base (extractive) | 48.93 | 23.51 | 44.40 | Toward Unifying Text Segmentation and Long Document Summarization |
| HAT-BART | 48.25 | 21.35 | 36.69 | Hierarchical Learning for Generation with Long Source Sequences |
| GRETEL | 48.20 | 21.20 | 43.16 | GRETEL: Graph Contrastive Topic Enhanced Language Model for Long Document Extractive Summarization |
| DeepPyramidion | 47.81 | 21.14 | - | Sparsifying Transformer Models with Trainable Representation Pooling |
| FactorSum | 47.5 | 20.33 | 43.76 | Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents |
| HiStruct+ | 46.59 | 20.39 | 42.11 | HiStruct+: Improving Extractive Text Summarization with Hierarchical Structure Information |
| DANCER PEGASUS | 46.34 | 19.97 | 42.42 | A Divide-and-Conquer Approach to the Summarization of Long Documents |
| BigBird-Pegasus | 46.32 | 20.65 | 42.33 | Big Bird: Transformers for Longer Sequences |
| ExtSum-LG+MMR-Select+ | 45.39 | 20.37 | 40.99 | Systematically Exploring Redundancy Reduction in Summarizing Long Documents |
| ExtSum-LG+RdLoss | 45.3 | 20.42 | 40.95 | Systematically Exploring Redundancy Reduction in Summarizing Long Documents |
| PEGASUS | 45.09 | - | - | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization |
| Sent-CLF | 45.01 | - | - | On Extractive and Abstractive Neural Document Summarization with Transformer Language Models |
| ExtSum-LG | 44.81 | 19.74 | - | Extractive Summarization of Long Documents by Combining Global and Local Context |
Showing 19 of 29 entries.