HyperAI
Text Summarization on DUC 2004 Task 1
Metrics
ROUGE-1
ROUGE-2
ROUGE-L
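The three metrics above score a generated headline against a reference by n-gram overlap: ROUGE-1 over unigrams, ROUGE-2 over bigrams, and ROUGE-L over the longest common subsequence. As a minimal sketch, the recall-oriented ROUGE-N used in the DUC evaluations can be computed from n-gram multisets (this is an illustrative implementation, not the official ROUGE toolkit, which also handles stemming, stopword options, and multiple references):

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n):
    """ROUGE-N recall: overlapping n-grams / total n-grams in the reference."""
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())  # multiset intersection
    total = sum(ref.values())
    return overlap / total if total else 0.0
```

For example, scoring "the cat sat on the mat" against the reference "the cat is on the mat" gives ROUGE-1 recall of 5/6 (five of six reference unigrams matched) and ROUGE-2 recall of 3/5.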
Results
Performance results of various models on this benchmark
| Model Name | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title |
|---|---|---|---|---|
| Transformer LM | - | 17.74 | - | Sample Efficient Text Summarization Using a Single Pre-Trained Transformer |
| Transformer+LRPE+PE+Re-ranking+Ensemble | 32.85 | 11.78 | 28.52 | Positional Encoding to Control Output Sequence Length |
| Transformer+LRPE+PE+ALONE+Re-ranking | 32.57 | 11.63 | 28.24 | All Word Embeddings from One Embedding |
| Transformer+WDrop | 33.06 | 11.45 | 28.51 | Rethinking Perturbations in Encoder-Decoders for Fast Training |
| Reinforced-Topic-ConvS2S | 31.15 | 10.85 | 27.68 | A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization |
| DRGD | 31.79 | 10.75 | 27.48 | Deep Recurrent Generative Decoder for Abstractive Text Summarization |
| EndDec+WFE | 32.28 | 10.54 | 27.80 | Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization |
| Seq2seq + selective + MTL + ERAM | 29.33 | 10.24 | 25.24 | Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization |
| SEASS | 29.21 | 9.56 | 25.51 | Selective Encoding for Abstractive Sentence Summarization |
| words-lvt5k-1sent | 28.61 | 9.42 | 25.24 | Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond |
| Abs+ | 28.18 | 8.49 | 23.81 | A Neural Attention Model for Abstractive Sentence Summarization |
| RAS-Elman | 28.97 | 8.26 | 24.06 | - |
| ABS | - | - | 22.05 | A Neural Attention Model for Abstractive Sentence Summarization |