Text Summarization on Gigaword
Metrics: ROUGE-1, ROUGE-2, ROUGE-L
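For readers unfamiliar with these metrics, here is a minimal, self-contained sketch of how ROUGE-1, ROUGE-2, and ROUGE-L F1 can be computed; leaderboard values are conventionally F1 × 100. This is a simplification: published numbers come from the official ROUGE toolkit (or packages such as rouge-score), which add stemming, tokenization rules, and multi-reference handling not shown here. The example sentences are illustrative only.

```python
# Minimal sketch of ROUGE-1 / ROUGE-2 / ROUGE-L F1 (simplified; the
# official ROUGE toolkit adds stemming and tokenization rules).
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams over a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def f1(overlap, cand_total, ref_total):
    """F1 from an overlap count and the candidate/reference totals."""
    if cand_total == 0 or ref_total == 0:
        return 0.0
    precision, recall = overlap / cand_total, overlap / ref_total
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def rouge_n(candidate, reference, n):
    """ROUGE-N F1: clipped n-gram overlap between candidate and reference."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())  # Counter & takes the min count per n-gram
    return f1(overlap, sum(cand.values()), sum(ref.values()))

def lcs_length(a, b):
    """Length of the longest common subsequence (dynamic programming)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate, reference):
    """ROUGE-L F1: based on the longest common subsequence."""
    return f1(lcs_length(candidate, reference), len(candidate), len(reference))

if __name__ == "__main__":
    # Hypothetical reference headline and model output, for illustration.
    ref = "police arrest suspect in downtown robbery".split()
    cand = "police arrest robbery suspect downtown".split()
    print(f"ROUGE-1: {rouge_n(cand, ref, 1):.3f}")  # ~0.909
    print(f"ROUGE-2: {rouge_n(cand, ref, 2):.3f}")  # ~0.222
    print(f"ROUGE-L: {rouge_l(cand, ref):.3f}")     # ~0.727
```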
Results

Performance of various models on this benchmark (higher is better for all metrics):

| Model Name | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title |
|---|---|---|---|---|
| OpenAI/o3-mini | 60.12 | 54.22 | 57.21 | - |
| Riple/Saanvi-v0.1 | 52.21 | 45.58 | 60.29 | - |
| Pegasus+DotProd | 40.6 | 21.0 | 37.0 | Beyond Reptile: Meta-Learned Dot-Product Maximization between Gradients for Improved Single-Task Regularization |
| BART-RXF | 40.45 | 20.69 | 36.56 | Better Fine-Tuning by Reducing Representational Collapse |
| MUPPET BART Large | 40.4 | 20.54 | 36.21 | Muppet: Massive Multi-task Representations with Pre-Finetuning |
| OFA | 39.81 | 20.66 | 37.11 | OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework |
| Transformer+Rep(Uni) | 39.81 | 20.40 | 36.93 | Rethinking Perturbations in Encoder-Decoders for Fast Training |
| Transformer+Wdrop | 39.66 | 20.45 | 36.59 | Rethinking Perturbations in Encoder-Decoders for Fast Training |
| ProphetNet | 39.51 | 20.42 | 36.69 | ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training |
| ERNIE-GEN (large, large-scale text corpora) | 39.46 | 20.34 | 36.74 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| PALM | 39.45 | 20.37 | 36.75 | PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation |
| Best Summary Length | 39.27 | 20.40 | 37.75 | A New Approach to Overgenerating and Scoring Abstractive Summaries |
| ERNIE-GEN (large) | 39.25 | 20.25 | 36.53 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| ControlCopying + BPNorm | 39.19 | 20.38 | 36.69 | Controlling the Amount of Verbatim Copying in Abstractive Summarization |
| PEGASUS | 39.12 | 19.86 | 36.24 | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization |
| BiSET | 39.11 | 19.78 | 36.87 | BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization |
| ControlCopying + SBWR | 39.08 | 20.47 | 36.69 | Controlling the Amount of Verbatim Copying in Abstractive Summarization |
| UniLM | 38.90 | 20.05 | 36.00 | Unified Language Model Pre-training for Natural Language Understanding and Generation |
| ERNIE-GEN (base) | 38.83 | 20.04 | 36.20 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| MASS | 38.73 | 19.71 | 35.96 | MASS: Masked Sequence to Sequence Pre-training for Language Generation |
The table above shows the first 20 of 41 leaderboard entries.
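To reproduce numbers of this kind, a typical pipeline loads the Gigaword test split, generates headlines with a model checkpoint, and scores them with ROUGE. Below is a hedged sketch assuming the Hugging Face datasets and transformers packages, the rouge-score package, and the public google/pegasus-gigaword checkpoint (corresponding to the PEGASUS row above); the dataset slice and generation settings are illustrative, and published leaderboard figures use the full test set with the official evaluation setup, so scores from this sketch will only be approximate.

```python
# Illustrative sketch, not the leaderboard's official evaluation pipeline.
# Assumes: pip install datasets transformers rouge-score torch
# (availability of the "gigaword" dataset script may vary across
# datasets versions; checkpoint and settings are assumptions).
from datasets import load_dataset
from rouge_score import rouge_scorer
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Small slice of the test split for a quick sanity check; the real
# Gigaword test set behind the leaderboard numbers is far larger.
dataset = load_dataset("gigaword", split="test[:100]")

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-gigaword")
model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-gigaword")

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}

for example in dataset:
    inputs = tokenizer(example["document"], return_tensors="pt", truncation=True)
    ids = model.generate(**inputs, num_beams=4, max_new_tokens=32)
    prediction = tokenizer.decode(ids[0], skip_special_tokens=True)
    scores = scorer.score(example["summary"], prediction)
    for key in totals:
        totals[key] += scores[key].fmeasure

# Leaderboard values are conventionally F1 x 100.
for key, total in totals.items():
    print(f"{key}: {100 * total / len(dataset):.2f}")
```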