Abstractive Text Summarization on CNN / Daily Mail
Metrics: ROUGE-1, ROUGE-2, ROUGE-L
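For reference, ROUGE-N counts overlapping n-grams between a generated summary and a reference summary, and ROUGE-L scores their longest common subsequence. Below is a minimal, self-contained sketch of the F-measure variants; it is illustrative only (the example sentences are made up), and official evaluations normally use the ROUGE Perl toolkit or Google's rouge-score package, which add stemming and other normalization this sketch omits.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams over a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def f_measure(precision, recall):
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def rouge_n(reference, candidate, n):
    """ROUGE-N F1: clipped n-gram overlap between candidate and reference."""
    ref, cand = ngrams(reference.split(), n), ngrams(candidate.split(), n)
    overlap = sum((ref & cand).values())  # per-n-gram counts clipped to the minimum
    p = overlap / sum(cand.values()) if cand else 0.0
    r = overlap / sum(ref.values()) if ref else 0.0
    return f_measure(p, r)

def lcs_length(a, b):
    """Longest common subsequence length via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(reference, candidate):
    """ROUGE-L F1 based on the longest common subsequence."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_length(ref, cand)
    p = lcs / len(cand) if cand else 0.0
    r = lcs / len(ref) if ref else 0.0
    return f_measure(p, r)

reference = "police killed the gunman"
candidate = "the gunman was killed by police"
print(f"ROUGE-1: {rouge_n(reference, candidate, 1):.4f}")  # 0.8000
print(f"ROUGE-2: {rouge_n(reference, candidate, 2):.4f}")  # 0.2500
print(f"ROUGE-L: {rouge_l(reference, candidate):.4f}")     # 0.4000
```

Note that this sketch returns fractions in [0, 1], while the leaderboard below reports scores on the conventional 0–100 scale.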
Results
Performance results of various models on this benchmark
| Model name | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title |
|---|---|---|---|---|
| Scrambled code + broken (alter) | 48.18 | 19.84 | 45.35 | Universal Evasion Attacks on Summarization Scoring |
| BRIO | 47.78 | 23.55 | 44.57 | BRIO: Bringing Order to Abstractive Summarization |
| Pegasus | 47.36 | 24.02 | 44.45 | Calibrating Sequence Likelihood Improves Conditional Language Generation |
| PEGASUS + SummaReranker | 47.16 | 22.61 | 43.87 | SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization |
| Scrambled code + broken | 46.71 | 20.39 | 43.56 | Universal Evasion Attacks on Summarization Scoring |
| BART + SimCLS | 46.67 | 22.15 | 43.54 | SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization |
| SEASON | 46.27 | 22.64 | 43.08 | Salience Allocation as Guidance for Abstractive Summarization |
| Fourier Transformer | 44.76 | 21.55 | 41.34 | Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator |
| GLM-XXLarge | 44.7 | 21.4 | 41.4 | GLM: General Language Model Pretraining with Autoregressive Blank Infilling |
| BART + R-Drop | 44.51 | 21.58 | 41.24 | R-Drop: Regularized Dropout for Neural Networks |
| CoCoNet + CoCoPretrain | 44.50 | 21.55 | 41.24 | Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization |
| MUPPET BART Large | 44.45 | 21.25 | 41.4 | Muppet: Massive Multi-task Representations with Pre-Finetuning |
| CoCoNet | 44.39 | 21.41 | 41.05 | Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization |
| BART+R3F | 44.38 | 21.53 | 41.17 | Better Fine-Tuning by Reducing Representational Collapse |
| ERNIE-GEN LARGE (large-scale text corpora) | 44.31 | 21.35 | 41.60 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| PALM | 44.30 | 21.12 | 41.41 | PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation |
| ProphetNet | 44.20 | 21.17 | 41.30 | ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training |
| PEGASUS | 44.17 | 21.47 | 41.11 | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization |
| BART | 44.16 | 21.28 | 40.90 | BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension |
| ERNIE-GEN LARGE | 44.02 | 21.17 | 41.26 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
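To reproduce numbers like the ones above, the underlying CNN / Daily Mail data is commonly pulled from the Hugging Face hub. A minimal sketch, assuming the datasets package and the public cnn_dailymail dataset card (version 3.0.0, the non-anonymized variant used by most recent systems):

```python
from datasets import load_dataset  # pip install datasets

# Version 3.0.0 is the non-anonymized variant of CNN / Daily Mail.
dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")

example = dataset[0]
print(example["article"][:200])  # source news article (truncated here)
print(example["highlights"])     # reference summary: bullet-style highlights
```

Candidate summaries generated for the test split can then be scored against the highlights field with a ROUGE implementation like the one sketched earlier.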