Text Summarization On Gigaword
Evaluation Metrics
ROUGE-1
ROUGE-2
ROUGE-L
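ROUGE-1 and ROUGE-2 measure unigram and bigram overlap between a system summary and the reference; ROUGE-L measures their longest common subsequence. As a minimal sketch of how such scores are computed, the snippet below uses Google's open-source `rouge-score` package; the example strings are illustrative, not drawn from Gigaword.

```python
# Minimal sketch: computing the three leaderboard metrics with the
# `rouge-score` package (pip install rouge-score). Example strings are
# illustrative only, not taken from the Gigaword benchmark.
from rouge_score import rouge_scorer

# ROUGE-1/2 count overlapping unigrams/bigrams; ROUGE-L uses the
# longest common subsequence between candidate and reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "police arrest suspect in downtown robbery"
candidate = "police arrest robbery suspect downtown"

scores = scorer.score(reference, candidate)
for name, score in scores.items():
    # Leaderboards conventionally report the F1 component, scaled to 0-100.
    print(f"{name}: {100 * score.fmeasure:.2f}")
```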
Evaluation Results
Performance results of each model on this benchmark.
| Model Name | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title |
|---|---|---|---|---|
| OpenAI/o3-mini | 60.12 | 54.22 | 57.21 | - |
| Riple/Saanvi-v0.1 | 52.21 | 45.58 | 60.29 | - |
| Pegasus+DotProd | 40.6 | 21.0 | 37.0 | Beyond Reptile: Meta-Learned Dot-Product Maximization between Gradients for Improved Single-Task Regularization |
| BART-RXF | 40.45 | 20.69 | 36.56 | Better Fine-Tuning by Reducing Representational Collapse |
| MUPPET BART Large | 40.4 | 20.54 | 36.21 | Muppet: Massive Multi-task Representations with Pre-Finetuning |
| OFA | 39.81 | 20.66 | 37.11 | OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework |
| Transformer+Rep(Uni) | 39.81 | 20.40 | 36.93 | Rethinking Perturbations in Encoder-Decoders for Fast Training |
| Transformer+Wdrop | 39.66 | 20.45 | 36.59 | Rethinking Perturbations in Encoder-Decoders for Fast Training |
| ProphetNet | 39.51 | 20.42 | 36.69 | ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training |
| ERNIE-GEN LARGE (large-scale text corpora) | 39.46 | 20.34 | 36.74 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| PALM | 39.45 | 20.37 | 36.75 | PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation |
| Best Summary Length | 39.27 | 20.40 | 37.75 | A New Approach to Overgenerating and Scoring Abstractive Summaries |
| ERNIE-GEN LARGE | 39.25 | 20.25 | 36.53 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| ControlCopying + BPNorm | 39.19 | 20.38 | 36.69 | Controlling the Amount of Verbatim Copying in Abstractive Summarization |
| PEGASUS | 39.12 | 19.86 | 36.24 | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization |
| BiSET | 39.11 | 19.78 | 36.87 | BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization |
| ControlCopying + SBWR | 39.08 | 20.47 | 36.69 | Controlling the Amount of Verbatim Copying in Abstractive Summarization |
| UniLM | 38.90 | 20.05 | 36.00 | Unified Language Model Pre-training for Natural Language Understanding and Generation |
| ERNIE-GEN BASE | 38.83 | 20.04 | 36.20 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| MASS | 38.73 | 19.71 | 35.96 | MASS: Masked Sequence to Sequence Pre-training for Language Generation |
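A full evaluation in the style of this table averages per-example F1 over the Gigaword test split. The sketch below is a hedged outline assuming the Hugging Face `datasets` hub copy of Gigaword (fields `document` and `summary`); `summarize()` is a hypothetical placeholder standing in for any of the systems above, here a trivial lead baseline.

```python
# Hedged sketch: scoring a system over the Gigaword test split.
# Assumes the Hugging Face hub dataset "gigaword" with `document`/`summary`
# fields; summarize() is a hypothetical placeholder, not any model above.
from datasets import load_dataset
from rouge_score import rouge_scorer

def summarize(document: str) -> str:
    # Placeholder lead baseline: echo the first eight tokens.
    return " ".join(document.split()[:8])

ds = load_dataset("gigaword", split="test")
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
for ex in ds:
    scores = scorer.score(ex["summary"], summarize(ex["document"]))
    for name in totals:
        totals[name] += scores[name].fmeasure

for name, total in totals.items():
    print(f"{name}: {100 * total / len(ds):.2f}")  # mean F1, scaled to 0-100
```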