HyperAI초신경
Text Summarization

Text Summarization On Pubmed 1
Evaluation metrics: ROUGE-1, ROUGE-2, ROUGE-L
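The leaderboard ranks models by ROUGE overlap with reference abstracts. As a rough illustration of what ROUGE-N measures, here is a minimal sketch using plain whitespace tokenization (official implementations additionally apply stemming and careful tokenization, and ROUGE-L uses longest-common-subsequence rather than n-gram overlap):

```python
from collections import Counter

def rouge_n_f1(reference: str, candidate: str, n: int = 1) -> float:
    """Minimal ROUGE-N F1 sketch: clipped n-gram overlap between
    a candidate summary and a single reference summary."""
    def ngrams(text: str, n: int) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    ref_counts = ngrams(reference, n)
    cand_counts = ngrams(candidate, n)
    # Clipped overlap: each n-gram counts at most as often as it
    # appears in the reference.
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```

With `n=1` this corresponds to ROUGE-1 and with `n=2` to ROUGE-2; the scores in the table below are these F1 values scaled to percentages.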
Evaluation Results

Performance of each model on this benchmark:
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title |
| --- | --- | --- | --- | --- |
| Top Down Transformer (AdaPool) (464M) | 51.05 | 23.26 | 46.47 | Long Document Summarization with Top-down and Bottom-up Inference |
| eyeglaxs | 50.34 | 24.57 | 45.96 | Scaling Up Summarization: Leveraging Large Language Models for Long Text Extractive Summarization |
| BART-LS | 50.3 | - | - | Adapting Pretrained Text-to-Text Models for Long Text Sequences |
| LongT5 | 50.23 | 24.76 | 46.67 | LongT5: Efficient Text-To-Text Transformer for Long Sequences |
| GoSum (extractive) | 49.83 | 23.56 | 45.10 | GoSum: Extractive Summarization of Long Documents by Reinforcement Learning and Graph Organized Discourse State |
| Lodoss-full-large (extractive) | 49.38 | 23.89 | 44.84 | Toward Unifying Text Segmentation and Long Document Summarization |
| MemSum (extractive) | 49.25 | 22.94 | 44.42 | MemSum: Extractive Summarization of Long Documents Using Multi-Step Episodic Markov Decision Processes |
| Lodoss-full-base (extractive) | 48.93 | 23.51 | 44.40 | Toward Unifying Text Segmentation and Long Document Summarization |
| HAT-BART | 48.25 | 21.35 | 36.69 | Hierarchical Learning for Generation with Long Source Sequences |
| GRETEL | 48.20 | 21.20 | 43.16 | GRETEL: Graph Contrastive Topic Enhanced Language Model for Long Document Extractive Summarization |
| DeepPyramidion | 47.81 | 21.14 | - | Sparsifying Transformer Models with Trainable Representation Pooling |
| FactorSum | 47.5 | 20.33 | 43.76 | Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents |
| HiStruct+ | 46.59 | 20.39 | 42.11 | HiStruct+: Improving Extractive Text Summarization with Hierarchical Structure Information |
| DANCER PEGASUS | 46.34 | 19.97 | 42.42 | A Divide-and-Conquer Approach to the Summarization of Long Documents |
| BigBird-Pegasus | 46.32 | 20.65 | 42.33 | Big Bird: Transformers for Longer Sequences |
| ExtSum-LG+MMR-Select+ | 45.39 | 20.37 | 40.99 | Systematically Exploring Redundancy Reduction in Summarizing Long Documents |
| ExtSum-LG+RdLoss | 45.3 | 20.42 | 40.95 | Systematically Exploring Redundancy Reduction in Summarizing Long Documents |
| PEGASUS | 45.09 | - | - | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization |
| Sent-CLF | 45.01 | - | - | On Extractive and Abstractive Neural Document Summarization with Transformer Language Models |
| ExtSum-LG | 44.81 | 19.74 | - | Extractive Summarization of Long Documents by Combining Global and Local Context |