HyperAI초신경
Extractive Document Summarization on CNN
Evaluation Metrics: ROUGE-1, ROUGE-2, ROUGE-L
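The three metrics above measure n-gram and subsequence overlap between a system summary and a reference summary. Below is a minimal sketch of how ROUGE-1, ROUGE-2, and ROUGE-L F1 scores can be computed, assuming simple whitespace tokenization; the scores on this leaderboard come from the official ROUGE toolkit, which adds stemming and other preprocessing not shown here.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n):
    """ROUGE-N F1: n-gram overlap between candidate and reference."""
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())
    if not cand or not ref or overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def rouge_l(candidate, reference):
    """ROUGE-L F1 based on the longest common subsequence (LCS) of tokens."""
    a, b = candidate.split(), reference.split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(a)][len(b)]
    if lcs == 0:
        return 0.0
    p, r = lcs / len(a), lcs / len(b)
    return 2 * p * r / (p + r)

cand = "the cat sat on the mat"
ref = "the cat is on the mat"
print(round(rouge_n(cand, ref, 1), 3))  # 0.833
print(round(rouge_n(cand, ref, 2), 3))  # 0.6
print(round(rouge_l(cand, ref), 3))     # 0.833
```

Note that ROUGE-2 penalizes the word substitution ("sat" vs. "is") more heavily than ROUGE-1, since each changed word breaks two bigrams; ROUGE-L rewards the long in-order subsequence the two sentences share.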
Evaluation Results

Performance of each model on this benchmark:

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title | Repository |
| --- | --- | --- | --- | --- | --- |
| REFRESH | 40.0 | 18.2 | 36.6 | Ranking Sentences for Extractive Summarization with Reinforcement Learning | |
| HIBERT | 42.37 | 19.95 | 38.83 | HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization | - |
| Lead-3 baseline | 40.34 | 17.70 | 36.57 | Get To The Point: Summarization with Pointer-Generator Networks | |
| NeuSUM | 41.59 | 19.01 | 37.98 | Neural Document Summarization by Jointly Learning to Score and Select Sentences | |
| A2Summ | 44.11 | 20.31 | 35.92 | Align and Attend: Multimodal Summarization with Dual Contrastive Losses | |
| Latent | 41.05 | 18.77 | 37.54 | Neural Latent Extractive Document Summarization | - |
| ITS | 30.80 | 12.6 | - | Iterative Document Representation Learning Towards Summarization with Polishing | |
| PNBERT | 42.69 | 19.60 | 38.85 | Searching for Effective Neural Extractive Summarization: What Works and What's Next | |
| BERT-ext + RL | 42.76 | 19.87 | 39.11 | Summary Level Training of Sentence Rewriting for Abstractive Summarization | - |
| BanditSum | 41.5 | 18.7 | 37.6 | BanditSum: Extractive Summarization as a Contextual Bandit | |
| NeRoBERTa | 43.86 | 20.64 | 40.20 | Considering Nested Tree Structure in Sentence Extractive Summarization with Pre-trained Transformer | - |
| MatchSum | 44.41 | 20.86 | 40.55 | Extractive Summarization as Text Matching | |
| HAHSum | 44.68 | 21.30 | 40.75 | Neural Extractive Summarization with Hierarchical Attentive Heterogeneous Graph Network | - |
| HER | 42.3 | 18.9 | 37.9 | Reading Like HER: Human Reading Inspired Extractive Summarization | |