HyperAI
Extractive Text Summarization
Extractive Document Summarization On CNN
Evaluation Metrics
ROUGE-1
ROUGE-2
ROUGE-L
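The ROUGE-N scores above measure n-gram overlap between a system summary and a reference summary (ROUGE-1 for unigrams, ROUGE-2 for bigrams; ROUGE-L uses longest common subsequence instead). A minimal F1-style sketch of ROUGE-N, assuming whitespace tokenization and no stemming (the helper name `rouge_n` is illustrative, not from this page):

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """Simplified F1-style ROUGE-N: n-gram overlap between candidate
    and reference. Whitespace tokenization, no stemming or stopwording."""
    def ngrams(text: str, n: int) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    # Multiset intersection counts each shared n-gram at most min(cand, ref) times.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Published leaderboard numbers use the standard ROUGE toolkit (with stemming and proper tokenization), so this sketch will not reproduce the table's values exactly.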
Evaluation Results
Performance of each model on this benchmark
Model Name      | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title
HAHSum          | 44.68   | 21.30   | 40.75   | Neural Extractive Summarization with Hierarchical Attentive Heterogeneous Graph Network
MatchSum        | 44.41   | 20.86   | 40.55   | Extractive Summarization as Text Matching
A2Summ          | 44.11   | 20.31   | 35.92   | Align and Attend: Multimodal Summarization with Dual Contrastive Losses
NeRoBERTa       | 43.86   | 20.64   | 40.20   | Considering Nested Tree Structure in Sentence Extractive Summarization with Pre-trained Transformer
BERT-ext + RL   | 42.76   | 19.87   | 39.11   | Summary Level Training of Sentence Rewriting for Abstractive Summarization
PNBERT          | 42.69   | 19.60   | 38.85   | Searching for Effective Neural Extractive Summarization: What Works and What's Next
HIBERT          | 42.37   | 19.95   | 38.83   | HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
HER             | 42.3    | 18.9    | 37.9    | Reading Like HER: Human Reading Inspired Extractive Summarization
NeuSUM          | 41.59   | 19.01   | 37.98   | Neural Document Summarization by Jointly Learning to Score and Select Sentences
BanditSum       | 41.5    | 18.7    | 37.6    | BanditSum: Extractive Summarization as a Contextual Bandit
Latent          | 41.05   | 18.77   | 37.54   | Neural Latent Extractive Document Summarization
Lead-3 baseline | 40.34   | 17.70   | 36.57   | Get To The Point: Summarization with Pointer-Generator Networks
REFRESH         | 40.0    | 18.2    | 36.6    | Ranking Sentences for Extractive Summarization with Reinforcement Learning
ITS             | 30.80   | 12.6    | -       | Iterative Document Representation Learning Towards Summarization with Polishing
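The Lead-3 baseline in the table builds a summary by simply taking the first three sentences of the article, a strong heuristic for news text because key information tends to appear early. A minimal sketch, assuming naive punctuation-based sentence splitting (the function name `lead_k` is illustrative):

```python
import re

def lead_k(document: str, k: int = 3) -> str:
    """Lead-k baseline: return the first k sentences of the document
    as the summary. Naive splitting on sentence-final punctuation;
    production systems use a proper sentence segmenter."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    return " ".join(sentences[:k])
```

Despite its simplicity, Lead-3 outscores several learned systems in the table above, which is why it is the standard sanity-check baseline for extractive summarization on news datasets.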