Abstractive Text Summarization on CNN/Daily Mail
Evaluation Metrics
ROUGE-1: unigram overlap between the generated summary and the reference summary
ROUGE-2: bigram overlap between the generated summary and the reference summary
ROUGE-L: overlap based on the longest common subsequence with the reference summary
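For orientation, here is a minimal sketch of how ROUGE-N overlap can be computed. It is a simplified illustration, not the official scorer: reported leaderboard numbers typically come from the official ROUGE toolkit or packages such as Google's rouge-score, which add stemming and other preprocessing, and ROUGE-L uses longest-common-subsequence matching rather than n-grams. The function names and example sentences below are hypothetical.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(reference, candidate, n=1):
    """Simplified ROUGE-N F1: clipped n-gram overlap between candidate and reference."""
    ref = ngrams(reference.lower().split(), n)
    cand = ngrams(candidate.lower().split(), n)
    if not ref or not cand:
        return 0.0
    overlap = sum((ref & cand).values())  # intersection clips counts per n-gram
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)

# Toy example (hypothetical sentences, not benchmark data):
ref = "the cat sat on the mat"
cand = "the cat lay on the mat"
print(rouge_n_f1(ref, cand, n=1))  # ROUGE-1 F1 ~ 0.833
print(rouge_n_f1(ref, cand, n=2))  # ROUGE-2 F1 = 0.600
```

The scores in the table below are F1 values on a 0-100 scale, so a reported 44.16 corresponds to an F1 of 0.4416 under this formulation.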
Evaluation Results
Results for each model on this benchmark.
Comparison Table
Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
---|---|---|---|
a-unified-model-for-extractive-and | 40.68 | 17.97 | 37.13 |
subformer-a-parameter-reduced-transformer | 40.90 | 18.30 | 37.70 |
pretraining-based-natural-language-generation | 41.71 | 19.49 | 38.79 |
pegasus-pre-training-with-extracted-gap | 44.17 | 21.47 | 41.11 |
mixture-content-selection-for-diverse | 41.72 | 18.74 | 38.79 |
delta-a-deep-learning-based-language | - | - | 27.30 |
get-to-the-point-summarization-with-pointer | 39.53 | 17.28 | 36.38 |
segmented-recurrent-transformer-an-efficient | 43.19 | 19.80 | 40.40 |
learn-to-copy-from-the-copying-history | 44.50 | 21.55 | 41.24 |
universal-evasion-attacks-on-summarization | 46.71 | 20.39 | 43.56 |
ernie-gen-an-enhanced-multi-flow-pre-training | 44.31 | 21.35 | 41.60 |
attention-is-all-you-need | 39.50 | 16.06 | 36.63 |
muppet-massive-multi-task-representations | 44.45 | 21.25 | 41.40 |
closed-book-training-to-improve-summarization | 40.66 | 17.87 | 37.06 |
the-summary-loop-learning-to-write-1 | 37.70 | - | - |
all-nlp-tasks-are-generation-tasks-a-general | 44.70 | 21.40 | 41.40 |
ernie-gen-an-enhanced-multi-flow-pre-training | 42.30 | 19.92 | 39.68 |
text-summarization-with-pretrained-encoders | 42.13 | 19.60 | 39.18 |
improving-neural-abstractive-document-1 | 40.30 | 18.02 | 37.36 |
salience-allocation-as-guidance-for | 46.27 | 22.64 | 43.08 |
palm-pre-training-an-autoencoding | 44.30 | 21.12 | 41.41 |
bottom-up-abstractive-summarization | 41.22 | 18.68 | 38.34 |
bart-denoising-sequence-to-sequence-pre | 44.16 | 21.28 | 40.90 |
an-editorial-network-for-enhanced-document | 41.42 | 19.03 | 38.36 |
universal-evasion-attacks-on-summarization | 48.18 | 19.84 | 45.35 |
fourier-transformer-fast-long-range-modeling | 44.76 | 21.55 | 41.34 |
fast-abstractive-summarization-with-reinforce | 40.88 | 17.80 | 38.54 |
improving-abstraction-in-text-summarization | 40.19 | 17.38 | 37.52 |
learn-to-copy-from-the-copying-history | 44.39 | 21.41 | 41.05 |
better-fine-tuning-by-reducing | 44.38 | 21.53 | 41.17 |
improving-neural-abstractive-document | 41.54 | 18.18 | 36.47 |
prophetnet-predicting-future-n-gram-for | 44.20 | 21.17 | 41.30 |
longt5-efficient-text-to-text-transformer-for | 43.94 | 21.40 | 41.28 |
simcls-a-simple-framework-for-contrastive | 46.67 | 22.15 | 43.54 |
r-drop-regularized-dropout-for-neural | 44.51 | 21.58 | 41.24 |
soft-layer-specific-multi-task-summarization | 39.81 | 17.64 | 36.54 |
calibrating-sequence-likelihood-improves | 47.36 | 24.02 | 44.45 |
crispo-multi-aspect-critique-suggestion | - | - | 27.40 |
unilmv2-pseudo-masked-language-models-for | 43.16 | 20.42 | 40.14 |
ernie-gen-an-enhanced-multi-flow-pre-training | 44.02 | 21.17 | 41.26 |
deep-communicating-agents-for-abstractive | 41.69 | 19.47 | 37.92 |
brio-bringing-order-to-abstractive | 47.78 | 23.55 | 44.57 |
summareranker-a-multi-task-mixture-of-experts-1 | 47.16 | 22.61 | 43.87 |
fast-abstractive-summarization-with-reinforce | 41.47 | 18.72 | 37.76 |
unified-language-model-pre-training-for | 43.08 | 20.43 | 40.34 |
abstractive-text-summarization-using-sequence | 40.42 | 17.62 | 36.67 |
mask-attention-networks-rethinking-and | 40.98 | 18.29 | 37.88 |
pay-less-attention-with-lightweight-and | 39.84 | 16.25 | 36.73 |
summary-level-training-of-sentence-rewriting | 41.90 | 19.08 | 39.64 |
exploring-the-limits-of-transfer-learning | 43.52 | 21.55 | 40.69 |
multi-reward-reinforced-summarization-with | 40.43 | 18.00 | 37.10 |