Text Summarization On Gigaword
Metrics
ROUGE-1
ROUGE-2
ROUGE-L
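All three metrics are F-measures of overlap between a generated summary and the reference: ROUGE-1 and ROUGE-2 count unigram and bigram overlap, while ROUGE-L scores the longest common subsequence. Below is a minimal sketch of computing them with Google's rouge-score package; this is one common implementation, an assumption on our part, since the leaderboard does not specify which ROUGE scorer each paper used.

```python
# pip install rouge-score
from rouge_score import rouge_scorer

# Scorer configured for the three metrics reported on this leaderboard.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

# Illustrative headline-style strings, not taken from the benchmark.
reference = "australian current account deficit narrows sharply"
candidate = "australia 's current account deficit shrinks"

# score(target, prediction) returns a dict of Score tuples,
# each carrying precision, recall, and fmeasure fields.
scores = scorer.score(reference, candidate)
for name, score in scores.items():
    print(f"{name}: F1 = {score.fmeasure:.4f}")
```

The table below reports the F-measure for each metric.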
Results
Performance results of the various models on this benchmark
Comparison table
Model name | ROUGE-1 | ROUGE-2 | ROUGE-L |
---|---|---|---|
a-reinforced-topic-aware-convolutional | 36.92 | 18.29 | 34.58 |
ernie-gen-an-enhanced-multi-flow-pre-training | 39.46 | 20.34 | 36.74 |
simple-unsupervised-summarization-by-1 | 26.48 | 10.05 | 24.41 |
abstractive-text-summarization-using-sequence | 36.40 | 17.70 | 33.71 |
better-fine-tuning-by-reducing | 40.45 | 20.69 | 36.56 |
concept-pointer-network-for-abstractive | 37.01 | 17.1 | 34.87 |
ernie-gen-an-enhanced-multi-flow-pre-training | 38.83 | 20.04 | 36.20 |
muppet-massive-multi-task-representations | 40.40 | 20.54 | 36.21 |
deep-recurrent-generative-decoder-for | 36.27 | 17.57 | 33.62 |
unifying-architectures-tasks-and-modalities | 39.81 | 20.66 | 37.11 |
joint-parsing-and-generation-for-abstractive | 36.61 | 18.85 | 34.33 |
faithful-to-the-original-fact-aware-neural | 37.27 | 17.65 | 34.24 |
rethinking-perturbations-in-encoder-decoders | 39.66 | 20.45 | 36.59 |
soft-layer-specific-multi-task-summarization | 35.98 | 17.76 | 33.63 |
selective-encoding-for-abstractive-sentence | 36.15 | 17.54 | 33.63 |
structure-infused-copy-mechanisms-for | 35.47 | 17.66 | 33.52 |
controlling-the-amount-of-verbatim-copying-in | 39.19 | 20.38 | 36.69 |
retrieve-rerank-and-rewrite-soft-template | 37.04 | 19.03 | 34.46 |
controlling-the-amount-of-verbatim-copying-in | 39.08 | 20.47 | 36.69 |
concept-pointer-network-for-abstractive | 38.02 | 16.97 | 35.43 |
global-encoding-for-abstractive-summarization | 36.30 | 18.00 | 33.80 |
attention-is-all-you-need | 37.57 | 18.90 | 34.69 |
entity-commonsense-representation-for-neural | 37.04 | 16.66 | 34.93 |
a-new-approach-to-overgenerating-and-scoring | 39.27 | 20.40 | 37.75 |
Model 25 | 60.12 | 54.22 | 57.21 |
biset-bi-directional-selective-encoding-with | 39.11 | 19.78 | 36.87 |
ernie-gen-an-enhanced-multi-flow-pre-training | 39.25 | 20.25 | 36.53 |
cutting-off-redundant-repeating-generations | 36.30 | 17.31 | 33.88 |
a-neural-attention-model-for-abstractive | 31.00 | - | - |
palm-pre-training-an-autoencoding | 39.45 | 20.37 | 36.75 |
mask-attention-networks-rethinking-and | 38.28 | 19.46 | 35.46 |
pegasus-pre-training-with-extracted-gap | 39.12 | 19.86 | 36.24 |
beyond-reptile-meta-learned-dot-product | 40.60 | 21.00 | 37.00 |
ensure-the-correctness-of-the-summary | 35.33 | 17.27 | 33.19 |
a-neural-attention-model-for-abstractive | 30.88 | - | - |
prophetnet-predicting-future-n-gram-for | 39.51 | 20.42 | 36.69 |
mass-masked-sequence-to-sequence-pre-training | 38.73 | 19.71 | 35.96 |
Model 38 | 52.21 | 45.58 | 60.29 |
abstractive-sentence-summarization-with | 33.78 | 15.97 | 31.15 |
rethinking-perturbations-in-encoder-decoders | 39.81 | 20.40 | 36.93 |
unified-language-model-pre-training-for | 38.90 | 20.05 | 36.00 |
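To reproduce any of these numbers, the underlying corpus is the headline-generation subset of English Gigaword, which pairs the first sentence of a news article with its headline. A minimal sketch of loading it, assuming the Hugging Face datasets library and its "gigaword" dataset card:

```python
# pip install datasets
from datasets import load_dataset

# The "gigaword" card exposes train/validation/test splits with
# "document" (source sentence) and "summary" (headline) fields.
dataset = load_dataset("gigaword", split="test")

example = dataset[0]
print(example["document"])  # input sentence
print(example["summary"])   # reference headline
```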