HyperAI

Text Summarization On Gigaword

Metrics

ROUGE-1
ROUGE-2
ROUGE-L
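All scores on this benchmark are ROUGE metrics: ROUGE-1 and ROUGE-2 measure unigram and bigram overlap between a generated summary and a reference, and ROUGE-L is based on the longest common subsequence. The sketch below is an illustrative, simplified implementation (whitespace tokenization, no stemming or stopword handling); it is not the official ROUGE-1.5.5 toolkit that leaderboard numbers are typically computed with.

```python
# Simplified ROUGE sketch: ROUGE-N as n-gram overlap F1,
# ROUGE-L as longest-common-subsequence F1.
from collections import Counter


def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def rouge_n(candidate, reference, n):
    """F1 overlap of n-grams between candidate and reference token lists."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def lcs_len(a, b):
    """Length of the longest common subsequence (dynamic programming)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]


def rouge_l(candidate, reference):
    """F1 based on the longest common subsequence of the two token lists."""
    lcs = lcs_len(candidate, reference)
    if lcs == 0:
        return 0.0
    precision = lcs / len(candidate)
    recall = lcs / len(reference)
    return 2 * precision * recall / (precision + recall)
```

For example, comparing the candidate "police kill the gunman" against the reference "police killed the gunman" gives ROUGE-1 = 0.75 (three of four unigrams match) and ROUGE-L = 0.75 (LCS "police the gunman").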

Results

Performance results of various models on this benchmark

Comparison table

| Model name | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| a-reinforced-topic-aware-convolutional | 36.92 | 18.29 | 34.58 |
| ernie-gen-an-enhanced-multi-flow-pre-training | 39.46 | 20.34 | 36.74 |
| simple-unsupervised-summarization-by-1 | 26.48 | 10.05 | 24.41 |
| abstractive-text-summarization-using-sequence | 36.4 | 17.7 | 33.71 |
| better-fine-tuning-by-reducing | 40.45 | 20.69 | 36.56 |
| concept-pointer-network-for-abstractive | 37.01 | 17.1 | 34.87 |
| ernie-gen-an-enhanced-multi-flow-pre-training | 38.83 | 20.04 | 36.20 |
| muppet-massive-multi-task-representations | 40.4 | 20.54 | 36.21 |
| deep-recurrent-generative-decoder-for | 36.27 | 17.57 | 33.62 |
| unifying-architectures-tasks-and-modalities | 39.81 | 20.66 | 37.11 |
| joint-parsing-and-generation-for-abstractive | 36.61 | 18.85 | 34.33 |
| faithful-to-the-original-fact-aware-neural | 37.27 | 17.65 | 34.24 |
| rethinking-perturbations-in-encoder-decoders | 39.66 | 20.45 | 36.59 |
| soft-layer-specific-multi-task-summarization | 35.98 | 17.76 | 33.63 |
| selective-encoding-for-abstractive-sentence | 36.15 | 17.54 | 33.63 |
| structure-infused-copy-mechanisms-for | 35.47 | 17.66 | 33.52 |
| controlling-the-amount-of-verbatim-copying-in | 39.19 | 20.38 | 36.69 |
| retrieve-rerank-and-rewrite-soft-template | 37.04 | 19.03 | 34.46 |
| controlling-the-amount-of-verbatim-copying-in | 39.08 | 20.47 | 36.69 |
| concept-pointer-network-for-abstractive | 38.02 | 16.97 | 35.43 |
| global-encoding-for-abstractive-summarization | 36.3 | 18.0 | 33.8 |
| attention-is-all-you-need | 37.57 | 18.90 | 34.69 |
| entity-commonsense-representation-for-neural | 37.04 | 16.66 | 34.93 |
| a-new-approach-to-overgenerating-and-scoring | 39.27 | 20.40 | 37.75 |
| Model 25 | 60.12 | 54.22 | 57.21 |
| biset-bi-directional-selective-encoding-with | 39.11 | 19.78 | 36.87 |
| ernie-gen-an-enhanced-multi-flow-pre-training | 39.25 | 20.25 | 36.53 |
| cutting-off-redundant-repeating-generations | 36.30 | 17.31 | 33.88 |
| a-neural-attention-model-for-abstractive | 31 | - | - |
| palm-pre-training-an-autoencoding | 39.45 | 20.37 | 36.75 |
| mask-attention-networks-rethinking-and | 38.28 | 19.46 | 35.46 |
| pegasus-pre-training-with-extracted-gap | 39.12 | 19.86 | 36.24 |
| beyond-reptile-meta-learned-dot-product | 40.6 | 21.0 | 37.0 |
| ensure-the-correctness-of-the-summary | 35.33 | 17.27 | 33.19 |
| a-neural-attention-model-for-abstractive | 30.88 | - | - |
| prophetnet-predicting-future-n-gram-for | 39.51 | 20.42 | 36.69 |
| mass-masked-sequence-to-sequence-pre-training | 38.73 | 19.71 | 35.96 |
| Model 38 | 52.21 | 45.58 | 60.29 |
| abstractive-sentence-summarization-with | 33.78 | 15.97 | 31.15 |
| rethinking-perturbations-in-encoder-decoders | 39.81 | 20.40 | 36.93 |
| unified-language-model-pre-training-for | 38.90 | 20.05 | 36.00 |