Data-to-Text Generation on the E2E NLG Challenge
Metrics
BLEU (Bilingual Evaluation Understudy; n-gram precision)
CIDEr (Consensus-based Image Description Evaluation)
METEOR (Metric for Evaluation of Translation with Explicit ORdering)
NIST (information-weighted n-gram precision, from NIST's MT evaluation)
ROUGE-L (longest-common-subsequence variant of ROUGE)
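On this leaderboard the five scores come from the challenge's official scorer (https://github.com/tuetschek/e2e-metrics), which wraps the MT-Eval BLEU/NIST implementation and the MS-COCO caption-evaluation code for METEOR, ROUGE-L, and CIDEr. As a rough, unofficial illustration, the sketch below approximates corpus-level BLEU and NIST with NLTK; the example sentences and the smoothing choice are assumptions, so it will not reproduce the official numbers exactly.

```python
# Minimal sketch: corpus-level BLEU and NIST with NLTK.
# Unofficial approximation only; the leaderboard uses the challenge's
# own scoring script, and the example sentences below are invented.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
from nltk.translate.nist_score import corpus_nist

# One token list per system output; each output is paired with the list
# of human references written for the same meaning representation.
hypotheses = [
    "the vaults is a family friendly restaurant near cafe adriatic".split(),
]
references = [
    [
        "the vaults is a family friendly place near cafe adriatic".split(),
        "near cafe adriatic there is a family friendly restaurant called the vaults".split(),
    ],
]

bleu = corpus_bleu(references, hypotheses,
                   smoothing_function=SmoothingFunction().method3)  # smoothing choice is an assumption
nist = corpus_nist(references, hypotheses, n=5)
print(f"BLEU: {100 * bleu:.2f}  NIST: {nist:.4f}")
```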
Results
Performance of various models on this benchmark, as reported for the E2E NLG Challenge test set. All five metrics are higher-is-better.
Comparison Table
| Model Name | BLEU | CIDEr | METEOR | NIST | ROUGE-L |
|---|---|---|---|---|---|
| Technical Report for E2E NLG Challenge | 64.22 | 2.2721 | 44.69 | 8.3453 | 66.45 |
| TriCy: Trigger-Guided Data-to-Text Generation | 66.43 | - | - | - | 70.14 |
| TNT-NLG System 1: Using a Statistical NLG to … | 65.61 | 2.2183 | 45.17 | 8.5105 | 68.39 |
| Findings of the E2E NLG Challenge | 65.93 | 2.2338 | 44.83 | 8.6094 | 68.50 |
| Copy Mechanism and Tailored Training for … | 67.05 | 2.2355 | 44.49 | 8.5150 | 68.94 |
| E2E NLG Challenge: Neural Models vs. Templates | 56.57 | 1.8206 | 45.29 | 7.4544 | 66.14 |
| A Deep Ensemble Model with Slot Alignment for … | 66.19 | - | 44.54 | 8.6130 | 67.72 |
| Self-Training from Self-Memory in Data-to-… | 65.11 | 2.16 | 46.11 | 8.35 | 68.41 |
| Copy Mechanism and Tailored Training for … | 65.80 | 2.1803 | 45.16 | 8.5615 | 67.40 |
| Attention-Regularized Sequence-to-Sequence … | 65.45 | 2.1012 | 43.92 | 8.1804 | 70.83 |
| Pragmatically Informative Text Generation | 68.60 | 2.37 | 45.25 | 8.73 | 70.82 |
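For a quick programmatic comparison, a sketch like the one below reports the top system per metric. The scores are transcribed from the table above (only a subset of rows is shown, with `None` for metrics a paper did not report); the `rows` structure and the subset choice are purely illustrative.

```python
import pandas as pd

# Scores transcribed from the comparison table above (subset shown;
# None marks metrics the paper did not report).
rows = [
    ("Pragmatically Informative Text Generation", 68.60, 2.37, 45.25, 8.73, 70.82),
    ("Copy Mechanism and Tailored Training for …", 67.05, 2.2355, 44.49, 8.5150, 68.94),
    ("TriCy: Trigger-Guided Data-to-Text Generation", 66.43, None, None, None, 70.14),
    ("Attention-Regularized Sequence-to-Sequence …", 65.45, 2.1012, 43.92, 8.1804, 70.83),
    ("E2E NLG Challenge: Neural Models vs. Templates", 56.57, 1.8206, 45.29, 7.4544, 66.14),
]
df = pd.DataFrame(rows, columns=["Model", "BLEU", "CIDEr", "METEOR", "NIST", "ROUGE-L"])

# idxmax picks the top score in each metric column, skipping missing values.
for metric in df.columns[1:]:
    best = df.loc[df[metric].idxmax()]
    print(f"{metric}: {best['Model']} ({best[metric]})")
```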