KG-to-Text Generation on WebNLG 2.0
Metrics
BLEU: modified n-gram precision against the references, with a brevity penalty.
METEOR: unigram matching that also credits stems and synonyms, balancing precision and recall.
ROUGE: recall-oriented overlap with the references (papers on this benchmark typically report ROUGE-L, the longest-common-subsequence variant).
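The snippet below is a minimal sketch of how these three metrics can be computed for a single system output against multiple references, using the common `sacrebleu`, `nltk`, and `rouge-score` packages. The package choices and toy strings are illustrative assumptions; this is not the benchmark's official scoring pipeline.

```python
# Minimal sketch: scoring one hypothesis against its references with the
# three leaderboard metrics. Requires: pip install sacrebleu nltk rouge-score
# plus nltk.download("wordnet") for METEOR. Toy strings are illustrative.
import sacrebleu
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

hypothesis = "Alan Bean was born in Wheeler, Texas."
references = [
    "Alan Bean was born in Wheeler, Texas.",
    "Wheeler, Texas is the birthplace of Alan Bean.",
]

# BLEU: corpus-level n-gram precision with a brevity penalty.
# sacrebleu takes a list of hypotheses and one aligned list per reference set.
bleu = sacrebleu.corpus_bleu([hypothesis], [[r] for r in references])
print(f"BLEU   {bleu.score:.2f}")

# METEOR: unigram matching with stemming and WordNet synonyms.
# Recent NLTK versions expect pre-tokenized input.
meteor = meteor_score([r.split() for r in references], hypothesis.split())
print(f"METEOR {100 * meteor:.2f}")

# ROUGE-L: longest-common-subsequence overlap; keep the best reference score.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge = max(scorer.score(r, hypothesis)["rougeL"].fmeasure for r in references)
print(f"ROUGE  {100 * rouge:.2f}")
```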
Results
Performance of published models on this benchmark. Where a model name appears more than once, the rows correspond to different variants or configurations reported for that paper. Rows are sorted by BLEU; dashes mark metrics that were not reported.
Comparison Table
Model | BLEU | METEOR | ROUGE |
---|---|---|---|
GAP (graph-aware language model framework) | 66.20 | - | 76.36 |
JointGT (graph-text joint representation) | 66.14 | 47.25 | 75.91 |
JointGT (graph-text joint representation) | 65.92 | 47.15 | 76.10 |
GAP (graph-aware language model framework) | 65.92 | 47.15 | 76.10 |
GAP (graph-aware language model framework) | 64.60 | 46.77 | 75.74 |
JointGT (graph-text joint representation) | 64.55 | 46.51 | 75.13 |
JointGT (graph-text joint representation) | 64.42 | 46.58 | 74.77 |
KGPT (knowledge-grounded pre-training) | 64.11 | 46.30 | 74.57 |
GAP (graph-aware language model framework) | 62.30 | 44.33 | 73.00 |
KGPT (knowledge-grounded pre-training) | 62.30 | 44.33 | 73.00 |
Handling Rare Items in Data-to-Text | 61.00 | 42.00 | 71.00 |
GAP (graph-aware language model framework) | 60.80 | 42.76 | 71.13 |
GAP (graph-aware language model framework) | - | - | 76.22 |
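To reproduce or extend numbers like those above, the WebNLG 2.0 test set can be loaded in a few lines. The sketch below assumes the public `web_nlg` dataset on the Hugging Face Hub with the `release_v2` config; the split and field names follow its dataset card and are assumptions, not part of this leaderboard.

```python
# Minimal sketch of pulling WebNLG 2.0 test data via Hugging Face `datasets`.
# The "web_nlg" dataset id, "release_v2" config, split name, and field names
# are assumptions taken from the public dataset card.
from datasets import load_dataset

test = load_dataset("web_nlg", "release_v2", split="test")

example = test[0]
# Each entry pairs a set of RDF triples with one or more reference texts.
triples = example["modified_triple_sets"]["mtriple_set"][0]
references = example["lex"]["text"]
print("Triples:   ", triples)
print("References:", references)
```

Model outputs generated from `triples` can then be scored against `references` with the metric snippet shown in the Metrics section.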