KG-to-Text Generation on WebNLG (All)
Metrics
- BLEU
- METEOR
- chrF++
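For reference, the snippet below is a minimal sketch of how these three metrics are commonly computed, assuming the `sacrebleu` and `nltk` packages are installed. The example sentences are hypothetical and for illustration only; the benchmark's official evaluation scripts may differ in tokenization and parameters.

```python
import nltk
from nltk.translate.meteor_score import meteor_score
from sacrebleu.metrics import BLEU, CHRF

# METEOR relies on WordNet for stem/synonym matching.
nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

# Hypothetical system output and reference, for illustration only.
hypotheses = ["Alan Bean was a crew member of Apollo 12 ."]
# sacrebleu expects a list of reference streams: references[k][i] is the
# k-th reference for the i-th hypothesis.
references = [["Alan Bean served as a crew member on Apollo 12 ."]]

# Corpus-level BLEU with sacreBLEU's default tokenization.
bleu = BLEU().corpus_score(hypotheses, references)
print(f"BLEU:   {bleu.score:.2f}")

# chrF++ is chrF extended with word n-grams up to order 2 (word_order=2).
chrfpp = CHRF(word_order=2).corpus_score(hypotheses, references)
print(f"chrF++: {chrfpp.score:.2f}")

# NLTK's METEOR is computed per sentence on pre-tokenized text, then averaged.
meteor = sum(
    meteor_score([stream[i].split() for stream in references], hyp.split())
    for i, hyp in enumerate(hypotheses)
) / len(hypotheses)
print(f"METEOR: {meteor * 100:.2f}")
```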
Results
Performance of models reported on this benchmark; higher is better for all three metrics.
Comparison Table
| Model Name | BLEU | METEOR | chrF++ |
|---|---|---|---|
| investigating-pretrained-language-models-for | 54.72 | 42.23 | 72.29 |
| investigating-pretrained-language-models-for | 59.70 | 44.18 | 75.40 |