Table-to-Text Generation on WebNLG (All)
Metrics

- BLEU (higher is better)
- METEOR (higher is better)
- TER (lower is better)
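To make the metrics concrete, here is a minimal sketch of simplified versions of two of them in pure Python. The `bleu` function computes a sentence-level BLEU (geometric mean of clipped n-gram precisions with a brevity penalty, no smoothing), and `ter` computes a word-level edit distance normalized by reference length; real TER also allows block shifts, and leaderboard numbers are typically produced with standard corpus-level toolkits rather than code like this. Both function names are illustrative, not from the original benchmark.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions times a brevity penalty (no smoothing)."""
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each n-gram's count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        if clipped == 0:
            return 0.0
        log_prec += math.log(clipped / total) / max_n
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_prec)

def ter(hypothesis, reference):
    """Simplified TER: word-level Levenshtein edits divided by the
    reference length (full TER additionally counts block shifts)."""
    hyp, ref = hypothesis.split(), reference.split()
    d = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        prev, d[0] = d[0], i
        for j, r in enumerate(ref, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (h != r))
    return d[len(ref)] / max(len(ref), 1)
```

For example, `bleu(s, s)` is 1.0 for any sentence with at least four words, and `ter("a x c", "a b c")` is 1/3 (one substitution over three reference words).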
Results

Performance of models reported on this benchmark.
Comparison Table

| Model | BLEU | METEOR | TER |
|---|---|---|---|
| HTLM (Hyper-Text Pre-training and Prompting of Language Models) | 55.5 | 0.42 | 0.42 |
| HTLM (Hyper-Text Pre-training and Prompting of Language Models) | 55.6 | 0.42 | 0.40 |