Table-to-Text Generation on WebNLG (Unseen)
Evaluation metrics
BLEU
METEOR
TER
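For reference, below is a minimal sketch of how a system's outputs could be scored with these three metrics, assuming the `sacrebleu` and `nltk` packages are available. The leaderboard numbers come from each paper's own evaluation pipeline (typically the official METEOR and TER tools), so this sketch is illustrative and may not reproduce them exactly; `hyps` and `refs` are placeholder names for the generated texts and the WebNLG references.

```python
# Minimal scoring sketch (assumption: sacrebleu and nltk are installed,
# and the WordNet data for nltk has been fetched via nltk.download('wordnet')).
from sacrebleu.metrics import BLEU, TER
from nltk.translate.meteor_score import meteor_score


def score_outputs(hyps: list[str], refs: list[str]) -> dict[str, float]:
    """Corpus BLEU/TER via sacrebleu and sentence-averaged METEOR via nltk.

    Assumes one reference string per hypothesis; WebNLG data loading is omitted.
    """
    bleu = BLEU().corpus_score(hyps, [refs]).score          # 0-100 scale, as in the table
    ter = TER().corpus_score(hyps, [refs]).score / 100.0    # sacrebleu reports %, table uses 0-1
    # nltk >= 3.7 expects pre-tokenized inputs, so split on whitespace here.
    meteor = sum(
        meteor_score([r.split()], h.split()) for h, r in zip(hyps, refs)
    ) / len(hyps)
    return {"BLEU": bleu, "METEOR": meteor, "TER": ter}


if __name__ == "__main__":
    # Toy example with a single hypothesis/reference pair.
    hyps = ["alan bean was a crew member of apollo 12 ."]
    refs = ["alan bean served as a crew member on apollo 12 ."]
    print(score_outputs(hyps, refs))
```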
Evaluation results
Performance of each model on this benchmark:
| Model | BLEU ↑ | METEOR ↑ | TER ↓ | Paper | Repository |
|---|---|---|---|---|---|
| GPT-2-Large (fine-tuning) | 43.1 | 0.38 | 0.53 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models | - |
| HTLM (fine-tuning) | 48.4 | 0.39 | 0.51 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models | - |