Table-to-Text Generation on WebNLG (Seen)
Evaluation Metrics
BLEU
METEOR
TER
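The scores below can be reproduced, at least approximately, with standard open-source implementations of these metrics. The following is a minimal sketch, assuming sacrebleu and nltk are available; the example data is hypothetical, and the exact tokenization and multi-reference handling used in the original papers may differ, so scores will not match the table exactly.

```python
# Minimal sketch: scoring system outputs with BLEU, METEOR, and TER.
# Assumes `pip install sacrebleu nltk`; data below is illustrative only.
from sacrebleu.metrics import BLEU, TER
from nltk.translate.meteor_score import meteor_score
# import nltk; nltk.download("wordnet")  # required once for METEOR

# Hypothetical example: one system output and one reference per input.
hypotheses = ["Alan Bean was born in Wheeler, Texas."]
references = [["Alan Bean was born in Wheeler (Texas)."]]  # one list per reference set

# BLEU and TER via sacrebleu (recent versions report both on a 0-100 scale,
# whereas the table above gives TER as a fraction).
bleu = BLEU().corpus_score(hypotheses, references)
ter = TER().corpus_score(hypotheses, references)

# METEOR via nltk, averaged over segments (0-1 scale, as in the table).
# Recent nltk versions expect pre-tokenized input, hence the .split() calls.
meteor = sum(
    meteor_score([ref.split() for ref in refs], hyp.split())
    for hyp, refs in zip(hypotheses, zip(*references))
) / len(hypotheses)

print(f"BLEU {bleu.score:.1f}  METEOR {meteor:.2f}  TER {ter.score:.1f}")
```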
Evaluation Results
Performance of each model on this benchmark:
| Model | BLEU | METEOR | TER | Paper Title | Repository |
|---|---|---|---|---|---|
| HTLM (fine-tuning) | 65.4 | 0.46 | 0.33 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models | - |
| GPT-2-Large (fine-tuning) | 65.3 | 0.46 | 0.33 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models | - |