Table To Text Generation On Webnlg Unseen
Evaluation Metrics
BLEU
METEOR
TER
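These three metrics can be computed with off-the-shelf libraries. Below is a minimal sketch using the Hugging Face `evaluate` package (an assumption, not something specified by this leaderboard); `predictions` and `references` are illustrative placeholder lists of generated and reference strings. Note that TER is an error rate, so lower values are better, unlike BLEU and METEOR.

```python
# Minimal sketch (assumes the Hugging Face `evaluate` package is installed).
# Computes BLEU, METEOR, and TER for generated texts against single-reference
# targets, the metrics reported on the WebNLG (unseen) leaderboard.
import evaluate

# Illustrative placeholder data, not taken from the benchmark.
predictions = ["Alan Bean was born in Wheeler , Texas ."]
references = ["Alan Bean was born in Wheeler , Texas ."]

bleu = evaluate.load("sacrebleu")   # corpus-level BLEU (SacreBLEU)
meteor = evaluate.load("meteor")    # METEOR
ter = evaluate.load("ter")          # Translation Error Rate (lower is better)

# SacreBLEU and TER expect one or more references per prediction.
multi_refs = [[r] for r in references]

print("BLEU  :", bleu.compute(predictions=predictions, references=multi_refs)["score"])
print("METEOR:", meteor.compute(predictions=predictions, references=references)["meteor"])
print("TER   :", ter.compute(predictions=predictions, references=multi_refs)["score"])
```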
Evaluation Results
Performance of each model on this benchmark:
| Model | BLEU | METEOR | TER | Paper Title |
|---|---|---|---|---|
| HTLM (fine-tuning) | 48.4 | 0.39 | 0.51 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models |
| GPT-2-Large (fine-tuning) | 43.1 | 0.38 | 0.53 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models |