Table To Text Generation On E2E
Evaluation Metrics
BLEU
CIDEr
METEOR
NIST
ROUGE-L
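All five metrics compare generated text against reference text via n-gram overlap (BLEU, NIST, CIDEr), alignment (METEOR), or longest common subsequence (ROUGE-L). As a minimal sketch of the general idea, the following pure-Python function computes single-sentence BLEU (clipped n-gram precisions, geometric mean, brevity penalty, no smoothing); the E2E leaderboard's official evaluation scripts are more elaborate, so this is an illustration, not the benchmark's actual scorer.

```python
from collections import Counter
import math

def bleu(candidate, references, max_n=4):
    """Single-sentence BLEU sketch: clipped n-gram precision,
    geometric mean over n=1..max_n, and a brevity penalty."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n])
                              for i in range(len(cand) - n + 1))
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref = Counter()
        for ref in refs:
            ref_ngrams = Counter(tuple(ref[i:i + n])
                                 for i in range(len(ref) - n + 1))
            for g, c in ref_ngrams.items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0  # without smoothing, any zero precision zeroes the score
    log_p = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty against the reference length closest to the candidate's.
    ref_len = min((len(r) for r in refs), key=lambda l: abs(l - len(cand)))
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(log_p)

score = bleu("the vaults is a family friendly pub",
             ["the vaults is a family friendly pub"])
print(score)
```

An exact match yields a score of 1.0; partial overlap is penalized multiplicatively across n-gram orders, which is why BLEU drops sharply when longer n-grams fail to match.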
Evaluation Results
Performance of each model on this benchmark:
| Model | BLEU | CIDEr | METEOR | NIST | ROUGE-L | Paper Title | Repository |
|---|---|---|---|---|---|---|---|
| HTLM (fine-tuning) | 70.3 | 2.47 | 46.3 | 8.90 | 70.8 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models | - |
| GPT-2-Large (fine-tuning) | 68.5 | 2.45 | 46.0 | 8.78 | 69.9 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models | - |