Data-to-Text Generation on WebNLG (Full)
Evaluation metric
BLEU
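The leaderboard scores below are taken from the cited papers and are typically computed with the official tooling (e.g. multi-bleu or sacrebleu), not with this snippet. As a rough, stdlib-only illustration of what BLEU measures, here is a minimal sketch of sentence-level BLEU-4: clipped n-gram precision combined with a brevity penalty. Function names and tokenization are illustrative assumptions.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Minimal sentence BLEU with uniform 1/max_n weights.

    candidate: list of tokens; references: list of token lists.
    This is a sketch, not the official multi-bleu/sacrebleu scorer.
    """
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each candidate n-gram count by its maximum count
        # in any single reference.
        max_ref = Counter()
        for ref in references:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if clipped == 0:
            return 0.0  # zero precision at any order zeroes the geometric mean
        log_prec_sum += math.log(clipped / total)
    # Brevity penalty against the reference length closest to the candidate.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(log_prec_sum / max_n)
```

An exact match scores 1.0, and any deviation from the reference n-grams lowers the clipped precisions (and hence the score); the leaderboard reports BLEU scaled to 0-100 rather than 0-1.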
Results
Performance of each model on this benchmark:
| Model | BLEU | Paper Title |
|---|---|---|
| Control Prefixes (A1, T5-large) | 61.94 | Control Prefixes for Parameter-Efficient Text Generation |
| Transformer (Pipeline) | 51.68 | Neural data-to-text generation: A comparison between pipeline and end-to-end architectures |
| DATATUNER_NO_FC | 52.9 | Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation with Semantic Fidelity |
| T5-large | 59.70 | Investigating Pretrained Language Models for Graph-to-Text Generation |
| T5-Large | 57.1 | Text-to-Text Pre-Training for Data-to-Text Tasks |
| HTLM (prefix 0.1%) | 56.3 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models |
| Control Prefixes (A1, A2, T5-large) | 62.27 | Control Prefixes for Parameter-Efficient Text Generation |
| T5-large + Wiki + Position | 60.56 | Stage-wise Fine-tuning for Graph-to-Text Generation |