Code Generation On Django
Evaluation Metrics
Accuracy
BLEU Score
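The two metrics above can be sketched in plain Python. This is a minimal illustration, not the benchmark's official scorer: it assumes Accuracy means exact-match accuracy over tokenized code, and implements sentence-level BLEU (modified n-gram precision up to 4-grams with a brevity penalty) from scratch using only the standard library.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU with brevity penalty (single reference).

    A simplified sketch; production use should rely on a tested
    library implementation (e.g. nltk or sacrebleu).
    """
    precisions = []
    for n in range(1, max_n + 1):
        ref_counts = ngrams(reference, n)
        cand_counts = ngrams(candidate, n)
        # Modified precision: clip candidate counts by reference counts.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing in this sketch
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

def exact_match_accuracy(references, candidates):
    """Fraction of candidates that exactly match their reference."""
    return sum(r == c for r, c in zip(references, candidates)) / len(references)

ref = "if x is not None :".split()
print(sentence_bleu(ref, ref))  # identical output scores 1.0
```

A perfect prediction scores 1.0 on both metrics; BLEU additionally gives partial credit for near-miss outputs, which is why the two columns in the table below can diverge.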
Evaluation Results
Performance of each model on this benchmark.
Comparison Table
Model Name | Accuracy (%) | BLEU Score
---|---|---
leveraging-pre-trained-language-models-for-3 | 78.50 | 89.34
reranking-for-neural-semantic-parsing | 80.2 | -
mariancg-a-code-generation-transformer-model | 81.83 | 90.41
latent-predictor-networks-for-code-generation | 62.3 | 77.6
leveraging-pre-trained-language-models-for-3 | 77.95 | 88.91
tranx-a-transition-based-neural-abstract | 73.7 | -
leveraging-pre-trained-language-models-for-3 | 76.68 | 56.55
semantic-parsing-with-less-prior-and-more | 81.03 | -
the-impact-of-lexical-and-grammatical-1 | 81.03 | 79.86
latent-predictor-networks-for-code-generation | 31.5 | 47.6
leveraging-pre-trained-language-models-for-3 | 65.32 | 53.02