Code Generation On Django
Evaluation Metrics
Accuracy: exact-match accuracy, i.e. the fraction of examples whose generated code matches the reference snippet exactly.
BLEU Score: n-gram overlap between the generated code and the reference code.
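For reference, below is a minimal sketch of how these two metrics can be computed over (prediction, reference) pairs. It assumes whitespace-tokenized code and NLTK's corpus-level BLEU with smoothing; the helper names and the tokenization are illustrative choices, not the evaluation scripts used by the papers in the table below.

```python
# Sketch: exact-match accuracy and corpus BLEU for code generation.
# Assumes whitespace tokenization; the benchmark papers may use
# their own tokenizers and BLEU implementations.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def exact_match_accuracy(predictions, references):
    """Fraction of predictions that match their reference string exactly."""
    matches = sum(p == r for p, r in zip(predictions, references))
    return matches / len(references)

def code_bleu(predictions, references):
    """Corpus-level BLEU over whitespace-tokenized code strings."""
    hyps = [p.split() for p in predictions]
    refs = [[r.split()] for r in references]  # one reference per example
    smooth = SmoothingFunction().method1      # avoid zero scores on short snippets
    return corpus_bleu(refs, hyps, smoothing_function=smooth)

preds = ["if x is not None :", "raise ValueError ( msg )"]
golds = ["if x is not None :", "raise ValueError ( 'bad' )"]
print(f"Accuracy: {exact_match_accuracy(preds, golds):.2%}")
print(f"BLEU:     {code_bleu(preds, golds):.4f}")
```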
Evaluation Results
Performance of each model on this benchmark:
| Model | Accuracy | BLEU Score | Paper Title | Repository |
|---|---|---|---|---|
| LUKEMarian | 78.50 | 89.34 | Leveraging pre-trained language models for code generation | - |
| Reranker | 80.2 | - | Reranking for Neural Semantic Parsing | - |
| MarianCG | 81.83 | 90.41 | MarianCG: a code generation transformer model inspired by machine translation | - |
| LPN (Ling et al., 2016) | 62.3 | 77.6 | Latent Predictor Networks for Code Generation | - |
| RoBERTaMarian | 77.95 | 88.91 | Leveraging pre-trained language models for code generation | - |
| TranX | 73.7 | - | TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation | - |
| BERTMarian | 76.68 | 56.55 | Leveraging pre-trained language models for code generation | - |
| BERT + TAE | 81.03 | - | Code Generation from Natural Language with Less Prior and More Monolingual Data | - |
| TranX + BERT w/ mined | 81.03 | 79.86 | The impact of lexical and grammatical processing on generating code from natural language | - |
| Phrasal Statistical MT (Ling et al., 2016) | 31.5 | 47.6 | Latent Predictor Networks for Code Generation | - |
| ELECTRAMarian | 65.32 | 53.02 | Leveraging pre-trained language models for code generation | - |