Code Documentation Generation On 5
Evaluation Metric
Smoothed BLEU-4
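Smoothed BLEU-4 is sentence-level BLEU over 1- to 4-grams with smoothing applied so that candidates lacking higher-order n-gram matches do not score exactly zero. As a minimal sketch, the variant below uses add-one smoothing on the modified n-gram precisions; the benchmark's exact smoothing method may differ, and the example docstrings are hypothetical.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def smoothed_bleu4(reference, candidate):
    """Sentence-level BLEU-4 with add-one smoothing on each
    modified n-gram precision (one common smoothing variant)."""
    precisions = []
    for n in range(1, 5):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        total = max(1, sum(cand.values()))
        # add-one smoothing keeps a zero-match order from zeroing the score
        precisions.append((overlap + 1) / (total + 1))
    log_avg = sum(math.log(p) for p in precisions) / 4
    # brevity penalty for candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_avg)

# Hypothetical generated docstring vs. reference docstring.
ref = "returns the sum of two numbers".split()
cand = "return the sum of two integers".split()
print(round(100 * smoothed_bleu4(ref, cand), 2))  # → 61.48
```

Scores in the table below are reported on this 0–100 scale (BLEU × 100).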
Evaluation Results
Performance of each model on this benchmark:
| Model | Smoothed BLEU-4 | Paper Title | Repository |
|---|---|---|---|
| seq2seq | 6.88 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| Transformer | 25.61 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| pre-train w/ code only | 8.3 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| CodeTrans-TF-Large | 18.98 | CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing | |
| CodeBERT (MLM+RTD) | 9.54 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| CodeBERT (RTD) | 8.73 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| CodeBERT (MLM) | 8.51 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| RoBERTa | 5.72 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |