Code Documentation Generation on 4
Metrics
Smoothed BLEU-4
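Smoothed BLEU-4 measures n-gram overlap (up to 4-grams) between a generated description and a reference description, applying smoothing so that short outputs with no higher-order n-gram matches still receive a non-zero score. As a rough illustration, the sketch below uses NLTK's `sentence_bleu` with `SmoothingFunction.method4`; note that the papers listed on this leaderboard ship their own smoothing scripts, so this is a demonstration of the metric's idea rather than the exact evaluation code behind the reported numbers.

```python
# Minimal sketch of sentence-level smoothed BLEU-4, assuming NLTK is installed.
# The smoothing method (method4) is an illustrative choice, not necessarily
# the one used by the papers on this leaderboard.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def smoothed_bleu4(reference: str, hypothesis: str) -> float:
    """BLEU-4 on whitespace tokens, smoothed to avoid zero scores on short texts."""
    ref_tokens = reference.split()
    hyp_tokens = hypothesis.split()
    smoother = SmoothingFunction().method4
    return float(
        sentence_bleu(
            [ref_tokens],                       # list of reference token lists
            hyp_tokens,
            weights=(0.25, 0.25, 0.25, 0.25),   # uniform weights over 1- to 4-grams
            smoothing_function=smoother,
        )
    )


# Example: score a generated docstring against a reference one.
ref = "returns the sum of two integers"
hyp = "return the sum of two numbers"
print(f"Smoothed BLEU-4: {smoothed_bleu4(ref, hyp):.4f}")
```

Because each paper computes the metric with its own pipeline (tokenization and smoothing details differ), scores produced by this sketch are not directly comparable to the leaderboard values below.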
Results
Performance results of various models on this benchmark
| Model name | Smoothed BLEU-4 | Paper Title | Repository |
| --- | --- | --- | --- |
| CodeTrans-MT-Base | 15.26 | CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing | |
| CodeBERT (MLM+RTD) | 8.46 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| CodeBERT (MLM) | 7.95 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| Transformer | 7.87 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| pre-train w/ code only | 7.36 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| RoBERTa | 7.26 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| seq2seq | 6.96 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |