Code Documentation Generation On 4
Metrics
Smoothed BLEU-4
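Smoothed BLEU-4 is sentence-level BLEU over 1- to 4-grams with smoothing applied to the higher-order precisions so that a single zero match does not zero out the whole score. A minimal sketch in pure Python is shown below, using Lin & Och style add-one smoothing; this is an illustration of the metric, not the exact evaluation script used by the papers in the table (function and variable names are my own):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def smoothed_bleu4(candidate: str, reference: str) -> float:
    """Sentence-level BLEU-4 with add-one smoothing on n-gram precisions
    for n >= 2 (a sketch; benchmark scripts may differ in details)."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped matches: each candidate n-gram counts at most as often
        # as it appears in the reference.
        match = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if n == 1:
            p = match / total
        else:
            p = (match + 1) / (total + 1)  # add-one smoothing
        precisions.append(max(p, 1e-9))  # guard against log(0)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

With smoothing, two candidates that both miss all 4-grams can still be ranked by their lower-order overlap, which matters for short documentation strings where exact 4-gram matches are rare.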
Results
Performance of various models on this benchmark
| Model name | Smoothed BLEU-4 | Paper Title | Repository |
|---|---|---|---|
| CodeBERT (MLM+RTD) | 8.46 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| RoBERTa | 7.26 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| CodeBERT (MLM) | 7.95 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| Transformer | 7.87 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| seq2seq | 6.96 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| CodeTrans-MT-Base | 15.26 | CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing | |
| pre-train w/ code only | 7.36 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |