Source Code Summarization on Parallelcorpus
Metrics
BLEU-4
METEOR
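BLEU-4 scores a candidate summary by its 1- to 4-gram overlap with a reference, combined as a geometric mean and scaled by a brevity penalty. A minimal pure-Python sketch of sentence-level BLEU-4 (with simple add-one smoothing; the exact smoothing and tokenization used on this benchmark are not specified here):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu4(reference, candidate):
    """Sentence-level BLEU-4 with add-one smoothed n-gram precisions."""
    precisions = []
    for n in range(1, 5):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clipped overlap: each candidate n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Add-one smoothing keeps a missing higher-order n-gram from
        # zeroing the whole score.
        precisions.append((overlap + 1) / (total + 1))
    geometric_mean = math.exp(sum(math.log(p) for p in precisions) / 4)
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(
        1 - len(reference) / max(len(candidate), 1))
    return bp * geometric_mean

# Hypothetical code-summary pair for illustration.
ref = "returns the maximum of two numbers".split()
cand = "returns the maximum value of two numbers".split()
print(f"BLEU-4: {bleu4(ref, cand) * 100:.2f}")
```

METEOR additionally aligns candidate and reference tokens via stemming and synonym matching, which typically requires an external resource such as WordNet, so it is not sketched here.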
Results
Performance results of various models on this benchmark
| Model name | BLEU-4 | METEOR | Paper Title | Repository |
|---|---|---|---|---|
| AdaMo-basic | 33.85 | 21.68% | Assemble Foundation Models for Automatic Code Summarization | |
| AdaMo-noise | 34.05 | 21.92% | Assemble Foundation Models for Automatic Code Summarization | |