Source Code Summarization on ParallelCorpus
Metrics
BLEU-4
METEOR
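The scores below are reported with these two standard summarization metrics. As a rough illustration of what they measure, the following is a minimal sketch of sentence-level BLEU-4 and METEOR scoring using NLTK; the example strings, tokenization, and smoothing choice are assumptions for illustration and may differ from the benchmark's actual (corpus-level) evaluation setup.

```python
# Minimal sketch: sentence-level BLEU-4 and METEOR for one generated code summary.
# Assumes NLTK is installed and the WordNet data has been downloaded
# (nltk.download("wordnet")); the benchmark's official scoring may differ.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score

# Hypothetical reference summary and model output, whitespace-tokenized.
reference = "returns the maximum value in the given list".split()
candidate = "return the max value of a list".split()

# BLEU-4: geometric mean of 1- to 4-gram precisions with a brevity penalty;
# smoothing keeps short summaries from collapsing to zero.
bleu4 = sentence_bleu(
    [reference], candidate,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method4,
)

# METEOR: unigram matching with stemming and synonymy, plus a fragmentation penalty.
meteor = meteor_score([reference], candidate)

print(f"BLEU-4: {bleu4:.4f}  METEOR: {meteor:.4f}")
```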
Results
Reported performance of models on this benchmark, measured by BLEU-4 and METEOR.
Comparison Table
| Model Name | BLEU-4 | METEOR |
|---|---|---|
| assemble-foundation-models-for-automatic-code | 33.85 | 21.68% |
| assemble-foundation-models-for-automatic-code | 34.05 | 21.92% |