Code Generation on RES-Q
Metrics
pass@1
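pass@1 is the probability that a single generated sample passes the benchmark's checks. It is commonly computed with the unbiased pass@k estimator; the sketch below is an illustrative implementation of that estimator (the function name and arguments are our own, not from this page), where pass@1 reduces to the fraction of passing samples.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total samples generated per task
    c: number of samples that pass the tests
    k: evaluation budget
    Returns the probability that at least one of k samples passes.
    """
    if n - c < k:
        # Fewer failing samples than the budget: some draw must pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this is just c / n, e.g. 3 passing out of 10 samples:
print(pass_at_k(10, 3, 1))  # ≈ 0.3, i.e. a 30.0 pass@1 score
```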
Results
Performance of various models on the RES-Q benchmark, measured by pass@1.
Comparison Table
| Model Name | pass@1 |
|---|---|
| res-q-evaluating-code-editing-large-language | 30.0 |
| res-q-evaluating-code-editing-large-language | 58.0 |
| res-q-evaluating-code-editing-large-language | 20.0 |
| res-q-evaluating-code-editing-large-language | 18.0 |
| res-q-evaluating-code-editing-large-language | 30.0 |
| res-q-evaluating-code-editing-large-language | 36.0 |
| res-q-evaluating-code-editing-large-language | 46.0 |
| res-q-evaluating-code-editing-large-language | 29.0 |
| res-q-evaluating-code-editing-large-language | 37.0 |