Multiple Choice Question Answering (MCQA) On 29
Metrics
Accuracy
Results
Accuracy results of various models on this benchmark. Rows are identified by the paper each model was reported in; a paper may appear multiple times when it reports several models or configurations.
Comparison Table
| Model (source paper) | Accuracy (%) |
|---|---|
| palm-2-technical-report-1 | 91.2 |
| palm-2-technical-report-1 | 68.8 |
| bloomberggpt-a-large-language-model-for | 62.4 |
| training-compute-optimal-large-language | 52.6 |
| scaling-language-models-methods-analysis-1 | 51.1 |
| bloomberggpt-a-large-language-model-for | 50 |
| bloomberggpt-a-large-language-model-for | 45.2 |
| bloomberggpt-a-large-language-model-for | 42 |
| bloomberggpt-a-large-language-model-for | 42 |