Multiple Choice Question Answering (MCQA) on 26
Metrics
Accuracy
Results
Performance of various models on this benchmark:
| Model Name | Accuracy | Paper Title | Repository |
|---|---|---|---|
| Med-PaLM (CoT + SC) | 81.5 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Med-PaLM 2 (5-shot) | 80.9 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Med-PaLM (ER) | 83.2 | Towards Expert-Level Medical Question Answering with Large Language Models | |
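The accuracy metric above can be computed as the fraction of questions whose predicted option matches the gold option. A minimal sketch (the function name and example answers are illustrative, not taken from the benchmark):

```python
def mcqa_accuracy(predictions, gold):
    """Fraction of questions where the predicted option letter matches the gold option."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold answers must have the same length")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Illustrative example with four questions, three answered correctly.
preds = ["B", "C", "A", "D"]
gold = ["B", "C", "D", "D"]
print(f"{mcqa_accuracy(preds, gold):.1%}")  # → 75.0%
```

Leaderboard values such as 81.5 are this ratio expressed as a percentage over the full test set.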