Multiple Choice Question Answering (MCQA) on 25
Metrics
Accuracy
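Accuracy here is the fraction of questions for which a model's selected option matches the gold answer, reported as a percentage. A minimal sketch of the computation (the names `mcqa_accuracy`, `predictions`, and `answers` are illustrative, not from the source):

```python
def mcqa_accuracy(predictions, answers):
    """Fraction of questions where the predicted option matches the gold answer."""
    if len(predictions) != len(answers):
        raise ValueError("predictions and answers must have the same length")
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Example: 3 of 4 predictions match -> 0.75, reported as 75.0 on the leaderboard.
print(mcqa_accuracy(["A", "C", "B", "D"], ["A", "C", "B", "A"]) * 100)  # 75.0
```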
Results
Accuracy results of various models on this benchmark.
| Model | Accuracy (%) | Paper Title |
|---|---|---|
| Med-PaLM 2 (5-shot) | 95.2 | Towards Expert-Level Medical Question Answering with Large Language Models |
| Med-PaLM 2 (CoT + SC) | 93.4 | Towards Expert-Level Medical Question Answering with Large Language Models |
| Med-PaLM 2 (ER) | 92.3 | Towards Expert-Level Medical Question Answering with Large Language Models |
| BioMedGPT-LM-7B | 51.1 | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine |
| Llama2-7B | 43.38 | Llama 2: Open Foundation and Fine-Tuned Chat Models |
| Llama2-7B-chat | 40.07 | Llama 2: Open Foundation and Fine-Tuned Chat Models |