Multiple Choice Question Answering (MCQA) on 25
Evaluation Metric
Accuracy
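
Accuracy here is the percentage of questions for which the model's chosen option matches the gold answer. A minimal sketch of how such a score is computed, assuming predictions and gold labels are encoded as option letters (the function name and encoding are illustrative, not taken from any of the listed papers):

```python
from typing import Sequence

def mcqa_accuracy(predictions: Sequence[str], answers: Sequence[str]) -> float:
    """Return the percentage of questions where the predicted option
    matches the gold option (e.g. "A", "B", "C", "D")."""
    if len(predictions) != len(answers):
        raise ValueError("predictions and answers must be the same length")
    correct = sum(p == a for p, a in zip(predictions, answers))
    return 100.0 * correct / len(answers)

# Example: 3 of 4 predicted options match the gold answers -> 75.0
print(mcqa_accuracy(["A", "C", "B", "D"], ["A", "C", "B", "A"]))
```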
Evaluation Results
Performance of each model on this benchmark:
Model | Accuracy | Paper Title | Repository |
---|---|---|---|
Med-PaLM 2 (5-shot) | 95.2 | Towards Expert-Level Medical Question Answering with Large Language Models | |
Med-PaLM 2 (CoT + SC) | 93.4 | Towards Expert-Level Medical Question Answering with Large Language Models | |
Med-PaLM 2 (ER) | 92.3 | Towards Expert-Level Medical Question Answering with Large Language Models | |
BioMedGPT-LM-7B | 51.1 | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine | |
Llama2-7B | 43.38 | Llama 2: Open Foundation and Fine-Tuned Chat Models | |
Llama2-7B-chat | 40.07 | Llama 2: Open Foundation and Fine-Tuned Chat Models | |