Multiple Choice Question Answering (MCQA) on 24
Evaluation Metric
Accuracy
Evaluation Results
Performance of each model on this benchmark:
| Model | Accuracy | Paper Title | Repository |
|---|---|---|---|
| Med-PaLM 2 (CoT + SC) | 80.0 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Med-PaLM 2 (ER) | 84.4 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Med-PaLM 2 (5-shot) | 77.8 | Towards Expert-Level Medical Question Answering with Large Language Models | |
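
For reference, the accuracy values above are simply the fraction of questions where the model's chosen option matches the gold answer. A minimal sketch of that computation is shown below; the function name `mcqa_accuracy` and the example predictions are illustrative, not part of the benchmark's official evaluation code.

```python
from typing import Sequence

def mcqa_accuracy(predictions: Sequence[str], answers: Sequence[str]) -> float:
    """Fraction of questions whose predicted option letter matches the gold answer."""
    if len(predictions) != len(answers):
        raise ValueError("predictions and answers must have the same length")
    correct = sum(p.strip().upper() == a.strip().upper()
                  for p, a in zip(predictions, answers))
    return correct / len(answers)

# Hypothetical example: three questions with options labelled A-D.
preds = ["B", "C", "A"]
gold  = ["B", "D", "A"]
print(f"Accuracy: {mcqa_accuracy(preds, gold) * 100:.1f}")  # -> 66.7
```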