Multiple Choice Question Answering (MCQA) on 23
Metrics
Accuracy
Results
Performance results of various models on this benchmark
| Model | Accuracy | Paper Title |
|---|---|---|
| Med-PaLM 2 (ER) | 88.7 | Towards Expert-Level Medical Question Answering with Large Language Models |
| Med-PaLM 2 (CoT + SC) | 88.3 | Towards Expert-Level Medical Question Answering with Large Language Models |
| Med-PaLM 2 (5-shot) | 88.3 | Towards Expert-Level Medical Question Answering with Large Language Models |