Multiple Choice Question Answering (MCQA) on 24
Evaluation Metric
Accuracy
Evaluation Results
Performance results for each model on this benchmark
| Model | Accuracy (%) | Paper Title |
|---|---|---|
| Med-PaLM 2 (ER) | 84.4 | Towards Expert-Level Medical Question Answering with Large Language Models |
| Med-PaLM 2 (CoT + SC) | 80.0 | Towards Expert-Level Medical Question Answering with Large Language Models |
| Med-PaLM 2 (5-shot) | 77.8 | Towards Expert-Level Medical Question Answering with Large Language Models |
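The accuracy values above are percentages of questions answered correctly. A minimal sketch of how such an MCQA accuracy score can be computed is shown below; the record format (gold choice letter, predicted choice letter) is an assumption for illustration, not the benchmark's official evaluation code.

```python
# Minimal sketch: Accuracy for a multiple-choice QA benchmark.
# Assumes gold and predicted answers are choice labels such as "A"-"D".
from typing import Sequence


def mcqa_accuracy(gold: Sequence[str], predicted: Sequence[str]) -> float:
    """Return the percentage of questions whose predicted choice matches the gold choice."""
    if len(gold) != len(predicted):
        raise ValueError("gold and predicted must have the same length")
    correct = sum(g == p for g, p in zip(gold, predicted))
    return 100.0 * correct / len(gold)


# Example: 4 of 5 answers correct -> 80.0, reported as a percentage as in the table above.
print(mcqa_accuracy(["A", "C", "B", "D", "A"], ["A", "C", "B", "D", "B"]))
```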