Multiple Choice Question Answering (MCQA) on 11
Metrics
Accuracy
Results
Performance results of various models on this benchmark
| Model | Accuracy | Paper Title |
|---|---|---|
| Med-PaLM 2 (ER) | 95.8 | Towards Expert-Level Medical Question Answering with Large Language Models |
| Med-PaLM 2 (CoT + SC) | 95.1 | Towards Expert-Level Medical Question Answering with Large Language Models |
| Med-PaLM 2 (5-shot) | 94.4 | Towards Expert-Level Medical Question Answering with Large Language Models |
| Chinchilla (few-shot, k=5) | 79.9 | Galactica: A Large Language Model for Science |
| Gopher (few-shot, k=5) | 70.8 | Galactica: A Large Language Model for Science |
| GAL 120B (zero-shot) | 68.8 | Galactica: A Large Language Model for Science |
| OPT (few-shot, k=5) | 30.6 | Galactica: A Large Language Model for Science |
| BLOOM (few-shot, k=5) | 28.5 | Galactica: A Large Language Model for Science |
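
The accuracy figures above are percentages of questions answered correctly. As a minimal sketch of that scoring (the `predictions` and `answers` inputs below are hypothetical, not drawn from any of the cited evaluations), MCQA accuracy reduces to comparing each model's selected option against the gold label:

```python
# Minimal sketch: accuracy for multiple-choice QA is the fraction of
# questions where the model's chosen option matches the gold answer,
# reported as a percentage. Inputs here are hypothetical examples.

def mcqa_accuracy(predictions: list[str], answers: list[str]) -> float:
    """Percentage of questions where the predicted option matches the gold one."""
    if len(predictions) != len(answers):
        raise ValueError("predictions and answers must have the same length")
    correct = sum(p == a for p, a in zip(predictions, answers))
    return 100.0 * correct / len(answers)

# Example: 3 of 4 predicted options match the gold labels -> 75.0
print(mcqa_accuracy(["A", "C", "B", "D"], ["A", "C", "B", "A"]))
```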