Question Answering on BioASQ
Metrics
Accuracy
Results
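For reference, accuracy on this benchmark is the fraction of questions whose predicted answer exactly matches the gold answer. A minimal sketch (the function name and example data are illustrative, not from the benchmark):

```python
def accuracy(predictions, gold):
    """Return the fraction of exact matches between predictions and gold labels."""
    assert len(predictions) == len(gold), "prediction/gold length mismatch"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical yes/no answers: 3 of 4 match, so accuracy is 0.75.
preds = ["yes", "no", "yes", "no"]
golds = ["yes", "no", "no", "no"]
print(accuracy(preds, golds))  # 0.75
```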
Performance results of various models on this benchmark.

Model | Accuracy | Paper Title | Repository |
---|---|---|---|
BioLinkBERT (base) | 91.4 | LinkBERT: Pretraining Language Models with Document Links | |
GAL 120B (zero-shot) | 94.3 | Galactica: A Large Language Model for Science | |
PubMedBERT uncased | 87.56 | Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing | |
BioLinkBERT (large) | 94.8 | LinkBERT: Pretraining Language Models with Document Links | |
BLOOM (zero-shot) | 91.4 | Galactica: A Large Language Model for Science | |
GPT-4 | 85.71 | Evaluation of large language model performance on the Biomedical Language Understanding and Reasoning Benchmark | - |
OPT (zero-shot) | 81.4 | Galactica: A Large Language Model for Science | |