Question Answering on BioASQ
Metrics
Accuracy
Results
Performance of various models on this benchmark, measured by accuracy.
Comparison Table
| Model Name | Accuracy (%) |
|---|---|
| linkbert-pretraining-language-models-with | 91.4 |
| galactica-a-large-language-model-for-science-1 | 94.3 |
| domain-specific-language-model-pretraining | 87.56 |
| linkbert-pretraining-language-models-with | 94.8 |
| galactica-a-large-language-model-for-science-1 | 91.4 |
| evaluation-of-large-language-model | 85.71 |
| galactica-a-large-language-model-for-science-1 | 81.4 |