Relation Extraction On Ddi
Metrics
Micro F1
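Micro F1 aggregates true positives, false positives, and false negatives across all relation classes before computing precision and recall. A minimal sketch of this computation, assuming the common DDI convention of excluding the null ("no relation") class from scoring (the label names here are illustrative, not taken from the benchmark):

```python
def micro_f1(gold, pred, exclude=("no_relation",)):
    # Aggregate TP/FP/FN globally over all relation classes,
    # skipping the null class as is common for DDI-style scoring.
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        if p not in exclude:
            if p == g:
                tp += 1      # predicted a relation and got it right
            else:
                fp += 1      # predicted a relation, but the wrong one (or none existed)
        if g not in exclude and p != g:
            fn += 1          # a true relation was missed or mislabeled
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because counts are pooled globally rather than averaged per class, frequent relation types weigh more heavily than rare ones, which is the intended behavior of the micro-averaged score.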
Results
Performance results of various models on this benchmark
Model name | Micro F1 | Paper Title | Repository |
---|---|---|---|
PubMedBERT uncased | 82.36 | Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing | |
KeBioLM | - | Improving Biomedical Pretrained Language Models with Knowledge | |
BioLinkBERT (large) | 83.35 | LinkBERT: Pretraining Language Models with Document Links | |