Sentence Embeddings For Biomedical Texts On 4
Metrics
F1
Precision
Recall
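
For concreteness, below is a minimal sketch of how these three metrics are computed for a binary similar/dissimilar pair classifier of the kind evaluated on this benchmark; the gold labels and predictions are hypothetical.

```python
# Minimal sketch: precision, recall, and F1 for a binary
# similar/dissimilar sentence-pair classifier.
def precision_recall_f1(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical gold labels and model predictions (1 = outcomes are similar).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(f"Precision: {p:.2%}  Recall: {r:.2%}  F1: {f1:.2%}")
```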
Results
Performance results of the different models on this benchmark, sorted by F1:
Model name | F1 | Precision | Recall | Paper title | Repository
---|---|---|---|---|---
BioBERT (pre-trained on PubMed abstracts + PMC, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") | 93.38 | 92.98 | 93.85 | Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations | -
SciBERT uncased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") | 91.51 | 91.30 | 91.79 | Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations | -
SciBERT cased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") | 90.69 | 89.00 | 92.54 | Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations | -
BERT-Base uncased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") | 89.16 | 89.31 | 89.12 | Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations | -
BERT-Base cased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") | 89.12 | 88.25 | 90.10 | Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations | -
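
The listed models presumably score pairs of clinical trial outcome descriptions via sequence-pair classification on a fine-tuned BERT-style encoder. Below is a minimal sketch of that setup using Hugging Face transformers; the dmis-lab/biobert-base-cased-v1.1 checkpoint, the 2-label head, and the example outcomes are illustrative assumptions, not the paper's exact configuration, and the scores are only meaningful after fine-tuning on the annotated corpus.

```python
# Minimal sketch (not the paper's exact setup): scoring a pair of clinical
# trial outcome descriptions with a BioBERT-based sequence-pair classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed base checkpoint; the leaderboard models were additionally
# fine-tuned on the annotated outcome-similarity corpus.
model_name = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=2 adds a freshly initialized similar/dissimilar head.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

outcome_a = "Change in systolic blood pressure at 12 weeks"
outcome_b = "Systolic BP reduction after three months"

# Encode the two outcomes as a single [CLS] a [SEP] b [SEP] input pair.
inputs = tokenizer(outcome_a, outcome_b, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(f"P(similar) = {probs[0, 1].item():.3f}")  # meaningful only after fine-tuning
```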