Sentence Embeddings For Biomedical Texts On 3
Evaluation Metrics
F1
Precision
Recall
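For reference, F1 is the harmonic mean of precision and recall. The minimal Python sketch below shows the relationship; recomputing F1 from the rounded table entries gives values slightly different from the reported ones, since the published scores were presumably computed from unrounded precision and recall.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Recomputing from the rounded table entries for BioBERT
# (P=88.93, R=90.76) gives ~89.84 vs. the reported 89.75;
# the small gap comes from rounding in the published numbers.
print(round(f1_score(88.93, 90.76), 2))
```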
Evaluation Results
Performance of each model on this benchmark:
Model Name | F1 (%) | Precision (%) | Recall (%) | Paper Title | Repository |
---|---|---|---|---|---|
BERT-Base uncased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") | 86.8 | 85.76 | 88.15 | Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations | - |
BioBERT (pre-trained on PubMed abstracts + PMC, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") | 89.75 | 88.93 | 90.76 | Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations | - |
SciBERT cased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") | 89.3 | 87.31 | 91.53 | Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations | - |
BERT-Base cased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") | 84.21 | 83.36 | 85.2 | Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations | - |
SciBERT uncased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") | 89.3 | 87.99 | 90.78 | Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations | - |
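All of the entries above fine-tune a BERT-family encoder on sentence pairs from the clinical-trial-outcomes corpus. The sketch below shows how one might load such a model for sentence-pair classification with the Hugging Face transformers API; the checkpoint name `dmis-lab/biobert-base-cased-v1.1` and the binary similar/not-similar label set are assumptions for illustration, not details taken from the paper.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint; the paper's exact BioBERT weights may differ.
MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # assumed: similar / not similar
)

# Sentence-pair input: the tokenizer joins the two outcomes with [SEP].
inputs = tokenizer(
    "Change in systolic blood pressure",
    "Reduction of systolic BP from baseline",
    return_tensors="pt",
    truncation=True,
)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # pair-similarity probabilities
```

After fine-tuning, precision, recall, and F1 as reported in the table would be computed from these pairwise predictions against the gold similarity labels.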