
Relation Extraction On Chemprot

Evaluation Metric

Micro F1
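
Micro F1 pools true positives, false positives, and false negatives across all relation classes before computing precision and recall, so frequent classes weigh more heavily than rare ones. Below is a minimal Python sketch of that computation, assuming ChemProt's usual convention of scoring only the five evaluated CPR relation classes and treating "no relation" as negative; the `POSITIVE_LABELS` set and `micro_f1` helper are illustrative, not the benchmark's official scorer.

```python
# Minimal sketch of micro-averaged F1 for relation extraction.
# Assumption: gold/pred are parallel lists of relation labels, and only
# the five positive CPR classes are scored (common for ChemProt);
# "false" (no relation) is treated as a negative label.

POSITIVE_LABELS = {"CPR:3", "CPR:4", "CPR:5", "CPR:6", "CPR:9"}  # assumed label set

def micro_f1(gold, pred):
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        if p in POSITIVE_LABELS and p == g:
            tp += 1          # correctly predicted positive relation
        else:
            if p in POSITIVE_LABELS:
                fp += 1      # predicted a positive relation that is wrong
            if g in POSITIVE_LABELS:
                fn += 1      # missed a gold positive relation
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: 2 TP; one wrong class counts as both FP and FN; one spurious prediction is FP only.
gold = ["CPR:3", "CPR:4", "CPR:9", "false"]
pred = ["CPR:3", "CPR:4", "CPR:4", "CPR:5"]
print(round(micro_f1(gold, pred), 4))  # -> 0.5714
```

Note that a misclassified relation (gold CPR:9 predicted as CPR:4) penalizes both precision and recall, which is why micro F1 on ChemProt is typically lower than plain accuracy over detected pairs.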

Evaluation Results

Performance of each model on this benchmark

| Model Name | Micro F1 | Paper Title |
| --- | --- | --- |
| CharacterBERT (base, medical) | 73.44 | CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters |
| BioM-BERT | - | BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA |
| SciBERT (Finetune) | - | SciBERT: A Pretrained Language Model for Scientific Text |
| SciBERT (Base Vocab) | - | SciBERT: A Pretrained Language Model for Scientific Text |
| ELECTRAMed | - | ELECTRAMed: a new pre-trained language representation model for biomedical NLP |
| PubMedBERT uncased | 77.24 | Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing |
| SciFive Large | - | SciFive: a text-to-text transformer model for biomedical literature |
| BioLinkBERT (large) | 79.98 | LinkBERT: Pretraining Language Models with Document Links |
| KeBioLM | - | Improving Biomedical Pretrained Language Models with Knowledge |
| BioMegatron | - | BioMegatron: Larger Biomedical Domain Language Model |
| BioT5X (base) | - | SciFive: a text-to-text transformer model for biomedical literature |
| BioBERT | - | BioBERT: a pre-trained biomedical language representation model for biomedical text mining |
| NCBI_BERT(large) (P) | - | - |