
Relation Extraction On Chemprot

Evaluation Metric

Micro F1
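
Micro F1 pools true positives, false positives, and false negatives across all relation classes before computing precision and recall, so frequent classes weigh more than rare ones. A minimal sketch of the computation using scikit-learn; the label arrays are hypothetical placeholders, not actual ChemProt predictions:

```python
# Minimal micro-F1 sketch; y_true / y_pred are hypothetical examples.
from sklearn.metrics import f1_score

# Gold and predicted relation labels for five example chemical-protein pairs,
# using ChemProt-style CPR group names.
y_true = ["CPR:3", "CPR:4", "CPR:4", "CPR:5", "CPR:6"]
y_pred = ["CPR:3", "CPR:4", "CPR:5", "CPR:5", "CPR:6"]

# Micro-averaging aggregates TP/FP/FN over all classes before computing F1.
print(f1_score(y_true, y_pred, average="micro"))  # 0.8
```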

Evaluation Results

Performance results of each model on this benchmark

| Model Name | Micro F1 | Paper Title | Repository |
| --- | --- | --- | --- |
| CharacterBERT (base, medical) | 73.44 | CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters | - |
| BioM-BERT | - | BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA | - |
| SciBert (Finetune) | - | SciBERT: A Pretrained Language Model for Scientific Text | - |
| SciBERT (Base Vocab) | - | SciBERT: A Pretrained Language Model for Scientific Text | - |
| ELECTRAMed | - | ELECTRAMed: a new pre-trained language representation model for biomedical NLP | - |
| PubMedBERT uncased | 77.24 | Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing | - |
| SciFive Large | - | SciFive: a text-to-text transformer model for biomedical literature | - |
| BioLinkBERT (large) | 79.98 | LinkBERT: Pretraining Language Models with Document Links | - |
| KeBioLM | - | Improving Biomedical Pretrained Language Models with Knowledge | - |
| BioMegatron | - | BioMegatron: Larger Biomedical Domain Language Model | - |
| BioT5X (base) | - | SciFive: a text-to-text transformer model for biomedical literature | - |
| BioBERT | - | BioBERT: a pre-trained biomedical language representation model for biomedical text mining | - |
| NCBI_BERT(large) (P) | - | Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets | - |
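
The entries above are pretrained biomedical language models fine-tuned for sentence-level relation classification on ChemProt. A hedged sketch of such a setup with Hugging Face Transformers; the checkpoint name, entity-marker format, and label count are assumptions for illustration, not details taken from this page:

```python
# Hypothetical ChemProt fine-tuning setup; checkpoint ID, marker tokens,
# and num_labels are assumptions, not taken from the leaderboard.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"  # assumed ID
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=6,  # e.g. five evaluated CPR groups plus a "no relation" class
)

# ChemProt instances are commonly encoded with the chemical and protein
# mentions replaced by placeholder tokens before classification.
text = "@CHEMICAL$ inhibits the activity of @GENE$ in vitro."
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted relation class index
```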