Relation Classification
Relation Classification On Tacred 1
Evaluation Metric
F1
Evaluation Results
Performance of each model on this benchmark
| Model Name | F1 | Paper Title |
| --- | --- | --- |
| BERT | 66.0 | ERNIE: Enhanced Language Representation with Informative Entities |
| TANL (multi-task) | 61.9 | Structured Prediction as Translation between Augmented Natural Languages |
| LUKE 483M | 72.7 | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention |
| DeepEx (zero-shot, top-10) | 76.4 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation |
| KnowBERT | 71.5 | Knowledge Enhanced Contextual Word Representations |
| DeepEx (zero-shot, top-1) | 49.2 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation |
| MTB (Baldini Soares et al., 2019) | 71.5 | Matching the Blanks: Distributional Similarity for Relation Learning |
| DeepStruct (zero-shot) | 36.1 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| SpanBERT | 70.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| RoBERTa | 71.3 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
| DeepStruct (multi-task, w/ finetune) | 76.8 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| ERNIE | 68.0 | ERNIE: Enhanced Language Representation with Informative Entities |
| DeepStruct (multi-task) | 74.9 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| TANL | 71.9 | Structured Prediction as Translation between Augmented Natural Languages |
| KEPLER | 71.7 | KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation |
| C-GCN | 66.4 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction |
| K-Adapter | 72.0 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
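The F1 column reports micro-averaged F1 on the TACRED test set. Below is a minimal sketch of the commonly used TACRED-style scoring, under the assumption that examples labelled or predicted as no_relation are excluded from precision and recall; the function name and the relation labels in the usage example are hypothetical, not part of any official scorer.

```python
NO_RELATION = "no_relation"

def tacred_micro_f1(gold, pred):
    """Micro-averaged precision/recall/F1 over relation labels.

    `gold` and `pred` are parallel lists of relation labels, one per example.
    Only non-no_relation predictions count toward precision, and only
    non-no_relation gold labels count toward recall.
    """
    correct = predicted = actual = 0
    for g, p in zip(gold, pred):
        if p != NO_RELATION:
            predicted += 1          # model asserted a relation
        if g != NO_RELATION:
            actual += 1             # a relation actually holds
        if g == p != NO_RELATION:
            correct += 1            # asserted relation is correct
    precision = correct / predicted if predicted else 0.0
    recall = correct / actual if actual else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy usage with hypothetical labels:
gold = ["per:title", "no_relation", "org:founded_by", "per:age"]
pred = ["per:title", "per:age", "no_relation", "per:age"]
print(tacred_micro_f1(gold, pred))  # -> (0.666..., 0.666..., 0.666...)
```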