Relation Classification on TACRED

Evaluation Metric

F1
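
The F1 reported here is, by TACRED convention, micro-averaged over the positive relation classes, with no_relation treated as the null label: predictions of no_relation are never counted as correct, precision is taken over predicted positives, and recall over gold positives. A minimal sketch of that computation, assuming this convention (the function name and example label strings are hypothetical):

```python
def tacred_micro_f1(gold, pred, negative_label="no_relation"):
    """Micro-averaged F1 over positive relations, treating
    `negative_label` as the null class (TACRED convention)."""
    guessed = sum(1 for p in pred if p != negative_label)   # predicted positives
    actual = sum(1 for g in gold if g != negative_label)    # gold positives
    correct = sum(1 for g, p in zip(gold, pred)
                  if g == p and p != negative_label)        # true positives
    precision = correct / guessed if guessed else 0.0
    recall = correct / actual if actual else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 3 predicted positives, 2 of them correct, 3 gold positives
gold = ["per:title", "no_relation", "org:founded", "per:age"]
pred = ["per:title", "org:founded", "org:founded", "no_relation"]
print(round(tacred_micro_f1(gold, pred), 3))  # 0.667
```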

Evaluation Results

Performance of each model on this benchmark:

| Model | F1 | Paper | Repository |
|---|---|---|---|
| BERT | 66.0 | ERNIE: Enhanced Language Representation with Informative Entities | - |
| TANL (multi-task) | 61.9 | Structured Prediction as Translation between Augmented Natural Languages | - |
| LUKE 483M | 72.7 | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention | - |
| DeepEx (zero-shot, top-10) | 76.4 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation | - |
| KnowBERT | 71.5 | Knowledge Enhanced Contextual Word Representations | - |
| DeepEx (zero-shot, top-1) | 49.2 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation | - |
| MTB (Baldini Soares et al., 2019) | 71.5 | Matching the Blanks: Distributional Similarity for Relation Learning | - |
| DeepStruct (zero-shot) | 36.1 | DeepStruct: Pretraining of Language Models for Structure Prediction | - |
| SpanBERT | 70.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans | - |
| RoBERTa | 71.3 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters | - |
| DeepStruct (multi-task, w/ finetune) | 76.8 | DeepStruct: Pretraining of Language Models for Structure Prediction | - |
| ERNIE | 68.0 | ERNIE: Enhanced Language Representation with Informative Entities | - |
| DeepStruct (multi-task) | 74.9 | DeepStruct: Pretraining of Language Models for Structure Prediction | - |
| TANL | 71.9 | Structured Prediction as Translation between Augmented Natural Languages | - |
| KEPLER | 71.7 | KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation | - |
| C-GCN | 66.4 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction | - |
| K-Adapter | 72.0 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters | - |