
Relation Extraction on TACRED

Evaluation Metric

F1
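
Models on this leaderboard are ranked by micro-averaged F1. As a reference, below is a minimal Python sketch of the scoring convention commonly used for TACRED, where the no_relation class is excluded from the precision and recall counts; the function name and the toy labels are illustrative, not part of the official scorer.

```python
def tacred_micro_f1(gold, pred, no_rel="no_relation"):
    """Micro-averaged F1 in the TACRED style:
    no_relation predictions and gold labels are excluded
    from the precision and recall denominators."""
    correct = guessed = actual = 0
    for g, p in zip(gold, pred):
        if p != no_rel:
            guessed += 1          # counted toward precision
            if p == g:
                correct += 1      # true positive
        if g != no_rel:
            actual += 1           # counted toward recall
    precision = correct / guessed if guessed else 0.0
    recall = correct / actual if actual else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy usage with hypothetical labels:
gold = ["per:title", "no_relation", "org:founded", "per:title"]
pred = ["per:title", "per:title", "org:founded", "no_relation"]
print(round(tacred_micro_f1(gold, pred), 3))  # 0.667
```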

Evaluation Results

Performance results of each model on this benchmark

| Model Name | F1 | Paper Title |
| --- | --- | --- |
| DeepStruct multi-task w/ finetune | 76.8 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| TRE | 67.4 | Improving Relation Extraction by Pre-trained Language Representations |
| SA-LSTM+D | 67.6 | Beyond Word Attention: Using Segment Attention in Neural Relation Extraction |
| C-AGGCN | 68.2 | Attention Guided Graph Convolutional Networks for Relation Extraction |
| LUKE | - | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention |
| K-ADAPTER (F+L) | 72.04 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
| C-GCN | 66.4 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction |
| RoBERTa-large-typed-marker | 74.6 | An Improved Baseline for Sentence-level Relation Extraction |
| C-GCN + PA-LSTM | 68.2 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction |
| KEPLER | 71.7 | KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation |
| AGGCN | 65.1 | Attention Guided Graph Convolutional Networks for Relation Extraction |
| RE-MC | 75.4 | Enhancing Targeted Minority Class Prediction in Sentence-Level Relation Extraction |
| ERNIE | 67.97 | ERNIE: Enhanced Language Representation with Informative Entities |
| C-SGC | 67.0 | Simplifying Graph Convolutional Networks |
| RECENT+SpanBERT | 75.2 | Relation Classification with Entity Type Restriction |
| SpanBERT-large | 70.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| NLI_RoBERTa | 71.0 | Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction |
| KnowBert-W+W | 71.5 | Knowledge Enhanced Contextual Word Representations |
| LLM-QA4RE (XXLarge) | 52.2 | Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors |
| Contrastive Pre-training | 69.5 | Learning from Context or Names? An Empirical Study on Neural Relation Extraction |