Relation Extraction on TACRED
Metric: F1 (see the scoring sketch below the table)

Results
Performance of various models on this benchmark (one page of a 40-row leaderboard).
| Model Name | F1 | Paper Title | Repository |
|---|---|---|---|
| DeepStruct multi-task w/ finetune | 76.8 | DeepStruct: Pretraining of Language Models for Structure Prediction | - |
| TRE | 67.4 | Improving Relation Extraction by Pre-trained Language Representations | - |
| SA-LSTM+D | 67.6 | Beyond Word Attention: Using Segment Attention in Neural Relation Extraction | - |
| C-AGGCN | 68.2 | Attention Guided Graph Convolutional Networks for Relation Extraction | - |
| LUKE | - | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention | - |
| K-ADAPTER (F+L) | 72.04 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters | - |
| C-GCN | 66.4 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction | - |
| RoBERTa-large-typed-marker | 74.6 | An Improved Baseline for Sentence-level Relation Extraction | - |
| C-GCN + PA-LSTM | 68.2 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction | - |
| KEPLER | 71.7 | KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation | - |
| AGGCN | 65.1 | Attention Guided Graph Convolutional Networks for Relation Extraction | - |
| RE-MC | 75.4 | Enhancing Targeted Minority Class Prediction in Sentence-Level Relation Extraction | - |
| ERNIE | 67.97 | ERNIE: Enhanced Language Representation with Informative Entities | - |
| C-SGC | 67.0 | Simplifying Graph Convolutional Networks | - |
| RECENT+SpanBERT | 75.2 | Relation Classification with Entity Type Restriction | - |
| SpanBERT-large | 70.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans | - |
| NLI_RoBERTa | 71.0 | Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction | - |
| KnowBert-W+W | 71.5 | Knowledge Enhanced Contextual Word Representations | - |
| LLM-QA4RE (XXLarge) | 52.2 | Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors | - |
| Contrastive Pre-training | 69.5 | Learning from Context or Names? An Empirical Study on Neural Relation Extraction | - |
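The F1 reported above follows the usual TACRED convention: micro-averaged precision, recall, and F1 computed over the positive relation classes, with "no_relation" treated as the negative class and excluded from both the guessed and gold counts. Below is a minimal sketch of that computation, assuming gold and predicted labels are aligned lists of strings; the function name and label strings are illustrative, not the official TACRED scorer.

```python
# Minimal sketch of TACRED-style scoring: micro-averaged precision/recall/F1
# over positive predictions, with "no_relation" as the excluded negative class.
# The NEGATIVE constant and example labels are assumptions for illustration.

NEGATIVE = "no_relation"

def tacred_f1(gold: list[str], pred: list[str]) -> tuple[float, float, float]:
    """Return (precision, recall, f1) computed over positive labels only."""
    assert len(gold) == len(pred), "gold and pred must align one-to-one"
    correct = sum(1 for g, p in zip(gold, pred) if p == g and p != NEGATIVE)
    guessed = sum(1 for p in pred if p != NEGATIVE)  # positive predictions
    actual = sum(1 for g in gold if g != NEGATIVE)   # positive gold labels
    precision = correct / guessed if guessed else 0.0
    recall = correct / actual if actual else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 2 correct positives out of 3 positive guesses and 3 positive golds.
gold = ["per:title", "no_relation", "org:founded", "per:age"]
pred = ["per:title", "per:title", "org:founded", "no_relation"]
print(tacred_f1(gold, pred))  # (0.666..., 0.666..., 0.666...)
```

Because the negative class dominates TACRED (roughly four in five examples carry no relation), excluding it from the score keeps a model from earning credit for predicting "no_relation" everywhere.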