Relation Classification on TACRED
Evaluation metric: F1 (see the scoring sketch after the table)
Evaluation Results
Performance of each model on this benchmark
| Model | F1 | Paper Title |
|---|---|---|
| DeepStruct multi-task w/ finetune | 76.8 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| DeepEx (zero-shot top-10) | 76.4 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation |
| DeepStruct multi-task | 74.9 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| LUKE 483M | 72.7 | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention |
| K-Adapter | 72.0 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
| TANL | 71.9 | Structured Prediction as Translation between Augmented Natural Languages |
| KEPLER | 71.7 | KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation |
| KnowBERT | 71.5 | Knowledge Enhanced Contextual Word Representations |
| MTB (Baldini Soares et al., 2019) | 71.5 | Matching the Blanks: Distributional Similarity for Relation Learning |
| RoBERTa | 71.3 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
| SpanBERT | 70.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| ERNIE | 68.0 | ERNIE: Enhanced Language Representation with Informative Entities |
| C-GCN | 66.4 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction |
| BERT | 66.0 | ERNIE: Enhanced Language Representation with Informative Entities |
| TANL (multi-task) | 61.9 | Structured Prediction as Translation between Augmented Natural Languages |
| DeepEx (zero-shot top-1) | 49.2 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation |
| DeepStruct zero-shot | 36.1 | DeepStruct: Pretraining of Language Models for Structure Prediction |
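
On TACRED, F1 is conventionally micro-averaged over the predicted relations, with the negative no_relation class excluded from both precision and recall. A minimal sketch of that scoring, with illustrative label names (not the official TACRED scorer):

```python
def tacred_f1(gold, pred, negative_label="no_relation"):
    """Micro-averaged precision/recall/F1 in the TACRED style:
    the negative class is excluded from both precision and recall."""
    correct = guessed = actual = 0
    for g, p in zip(gold, pred):
        if p != negative_label:       # model asserted a relation
            guessed += 1
            if p == g:
                correct += 1
        if g != negative_label:       # a relation actually holds
            actual += 1
    precision = correct / guessed if guessed else 0.0
    recall = correct / actual if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example with hypothetical labels: three correct positives,
# one false positive, one missed relation.
gold = ["per:title", "no_relation", "org:founded", "per:age", "per:spouse"]
pred = ["per:title", "per:title",   "org:founded", "per:age", "no_relation"]
print(tacred_f1(gold, pred))  # (0.75, 0.75, 0.75)
```

Excluding the negative class matters because most TACRED examples are labeled no_relation, so an ordinary accuracy or all-class micro-F1 would be dominated by it.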