Relation Classification on TACRED
Evaluation metric: F1

Evaluation results: performance of each model on this benchmark.
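TACRED systems are conventionally scored with micro-averaged F1, where the `no_relation` label is treated as the negative class and excluded when counting true positives and gold positives. A minimal sketch of this scoring convention, assuming `gold` and `pred` are aligned lists of relation labels (the function name and example labels are illustrative, not taken from the benchmark page):

```python
def tacred_micro_f1(gold, pred, negative_label="no_relation"):
    """Micro-averaged F1 in the usual TACRED style: the negative class
    (no_relation) is excluded from predicted and gold positive counts."""
    # Predicted positives: anything the system labeled as a real relation.
    pred_pos = sum(1 for p in pred if p != negative_label)
    # Gold positives: instances whose gold label is a real relation.
    gold_pos = sum(1 for g in gold if g != negative_label)
    # True positives: positive predictions that exactly match the gold label.
    correct = sum(1 for g, p in zip(gold, pred)
                  if p != negative_label and g == p)
    precision = correct / pred_pos if pred_pos else 0.0
    recall = correct / gold_pos if gold_pos else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative example: both positive predictions are correct,
# but one gold relation (per:age) is missed.
gold = ["per:title", "no_relation", "org:founded", "per:age"]
pred = ["per:title", "no_relation", "org:founded", "no_relation"]
print(f"F1 = {tacred_micro_f1(gold, pred):.3f}")  # F1 = 0.800
```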
| Model | F1 | Paper |
| --- | --- | --- |
| DeepStruct (multi-task, fine-tuned) | 76.8 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| DeepEx (zero-shot, top-10) | 76.4 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation |
| DeepStruct (multi-task) | 74.9 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| LUKE 483M | 72.7 | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention |
| K-Adapter | 72.0 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
| TANL | 71.9 | Structured Prediction as Translation between Augmented Natural Languages |
| KEPLER | 71.7 | KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation |
| KnowBERT | 71.5 | Knowledge Enhanced Contextual Word Representations |
| MTB (Baldini Soares et al., 2019) | 71.5 | Matching the Blanks: Distributional Similarity for Relation Learning |
| RoBERTa | 71.3 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
| SpanBERT | 70.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| ERNIE | 68.0 | ERNIE: Enhanced Language Representation with Informative Entities |
| C-GCN | 66.4 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction |
| BERT | 66.0 | ERNIE: Enhanced Language Representation with Informative Entities |
| TANL (multi-task) | 61.9 | Structured Prediction as Translation between Augmented Natural Languages |
| DeepEx (zero-shot, top-1) | 49.2 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation |
| DeepStruct (zero-shot) | 36.1 | DeepStruct: Pretraining of Language Models for Structure Prediction |