Relation Classification on TACRED
Metrics: F1 (see the scoring sketch after the results table)
Results: performance of various models on this benchmark.
| Model | F1 | Paper |
| --- | --- | --- |
| DeepStruct multi-task w/ finetune | 76.8 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| DeepEx (zero-shot top-10) | 76.4 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation |
| DeepStruct multi-task | 74.9 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| LUKE 483M | 72.7 | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention |
| K-Adapter | 72.0 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
| TANL | 71.9 | Structured Prediction as Translation between Augmented Natural Languages |
| KEPLER | 71.7 | KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation |
| KnowBERT | 71.5 | Knowledge Enhanced Contextual Word Representations |
| MTB (Baldini Soares et al., 2019) | 71.5 | Matching the Blanks: Distributional Similarity for Relation Learning |
| RoBERTa | 71.3 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
| SpanBERT | 70.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| ERNIE | 68.0 | ERNIE: Enhanced Language Representation with Informative Entities |
| C-GCN | 66.4 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction |
| BERT | 66.0 | ERNIE: Enhanced Language Representation with Informative Entities |
| TANL (multi-task) | 61.9 | Structured Prediction as Translation between Augmented Natural Languages |
| DeepEx (zero-shot top-1) | 49.2 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation |
| DeepStruct zero-shot | 36.1 | DeepStruct: Pretraining of Language Models for Structure Prediction |
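The F1 column follows TACRED's usual scoring convention: micro-averaged F1 over the 41 relation types, with no_relation treated as the negative class, so a no_relation prediction counts as neither a true nor a false positive. Below is a minimal sketch of that scorer, assuming gold and predicted labels arrive as equal-length lists of strings; the function name micro_f1 and the NO_RELATION constant are illustrative, not taken from any official scorer.

```python
# Minimal sketch of TACRED-style micro-F1 scoring (illustrative names).
# Precision counts only predictions that are not "no_relation";
# recall counts only gold labels that are not "no_relation".

NO_RELATION = "no_relation"  # assumed label string for the negative class

def micro_f1(gold: list[str], pred: list[str]) -> float:
    assert len(gold) == len(pred)
    # Correct = exact label match on a genuine (non-negative) relation.
    correct = sum(1 for g, p in zip(gold, pred)
                  if g == p and g != NO_RELATION)
    predicted = sum(1 for p in pred if p != NO_RELATION)
    actual = sum(1 for g in gold if g != NO_RELATION)
    precision = correct / predicted if predicted else 0.0
    recall = correct / actual if actual else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    gold = ["per:title", "no_relation", "org:founded_by"]
    pred = ["per:title", "org:founded_by", "no_relation"]
    print(f"micro-F1 = {micro_f1(gold, pred):.3f}")  # -> 0.500
```

One consequence of this convention: a model that outputs no_relation too eagerly loses recall (missed gold relations) but not precision, which is why conservative zero-shot systems can show very different precision/recall trade-offs at the same F1.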