HyperAI
Relation Classification On Tacred 1
Evaluation Metric
F1
Evaluation Results
Performance of each model on this benchmark
| Model Name | F1 | Paper Title |
| --- | --- | --- |
| DeepStruct multi-task w/ finetune | 76.8 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| DeepEx (zero-shot top-10) | 76.4 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation |
| DeepStruct multi-task | 74.9 | DeepStruct: Pretraining of Language Models for Structure Prediction |
| LUKE 483M | 72.7 | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention |
| K-Adapter | 72.0 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
| TANL | 71.9 | Structured Prediction as Translation between Augmented Natural Languages |
| KEPLER | 71.7 | KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation |
| KnowBERT | 71.5 | Knowledge Enhanced Contextual Word Representations |
| MTB (Baldini Soares et al., 2019) | 71.5 | Matching the Blanks: Distributional Similarity for Relation Learning |
| RoBERTa | 71.3 | K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters |
| SpanBERT | 70.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| ERNIE | 68.0 | ERNIE: Enhanced Language Representation with Informative Entities |
| C-GCN | 66.4 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction |
| BERT | 66.0 | ERNIE: Enhanced Language Representation with Informative Entities |
| TANL (multi-task) | 61.9 | Structured Prediction as Translation between Augmented Natural Languages |
| DeepEx (zero-shot top-1) | 49.2 | Zero-Shot Information Extraction as a Unified Text-to-Triple Translation |
| DeepStruct zero-shot | 36.1 | DeepStruct: Pretraining of Language Models for Structure Prediction |
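The F1 scores above follow the standard TACRED evaluation convention: micro-averaged precision, recall, and F1 over the positive relation types, with the `no_relation` class excluded from the positive counts. A minimal sketch of that computation (the function name `tacred_f1` and the example labels are illustrative, not from an official scorer):

```python
def tacred_f1(gold, pred, negative_label="no_relation"):
    """Micro-averaged F1 over positive relations, as commonly used for TACRED.

    A prediction of the negative label is never a true or false positive;
    a missed positive gold label still counts against recall.
    """
    # True positives: correct predictions of a positive relation type.
    tp = sum(1 for g, p in zip(gold, pred) if p == g and p != negative_label)
    pred_pos = sum(1 for p in pred if p != negative_label)  # predicted positives
    gold_pos = sum(1 for g in gold if g != negative_label)  # gold positives

    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gold_pos if gold_pos else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Illustrative labels: one correct positive, one false positive, one miss.
gold = ["per:title", "no_relation", "org:founded_by"]
pred = ["per:title", "per:title", "no_relation"]
print(round(tacred_f1(gold, pred), 2))  # precision 0.5, recall 0.5 -> F1 0.5
```

Because `no_relation` dominates TACRED, this metric rewards models that abstain correctly rather than ones that simply predict the majority class.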