Relation Extraction on Re-TACRED
Metrics
F1
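
Scores on Re-TACRED are conventionally reported as micro-averaged F1 over the relation classes, with the negative no_relation class excluded from the precision and recall counts. The snippet below is a minimal sketch of that TACRED-style scoring; the function name and the example labels are illustrative, not part of any official evaluation script.

```python
def tacred_style_f1(gold_labels, pred_labels, negative_label="no_relation"):
    """Micro-averaged precision, recall, and F1 over relation classes.

    Follows the common TACRED-style convention: predictions and gold
    labels equal to the negative class are not counted as relations.
    """
    correct = guessed = gold = 0
    for g, p in zip(gold_labels, pred_labels):
        if p != negative_label:
            guessed += 1          # system predicted some relation
        if g != negative_label:
            gold += 1             # a relation actually exists
        if g == p and g != negative_label:
            correct += 1          # correctly predicted relation
    precision = correct / guessed if guessed else 0.0
    recall = correct / gold if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: one correct, one spurious, and one missed relation mention.
gold = ["org:founded_by", "no_relation", "per:title"]
pred = ["org:founded_by", "per:title", "no_relation"]
print(tacred_style_f1(gold, pred))  # (0.5, 0.5, 0.5)
```
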
Results
Performance results of various models on this benchmark.
| Model Name | F1 | Paper Title | Repository |
|---|---|---|---|
| RoBERTa-large-typed-marker | 91.1 | An Improved Baseline for Sentence-level Relation Extraction | |
| PA-LSTM | 79.4 | Position-aware Attention and Supervised Data Improve Slot Filling | |
| LLM-QA4RE (XXLarge) | 66.5 | Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors | |
| EXOBRAIN | 91.4 | Improving Sentence-Level Relation Extraction through Curriculum Learning | - |
| SpanBERT | 85.3 | SpanBERT: Improving Pre-training by Representing and Predicting Spans | |
| GenPT (RoBERTa) | 91.1 | Generative Prompt Tuning for Relation Classification | |
| RAG4RE | 73.3 | Retrieval-Augmented Generation-based Relation Extraction | |
| REBEL (no entity type marker) | 90.4 | REBEL: Relation Extraction By End-to-end Language generation | |
| C-GCN | 80.3 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction | |