Named Entity Recognition on DWIE
Metrics
F1-Hard
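Models on this benchmark are compared by F1-Hard, a strict entity-level F1 score. Below is a minimal sketch of how such a score could be computed, assuming F1-Hard counts a prediction as correct only when the predicted (entity, type) pair exactly matches a gold annotation; the function name `f1_hard` and the toy data are illustrative, not taken from the DWIE evaluation code.

```python
from typing import Set, Tuple

def f1_hard(gold: Set[Tuple[str, str]], pred: Set[Tuple[str, str]]) -> float:
    """Exact-match F1 over (entity, type) pairs (assumed reading of F1-Hard)."""
    if not gold and not pred:
        return 1.0
    tp = len(gold & pred)                          # exact matches only
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: two of three predictions match the gold annotations exactly.
gold = {("entity_0", "person"), ("entity_1", "location"), ("entity_2", "org")}
pred = {("entity_0", "person"), ("entity_1", "location"), ("entity_3", "org")}
print(f"F1-Hard: {f1_hard(gold, pred):.3f}")       # -> 0.667
```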
Results
Performance of various models on the DWIE named entity recognition benchmark, reported as F1-Hard.
Comparison Table
| Model | F1-Hard |
| --- | --- |
| dwie-an-entity-centric-dataset-for-multi-task | 74.8 |
| rexel-an-end-to-end-model-for-document-level | 90.59 |
| injecting-knowledge-base-information-into-end | 75.0 |