# Named Entity Recognition on CoNLL
## Metrics

- F1
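The F1 values on this leaderboard are conventionally entity-level micro-F1: a predicted entity counts as correct only if both its span and its type exactly match a gold entity. A minimal sketch of that computation (the span tuples and function name are illustrative, not from any specific evaluation library):

```python
def entity_f1(gold, pred):
    """Entity-level micro-F1 over (type, start, end) spans.

    A prediction is a true positive only on an exact match of
    entity type AND span boundaries against the gold annotation.
    """
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: one of two gold entities is recovered exactly;
# the second prediction has the right span but the wrong type.
gold = [("PER", 0, 2), ("LOC", 5, 6)]
pred = [("PER", 0, 2), ("ORG", 5, 6)]
print(entity_f1(gold, pred))  # → 0.5
```

In practice, official CoNLL-style evaluation is done with the `conlleval` script or a library such as seqeval, which derive the same exact-match entity F1 from BIO-tagged sequences.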
## Results

F1 scores of various models on this benchmark.
### Comparison Table

| Model | F1 |
|---|---|
| luke-deep-contextualized-entity | 95.89 |
| learning-from-noisy-labels-for-entity-centric | 95.60 |
| subregweigh-effective-and-efficient | 95.45 |
| subregweigh-effective-and-efficient | 95.27 |
| improving-named-entity-recognition-by | 94.81 |
| crossweigh-training-named-entity-tagger-from | 94.28 |
| crossweigh-training-named-entity-tagger-from | 94.13 |
| learning-from-noisy-labels-for-entity-centric | 94.04 |
| deep-contextualized-word-representations | 93.42 |
| end-to-end-sequence-labeling-via-bi | 91.87 |
| neural-architectures-for-named-entity | 91.47 |