Coreference Resolution on CoNLL 2012
Metrics
Avg F1
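On CoNLL 2012, "Avg F1" conventionally refers to the official CoNLL score: the unweighted mean of the MUC, B-cubed, and CEAF F1 scores. A minimal sketch of that computation, using made-up precision/recall values purely for illustration:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def conll_avg_f1(muc_f1, b_cubed_f1, ceaf_f1):
    """CoNLL-2012 score: unweighted mean of the three metric F1s."""
    return (muc_f1 + b_cubed_f1 + ceaf_f1) / 3

# Illustrative (made-up) per-metric precision/recall pairs:
muc = f1(0.85, 0.82)
b3 = f1(0.78, 0.75)
ceaf = f1(0.76, 0.73)

score = conll_avg_f1(muc, b3, ceaf)
print(round(100 * score, 1))  # prints 78.1
```

The three underlying metrics score coreference quality in different ways (link-based, mention-based, and entity-alignment-based), so averaging them gives a more balanced single number than any one alone.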
Results
Performance of various models on this benchmark, reported as average F1.
Comparison Table
| Model | Avg F1 |
| --- | --- |
| coreference-resolution-through-a-seq2seq | 83.3 |
| coreference-resolution-without-span | 80.3 |
| learning-to-ignore-long-document-coreference | 79.6 |
| word-level-coreference-resolution | 81.0 |
| corefqa-coreference-resolution-as-query-based | 79.9 |
| a-cluster-ranking-model-for-full-anaphora | 76.4 |
| end-to-end-neural-coreference-resolution | 67.2 |
| end-to-end-neural-coreference-resolution | 70.4 |
| 2407-21489 | 83.6 |
| coreference-resolution-without-span | 80.2 |
| end-to-end-deep-reinforcement-learning-based | 73.8 |
| higher-order-coreference-resolution-with | 73.0 |
| end-to-end-neural-coreference-resolution | 68.8 |
| autoregressive-structured-prediction-with | 82.3 |
| revealing-the-myth-of-higher-order-inference | 80.2 |
| corefqa-coreference-resolution-as-query-based | 83.1 |
| coreference-resolution-with-entity | 76.61 |
| bert-for-coreference-resolution-baselines-and | 76.9 |