Question Answering on SQuAD 1.1 Dev
Metrics
EM
F1
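For reference, the sketch below shows how these two metrics are commonly computed for SQuAD-style answers, following the answer normalization used by the official SQuAD evaluation script (lower-casing, stripping punctuation and articles, collapsing whitespace). The function names are illustrative and not taken from any particular library.

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lower-case, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, ground_truth):
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction, ground_truth):
    """F1: token-level overlap between prediction and ground truth."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Per SQuAD convention, each question is scored against its best-matching
# reference answer, then EM and F1 are averaged over the dev set.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))               # 1.0
print(round(f1_score("Eiffel Tower in Paris", "Eiffel Tower"), 2))   # 0.67
```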
Results
Performance results of the various models on this benchmark
Comparison table
Model name | EM | F1 |
---|---|---|
learning-recurrent-span-representations-for | 66.4 | 74.9 |
words-or-characters-fine-grained-gating-for | 59.95 | 71.25 |
reinforced-mnemonic-reader-for-machine | 78.9 | 86.3 |
machine-comprehension-using-match-lstm-and | 64.1 | 64.7 |
learned-in-translation-contextualized-word | 71.3 | 79.9 |
multi-perspective-context-matching-for | 66.1 | 75.8 |
exploring-machine-reading-comprehension-with | 76.7 | 84.9 |
prune-once-for-all-sparse-pre-trained | 75.62 | 83.87 |
bart-denoising-sequence-to-sequence-pre | - | 90.8 |
learning-dense-representations-of-phrases-at | 78.3 | 86.3 |
prune-once-for-all-sparse-pre-trained | 83.22 | 90.02 |
a-fully-attention-based-information-retriever | 65.1 | 75.6 |
prune-once-for-all-sparse-pre-trained | 81.1 | 88.42 |
exploring-the-limits-of-transfer-learning | 88.53 | 94.95 |
fusionnet-fusing-via-fully-aware-attention | 75.3 | 83.6 |
190910351 | 79.7 | 87.5 |
ruminating-reader-reasoning-with-gated-multi | 70.6 | 79.5 |
deep-contextualized-word-representations | - | 85.6 |
stochastic-answer-networks-for-machine | 76.235 | 84.056 |
dynamic-coattention-networks-for-question | 65.4 | 75.6 |
dcn-mixed-objective-and-deep-residual | 74.5 | 83.1 |
smarnet-teaching-machines-to-read-and | 71.362 | 80.183 |
prune-once-for-all-sparse-pre-trained | 78.1 | 85.82 |
bert-pre-training-of-deep-bidirectional | 86.2 | 92.2 |
qanet-combining-local-convolution-with-global | 73.6 | 82.7 |
exploring-the-limits-of-transfer-learning | 79.1 | 87.24 |
luke-deep-contextualized-entity | 89.8 | - |
exploring-the-limits-of-transfer-learning | 85.44 | 92.08 |
end-to-end-answer-chunk-extraction-and | 62.5 | 71.2 |
gated-self-matching-networks-for-reading | 71.1 | 79.5 |
exploring-the-limits-of-transfer-learning | 90.06 | 95.64 |
distilbert-a-distilled-version-of-bert | - | 85.8 |
structural-embedding-of-syntactic-trees-for | 67.89 | 77.42 |
structural-embedding-of-syntactic-trees-for | 67.65 | 77.19 |
xlnet-generalized-autoregressive-pretraining | 89.7 | 95.1 |
simple-recurrent-units-for-highly | 71.4 | 80.2 |
prune-once-for-all-sparse-pre-trained | 80.84 | 88.24 |
bert-pre-training-of-deep-bidirectional | 84.2 | 91.1 |
bidirectional-attention-flow-for-machine | 67.7 | 77.3 |
dice-loss-for-data-imbalanced-nlp-tasks | 89.79 | 95.77 |
prune-once-for-all-sparse-pre-trained | 83.35 | 90.2 |
making-neural-qa-as-simple-as-possible-but | 70.3 | 78.5 |
luke-deep-contextualized-entity | - | 95 |
distilbert-a-distilled-version-of-bert | 77.7 | - |
exploring-the-limits-of-transfer-learning | 86.66 | 93.79 |
reducing-bert-pre-training-time-from-3-days | - | 90.584 |
prune-once-for-all-sparse-pre-trained | 77.03 | 85.13 |
learning-to-compute-word-embeddings-on-the | 63.06 | - |
prune-once-for-all-sparse-pre-trained | 76.91 | 84.82 |
exploring-question-understanding-and | 69.10 | 78.38 |
reading-wikipedia-to-answer-open-domain | 69.5 | 78.8 |
qanet-combining-local-convolution-with-global | 74.5 | 83.2 |
qanet-combining-local-convolution-with-global | 75.1 | 83.8 |
phase-conductor-on-multi-layered-attentions | 72.1 | 81.4 |
prune-once-for-all-sparse-pre-trained | 79.83 | 87.25 |