Question Answering on TrecQA
Metrics
MAP (Mean Average Precision)
MRR (Mean Reciprocal Rank)
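Both metrics score a ranked candidate list per question: MAP averages the precision at each relevant hit, MRR averages the reciprocal rank of the first correct answer, and both are then averaged over all questions. A minimal sketch of how they are computed, assuming binary relevance labels in ranked order (the usual TrecQA convention):

```python
from typing import List, Tuple

def average_precision(labels: List[int]) -> float:
    """AP for one question: mean precision at each relevant position."""
    hits = 0
    precisions = []
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

def reciprocal_rank(labels: List[int]) -> float:
    """RR for one question: 1 / rank of the first relevant answer."""
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(all_labels: List[List[int]]) -> Tuple[float, float]:
    """MAP and MRR over a collection of questions."""
    n = len(all_labels)
    mean_ap = sum(average_precision(ls) for ls in all_labels) / n
    mean_rr = sum(reciprocal_rank(ls) for ls in all_labels) / n
    return mean_ap, mean_rr
```

For example, `map_mrr([[0, 1, 1], [1, 0, 0]])` yields MAP ≈ 0.792 and MRR = 0.75. Note that published TrecQA numbers often drop questions with no correct (or no incorrect) candidates before averaging, so exact values depend on that filtering.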
Results
Performance of various models on this benchmark.
Comparison table
| Model | MAP | MRR |
|---|---|---|
| context-aware-transformer-pre-training-for | 0.919 | 0.945 |
| deep-learning-for-answer-sentence-selection | 0.711 | 0.785 |
| towards-scalable-and-reliable-capsule | 0.7773 | 0.7416 |
| pre-training-transformer-models-with-sentence | 0.923 | 0.946 |
| anmm-ranking-short-answer-texts-with | 0.750 | 0.811 |
| hyperbolic-representation-learning-for-fast | 0.770 | 0.825 |
| paragraph-based-transformer-pre-training-for | 0.911 | 0.952 |
| pre-training-transformer-models-with-sentence | 0.903 | 0.951 |
| pairwise-word-interaction-modeling-with-deep | 0.7588 | 0.8219 |
| structural-self-supervised-objectives-for | 0.954 | 0.984 |
| a-compare-aggregate-model-with-latent | 0.868 | 0.928 |
| tanda-transfer-and-adapt-pre-trained | 0.943 | 0.974 |
| rlas-biabc-a-reinforcement-learning-based | 0.913 | 0.998 |