Ad Hoc Information Retrieval on TREC Robust04
Evaluation Metrics
MAP
P@20
nDCG@20
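For reference, the three metrics listed above can be computed from a single query's ranked list and its relevance judgments. The sketch below is illustrative only (the function names and toy data are not part of the benchmark): it assumes graded integer judgments, treats any grade > 0 as relevant for MAP and P@20, and uses the linear-gain nDCG formulation (gain / log2(rank + 1)). MAP is obtained by averaging the per-query average precision over all topics.

```python
import math
from typing import Dict, List

def precision_at_k(ranking: List[str], qrels: Dict[str, int], k: int = 20) -> float:
    """Fraction of the top-k retrieved documents that are relevant (grade > 0)."""
    return sum(1 for doc in ranking[:k] if qrels.get(doc, 0) > 0) / k

def average_precision(ranking: List[str], qrels: Dict[str, int]) -> float:
    """Mean of the precision values at the ranks where relevant documents appear."""
    num_relevant = sum(1 for grade in qrels.values() if grade > 0)
    if num_relevant == 0:
        return 0.0
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if qrels.get(doc, 0) > 0:
            hits += 1
            total += hits / rank
    return total / num_relevant

def ndcg_at_k(ranking: List[str], qrels: Dict[str, int], k: int = 20) -> float:
    """DCG of the ranking divided by the DCG of the ideal (relevance-sorted) ranking."""
    def dcg(grades: List[int]) -> float:
        return sum(g / math.log2(rank + 1) for rank, g in enumerate(grades, start=1))
    gains = [qrels.get(doc, 0) for doc in ranking[:k]]
    ideal_dcg = dcg(sorted(qrels.values(), reverse=True)[:k])
    return dcg(gains) / ideal_dcg if ideal_dcg > 0 else 0.0

# Toy example: one query, a ranked list of document ids, and graded judgments.
ranking = ["d3", "d1", "d7", "d2", "d9"]
qrels = {"d1": 2, "d2": 1, "d5": 1}
print(precision_at_k(ranking, qrels, k=5))  # P@5 for the toy data
print(average_precision(ranking, qrels))    # AP for this query; MAP averages AP over all queries
print(ndcg_at_k(ranking, qrels, k=5))       # nDCG@5 for the toy data
```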
Evaluation Results
Performance of each model on this benchmark. A dash (-) indicates that the metric was not reported for that entry.
Comparison Table
Model | MAP | P@20 | nDCG@20 |
---|---|---|---|
document-ranking-with-a-pretrained-sequence | 0.3876 | 0.5165 | 0.6091 |
190407094 | - | 0.4667 | 0.5381 |
neural-ranking-models-with-weak-supervision | 0.2811 | - | - |
a-deep-relevance-matching-model-for-ad-hoc | 0.279 | 0.382 | 0.431 |
neural-ranking-models-with-weak-supervision | 0.2837 | - | - |
190407094 | - | 0.4042 | 0.4541 |
from-neural-re-ranking-to-neural-ranking | 0.2971 | 0.3948 | 0.4391 |
nprf-a-neural-pseudo-relevance-feedback | 0.2464 | 0.3510 | 0.3989 |
deep-relevance-ranking-using-enhanced | 0.271 | 0.389 | 0.464 |
parade-passage-representation-aggregation-for | - | 0.4604 | 0.5399 |
deeper-text-understanding-for-ir-with | - | - | 0.469 |
the-neural-hype-and-comparisons-against-weak | 0.302 | 0.4012 | - |
from-neural-re-ranking-to-neural-ranking | 0.2856 | 0.3766 | 0.4310 |
simple-applications-of-bert-for-ad-hoc | 0.3278 | 0.4287 | - |
parade-passage-representation-aggregation-for | - | 0.4486 | 0.5252 |
nprf-a-neural-pseudo-relevance-feedback | 0.2846 | 0.3926 | 0.4327 |
from-neural-re-ranking-to-neural-ranking | 0.2499 | - | - |
deep-relevance-ranking-using-enhanced | 0.258 | 0.374 | 0.445 |
deeper-text-understanding-for-ir-with | - | - | 0.467 |
nprf-a-neural-pseudo-relevance-feedback | 0.2904 | 0.4064 | 0.4502 |
deeper-text-understanding-for-ir-with | - | - | 0.444 |
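Scores like those in the table above are conventionally produced by evaluating a TREC-format run against the Robust04 relevance judgments with the standard trec_eval tool or its Python binding, pytrec_eval. The sketch below is a minimal, hedged example of the pytrec_eval workflow: the qrels and run dictionaries are toy placeholders (not actual Robust04 data), and it assumes the measure families 'map', 'P', and 'ndcg_cut' expose the keys 'map', 'P_20', and 'ndcg_cut_20' per query, as in trec_eval's output naming.

```python
import pytrec_eval

# Toy qrels and run in the in-memory format pytrec_eval expects:
# {query_id: {doc_id: relevance_grade}} and {query_id: {doc_id: score}}.
# Topic "301" and the document ids below are placeholders, not real judgments.
qrels = {
    "301": {"FBIS3-10082": 1, "FT921-7107": 0, "LA052190-0021": 2},
}
run = {
    "301": {"FBIS3-10082": 12.3, "LA052190-0021": 11.8, "FT921-7107": 9.4},
}

# Requesting the measure families yields per-query values such as
# 'map', 'P_20', and 'ndcg_cut_20'.
evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"map", "P", "ndcg_cut"})
per_query = evaluator.evaluate(run)

for measure in ("map", "P_20", "ndcg_cut_20"):
    mean = sum(q[measure] for q in per_query.values()) / len(per_query)
    print(measure, round(mean, 4))
```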