Answer Selection on ASNQ
Metrics
MAP (Mean Average Precision)
MRR (Mean Reciprocal Rank)
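Both metrics score how well a model ranks the correct answer sentences for each question. A minimal sketch (not the benchmark's official evaluation code) of how MAP and MRR are typically computed from per-question candidate scores and binary relevance labels:

```python
def average_precision(labels_ranked):
    """AP for one question: labels_ranked is a list of 0/1 relevance
    labels in the order the model ranked the candidates."""
    hits, precisions = 0, []
    for rank, rel in enumerate(labels_ranked, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

def reciprocal_rank(labels_ranked):
    """RR for one question: inverse rank of the first relevant candidate."""
    for rank, rel in enumerate(labels_ranked, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(questions):
    """questions: list of (scores, labels) pairs, one per question.
    Candidates are ranked by descending model score, then AP and RR
    are averaged over all questions to give MAP and MRR."""
    aps, rrs = [], []
    for scores, labels in questions:
        order = sorted(range(len(scores)), key=lambda j: -scores[j])
        ranked = [labels[j] for j in order]
        aps.append(average_precision(ranked))
        rrs.append(reciprocal_rank(ranked))
    n = len(questions)
    return sum(aps) / n, sum(rrs) / n
```

For example, if the correct answer is ranked second for one question and second for another, both MAP and MRR come out to 0.5, since each question contributes a reciprocal rank (and average precision) of 1/2.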
Results
Performance results of various models on this benchmark
Model name | MAP | MRR | Paper Title | Repository
---|---|---|---|---
ELECTRA-Base + SSP | 0.697 | 0.757 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | -
DeBERTa-V3-Large + SSP | 0.743 | 0.800 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | -
RoBERTa-Base Joint MSPP | 0.673 | 0.737 | Paragraph-based Transformer Pre-training for Multi-Sentence Inference | -