Question Answering on TrecQA
Metrics: MAP, MRR
Results: performance of various models on this benchmark (a short sketch of how MAP and MRR are computed follows the table).
| Model Name | MAP | MRR | Paper Title | Repository |
| --- | --- | --- | --- | --- |
| Contextual DeBERTa-V3-Large + SSP | 0.919 | 0.945 | Context-Aware Transformer Pre-Training for Answer Sentence Selection | - |
| CNN | 0.711 | 0.785 | Deep Learning for Answer Sentence Selection | |
| NLP-Capsule | 0.7773 | 0.7416 | Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications | |
| DeBERTa-V3-Large + SSP | 0.923 | 0.946 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | - |
| aNMM | 0.750 | 0.811 | aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model | |
| HyperQA | 0.770 | 0.825 | Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering | |
| RoBERTa-Base Joint + MSPP | 0.911 | 0.952 | Paragraph-based Transformer Pre-training for Multi-Sentence Inference | |
| RoBERTa-Base + PSD | 0.903 | 0.951 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | - |
| PWIN | 0.7588 | 0.8219 | - | - |
| TANDA DeBERTa-V3-Large + ALL | 0.954 | 0.984 | Structural Self-Supervised Objectives for Transformers | - |
| Comp-Clip + LM + LC | 0.868 | 0.928 | A Compare-Aggregate Model with Latent Clustering for Answer Selection | - |
| TANDA-RoBERTa (ASNQ, TREC-QA) | 0.943 | 0.974 | TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection | - |
| RLAS-BIABC | 0.913 | 0.998 | RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm | - |
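The MAP and MRR columns are the standard ranking metrics for answer sentence selection: average precision over the ranked candidates of each question, and the reciprocal rank of the first correct candidate, each averaged over all questions. The sketch below shows one common way to compute them; the data layout and function names are illustrative assumptions, not part of the benchmark or the HyperAI site.

```python
# Minimal sketch of MAP and MRR for answer sentence selection (e.g. TREC-QA).
# Assumption: for each question we have binary relevance labels for its
# candidate answer sentences, listed in the order the model ranked them.
from typing import Dict, List, Tuple


def average_precision(labels_ranked: List[int]) -> float:
    """Average precision for one question's ranked candidates."""
    hits, precision_sum = 0, 0.0
    for rank, label in enumerate(labels_ranked, start=1):
        if label == 1:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    return precision_sum / hits if hits else 0.0


def reciprocal_rank(labels_ranked: List[int]) -> float:
    """1 / rank of the first correct answer sentence, 0 if none is correct."""
    for rank, label in enumerate(labels_ranked, start=1):
        if label == 1:
            return 1.0 / rank
    return 0.0


def map_mrr(ranked_labels: Dict[str, List[int]]) -> Tuple[float, float]:
    """Mean the per-question scores over all questions."""
    aps = [average_precision(labels) for labels in ranked_labels.values()]
    rrs = [reciprocal_rank(labels) for labels in ranked_labels.values()]
    n = len(ranked_labels)
    return sum(aps) / n, sum(rrs) / n


if __name__ == "__main__":
    # Two toy questions; labels follow the model's ranking order.
    runs = {
        "q1": [0, 1, 1, 0],  # first correct answer at rank 2
        "q2": [1, 0, 0, 0],  # first correct answer at rank 1
    }
    map_score, mrr_score = map_mrr(runs)
    print(f"MAP={map_score:.3f}  MRR={mrr_score:.3f}")
```

Note that questions with no correct candidate contribute zero here; some TREC-QA evaluation scripts instead drop such questions, which can shift the reported numbers slightly.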