Question Answering on TrecQA
Metrics
MAP, MRR

Results
Performance results of various models on this benchmark (a minimal sketch of how MAP and MRR are computed follows the table).
| Model Name | MAP | MRR | Paper Title | Repository |
|---|---|---|---|---|
| Contextual DeBERTa-V3-Large + SSP | 0.919 | 0.945 | Context-Aware Transformer Pre-Training for Answer Sentence Selection | - |
| CNN | 0.711 | 0.785 | Deep Learning for Answer Sentence Selection | - |
| NLP-Capsule | 0.7773 | 0.7416 | Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications | - |
| DeBERTa-V3-Large + SSP | 0.923 | 0.946 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | - |
| aNMM | 0.750 | 0.811 | aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model | - |
| HyperQA | 0.770 | 0.825 | Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering | - |
| RoBERTa-Base Joint + MSPP | 0.911 | 0.952 | Paragraph-based Transformer Pre-training for Multi-Sentence Inference | - |
| RoBERTa-Base + PSD | 0.903 | 0.951 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | - |
| PWIN | 0.7588 | 0.8219 | - | - |
| TANDA DeBERTa-V3-Large + ALL | 0.954 | 0.984 | Structural Self-Supervised Objectives for Transformers | - |
| Comp-Clip + LM + LC | 0.868 | 0.928 | A Compare-Aggregate Model with Latent Clustering for Answer Selection | - |
| TANDA-RoBERTa (ASNQ, TREC-QA) | 0.943 | 0.974 | TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection | - |
| RLAS-BIABC | 0.913 | 0.998 | RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm | - |
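The two leaderboard metrics are standard ranking measures for answer sentence selection: for each question, the model ranks candidate sentences, and MAP/MRR summarize how highly the correct sentences are placed. Below is a minimal sketch of the computation, assuming each question comes with candidates already sorted by model score and binary relevance labels; it is a simplified illustration, not the official trec_eval protocol (which, for example, typically excludes questions with no correct candidate), and all function names are illustrative.

```python
def average_precision(relevance):
    """Average precision for one ranked list of 0/1 relevance labels."""
    hits, precisions = 0, []
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant hit
    return sum(precisions) / hits if hits else 0.0


def reciprocal_rank(relevance):
    """Reciprocal rank of the first relevant candidate (0 if none)."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0


def map_mrr(ranked_labels_per_question):
    """Mean Average Precision and Mean Reciprocal Rank over all questions."""
    aps = [average_precision(labels) for labels in ranked_labels_per_question]
    rrs = [reciprocal_rank(labels) for labels in ranked_labels_per_question]
    return sum(aps) / len(aps), sum(rrs) / len(rrs)


# Toy example: two questions, candidates already sorted by model score.
print(map_mrr([[0, 1, 1, 0], [1, 0, 0]]))  # -> (~0.79, 0.75)
```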