HyperAI超神经

Question Answering on WikiQA

Evaluation Metrics

MAP (Mean Average Precision)
MRR (Mean Reciprocal Rank)
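As a rough sketch of how these two metrics are typically computed for WikiQA-style answer sentence selection: each question has a list of candidate answers with model scores and binary relevance labels, candidates are ranked by score, and questions with no correct candidate are conventionally excluded. The helper names below (`average_precision`, `reciprocal_rank`, `map_mrr`) are illustrative, not from any particular evaluation toolkit.

```python
def average_precision(labels):
    """AP for one ranked list of binary labels (1 = correct answer)."""
    hits, precision_sum = 0, 0.0
    for rank, label in enumerate(labels, start=1):
        if label:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

def reciprocal_rank(labels):
    """1 / rank of the first correct answer, 0 if none is present."""
    for rank, label in enumerate(labels, start=1):
        if label:
            return 1.0 / rank
    return 0.0

def map_mrr(questions):
    """questions: one list of (score, label) pairs per question.

    Returns (MAP, MRR) averaged over questions that have at least
    one correct candidate, as is standard for WikiQA evaluation.
    """
    ranked = [
        [label for _, label in sorted(q, key=lambda x: -x[0])]
        for q in questions
        if any(label for _, label in q)  # drop all-negative questions
    ]
    mean_ap = sum(average_precision(r) for r in ranked) / len(ranked)
    mrr = sum(reciprocal_rank(r) for r in ranked) / len(ranked)
    return mean_ap, mrr
```

For a single question whose ranked labels come out as `[0, 1, 1]`, AP is `(1/2 + 2/3) / 2 = 7/12` and RR is `1/2`; a model that always ranks a correct answer first scores 1.0 on both metrics.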

Evaluation Results

Performance of the various models on this benchmark

| Model Name | MAP | MRR | Paper Title | Repository |
| --- | --- | --- | --- | --- |
| Paragraph vector | 0.5110 | 0.5160 | Distributed Representations of Sentences and Documents | |
| DeBERTa-Large + SSP | 0.901 | 0.914 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | - |
| HyperQA | 0.712 | 0.727 | Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering | |
| PWIM | 0.7090 | 0.7234 | - | - |
| Paragraph vector (lexical overlap + dist output) | 0.5976 | 0.6058 | Distributed Representations of Sentences and Documents | |
| SWEM-concat | 0.6788 | 0.6908 | Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms | |
| LSTM (lexical overlap + dist output) | 0.682 | 0.6988 | Neural Variational Inference for Text Processing | |
| Bigram-CNN (lexical overlap + dist output) | 0.6520 | 0.6652 | Deep Learning for Answer Sentence Selection | |
| TANDA-RoBERTa (ASNQ, WikiQA) | 0.920 | 0.933 | TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection | - |
| RE2 | 0.7452 | 0.7618 | Simple and Effective Text Matching with Richer Alignment Features | |
| PairwiseRank + Multi-Perspective CNN | 0.7010 | 0.7180 | Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency | - |
| RoBERTa-Base + SSP | 0.887 | 0.899 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | - |
| LSTM | 0.6552 | 0.6747 | Neural Variational Inference for Text Processing | |
| AP-CNN | 0.6886 | 0.6957 | Attentive Pooling Networks | |
| CNN-Cnt | 0.6520 | 0.6652 | - | - |
| Bigram-CNN | 0.6190 | 0.6281 | Deep Learning for Answer Sentence Selection | |
| RLAS-BIABC | 0.924 | 0.908 | RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm | - |
| MMA-NSE attention | 0.6811 | 0.6993 | Neural Semantic Encoders | |
| LDC | 0.7058 | 0.7226 | Sentence Similarity Learning by Lexical Decomposition and Composition | |
| Comp-Clip + LM + LC | 0.764 | 0.784 | A Compare-Aggregate Model with Latent Clustering for Answer Selection | - |