Question Answering on TREC-QA
Evaluation Metrics
MAP (Mean Average Precision)
MRR (Mean Reciprocal Rank)
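As a rough guide to reading the table below, here is a minimal sketch of how MAP and MRR are typically computed for answer sentence selection: each question has a ranked list of candidate answers, and both metrics are macro-averaged over questions. The function names and toy data are illustrative assumptions, not the benchmark's official evaluation code.

```python
from typing import List

def average_precision(labels: List[int]) -> float:
    """AP for one question: labels are 0/1 relevance flags for the
    candidate answers, ordered by the model's score (best first)."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            score += hits / rank  # precision at each relevant position
    return score / hits if hits else 0.0

def reciprocal_rank(labels: List[int]) -> float:
    """RR for one question: 1 / rank of the first relevant answer."""
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(per_question_labels: List[List[int]]):
    """MAP and MRR: macro-averages of AP and RR over all questions."""
    n = len(per_question_labels)
    map_score = sum(average_precision(q) for q in per_question_labels) / n
    mrr_score = sum(reciprocal_rank(q) for q in per_question_labels) / n
    return map_score, mrr_score

# Toy example: two questions, candidates already sorted by model score.
print(map_mrr([[0, 1, 1, 0], [1, 0, 0, 0]]))  # ≈ (0.7917, 0.75)
```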
Evaluation Results
Performance of each model on this benchmark.
| Model Name | MAP | MRR | Paper Title | Repository |
| --- | --- | --- | --- | --- |
| TANDA DeBERTa-V3-Large + ALL | 0.954 | 0.984 | Structural Self-Supervised Objectives for Transformers | |
| TANDA-RoBERTa (ASNQ, TREC-QA) | 0.943 | 0.974 | TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection | |
| DeBERTa-V3-Large + SSP | 0.923 | 0.946 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | - |
| Contextual DeBERTa-V3-Large + SSP | 0.919 | 0.945 | Context-Aware Transformer Pre-Training for Answer Sentence Selection | - |
| RLAS-BIABC | 0.913 | 0.998 | RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm | - |
| RoBERTa-Base Joint + MSPP | 0.911 | 0.952 | Paragraph-based Transformer Pre-training for Multi-Sentence Inference | |
| RoBERTa-Base + PSD | 0.903 | 0.951 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | - |
| Comp-Clip + LM + LC | 0.868 | 0.928 | A Compare-Aggregate Model with Latent Clustering for Answer Selection | - |
| NLP-Capsule | 0.7773 | 0.7416 | Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications | |
| HyperQA | 0.770 | 0.825 | Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering | |
| PWIN | 0.7588 | 0.8219 | - | - |
| aNMM | 0.750 | 0.811 | aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model | |
| CNN | 0.711 | 0.785 | Deep Learning for Answer Sentence Selection | |