
Question Answering on SQuAD 1.1 Dev

Evaluation Metrics

EM
F1
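EM (exact match) and F1 on SQuAD are computed per question against the reference answers: both strings are lowercased and stripped of punctuation, articles, and extra whitespace; EM checks for an exact string match, while F1 is the harmonic mean of token-level precision and recall. The snippet below is a minimal sketch of that computation, assuming SQuAD-style normalization; the function names are illustrative and not taken from any particular library.

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, ground_truth: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between the normalized prediction and reference."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Per-question scores are taken as the max over the available reference answers,
# then averaged over the dataset and reported as percentages.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))            # 1.0 after normalization
print(round(f1_score("in the Eiffel Tower", "Eiffel Tower"), 3))  # 0.8
```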

Evaluation Results

Performance of each model on this benchmark

| Model Name | EM | F1 | Paper Title | Repository |
|---|---|---|---|---|
| RASOR | 66.4 | 74.9 | Learning Recurrent Span Representations for Extractive Question Answering | |
| FG fine-grained gate | 59.95 | 71.25 | Words or Characters? Fine-grained Gating for Reading Comprehension | |
| R.M-Reader (single) | 78.9 | 86.3 | Reinforced Mnemonic Reader for Machine Reading Comprehension | |
| Match-LSTM with Bi-Ans-Ptr (Boundary+Search+b) | 64.1 | 64.7 | Machine Comprehension Using Match-LSTM and Answer Pointer | |
| DCN (Char + CoVe) | 71.3 | 79.9 | Learned in Translation: Contextualized Word Vectors | |
| MPCM | 66.1 | 75.8 | Multi-Perspective Context Matching for Machine Comprehension | |
| KAR | 76.7 | 84.9 | Explicit Utilization of General Knowledge in Machine Reading Comprehension | - |
| DistilBERT-uncased-PruneOFA (90% unstruct sparse, QAT Int8) | 75.62 | 83.87 | Prune Once for All: Sparse Pre-Trained Language Models | |
| BART Base (with text infilling) | - | 90.8 | BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension | |
| DensePhrases | 78.3 | 86.3 | Learning Dense Representations of Phrases at Scale | |
| BERT-Large-uncased-PruneOFA (90% unstruct sparse, QAT Int8) | 83.22 | 90.02 | Prune Once for All: Sparse Pre-Trained Language Models | |
| FABIR | 65.1 | 75.6 | A Fully Attention-Based Information Retriever | |
| BERT-Base-uncased-PruneOFA (85% unstruct sparse) | 81.1 | 88.42 | Prune Once for All: Sparse Pre-Trained Language Models | |
| T5-3B | 88.53 | 94.95 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| FusionNet | 75.3 | 83.6 | FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension | |
| TinyBERT-6 67M | 79.7 | 87.5 | TinyBERT: Distilling BERT for Natural Language Understanding | |
| Ruminating Reader | 70.6 | 79.5 | Ruminating Reader: Reasoning with Gated Multi-Hop Attention | - |
| BiDAF + Self Attention + ELMo | - | 85.6 | Deep contextualized word representations | |
| SAN (single) | 76.235 | 84.056 | Stochastic Answer Networks for Machine Reading Comprehension | |
| DCN | 65.4 | 75.6 | Dynamic Coattention Networks For Question Answering | |