Question Answering on SQuAD 2.0 (Dev)
Evaluation metrics: EM, F1
Evaluation results: performance of each model on this benchmark.
| Model | EM | F1 | Paper |
|---|---|---|---|
| ALBERT base | 76.1 | 79.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| RoBERTa (no data aug) | 86.5 | 89.4 | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| ALBERT large | 79.0 | 82.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| XLNet (single model) | 87.9 | 90.6 | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| RMR + ELMo (Model-III) | 72.3 | 74.8 | Read + Verify: Machine Reading Comprehension with Unanswerable Questions |
| SemBERT large | 80.9 | 83.6 | Semantics-aware BERT for Language Understanding |
| SpanBERT | - | 86.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| SG-Net | 85.1 | 87.9 | SG-Net: Syntax-Guided Machine Reading Comprehension |
| TinyBERT-6 67M | 69.9 | 73.4 | TinyBERT: Distilling BERT for Natural Language Understanding |
| XLNet+DSC | 87.65 | 89.51 | Dice Loss for Data-imbalanced NLP Tasks |
| ALBERT xlarge | 83.1 | 85.9 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| U-Net | 70.3 | 74.0 | U-Net: Machine Reading Comprehension with Unanswerable Questions |
| ALBERT xxlarge | 85.1 | 88.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
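EM (exact match) scores a prediction 1 only when it equals a reference answer after normalization, while F1 measures token-level overlap between the predicted and reference answer spans; on SQuAD 2.0 a question may be unanswerable, in which case the model must return an empty answer to receive credit. The following is a minimal Python sketch of how these two metrics are conventionally computed, following the logic of the official SQuAD evaluation script; the function names and the single-reference signature are illustrative assumptions, and the real script additionally takes the maximum score over all reference answers for each question.

```python
import re
import string
from collections import Counter

def normalize_answer(s):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, reference):
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize_answer(prediction) == normalize_answer(reference))

def f1_score(prediction, reference):
    """Token-level F1 between a predicted and a reference answer span."""
    pred_tokens = normalize_answer(prediction).split()
    ref_tokens = normalize_answer(reference).split()
    # SQuAD 2.0: unanswerable questions have an empty reference, so the
    # prediction scores only if it is also empty.
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Dev-set EM/F1 in the table are averages of these per-question scores (in %).
print(exact_match("the Eiffel Tower", "Eiffel Tower"))                   # 1
print(round(f1_score("Eiffel Tower in Paris", "the Eiffel Tower"), 2))   # 0.67
```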