HyperAI초신경

Question Answering on SQuAD 2.0 Dev

Evaluation Metrics

EM
F1
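The two metrics above are the standard SQuAD measures: EM (exact match) scores 1 when the normalized prediction string equals a normalized reference answer, and F1 is a token-overlap score between prediction and reference. A minimal sketch of how they are typically computed, following the normalization conventions of the official SQuAD evaluation script (function names here are illustrative, not the script's own API):

```python
import re
import string
from collections import Counter


def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and the articles a/an/the,
    and collapse whitespace (the usual SQuAD normalization)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, ground_truth: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))


def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between a prediction and a reference answer."""
    pred_tokens = normalize_answer(prediction).split()
    gt_tokens = normalize_answer(ground_truth).split()
    if not pred_tokens or not gt_tokens:
        # For unanswerable questions (empty reference, as in SQuAD 2.0),
        # both sides must be empty to score 1.0.
        return float(pred_tokens == gt_tokens)
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)
```

In the full benchmark, each prediction is scored against all reference answers for its question and the maximum is taken; the leaderboard numbers below are these per-question scores averaged over the dev set and reported as percentages.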

Evaluation Results

Performance of each model on this benchmark

| Model Name | EM | F1 | Paper Title | Repository |
|---|---|---|---|---|
| ALBERT base | 76.1 | 79.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | |
| RoBERTa (no data aug) | 86.5 | 89.4 | RoBERTa: A Robustly Optimized BERT Pretraining Approach | |
| ALBERT large | 79.0 | 82.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | |
| XLNet (single model) | 87.9 | 90.6 | XLNet: Generalized Autoregressive Pretraining for Language Understanding | |
| RMR + ELMo (Model-III) | 72.3 | 74.8 | Read + Verify: Machine Reading Comprehension with Unanswerable Questions | - |
| SemBERT large | 80.9 | 83.6 | Semantics-aware BERT for Language Understanding | |
| SpanBERT | - | 86.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans | |
| SG-Net | 85.1 | 87.9 | SG-Net: Syntax-Guided Machine Reading Comprehension | |
| TinyBERT-6 67M | 69.9 | 73.4 | TinyBERT: Distilling BERT for Natural Language Understanding | |
| XLNet+DSC | 87.65 | 89.51 | Dice Loss for Data-imbalanced NLP Tasks | |
| ALBERT xlarge | 83.1 | 85.9 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | |
| U-Net | 70.3 | 74.0 | U-Net: Machine Reading Comprehension with Unanswerable Questions | |
| ALBERT xxlarge | 85.1 | 88.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | |
© HyperAI초신경