Natural Language Inference on QNLI
Evaluation metric: Accuracy
Evaluation results: the performance of each model on this benchmark.
| Model Name | Accuracy | Paper Title |
| --- | --- | --- |
| ALICE | 99.2% | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| ALBERT | 99.2% | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| StructBERT RoBERTa ensemble | 99.2% | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding |
| MT-DNN-SMART | 99.2% | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| RoBERTa (ensemble) | 98.9% | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| T5-11B | 96.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| T5-3B | 96.3% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| DeBERTaV3-large | 96% | DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing |
| ELECTRA | 95.4% | - |
| DeBERTa (large) | 95.3% | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| XLNet (single model) | 94.9% | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| T5-Large 770M | 94.8% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | 94.7% | LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale |
| ERNIE 2.0 Large | 94.6% | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding |
| PSQ (Chen et al., 2020) | 94.5% | A Statistical Framework for Low-bitwidth Training of Deep Neural Networks |
| RoBERTa-large 355M + Entailment as Few-shot Learner | 94.5% | Entailment as Few-Shot Learner |
| SpanBERT | 94.3% | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| TRANS-BLSTM | 94.08% | TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding |
| T5-Base | 93.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| ASA + RoBERTa | 93.6% | Adversarial Self-Attention for Language Understanding |
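The metric is plain classification accuracy on QNLI (question-answering NLI from GLUE): each example pairs a question with a sentence, and the model predicts whether the sentence contains the answer. Below is a minimal sketch of computing such a score locally, assuming a Hugging Face checkpoint already fine-tuned on QNLI; the checkpoint name and its label ordering are illustrative assumptions, and this is not the leaderboard's own evaluation pipeline.

```python
# Minimal sketch: accuracy of a QNLI-fine-tuned classifier on the GLUE validation split.
# Assumption: "textattack/roberta-base-QNLI" is used only as an example checkpoint and is
# assumed to follow GLUE's label ordering (0 = entailment, 1 = not_entailment).
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "textattack/roberta-base-QNLI"  # illustrative; swap in any QNLI model

dataset = load_dataset("glue", "qnli", split="validation")  # question/sentence pairs
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

correct = 0
for example in dataset:
    # QNLI is cast as sentence-pair classification: does the sentence answer the question?
    inputs = tokenizer(
        example["question"],
        example["sentence"],
        truncation=True,
        max_length=512,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    prediction = int(torch.argmax(logits, dim=-1))
    correct += int(prediction == example["label"])

print(f"QNLI validation accuracy: {correct / len(dataset):.1%}")
```

Note that many leaderboard entries report the held-out GLUE test split scored by the official server, so numbers computed on the public validation split as above will not necessarily match the figures in the table.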