Natural Language Inference on QNLI
Metric: Accuracy
Results
Performance results of various models on this benchmark, ranked by accuracy; the table below lists the top 20 of 43 entries. A minimal sketch of how QNLI accuracy is computed follows the table.
| Model Name | Accuracy | Paper Title |
| --- | --- | --- |
| ALICE | 99.2% | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| ALBERT | 99.2% | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| StructBERT + RoBERTa (ensemble) | 99.2% | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding |
| MT-DNN-SMART | 99.2% | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| RoBERTa (ensemble) | 98.9% | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| T5-11B | 96.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| T5-3B | 96.3% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| DeBERTaV3-large | 96.0% | DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing |
| ELECTRA | 95.4% | - |
| DeBERTa (large) | 95.3% | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| XLNet (single model) | 94.9% | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| T5-Large 770M | 94.8% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | 94.7% | LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale |
| ERNIE 2.0 Large | 94.6% | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding |
| PSQ (Chen et al., 2020) | 94.5% | A Statistical Framework for Low-bitwidth Training of Deep Neural Networks |
| RoBERTa-large 355M + Entailment as Few-shot Learner | 94.5% | Entailment as Few-Shot Learner |
| SpanBERT | 94.3% | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| TRANS-BLSTM | 94.08% | TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding |
| T5-Base | 93.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| ASA + RoBERTa | 93.6% | Adversarial Self-Attention for Language Understanding |
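Accuracy on QNLI is simply the fraction of question–sentence pairs a model classifies correctly as entailment or not_entailment. Below is a minimal evaluation sketch, assuming the Hugging Face `datasets` and `transformers` libraries (not referenced on this page) and a hypothetical fine-tuned checkpoint path rather than any specific leaderboard entry:

```python
# Minimal sketch: measuring QNLI accuracy for a fine-tuned sequence classifier.
# Assumes Hugging Face `datasets`/`transformers`; MODEL_PATH is a hypothetical
# placeholder, not one of the leaderboard checkpoints above.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_PATH = "path/to/your-qnli-finetuned-model"  # hypothetical placeholder

dataset = load_dataset("glue", "qnli", split="validation")
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH).eval()

correct = 0
for example in dataset:
    # Each QNLI example pairs a question with a candidate answer sentence;
    # the gold label is 0 (entailment) or 1 (not_entailment).
    inputs = tokenizer(example["question"], example["sentence"],
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    correct += int(pred == example["label"])

print(f"QNLI validation accuracy: {correct / len(dataset):.1%}")
```

Note that leaderboard figures are typically reported on the hidden GLUE test split via the official evaluation server, so a validation-split number like the one computed above is only an approximation of the scores listed here.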