Natural Language Inference on SNLI
Metrics
- % Test Accuracy
- % Train Accuracy
- Parameters
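The headline metric is classification accuracy on the withheld SNLI test set; % Train Accuracy indicates how closely each model fits the training data. As a minimal sketch, test accuracy can be computed as below, assuming a hypothetical `predict(premise, hypothesis)` function standing in for any model in the table (0 = entailment, 1 = neutral, 2 = contradiction); SNLI examples without annotator consensus carry label -1 and are conventionally excluded.

```python
# Minimal sketch: computing % Test Accuracy on the SNLI test split.
# `predict` is a hypothetical stand-in for any model in the table below.
from datasets import load_dataset

def snli_test_accuracy(predict) -> float:
    test = load_dataset("snli", split="test")
    # Examples without annotator consensus are labeled -1; exclude them.
    test = test.filter(lambda ex: ex["label"] != -1)
    correct = sum(
        predict(ex["premise"], ex["hypothesis"]) == ex["label"]
        for ex in test
    )
    return 100.0 * correct / len(test)
```

% Train Accuracy is computed the same way over the training split; a large gap between the two (e.g., Ntumpha's 99.1 train vs. 90.5 test) suggests overfitting.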
Results
Performance results of various models on this benchmark
| Model Name | % Test Accuracy | % Train Accuracy | Parameters | Paper Title |
|---|---|---|---|---|
| UnitedSynT5 (3B) | 94.7 | - | - | First Train to Generate, then Generate to Train: UnitedSynT5 for Few-Shot NLI |
| UnitedSynT5 (335M) | 93.5 | - | - | First Train to Generate, then Generate to Train: UnitedSynT5 for Few-Shot NLI |
| Neural Tree Indexers for Text Understanding | 93.1 | - | 355m | Entailment as Few-Shot Learner |
| EFL (Entailment as Few-shot Learner) + RoBERTa-large | 93.1 | - | 355m | Entailment as Few-Shot Learner |
| RoBERTa-large + self-explaining layer | 92.3 | - | 355m+ | Self-Explaining Structures Improve NLP Models |
| RoBERTa-large + Self-Explaining | 92.3 | - | 340m | Self-Explaining Structures Improve NLP Models |
| CA-MTL | 92.1 | 92.6 | 340m | Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data |
| SemBERT | 91.9 | 94.4 | 339m | Semantics-aware BERT for Language Understanding |
| MT-DNN-SMARTLARGEv0 | 91.7 | - | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| MT-DNN | 91.6 | 97.2 | 330m | Multi-Task Deep Neural Networks for Natural Language Understanding |
| SJRC (BERT-Large + SRL) | 91.3 | 95.7 | 308m | Explicit Contextual Semantics for Text Comprehension |
| Ntumpha | 90.5 | 99.1 | 220m | Multi-Task Deep Neural Networks for Natural Language Understanding |
| Densely-Connected Recurrent and Co-Attentive Network Ensemble | 90.1 | 95.0 | 53.3m | Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information |
| MFAE | 90.07 | 93.18 | - | What Do Questions Exactly Ask? MFAE: Duplicate Question Identification with Multi-Fusion Asking Emphasis |
| Fine-Tuned LM-Pretrained Transformer | 89.9 | 96.6 | 85m | Improving Language Understanding by Generative Pre-Training |
| 300D DMAN Ensemble | 89.6 | 96.1 | 79m | Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference |
| 150D Multiway Attention Network Ensemble | 89.4 | 95.5 | 58m | Multiway Attention Networks for Modeling Sentence Pairs |
| ESIM + ELMo Ensemble | 89.3 | 92.1 | 40m | Deep contextualized word representations |
| 450D DR-BiLSTM Ensemble | 89.3 | 94.8 | 45m | DR-BiLSTM: Dependent Reading Bidirectional LSTM for Natural Language Inference |
The full leaderboard contains 98 entries; only the top-scoring models are shown here.
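For reference, the Parameters column reports approximate trainable-parameter counts, with the "m" suffix denoting millions. A minimal sketch of how such a count is typically obtained for a PyTorch model follows; this is a generic illustration, not the procedure used to populate the leaderboard.

```python
# Minimal sketch: counting trainable parameters of a PyTorch model,
# the quantity reported (in millions) in the Parameters column.
import torch.nn as nn

def count_parameters(model: nn.Module) -> float:
    """Return the number of trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6
```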