Natural Language Inference on WNLI
Metric: Accuracy

Results: performance of various models on this benchmark, sorted by accuracy (the percentage of correctly classified sentence pairs). A minimal evaluation sketch follows the table.
| Model | Accuracy (%) | Paper |
|---|---:|---|
| Turing NLR v5 XXL 5.4B (fine-tuned) | 95.9 | - |
| DeBERTa | 94.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| XLNet | 92.5 | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| ALBERT | 91.8 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| StructBERT RoBERTa ensemble | 89.7 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding |
| T5-XL 3B | 89.7 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| HNN ensemble | 89.0 | A Hybrid Neural Network Model for Commonsense Reasoning |
| RoBERTa (ensemble) | 89.0 | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| T5-Large 770M | 85.6 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| HNN | 83.6 | A Hybrid Neural Network Model for Commonsense Reasoning |
| T5-Base 220M | 78.8 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| BERTwiki 340M (fine-tuned on WSCR) | 74.7 | A Surprisingly Robust Trick for Winograd Schema Challenge |
| FLAN 137B (zero-shot) | 74.6 | Finetuned Language Models Are Zero-Shot Learners |
| BERT-large 340M (fine-tuned on WSCR) | 71.9 | A Surprisingly Robust Trick for Winograd Schema Challenge |
| FLAN 137B (few-shot, k=4) | 70.4 | Finetuned Language Models Are Zero-Shot Learners |
| T5-Small 60M | 69.2 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| ERNIE 2.0 Large | 67.8 | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding |
| SqueezeBERT | 65.1 | SqueezeBERT: What can computer vision teach NLP about efficient neural networks? |
| RWKV-4-Raven-14B | 49.3 | RWKV: Reinventing RNNs for the Transformer Era |
| DistilBERT 66M | 44.4 | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter |
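WNLI is a binary sentence-pair entailment task, and the table's single metric is plain classification accuracy. As a minimal sketch of what that metric measures, the snippet below loads WNLI from the GLUE distribution via the Hugging Face `datasets` library and scores a majority-class baseline on the validation split. The library, split choice, and baseline are assumptions for illustration only; the leaderboard numbers above are typically computed on the held-out GLUE test set, not this split.

```python
# Minimal sketch (assumption: Hugging Face `datasets` is installed and the
# GLUE copy of WNLI is used). Scores a majority-class baseline with the
# same accuracy metric reported in the table above.
from collections import Counter

from datasets import load_dataset

# Each WNLI example is a sentence pair with a binary entailment label.
train = load_dataset("glue", "wnli", split="train")
val = load_dataset("glue", "wnli", split="validation")

# Majority-class baseline: always predict the most frequent training label.
majority_label = Counter(train["label"]).most_common(1)[0][0]

# Accuracy = fraction of examples whose label matches the prediction.
correct = sum(int(ex["label"] == majority_label) for ex in val)
print(f"majority-class accuracy: {correct / len(val):.1%}")
```

Replacing `majority_label` with any trained classifier's prediction gives the same accuracy computation; the baseline is included only to make scores like DistilBERT's 44.4 easier to interpret, since they fall below what constant prediction achieves.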