HyperAI
Natural Language Inference on WNLI
Metric: Accuracy

Results: performance of the various models on this benchmark.
| Model Name | Accuracy (%) | Paper Title | Repository |
| --- | --- | --- | --- |
| ALBERT | 91.8 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | - |
| HNN ensemble | 89 | A Hybrid Neural Network Model for Commonsense Reasoning | - |
| StructBERT RoBERTa ensemble | 89.7 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | - |
| SqueezeBERT | 65.1 | SqueezeBERT: What can computer vision teach NLP about efficient neural networks? | - |
| XLNet | 92.5 | XLNet: Generalized Autoregressive Pretraining for Language Understanding | - |
| T5-Base 220M | 78.8 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| BERT-large 340M (fine-tuned on WSCR) | 71.9 | A Surprisingly Robust Trick for Winograd Schema Challenge | - |
| RoBERTa (ensemble) | 89 | RoBERTa: A Robustly Optimized BERT Pretraining Approach | - |
| HNN | 83.6 | A Hybrid Neural Network Model for Commonsense Reasoning | - |
| DistilBERT 66M | 44.4 | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | - |
| FLAN 137B (few-shot, k=4) | 70.4 | Finetuned Language Models Are Zero-Shot Learners | - |
| ERNIE 2.0 Large | 67.8 | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | - |
| T5-Large 770M | 85.6 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| BERTwiki 340M (fine-tuned on WSCR) | 74.7 | A Surprisingly Robust Trick for Winograd Schema Challenge | - |
| FLAN 137B (zero-shot) | 74.6 | Finetuned Language Models Are Zero-Shot Learners | - |
| T5-XL 3B | 89.7 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| DeBERTa | 94.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | - |
| RWKV-4-Raven-14B | 49.3 | RWKV: Reinventing RNNs for the Transformer Era | - |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 95.9 | - | - |
| T5-Small 60M | 69.2 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |