Natural Language Inference on RTE
Metrics
Accuracy

Results
Performance results of different models on this benchmark
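The only metric reported on this leaderboard is accuracy over the RTE (Recognizing Textual Entailment) examples. The sketch below is a minimal illustration, not any model from the table: it assumes the Hugging Face `datasets` package, the SuperGLUE `rte` config on the Hub, and a hypothetical `accuracy` helper fed by placeholder predictions.

```python
# Minimal sketch (assumptions: the Hugging Face `datasets` package is installed
# and the SuperGLUE "rte" config is available on the Hub; the predictions here
# are a placeholder constant baseline, not any of the models in the table).
from datasets import load_dataset


def accuracy(predictions, references):
    """Fraction of examples whose predicted label matches the gold label."""
    correct = sum(int(p == r) for p, r in zip(predictions, references))
    return correct / len(references)


# RTE is a two-way entailment task: label 0 = entailment, 1 = not_entailment.
rte_val = load_dataset("super_glue", "rte", split="validation")
gold = rte_val["label"]

# Hypothetical predictions: an always-"entailment" baseline, just to show how
# the single leaderboard metric is computed from per-example labels.
preds = [0] * len(gold)
print(f"Accuracy: {accuracy(preds, gold):.1%}")
```

The numbers in the table below are taken from the cited papers; the validation split is used in this sketch only because its labels are publicly available.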
Model Name | Accuracy | Paper Title
Vega v2 6B (KD-based prompt transfer) | 96% | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE
PaLM 540B (fine-tuned) | 95.7% | PaLM: Scaling Language Modeling with Pathways
Turing NLR v5 XXL 5.4B (fine-tuned) | 94.1% | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE
ST-MoE-32B 269B (fine-tuned) | 93.5% | ST-MoE: Designing Stable and Transferable Sparse Expert Models
DeBERTa-1.5B | 93.2% | DeBERTa: Decoding-enhanced BERT with Disentangled Attention
MUPPET RoBERTa Large | 92.8% | Muppet: Massive Multi-task Representations with Pre-Finetuning
DeBERTaV3-large | 92.7% | DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing
T5-XXL 11B | 92.5% | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
T5-XXL 11B (fine-tuned) | 92.5% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
UL2 20B (fine-tuned) | 92.1% | UL2: Unifying Language Learning Paradigms
ST-MoE-L 4.1B (fine-tuned) | 92.1% | ST-MoE: Designing Stable and Transferable Sparse Expert Models
SMART-RoBERTa | 92.0% | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
FLAN 137B (prompt-tuned) | 91.7% | Finetuned Language Models Are Zero-Shot Learners
T5-XL 3B | 91.1% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
RoBERTa-large 355M + Entailment as Few-shot Learner | 90.5% | Entailment as Few-Shot Learner
ALBERT | 89.2% | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Adv-RoBERTa ensemble | 88.7% | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
RoBERTa | 88.2% | RoBERTa: A Robustly Optimized BERT Pretraining Approach
RoBERTa (ensemble) | 88.2% | RoBERTa: A Robustly Optimized BERT Pretraining Approach
T5-Large 738M | 87.4% | LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions