HyperAI
Natural Language Inference on QNLI
Metrics: Accuracy

Results

Performance results of various models on this benchmark.
| Model Name | Accuracy | Paper Title |
|---|---|---|
| FNet-Large | 85% | FNet: Mixing Tokens with Fourier Transforms |
| RealFormer | 91.89% | RealFormer: Transformer Likes Residual Attention |
| Nyströmformer | 88.7% | Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention |
| Q-BERT (Shen et al., 2020) | 93.0% | Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT |
| DeBERTaV3large | 96% | DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing |
| Charformer-Tall | 91.0% | Charformer: Fast Character Transformers via Gradient-based Subword Tokenization |
| ELECTRA | 95.4% | - |
| SMART-BERT | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| data2vec | 91.1% | data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language |
| SpanBERT | 94.3% | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| 24hBERT | 90.6% | How to Train BERT with an Academic Budget |
| ASA + RoBERTa | 93.6% | Adversarial Self-Attention for Language Understanding |
| PSQ (Chen et al., 2020) | 94.5% | A Statistical Framework for Low-bitwidth Training of Deep Neural Networks |
| TRANS-BLSTM | 94.08% | TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding |
| T5-Small | 90.3% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| ASA + BERT-base | 91.4% | Adversarial Self-Attention for Language Understanding |
| ALICE | 99.2% | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| T5-Base | 93.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| T5-11B | 96.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| ERNIE | 91.3% | ERNIE: Enhanced Language Representation with Informative Entities |
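The Accuracy numbers reported above are the fraction of QNLI validation/test pairs a model labels correctly; QNLI is a binary task where each (question, sentence) pair is labeled "entailment" or "not_entailment". A minimal sketch of the metric, assuming gold labels and model predictions are available as plain string lists (the names `gold` and `preds` are illustrative, not from any specific library):

```python
# Sketch of the QNLI leaderboard metric (accuracy) over label lists.
# Labels are the two QNLI classes: "entailment" / "not_entailment".

def accuracy(gold, preds):
    """Return the fraction of predictions matching the gold labels."""
    if len(gold) != len(preds):
        raise ValueError("gold and preds must be the same length")
    correct = sum(g == p for g, p in zip(gold, preds))
    return correct / len(gold)

gold = ["entailment", "not_entailment", "entailment", "not_entailment"]
preds = ["entailment", "not_entailment", "not_entailment", "not_entailment"]
print(f"{accuracy(gold, preds):.2%}")  # 3 of 4 pairs correct
```

A leaderboard entry of, say, 94.3% corresponds to `accuracy` returning 0.943 over the full evaluation split.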