Paraphrase Identification On Quora Question
Metrics
Accuracy
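Accuracy here is the percentage of question pairs the model labels correctly as paraphrase or non-paraphrase. A minimal sketch of how this metric is computed, where the `predict` stub and the example pairs are hypothetical illustrations rather than anything from the listed papers:

```python
def predict(question1: str, question2: str) -> int:
    """Hypothetical stand-in for a fine-tuned sentence-pair classifier."""
    # A real system (e.g., a fine-tuned BERT) would encode both questions
    # jointly and return a binary paraphrase label; this trivial stub just
    # checks for exact string equality.
    return int(question1.strip().lower() == question2.strip().lower())

# Hypothetical (question1, question2, gold_label) triples.
pairs = [
    ("How do I learn Python?", "What is the best way to learn Python?", 1),
    ("How do I learn Python?", "How old is the universe?", 0),
]

correct = sum(predict(q1, q2) == gold for q1, q2, gold in pairs)
accuracy = 100.0 * correct / len(pairs)
print(f"Accuracy: {accuracy:.2f}%")
```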
Results
Performance results of the different models on this benchmark
| Model Name | Accuracy | Paper Title | Repository |
| --- | --- | --- | --- |
| MwAN | 89.12 | Multiway Attention Networks for Modeling Sentence Pairs | - |
| XLNet-Large (ensemble) | 90.3 | XLNet: Generalized Autoregressive Pretraining for Language Understanding | - |
| RoBERTa-large 355M + Entailment as Few-shot Learner | - | Entailment as Few-Shot Learner | - |
| ERNIE | - | ERNIE: Enhanced Language Representation with Informative Entities | - |
| ASA + BERT-base | - | Adversarial Self-Attention for Language Understanding | - |
| TRANS-BLSTM | 88.28 | TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding | - |
| RealFormer | 91.34 | RealFormer: Transformer Likes Residual Attention | - |
| SplitEE-S | - | SplitEE: Early Exit in Deep Neural Networks with Split Computing | - |
| SMART-BERT | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| MT-DNN | 89.6 | Multi-Task Deep Neural Networks for Natural Language Understanding | - |
| GenSen | 87.01 | Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning | - |
| ASA + RoBERTa | - | Adversarial Self-Attention for Language Understanding | - |
| DIIN | 89.06 | Natural Language Inference over Interaction Space | - |
| FNet-Large | - | FNet: Mixing Tokens with Fourier Transforms | - |
| Random | 80 | Self-Explaining Structures Improve NLP Models | - |
| StructBERT RoBERTa ensemble | 90.7 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | - |
| BERT-Base | - | Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning | - |
| BiMPM | 88.17 | Bilateral Multi-Perspective Matching for Natural Language Sentences | - |
| FreeLB | 74.8 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| BERT-LARGE | - | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | - |
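The underlying data is the Quora Question Pairs corpus. A minimal sketch of loading it, assuming the GLUE "qqp" distribution on the Hugging Face Hub (the leaderboard itself does not specify a download source):

```python
# Load the Quora Question Pairs data via the GLUE distribution.
# Assumption: the "glue"/"qqp" dataset on the Hugging Face Hub, which
# exposes question1, question2, and a binary label per pair.
from datasets import load_dataset

qqp = load_dataset("glue", "qqp")  # splits: train / validation / test
example = qqp["validation"][0]
print(example["question1"])
print(example["question2"])
print(example["label"])  # 1 = paraphrase, 0 = not a paraphrase
```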