
Paraphrase Identification on Quora Question Pairs

Metrics

Accuracy
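
Accuracy here is the share of question pairs a model labels correctly as paraphrase or not paraphrase. A minimal sketch of how the metric is computed, assuming binary gold labels and predictions (the values below are hypothetical placeholders, not results from the leaderboard):

```python
# Minimal sketch of the Accuracy metric for paraphrase identification:
# each example is a pair of Quora questions labeled 1 (paraphrase) or
# 0 (not a paraphrase); accuracy is the fraction of pairs classified
# correctly. Labels and predictions here are hypothetical placeholders.

def accuracy(predictions, labels):
    """Return the fraction of question pairs classified correctly."""
    if not labels:
        raise ValueError("labels must be non-empty")
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

gold = [1, 0, 1, 1, 0]  # gold paraphrase labels for five question pairs
pred = [1, 0, 0, 1, 0]  # a model's binary predictions
print(f"Accuracy: {accuracy(pred, gold):.2%}")  # -> Accuracy: 80.00%
```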

Results

Performance results of various models on this benchmark

| Model name | Accuracy (%) | Paper Title | Repository |
|------------|--------------|-------------|------------|
| MwAN | 89.12 | Multiway Attention Networks for Modeling Sentence Pairs | |
| XLNet-Large (ensemble) | 90.3 | XLNet: Generalized Autoregressive Pretraining for Language Understanding | |
| RoBERTa-large 355M + Entailment as Few-shot Learner | - | Entailment as Few-Shot Learner | |
| ERNIE | - | ERNIE: Enhanced Language Representation with Informative Entities | |
| ASA + BERT-base | - | Adversarial Self-Attention for Language Understanding | |
| TRANS-BLSTM | 88.28 | TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding | - |
| RealFormer | 91.34 | RealFormer: Transformer Likes Residual Attention | |
| SplitEE-S | - | SplitEE: Early Exit in Deep Neural Networks with Split Computing | |
| SMART-BERT | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | |
| MT-DNN | 89.6 | Multi-Task Deep Neural Networks for Natural Language Understanding | |
| GenSen | 87.01 | Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning | |
| ASA + RoBERTa | - | Adversarial Self-Attention for Language Understanding | |
| DIIN | 89.06 | Natural Language Inference over Interaction Space | |
| FNet-Large | - | FNet: Mixing Tokens with Fourier Transforms | |
| Random | 80 | Self-Explaining Structures Improve NLP Models | |
| StructBERT RoBERTa ensemble | 90.7 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | - |
| BERT-Base | - | Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning | - |
| BiMPM | 88.17 | Bilateral Multi-Perspective Matching for Natural Language Sentences | |
| FreeLB | 74.8 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | |
| BERT-LARGE | - | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |