Paraphrase Identification on Quora Question Pairs
Evaluation Metric
Accuracy

Evaluation Results
Performance results of each model on this benchmark.
| Model Name | Accuracy | Paper Title | Repository |
|------------|----------|-------------|------------|
| MwAN | 89.12 | Multiway Attention Networks for Modeling Sentence Pairs | |
| XLNet-Large (ensemble) | 90.3 | XLNet: Generalized Autoregressive Pretraining for Language Understanding | |
| RoBERTa-large 355M + Entailment as Few-shot Learner | - | Entailment as Few-Shot Learner | |
| ERNIE | - | ERNIE: Enhanced Language Representation with Informative Entities | |
| ASA + BERT-base | - | Adversarial Self-Attention for Language Understanding | |
| TRANS-BLSTM | 88.28 | TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding | - |
| RealFormer | 91.34 | RealFormer: Transformer Likes Residual Attention | |
| SplitEE-S | - | SplitEE: Early Exit in Deep Neural Networks with Split Computing | |
| SMART-BERT | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | |
| MT-DNN | 89.6 | Multi-Task Deep Neural Networks for Natural Language Understanding | |
| GenSen | 87.01 | Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning | |
| ASA + RoBERTa | - | Adversarial Self-Attention for Language Understanding | |
| DIIN | 89.06 | Natural Language Inference over Interaction Space | |
| FNet-Large | - | FNet: Mixing Tokens with Fourier Transforms | |
| Random | 80 | Self-Explaining Structures Improve NLP Models | |
| StructBERTRoBERTa ensemble | 90.7 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | - |
| BERT-Base | - | Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning | - |
| BiMPM | 88.17 | Bilateral Multi-Perspective Matching for Natural Language Sentences | |
| FreeLB | 74.8 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | |
| BERT-LARGE | - | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |
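
The Accuracy column reports plain pair-classification accuracy: each Quora question pair is labeled as paraphrase (duplicate) or not, and the score is the fraction of test pairs classified correctly. The sketch below shows that evaluation loop using the Hugging Face `transformers` API; the checkpoint path and the toy question pairs are placeholder assumptions, not artifacts of any entry in the table above.

```python
# Minimal sketch of QQP-style paraphrase-identification evaluation.
# MODEL_NAME is a placeholder; substitute any sequence-classification
# checkpoint fine-tuned on Quora Question Pairs (label 1 = paraphrase).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "path/to/your-qqp-finetuned-model"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

# Toy evaluation pairs: (question1, question2, gold label).
pairs = [
    ("How do I learn Python quickly?",
     "What is the fastest way to learn Python?", 1),
    ("How do I learn Python quickly?",
     "What is the capital of France?", 0),
]

correct = 0
with torch.no_grad():
    for q1, q2, gold in pairs:
        # Encode the two questions as a single sentence pair.
        inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True)
        pred = model(**inputs).logits.argmax(dim=-1).item()
        correct += int(pred == gold)

# Accuracy = correctly classified pairs / total pairs.
print(f"Accuracy: {correct / len(pairs):.4f}")
```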