Natural Language Inference on SNLI

Task: Natural Language Inference
Evaluation Metrics
% Test Accuracy
% Train Accuracy
Parameters
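For reference, % Test Accuracy is the fraction of SNLI test pairs whose three-way label (entailment / neutral / contradiction) a model predicts correctly. A minimal sketch of that computation, assuming the Hugging Face `datasets` copy of SNLI and a hypothetical `predict()` stub in place of any model listed below:

```python
# Sketch: computing "% Test Accuracy" on SNLI.
# Assumes the Hugging Face `datasets` release of the corpus; `predict()`
# is a hypothetical placeholder, not any specific leaderboard model.
from datasets import load_dataset

def predict(premise: str, hypothesis: str) -> int:
    # Placeholder baseline (always predicts "neutral", label 1);
    # swap in the model under evaluation here.
    return 1

test = load_dataset("snli", split="test")
# SNLI marks pairs without annotator consensus with label -1;
# they are conventionally excluded from the accuracy denominator.
test = test.filter(lambda ex: ex["label"] != -1)

correct = sum(
    predict(ex["premise"], ex["hypothesis"]) == ex["label"] for ex in test
)
print(f"% Test Accuracy: {100.0 * correct / len(test):.1f}")
```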
Evaluation Results
Performance of each model on this benchmark.
| Model | % Test Accuracy | % Train Accuracy | Parameters (M) | Paper Title |
| --- | --- | --- | --- | --- |
| UnitedSynT5 (3B) | 94.7 | - | - | First Train to Generate, then Generate to Train: UnitedSynT5 for Few-Shot NLI |
| UnitedSynT5 (335M) | 93.5 | - | - | First Train to Generate, then Generate to Train: UnitedSynT5 for Few-Shot NLI |
| EFL (Entailment as Few-shot Learner) + RoBERTa-large | 93.1 | - | 355 | Entailment as Few-Shot Learner |
| RoBERTa-large + self-explaining layer | 92.3 | - | 355+ | Self-Explaining Structures Improve NLP Models |
| RoBERTa-large + Self-Explaining | 92.3 | - | 340 | Self-Explaining Structures Improve NLP Models |
| CA-MTL | 92.1 | 92.6 | 340 | Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data |
| SemBERT | 91.9 | 94.4 | 339 | Semantics-aware BERT for Language Understanding |
| MT-DNN-SMART_LARGEv0 | 91.7 | - | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| MT-DNN | 91.6 | 97.2 | 330 | Multi-Task Deep Neural Networks for Natural Language Understanding |
| SJRC (BERT-Large + SRL) | 91.3 | 95.7 | 308 | Explicit Contextual Semantics for Text Comprehension |
| Ntumpha | 90.5 | 99.1 | 220 | Multi-Task Deep Neural Networks for Natural Language Understanding |
| Densely-Connected Recurrent and Co-Attentive Network Ensemble | 90.1 | 95.0 | 53.3 | Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information |
| MFAE | 90.07 | 93.18 | - | What Do Questions Exactly Ask? MFAE: Duplicate Question Identification with Multi-Fusion Asking Emphasis |
| Fine-Tuned LM-Pretrained Transformer | 89.9 | 96.6 | 85 | Improving Language Understanding by Generative Pre-Training |
| 300D DMAN Ensemble | 89.6 | 96.1 | 79 | Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference |
| 150D Multiway Attention Network Ensemble | 89.4 | 95.5 | 58 | Multiway Attention Networks for Modeling Sentence Pairs |
| ESIM + ELMo Ensemble | 89.3 | 92.1 | 40 | Deep contextualized word representations |
| 450D DR-BiLSTM Ensemble | 89.3 | 94.8 | 45 | DR-BiLSTM: Dependent Reading Bidirectional LSTM for Natural Language Inference |
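Every model in the table consumes a (premise, hypothesis) pair and emits one of the three NLI labels. A minimal sketch of that input/output contract, using the publicly available `roberta-large-mnli` checkpoint (trained on MNLI, not SNLI; it stands in here only to illustrate the task format, not any specific leaderboard entry):

```python
# Sketch: classifying one premise/hypothesis pair with an NLI cross-encoder.
# `roberta-large-mnli` is a public MNLI-trained checkpoint used purely
# for illustration of the task interface.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# The tokenizer joins the pair into a single cross-encoder input.
inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the argmax logit back to the checkpoint's label names.
print(model.config.id2label[logits.argmax(-1).item()])  # expected: ENTAILMENT
```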
The full leaderboard contains 98 entries; the rows above show the top results.