HyperAI超神経
Natural Language Inference on SNLI
Evaluation Metrics
% Test Accuracy
% Train Accuracy
Parameters

Evaluation Results
Performance results of each model on this benchmark.
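As a point of reference for the "% Test Accuracy" metric: SNLI is a three-way classification task (entailment / neutral / contradiction), and examples whose gold label is "-" (no annotator majority) are conventionally excluded from evaluation. The sketch below shows one plausible way to compute the metric; the function name is illustrative, not taken from any leaderboard's actual evaluation code.

```python
# Minimal sketch of how "% Test Accuracy" is typically computed for SNLI.
# SNLI labels are "entailment", "neutral", "contradiction"; a gold label of
# "-" marks examples without annotator consensus, which are skipped.
# The function name is hypothetical, for illustration only.

def snli_test_accuracy(gold_labels, predictions):
    """Return percent accuracy over labeled SNLI examples."""
    # Keep only examples that have a valid gold label.
    pairs = [(g, p) for g, p in zip(gold_labels, predictions) if g != "-"]
    if not pairs:
        return 0.0
    correct = sum(g == p for g, p in pairs)
    return 100.0 * correct / len(pairs)

gold = ["entailment", "contradiction", "-", "neutral"]
pred = ["entailment", "neutral", "entailment", "neutral"]
print(snli_test_accuracy(gold, pred))  # 2 of 3 labeled examples correct
```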
Model | % Test Accuracy | % Train Accuracy | Parameters | Paper Title
UnitedSynT5 (3B) | 94.7 | - | - | First Train to Generate, then Generate to Train: UnitedSynT5 for Few-Shot NLI
UnitedSynT5 (335M) | 93.5 | - | - | First Train to Generate, then Generate to Train: UnitedSynT5 for Few-Shot NLI
Neural Tree Indexers for Text Understanding | 93.1 | - | 355 | Entailment as Few-Shot Learner
EFL (Entailment as Few-shot Learner) + RoBERTa-large | 93.1 | ? | 355m | Entailment as Few-Shot Learner
RoBERTa-large + self-explaining layer | 92.3 | ? | 355m+ | Self-Explaining Structures Improve NLP Models
RoBERTa-large+Self-Explaining | 92.3 | - | 340 | Self-Explaining Structures Improve NLP Models
CA-MTL | 92.1 | 92.6 | 340m | Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data
SemBERT | 91.9 | 94.4 | 339m | Semantics-aware BERT for Language Understanding
MT-DNN-SMARTLARGEv0 | 91.7 | - | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
MT-DNN | 91.6 | 97.2 | 330m | Multi-Task Deep Neural Networks for Natural Language Understanding
SJRC (BERT-Large +SRL) | 91.3 | 95.7 | 308m | Explicit Contextual Semantics for Text Comprehension
Ntumpha | 90.5 | 99.1 | 220 | Multi-Task Deep Neural Networks for Natural Language Understanding
Densely-Connected Recurrent and Co-Attentive Network Ensemble | 90.1 | 95.0 | 53.3m | Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information
MFAE | 90.07 | 93.18 | - | What Do Questions Exactly Ask? MFAE: Duplicate Question Identification with Multi-Fusion Asking Emphasis
Fine-Tuned LM-Pretrained Transformer | 89.9 | 96.6 | 85m | Improving Language Understanding by Generative Pre-Training
300D DMAN Ensemble | 89.6 | 96.1 | 79m | Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference
150D Multiway Attention Network Ensemble | 89.4 | 95.5 | 58m | Multiway Attention Networks for Modeling Sentence Pairs
ESIM + ELMo Ensemble | 89.3 | 92.1 | 40m | Deep contextualized word representations
450D DR-BiLSTM Ensemble | 89.3 | 94.8 | 45m | DR-BiLSTM: Dependent Reading Bidirectional LSTM for Natural Language Inference
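The Parameters column mixes formats ("355m", "53.3m", bare "340", "-", "?"). To sort or compare entries programmatically, the values first need normalizing; the helper below is a hypothetical sketch that assumes bare numbers are also in millions, matching the rest of the column.

```python
# Hypothetical helper to normalize leaderboard parameter strings to raw
# counts. Assumption (not stated by the source): bare numbers like "340"
# are millions, consistent with neighboring "340m"-style entries.

def parse_param_count(text):
    """Convert a parameter string to an int count, or None if unknown."""
    text = text.strip().lower().rstrip("+")  # "355m+" -> "355m"
    if text in ("-", "?", ""):
        return None  # missing or unreported
    if text.endswith("m"):
        return int(float(text[:-1]) * 1_000_000)
    return int(float(text) * 1_000_000)  # bare number: assume millions

print(parse_param_count("53.3m"))   # 53300000
print(parse_param_count("355m+"))   # 355000000
print(parse_param_count("-"))       # None
```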
Showing the top entries of 98 total rows.