Natural Language Inference on SciTail
Evaluation metric: Dev Accuracy (accuracy on the SciTail dev split)
Evaluation results: how each model performs on this benchmark
| Model Name | Dev Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| MT-DNN-SMART_1%ofTrainingData | 88.6 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| Finetuned Transformer LM | - | Improving Language Understanding by Generative Pre-Training | - |
| RE2 | - | Simple and Effective Text Matching with Richer Alignment Features | - |
| MT-DNN-SMARTLARGEv0 | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| SplitEE-S | - | SplitEE: Early Exit in Deep Neural Networks with Split Computing | - |
| CA-MTL | - | Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data | - |
| Hierarchical BiLSTM Max Pooling | - | Sentence Embeddings in NLI with Iterative Refinement Encoders | - |
| MT-DNN | - | Multi-Task Deep Neural Networks for Natural Language Understanding | - |
| MT-DNN-SMART_0.1%ofTrainingData | 82.3 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| MT-DNN-SMART_100%ofTrainingData | 96.1 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| CAFE | - | Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference | - |
| MT-DNN-SMART_10%ofTrainingData | 91.3 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| Finetuned Transformer LM | - | - | - |
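For reference, the sketch below shows how the Dev Accuracy metric is typically computed: the fraction of SciTail dev-split examples (premise-hypothesis pairs labelled "entails" or "neutral") for which the predicted label matches the gold label, reported above as a percentage. The `dev_accuracy` helper, the `model_predict` callable, and the toy examples are illustrative placeholders, not part of any listed system.

```python
from typing import Callable, Iterable, Tuple

Example = Tuple[str, str, str]  # (premise, hypothesis, gold_label)

def dev_accuracy(dev_set: Iterable[Example],
                 model_predict: Callable[[str, str], str]) -> float:
    """Fraction of dev examples whose predicted label matches the gold label."""
    correct = 0
    total = 0
    for premise, hypothesis, gold_label in dev_set:
        total += 1
        if model_predict(premise, hypothesis) == gold_label:
            correct += 1
    return correct / total if total else 0.0

if __name__ == "__main__":
    # Toy stand-in for the real SciTail dev split (~1.3k examples);
    # SciTail is a binary entailment task with labels "entails" / "neutral".
    toy_dev = [
        ("Plants release oxygen during photosynthesis.",
         "Photosynthesis produces oxygen.", "entails"),
        ("The sun is a star.",
         "The moon orbits Mars.", "neutral"),
    ]
    # Hypothetical baseline that always predicts "entails".
    always_entails = lambda premise, hypothesis: "entails"
    print(f"Dev Accuracy: {dev_accuracy(toy_dev, always_entails):.1%}")  # -> 50.0%
```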