Natural Language Inference on SciTail
Metrics
Dev Accuracy
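Dev Accuracy is the percentage of SciTail dev-set premise/hypothesis pairs whose predicted label matches the gold label. A minimal sketch of the computation, assuming predictions and gold labels are already available as plain label strings ("entails" or "neutral", SciTail's two classes):

```python
def dev_accuracy(predictions: list[str], gold_labels: list[str]) -> float:
    """Percentage of dev examples whose predicted label matches the gold label."""
    assert len(predictions) == len(gold_labels)
    correct = sum(pred == gold for pred, gold in zip(predictions, gold_labels))
    return 100.0 * correct / len(gold_labels)

# Worked example: 3 of 4 dev pairs predicted correctly -> 75.0
print(dev_accuracy(
    ["entails", "neutral", "entails", "neutral"],
    ["entails", "neutral", "neutral", "neutral"],
))
```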
Results
Performance results of various models on this benchmark
| Model name | Dev Accuracy | Paper Title | Repository |
| --- | --- | --- | --- |
| MT-DNN-SMART_1%ofTrainingData | 88.6 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| Finetuned Transformer LM | - | Improving Language Understanding by Generative Pre-Training | - |
| RE2 | - | Simple and Effective Text Matching with Richer Alignment Features | - |
| MT-DNN-SMARTLARGEv0 | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| SplitEE-S | - | SplitEE: Early Exit in Deep Neural Networks with Split Computing | - |
| CA-MTL | - | Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data | - |
| Hierarchical BiLSTM Max Pooling | - | Sentence Embeddings in NLI with Iterative Refinement Encoders | - |
| MT-DNN | - | Multi-Task Deep Neural Networks for Natural Language Understanding | - |
| MT-DNN-SMART_0.1%ofTrainingData | 82.3 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| MT-DNN-SMART_100%ofTrainingData | 96.1 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| CAFE | - | Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference | - |
| MT-DNN-SMART_10%ofTrainingData | 91.3 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| Finetuned Transformer LM | - | - | - |
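To reproduce a Dev Accuracy figure for any of these models, you first need the SciTail dev split. A hedged sketch, assuming the dataset is mirrored on the Hugging Face Hub as `allenai/scitail` with the `tsv_format` configuration (the Hub id, config name, and field names `premise`, `hypothesis`, `label` are assumptions, not taken from this page):

```python
from datasets import load_dataset

# Assumed Hub id and config; the original SciTail release is also
# distributed directly by AllenAI as TSV files.
dev = load_dataset("allenai/scitail", "tsv_format", split="validation")

# Each example is a premise/hypothesis pair labeled "entails" or "neutral".
example = dev[0]
print(example["premise"], example["hypothesis"], example["label"], sep="\n")

# Feed model predictions into dev_accuracy() from the sketch above
# to obtain a number comparable to the Dev Accuracy column.
```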