HyperAI

Natural Language Inference On Scitail

Metrics

Dev Accuracy

Results

Performance results of various models on this benchmark

| Model Name | Dev Accuracy | Paper Title | Repository |
|---|---|---|---|
| MT-DNN-SMART_1%ofTrainingData | 88.6 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| Finetuned Transformer LM | - | Improving Language Understanding by Generative Pre-Training | - |
| RE2 | - | Simple and Effective Text Matching with Richer Alignment Features | - |
| MT-DNN-SMARTLARGEv0 | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| SplitEE-S | - | SplitEE: Early Exit in Deep Neural Networks with Split Computing | - |
| CA-MTL | - | Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data | - |
| Hierarchical BiLSTM Max Pooling | - | Sentence Embeddings in NLI with Iterative Refinement Encoders | - |
| MT-DNN | - | Multi-Task Deep Neural Networks for Natural Language Understanding | - |
| MT-DNN-SMART_0.1%ofTrainingData | 82.3 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| MT-DNN-SMART_100%ofTrainingData | 96.1 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| CAFE | - | Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference | - |
| MT-DNN-SMART_10%ofTrainingData | 91.3 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| Finetuned Transformer LM | - | - | - |