HyperAI초신경
Natural Language Inference On Scitail
Evaluation metric: Dev Accuracy

Evaluation results: the performance of each model on this benchmark.
| Model Name | Dev Accuracy | Paper Title |
|---|---|---|
| MT-DNN-SMART_1%ofTrainingData | 88.6 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| Finetuned Transformer LM | - | Improving Language Understanding by Generative Pre-Training |
| RE2 | - | Simple and Effective Text Matching with Richer Alignment Features |
| MT-DNN-SMARTLARGEv0 | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| SplitEE-S | - | SplitEE: Early Exit in Deep Neural Networks with Split Computing |
| CA-MTL | - | Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data |
| Hierarchical BiLSTM Max Pooling | - | Sentence Embeddings in NLI with Iterative Refinement Encoders |
| MT-DNN | - | Multi-Task Deep Neural Networks for Natural Language Understanding |
| MT-DNN-SMART_0.1%ofTrainingData | 82.3 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| MT-DNN-SMART_100%ofTrainingData | 96.1 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| CAFE | - | Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference |
| MT-DNN-SMART_10%ofTrainingData | 91.3 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| Finetuned Transformer LM | - | - |
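As a minimal sketch of what the Dev Accuracy metric measures: it is the fraction of development-set examples whose predicted label matches the gold label, reported as a percentage. SciTail is a two-way entailment task, so the label names below ("entails" / "neutral") follow its scheme; the sample predictions are illustrative only, not taken from any listed model.

```python
def dev_accuracy(predictions, gold_labels):
    """Return accuracy as a percentage, rounded to one decimal place."""
    if len(predictions) != len(gold_labels):
        raise ValueError("predictions and gold labels must align")
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return round(100.0 * correct / len(gold_labels), 1)

# Illustrative predictions vs. gold labels for four dev examples.
preds = ["entails", "neutral", "entails", "neutral"]
gold  = ["entails", "neutral", "neutral", "neutral"]
print(dev_accuracy(preds, gold))  # 75.0
```

A leaderboard entry like 96.1 simply means 96.1% of dev-set examples were labeled correctly.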