Natural Language Inference
Natural Language Inference on RTE
Evaluation metric: Accuracy
Evaluation results
Performance of each model on this benchmark
| Model Name | Accuracy | Paper Title |
| --- | --- | --- |
| Vega v2 6B (KD-based prompt transfer) | 96% | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| PaLM 540B (fine-tuned) | 95.7% | PaLM: Scaling Language Modeling with Pathways |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 94.1% | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| ST-MoE-32B 269B (fine-tuned) | 93.5% | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| DeBERTa-1.5B | 93.2% | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| MUPPET RoBERTa Large | 92.8% | Muppet: Massive Multi-task Representations with Pre-Finetuning |
| DeBERTaV3-large | 92.7% | DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing |
| T5-XXL 11B | 92.5% | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| T5-XXL 11B (fine-tuned) | 92.5% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| UL2 20B (fine-tuned) | 92.1% | UL2: Unifying Language Learning Paradigms |
| ST-MoE-L 4.1B (fine-tuned) | 92.1% | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| SMART-RoBERTa | 92.0% | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| FLAN 137B (prompt-tuned) | 91.7% | Finetuned Language Models Are Zero-Shot Learners |
| T5-XL 3B | 91.1% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| RoBERTa-large 355M + Entailment as Few-shot Learner | 90.5% | Entailment as Few-Shot Learner |
| ALBERT | 89.2% | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| Adv-RoBERTa ensemble | 88.7% | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding |
| RoBERTa | 88.2% | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| RoBERTa (ensemble) | 88.2% | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| T5-Large 738M | 87.4% | LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions |
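For reference, Accuracy on RTE is simply the fraction of premise-hypothesis pairs whose entailment label (entailment vs. not_entailment) is predicted correctly. The sketch below illustrates this; it assumes the SuperGLUE RTE validation split loaded via Hugging Face `datasets` and a hypothetical `predict_label` classifier, neither of which is specified on this page.

```python
# Minimal sketch of the Accuracy metric used in the table above.
# Assumptions: SuperGLUE RTE from Hugging Face `datasets`; `predict_label`
# is a hypothetical stand-in for any of the listed models.
from datasets import load_dataset


def predict_label(premise: str, hypothesis: str) -> int:
    """Hypothetical model call; returns 0 (entailment) or 1 (not_entailment)."""
    raise NotImplementedError


def rte_accuracy() -> float:
    # Each RTE example has "premise", "hypothesis", and a binary "label".
    rte = load_dataset("super_glue", "rte", split="validation")
    correct = sum(
        int(predict_label(ex["premise"], ex["hypothesis"]) == ex["label"])
        for ex in rte
    )
    # Accuracy = correctly labeled pairs / total pairs.
    return correct / len(rte)
```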