HyperAI
Question Answering on StoryCloze
Evaluation Metric
Accuracy
Evaluation Results
Performance results of each model on this benchmark
| Model Name | Accuracy | Paper Title |
| --- | --- | --- |
| BLOOMZ | 96.3 | Crosslingual Generalization through Multitask Finetuning |
| Flipped-3B | 95.88 | Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners |
| FLAN 137B (few-shot, k=10) | 94.7 | Finetuned Language Models Are Zero-Shot Learners |
| T0-3B (CoT fine-tuned) | 94.5 | The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning |
| KiC-770M | 94.40 | Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models |
| FLAN 137B (zero-shot) | 93.4 | Finetuned Language Models Are Zero-Shot Learners |
| Reading Strategies Model | 88.3 | Improving Machine Reading Comprehension with General Reading Strategies |
| Finetuned Transformer LM | 86.5 | Improving Language Understanding by Generative Pre-Training |
| RoE-3B | 86.33 | Exploring the Benefits of Training Expert Language Models over Instruction Tuning |
| OPT-175B | 79.82 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| SparseGPT (175B, 50% Sparsity) | 78.87 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| Memory chains and semantic supervision | 78.7 | - |
| Hidden Coherence Model | 77.6 | Story Comprehension for Predicting What Happens Next |
| SparseGPT (175B, 4:8 Sparsity) | 77.02 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| val-LS-skip | 76.5 | A Simple and Effective Approach to the Story Cloze Test |
| SparseGPT (175B, 2:4 Sparsity) | 76.19 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| sMLP – deterministic 9.4B (0-shot) | 74.7 | Efficient Language Modeling with Sparse all-MLP |
| Switch Transformer 9B | 73.3 | Efficient Language Modeling with Sparse all-MLP |
| GPT-3 Large 760M (zero-shot) | 72.4 | Language Models are Few-Shot Learners |
| Gshard 9B | 67.9 | Efficient Language Modeling with Sparse all-MLP |
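The Accuracy metric above follows the standard Story Cloze protocol: the model is shown a short story context and two candidate endings, and is scored on how often it prefers the correct one. A minimal sketch of that evaluation loop, assuming a generic `score_fn(context, ending)` where a higher score means a more plausible ending (the toy word-overlap scorer below is purely illustrative, not any listed model's method):

```python
def storycloze_accuracy(examples, score_fn):
    """Accuracy (%) of picking the correct ending out of two candidates.

    examples: list of (context, ending_a, ending_b, gold_idx) tuples,
              where gold_idx is 0 or 1.
    score_fn: callable(context, ending) -> float; higher = more plausible.
    """
    correct = 0
    for context, ending_a, ending_b, gold_idx in examples:
        # Pick whichever ending the model scores higher (ties go to A).
        pred = 0 if score_fn(context, ending_a) >= score_fn(context, ending_b) else 1
        correct += (pred == gold_idx)
    return 100.0 * correct / len(examples)


def overlap_score(context, ending):
    """Toy stand-in scorer: counts words shared with the context."""
    return len(set(context.lower().split()) & set(ending.lower().split()))


examples = [
    ("Tom studied hard for the exam.",
     "Tom passed the exam.", "Tom bought a boat.", 0),
    ("It rained all day.",
     "Everyone enjoyed the sun.", "It rained on the parade.", 1),
]
print(storycloze_accuracy(examples, overlap_score))  # 100.0 on this toy set
```

In the leaderboard's zero-shot rows, `score_fn` would typically be the language model's log-likelihood of each ending given the context; the loop itself stays the same.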