Language Modelling on LAMBADA
Evaluation metric: Accuracy, the percentage of test passages whose final word the model predicts correctly.
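As a rough illustration of how this metric is computed, below is a minimal zero-shot evaluation sketch using the Hugging Face `datasets` and `transformers` libraries. Everything here is an assumption for illustration: `"gpt2"` is a placeholder model (not one of the leaderboard entries), and the sketch assumes the `"lambada"` dataset on the Hugging Face hub with a `"text"` field. Published results use varying prompt formats, shot counts, and detokenization rules, so a run like this will not reproduce the table exactly.

```python
# Minimal sketch of zero-shot LAMBADA last-word accuracy.
# Assumptions: "gpt2" is a stand-in model; the "lambada" hub dataset
# exposes each passage under a "text" field.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; leaderboard models are far larger

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

# Each LAMBADA example is a passage whose final word is only
# guessable from the broader discourse context.
dataset = load_dataset("lambada", split="test")

correct = 0
for example in dataset:
    # The task: hide the final word and ask the model to predict it.
    context, target = example["text"].rsplit(" ", 1)
    inputs = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        # Greedy decoding; a few tokens cover multi-token final words.
        output = model.generate(
            **inputs,
            max_new_tokens=8,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
    continuation = tokenizer.decode(output[0, inputs["input_ids"].shape[1]:])
    predicted_word = continuation.strip().split(" ")[0]
    correct += predicted_word == target

print(f"LAMBADA accuracy: {correct / len(dataset):.1%}")
```

The shot settings in the table change only the prompt: one- and few-shot entries prepend worked example passages before the test context, which is why they can outscore zero-shot runs of the same model.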
Evaluation results: performance of each model on this benchmark.
| Model | Accuracy (%) | Paper Title |
|---|---|---|
| PaLM-540B (Few-Shot) | 89.7 | PaLM: Scaling Language Modeling with Pathways |
| PaLM 2-L (one-shot) | 86.9 | PaLM 2 Technical Report |
| GPT-3 175B (Few-Shot) | 86.4 | Language Models are Few-Shot Learners |
| LLaMA-65B+CFG (Zero-Shot) | 84.0 | Stay on topic with Classifier-Free Guidance |
| LLaMA-30B+CFG (zero-shot) | 83.9 | Stay on topic with Classifier-Free Guidance |
| PaLM 2-M (one-shot) | 83.7 | PaLM 2 Technical Report |
| Cohere Large | 82.33 | - |
| LLaMA-13B+CFG (zero-shot) | 82.2 | Stay on topic with Classifier-Free Guidance |
| PaLM-540B (One-Shot) | 81.8 | PaLM: Scaling Language Modeling with Pathways |
| GLaM 62B/64E (One-Shot) | 80.9 | GLaM: Efficient Scaling of Language Models with Mixture-of-Experts |
| PaLM 2-S (one-shot) | 80.7 | PaLM 2 Technical Report |
| GLM-130B (bidirectional attention) | 80.2 | GLM-130B: An Open Bilingual Pre-trained Model |
| SparseGPT (175B, 2:4 Sparsity) | 79.47 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| SparseGPT (175B, 4:8 Sparsity) | 78.77 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| PaLM-540B (Zero-Shot) | 77.9 | PaLM: Scaling Language Modeling with Pathways |
| Chinchilla (Zero-Shot) | 77.7 | Training Compute-Optimal Large Language Models |
| SparseGPT (175B, 50% Sparsity) | 76.51 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| GPT-3 175B (Zero-Shot) | 76.2 | Language Models are Few-Shot Learners |
| OPT-175B | 75.59 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| GPT-3 13B (Zero-Shot) | 72.5 | Language Models are Few-Shot Learners |
The full leaderboard contains 37 entries; the 20 highest-accuracy results are shown above.