Language Modelling on LAMBADA
Metrics
Accuracy: the percentage of test passages for which the model predicts the final word of the passage correctly. LAMBADA is constructed so that the target word is guessable from the broad discourse context but not from the final sentence alone.
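Concretely, the evaluation reduces to last-word prediction. The sketch below is a minimal illustration only, assuming a Hugging Face causal LM (GPT-2 as a stand-in), the EleutherAI/lambada_openai test split, greedy decoding, and whitespace word-splitting; none of these match the exact protocols of the papers listed below.

```python
# Minimal LAMBADA-style accuracy sketch. Assumptions: GPT-2 as a stand-in model,
# the EleutherAI/lambada_openai test split, greedy decoding, and naive whitespace
# splitting of the final word -- not any paper's exact evaluation protocol.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
data = load_dataset("EleutherAI/lambada_openai", split="test")

correct = 0
for example in data:
    # Hold out the final word of the passage as the prediction target.
    context, _, target = example["text"].rpartition(" ")
    inputs = tok(context, return_tensors="pt")
    # Greedily generate a few tokens, enough to cover one word.
    out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    continuation = tok.decode(out[0, inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)
    words = continuation.strip().split()
    correct += bool(words) and words[0] == target

print(f"accuracy = {correct / len(data):.4f}")
```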
Results
Performance of various models on this benchmark:

| Model Name | Accuracy (%) | Paper Title |
| --- | --- | --- |
| PaLM-540B (Few-Shot) | 89.7 | PaLM: Scaling Language Modeling with Pathways |
| PaLM 2-L (One-Shot) | 86.9 | PaLM 2 Technical Report |
| GPT-3 175B (Few-Shot) | 86.4 | Language Models are Few-Shot Learners |
| LLaMA-65B+CFG (Zero-Shot) | 84.0 | Stay on topic with Classifier-Free Guidance |
| LLaMA-30B+CFG (Zero-Shot) | 83.9 | Stay on topic with Classifier-Free Guidance |
| PaLM 2-M (One-Shot) | 83.7 | PaLM 2 Technical Report |
| Cohere Large | 82.33 | - |
| LLaMA-13B+CFG (Zero-Shot) | 82.2 | Stay on topic with Classifier-Free Guidance |
| PaLM-540B (One-Shot) | 81.8 | PaLM: Scaling Language Modeling with Pathways |
| GLaM 62B/64E (One-Shot) | 80.9 | GLaM: Efficient Scaling of Language Models with Mixture-of-Experts |
| PaLM 2-S (One-Shot) | 80.7 | PaLM 2 Technical Report |
| GLM-130B (bidirectional attention) | 80.2 | GLM-130B: An Open Bilingual Pre-trained Model |
| SparseGPT (175B, 2:4 Sparsity) | 79.47 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| SparseGPT (175B, 4:8 Sparsity) | 78.77 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| PaLM-540B (Zero-Shot) | 77.9 | PaLM: Scaling Language Modeling with Pathways |
| Chinchilla (Zero-Shot) | 77.7 | Training Compute-Optimal Large Language Models |
| SparseGPT (175B, 50% Sparsity) | 76.51 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| GPT-3 175B (Zero-Shot) | 76.2 | Language Models are Few-Shot Learners |
| OPT-175B | 75.59 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| GPT-3 13B (Zero-Shot) | 72.5 | Language Models are Few-Shot Learners |
(Showing the top 20 of 37 results.)
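The LLaMA "+CFG" rows apply classifier-free guidance at inference time: next-token logits conditioned on the full context are blended with logits from a weaker (shorter or empty) context. The sketch below shows only the logit blend in the spirit of "Stay on topic with Classifier-Free Guidance"; the guidance strength gamma and the random logits are illustrative assumptions, not the paper's setup.

```python
# Minimal classifier-free guidance blend for next-token logits. The gamma value
# and the toy random logits are illustrative assumptions.
import torch

def cfg_logits(cond: torch.Tensor, uncond: torch.Tensor,
               gamma: float = 1.5) -> torch.Tensor:
    # gamma = 1.0 recovers ordinary conditional decoding; gamma > 1.0 pushes
    # the distribution toward tokens favored by the conditioning context.
    return uncond + gamma * (cond - uncond)

# Toy usage over a 6-token vocabulary.
cond, uncond = torch.randn(6), torch.randn(6)
next_token = torch.softmax(cfg_logits(cond, uncond), dim=-1).argmax()
print(next_token)
```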