Language Modelling On Lambada
Evaluation metric: Accuracy (%)

Evaluation results: the accuracy of each model on this benchmark. The label in parentheses after a model name gives the evaluation setting, i.e. how many in-context examples were supplied (zero-shot: none, one-shot: one, few-shot: several).
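LAMBADA tests long-range context use: the model reads a passage with its final word removed and must produce exactly that word, and accuracy is the fraction of passages answered correctly. Below is a minimal sketch of this evaluation loop, assuming a HuggingFace causal LM (GPT-2 serves as a stand-in; greedy decoding and the whitespace handling are simplifying assumptions, not the exact protocol of any paper in the table):

```python
# Minimal LAMBADA-style last-word accuracy. GPT-2 is a stand-in model;
# greedy decoding approximates the usual exact-match protocol.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def lambada_accuracy(passages: list[str]) -> float:
    correct = 0
    for text in passages:
        context, target = text.rsplit(" ", 1)  # hold out the final word
        inputs = tokenizer(context, return_tensors="pt")
        with torch.no_grad():
            out = model.generate(
                **inputs,
                max_new_tokens=5,               # enough tokens for one word
                do_sample=False,                # greedy decoding
                pad_token_id=tokenizer.eos_token_id,
            )
        continuation = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:])
        predicted = continuation.strip().split(" ")[0]
        correct += predicted == target          # exact word match
    return correct / len(passages)
```

Reported numbers also depend on how the LAMBADA text is detokenized and on whether matching is done at the word or subword level, which is one reason scores for the same model can differ slightly across papers.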
| Model | Accuracy (%) | Paper |
| --- | --- | --- |
| PaLM-540B (Few-Shot) | 89.7 | PaLM: Scaling Language Modeling with Pathways |
| PaLM 2-L (One-Shot) | 86.9 | PaLM 2 Technical Report |
| GPT-3 175B (Few-Shot) | 86.4 | Language Models are Few-Shot Learners |
| LLaMA-65B+CFG (Zero-Shot) | 84.0 | Stay on topic with Classifier-Free Guidance |
| LLaMA-30B+CFG (Zero-Shot) | 83.9 | Stay on topic with Classifier-Free Guidance |
| PaLM 2-M (One-Shot) | 83.7 | PaLM 2 Technical Report |
| Cohere Large | 82.33 | - |
| LLaMA-13B+CFG (Zero-Shot) | 82.2 | Stay on topic with Classifier-Free Guidance |
| PaLM-540B (One-Shot) | 81.8 | PaLM: Scaling Language Modeling with Pathways |
| GLaM 62B/64E (One-Shot) | 80.9 | GLaM: Efficient Scaling of Language Models with Mixture-of-Experts |
| PaLM 2-S (One-Shot) | 80.7 | PaLM 2 Technical Report |
| GLM-130B (bidirectional attention) | 80.2 | GLM-130B: An Open Bilingual Pre-trained Model |
| SparseGPT (175B, 2:4 Sparsity) | 79.47 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| SparseGPT (175B, 4:8 Sparsity) | 78.77 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| PaLM-540B (Zero-Shot) | 77.9 | PaLM: Scaling Language Modeling with Pathways |
| Chinchilla (Zero-Shot) | 77.7 | Training Compute-Optimal Large Language Models |
| SparseGPT (175B, 50% Sparsity) | 76.51 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| GPT-3 175B (Zero-Shot) | 76.2 | Language Models are Few-Shot Learners |
| OPT-175B | 75.59 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot |
| GPT-3 13B (Zero-Shot) | 72.5 | Language Models are Few-Shot Learners |

The site lists 37 entries for this benchmark; the top 20 are shown here.
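Several of the strongest zero-shot entries (the LLaMA+CFG rows) apply classifier-free guidance at decoding time: next-token logits are computed twice, once with the full conditioning context and once with it stripped, and the unconditional logits are extrapolated toward the conditional ones by a guidance weight gamma. A minimal sketch of that blending step, with a hypothetical `lm_logits` callable standing in for one forward pass of a causal LM:

```python
# Classifier-free guidance for next-token logits, sketched with a
# hypothetical `lm_logits` callable in place of a real causal LM.
import torch

def cfg_next_token_logits(lm_logits, cond_ids, uncond_ids, gamma=1.5):
    """gamma = 1 recovers ordinary decoding; gamma > 1 pushes the
    model to stay consistent with the conditioning context."""
    cond = lm_logits(cond_ids)      # logits given the full context
    uncond = lm_logits(uncond_ids)  # logits given the stripped prompt
    return uncond + gamma * (cond - uncond)

# Toy demonstration with fixed logits in place of a real model.
fake_lm = lambda ids: torch.tensor([1.0, 0.0, -1.0]) if ids else torch.zeros(3)
print(cfg_next_token_logits(fake_lm, cond_ids=[1, 2, 3], uncond_ids=[]))
# tensor([ 1.5000,  0.0000, -1.5000])
```

Guidance doubles the forward-pass cost per decoded token, but on this benchmark it lets LLaMA-65B's zero-shot score (84.0) approach GPT-3 175B's few-shot result (86.4).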