Language Modelling
Language modeling is the task of predicting the next word or character in a document; trained language models can be applied to natural language processing tasks such as text generation, text classification, and question answering. Neural language models displaced N-gram models in the 2010s, and since the 2020s large language models (LLMs) have been the only way to achieve state-of-the-art performance. Models are evaluated by cross-entropy and perplexity; popular datasets include WikiText-103, One Billion Word, Text8, C4, and The Pile.
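Both evaluation metrics follow directly from a model's token-level log-probabilities: perplexity is the exponential of the average cross-entropy. Below is a minimal sketch of such an evaluation, assuming the Hugging Face `transformers` and `torch` packages; GPT-2 is used only because it appears in the benchmarks below, and the helper name `score` is illustrative rather than part of any standard harness.

```python
# Minimal sketch: cross-entropy, perplexity, and bits-per-character
# for GPT-2 on a piece of text. Assumes `torch` and `transformers`
# are installed; `score` is an illustrative helper, not a library API.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def score(text: str, model_name: str = "gpt2") -> dict:
    tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name)
    model.eval()

    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model shifts the targets internally and
        # returns the mean next-token cross-entropy (in nats) over n-1 tokens.
        loss = model(ids, labels=ids).loss.item()

    n_tokens = ids.numel()
    total_nats = loss * (n_tokens - 1)
    return {
        "cross_entropy_nats": loss,
        "perplexity": math.exp(loss),  # token-level perplexity
        # Approximate bits per character, as reported on character-level
        # benchmarks such as enwik8 and Text8.
        "bits_per_char": total_nats / (math.log(2) * len(text)),
    }

print(score("Language modeling predicts the next token in a document."))
```

Word-level benchmarks (e.g. WikiText-103, One Billion Word) report perplexity, while character-level benchmarks (e.g. enwik8, Text8) report bits per character; both are the same cross-entropy expressed on different scales.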
Benchmarks (the best-reported model is shown where the leaderboard lists one):

| Benchmark | Best Model |
| --- | --- |
| Ethereum Phishing Transaction Network | |
| 100 sleep nights of 8 caregivers | Gpt3 |
| 2000 HUB5 English | |
| MMLU | |
| Arxiv HEP-TH citation graph | |
| BIG-bench-lite | GLM-130B (3-shot) |
| Bookcorpus2 | |
| Books3 | |
| C4 | Primer |
| CLUE (AFQMC) | |
| CLUE (C3) | |
| CLUE (CMNLI) | |
| CLUE (CMRC2018) | GLM-130B |
| CLUE (DRCD) | |
| CLUE (OCNLI_50K) | GLM-130B |
| CLUE (WSC1.1) | |
| Curation Corpus | |
| DM Mathematics | |
| enwik8 | GPT-2 (48 layers, h=1600) |
| enwik8 dev | Transformer-LS (small) |
| enwiki8 | PAR Transformer 24B |
| FewCLUE (BUSTM) | |
| FewCLUE (CHID-FC) | |
| FewCLUE (CLUEWSC-FC) | |
| FewCLUE (EPRSTMT) | |
| FewCLUE (OCNLI-FC) | |
| FreeLaw | |
| GitHub | |
| Gutenberg PG-19 | |
| HackerNews | |
| Hutter Prize | Transformer-XL + RMS dynamic eval |
| LAMBADA | GPT-3 175B (Few-Shot) |
| language-modeling-recommendation | GPT2 |
| NIH ExPorter | |
| One Billion Word | MDLM (AR baseline) |
| OpenSubtitles | |
| OpenWebText | GPT2-Hermite |
| OpenWebtext2 | |
| Penn Treebank (Character Level) | Mogrifier LSTM + dynamic eval |
| Penn Treebank (Word Level) | GPT-3 (Zero-Shot) |
| PhilPapers | |
| Pile CC | |
| PTB Diagnostic ECG Database | I-DARTS |
| PubMed Cognitive Control Abstracts | |
| PubMed Central | |
| SALMon | Spirit-LM (Expr.) |
| StackExchange | Gopher |
| Text8 | GPT-2 |
| Text8 dev | Transformer-LS (small) |
| The Pile | Test-Time Fine-Tuning with SIFT + Llama-3.2 (3B) |
| Ubuntu IRC | |
| USPTO Backgrounds | |
| VietMed | Hybrid 4-gram VietMed-Train + ExtraText |
| Wiki-40B | FLASH-Quad-8k |
| WikiText-103 | RETRO (7.5B) |
| WikiText-2 | SparseGPT (175B, 50% Sparsity) |
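Many of the benchmarks above are available as ready-made datasets. As a minimal sketch, assuming the Hugging Face `datasets` package (whose `wikitext` loader provides the standard WikiText-103 release), the test split can be fetched like this:

```python
# Minimal sketch: fetching one of the benchmarks above for evaluation.
# Assumes the Hugging Face `datasets` package; "wikitext" with config
# "wikitext-103-raw-v1" is the standard raw WikiText-103 release.
from datasets import load_dataset

test = load_dataset("wikitext", "wikitext-103-raw-v1", split="test")
text = "\n".join(row["text"] for row in test)
print(f"{len(test)} lines, {len(text)} characters")
# Scoring this text with a model (e.g. the perplexity sketch above)
# yields the test perplexity that the WikiText-103 row reports.
```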