# Language Modelling On Penn Treebank Character
## Evaluation Metrics

- Bit per Character (BPC)
- Number of params
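For context, BPC is the average number of bits a model needs to encode each character of the test text: the mean negative base-2 log-probability the model assigns to each character, so lower is better and 2^BPC equals the character-level perplexity. A minimal sketch of the computation (the helper name and inputs here are illustrative assumptions, not part of the benchmark's tooling):

```python
import math

def bits_per_character(char_log_probs):
    """Compute BPC from natural-log probabilities p(c_t | c_<t), one per character.

    BPC = mean negative log-likelihood in nats, converted to bits.
    """
    nats = -sum(char_log_probs) / len(char_log_probs)  # mean NLL in nats
    return nats / math.log(2)                          # nats -> bits

# Toy check: a model that assigns probability 0.5 to every character
# scores exactly 1 BPC (one bit per character).
print(bits_per_character([math.log(0.5)] * 4))  # -> 1.0
```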
## Evaluation Results

Performance of each model on this benchmark:
| Model Name | Bit per Character (BPC) | Number of params | Paper Title | Repository |
|---|---|---|---|---|
| TCN | 1.31 | 5.9M | Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling | - |
| Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | 1.169 | 13.8M | Improved Language Modeling by Decoding the Past | - |
| 2-layer Norm HyperLSTM | 1.219 | 14.4M | HyperNetworks | - |
| Feedback Transformer | 1.160 | 10.7M | Addressing Some Limitations of Transformers with Feedback Memory | - |
| Mogrifier LSTM + dynamic eval | 1.083 | 24M | Mogrifier LSTM | - |
| GAM-RHN-5 | 1.147 | 16.0M | Recurrent Highway Networks with Grouped Auxiliary Memory | |
| Mogrifier LSTM | 1.120 | 24M | Mogrifier LSTM | - |
| Seq-U-Net | 1.3 | 5.9M | Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling | - |
| Trellis Network | 1.158 | 13.4M | Trellis Networks for Sequence Modeling | - |
| R-Transformer | 1.24 | - | R-Transformer: Recurrent Neural Network Enhanced Transformer | - |
| 6-layer QRNN | 1.187 | 13.8M | An Analysis of Neural Language Modeling at Multiple Scales | - |
| IndRNN | 1.19 | - | Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN | - |
| Dense IndRNN | 1.18 | - | Deep Independently Recurrent Neural Network (IndRNN) | - |
| Temporal Convolutional Network | 1.31 | - | An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling | - |
| NAS-RL | 1.214 | 16.3M | Neural Architecture Search with Reinforcement Learning | - |
| FS-LSTM-4 | 1.190 | 27M | Fast-Slow Recurrent Neural Networks | - |
| Bipartite Flow | 1.38 | - | Discrete Flows: Invertible Generative Models of Discrete Data | - |
| STAR | 1.30 | - | Gating Revisited: Deep Multi-layer RNNs That Can Be Trained | - |
| 3-layer AWD-LSTM | 1.175 | 13.8M | An Analysis of Neural Language Modeling at Multiple Scales | - |
| FS-LSTM-2 | 1.193 | 27M | Fast-Slow Recurrent Neural Networks | - |