Language Modelling on Penn Treebank (Character Level)
Evaluation metrics: Bit per Character (BPC), Number of params
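For reference, BPC is the model's average negative log-likelihood per character, expressed in base 2. Since language models are usually trained with a natural-log cross-entropy loss, scoring against this table requires a nats-to-bits conversion. Below is a minimal sketch of that conversion; the function name and the numbers in the example are illustrative, not taken from this leaderboard.

```python
import math

def bits_per_character(total_nll_nats: float, num_chars: int) -> float:
    """Convert a summed negative log-likelihood (in nats) over a
    character sequence into bits per character (BPC)."""
    # nats -> bits: divide by ln(2); then average over the characters.
    return total_nll_nats / (num_chars * math.log(2))

# Example: a model averaging 0.80 nats/char over a 5,000,000-character
# test set scores roughly 1.154 BPC.
print(bits_per_character(0.80 * 5_000_000, 5_000_000))  # ~1.1542
```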
Evaluation results
Performance of each model on this benchmark:
| Model | Bit per Character (BPC) | Number of params | Paper Title |
| --- | --- | --- | --- |
| TCN | 1.31 | 5.9M | Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling |
| Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | 1.169 | 13.8M | Improved Language Modeling by Decoding the Past |
| 2-layer Norm HyperLSTM | 1.219 | 14.4M | HyperNetworks |
| Feedback Transformer | 1.160 | 10.7M | Addressing Some Limitations of Transformers with Feedback Memory |
| Mogrifier LSTM + dynamic eval | 1.083 | 24M | Mogrifier LSTM |
| GAM-RHN-5 | 1.147 | 16.0M | Recurrent Highway Networks with Grouped Auxiliary Memory |
| Mogrifier LSTM | 1.120 | 24M | Mogrifier LSTM |
| Seq-U-Net | 1.3 | 5.9M | Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling |
| Trellis Network | 1.158 | 13.4M | Trellis Networks for Sequence Modeling |
| R-Transformer | 1.24 | - | R-Transformer: Recurrent Neural Network Enhanced Transformer |
| 6-layer QRNN | 1.187 | 13.8M | An Analysis of Neural Language Modeling at Multiple Scales |
| IndRNN | 1.19 | - | Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN |
| Dense IndRNN | 1.18 | - | Deep Independently Recurrent Neural Network (IndRNN) |
| Temporal Convolutional Network | 1.31 | - | An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling |
| NAS-RL | 1.214 | 16.3M | Neural Architecture Search with Reinforcement Learning |
| FS-LSTM-4 | 1.190 | 27M | Fast-Slow Recurrent Neural Networks |
| Bipartite Flow | 1.38 | - | Discrete Flows: Invertible Generative Models of Discrete Data |
| STAR | 1.30 | - | Gating Revisited: Deep Multi-layer RNNs That Can Be Trained |
| 3-layer AWD-LSTM | 1.175 | 13.8M | An Analysis of Neural Language Modeling at Multiple Scales |
| FS-LSTM-2 | 1.193 | 27M | Fast-Slow Recurrent Neural Networks |