Language Modelling on Wiki-40B
Metrics
Perplexity
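Perplexity is the exponential of the mean negative log-likelihood per token, so lower values indicate a better model. The minimal sketch below shows how it is computed from per-token log-probabilities; the function name and inputs are illustrative, not the evaluation code used by the papers in the table.

```python
import math

def perplexity(token_log_probs):
    # token_log_probs: per-token natural-log probabilities, log p(token | context).
    # Perplexity = exp(mean negative log-likelihood); lower is better.
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Toy example: three high-probability tokens give a perplexity close to 1.
print(perplexity([-0.10, -0.20, -0.15]))  # ~1.16
```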
Results
Perplexity achieved by models on the Wiki-40B language-modelling benchmark (lower is better):
Model Name | Perplexity | Paper Title | Repository |
---|---|---|---|
FLASH-Quad-8k | 14.998 | Transformer Quality in Linear Time | |
Combiner-Axial-8k | 16.49 | Combiner: Full Attention Transformer with Sparse Computation Cost | |
Combiner-Fixed-8k | 16.60 | Combiner: Full Attention Transformer with Sparse Computation Cost | |
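Wiki-40B itself is distributed through TensorFlow Datasets, which is one way to set up an evaluation on this benchmark. A minimal loading sketch, assuming the English config `wiki40b/en` and the `test` split (the entries above may use different languages or splits):

```python
import tensorflow_datasets as tfds

# Download and load the English portion of Wiki-40B.
ds = tfds.load("wiki40b/en", split="test")

# Each record holds the article text plus metadata such as the Wikidata ID.
# Assumption: the "text" feature carries the article body, per the TFDS schema.
for example in ds.take(1):
    print(example["text"].numpy()[:200])
```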