Atari Games On Atari Games
Metrics
Mean Human Normalized Score
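As a rough guide to the metric, the sketch below shows how a human-normalized score is commonly computed for Atari benchmarks: each game's raw score is rescaled so that random play maps to 0% and the human reference to 100%, then averaged across games. The function names and the per-game reference numbers are illustrative assumptions, not the official baselines used by the papers in the table.

```python
def human_normalized(agent_score, random_score, human_score):
    """Rescale a raw game score: 0.0 = random play, 1.0 = human reference."""
    return (agent_score - random_score) / (human_score - random_score)


def mean_human_normalized(results):
    """Average the normalized score over all games.

    `results` maps game name -> (agent_score, random_score, human_score).
    """
    scores = [human_normalized(a, r, h) for a, r, h in results.values()]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    # Placeholder numbers for illustration only.
    results = {
        "Breakout": (400.0, 1.7, 30.5),
        "Pong": (20.0, -20.7, 14.6),
    }
    print(f"Mean Human Normalized Score: {mean_human_normalized(results):.2%}")
```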
Results
Performance results of various models on this benchmark
| Model | Mean Human Normalized Score | Paper Title |
|---|---|---|
| GDI-H3 | 9620.33% | Generalized Data Distribution Iteration |
| GDI-I3 | 7810.1% | Generalized Data Distribution Iteration |
| MuZero | 4996.20% | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model |
| Go-Explore | 4989.94% | First return, then explore |
| Agent57 | 4763.69% | Agent57: Outperforming the Atari Human Benchmark |
| R2D2 | 3374.31% | Recurrent Experience Replay in Distributed Reinforcement Learning |
| NGU | 3169.90% | Never Give Up: Learning Directed Exploration Strategies |
| LASER | 1741.36% | Off-Policy Actor-Critic with Shared Experience Replay |
| IMPALA, deep | 957.34% | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures |
| Rainbow DQN | 873.97% | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| DreamerV2 | 631.17% | Mastering Atari with Discrete World Models |
| SimPLe | 25.3% | Model-Based Reinforcement Learning for Atari |