Atari Games On Atari 2600 Crazy Climber
Metric: Score

Results
Performance results of various models on this benchmark.
| Model Name | Score | Paper Title | Repository |
|---|---|---|---|
| C51 noop | 179877.0 | A Distributional Perspective on Reinforcement Learning | |
| A3C FF (1 day) hs | 101624.0 | Asynchronous Methods for Deep Reinforcement Learning | |
| Prior noop | 141161.0 | Prioritized Experience Replay | |
| GDI-I3 | 201000 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| GDI-I3 | 201000 | Generalized Data Distribution Iteration | - |
| IMPALA (deep) | 136950.00 | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures | |
| Reactor 500M | 236422.0 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | - |
| DDQN+Pop-Art noop | 119679.0 | Learning values across many orders of magnitude | - |
| R2D2 | 366690.7 | Recurrent Experience Replay in Distributed Reinforcement Learning | - |
| DreamerV2 | 161839 | Mastering Atari with Discrete World Models | |
| Duel noop | 143570.0 | Dueling Network Architectures for Deep Reinforcement Learning | |
| DDQN (tuned) noop | 117282.0 | Dueling Network Architectures for Deep Reinforcement Learning | |
| IQN | 179082 | Implicit Quantile Networks for Distributional Reinforcement Learning | |
| FQF | 223470.6 | Fully Parameterized Quantile Function for Distributional Reinforcement Learning | |
| A3C FF hs | 112646.0 | Asynchronous Methods for Deep Reinforcement Learning | |
| ASL DDQN | 166019 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | - |
| Ape-X | 320426 | Distributed Prioritized Experience Replay | |
| CGP | 12900 | Evolving simple programs for playing Atari games | |
| Bootstrapped DQN | 137925.9 | Deep Exploration via Bootstrapped DQN | |
| Agent57 | 565909.85 | Agent57: Outperforming the Atari Human Benchmark | |
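For readers who want to work with these numbers programmatically, the sketch below loads a subset of the rows above into a small Python structure and ranks them by reported score. The scores and paper titles are copied verbatim from the table; the variable names and output format are illustrative only, not an official HyperAI schema or API.

```python
# A subset of the Crazy Climber leaderboard rows from the table above.
# Tuple layout (model, score, paper) is an illustrative choice, not a fixed schema.
RESULTS = [
    ("Agent57", 565909.85, "Agent57: Outperforming the Atari Human Benchmark"),
    ("R2D2", 366690.7, "Recurrent Experience Replay in Distributed Reinforcement Learning"),
    ("Ape-X", 320426, "Distributed Prioritized Experience Replay"),
    ("FQF", 223470.6, "Fully Parameterized Quantile Function for Distributional Reinforcement Learning"),
    ("GDI-I3", 201000, "Generalized Data Distribution Iteration"),
    ("C51 noop", 179877.0, "A Distributional Perspective on Reinforcement Learning"),
    ("CGP", 12900, "Evolving simple programs for playing Atari games"),
]

# Rank models by reported score, highest first, and print a simple leaderboard.
for rank, (model, score, paper) in enumerate(
        sorted(RESULTS, key=lambda row: row[1], reverse=True), start=1):
    print(f"{rank:2d}. {model:<10} {score:>12,.1f}  ({paper})")
```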