Atari Games On Atari 2600 Kung Fu Master
Metrics: Score
Results: Performance results of various models on this benchmark (showing 20 of 43 entries). Model-name suffixes "noop" and "hs" denote the no-op-starts and human-starts evaluation protocols, respectively.
| Model Name | Score | Paper Title | Repository |
| --- | --- | --- | --- |
| Prior noop | 39581.0 | Prioritized Experience Replay | - |
| DQN hs | 20882.0 | Deep Reinforcement Learning with Double Q-learning | - |
| DDQN (tuned) noop | 29710.0 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| A3C LSTM hs | 40835.0 | Asynchronous Methods for Deep Reinforcement Learning | - |
| Advantage Learning | 32182.99 | Increasing the Action Gap: New Operators for Reinforcement Learning | - |
| Prior+Duel noop | 48375.0 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| A3C FF (1 day) hs | 3046.0 | Asynchronous Methods for Deep Reinforcement Learning | - |
| GDI-I3 | 140440 | Generalized Data Distribution Iteration | - |
| FQF | 111138.5 | Fully Parameterized Quantile Function for Distributional Reinforcement Learning | - |
| DDQN (tuned) hs | 30207.0 | Deep Reinforcement Learning with Double Q-learning | - |
| Bootstrapped DQN | 36733.3 | Deep Exploration via Bootstrapped DQN | - |
| DDQN+Pop-Art noop | 34393.0 | Learning values across many orders of magnitude | - |
| CGP | 57400 | Evolving simple programs for playing Atari games | - |
| POP3D | 33728 | Policy Optimization With Penalized Point Probability Distance: An Alternative To Proximal Policy Optimization | - |
| Persistent AL | 34650.91 | Increasing the Action Gap: New Operators for Reinforcement Learning | - |
| GDI-H3 (200M) | 1666000 | GDI: Rethinking What Makes Reinforcement Learning Different from Supervised Learning | - |
| DreamerV2 | 62741 | Mastering Atari with Discrete World Models | - |
| NoisyNet-Dueling | 41672 | Noisy Networks for Exploration | - |
| Gorila | 20620.0 | Massively Parallel Methods for Deep Reinforcement Learning | - |
| DQN noop | 26059.0 | Deep Reinforcement Learning with Double Q-learning | - |
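The Score metric is the raw game score accumulated over a single episode. As a minimal sketch of how such a score can be measured, the snippet below rolls out a placeholder random policy on the Atari 2600 Kung Fu Master environment. It assumes Gymnasium with the Arcade Learning Environment extras installed and the environment id `ALE/KungFuMaster-v5`; it is not the evaluation code used by any of the listed papers, which follow their own no-op-starts or human-starts protocols.

```python
# Minimal sketch: measure the undiscounted episode score on Kung Fu Master.
# Assumes: pip install "gymnasium[atari,accept-rom-license]"
# Depending on your gymnasium/ale-py versions you may also need:
#   import ale_py; gym.register_envs(ale_py)
import gymnasium as gym


def run_episode(env_id: str = "ALE/KungFuMaster-v5", seed: int = 0) -> float:
    """Play one episode with a random policy and return the total game score."""
    env = gym.make(env_id)
    obs, info = env.reset(seed=seed)
    total_reward = 0.0
    terminated = truncated = False
    while not (terminated or truncated):
        action = env.action_space.sample()  # placeholder agent; a trained policy goes here
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
    env.close()
    return total_reward


if __name__ == "__main__":
    print(f"Episode score: {run_episode():.1f}")
```

Reported leaderboard numbers are typically averages over many evaluation episodes (and over random no-op or human-start offsets), so a single rollout like this only illustrates the metric, not a comparable result.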