Atari Games On Atari 2600 Seaquest
Metric: Score

Results: performance results of various models on this benchmark.

| Model Name | Score | Paper Title | Repository |
|---|---|---|---|
| DQN Best | 1740 | Playing Atari with Deep Reinforcement Learning | |
| SAC | 211.6 | Soft Actor-Critic for Discrete Action Settings | |
| MuZero | 999976.52 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | |
| DQN hs | 4216.7 | Deep Reinforcement Learning with Double Q-learning | |
| A3C FF (1 day) hs | 2300.2 | Asynchronous Methods for Deep Reinforcement Learning | |
| C51 noop | 266434.0 | A Distributional Perspective on Reinforcement Learning | |
| Prior noop | 26357.8 | Prioritized Experience Replay | |
| GDI-I3 | 943910 | Generalized Data Distribution Iteration | - |
| Prior hs | 25463.7 | Prioritized Experience Replay | |
| A2C + SIL | 2456.5 | Self-Imitation Learning | |
| DNA | 4146 | DNA: Proximal Policy Optimization with a Dual Network Architecture | |
| DDQN (tuned) hs | 14498.0 | Deep Reinforcement Learning with Double Q-learning | |
| ES FF (1 hour) noop | 1390.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning | |
| Recurrent Rational DQN Average | 7460 | Adaptive Rational Activations to Boost Deep Reinforcement Learning | |
| Discrete Latent Space World Model (VQ-VAE) | 635 | Smaller World Models for Reinforcement Learning | - |
| GDI-I3 | 943910 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| Duel hs | 37361.6 | Dueling Network Architectures for Deep Reinforcement Learning | |
| Bootstrapped DQN | 9083.1 | Deep Exploration via Bootstrapped DQN | |
| Nature DQN | 5286.0 | Human level control through deep reinforcement learning | |
| MAC | 1703.4 | Mean Actor Critic | |
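In the model names, suffixes such as "hs" (human starts) and "noop" (up to 30 random no-op actions at the start of each evaluation episode) indicate the two standard Atari evaluation protocols used in the cited papers. As a rough illustration of what the Score metric measures, the sketch below runs a policy in the ALE Seaquest environment and reports the mean undiscounted episode return (the raw game score). The environment id "ALE/Seaquest-v5" and the random policy are illustrative assumptions, not the evaluation setup of any paper listed above.

```python
# Minimal sketch: mean episode return ("Score") of a policy on Seaquest.
# Assumes gymnasium with the Atari extras (ale-py + ROMs) installed.
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # exposes the "ALE/..." environment ids (Gymnasium >= 1.0)


def evaluate(policy, episodes: int = 5, env_id: str = "ALE/Seaquest-v5") -> float:
    """Return the mean undiscounted episode return of `policy` over `episodes` runs."""
    env = gym.make(env_id)
    returns = []
    for _ in range(episodes):
        obs, _ = env.reset()
        done, episode_return = False, 0.0
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy(obs))
            episode_return += reward
            done = terminated or truncated
        returns.append(episode_return)
    env.close()
    return sum(returns) / len(returns)


if __name__ == "__main__":
    # Random actions stand in for a trained agent; the resulting score will be
    # far below anything in the table above.
    probe = gym.make("ALE/Seaquest-v5")

    def random_policy(obs):
        return probe.action_space.sample()

    print(f"Mean episode return over 5 episodes: {evaluate(random_policy):.1f}")
    probe.close()
```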
The table above shows 20 of the 57 results recorded for this benchmark.