Atari Games On Atari 2600 Berzerk
Metrics
Score

Results
Performance results of various models on this benchmark
| Model Name | Score | Paper Title | Repository |
| --- | --- | --- | --- |
| FQF | 12422.2 | Fully Parameterized Quantile Function for Distributional Reinforcement Learning | |
| A3C FF (1 day) hs | 1433.4 | Asynchronous Methods for Deep Reinforcement Learning | |
| Ape-X | 57196.7 | Distributed Prioritized Experience Replay | |
| IQN | 1053 | Implicit Quantile Networks for Distributional Reinforcement Learning | |
| DDQN+Pop-Art noop | 1199.6 | Learning values across many orders of magnitude | - |
| DDQN (tuned) hs | 1011.1 | Deep Reinforcement Learning with Double Q-learning | |
| A3C FF hs | 817.9 | Asynchronous Methods for Deep Reinforcement Learning | |
| A3C LSTM hs | 862.2 | Asynchronous Methods for Deep Reinforcement Learning | |
| ES FF (1 hour) noop | 686.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning | |
| Prior noop | 1305.6 | Prioritized Experience Replay | |
| DQN noop | 585.6 | Deep Reinforcement Learning with Double Q-learning | |
| GDI-I3 | 7607 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| Prior+Duel hs | 2178.6 | Deep Reinforcement Learning with Double Q-learning | |
| Prior hs | 865.9 | Prioritized Experience Replay | |
| IMPALA (deep) | 1852.70 | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures | |
| C51 noop | 1645.0 | A Distributional Perspective on Reinforcement Learning | |
| Reactor 500M | 2303.1 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | - |
| Agent57 | 61507.83 | Agent57: Outperforming the Atari Human Benchmark | |
| MuZero | 85932.60 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | |
| DQN hs | 493.4 | Deep Reinforcement Learning with Double Q-learning | |
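The Score metric on this benchmark is the average undiscounted episode return obtained in the Berzerk game; suffixes such as "noop" and "hs" in the model names typically denote the 30-no-op-starts and human-starts evaluation protocols used in the corresponding papers. As a rough illustration of how such a score can be measured, the sketch below runs an agent in the Arcade Learning Environment through Gymnasium. It is not the leaderboard's official harness: the environment id "ALE/Berzerk-v5", the explicit register_envs call, the episode count, and the random placeholder policy are assumptions for illustration.

```python
# Minimal evaluation sketch (not the leaderboard's official harness), assuming the
# Gymnasium + ale-py stack and the "ALE/Berzerk-v5" environment id; the random
# placeholder policy and the episode count are illustrative only.
import ale_py
import gymnasium as gym
import numpy as np

gym.register_envs(ale_py)  # explicit registration is needed with recent ale-py releases


def mean_berzerk_score(policy=None, episodes: int = 10, seed: int = 0) -> float:
    """Average undiscounted episode return ("Score") over a number of episodes."""
    env = gym.make("ALE/Berzerk-v5")
    # Fall back to uniform random actions when no trained policy is supplied.
    act = policy if policy is not None else (lambda obs: env.action_space.sample())
    returns = []
    for ep in range(episodes):
        obs, _ = env.reset(seed=seed + ep)
        done, total = False, 0.0
        while not done:
            obs, reward, terminated, truncated, _ = env.step(act(obs))
            total += float(reward)
            done = terminated or truncated
        returns.append(total)
    env.close()
    return float(np.mean(returns))


if __name__ == "__main__":
    print(f"Mean score over 10 episodes: {mean_berzerk_score():.1f}")
```

Note that the numbers in the table come from the cited papers under their own evaluation protocols (start conditions, frame skipping, sticky actions, and episode caps differ between works), so a sketch like this will not reproduce them exactly.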