HyperAI
Atari Games On Atari 2600 Asterix
Metrics: Score

Results: performance of various models on this benchmark
| Model Name | Score | Paper Title | Repository |
|---|---|---|---|
| Ape-X | 313305 | Distributed Prioritized Experience Replay | |
| SARSA | 1332 | - | - |
| ES FF (1 hour) noop | 1440 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning | |
| Prior hs | 22484.5 | Prioritized Experience Replay | |
| Prior+Duel hs | 364200.0 | Dueling Network Architectures for Deep Reinforcement Learning | |
| CGP | 1880 | Evolving simple programs for playing Atari games | |
| DDQN (tuned) hs | 16837.0 | Deep Reinforcement Learning with Double Q-learning | |
| NoisyNet-Dueling | 28350 | Noisy Networks for Exploration | |
| SAC | 272 | Soft Actor-Critic for Discrete Action Settings | |
| DNA | 323965 | DNA: Proximal Policy Optimization with a Dual Network Architecture | |
| R2D2 | 999153.3 | Recurrent Experience Replay in Distributed Reinforcement Learning | - |
| ASL DDQN | 567640 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | - |
| Best Learner | 987.3 | The Arcade Learning Environment: An Evaluation Platform for General Agents | |
| DDQN+Pop-Art noop | 18919.5 | Learning values across many orders of magnitude | - |
| IMPALA (deep) | 300732.00 | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures | |
| Nature DQN | 6012 | Human level control through deep reinforcement learning | |
| A3C FF hs | 22140.5 | Asynchronous Methods for Deep Reinforcement Learning | |
| GDI-H3 | 999999 | Generalized Data Distribution Iteration | - |
| DDQN (tuned) noop | 17356.5 | Dueling Network Architectures for Deep Reinforcement Learning | |
| Prior+Duel noop | 375080.0 | Dueling Network Architectures for Deep Reinforcement Learning | |