Atari Games On Atari 2600 Zaxxon

Metrics

Score

Results

Performance results of various models on this benchmark

Comparison table
| Model name | Score |
| --- | --- |
| the-arcade-learning-environment-an-evaluation | 3365.1 |
| deep-reinforcement-learning-with-double-q | 4412.0 |
| mastering-atari-with-discrete-world-models-1 | 50699 |
| noisy-networks-for-exploration | 14874 |
| implicit-quantile-networks-for-distributional | 21772 |
| recurrent-independent-mechanisms | 15000 |
| learning-values-across-many-orders-of | 14402.0 |
| dna-proximal-policy-optimization-with-a-dual | 22588 |
| distributional-reinforcement-learning-with-1 | 13112 |
| dueling-network-architectures-for-deep | 13886.0 |
| agent57-outperforming-the-atari-human | 249808.9 |
| generalized-data-distribution-iteration | 216020 |
| online-and-offline-reinforcement-learning-by | 154131.86 |
| generalized-data-distribution-iteration | 109140 |
| impala-scalable-distributed-deep-rl-with | 32935.50 |
| massively-parallel-methods-for-deep | 6159.4 |
| prioritized-experience-replay | 9474.0 |
| asynchronous-methods-for-deep-reinforcement | 2659.0 |
| policy-optimization-with-penalized-point | 9472 |
| increasing-the-action-gap-new-operators-for | 9129.61 |
| evolution-strategies-as-a-scalable | 6380.0 |
| dueling-network-architectures-for-deep | 10163.0 |
| asynchronous-methods-for-deep-reinforcement | 24622.0 |
| a-distributional-perspective-on-reinforcement | 10513.0 |
| deep-reinforcement-learning-with-double-q | 11320.0 |
| distributed-prioritized-experience-replay | 42285.5 |
| human-level-control-through-deep | 4977.0 |
| self-imitation-learning | 9164.2 |
| asynchronous-methods-for-deep-reinforcement | 23519.0 |
| Model | 3021.4 |
| train-a-real-world-local-path-planner-in-one | 16420 |
| prioritized-experience-replay | 10469.0 |
| deep-reinforcement-learning-with-double-q | 8593.0 |
| the-arcade-learning-environment-an-evaluation | 22610 |
| mastering-atari-go-chess-and-shogi-by | 725853.90 |
| dueling-network-architectures-for-deep | 12944.0 |
| deep-reinforcement-learning-with-double-q | 5363.0 |
| recurrent-experience-replay-in-distributed | 224910.7 |
| evolving-simple-programs-for-playing-atari | 2980 |
| deep-exploration-via-bootstrapped-dqn | 11491.7 |
| dueling-network-architectures-for-deep | 10164.0 |