Atari Games On Atari 2600 Amidar
Metrics
Score
Results
Performance results of various models on this benchmark
Comparison table
Model name | Score |
---|---|
value-prediction-network | 641 |
recurrent-experience-replay-in-distributed | 29321.4 |
asynchronous-methods-for-deep-reinforcement | 283.9 |
generalized-data-distribution-iteration | 1442 |
curl-contrastive-unsupervised-representations | 232.3 |
mastering-atari-with-discrete-world-models-1 | 2577 |
human-level-control-through-deep | 739.5 |
distributed-prioritized-experience-replay | 8659.2 |
dueling-network-architectures-for-deep | 238.4 |
deep-reinforcement-learning-with-double-q | 238.4 |
policy-optimization-with-penalized-point | 729.15 |
deep-reinforcement-learning-with-double-q | 978.0 |
improving-computational-efficiency-in-visual | 250.5 |
self-imitation-learning | 1362 |
the-reactor-a-fast-and-sample-efficient-actor | 1015.8 |
a-distributional-perspective-on-reinforcement | 1735.0 |
train-a-real-world-local-path-planner-in-one | 2232.3 |
agent57-outperforming-the-atari-human | 29660.08 |
distributional-reinforcement-learning-with-1 | 1641 |
increasing-the-action-gap-new-operators-for | 1451.65 |
increasing-the-action-gap-new-operators-for | 1557.43 |
dueling-network-architectures-for-deep | 1793.3 |
the-arcade-learning-environment-an-evaluation | 180.3 |
deep-reinforcement-learning-with-double-q | 169.1 |
soft-actor-critic-for-discrete-action | 7.9 |
implicit-quantile-networks-for-distributional | 2946 |
the-arcade-learning-environment-an-evaluation | 103.4 |
deep-reinforcement-learning-with-double-q | 178.4 |
dueling-network-architectures-for-deep | 172.7 |
dueling-network-architectures-for-deep | 2354.5 |
massively-parallel-methods-for-deep | 189.2 |
noisy-networks-for-exploration | 3537 |
dna-proximal-policy-optimization-with-a-dual | 1025 |
impala-scalable-distributed-deep-rl-with | 1554.79 |
fully-parameterized-quantile-function-for | 3165.3 |
asynchronous-methods-for-deep-reinforcement | 173.0 |
learning-values-across-many-orders-of | 782.5 |
asynchronous-methods-for-deep-reinforcement | 263.9 |
dueling-network-architectures-for-deep | 2296.8 |
evolving-simple-programs-for-playing-atari | 199 |
prioritized-experience-replay | 129.1 |
generalized-data-distribution-iteration | 1065 |
prioritized-experience-replay | 1838.9 |
online-and-offline-reinforcement-learning-by | 1197.38 |
evolution-strategies-as-a-scalable | 112.0 |
Model 46 | 183.6 |
deep-exploration-via-bootstrapped-dqn | 1272.5 |
mastering-atari-go-chess-and-shogi-by | 28634.39 |