
Atari Games on Atari 2600: Tennis

Metrics

Score

Results

Performance results of various models on this benchmark.
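The Score column is the undiscounted return an agent collects over one episode of Atari 2600 Tennis. As a rough illustration only (not the harness used to produce the numbers below), the sketch below measures such a per-episode score with the Gymnasium ALE bindings; the environment id `ALE/Tennis-v5`, the `mean_episode_score` helper, and the random stand-in policy are illustrative assumptions, not part of this benchmark's specification.

```python
# Minimal sketch, assuming Gymnasium with the ALE Atari environments
# installed (e.g. pip install "gymnasium[atari]" ale-py). This is NOT the
# evaluation harness behind this leaderboard; it only illustrates that the
# reported "Score" is an undiscounted per-episode return.
import ale_py
import gymnasium as gym

gym.register_envs(ale_py)  # makes the ALE/* ids available (gymnasium >= 1.0)


def mean_episode_score(policy, episodes: int = 5, seed: int = 0) -> float:
    """Average undiscounted return of `policy` over several Tennis episodes."""
    env = gym.make("ALE/Tennis-v5")
    totals = []
    for ep in range(episodes):
        obs, _info = env.reset(seed=seed + ep)
        done, total = False, 0.0
        while not done:
            obs, reward, terminated, truncated, _info = env.step(policy(obs))
            total += reward
            done = terminated or truncated
        totals.append(total)
    env.close()
    return sum(totals) / len(totals)


if __name__ == "__main__":
    # Stand-in for a trained agent: a uniformly random policy.
    probe_env = gym.make("ALE/Tennis-v5")
    random_policy = lambda obs: probe_env.action_space.sample()
    print("mean score:", mean_episode_score(random_policy, episodes=2))
    probe_env.close()
```

Leaderboard entries are identified below by their paper slugs; the same slug can appear several times when a paper reports multiple agent variants.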

Comparison Table
Model name | Score
Model 1 | 0.0
dueling-network-architectures-for-deep | 5.1
train-a-real-world-local-path-planner-in-one | 22.3
recurrent-rational-networks | 20.6
policy-optimization-with-penalized-point | -8.32
recurrent-experience-replay-in-distributed | -0.1
recurrent-rational-networks | 20.5
gdi-rethinking-what-makes-reinforcement | 24
human-level-control-through-deep | -2.5
agent57-outperforming-the-atari-human | 23.84
deep-reinforcement-learning-with-double-q | 12.2
implicit-quantile-networks-for-distributional | 23.6
the-arcade-learning-environment-an-evaluation | 2.8
massively-parallel-methods-for-deep | -0.7
asynchronous-methods-for-deep-reinforcement | -6.4
prioritized-experience-replay | -5.3
mastering-atari-go-chess-and-shogi-by | 0.00
evolution-strategies-as-a-scalable | -4.5
distributional-reinforcement-learning-with-1 | 23.6
deep-reinforcement-learning-with-double-q | 11.1
deep-exploration-via-bootstrapped-dqn | 0
learning-values-across-many-orders-of | 12.1
a-distributional-perspective-on-reinforcement | 23.1
asynchronous-methods-for-deep-reinforcement | -6.3
dueling-network-architectures-for-deep | -22.8
deep-reinforcement-learning-with-double-q | -13.2
prioritized-experience-replay | 0.0
generalized-data-distribution-iteration | 24
dna-proximal-policy-optimization-with-a-dual | -10.9
the-arcade-learning-environment-an-evaluation | -0.1
mastering-atari-with-discrete-world-models-1 | 14
generalized-data-distribution-iteration | 24
impala-scalable-distributed-deep-rl-with | 0.55
self-imitation-learning | -17.3
online-and-offline-reinforcement-learning-by | 0
deep-reinforcement-learning-with-double-q | -7.8
asynchronous-methods-for-deep-reinforcement | -10.2
distributed-prioritized-experience-replay | 23.9
dueling-network-architectures-for-deep | 4.4
noisy-networks-for-exploration | 0
increasing-the-action-gap-new-operators-for | 0
evolving-simple-programs-for-playing-atari | 0
dueling-network-architectures-for-deep | 0.0