HyperAI

Atari Games on Atari 2600: Gravitar

Metrics

Score

Results

Performance results of various models on this benchmark

Comparison table

Model name | Score
Model 1 | 429.0
evolving-simple-programs-for-playing-atari | 2350
human-level-control-through-deep | 306.7
evolution-strategies-as-a-scalable | 805.0
agent57-outperforming-the-atari-human | 19213.96
exploration-by-self-supervised-exploitation | 4643
mastering-atari-go-chess-and-shogi-by | 6682.70
exploration-by-self-supervised-exploitation | 2741
exploration-by-self-supervised-exploitation | 6712
self-imitation-learning | 1874.2
asynchronous-methods-for-deep-reinforcement | 320.0
gdi-rethinking-what-makes-reinforcement | 5905
count-based-exploration-with-the-successor | 1078.3
learning-values-across-many-orders-of | 483.5
dueling-network-architectures-for-deep | 588.0
online-and-offline-reinforcement-learning-by | 8006.93
deep-reinforcement-learning-with-double-q | 298.0
impala-scalable-distributed-deep-rl-with | 359.50
a-distributional-perspective-on-reinforcement | 440.0
generalized-data-distribution-iteration | 5915
increasing-the-action-gap-new-operators-for | 446.92
the-arcade-learning-environment-an-evaluation | 2850
exploration-by-random-network-distillation | 3906
the-arcade-learning-environment-an-evaluation | 387.7
count-based-exploration-with-neural-density | 238.0
dueling-network-architectures-for-deep | 297.0
unifying-count-based-exploration-and | 238.68
dueling-network-architectures-for-deep | 412.0
generalized-data-distribution-iteration | 5905
increasing-the-action-gap-new-operators-for | 417.65
prioritized-experience-replay | 548.5
distributed-prioritized-experience-replay | 1598.5
large-scale-study-of-curiosity-driven | 1165.1
recurrent-experience-replay-in-distributed | 15680.7
mastering-atari-with-discrete-world-models-1 | 3789
dueling-network-architectures-for-deep | 238.0
dna-proximal-policy-optimization-with-a-dual | 2190
policy-optimization-with-penalized-point | 557.17
deep-exploration-via-bootstrapped-dqn | 286.1
prioritized-experience-replay | 269.5
distributional-reinforcement-learning-with-1 | 995
asynchronous-methods-for-deep-reinforcement | 303.5
first-return-then-explore | 7588
noisy-networks-for-exploration | 2209
implicit-quantile-networks-for-distributional | 911
count-based-exploration-with-neural-density | 498.3
deep-reinforcement-learning-with-double-q | 473.0
train-a-real-world-local-path-planner-in-one | 760
deep-reinforcement-learning-with-double-q | 200.5
asynchronous-methods-for-deep-reinforcement | 269.5
deep-reinforcement-learning-with-double-q | 167.0
massively-parallel-methods-for-deep | 538.4
fully-parameterized-quantile-function-for | 1406.0