
Atari Games On Atari 2600 Berzerk

Metrics

Score
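
Score is the raw, undiscounted game score accumulated over a single episode of Berzerk (higher is better). As a point of reference, here is a minimal sketch of how a per-episode score can be measured with the Arcade Learning Environment through the Gymnasium API; the random policy and the episode count are illustrative stand-ins, not part of any method listed below:

```python
# Minimal sketch: measure per-episode game score on ALE Berzerk.
# Assumes `pip install "gymnasium[atari]"` (ale-py plus ROMs); the random
# policy below is a placeholder for a trained agent.
import gymnasium as gym
import ale_py
import numpy as np

gym.register_envs(ale_py)  # make the ALE/* environments available
env = gym.make("ALE/Berzerk-v5")

def run_episode(env: gym.Env, seed: int) -> float:
    """Play one episode with random actions; return the summed game reward."""
    obs, info = env.reset(seed=seed)
    total, done = 0.0, False
    while not done:
        obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
        total += reward
        done = terminated or truncated
    return total

scores = [run_episode(env, seed=i) for i in range(10)]
print(f"mean episode score: {np.mean(scores):.1f}")
env.close()
```

Leaderboard entries are typically the mean of such episode scores over many evaluation runs.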

Results

Performance results of various models on this benchmark, measured by game score (higher is better). Each row lists one reported result; a paper may appear multiple times when it evaluates several agent variants.

Comparison Table
| Model Name | Score |
|------------|-------|
| fully-parameterized-quantile-function-for | 12422.2 |
| asynchronous-methods-for-deep-reinforcement | 1433.4 |
| distributed-prioritized-experience-replay | 57196.7 |
| implicit-quantile-networks-for-distributional | 1053 |
| learning-values-across-many-orders-of | 1199.6 |
| deep-reinforcement-learning-with-double-q | 1011.1 |
| asynchronous-methods-for-deep-reinforcement | 817.9 |
| asynchronous-methods-for-deep-reinforcement | 862.2 |
| evolution-strategies-as-a-scalable | 686.0 |
| prioritized-experience-replay | 1305.6 |
| deep-reinforcement-learning-with-double-q | 585.6 |
| gdi-rethinking-what-makes-reinforcement | 7607 |
| deep-reinforcement-learning-with-double-q | 2178.6 |
| prioritized-experience-replay | 865.9 |
| impala-scalable-distributed-deep-rl-with | 1852.70 |
| a-distributional-perspective-on-reinforcement | 1645.0 |
| the-reactor-a-fast-and-sample-efficient-actor | 2303.1 |
| agent57-outperforming-the-atari-human | 61507.83 |
| mastering-atari-go-chess-and-shogi-by | 85932.60 |
| deep-reinforcement-learning-with-double-q | 493.4 |
| dueling-network-architectures-for-deep | 1472.6 |
| dueling-network-architectures-for-deep | 910.6 |
| train-a-real-world-local-path-planner-in-one | 2597.2 |
| increasing-the-action-gap-new-operators-for | 1328.25 |
| online-and-offline-reinforcement-learning-by | 2705.82 |
| generalized-data-distribution-iteration | 7607 |
| dueling-network-architectures-for-deep | 1225.4 |
| distributional-reinforcement-learning-with-1 | 3117 |
| dna-proximal-policy-optimization-with-a-dual | 19789 |
| recurrent-experience-replay-in-distributed | 53318.7 |
| first-return-then-explore | 197376 |
| mastering-atari-with-discrete-world-models-1 | 810 |
| the-arcade-learning-environment-an-evaluation | 670 |
| generalized-data-distribution-iteration | 14649 |
| noisy-networks-for-exploration | 1896 |
| dueling-network-architectures-for-deep | 3409.0 |
| evolving-simple-programs-for-playing-atari | 1138 |
| increasing-the-action-gap-new-operators-for | 747.26 |
| the-arcade-learning-environment-an-evaluation | 501.3 |