Atari Games on Atari 2600: Wizard of Wor

Evaluation Metric

Score
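"Score" here is the raw in-game score, i.e. the undiscounted return accumulated over an episode. Below is a minimal sketch of how such a score can be measured with the Arcade Learning Environment via Gymnasium; it assumes the `gymnasium` and `ale-py` packages, and the random policy is only a stand-in for a trained agent.

```python
import gymnasium as gym
import ale_py  # provides the ALE/... Atari environments

# With ale-py >= 0.9 the ALE environment ids must be registered explicitly;
# older versions register them automatically on import.
gym.register_envs(ale_py)

env = gym.make("ALE/WizardOfWor-v5")
obs, info = env.reset(seed=0)

score = 0.0
done = False
while not done:
    action = env.action_space.sample()  # placeholder for a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    score += reward  # raw game score summed over the episode
    done = terminated or truncated

print(f"Episode score: {score}")
env.close()
```

Published numbers are typically averaged over many evaluation episodes (and often over multiple seeds), so a single-episode rollout like this is illustrative only.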

Evaluation Results

Performance results of each model on this benchmark

Comparison Table

| Model name | Score |
| --- | --- |
| prioritized-experience-replay | 4802.0 |
| self-imitation-learning | 7088.3 |
| prioritized-experience-replay | 5727.0 |
| recurrent-experience-replay-in-distributed | 144362.7 |
| online-and-offline-reinforcement-learning-by | 100096.6 |
| mastering-atari-with-discrete-world-models-1 | 12851 |
| evolution-strategies-as-a-scalable | 3480.0 |
| the-arcade-learning-environment-an-evaluation | 1981.3 |
| learning-values-across-many-orders-of | 483.0 |
| generalized-data-distribution-iteration | 63735 |
| massively-parallel-methods-for-deep | 10431.0 |
| deep-reinforcement-learning-with-double-q | 2704.0 |
| distributed-prioritized-experience-replay | 46204 |
| generalized-data-distribution-iteration | 64239 |
| agent57-outperforming-the-atari-human | 157306.41 |
| dueling-network-architectures-for-deep | 12352.0 |
| the-arcade-learning-environment-an-evaluation | 105500 |
| noisy-networks-for-exploration | 9149 |
| human-level-control-through-deep | 3393.0 |
| deep-reinforcement-learning-with-double-q | 6201.0 |
| Model | 2136.9 |
| mastering-atari-go-chess-and-shogi-by | 197126.00 |
| impala-scalable-distributed-deep-rl-with | 9157.50 |
| dna-proximal-policy-optimization-with-a-dual | 20851 |
| increasing-the-action-gap-new-operators-for | 9541.14 |
| distributional-reinforcement-learning-with-1 | 25061 |
| a-distributional-perspective-on-reinforcement | 9300.0 |
| dueling-network-architectures-for-deep | 7492.0 |
| deep-reinforcement-learning-with-double-q | 1609.0 |
| implicit-quantile-networks-for-distributional | 31190 |
| asynchronous-methods-for-deep-reinforcement | 5278.0 |
| evolving-simple-programs-for-playing-atari | 3820 |
| asynchronous-methods-for-deep-reinforcement | 18082.0 |
| dueling-network-architectures-for-deep | 7054.0 |
| dueling-network-architectures-for-deep | 7855.0 |
| fully-parameterized-quantile-function-for | 44782.6 |
| policy-optimization-with-penalized-point | 4704 |
| train-a-real-world-local-path-planner-in-one | 21049 |
| deep-reinforcement-learning-with-double-q | 10471.0 |
| deep-exploration-via-bootstrapped-dqn | 6804.7 |
| asynchronous-methods-for-deep-reinforcement | 17244.0 |