
Atari Games On Atari 2600 Gravitar

Evaluation Metric

Score

Evaluation Results

Performance results of each model on this benchmark.
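The "Score" metric is the raw game score, i.e. the undiscounted return of an evaluation episode, usually averaged over many episodes. Below is a minimal sketch of such an evaluation loop, assuming the Gymnasium `ALE/Gravitar-v5` environment from `ale-py` and a random policy as a stand-in for a trained agent; it does not reproduce the specific protocols (e.g. 30 no-op starts or human starts) used by the papers in the table.

```python
# Minimal sketch: average episode score on Gravitar.
# Assumes gymnasium + ale-py are installed; the random policy is a
# placeholder for a trained agent, not any method from the leaderboard.
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # required on Gymnasium >= 1.0; older versions register ALE automatically


def evaluate(num_episodes: int = 10, seed: int = 0) -> float:
    env = gym.make("ALE/Gravitar-v5")
    returns = []
    for episode in range(num_episodes):
        obs, info = env.reset(seed=seed + episode)
        terminated = truncated = False
        episode_return = 0.0
        while not (terminated or truncated):
            action = env.action_space.sample()  # placeholder policy
            obs, reward, terminated, truncated, info = env.step(action)
            episode_return += reward  # reward is the in-game score increment
        returns.append(episode_return)
    env.close()
    return sum(returns) / len(returns)


if __name__ == "__main__":
    print(f"Mean episode score: {evaluate():.1f}")
```

Suffixes in the model names such as "noop" and "hs" typically refer to the evaluation protocol reported in the corresponding paper (random no-op starts and human starts, respectively), so scores across rows are not always directly comparable.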

| Model Name | Score | Paper Title | Repository |
| --- | --- | --- | --- |
| SARSA | 429.0 | - | - |
| CGP | 2350 | Evolving simple programs for playing Atari games | - |
| Nature DQN | 306.7 | Human level control through deep reinforcement learning | - |
| ES FF (1 hour) noop | 805.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning | - |
| Agent57 | 19213.96 | Agent57: Outperforming the Atari Human Benchmark | - |
| SND-STD | 4643 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments | - |
| MuZero | 6682.70 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | - |
| SND-V | 2741 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments | - |
| SND-VIC | 6712 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments | - |
| A2C + SIL | 1874.2 | Self-Imitation Learning | - |
| A3C LSTM hs | 320.0 | Asynchronous Methods for Deep Reinforcement Learning | - |
| GDI-I3 | 5905 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| DQNMMCe | 1078.3 | Count-Based Exploration with the Successor Representation | - |
| DDQN+Pop-Art noop | 483.5 | Learning values across many orders of magnitude | - |
| Duel noop | 588.0 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| MuZero (Res2 Adam) | 8006.93 | Online and Offline Reinforcement Learning by Planning with a Learned Model | - |
| DQN hs | 298.0 | Deep Reinforcement Learning with Double Q-learning | - |
| IMPALA (deep) | 359.50 | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures | - |
| C51 noop | 440.0 | A Distributional Perspective on Reinforcement Learning | - |
| GDI-H3 | 5915 | Generalized Data Distribution Iteration | - |