Atari Games on Atari 2600 Pitfall
Metrics
Score
Results
Performance results of various models on this benchmark, measured by game score.
Comparison Table
| Model Name | Score |
|---|---|
| noisy-networks-for-exploration | 0 |
| first-return-then-explore | 6954 |
| policy-optimization-with-penalized-point | 0 |
| distributional-reinforcement-learning-with-1 | 0 |
| implicit-quantile-networks-for-distributional | 0 |
| dna-proximal-policy-optimization-with-a-dual | 0 |
| increasing-the-action-gap-new-operators-for | 0 |
| online-and-offline-reinforcement-learning-by | 0 |
| exploration-by-self-supervised-exploitation | 0 |
| mastering-atari-go-chess-and-shogi-by | 0 |
| mastering-atari-with-discrete-world-models-1 | 0 |
| evolving-simple-programs-for-playing-atari | 0 |
| generalized-data-distribution-iteration | -4.345 |
| distributed-prioritized-experience-replay | -0.6 |
| go-explore-a-new-approach-for-hard | 102571 |
| train-a-real-world-local-path-planner-in-one | 0 |
| impala-scalable-distributed-deep-rl-with | -1.66 |
| recurrent-experience-replay-in-distributed | 0 |
| exploration-by-random-network-distillation | -3 |
| gdi-rethinking-what-makes-reinforcement | 0 |
| agent57-outperforming-the-atari-human | 18756.01 |
| generalized-data-distribution-iteration | 0 |
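
For readers who want to work with these results programmatically, below is a minimal sketch that re-sorts the table by score in descending order, the conventional leaderboard ordering. The (model, score) pairs are copied from the comparison table above; the variable names and output format are illustrative, not part of any benchmark API.

```python
# Illustrative sketch: sort the Pitfall leaderboard rows above by score.
# Each entry is (paper slug, reported score), copied from the table.
results = [
    ("noisy-networks-for-exploration", 0),
    ("first-return-then-explore", 6954),
    ("policy-optimization-with-penalized-point", 0),
    ("distributional-reinforcement-learning-with-1", 0),
    ("implicit-quantile-networks-for-distributional", 0),
    ("dna-proximal-policy-optimization-with-a-dual", 0),
    ("increasing-the-action-gap-new-operators-for", 0),
    ("online-and-offline-reinforcement-learning-by", 0),
    ("exploration-by-self-supervised-exploitation", 0),
    ("mastering-atari-go-chess-and-shogi-by", 0),
    ("mastering-atari-with-discrete-world-models-1", 0),
    ("evolving-simple-programs-for-playing-atari", 0),
    ("generalized-data-distribution-iteration", -4.345),
    ("distributed-prioritized-experience-replay", -0.6),
    ("go-explore-a-new-approach-for-hard", 102571),
    ("train-a-real-world-local-path-planner-in-one", 0),
    ("impala-scalable-distributed-deep-rl-with", -1.66),
    ("recurrent-experience-replay-in-distributed", 0),
    ("exploration-by-random-network-distillation", -3),
    ("gdi-rethinking-what-makes-reinforcement", 0),
    ("agent57-outperforming-the-atari-human", 18756.01),
    ("generalized-data-distribution-iteration", 0),
]

# Sort descending by score and print a simple ranked listing.
for rank, (model, score) in enumerate(
    sorted(results, key=lambda row: row[1], reverse=True), start=1
):
    print(f"{rank:>2}. {model:<50} {score:>10}")
```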