Continuous Control on LunarLander (OpenAI Gym)
Metrics
Score
Results
Performance results of various models on this benchmark
| Model name | Score | Paper Title | Repository |
|---|---|---|---|
| MAC | 163.5 | Mean Actor Critic | |
| TD3 | 277.26±4.17 | Addressing Function Approximation Error in Actor-Critic Methods | |
| DDPG | 256.98±14.38 | Continuous control with deep reinforcement learning | |
| PPO | 175.14±44.94 | Proximal Policy Optimization Algorithms | |
| SAC | 284.59±0.97 | Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor | |
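For reference, the Score column is typically the average episodic return on the `LunarLanderContinuous-v2` environment. The sketch below shows one plausible way to compute such a score; it assumes the Gymnasium fork of OpenAI Gym (with Box2D installed) and a user-supplied `policy` callable, and it uses a random policy only as a placeholder, not as the evaluation protocol used by any of the papers above.

```python
# Minimal evaluation sketch (assumption): average episodic return over N episodes
# on LunarLanderContinuous-v2. Requires: pip install "gymnasium[box2d]" numpy
import gymnasium as gym
import numpy as np


def evaluate(policy, n_episodes=100, seed=0):
    """Return mean and std of the episodic return achieved by `policy`."""
    env = gym.make("LunarLanderContinuous-v2")  # id may be "...-v3" in newer Gymnasium releases
    returns = []
    for ep in range(n_episodes):
        obs, _ = env.reset(seed=seed + ep)
        done, ep_return = False, 0.0
        while not done:
            action = policy(obs)
            obs, reward, terminated, truncated, _ = env.step(action)
            ep_return += reward
            done = terminated or truncated
        returns.append(ep_return)
    env.close()
    return float(np.mean(returns)), float(np.std(returns))


if __name__ == "__main__":
    # Placeholder policy: random actions sampled from the action space.
    env = gym.make("LunarLanderContinuous-v2")
    random_policy = lambda obs: env.action_space.sample()
    mean_ret, std_ret = evaluate(random_policy, n_episodes=10)
    print(f"mean return: {mean_ret:.2f} ± {std_ret:.2f}")
```

A trained agent (e.g. a TD3 or SAC actor) would replace `random_policy` with a function mapping observations to continuous actions; the number of evaluation episodes and seeds may differ between the papers listed above.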