HyperAI
SMAC 1
Smac On Smac Def Infantry Sequential
Metrics
Median Win Rate
Results
Performance results of various models on this benchmark
| Model Name | Median Win Rate | Paper Title | Repository |
| --- | --- | --- | --- |
| DRIMA | 100 | Disentangling Sources of Risk for Distributional Multi-Agent Reinforcement Learning | - |
| DIQL | 93.8 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning | |
| DDN | 90.6 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning | |
| DMIX | 100 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning | |
| MADDPG | 100 | Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments | |
| QTRAN | 100 | QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning | |
| IQL | 93.8 | The StarCraft Multi-Agent Challenges+: Learning of Multi-Stage Tasks and Environmental Factors without Precise Reward Functions | |
| QMIX | 96.9 | QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning | |
| COMA | 28.1 | Counterfactual Multi-Agent Policy Gradients | |
| MASAC | 37.5 | Decomposed Soft Actor-Critic Method for Cooperative Multi-Agent Reinforcement Learning | |
| VDN | 96.9 | Value-Decomposition Networks For Cooperative Multi-Agent Learning | |
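The leaderboard metric is a median win rate. As an illustrative sketch only (this is not HyperAI's or SMAC's evaluation code, and the run values are hypothetical), a median win rate over several independent evaluation runs could be computed like this:

```python
# Illustrative sketch: computing a "Median Win Rate" from per-run win rates.
# The run values below are hypothetical, not taken from the leaderboard.

def median_win_rate(win_rates):
    """Return the median of per-run win rates (in percent)."""
    s = sorted(win_rates)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]          # odd count: middle value
    return (s[mid - 1] + s[mid]) / 2  # even count: mean of the two middle values

# e.g. five evaluation seeds, each reporting a win rate over its test episodes
runs = [93.8, 96.9, 90.6, 96.9, 100.0]
print(median_win_rate(runs))  # → 96.9
```

Reporting the median rather than the mean makes the headline number robust to a single unlucky or lucky seed.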