SMAC+
Smac On Smac Def Armored Sequential
Metric: Median Win Rate
Results
Performance results of various models on this benchmark:
| Model name | Median Win Rate (%) | Paper Title |
| --- | --- | --- |
| DRIMA | 100 | Disentangling Sources of Risk for Distributional Multi-Agent Reinforcement Learning |
| VDN | 96.9 | Value-Decomposition Networks For Cooperative Multi-Agent Learning |
| QTRAN | 93.8 | QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning |
| MADDPG | 90.6 | Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments |
| DMIX | 81.3 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning |
| DDN | 71.9 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning |
| DIQL | 53.1 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning |
| IQL | 9.4 | The StarCraft Multi-Agent Challenges+ : Learning of Multi-Stage Tasks and Environmental Factors without Precise Reward Functions |
| QMIX | 0.0 | QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning |
| MASAC | 0.0 | Decomposed Soft Actor-Critic Method for Cooperative Multi-Agent Reinforcement Learning |
| COMA | 0.0 | Counterfactual Multi-Agent Policy Gradients |
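
Median Win Rate here follows the usual SMAC-style evaluation convention: each algorithm is trained over several independent runs, each run is evaluated on a batch of test episodes, and the median of the per-run win rates is reported. As a rough illustration only, not the benchmark's official evaluation code (the function name and the data layout are assumptions), a minimal sketch of how such a figure could be computed:

```python
import numpy as np

def median_win_rate(eval_outcomes_per_run):
    """Median across runs of the per-run test win rate, in percent.

    eval_outcomes_per_run: one sequence of booleans per training run (seed),
    where each entry is True if that evaluation episode was won.
    """
    per_run_rates = [100.0 * np.mean(outcomes) for outcomes in eval_outcomes_per_run]
    return float(np.median(per_run_rates))

# Hypothetical example: three runs, 32 evaluation episodes each.
rng = np.random.default_rng(0)
runs = [rng.random(32) < p for p in (0.95, 0.90, 1.00)]
print(f"median win rate: {median_win_rate(runs):.1f}%")
```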