SMAC+
Smac On Smac Def Infantry Sequential
Metrics
Median Win Rate
Results
Performance results of various models on this benchmark
| Model Name | Median Win Rate (%) | Paper Title |
|---|---|---|
| DRIMA | 100 | Disentangling Sources of Risk for Distributional Multi-Agent Reinforcement Learning |
| DMIX | 100 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning |
| MADDPG | 100 | Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments |
| QTRAN | 100 | QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning |
| QMIX | 96.9 | QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning |
| VDN | 96.9 | Value-Decomposition Networks For Cooperative Multi-Agent Learning |
| DIQL | 93.8 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning |
| IQL | 93.8 | The StarCraft Multi-Agent Challenges+: Learning of Multi-Stage Tasks and Environmental Factors without Precise Reward Functions |
| DDN | 90.6 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning |
| MASAC | 37.5 | Decomposed Soft Actor-Critic Method for Cooperative Multi-Agent Reinforcement Learning |
| COMA | 28.1 | Counterfactual Multi-Agent Policy Gradients |
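
For working with these results programmatically, here is a minimal Python sketch that encodes the table above as plain data and ranks the models by median win rate. The `LeaderboardEntry` class and `ENTRIES` list are illustrative names introduced here, not part of HyperAI or any benchmark library; the values are copied verbatim from the table.

```python
from dataclasses import dataclass


@dataclass
class LeaderboardEntry:
    # Illustrative container for one leaderboard row (not an official API).
    model: str
    median_win_rate: float  # median win rate in percent, as reported above


# Entries copied from the table above.
ENTRIES = [
    LeaderboardEntry("DRIMA", 100.0),
    LeaderboardEntry("DMIX", 100.0),
    LeaderboardEntry("MADDPG", 100.0),
    LeaderboardEntry("QTRAN", 100.0),
    LeaderboardEntry("QMIX", 96.9),
    LeaderboardEntry("VDN", 96.9),
    LeaderboardEntry("DIQL", 93.8),
    LeaderboardEntry("IQL", 93.8),
    LeaderboardEntry("DDN", 90.6),
    LeaderboardEntry("MASAC", 37.5),
    LeaderboardEntry("COMA", 28.1),
]

if __name__ == "__main__":
    # Rank models by median win rate, highest first.
    ranked = sorted(ENTRIES, key=lambda e: e.median_win_rate, reverse=True)
    for rank, entry in enumerate(ranked, start=1):
        print(f"{rank:2d}. {entry.model:<8} {entry.median_win_rate:5.1f}%")
```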