Heterogeneous Agent Collaborative Reinforcement Learning
Zhixia Zhang Zixuan Huang Xin Xia Deqing Wang Fuzhen Zhuang Shuai Ma Ning Ding Yaodong Yang Jianxin Li Yikun Ban
Abstract
We introduce Heterogeneous Agent Collaborative Reinforcement Learning (HACRL), a new learning paradigm that addresses the inefficiencies of isolated on-policy optimization. HACRL enables collaborative optimization with independent execution: heterogeneous agents share verified rollouts during training to improve one another, yet operate independently at inference time. Unlike Large Language Model (LLM)-based multi-agent reinforcement learning (MARL) approaches, HACRL requires no coordinated deployment; and unlike on-/off-policy distillation, it enables bidirectional mutual learning among heterogeneous agents rather than one-way teacher-to-student knowledge transfer. Building on this paradigm, we propose HACPO, a collaborative RL algorithm that enables principled rollout sharing to maximize sample utilization and cross-agent knowledge transfer. To mitigate capability discrepancies and policy distribution shifts, HACPO introduces four tailored mechanisms with theoretical guarantees on unbiased advantage estimation and optimization correctness. Extensive experiments across diverse heterogeneous model combinations and reasoning benchmarks show that HACPO consistently improves all participating agents, outperforming GSPO by 3.3% on average while incurring only half the rollout cost.
One-sentence Summary
Researchers from Beihang University and collaborating institutes propose HACRL, a paradigm enabling heterogeneous agents to share verified rollouts for mutual improvement without coordinated deployment. Their algorithm, HACPO, introduces bidirectional learning mechanisms that outperform GSPO in reasoning benchmarks while halving rollout costs.
Key Contributions
- Heterogeneous Agent Collaborative Reinforcement Learning (HACRL) addresses the inefficiencies of isolated on-policy optimization by enabling heterogeneous agents to share verified rollouts during training while maintaining independent execution at inference time.
- The proposed HACPO algorithm implements this paradigm through four tailored mechanisms that mitigate capability discrepancies and policy distribution shifts to ensure unbiased advantage estimation and maximize sample utilization.
- Extensive experiments across diverse heterogeneous model combinations and reasoning benchmarks demonstrate that HACPO consistently improves all participating agents, outperforming GSPO by an average of 3.3% while using only half the rollout cost.
Introduction
Reinforcement Learning with Verifiable Rewards (RLVR) has become a standard for training strong reasoning models, yet it suffers from high computational costs due to isolated on-policy sampling where each agent generates and discards its own trajectories. Prior approaches like Multi-Agent Reinforcement Learning require coordinated execution that is impractical for independent deployment, while knowledge distillation typically enforces a one-way transfer from a teacher to a student that limits bidirectional learning among heterogeneous models. The authors introduce Heterogeneous Agent Collaborative Reinforcement Learning (HACRL) and its algorithm HACPO to enable independent agents to share verified rollouts during training for mutual improvement. This framework maximizes sample efficiency by reusing trajectories across multiple agents and ensures unbiased optimization through four tailored mechanisms that address capability discrepancies and policy distribution shifts.
Method
The authors propose Heterogeneous Agent Collaborative Policy Optimization (HACPO), a novel framework designed to facilitate rollout sharing and knowledge transfer among heterogeneous Large Language Model (LLM) agents. Unlike traditional Multi-Agent Reinforcement Learning (MARL), which often relies on joint responses, or knowledge distillation, which follows a one-way teacher-to-student path, HACRL enables independent execution with mutual learning through cross-agent rollout reuse.

The core objective of HACRL is to optimize each agent $k$ by maximizing a joint objective that combines self-generated experiences ($\mathcal{J}_{\mathrm{homo}}$) and cross-agent information ($\mathcal{J}_{\mathrm{hete}}$). This formulation allows agents to benefit from the diverse capabilities of their peers while managing the challenges introduced by heterogeneity.
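A plausible form of this joint objective, written here only as an illustrative sketch (the combination weight $\lambda$ is our assumption, not taken from the paper), is

$$\mathcal{J}^{(k)}(\theta_k) = \mathcal{J}_{\mathrm{homo}}^{(k)}(\theta_k) + \lambda\, \mathcal{J}_{\mathrm{hete}}^{(k)}(\theta_k), \qquad \lambda \ge 0,$$

where $\mathcal{J}_{\mathrm{homo}}^{(k)}$ scores agent $k$'s own rollouts and $\mathcal{J}_{\mathrm{hete}}^{(k)}$ scores verified rollouts received from the other agents.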

As illustrated in the workflow diagram, the training process involves two primary challenges: capability discrepancy and policy distribution discrepancy. To address these, HACPO incorporates four tailored modifications.
Agent-Capability-Aware Advantage Estimation

Standard group-relative advantage estimation relies solely on self-generated rewards, which is suboptimal in heterogeneous settings. HACPO introduces a capability-adjusted baseline $\hat{\mu}_t^{(k)}$ that leverages rewards from all agents, reweighted by their relative capabilities. The advantage for a response $y_{t,i}^{(k)}$ is defined as

$$A_{t,i}^{(k)} = \frac{R\!\left(y_{t,i}^{(k)}\right) - \hat{\mu}_t^{(k)}}{\sigma_{t,\mathrm{joint}}},$$

where $\sigma_{t,\mathrm{joint}}$ is the standard deviation of rewards across all agents. The baseline $\hat{\mu}_t^{(k)}$ is computed using a capability ratio $\omega_t^{(k,j)}$:

$$\hat{\mu}_t^{(k)} = \frac{1}{nG} \sum_{j=1}^{n} \sum_{i=1}^{G} \omega_t^{(k,j)}\, R\!\left(y_{t,i}^{(j)}\right).$$

Here, $\omega_t^{(k,j)}$ represents the smoothed performance ratio between agent $k$ and agent $j$, ensuring that the baseline is properly calibrated across agents with different strengths.
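As a concrete illustration, the following minimal Python sketch computes these advantages for one training step. The function name and array shapes are ours, and we assume the capability ratios $\omega_t^{(k,j)}$ are precomputed (e.g., from running mean rewards); this is a sketch of the estimator above, not the authors' implementation.

```python
import numpy as np

def capability_aware_advantages(rewards: np.ndarray, omega: np.ndarray,
                                eps: float = 1e-8) -> np.ndarray:
    """Sketch of agent-capability-aware advantage estimation.

    rewards: (n, G) rewards R(y_{t,i}^{(j)}) for n agents, G responses each.
    omega:   (n, n) smoothed capability ratios omega_t^{(k, j)} (assumed given).
    Returns: (n, G) advantages, one row per updating agent k.
    """
    n, G = rewards.shape
    # Joint reward standard deviation over all agents at this step.
    sigma_joint = rewards.std() + eps
    advantages = np.empty_like(rewards, dtype=float)
    for k in range(n):
        # Capability-adjusted baseline: every agent j's rewards reweighted
        # by omega[k, j], then averaged over all n * G responses.
        mu_hat_k = (omega[k][:, None] * rewards).sum() / (n * G)
        advantages[k] = (rewards[k] - mu_hat_k) / sigma_joint
    return advantages
```

Note that when all ratios equal one (a homogeneous group), $\hat{\mu}_t^{(k)}$ collapses to the plain joint mean, recovering a group-relative baseline shared across agents.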
Model Capabilities Discrepancy Coefficient

To further handle capability gaps, the framework applies the capability ratio directly to the advantage when updating an agent on cross-agent samples. When agent $k$ learns from a response generated by agent $j$, the effective advantage is scaled:

$$\tilde{A}_{t,i}^{(k)} = \omega_t^{(j,k)}\, A_{t,i}^{(j)}.$$

This mechanism encourages aggressive learning from stronger agents while adopting a conservative update strategy for samples from weaker agents.
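In code this scaling is a single multiplication; a hypothetical helper (names are illustrative) makes the direction of the effect explicit:

```python
def scaled_cross_agent_advantage(advantage_j: float, omega_jk: float) -> float:
    """Scale agent j's advantage by the capability ratio omega_t^{(j,k)} before
    agent k consumes it: omega_jk > 1 (j stronger than k) amplifies the update,
    while omega_jk < 1 (j weaker) dampens it."""
    return omega_jk * advantage_j
```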
Exponential Importance Sampling

To correct for distributional mismatches between the policy generating a sample and the policy being updated, HACPO employs sequence-level importance sampling. For a response $y_{t,i}^{(j)}$ generated by agent $j$ and used to update agent $k$, the importance ratio is the length-normalized likelihood ratio

$$s_{t,i}^{(k,j)} = \left( \frac{\pi_{\theta_t}^{(k)}\!\left(y_{t,i}^{(j)}\right)}{\pi_{\theta_{\mathrm{old}}}^{(j)}\!\left(y_{t,i}^{(j)}\right)} \right)^{\frac{1}{\left|y_{t,i}^{(j)}\right|}}.$$

Given that inter-agent policy discrepancies can be large, the authors introduce a non-gradient exponential reweighting to mitigate aggressive updates:

$$\tilde{s}_{t,i}^{(k,j)} = s_{t,i}^{(k,j)} \cdot \left( \mathrm{sg}\!\left[ s_{t,i}^{(k,j)} \right] \right)^{\alpha},$$

where $\alpha \ge 0$ controls the degree of conservativeness and $\mathrm{sg}[\cdot]$ denotes the stop-gradient operator, so the extra factor reweights the update without contributing gradients.
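Assuming per-token log-probabilities are summed over the sequence, the reweighted ratio can be sketched in PyTorch as follows (function and argument names are our own):

```python
import torch

def exponential_importance_ratio(logp_k: torch.Tensor,
                                 logp_j_old: torch.Tensor,
                                 seq_len: int,
                                 alpha: float = 1.0) -> torch.Tensor:
    """Sketch of sequence-level exponential importance sampling.

    logp_k:     summed log-prob of the response under the updating policy pi_theta_t^{(k)}
    logp_j_old: summed log-prob under the generating (old) policy pi_theta_old^{(j)}
    seq_len:    |y|, the response length in tokens
    alpha:      conservativeness exponent (alpha >= 0); alpha = 0 gives plain IS.
    """
    # Length-normalized (geometric-mean) sequence-level ratio.
    s = torch.exp((logp_k - logp_j_old) / seq_len)
    # Non-gradient exponential reweighting: the extra factor is detached,
    # so gradients still flow only through the base ratio s.
    return s * s.detach().pow(alpha)
```

Since cross-agent ratios above 1.0 are clipped by the mechanism below, the exponentiation mainly shrinks ratios below one, further damping updates from poorly matched samples.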
Stepwise Clipping

Finally, to stabilize training and prevent cross-agent rollouts from dominating the gradient updates, HACPO utilizes an asymmetric clipping scheme. Unlike standard symmetric clipping, the upper bound for cross-agent importance ratios is strictly limited to 1.0:

$$s_{t,i}^{(k,j)} \in [1.0 - \delta,\ 1.0].$$

Additionally, a stepwise clipping strategy is applied within each training step. As the number of parameter updates $k$ within the step increases, the lower bound tightens:

$$\mathrm{clip}\!\left(s_{t,i}^{(k,j)}\right) = \mathrm{clip}\!\left(s_{t,i}^{(k,j)},\ 1 - \delta + k \cdot \delta_{\mathrm{step}},\ 1.0\right).$$

This ensures that cross-agent responses are subject to increasingly stricter constraints as updates accumulate within a training step, maintaining stability in the heterogeneous collaborative policy optimization process.
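A minimal sketch of this schedule, assuming $\delta$ and $\delta_{\mathrm{step}}$ are hyperparameters and `k_update` counts parameter updates within the current training step (the default values shown are illustrative, not the paper's):

```python
import torch

def stepwise_clip(s_tilde: torch.Tensor, k_update: int,
                  delta: float = 0.2, delta_step: float = 0.05) -> torch.Tensor:
    """Sketch of asymmetric stepwise clipping for cross-agent importance ratios.

    The upper bound is fixed at 1.0, so cross-agent rollouts can never push
    an update more aggressively than an on-policy sample would; the lower
    bound tightens as the in-step update index k_update grows.
    """
    lower = 1.0 - delta + k_update * delta_step
    lower = min(lower, 1.0)  # guard: lower bound must not exceed the upper bound
    return torch.clamp(s_tilde, min=lower, max=1.0)
```

Fixing the upper bound at 1.0 is what makes the scheme asymmetric: only the permissible downward deviation shrinks over the course of a step.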
Experiment
- Experiments across three heterogeneity settings (state, size, and model architecture) validate that HACPO outperforms single-agent baselines and naive multi-agent approaches by enabling bidirectional knowledge exchange between agents of varying capabilities.
- Qualitative analysis confirms that stronger models benefit from the complementary exploration signals and informative errors of weaker agents, while weaker models gain from the guidance of stronger peers, proving that learning is not purely unidirectional.
- Ablation studies demonstrate that agent-capability-aware advantage estimation and gradient modulation are essential for correcting systematic biases and balancing learning rates between heterogeneous agents.
- The necessity of stepwise clipping is established as a critical mechanism for stabilizing training, preventing the severe instability caused by unpredictable importance sampling values in cross-agent responses.
- Results across diverse model combinations, including different architectures and tokenizers, confirm the robustness and generalizability of the proposed method in extracting transferable knowledge from heterogeneous rollouts.