
Heterogeneous Agent Collaborative Reinforcement Learning

Zhixia Zhang Zixuan Huang Xin Xia Deqing Wang Fuzhen Zhuang Shuai Ma Ning Ding Yaodong Yang Jianxin Li Yikun Ban

Abstract

In this paper, we introduce a new paradigm called Heterogeneous Agent Collaborative Reinforcement Learning (HACRL), which addresses the inefficiencies of isolated on-policy optimization. The paradigm enables collaborative optimization with independent execution: heterogeneous agents share verified rollouts during training to improve one another, while each agent operates independently at inference time. Unlike LLM-based multi-agent reinforcement learning (MARL) systems, HACRL requires no coordinated deployment across agents; and unlike on-policy or off-policy distillation, it enables bidirectional mutual learning among heterogeneous agents rather than one-way teacher-to-student knowledge transfer. Building on this paradigm, we propose a new collaborative reinforcement learning algorithm, HACPO, which enables systematic rollout sharing to maximize sample utilization and facilitate knowledge transfer across agents. To mitigate capability gaps and policy distribution shifts, HACPO integrates four tailored mechanisms, backed by theoretical guarantees of unbiased advantage estimation and soundness of the optimization. Extensive experiments across diverse combinations of heterogeneous models and reasoning benchmarks show that HACPO consistently improves the performance of all participating agents, outperforming GSPO by an average of 3.3% while using only half the rollout cost.

One-sentence Summary

Researchers from Beihang University and collaborating institutes propose HACRL, a paradigm enabling heterogeneous agents to share verified rollouts for mutual improvement without coordinated deployment. Their algorithm, HACPO, introduces bidirectional learning mechanisms that outperform GSPO in reasoning benchmarks while halving rollout costs.

Key Contributions

  • Heterogeneous Agent Collaborative Reinforcement Learning (HACRL) addresses the inefficiencies of isolated on-policy optimization by enabling heterogeneous agents to share verified rollouts during training while maintaining independent execution at inference time.
  • The proposed HACPO algorithm implements this paradigm through four tailored mechanisms that mitigate capability discrepancies and policy distribution shifts to ensure unbiased advantage estimation and maximize sample utilization.
  • Extensive experiments across diverse heterogeneous model combinations and reasoning benchmarks demonstrate that HACPO consistently improves all participating agents, outperforming GSPO by an average of 3.3% while using only half the rollout cost.

Introduction

Reinforcement Learning with Verifiable Rewards (RLVR) has become a standard for training strong reasoning models, yet it suffers from high computational costs due to isolated on-policy sampling where each agent generates and discards its own trajectories. Prior approaches like Multi-Agent Reinforcement Learning require coordinated execution that is impractical for independent deployment, while knowledge distillation typically enforces a one-way transfer from a teacher to a student that limits bidirectional learning among heterogeneous models. The authors introduce Heterogeneous Agent Collaborative Reinforcement Learning (HACRL) and its algorithm HACPO to enable independent agents to share verified rollouts during training for mutual improvement. This framework maximizes sample efficiency by reusing trajectories across multiple agents and ensures unbiased optimization through four tailored mechanisms that address capability discrepancies and policy distribution shifts.

Method

The authors propose Heterogeneous Agent Collaborative Policy Optimization (HACPO), a novel framework designed to facilitate rollout sharing and knowledge transfer among heterogeneous Large Language Model (LLM) agents. Unlike traditional Multi-Agent Reinforcement Learning (MARL), which often relies on joint responses, or Knowledge Distillation, which follows a one-way teacher-to-student path, HACRL enables independent execution with mutual learning through cross-agent rollout reuse.

The core objective of HACRL is to optimize each agent $k$ by maximizing a joint objective that combines self-generated experiences ($J_{\mathrm{homo}}$) and cross-agent information ($J_{\mathrm{hete}}$). This formulation allows agents to benefit from the diverse capabilities of their peers while managing the challenges introduced by heterogeneity.

As illustrated in the workflow diagram, the training process involves two primary challenges: capability discrepancy and policy distribution discrepancy. To address these, HACPO incorporates four tailored modifications.
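The rollout-sharing structure of the training phase can be illustrated with a toy sketch. The agents, verifier, and the helper `collect_shared_rollouts` below are hypothetical stand-ins for illustration, not the paper's implementation:

```python
import random

def collect_shared_rollouts(agents, prompts, verifier, G=4):
    """Each agent generates G responses per prompt; verified rollouts are
    pooled so every agent can later train on its peers' samples too."""
    pool = []  # (agent_id, prompt, response, reward)
    for k, agent in enumerate(agents):
        for x in prompts:
            for _ in range(G):
                y = agent(x)
                pool.append((k, x, y, verifier(x, y)))
    # Every agent sees the full pool: its own rollouts (homogeneous part)
    # plus its peers' rollouts (heterogeneous part).
    return {k: pool for k in range(len(agents))}

# Toy "agents" on arithmetic prompts with an exact-match verifiable reward.
agent_a = lambda x: str(eval(x))                          # strong: always correct
agent_b = lambda x: str(eval(x) + random.choice([0, 1]))  # weak: sometimes off by one
verifier = lambda x, y: 1.0 if y == str(eval(x)) else 0.0

buffers = collect_shared_rollouts([agent_a, agent_b], ["1+1", "2+3"], verifier)
```

At inference time no such pooling happens; each agent is deployed on its own.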

Agent-Capability-Aware Advantage Estimation

Standard group-relative advantage estimation relies solely on self-generated rewards, which is suboptimal in heterogeneous settings. HACPO introduces a capability-adjusted baseline $\hat{\mu}_{t}^{(k)}$ that leverages rewards from all agents, reweighted by their relative capabilities. The advantage for a response $y_{t,i}^{(k)}$ is defined as:

$$A_{t,i}^{(k)} = \frac{R\big(y_{t,i}^{(k)}\big) - \hat{\mu}_{t}^{(k)}}{\sigma_{t,\mathrm{joint}}}$$

where $\sigma_{t,\mathrm{joint}}$ is the standard deviation of rewards across all agents. The baseline $\hat{\mu}_{t}^{(k)}$ is computed using a capability ratio $\omega_{t}^{(k,j)}$:

$$\hat{\mu}_{t}^{(k)} = \frac{1}{nG} \sum_{j=1}^{n} \sum_{i=1}^{G} \omega_{t}^{(k,j)} \, R\big(y_{t,i}^{(j)}\big)$$

Here, $\omega_{t}^{(k,j)}$ represents the smoothed performance ratio between agent $k$ and agent $j$, ensuring that the baseline is properly calibrated across agents with different strengths.
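The two equations above can be sketched numerically. This is a minimal illustration, assuming rewards are arranged as an (agents × group size) matrix and taking the smoothed capability ratios $\omega$ as given, since the paper's smoothing scheme is not reproduced here:

```python
import numpy as np

def capability_adjusted_advantages(rewards, omega):
    """rewards: (n, G) verified rewards, row j = agent j's rollout group.
    omega: (n, n) capability ratios; omega[k, j] reweights agent j's
    rewards when forming agent k's baseline mu_hat[k].
    Returns advantages with the same shape as `rewards`."""
    n, G = rewards.shape
    # mu_hat[k] = (1 / nG) * sum_j sum_i omega[k, j] * R(y_{t,i}^{(j)})
    mu_hat = (omega @ rewards.sum(axis=1)) / (n * G)
    # Joint standard deviation over all agents' rewards.
    sigma_joint = rewards.std()
    return (rewards - mu_hat[:, None]) / sigma_joint
```

With all ratios equal to 1 this reduces to a plain group-relative baseline pooled over agents; unequal ratios shift each agent's baseline toward what its peers' rewards are "worth" from its own capability level.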

Model Capability Discrepancy Coefficient

To further handle capability gaps, the framework applies the capability ratio directly to the advantage when updating an agent using cross-agent samples. When agent $k$ learns from a response generated by agent $j$, the effective advantage is scaled:

$$\tilde{A}_{t,i}^{(k)} = \omega_{t}^{(j,k)} \, A_{t,i}^{(j)}$$

This mechanism encourages aggressive learning from stronger agents while adopting a conservative update strategy for samples from weaker agents.
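The scaling itself is a single multiplication; a sketch with an assumed ratio value makes the asymmetry concrete:

```python
def cross_agent_advantage(A_j, omega_jk):
    """Scale agent j's advantage by the capability ratio omega[j, k]
    before agent k consumes the sample: a ratio above 1 (j stronger
    than k) amplifies the update, a ratio below 1 (j weaker) damps it."""
    return omega_jk * A_j

# Same advantage, different source agents (ratios here are assumed values):
aggressive = cross_agent_advantage(2.0, 1.2)   # from a stronger peer
conservative = cross_agent_advantage(2.0, 0.5) # from a weaker peer
```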

Exponential Importance Sampling

To correct for distributional mismatches between the policy generating the sample and the policy being updated, HACPO employs sequence-level importance sampling. For a response $y_{t,i}^{(j)}$ generated by agent $j$ and used to update agent $k$, the importance ratio is:

$$s_{t,i}^{(k,j)} = \left( \frac{\pi_{\theta_t}^{(k)}\big(y_{t,i}^{(j)}\big)}{\pi_{\theta_{\mathrm{old}}}^{(j)}\big(y_{t,i}^{(j)}\big)} \right)^{\frac{1}{|y_{t,i}^{(j)}|}}$$

Given that inter-agent policy discrepancies can be large, the authors introduce an exponential reweighting through a stop-gradient factor to mitigate aggressive updates:

$$\tilde{s}_{t,i}^{(k,j)} = s_{t,i}^{(k,j)} \cdot \left( \mathrm{sg}\big[\, s_{t,i}^{(k,j)} \,\big] \right)^{\alpha}$$

where $\alpha \geq 0$ controls the degree of conservativeness.
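Both steps admit a small numeric sketch, assuming sequence log-probabilities are available. The $\mathrm{sg}[\cdot]$ operator is a stop-gradient (e.g. `detach` in autograd frameworks), so in this gradient-free sketch the reweighting is a pure value rescaling:

```python
import math

def sequence_importance_ratio(logp_new, logp_old, seq_len):
    """Length-normalized sequence-level ratio: the exponent 1/|y| turns
    the product of per-token ratios into their geometric mean."""
    return math.exp((logp_new - logp_old) / seq_len)

def exponential_reweight(s, alpha):
    """s_tilde = s * sg[s]**alpha. The stop-gradient means the extra
    factor rescales the update magnitude without contributing gradients;
    numerically the value equals s ** (1 + alpha)."""
    return s * (s ** alpha)
```

With alpha = 0 the reweighting is the identity; larger alpha pushes ratios further from 1, which the subsequent clipping then bounds.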

Stepwise Clipping

Finally, to stabilize training and prevent cross-agent rollouts from dominating the gradient updates, HACPO utilizes an asymmetric clipping scheme. Unlike standard symmetric clipping, the upper bound for cross-agent importance ratios is strictly limited to 1.0:

$$s_{t,i}^{(k,j)} \in [\,1.0 - \delta,\; 1.0\,]$$

Additionally, a stepwise clipping strategy is applied within each training step. As the number of parameter updates $k$ increases, the lower bound tightens:

$$\mathrm{clip}\big(s_{t,i}^{(k,j)}\big) = \mathrm{clip}\Big(s_{t,i}^{(k,j)},\; 1 - \delta + k \cdot \delta_{\mathrm{step}},\; 1.0\Big)$$

This ensures that cross-agent responses are subject to increasingly stricter constraints as the training step progresses, maintaining stability in the heterogeneous collaborative policy optimization process.
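The asymmetric, tightening clip can be sketched directly from the formula above; the default values of $\delta$ and $\delta_{\mathrm{step}}$ here are assumptions, not the paper's settings:

```python
def stepwise_clip(s, delta=0.2, delta_step=0.02, k=0):
    """Asymmetric stepwise clip for cross-agent importance ratios:
    the upper bound is fixed at 1.0, while the lower bound
    1 - delta + k * delta_step tightens as the number of parameter
    updates k within the training step grows."""
    lower = 1.0 - delta + k * delta_step
    return min(max(s, lower), 1.0)
```

Early updates within a step tolerate ratios down to $1 - \delta$; by later updates the admissible interval shrinks toward 1.0, so stale cross-agent samples contribute progressively smaller corrections.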

Experiment

  • Experiments across three heterogeneity settings (state, size, and model architecture) validate that HACPO outperforms single-agent baselines and naive multi-agent approaches by enabling bidirectional knowledge exchange between agents of varying capabilities.
  • Qualitative analysis confirms that stronger models benefit from the complementary exploration signals and informative errors of weaker agents, while weaker models gain from the guidance of stronger peers, showing that learning is not purely unidirectional.
  • Ablation studies demonstrate that agent-capability-aware advantage estimation and gradient modulation are essential for correcting systematic biases and balancing learning rates between heterogeneous agents.
  • The necessity of stepwise clipping is established as a critical mechanism for stabilizing training, preventing the severe instability caused by unpredictable importance sampling values in cross-agent responses.
  • Results across diverse model combinations, including different architectures and tokenizers, confirm the robustness and generalizability of the proposed method in extracting transferable knowledge from heterogeneous rollouts.
