
UI-Voyager: A Self-Evolving GUI Agent That Learns from Failed Experiences

Abstract

Autonomous mobile GUI agents have attracted growing attention with the development of multimodal large language models (MLLMs). However, existing approaches still suffer from low learning efficiency when handling failed trajectories, and from ambiguous credit assignment under the sparse rewards of long-horizon GUI tasks. To overcome these challenges, we propose UI-Voyager, a novel mobile GUI agent that self-evolves through two stages. In the first stage, we employ Rejection Fine-Tuning (RFT), which enables continuous, synchronized co-evolution of data and models within a fully autonomous loop. The second stage introduces Group Relative Self-Distillation (GRSD), which identifies critical fork points in group rollouts and builds dense step-level supervision from successful trajectories to correct failed ones. Extensive experiments on AndroidWorld show that our 4B-parameter model achieves a Pass@1 success rate of 81.0%, outperforming many recent baselines and even surpassing human performance. Ablation studies and case studies further confirm the effectiveness of GRSD. Our approach marks a major step toward efficient, self-evolving, high-performing mobile GUI automation without the need for costly manual data annotation.

One-sentence Summary

Tencent Hunyuan researchers propose UI-Voyager, a two-stage self-evolving mobile GUI agent that utilizes Rejection Fine-Tuning and Group Relative Self-Distillation to overcome sparse rewards in long-horizon tasks. This approach achieves human-level performance on AndroidWorld without expensive manual annotation, significantly advancing efficient mobile automation.

Key Contributions

  • The paper introduces UI-Voyager, a two-stage self-evolving mobile GUI agent that automates the co-evolution of data and models through Rejection Fine-Tuning without requiring manual annotation.
  • A Group Relative Self-Distillation method is presented to resolve ambiguous credit assignment by identifying critical fork points in group rollouts and generating dense step-level supervision from successful trajectories to correct failed ones.
  • Experiments on the AndroidWorld benchmark demonstrate that the 4B model achieves an 81.0% Pass@1 success rate, outperforming recent baselines and exceeding reported human-level performance.

Introduction

Autonomous mobile GUI agents are critical for enabling AI to navigate complex, dynamic smartphone interfaces, yet current approaches struggle with data inefficiency and ambiguous credit assignment when learning from sparse, trajectory-level rewards. Prior methods often discard failed interaction attempts and lack the precision to identify which specific steps caused a task failure, hindering stable policy optimization for long-horizon tasks. To address these issues, the authors introduce UI-Voyager, a self-evolving agent that employs a two-stage pipeline combining Rejection Fine-Tuning for autonomous data-model co-evolution and Group Relative Self-Distillation to generate dense, step-level supervision from successful trajectories to correct failed ones.

Dataset

  • The authors focus on the AndroidWorld environment, a dynamic interactive benchmark designed for training and evaluating GUI agents on mobile devices.
  • The dataset consists of 116 diverse, programmatic tasks that vary in complexity and optimal interaction steps to challenge agent performance.
  • Unlike static datasets, this environment allows actions to alter the UI state and provides reward signals upon successful task completion.
  • The paper utilizes this setup to evaluate how well agents handle unpredictable UI behaviors and learn from trial-and-error rather than relying solely on pre-collected interaction logs.
  • No specific training splits, mixture ratios, or image cropping strategies are detailed in the provided text, as the environment serves primarily as a rigorous evaluation benchmark.

Method

The authors propose UI-Voyager, a self-evolving training pipeline designed to optimize mobile GUI agents through a two-stage iterative process. The overall framework integrates Rejection Fine-Tuning (RFT) for high-quality data curation and Group Relative Self-Distillation (GRSD) for step-level policy refinement. Refer to the framework diagram for the complete pipeline structure.

The first stage, Rejection Fine-Tuning (RFT), establishes a closed-loop system to enhance data quality. The base policy generates multiple trajectories for a given task, which are then evaluated by a rule-based verifier. Only trajectories that successfully complete the task or pass the verifier are retained for Supervised Fine-Tuning (SFT). This rejection sampling mechanism ensures that the model is updated using high-fidelity samples, fostering a co-evolutionary cycle where model improvements lead to better trajectory synthesis in subsequent rounds.
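The RFT loop described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: `rule_based_verifier`, the trajectory dictionary layout, and the mock rollout are all hypothetical stand-ins for the environment's actual success signal and the policy's sampling interface.

```python
def rule_based_verifier(trajectory, task):
    """Hypothetical verifier: in AndroidWorld the environment itself
    signals task completion; here we simply read a success flag."""
    return trajectory["final_state"].get("task_done", False)

def rft_round(policy_rollout, tasks, n_samples=8):
    """One RFT round: sample multiple trajectories per task and keep
    only the verified ones as SFT data for the next policy update."""
    sft_data = []
    for task in tasks:
        for _ in range(n_samples):
            traj = policy_rollout(task)          # sample from current policy
            if rule_based_verifier(traj, task):  # rejection sampling
                sft_data.append(traj)
    return sft_data

# toy demo: a mock policy that succeeds on every second attempt
_n = [0]
def mock_rollout(task):
    _n[0] += 1
    return {"final_state": {"task_done": _n[0] % 2 == 0}}

sft = rft_round(mock_rollout, ["open_settings"], n_samples=4)
```

Across rounds, the retained `sft_data` fine-tunes the policy, which in turn produces better trajectories for the next round, the co-evolutionary cycle the paper describes.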

The second stage, Group Relative Self-Distillation (GRSD), addresses the credit assignment problem inherent in long-horizon multi-turn tasks. Unlike standard reinforcement learning methods like PPO or GRPO that rely on sparse trajectory-level rewards, GRSD leverages successful trajectories within a group to provide dense, step-level supervision for failed attempts. As shown in the figure below, the method identifies specific divergence points between successful and failed trajectories to guide self-correction.

The core mechanism of GRSD is Fork Point Detection. Given a successful trajectory $\tau^{+}$ and a failed trajectory $\tau^{-}$, the system searches for steps where the agent observes the same screen state but takes different actions. To determine state equivalence, the authors utilize the Structural Similarity Index (SSIM) on preprocessed screenshots. A fork point is identified at step $j$ in the failed trajectory if there exists a step $i$ in the successful trajectory such that the observations match, denoted as $\text{SAME}(o_{i}^{+}, o_{j}^{-})$, but the subsequent transitions diverge, denoted as $\text{DIVERGE}(i, j)$.
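A minimal sketch of fork point detection, under two simplifying assumptions: the `SAME` check is approximated by a normalized pixel-difference score rather than full SSIM, and `DIVERGE` is reduced to "the chosen actions differ". The threshold value and trajectory layout are hypothetical.

```python
import numpy as np

SIM_THRESHOLD = 0.95  # hypothetical threshold; the paper uses SSIM

def same_state(obs_a, obs_b):
    """Stand-in for SSIM: similarity derived from the normalized mean
    absolute pixel difference between two screenshots."""
    diff = np.abs(obs_a.astype(float) - obs_b.astype(float)).mean() / 255.0
    return (1.0 - diff) >= SIM_THRESHOLD

def find_fork_points(success, failure):
    """Return (i, j) index pairs where the two trajectories observe a
    matching screen (SAME) but take different actions, a simplified
    proxy for the paper's DIVERGE condition on transitions."""
    forks = []
    for i, (obs_p, act_p) in enumerate(success):
        for j, (obs_n, act_n) in enumerate(failure):
            if same_state(obs_p, obs_n) and act_p != act_n:
                forks.append((i, j))
    return forks

# toy screens: identical first screen, divergent actions at that screen
screen = np.zeros((4, 4), dtype=np.uint8)
other = np.full((4, 4), 255, dtype=np.uint8)
success = [(screen, "tap_save"), (other, "scroll_down")]
failure = [(screen, "tap_cancel")]
forks = find_fork_points(success, failure)
```

In practice SSIM (e.g. scikit-image's `structural_similarity`) would replace `same_state`, which is exactly the component the experiments flag as sensitive to temporal misalignment and transient visual noise.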

Once fork points are identified, the model undergoes Step-Level Self Distillation. For each matched pair, a training sample is constructed by using the context from the failed trajectory and the correct action from the successful trajectory as the target. The training objective minimizes the autoregressive next-token prediction loss over these constructed samples:

$$\mathcal{L}_{\mathrm{GRSD}} = - \frac{1}{|\mathcal{D}|} \sum_{\mathbf{x} \in \mathcal{D}} \frac{1}{T_{\mathbf{x}}} \sum_{t=1}^{T_{\mathbf{x}}} \log \pi_{\theta}\left( y_{t} \mid s_{1}, \ldots, s_{p_{\mathbf{x}}}, y_{<t} \right),$$

where $\mathcal{D}$ represents the set of samples derived from fork points. This approach effectively transforms sparse feedback into precise corrective signals, allowing the agent to learn from its own mistakes without requiring external teacher policies.
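The sample construction step can be sketched as below, assuming trajectories are stored as (observation, action) pairs; the field names are hypothetical. Each sample pairs the failed trajectory's context up to the fork with the successful trajectory's action as the target, on which the autoregressive loss above is then computed.

```python
def build_grsd_samples(success, failure, fork_points):
    """Construct step-level distillation samples: the context comes from
    the failed trajectory's prefix up to the fork point, and the target
    is the correct action taken in the successful trajectory there."""
    samples = []
    for i, j in fork_points:
        history = failure[:j]        # earlier (obs, action) steps of the failed run
        current_obs = failure[j][0]  # screen observed at the fork point
        target = success[i][1]       # correct action from the successful run
        samples.append({
            "history": history,
            "observation": current_obs,
            "target": target,
        })
    return samples

# toy example: both runs start on the same screen, then diverge
success = [("home_screen", "open_settings")]
failure = [("home_screen", "open_browser")]
samples = build_grsd_samples(success, failure, [(0, 0)])
```

During training, the context tokens ($s_1, \ldots, s_{p_{\mathbf{x}}}$) would be masked out of the loss so that only the target action tokens $y_t$ contribute, matching the objective above.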

Experiment

  • Evaluation on the AndroidWorld benchmark demonstrates that UI-Voyager achieves superior performance compared to diverse baselines, including specialized GUI agents and large-scale proprietary models, while surpassing reported human-level success rates with only 4B parameters.
  • Rejection Fine-Tuning (RFT) is validated as a critical initialization strategy that provides consistent performance gains and serves as an efficient warm-start, whereas direct application of standard RL algorithms like GRPO and PPO from the base model yields marginal improvements and high sample inefficiency.
  • Fork point detection effectively identifies critical divergence moments between successful and failed trajectories, enabling the system to provide dense, step-level supervision and correct erroneous actions at key decision points in long-horizon tasks.
  • The GRSD framework leverages self-distillation to transform failed trajectories into high-quality supervised data, significantly outperforming standard RL baselines in sparse-reward environments by addressing credit assignment challenges and facilitating rapid error correction.
  • Analysis of real-time execution highlights that while SSIM-based matching is effective, it faces challenges from temporal misalignment and transient visual perturbations, suggesting future improvements through time-aware matching and noise reduction techniques.
