
RoboPocket: Instantly Improving Robot Policies with Your Phone

Junjie Fang Wendi Chen Han Xue Fangyuan Zhou Tian Le Yi Wang Yuting Zhang Jun Lv Chuan Wen Cewu Lu

Abstract

Scaling imitation learning is fundamentally constrained by the efficiency of data collection. While handheld interfaces have emerged as a scalable solution for collecting data in real-world environments, they mostly operate open-loop: operators gather demonstrations blindly, unaware of the underlying policy's weaknesses, which leads to inefficient coverage of critical state distributions. Interactive methods such as DAgger, by contrast, successfully counter covariate shift, but they rely on physical robot execution, which is costly and hard to scale. To reconcile this trade-off, we present RoboPocket, a handheld system that enables instant policy iteration without a physical robot, using individual consumer smartphones. Its core innovation is a remote-inference framework that visualizes the policy's predicted trajectory through AR Visual Foresight. This immersive feedback lets collectors proactively identify potential failure cases and focus data collection on the policy's weak regions, all without a physical robot. We further implement an asynchronous online finetuning pipeline that continuously updates the policy with incoming data, effectively closing the learning loop within minutes. Comprehensive experiments show that RoboPocket adheres to data scaling laws and doubles data efficiency compared to offline scaling strategies, overcoming a long-standing efficiency bottleneck. Our instant iteration loop also boosts sample efficiency by up to 2× in distributed settings, with only a few interactive corrections per individual. Project page and videos: https://robo-pocket.github.io.

One-sentence Summary

Researchers from Shanghai Jiao Tong University and Noematrix Ltd. introduce RoboPocket, a smartphone-based system that uses AR Visual Foresight to enable robot-free instant policy iteration, allowing users to proactively identify failures and refine policies in minutes while doubling data efficiency compared to traditional offline methods.

Key Contributions

  • RoboPocket addresses the scalability bottleneck in robot learning by transforming passive handheld data collection into an active, computationally guided workflow that provides real-time on-device feedback for higher quality demonstrations.
  • The system introduces a novel Robot-Free Instant Policy Iteration paradigm that uses AR Visual Foresight to visualize predicted trajectories, allowing users to proactively identify and correct policy weaknesses without physical robot deployment.
  • Experiments across diverse manipulation tasks demonstrate that this approach adheres to data scaling laws and achieves up to a 2× improvement in data efficiency compared to offline strategies while enabling rapid distributed learning.

Introduction

Scaling imitation learning in robotics is hindered by the high cost and logistical difficulty of collecting diverse, high-quality data from physical robots. Prior handheld interfaces allow for robot-free data collection but operate in an open-loop manner, forcing users to record demonstrations blindly without knowing where the current policy fails. Conversely, interactive methods that correct these failures require physical robot deployment, which is slow, risky, and impossible to scale across distributed environments. The authors introduce RoboPocket, a system that transforms a consumer smartphone into an intelligent co-pilot for robot learning by using Augmented Reality Visual Foresight to project the policy's predicted trajectory directly onto the user's screen. This approach enables users to proactively identify and correct policy weaknesses in minutes without a physical robot, while an asynchronous online finetuning pipeline instantly updates the model with new data to close the learning loop.

Dataset

  • Dataset Composition and Sources: The authors construct a dataset for the "Mouse Arrangement" task to validate data scaling laws, drawing from 32 distinct environments and 47 unique object pairs. The environments span both indoor and outdoor settings to ensure diverse lighting conditions and textures, while object pairs are formed by combining various mice and mouse pads.

  • Key Details for Each Subset:

    • Environment Selection: Two object pairs are randomly selected for data collection within each of the 32 environments.
    • Demonstration Volume: The team collects 25 demonstrations for every single environment-object pair combination.
    • Evaluation Setup: Testing occurs across 3 different scenes, utilizing 2 initial robot poses and 3 initial object poses to assess generalization.
  • Model Usage and Training Strategy: Following the protocol from Data Scaling Laws, the authors use this dataset to verify that their RoboPocket system generates high-quality data adhering to power-law scaling relationships. The study emphasizes that increasing diversity in environments and objects is more critical for zero-shot generalization than simply increasing the number of demonstrations per scene.

  • Processing and Hardware Configuration:

    • Physical Setup: Data collection utilizes a Flexiv Rizon 4 robot arm with a Robotiq 2F-85 adaptive gripper fitted with TPU soft fingers to match the handheld collector.
    • Data Streaming: An iPhone mounted on the gripper streams camera feeds in real-time to a workstation acting as both the Data Serving Node and Training Server.
    • Infrastructure: The system runs on a workstation equipped with an Intel Core i9-12900K CPU and NVIDIA GeForce RTX 3090 GPU, powered by an EcoFlow DELTA 3 MAX portable station.
    • Inference: A separate workstation with an Intel Core i9-13900K CPU and NVIDIA GeForce RTX 4090 GPU serves as the Inference Server during Robot-free Instant Policy Iteration.
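To make the scaling-law claim above concrete, the sketch below fits a power law to policy error as a function of the number of training environments, in the spirit of the Data Scaling Laws protocol the authors follow. The data points and link between error and environment count are hypothetical placeholders, not results from the paper.

```python
import math

# Illustrative sketch (synthetic numbers, not from the paper): test whether
# policy error decays as a power law error = c * n^(-alpha) in the number of
# training environments n. A power law is linear in log-log space:
#   log(error) = log(c) - alpha * log(n)
n_envs = [2, 4, 8, 16, 32]                # environments used for training
error = [0.52, 0.37, 0.26, 0.18, 0.13]    # hypothetical policy error rates

xs = [math.log(n) for n in n_envs]
ys = [math.log(e) for e in error]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
alpha = -slope                            # decay exponent (positive)
c = math.exp(my - slope * mx)             # prefactor

print(f"fitted power law: error ≈ {c:.2f} * n^-{alpha:.2f}")
```

If the fitted line explains the log-log points well, doubling environment diversity yields a predictable multiplicative drop in error, which is the sense in which the system "adheres to data scaling laws."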

Method

The authors propose RoboPocket, a system designed to transition from passive data recording to computationally guided learning. Refer to the framework diagram which contrasts the traditional offline iteration loop, characterized by prolonged feedback and limited scenarios, with the proposed instant policy update process that operates without a physical robot. This new workflow enables distributed environments and instant policy updates through a three-step cycle of policy updating, following the policy's intent, and collecting corrections.

The system relies on a specialized hardware-software co-design to ensure physical consistency and real-time interaction. Refer to the hardware and software interface diagram which details the isomorphic gripper, fisheye lens, and the AR-based interaction design. The hardware architecture utilizes an iPhone Pro as an Edge-Compute Hub to run real-time VIO and kinematic solving. It features an isomorphic adaptive gripper that replicates the underactuated dynamics of the target robot to minimize the embodiment gap. Additionally, a custom fisheye lens expands the visual context, while a magnetic encoder captures gripper width with high fidelity. On the software side, the interface provides active data verification through SLAM monitoring and an on-device IK solver, alongside an AR trajectory replay feature that allows users to visualize the end-effector path in real-time.
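The on-device IK solver mentioned above is used to verify, during collection, that a recorded pose is actually reachable by the target arm. The real solver targets the 7-DoF Flexiv Rizon 4; as a simplified stand-in, here is an analytic two-link planar IK with the same reachability-check logic. The link lengths and the `planar_ik` helper are illustrative assumptions, not the authors' implementation.

```python
import math

# Hypothetical stand-in for the on-device IK reachability check:
# a 2-link planar arm with made-up link lengths.
L1, L2 = 0.4, 0.35  # link lengths in meters (assumed)

def planar_ik(x, y):
    """Return (shoulder, elbow) angles reaching (x, y), or None if unreachable."""
    r2 = x * x + y * y
    c2 = (r2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target outside the reachable annulus -> warn the collector
    q2 = math.acos(c2)  # elbow-down solution
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                       L1 + L2 * math.cos(q2))
    return q1, q2

print(planar_ik(0.5, 0.2))  # reachable pose -> joint angles
print(planar_ik(1.0, 1.0))  # unreachable -> None
```

Running this check per frame lets the interface flag unreachable end-effector poses immediately, rather than discovering invalid demonstrations after upload.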

The core research question driving the system design is how to efficiently collect the specific data distributions that the robot actually needs. The authors formulate the robotic manipulation task as a Markov Decision Process (MDP) defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma)$. Standard imitation learning uses a static dataset to train a policy $\pi_{\theta}(\mathbf{a}_t \mid \mathbf{s}_t)$ that minimizes the divergence from the expert distribution. However, due to compounding errors, the policy inevitably encounters out-of-distribution (OOD) states. Formally, the objective is to minimize the loss under the induced distribution:

$$J(\pi) = \mathbb{E}_{\mathbf{s} \sim d_{\pi}}\left[\ell\big(\pi(\mathbf{s}), \pi^{*}(\mathbf{s})\big)\right]$$
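The key point of this objective is that the expectation is taken over $d_{\pi}$, the state distribution the learned policy itself induces, not the expert's. A toy sketch makes the distinction concrete; the 1-D dynamics and the two policies below are invented for illustration only.

```python
import random

# Illustrative sketch (not the authors' code): estimate
#   J(pi) = E_{s ~ d_pi}[ loss(pi(s), pi*(s)) ]
# by rolling out the learned policy and scoring it against the expert at the
# states the policy itself visits -- the induced distribution d_pi.
random.seed(0)

expert = lambda s: -0.5 * s          # expert drives the 1-D state to zero
policy = lambda s: -0.4 * s + 0.05   # imperfect clone with a small bias

def rollout_loss(pi, pi_star, horizon=50, episodes=100):
    total, count = 0.0, 0
    for _ in range(episodes):
        s = random.uniform(-1, 1)
        for _ in range(horizon):
            a = pi(s)
            total += (a - pi_star(s)) ** 2  # per-state imitation loss
            count += 1
            s = s + a                       # state evolves under pi -> d_pi
    return total / count

print(f"J(pi) under the induced distribution: {rollout_loss(policy, expert):.4f}")
```

Because the biased clone drifts to states the expert never visits, its loss under $d_{\pi}$ stays strictly positive even after training on expert data, which is exactly the compounding-error gap that targeted corrective collection is meant to close.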

To facilitate continuous learning, the backend employs a distributed server architecture. Refer to the system architecture diagram which illustrates the flow from human operators identifying weaknesses to the training server performing online finetuning. The process begins with human operators identifying anticipated failures or OOD states in the real world. Collected corrective data is immediately streamed to the Data Serving Node. The Training Server then performs online finetuning using a weighted sampling strategy, constructing batches with 50% from the original offline dataset and 50% from the new online dataset to prevent catastrophic forgetting. Finally, updated model weights are synchronized to the Inference Server, achieving a round-trip latency of under 150ms. This architecture creates a tight feedback loop where the user sees a failure, collects corrective data, and the AR visualization reflects the updated policy's improved behavior within minutes.
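The 50/50 weighted sampling described above can be sketched in a few lines; the dataset contents and batch size here are placeholders, and the real Training Server presumably operates on tensors rather than strings.

```python
import random

# Sketch of the 50/50 batch construction used during online finetuning:
# half of each batch comes from the original offline dataset, half from the
# freshly streamed corrective data, to mitigate catastrophic forgetting.
random.seed(0)
offline_data = [f"offline_{i}" for i in range(1000)]  # original demonstrations
online_data = [f"online_{i}" for i in range(40)]      # new corrective samples

def sample_batch(batch_size=16, online_ratio=0.5):
    n_online = int(batch_size * online_ratio)
    # Sample with replacement: the small online set is revisited often.
    batch = random.choices(online_data, k=n_online)
    batch += random.choices(offline_data, k=batch_size - n_online)
    random.shuffle(batch)
    return batch

batch = sample_batch()
print(sum(x.startswith("online") for x in batch), "online samples of", len(batch))
```

Oversampling the small online set this way gives corrective data immediate gradient weight while the fixed offline half anchors the policy to its original competencies.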

Experiment

  • System capability verification confirms that RoboPocket achieves high-fidelity trajectory tracking with superior stability compared to standard SLAM systems, while significantly reducing data collection time through online processing and ensuring physically plausible motion data.
  • Validation of data scaling laws demonstrates that policy performance on diverse object arrangements follows a power law, proving the system's suitability for large-scale robot learning.
  • Experiments on four challenging manipulation tasks show that Robot-Free Instant Policy Iteration breaks the performance plateau of standard imitation learning by enabling targeted collection of failure recovery data, achieving results comparable to expert manual intervention without physical robot access.
  • Distributed deployment across multiple environments reveals that the system facilitates rapid policy adaptation and robust generalization, allowing users to substantially improve success rates in new scenes with minimal interactive corrections.
  • User studies indicate that non-expert participants effectively utilize real-time feedback and virtual foresight to identify model weaknesses, collecting correction data with state coverage comparable to that of experienced experimenters.
