
RoboPocket: Instantly Improve Robot Policies Using Your Phone

Junjie Fang Wendi Chen Han Xue Fangyuan Zhou Tian Le Yi Wang Yuting Zhang Jun Lv Chuan Wen Cewu Lu

Abstract

Scaling imitation learning is fundamentally limited by the efficiency of data collection. While handheld interfaces have emerged as a scalable solution for in-the-wild data acquisition, they mostly operate in an open loop: operators collect demonstrations blindly, without knowledge of the underlying policy's weaknesses, leading to inefficient coverage of critical state distributions. Conversely, interactive methods such as DAgger (Dataset Aggregation) effectively address the covariate shift problem but rely on physical robot execution, which is costly and hard to scale. To reconcile this trade-off, we introduce RoboPocket, a handheld system that enables instant, robot-free policy iteration using only consumer smartphones. Its core innovation is a remote inference framework that visualizes the policy's predicted trajectory through augmented-reality (AR) Visual Foresight. This immersive feedback loop lets collectors proactively identify potential failures and focus data collection on the policy's weak spots, without requiring a physical robot. In addition, we implement an asynchronous online finetuning pipeline that continuously updates the policy with incoming data, closing the learning loop within minutes. Extensive experiments demonstrate that RoboPocket adheres to data scaling laws and doubles sample efficiency compared to offline scaling strategies, overcoming their long-standing efficiency bottleneck.
Furthermore, our instant iteration loop also improves sample efficiency by up to a factor of two in distributed environments, thanks to fewer interactive corrections per person. Project page and videos: https://robo-pocket.github.io.

One-sentence Summary

Researchers from Shanghai Jiao Tong University and Noematrix Ltd. introduce RoboPocket, a smartphone-based system that uses AR Visual Foresight to enable robot-free instant policy iteration, allowing users to proactively identify failures and refine policies in minutes while doubling data efficiency compared to traditional offline methods.

Key Contributions

  • RoboPocket addresses the scalability bottleneck in robot learning by transforming passive handheld data collection into an active, computationally guided workflow that provides real-time on-device feedback for higher quality demonstrations.
  • The system introduces a novel Robot-Free Instant Policy Iteration paradigm that uses AR Visual Foresight to visualize predicted trajectories, allowing users to proactively identify and correct policy weaknesses without physical robot deployment.
  • Experiments across diverse manipulation tasks demonstrate that this approach adheres to data scaling laws and achieves up to a 2× improvement in data efficiency compared to offline strategies while enabling rapid distributed learning.

Introduction

Scaling imitation learning in robotics is hindered by the high cost and logistical difficulty of collecting diverse, high-quality data from physical robots. Prior handheld interfaces allow for robot-free data collection but operate in an open-loop manner, forcing users to record demonstrations blindly without knowing where the current policy fails. Conversely, interactive methods that correct these failures require physical robot deployment, which is slow, risky, and impossible to scale across distributed environments. The authors introduce RoboPocket, a system that transforms a consumer smartphone into an intelligent co-pilot for robot learning by using Augmented Reality Visual Foresight to project the policy's predicted trajectory directly onto the user's screen. This approach enables users to proactively identify and correct policy weaknesses in minutes without a physical robot, while an asynchronous online finetuning pipeline instantly updates the model with new data to close the learning loop.

Dataset

  • Dataset Composition and Sources: The authors construct a dataset for the "Mouse Arrangement" task to validate data scaling laws, drawing from 32 distinct environments and 47 unique object pairs. The environments span both indoor and outdoor settings to ensure diverse lighting conditions and textures, while object pairs are formed by combining various mice and mouse pads.

  • Key Details for Each Subset:

    • Environment Selection: Two object pairs are randomly selected for data collection within each of the 32 environments.
    • Demonstration Volume: The team collects 25 demonstrations for every single environment-object pair combination.
    • Evaluation Setup: Testing occurs across 3 different scenes, utilizing 2 initial robot poses and 3 initial object poses to assess generalization.
  • Model Usage and Training Strategy: Following the protocol from Data Scaling Laws, the authors use this dataset to verify that their RoboPocket system generates high-quality data adhering to power-law scaling relationships. The study emphasizes that increasing diversity in environments and objects is more critical for zero-shot generalization than simply increasing the number of demonstrations per scene.

  • Processing and Hardware Configuration:

    • Physical Setup: Data collection utilizes a Flexiv Rizon 4 robot arm with a Robotiq 2F-85 adaptive gripper fitted with TPU soft fingers to match the handheld collector.
    • Data Streaming: An iPhone mounted on the gripper streams camera feeds in real-time to a workstation acting as both the Data Serving Node and Training Server.
    • Infrastructure: The system runs on a workstation equipped with an Intel Core i9-12900K CPU and NVIDIA GeForce RTX 3090 GPU, powered by an EcoFlow DELTA 3 MAX portable station.
    • Inference: A separate workstation with an Intel Core i9-13900K CPU and NVIDIA GeForce RTX 4090 GPU serves as the Inference Server during Robot-free Instant Policy Iteration.
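The power-law scaling that this dataset is designed to probe can be checked with a simple log-log linear fit. A minimal sketch, using illustrative placeholder numbers rather than results from the paper:

```python
import numpy as np

# Hypothetical failure rates measured at increasing environment counts
# (placeholder values for illustration, not data from the paper).
num_envs = np.array([2, 4, 8, 16, 32])
failure_rate = np.array([0.60, 0.42, 0.30, 0.21, 0.15])

# A power law f = c * n^alpha is linear in log-log space:
# log f = log c + alpha * log n, so fit it with least squares.
alpha, log_c = np.polyfit(np.log(num_envs), np.log(failure_rate), 1)

print(f"exponent alpha = {alpha:.3f}, coefficient c = {np.exp(log_c):.3f}")
```

A negative exponent close to a straight line in log-log space is the signature of power-law scaling that the authors verify for RoboPocket-collected data.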

Method

The authors propose RoboPocket, a system designed to transition from passive data recording to computationally guided learning. Refer to the framework diagram which contrasts the traditional offline iteration loop, characterized by prolonged feedback and limited scenarios, with the proposed instant policy update process that operates without a physical robot. This new workflow enables distributed environments and instant policy updates through a three-step cycle of policy updating, following the policy's intent, and collecting corrections.

The system relies on a specialized hardware-software co-design to ensure physical consistency and real-time interaction. Refer to the hardware and software interface diagram which details the isomorphic gripper, fisheye lens, and the AR-based interaction design. The hardware architecture utilizes an iPhone Pro as an Edge-Compute Hub to run real-time VIO and kinematic solving. It features an isomorphic adaptive gripper that replicates the underactuated dynamics of the target robot to minimize the embodiment gap. Additionally, a custom fisheye lens expands the visual context, while a magnetic encoder captures gripper width with high fidelity. On the software side, the interface provides active data verification through SLAM monitoring and an on-device IK solver, alongside an AR trajectory replay feature that allows users to visualize the end-effector path in real-time.
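The AR trajectory overlay described above boils down to projecting the policy's predicted 3D end-effector waypoints into the phone's camera image. A minimal sketch of that projection, where the intrinsics matrix and camera pose are illustrative assumptions (the real system would obtain them from the phone's VIO/AR framework):

```python
import numpy as np

# Assumed pinhole intrinsics (focal lengths and principal point are placeholders).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_waypoints(points_world, R_wc, t_wc):
    """Project Nx3 world-frame waypoints into pixel coordinates.

    R_wc and t_wc map world coordinates into the camera frame.
    """
    pts_cam = points_world @ R_wc.T + t_wc    # world -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 1e-6]   # keep only points in front of the camera
    uvw = pts_cam @ K.T                       # apply pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]           # perspective divide -> pixels

# Example: a predicted trajectory receding 0.5-0.7 m in front of the camera.
traj = np.array([[0.00, 0.0, 0.5],
                 [0.05, 0.0, 0.6],
                 [0.10, 0.0, 0.7]])
pixels = project_waypoints(traj, np.eye(3), np.zeros(3))
print(pixels)  # first waypoint lands at the principal point (320, 240)
```

Rendering these pixel coordinates as an overlay on the live camera feed is what lets the operator "preview" where the policy intends to move before any robot executes it.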

The core research question driving the system design is how to efficiently collect the specific data distributions that the robot actually needs. The authors formulate the robotic manipulation task as a Markov Decision Process (MDP) defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma)$. Standard imitation learning uses a static dataset to train a policy $\pi_{\theta}(\mathbf{a}_t \mid \mathbf{s}_t)$ that minimizes the divergence from the expert distribution. However, due to compounding errors, the policy inevitably encounters out-of-distribution (OOD) states. Formally, the objective is to minimize the loss under the induced distribution:

$$J(\pi) = \mathbb{E}_{\mathbf{s} \sim d_{\pi}}\big[\ell\big(\pi(\mathbf{s}), \pi^{*}(\mathbf{s})\big)\big]$$
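Why the expectation is taken under the induced distribution $d_\pi$ rather than the expert's can be seen with a toy example (ours, not the paper's): even a tiny per-step action error compounds over the horizon, driving the policy into states the expert never visited.

```python
import numpy as np

# Toy 1-D illustration of compounding error. The expert holds the state at 0;
# the learned policy carries a small constant per-step bias eps.
horizon, eps = 100, 0.05

expert_states = np.zeros(horizon)                  # expert stays at 0 forever
learned_states = np.cumsum(np.full(horizon, eps))  # s_t = t * eps: error grows linearly

drift = abs(learned_states[-1] - expert_states[-1])
print(drift)  # ~5.0, i.e. 100x the single-step error
```

Training only on expert states ignores this drift; interactive methods like DAgger (and RoboPocket's targeted corrections) collect data precisely in these drifted, out-of-distribution states.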

To facilitate continuous learning, the backend employs a distributed server architecture. Refer to the system architecture diagram which illustrates the flow from human operators identifying weaknesses to the training server performing online finetuning. The process begins with human operators identifying anticipated failures or OOD states in the real world. Collected corrective data is immediately streamed to the Data Serving Node. The Training Server then performs online finetuning using a weighted sampling strategy, constructing batches with 50% from the original offline dataset and 50% from the new online dataset to prevent catastrophic forgetting. Finally, updated model weights are synchronized to the Inference Server, achieving a round-trip latency of under 150ms. This architecture creates a tight feedback loop where the user sees a failure, collects corrective data, and the AR visualization reflects the updated policy's improved behavior within minutes.
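The 50/50 batch construction described above can be sketched as a simple weighted sampler. The half-offline/half-online split is from the paper; the dataset contents, batch size, and function name below are placeholders:

```python
import random

def mixed_batch(offline_data, online_data, batch_size, seed=0):
    """Build a finetuning batch: half original offline data, half new corrections.

    Replaying offline data alongside the fresh online corrections is what
    guards against catastrophic forgetting during online finetuning.
    """
    rng = random.Random(seed)
    half = batch_size // 2
    batch = rng.choices(offline_data, k=half) + \
            rng.choices(online_data, k=batch_size - half)
    rng.shuffle(batch)
    return batch

offline = [("offline", i) for i in range(1000)]
online = [("online", i) for i in range(50)]   # freshly streamed corrective demos
batch = mixed_batch(offline, online, 32)
print(sum(1 for src, _ in batch if src == "online"))  # 16 of 32 from the online set
```

Because the online set is tiny relative to the offline corpus, this fixed 50% quota oversamples the new corrections heavily, which is what lets the policy's behavior visibly change within minutes.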

Experiment

  • System capability verification confirms that RoboPocket achieves high-fidelity trajectory tracking with superior stability compared to standard SLAM systems, while significantly reducing data collection time through online processing and ensuring physically plausible motion data.
  • Validation of data scaling laws demonstrates that policy performance on diverse object arrangements follows a power law, proving the system's suitability for large-scale robot learning.
  • Experiments on four challenging manipulation tasks show that Robot-Free Instant Policy Iteration breaks the performance plateau of standard imitation learning by enabling targeted collection of failure recovery data, achieving results comparable to expert manual intervention without physical robot access.
  • Distributed deployment across multiple environments reveals that the system facilitates rapid policy adaptation and robust generalization, allowing users to substantially improve success rates in new scenes with minimal interactive corrections.
  • User studies indicate that non-expert participants effectively utilize real-time feedback and virtual foresight to identify model weaknesses, collecting correction data with state coverage comparable to that of experienced experimenters.
