
pi0.7: A General, Steerable Robotic Foundation Model with Emergent Capabilities

Abstract

We present a new robotic foundation model called π0.7, which delivers strong out-of-the-box performance across a wide range of scenarios. π0.7 can follow diverse language instructions in unseen environments, including multi-stage tasks involving various kitchen appliances, and provides zero-shot cross-embodiment generalization; for example, it can enable a robot to fold laundry without ever having seen that task before, and can perform challenging tasks such as operating an espresso machine out of the box at a level matching more specialized models that underwent RL fine-tuning.

The key idea behind π0.7 is the use of diverse context conditioning during training. This contextual information, embedded in the prompt, makes it possible to precisely steer the model to perform many tasks with different strategies. The steering relies not only on a language command describing what to do, but also on additional multimodal information describing how it should be done, i.e., which strategy to follow, including metadata about task execution and subgoal images. This is what enables π0.7 to make use of highly diverse data.

One-sentence Summary

pi0.7 is a steerable generalist robotic foundation model utilizing diverse context conditioning with multimodal prompt information to precisely steer task strategies, delivering strong out-of-the-box performance in unseen environments and zero-shot cross-embodiment generalization for tasks like laundry folding while matching specialized RL-finetuned models on challenging tasks such as operating an espresso machine.

Key Contributions

  • The paper introduces π0.7, a robotic foundation model designed to deliver strong out-of-the-box performance across a wide range of scenarios without task-specific post-training.
  • The method utilizes diverse context conditioning during training by augmenting language commands with strategy metadata and subgoal images to resolve ambiguity in diverse datasets.
  • Evaluation results demonstrate zero-shot cross-embodiment generalization and the ability to perform challenging tasks at a level matching specialized RL-finetuned models.

Introduction

Physical intelligence research seeks to establish generalist capabilities in robotics similar to those of large language models, but prior vision-language-action (VLA) models lack compositional generalization and often require task-specific fine-tuning. Training on diverse datasets often leads models to average out different strategies, resulting in suboptimal performance. The authors introduce π0.7, a steerable generalist robot foundation model that leverages diverse context conditioning to resolve ambiguity in mixed-quality data. By enriching prompts with detailed language, subgoal images, and strategy metadata, the model learns to compose skills effectively without fine-tuning, enabling zero-shot cross-embodiment transfer and robust performance on complex dexterous tasks.

Dataset

  • Composition and Sources: The authors aggregate demonstration data from diverse robot platforms (static, mobile, single, and bimanual) operating in lab, home, and wild environments. The mixture also includes autonomous data from policy evaluations, human interventions, open-source robot datasets, egocentric human videos, and auxiliary web data for visual question answering and object prediction.
  • Suboptimal Data Strategy: Departing from classic pipelines, the dataset intentionally includes lower quality demonstrations, failure episodes, and trajectories from prior model versions. This approach enables the model to distill capabilities from RL-trained specialists and improves robustness across varied states.
  • Metadata Processing: Episode metadata is constructed to label task execution attributes. Speed is discretized into 500-step intervals, quality receives a score from 1 to 5, and human annotators identify mistake segments within action sequences.
  • Training and Usage: Context modalities including instructions, images, and metadata undergo dropout during training to ensure flexible prompting. At inference, the model uses ground-truth metadata to condition performance on desired speed, quality, and accuracy.
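The metadata discretization and context dropout described above can be sketched as follows. This is a minimal illustration, assuming the 500-step speed bins and 1–5 quality scores stated in the dataset description; the token names and the dropout probability are hypothetical, not values from the paper.

```python
import random


def discretize_metadata(episode_len, quality_score):
    """Map raw episode attributes to discrete metadata tokens.

    Speed is binned into 500-step intervals; quality is an integer
    score in [1, 5]. Token spellings here are illustrative.
    """
    speed_bin = episode_len // 500  # e.g. a 1240-step episode -> bin 2
    assert 1 <= quality_score <= 5
    return {"speed": f"<speed_{speed_bin}>",
            "quality": f"<quality_{quality_score}>"}


def apply_context_dropout(context, drop_prob=0.3, rng=random):
    """Independently drop each optional context modality during training
    so the policy remains usable under any subset of prompts at inference.
    The base task instruction is always retained."""
    kept = {k: v for k, v in context.items() if rng.random() > drop_prob}
    kept.setdefault("task_instruction", context.get("task_instruction"))
    return kept
```

Because each modality is dropped independently, the trained model can be prompted at inference with language alone, language plus metadata, or the full context.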

Method

The π0.7 model is a Vision-Language-Action (VLA) foundation model designed for generalist robot manipulation. It builds upon the π0.6 architecture and the MEM memory system, extending them with multi-modal context conditioning. The model consists of a 4B-parameter VLM backbone initialized from Gemma 3, which includes a 400M-parameter SigLIP vision encoder, and a separate 860M-parameter action expert. The total parameter count is approximately 5B.

The authors leverage a flow matching objective for the action expert to predict continuous action chunks. The VLM backbone processes visual observations and language inputs, while the action expert attends to these activations to generate robot commands. This separation allows for fast inference at runtime while maintaining stable training for the backbone via discrete cross-entropy loss on FAST tokens, a technique known as Knowledge Insulation.
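A minimal numpy sketch of the flow matching objective for the action expert, assuming the common linear-interpolation path (x_τ = τ·a + (1−τ)·ε with velocity target a − ε); the paper does not specify the exact parameterization, so treat the details as illustrative.

```python
import numpy as np


def flow_matching_target(actions, rng):
    """Conditional flow matching targets for a continuous action chunk.

    Linear path: x_tau = tau * a + (1 - tau) * eps, with regression
    target v = a - eps (the path's constant velocity). The action
    expert would predict v from (x_tau, tau, VLM activations).
    """
    eps = rng.standard_normal(actions.shape)            # Gaussian noise
    tau = rng.uniform(size=(actions.shape[0], 1, 1))    # per-example flow time
    x_tau = tau * actions + (1.0 - tau) * eps
    target_v = actions - eps
    return x_tau, tau, target_v


def flow_matching_loss(pred_v, target_v):
    """MSE between predicted and target velocity."""
    return float(np.mean((pred_v - target_v) ** 2))
```

The backbone itself is trained with a discrete cross-entropy loss on FAST tokens, so gradients from the continuous action head are insulated from it, as described above.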

Refer to the architecture diagram below for a detailed view of the model components and data flow.

A key innovation in π0.7 is the expansion of the context prompt C_t beyond simple language instructions. The model accepts a rich set of inputs including multi-view observation memory, task instructions, subtask instructions, episode metadata, and subgoal images. This multi-modal prompting enables the model to learn from diverse and heterogeneous datasets, including suboptimal behaviors and failures.

The system integrates a high-level policy and a world model to generate these contextual elements at runtime. The high-level policy produces semantic subtask instructions, while the world model generates subgoal images that depict the desired near-future state of the scene. These subgoal images provide spatial grounding that language alone may lack.
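The runtime composition described above can be sketched as a context builder. The function and callable names here are illustrative placeholders for the high-level policy and world model, not APIs from the paper.

```python
def build_context(obs, task, high_level_policy, world_model, memory):
    """Assemble the runtime context prompt C_t from generated components.

    `high_level_policy` maps the observation and task to a semantic
    subtask instruction; `world_model` generates a subgoal image
    depicting the desired near-future state (names are hypothetical).
    """
    subtask = high_level_policy(obs, task)      # e.g. "grasp the portafilter"
    subgoal_image = world_model(obs, subtask)   # desired near-future frame
    return {
        "memory": memory,                 # multi-view observation history
        "task_instruction": task,
        "subtask_instruction": subtask,
        "subgoal_image": subgoal_image,
    }
```

The low-level policy then conditions on this dictionary exactly as it conditioned on the training-time context.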

Refer to the system overview below illustrating how robot and non-robot data feed into the training pipeline and how the model operates during inference.

During training, the model is exposed to a combination of real future images and generated subgoal images. To handle the variability in image quality and delay, the authors employ a specific sampling scheme where real images are sampled from future timesteps or generated by the world model. The training objective maximizes the log-likelihood of the action chunk given the observations and context:

$$\max_{\theta}\; \mathbb{E}_{\mathcal{D}}\left[\log \pi_{\theta}\!\left(\mathbf{a}_{t:t+H} \mid \mathbf{o}_{t-T:t},\, \mathcal{C}_{t}\right)\right]$$
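The real/generated subgoal sampling described above can be sketched as follows. The mixing probability and the future-offset bounds are illustrative assumptions; the paper states only that subgoals come either from real future frames or from the world model.

```python
import random


def sample_subgoal(episode_frames, t, world_model=None, p_generated=0.5,
                   horizon=(20, 60), rng=random):
    """Pick the subgoal image used as context for timestep t.

    With probability `p_generated`, use a world-model generation;
    otherwise take a real frame a random offset ahead, clamped at the
    episode end. `p_generated` and `horizon` are hypothetical values.
    """
    if world_model is not None and rng.random() < p_generated:
        return world_model(episode_frames[t])
    offset = rng.randint(*horizon)                     # inclusive bounds
    idx = min(t + offset, len(episode_frames) - 1)     # clamp at episode end
    return episode_frames[idx]
```

Mixing both sources during training makes the policy tolerant of the world model's image-quality artifacts and generation delay at inference time.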

The model utilizes a block-causal masking scheme where observation and subgoal tokens use bidirectional attention, while text tokens use causal attention. This structure is visualized in the attention mask diagram below.
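The block-causal mask can be sketched as a boolean matrix: one bidirectional block for observation and subgoal tokens, causal rows for text. This is a minimal illustration of the masking pattern described above, not the paper's implementation.

```python
import numpy as np


def block_causal_mask(n_obs, n_text):
    """Build a (query, key) attention mask; True = attention allowed.

    Observation/subgoal tokens (first n_obs positions) attend
    bidirectionally within their block; text tokens attend causally
    over all earlier positions and themselves.
    """
    n = n_obs + n_text
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_obs, :n_obs] = True         # bidirectional observation block
    for q in range(n_obs, n):
        mask[q, : q + 1] = True         # causal text rows
    return mask
```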

At inference time, the model supports Classifier-Free Guidance (CFG) on the episode metadata to elicit specific behaviors such as higher speed or quality. The subtask instructions and subgoal images are refreshed whenever the semantic intent changes or after a fixed time interval. The following sequence demonstrates the model executing a complex task involving an air fryer using step-by-step verbal coaching and subtask instructions.
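The metadata CFG step admits a compact sketch, assuming the standard classifier-free guidance combination rule applied to the action expert's prediction; the guidance scale here is an illustrative default, not a value from the paper.

```python
import numpy as np


def cfg_velocity(v_cond, v_uncond, guidance_scale=2.0):
    """Classifier-free guidance on episode metadata.

    Combines the prediction conditioned on the desired metadata tokens
    (e.g. high speed/quality) with the metadata-dropped prediction;
    guidance_scale > 1 pushes sampling toward the conditioned behavior.
    """
    return v_uncond + guidance_scale * (v_cond - v_uncond)
```

The metadata-dropped branch is available for free because metadata is one of the modalities randomly dropped during training.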

Experiment

The evaluation assesses the π0.7 model across diverse robot platforms and tasks, specifically testing out-of-the-box dexterity, instruction following, cross-embodiment transfer, and compositional generalization. Results demonstrate that π0.7 matches specialized fine-tuned models on complex manipulation tasks without post-training and successfully transfers skills to unseen robot morphologies by adapting manipulation strategies. Furthermore, the model exhibits superior language-following capabilities that allow it to overcome dataset biases and perform new long-horizon tasks through verbal coaching, while effectively leveraging large, mixed-quality datasets for improved generalization.

