
Toward Execution-Grounded Automated AI Research

Chenglei Si, Zitong Yang, Yejin Choi, Emmanuel Candès, Diyi Yang, Tatsunori Hashimoto

Abstract

AI-driven automated research holds great potential for accelerating scientific discovery. However, current large language models (LLMs) often generate ideas that sound plausible but prove ineffective. Execution grounding may help overcome this problem, but it remains unclear whether automated execution is feasible, and whether LLMs can learn from execution feedback. To investigate these questions, we first built an automated executor that implements ideas and runs large-scale parallel GPU experiments to verify their effectiveness. We then converted two realistic research problems (LLM pre-training and post-training) into execution environments, and showed that our automated executor can implement a large fraction of ideas sampled from frontier LLMs. We analyzed two methods for learning from execution feedback: evolutionary search and reinforcement learning. Execution-guided evolutionary search proved highly sample-efficient: within only ten search epochs, it discovered a method that substantially outperforms the GRPO baseline in post-training (69.4% vs 48.0%) and a pre-training recipe that beats the nanoGPT recipe (19.7 vs 35.9 minutes). Frontier LLMs often generate meaningful algorithmic ideas during search, but they frequently saturate early and only rarely exhibit scaling trends. Reinforcement learning from execution reward suffers from mode collapse: it improves the ideator's average reward but not its maximum reward, because models converge on simple ideas. We provide a comprehensive analysis of the executed ideas and training dynamics to enable future efforts in execution-grounded automated AI research.

One-sentence Summary

Chenglei Si, Zitong Yang, and colleagues from Stanford propose an automated executor that tests LLM-generated research ideas via GPU experiments, showing that execution-guided evolutionary search efficiently discovers methods that outperform baselines in LLM pre- and post-training, while reinforcement learning suffers from mode collapse and frontier models saturate early.

Key Contributions

  • We introduce a scalable automated executor that implements and evaluates LLM-generated research ideas for open-ended problems like LLM pre-training and post-training, achieving over 90% execution success with frontier models such as Claude-4.5-Opus.
  • Execution-guided evolutionary search proves sample-efficient, discovering post-training and pre-training recipes that significantly outperform baselines (69.4% vs 48.0% and 19.7 vs 35.9 minutes) within ten epochs, though scaling trends remain limited for most models.
  • Reinforcement learning from execution reward improves average idea quality but suffers from mode collapse, converging to simple, low-diversity ideas and failing to enhance the upper-bound performance critical for scientific discovery.

Introduction

The authors leverage large language models to automate AI research by generating, implementing, and evaluating research ideas through an execution-grounded feedback loop. Prior work in AutoML and LLM-based research agents either operates in constrained search spaces or lacks mechanisms to learn from execution results—limiting their ability to improve idea quality over time. The authors’ main contribution is a scalable automated executor that implements and runs hundreds of LLM-generated ideas in parallel for open-ended problems like LLM pre-training and post-training, achieving over 90% execution rates. They use this system to train ideators via evolutionary search and reinforcement learning, finding that evolutionary search efficiently discovers high-performing ideas while RL suffers from diversity collapse and fails to improve peak performance. Their work demonstrates feasibility and exposes key limitations for future systems to address.

Dataset

The authors use two research environments to train and evaluate their automated idea executor:

  • Pre-Training Environment (nanoGPT)

    • Source: Adapted from the nanoGPT speedrun (Jordan et al., 2024), using a 124M-parameter GPT-2 model trained on the FineWeb corpus (Penedo et al., 2024).
    • Objective: Minimize validation loss (equivalently, maximize its reciprocal, 1/loss) under a fixed 25-minute wall-clock budget on 8 H100 GPUs.
    • Modifications:
      • Proxy reward (1/loss) replaces raw training time as the optimization target (a scoring sketch follows this list).
      • Evaluation hyperparameters are frozen; inference is restricted to single-token prediction to prevent attention-based reward hacking.
    • Final validation uses a locked inference function to ensure fair comparison with human solutions on the original leaderboard.
  • Post-Training Environment (GRPO)

    • Source: Baseline GRPO algorithm (Shao et al., 2024) finetuning Qwen2.5-Math-1.5B (Yang et al., 2024) on MATH dataset (Hendrycks et al., 2021).
    • Objective: Maximize validation accuracy on MATH within a fixed wall-clock time.
    • Safeguards: Validation code is isolated in a separate file; the executor cannot access or modify it to prevent reward manipulation.
  • General Setup

    • Both environments allow unrestricted ideation scope—from hyperparameter tuning to novel architectures or training methods.
    • No constraints are imposed on the types of improvements the ideator model can propose.
    • The environments are designed to be open-ended yet measurable, combining innovation space with clear benchmarking.
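To make the two objectives concrete, the sketch below shows how a finished run could be scored under the rules above. It is a minimal illustration assuming direct access to the final validation metrics; the function names and reward wiring are ours, not the authors', and only the quantities themselves (1/loss and MATH validation accuracy) come from the text.

```python
# Illustrative scoring rules for the two environments described above.
# Function names and the exact reward wiring are assumptions.

def pretraining_reward(val_loss: float) -> float:
    """nanoGPT environment: proxy reward is the reciprocal of the validation
    loss reached within the fixed 25-minute budget on 8 H100 GPUs. Evaluation
    hyperparameters are frozen and inference is locked to single-token
    prediction, so ideas can only raise this reward through the training
    recipe itself."""
    return 1.0 / val_loss

def posttraining_reward(num_correct: int, num_total: int) -> float:
    """GRPO environment: reward is validation accuracy on MATH. The grading
    code lives in a separate file the executor cannot read or modify."""
    return num_correct / num_total

# Example: the nanoGPT validation loss of 3.1407 reported later corresponds
# to a proxy reward of about 0.3184.
print(round(pretraining_reward(3.1407), 4))
```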

Method

The system architecture consists of two primary components, the Automated Idea Executor and the Automated AI Researcher, which operate in a closed-loop feedback system. The Automated Idea Executor functions as a high-level API that transforms a batch of natural language ideas into benchmark performance metrics. It is composed of three core modules: the Implementer, the Scheduler, and the Worker.

The Implementer, hosted on a CPU machine with high I/O capacity, receives a batch of natural language ideas and generates executable code changes. It makes parallelized API calls to a code-execution LLM, prompting it with both the idea and the baseline codebase to sample multiple code diff files. To ensure patchability, the model undergoes a sequential self-revision process up to two times, and the first successfully applied diff is kept. The patched codebase is then uploaded as a .zip file to a cloud bucket.

The Scheduler, operating at a fixed clock frequency, periodically downloads new codebases from the cloud. For each unexecuted codebase, it assesses the resource requirements of the research environment and prepares a job configuration.

The Worker, a GPU-equipped cluster, connects to available resources upon receiving a job configuration from the Scheduler. It runs the experiment and uploads the results, including performance metrics and full metadata (idea content, code change, execution log), to a cloud bucket (e.g., wandb).

The Automated AI Researcher, which includes the ideator model, receives the experiment results and uses them to update the ideator via reinforcement learning or evolutionary search, generating new natural language ideas to continue the cycle.
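The closed loop described above can be condensed into pseudocode. The sketch below follows the Implementer/Scheduler/Worker structure; all function names, signatures, and stubbed behaviors (the coding LLM, patch application, and GPU run) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the Implementer -> Scheduler -> Worker loop described
# above. All names and stub behaviors are illustrative assumptions.

def coding_llm(idea: str, repo: str, failed_diff: str | None = None) -> str:
    """Stub for the code-execution LLM that turns an idea into a code diff."""
    return f"--- diff implementing: {idea} ---"

def try_apply_diff(repo: str, diff: str) -> str | None:
    """Stub for patch application; returns the patched codebase or None."""
    return repo + "\n" + diff

def schedule(codebase: str) -> dict:
    """Stub for the Scheduler: size the job for the environment's needs."""
    return {"codebase": codebase, "gpus": 8}

def run_on_gpu_cluster(job: dict) -> dict:
    """Stub for the Worker: run the experiment, return metrics + metadata."""
    return {"reward": 0.0, "log": "..."}

MAX_REVISIONS = 2  # the Implementer self-revises unpatchable diffs up to twice

def implement(idea: str, baseline_repo: str) -> str | None:
    """Implementer: sample a diff, retry on patch failure, keep first success."""
    diff = coding_llm(idea, baseline_repo)
    for _ in range(MAX_REVISIONS + 1):
        patched = try_apply_diff(baseline_repo, diff)
        if patched is not None:
            return patched
        diff = coding_llm(idea, baseline_repo, failed_diff=diff)
    return None

def execute_batch(ideas: list[str], baseline_repo: str) -> list[dict]:
    """High-level executor API: batch of natural-language ideas -> metrics."""
    results = []
    for idea in ideas:  # parallelized across API calls and GPU jobs in practice
        codebase = implement(idea, baseline_repo)
        if codebase is None:
            continue  # unpatchable after self-revision; hurts execution rate
        job = schedule(codebase)
        metrics = run_on_gpu_cluster(job)
        results.append({"idea": idea, **metrics})  # logged, e.g., to wandb
    return results
```

The sequential loop here is only for readability; per the description above, the Implementer's API calls run in parallel and the Scheduler dispatches GPU jobs asynchronously.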

Experiment

  • Built automated executor to implement LLM-generated ideas and validate them via GPU experiments on LLM pre-training and post-training tasks.
  • Execution-guided evolutionary search found superior methods: 69.4% accuracy on GRPO (vs 48.0% baseline) and 19.7 min training time on nanoGPT (vs 35.9 min baseline) within 10 epochs.
  • Claude-4.5-Sonnet and Claude-4.5-Opus achieved high execution rates (up to 90%) and outperformed baselines in best-of-50 sampling; GPT-5 showed lower execution rates.
  • When using GPT-5 as executor, open-weight models like Qwen3-235B still achieved non-trivial completion rates and outperformed baselines.
  • Evolutionary search outperformed best-of-N sampling under an equal budget, showing effective use of execution feedback across epochs (a minimal search-loop sketch follows this list).
  • Claude-4.5-Opus showed scaling trends; Claude-4.5-Sonnet saturated early but found effective hyperparameter combinations.
  • RL from execution reward improved average performance but caused mode collapse, converging on simple ideas (e.g., RMSNorm→LayerNorm, EMA) without improving upper-bound performance; a toy illustration appears at the end of this section.
  • RL training reduced thinking trace length, correlating with higher execution rates but lower idea complexity.
  • Models generated algorithmic ideas resembling recent research papers, suggesting potential to support frontier AI research.
  • Top solutions from evolutionary search surpassed human expert benchmarks on GRPO (69.4% vs 68.8%) but lagged behind human speedrun on nanoGPT (19.7 min vs 2.1 min).
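The execution-guided evolutionary search referenced in the list above reduces to a simple select-and-mutate loop over executed ideas. This is a minimal sketch assuming top-k parent selection and reusing the `execute_batch` API from the Method sketch; the selection rule, batch size, and ideator prompting are assumptions, with only the roughly ten-epoch budget taken from the text.

```python
# Minimal sketch of execution-guided evolutionary search. The top-k parent
# selection and the ideator prompt format are assumptions.

def ideator(parents: list[dict]) -> str:
    """Stub for the ideator LLM: propose a new idea conditioned on the
    highest-reward (idea, reward) pairs found so far."""
    return f"mutation of: {parents[0]['idea']}" if parents else "seed idea"

def evolutionary_search(execute_batch, baseline_repo: str,
                        epochs: int = 10, batch_size: int = 16, k: int = 4) -> dict:
    """execute_batch is the executor API from the Method sketch:
    list of ideas -> list of {'idea': ..., 'reward': ...} records."""
    population: list[dict] = []
    for _ in range(epochs):
        # Select the k best-scoring ideas across all epochs as parents.
        parents = sorted(population, key=lambda r: r["reward"], reverse=True)[:k]
        ideas = [ideator(parents) for _ in range(batch_size)]
        population += execute_batch(ideas, baseline_repo)  # GPU-validated rewards
    # Discovery is judged by the best idea found (max reward), not the mean.
    return max(population, key=lambda r: r["reward"])
```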

Overall, execution-guided evolutionary search reaches 69.4% validation accuracy on the post-training task (achieved by Claude-4.5-Sonnet), significantly outperforming the 48.0% baseline and edging past the best human expert result of 68.8%. On the pre-training task, the search reduces training time from the 35.9-minute baseline to 19.7 minutes, a roughly 45% reduction, with Claude-4.5-Opus reaching a validation loss of 3.1407; this still trails the best human speedrun solution of 2.1 minutes.
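The mode-collapse finding noted earlier has a simple arithmetic core: policy-gradient RL optimizes the expected (average) execution reward, while discovery is judged by the best idea found. The toy example below, with made-up reward values, shows how a policy that collapses onto one safe idea can win on the mean yet lose on the max.

```python
# Toy illustration of why optimizing mean reward can collapse diversity
# without raising the maximum. Reward values are invented for illustration.
import statistics

def rl_objective(rewards: list[float]) -> float:
    # Policy gradient pushes up expected reward; a policy can maximize this
    # by concentrating on one safe, simple idea (mode collapse).
    return statistics.mean(rewards)

def discovery_objective(rewards: list[float]) -> float:
    # Scientific discovery cares about the single best idea found.
    return max(rewards)

diverse = [0.2, 0.3, 0.9]    # risky, varied ideas: one big win
collapsed = [0.5, 0.5, 0.5]  # one safe idea repeated
assert rl_objective(collapsed) > rl_objective(diverse)              # RL prefers collapse
assert discovery_objective(diverse) > discovery_objective(collapsed)  # search does not
```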

